Intel’s Foundry Day Focuses on Advanced Packaging

 By Jean S. Bozman

Intel Corp. made significant announcements in its diversification strategy on Feb. 21, 2024, showing the ways in which its Intel Foundry business will partner with semiconductor companies, design-software companies, and new customers to deliver advanced packaging for next-generation compute and networking processors.

At its Direct Connect event, Intel paired the Intel Foundry launch with a series of newly announced partnerships, rounding out a full day of foundry news. One of the day’s key messages is that Intel will provide a stable, secure, and consistent supply chain for a wide variety of microprocessors for data center, networking, and consumer uses.

Central to this story is the idea that AI will drive a new era of compute, requiring heterogeneous compute engines co-resident on microprocessors; function-specific chiplets on microprocessors; and a new wave of software design tools to enable AI workloads on next-generation chips.

The rapid adoption of AI, including generative AI (e.g., ChatGPT) and operational AI to manage data and applications, is driving substantial growth in the semiconductor segment – giving Intel the opportunity to grow revenues and profits more quickly than during the pandemic years (2020-2023).

 

Things Are Changing in the World’s Data Centers

This emerging world of heterogeneity will require fast engines – and fast interconnects between them, along with customer adoption of networking and interconnect standards and the use of flexible EDA chip-design software from Synopsys, Siemens, Cadence, and others.

As a result, the deployment of next-gen systems will likely be very different from what it was in the 1990s, 2000s, and 2010s. Intel envisions a world of AI-enabled compute engines housed within a series of very dense racks – with new power/cooling technologies to prevent overheating in AI clusters and to support sustainability.

This vision fits with IDC’s view that net-new IT adoption will happen in cloud provider data centers about as much as in enterprise data centers, resulting in a 50/50 mix of new system deployments worldwide. In that world, distributed computing will be based on a mix of on-board, function-specific capabilities rather than traditional monolithic designs.

Further, the compute engines themselves will be deployed within enterprise data centers and large cloud service provider (CSP) data centers located around the globe – making secure and consistent supply chains vital to the next generation of AI deployments around the world. That was the “mantra” of the Intel Foundry announcements on Feb. 21.

 

Geographic Considerations

The current distribution of large fabs and foundries around the world is largely centered in Asia – in China, Taiwan, Korea, and Southeast Asia. Intel is currently constructing new foundry locations in western Europe and North America – regions with many microprocessor customers but relatively few foundries nearby.

Today, the chips made in Intel’s fabs are used primarily by Intel, but the company hopes to shift to a partner model, in which Intel customers use EDA software to design their new chips and Intel manufactures those designs – with advanced packaging, high quality, and shortened time-to-market.

Clearly, Intel is not alone in building new fabs, but it is planning and constructing fabs that will bring more foundries closer to the vendors and large enterprises that consume and deploy microprocessor chips. Sites either planned or under construction include fabs in Germany, Ireland, and Poland, as well as U.S.-based fabs in Oregon, New Mexico, and Arizona.

 

Intel Foundry Event Highlights

The daylong Intel Foundry conference spelled out the types of advanced packaging that will be required for microprocessors made by Intel (Core and Xeon), and by its longtime competitors – some of whom could become customers of the new Intel Foundry business. Examples of advanced packaging include multiple function-specific chiplets linked by fast interconnects; “tiles” that package the components into 2-D and 3-D arrays; memory pooling; and low-power energy sources.

 

The Intel Foundry Business

To set the tone for the daylong event, Intel renamed its Intel Foundry Services (IFS) business, which has design and manufacturing centers in the western U.S., Europe, and Asia. It is now named Intel Foundry – and is open for partnership with other tech companies that design, but do not manufacture, their future microprocessors and chiplets.

Now, the Intel Foundry business can be viewed as a wafer-fabrication, advanced-packaging, and manufacturing resource for customers who design their own chips – while supporting a worldwide supply chain with “hubs” across multiple continents. That contrasts with the oft-quoted global footprint for foundries and fabs, which today shows about 80% of semiconductor manufacturing in Asia and only 20% in the U.S. and Europe combined, Intel said. Intel plans to close that gap, moving toward a more even geographic mix of manufacturing capacity.

The phenomenon of co-opetition (cooperating and competing over time) will likely be an element of the world’s foundry partnering opportunities; it already is. We’ve seen that pattern with other companies in the foundry business, including GlobalFoundries and Samsung: GlobalFoundries operates fabs in upstate New York and Singapore, and Samsung operates fabs in Austin, Texas, and South Korea.

As Intel grows its business, the Intel Foundry sites may choose to partner with other longtime semiconductor companies, earning new foundry business by providing advanced packaging, design, and manufacturing expertise. In this way, companies that may formerly have been seen as Intel competitors may become foundry partners. One example, announced at the event, is that Intel will partner with ARM to serve mutual foundry customers.

Software-based design will be key to this new era of Intel Foundry business. That’s why Intel announced big design-software partnerships with Synopsys, Siemens (Mentor Graphics), and Cadence at this conference – and why all three of these companies have their former CEOs on Intel Foundry’s new Advisory Board.

Worldwide, the largest foundry business belongs to TSMC, with multiple fabs in Taiwan and at least one under construction in Arizona. TSMC forged a model of fabricating chips that other companies – its customers – designed for themselves, or designed with the help of third-party firms.

 

The Foundry Business Model

Establishing and delivering the advanced manufacturing techniques requires agreed-upon standards, adoption of interoperating interconnects between chips, high-speed networking – and the arrival of net-new technologies, such as AI-enabled consumer devices for the Edge, and what Intel has announced as its AI PC for business desktops.

Other elements needed for an increasingly disaggregated, AI-enabled computing world include security standards that isolate and protect “known-good” IT platforms and security-oriented “guardrails.”

On-chip interoperability between the individual compute engines, including chiplets, will be needed to harness as many compute and networking links as possible. Given the changing “physics” of manufacturing, new materials, optical interconnects, glass substrates, and links like UCIe and Ultra Ethernet are all expected to accelerate the adoption of advanced packaging for Enterprise, Cloud, and Edge systems.

Intel’s strategy is to leverage its multi-billion-dollar investments – including its own funding and grants from the U.S. federal CHIPS Act – to build out an arc of U.S. and European fabs. This will support local regional markets – and provide a de facto “follow-the-sun” manufacturing cycle around the world. At the Feb. 21 event, U.S. Secretary of Commerce Gina Raimondo, who spoke via video, outlined the series of CHIPS-program grants that Intel is leveraging as it builds two new manufacturing plants in Ohio.

Partnerships will be key in growing the business – including partnerships with design-software providers and semiconductor vendors. In a prime example of Intel Foundry’s emerging business model, Intel and ARM announced a new partnership for next-gen platform manufacturing. This was one of the most surprising, and closely guarded, secrets prior to the February launch event. In past years, industry observers saw ARM as competing with Intel in the mobile-phone and consumer markets in which ARM designs were made by other foundries; this perception will likely shift, given the Intel-ARM partnership announcement.

 

The Importance of AI

AI – and its rapid growth – was a constant theme throughout Intel’s messages about its product and technology strategies. The reason is plain to see: the deep interest in AI, spurred by the emergence of ChatGPT in November 2022, is driving remarkable IT growth following the worldwide pandemic of 2020-2022.

Interestingly, both Microsoft and OpenAI participated in the Intel Foundry Direct Connect event. Microsoft CEO Satya Nadella provided a video for the keynote with Intel CEO Pat Gelsinger – and OpenAI CEO Sam Altman provided his outlook for AI’s future in a closing 1:1 onstage discussion with Gelsinger.

In the keynote session, Nadella announced that Intel Foundry would produce a processor optimized for next-generation office software. The plan was specific enough to name the microprocessor – and the foundry process by which it will be manufactured. Microsoft’s broad market reach will illustrate how the Intel Foundry business model works in practice.

The presence of both Microsoft and OpenAI may be doubly significant, in that Microsoft has already made substantial investments in OpenAI – reportedly about $13 billion. That relationship was widely discussed late last year, during the OpenAI boardroom turbulence that ended – within weeks – with Sam Altman back in his role as OpenAI CEO.

Altman is known for his focus on the importance of AI research alongside building a product roadmap and a company. In his onstage conversation with Pat Gelsinger, at the close of the daylong Intel Foundry event, Sam Altman said he continues to believe in the promising future of AI – especially when it can be used to accelerate progress in important fields like human cognition, health care, medical research, and education.

He closed with this thought about the importance, and the future, of AI. “There are mega-new discoveries to find in front of us,” Altman said onstage. “But the fundamentals of this, is [that] deep learning works and it gets predictably better with scale.”

 

 


Copyright ©Jean S. Bozman, 2024, All rights reserved.

My Fond Memories and Interactions with Dr. M. R. Pamidi (M.R.)

I returned from India on February 15 after a five-week stay, and I called my colleague, M.R. Pamidi, on February 19 (Presidents’ Day) in the mid-afternoon Eastern Time (my usual time to call M.R.) to check in with him, as I have done regularly for many years. I even called him periodically from India, since we had several active projects.

However, on President’s Day, M.R.’s son, Matt, picked up the phone and gave me the sad and shocking news that M.R. had seriously hurt himself after falling off a machine at the gym. Matt told me that the prognosis for M.R. was not very good. I kept in touch with Matt, and this past weekend, Matt informed me of M.R.’s unfortunate and tragic passing, and it has shaken me up.

I first met M.R. virtually about eight years ago when, at the suggestion of an IDC analyst, he contacted me seeking opportunities to work on Cabot Partners’ projects. At that time, a small hardware company had just retained us to write a comprehensive custom report on how it could grow its business in the rapidly emerging area of artificial intelligence/machine learning (AI/ML). So, as I usually do with new partners, I put M.R. on this project. Over the next several months, I experienced firsthand M.R.’s friendly demeanor, diligent work ethic, thorough research, and attention to detail.

So, as opportunities arose for the next eight years, we collaborated on various technical client projects in cloud computing, AI/ML, cybersecurity, and others. Often, I would call M.R. and ask him to create content on many of these emerging topics. I would request, “Please summarize the five top use cases of AI in Retail.” “Please create ten charts depicting our Client’s Strengths, Weaknesses, Opportunities, and Threats (SWOT) compared to a competitor,” or “Please write three pages describing the role of infrastructure in Augmented/Virtual Reality (AR/VR).”

In every case, M.R. delivered excellent work, always on time, and proactively followed up to ensure it was what I was looking for. Once, late in 2022, he informed me he could not start a project because his beloved wife of over 46 years, Mary, had just passed after a prolonged illness. I felt awful then. But now I am devastated.

I only met M.R. once face-to-face (over two days), when we attended an analyst event in the Bay Area. M.R. was tall, lean, handsome, and looked like he was in his early 60s. Only when his wife, Mary, passed did he tell me that he was in his mid-seventies. It is truly stunning that he remained highly productive, always eager to learn, and physically active (he worked out daily).

When a Hindu passes, it is customary to immerse their ashes in water. Many of Cabot Partners’ client assets, as well as our website, blogs, and whitepapers, have M.R.’s work deeply ingrained in them. This work is an enduring legacy of his dedication, diligence, and determination.

Matt and his sister, Meera, are the legacy of his devotion to family. To Matt and Meera, my condolences on your profound and tragic loss. May the Lord Venkateshwara of the Seven Hills give you the strength and fortitude to bear this loss on top of the recent loss of your dear mother, and may He guide you and your family through this difficult and trying period. May M.R.’s soul rest in eternal peace!

HPE Intends to Acquire Juniper Networks: A Perspective

by
Jean S. Bozman, President,

Cloud Architects Advisors LLC
and
Srini Chari, Ph.D., MBA and M. R. Pamidi, Ph.D.,
Cabot Partners

The Deal

On January 9, 2024, Hewlett Packard Enterprise (HPE) announced its intent to acquire Juniper Networks in an all-cash transaction for $40.00 per share, representing an equity value of approximately $14 billion. The proposed acquisition is subject to regulatory approvals in the US, the EU, and Asia. Upon completion of the transaction, Juniper CEO Rami Rahim will lead the combined HPE networking business, reporting to HPE President and CEO Antonio Neri.

The transaction is currently expected to close in late CY2024 or early CY2025, subject to receipt of regulatory approvals, approval of the transaction by Juniper shareholders, and satisfaction of other customary closing conditions.

Background Before the Deal

With the Juniper acquisition, HPE plans to strengthen its position in end-to-end computing, distributed and multi-cloud deployments for enterprise, high-performance computing (HPC), and AI workloads. The ability to scale up customers’ AI environments will be critical to adopting AI and AI-enabled enterprise workloads.

Already a top-tier provider of IT systems, servers, storage, and networking systems, HPE competes with Cisco, Dell Technologies, and IBM for enterprise deals, as shown by analyst data from multiple market research firms (including IDC, Statista, and IT Candor). In 2024-2025, the speed of data access and transit in multi-cloud deployments is a priority for all AI, HPC, and enterprise solution providers. Those providers that optimize performance across Core, Cloud, and Edge are positioned to gain additional market share worldwide.

Juniper competes most closely with networking vendors including Arista Networks, Cisco Systems, Extreme Networks, and Huawei Technologies. Industry estimates report that Juniper has more than 10% of the networking/switching market – between one-quarter and one-third of Cisco’s total share worldwide. Juniper supplies networking across the customer spectrum, from small/medium businesses (SMBs) to large enterprises, deploying network switches for use in data centers and public and private clouds.

HPE is already seeing growth in its networking business, including in the wireless LAN market, where HPE’s Aruba Networking systems have a double-digit market share, according to IDC.

If the deal is approved, Juniper’s long-held position in the networking world will fit with HPE’s strengths in datacenter and cloud computing. In a video interview on CNBC, HPE CEO Antonio Neri said that he expects HPE’s networking revenues to double as a result of the HPE/Juniper combination.

Goal to Solidify Leadership in AI and Enterprise Computing

HPE plans to leverage its strengths in HPC and AI computing to bolster its position in multi-cloud enterprise solutions – especially for AI-enabled applications. End-to-end computing combines enterprise datacenter solutions and cloud solutions, blending them together by delivering services from a comprehensive portfolio of servers, software, storage, and networking.

An important aspect of HPE’s role as a provider is that a notable “slice” of its overall IT sales comes from cloud service providers (CSPs). These are generally not announced in Press Releases. However, CSPs are known for “building their own” IT infrastructures and customizing what goes into their system racks. Even so, many of the building blocks are shipped behind the scenes by vendors like HPE and Dell.

The “Why” of the Deal

HPE claims the combined networking segment will increase from approximately 18% of total HPE revenue as of fiscal year 2023 to about 31%, and will contribute more than 56% of HPE’s total operating income. Further, a new generation of networking gear and switches could increase HPE’s as-a-service (aaS) offers. For aaS (e.g., database-as-a-service; storage-as-a-service), HPE builds and manages IT infrastructure on behalf of its customers – often on private or on-premises clouds.

The move to acquire Juniper for $14 billion in an all-cash deal appears to be driven by several factors and goals:

  • Improve performance while reducing latency for hybrid cloud and multi-cloud networks.
  • Prevent another tech company from acquiring Juniper.
  • Ensure that Juniper Networking remains a U.S.-led company.
  • Grow opportunities for HPE’s extensive network of channel partners worldwide.

Here are more details on these points:

Performance: End-to-end performance is vital for a good customer experience while accessing online applications and data. On-premises and cloud customers will experience delays when accessing large data pools over a long-distance network as they accelerate their adoption of AI-enabled applications and data. They need these high levels of end-to-end performance even if they customize their offerings within their racks of servers.

Vendor competition: Acquisitions happen for many reasons, and one driver is to ensure that a rival vendor cannot acquire the products or technology of the target firm. One classic example was the 2010 bidding war between HP and Dell for 3PAR and its networked storage.

Sovereignty: Although not often mentioned during public announcements, there are concerns in the US and the European Union that key technologies remain owned by companies within their geographic regions. Concerns about protecting Personally Identifiable Information (PII) and intellectual property (IP) are often the top reasons for geographic preference. Some of HPE’s large customers include US federal agencies and EU scientific agencies, so geographic considerations could emerge as a factor during government acquisition reviews.

Channel partners: HPE has an enormous channel of global partners spanning all major geographies. To maximize its business revenue and profitability, HPE must keep its large channel fed through its partner organizations and product supply chain. HPE competes closely with Dell in this way and typically flows new products to end customers through a combination of direct and indirect channels.

A Long History of Large Acquisitions

HPE has substantial experience acquiring large companies, making this acquisition of Juniper a likely candidate for completion in the coming months. HPE is a clear example of leveraging both “build” and “buy” strategies, depending on the company’s product needs and competitive roadmap.

HPE and HP before it (following the split of the historic HP company into two firms – HP Inc. and HPE – in 2015) both had active acquisition programs. Some of the “names” of earlier acquisitions, including those made by HP before the split, are (in alphabetical order): 3Com (2009); 3PAR (2010); Aruba (2015); Compaq Computer in 2002 (which had previously acquired Digital Equipment Corp. (DEC) and Tandem Computers); Cray (2019); SGI (2016), and others.

Speed of action is often the reason for launching an acquisition strategy. Acquisitions allow a company to move faster in a given market to reduce development time and rapidly add the installed base of an acquired company. Following an acquisition, challenges include right-sizing the combined company and keeping costs in line, usually through reductions in force (RIFs) related to duplicated job roles in the two companies.

With this acquisition, HPE can improve its competitive position in the networking, AI, and multi-cloud enterprise market spaces.

Competition

HPE is one of the world’s largest IT providers, with nearly 20% of the $100B worldwide server market, according to IDC, Statista, and IT Candor data. HPE’s biggest server competitor is Dell Technologies; both vendors hold over 15% market share. However, the exact percentage varies by market research report, by key market sector (such as HPC and telcos), and by the types of systems counted.

In the rapidly growing AI market, HPE stands to increase its share in several market segments, including the blended cloud and enterprise computing space, where high-performance networking is critical.

The Battle in the Blended Cloud + Enterprise Market Space

In the networking space, the leading vendors are Cisco, Juniper, Arista Networks, Huawei, Broadcom, Dell, HPE, and Extreme Networks. As mentioned earlier, HPE intends to move up the ladder by acquiring Juniper, one of the top networking and switching equipment providers, alongside Cisco, Broadcom, Huawei, and Extreme Networks.

However, the IT and networking equipment combination will drive more business for end-to-end solutions, spanning Core, Cloud, and Edge, per HPE’s stated strategy. HPE CEO Antonio Neri often speaks about this Core, Cloud, and Edge strategy at tech conferences, saying it is a priority strategy for the company. In a recent CNBC interview, he said he expects to accelerate the momentum of HPE customers’ Cloud and Edge deployments.

More on the Juniper Acquisition

HPE has a multi-faceted build-and-buy strategy. HPE will add Juniper’s products and services for the end-to-end infrastructure world while strongly partnering with vendor and service leaders in AI, HPC, and networking to provide integrated solutions.

HPE has already embarked on its strategy to provide end-to-end offerings on a cloud-like delivery model to capitalize on customer spending in the fast-growing cloud market. Of course, HPE has made many acquisitions in the past ten years to strengthen its offerings in AI, analytics, Big Data, cloud data management, HPC, storage, and software. With its extensive product portfolio – now broadened by Juniper’s networking products and services – HPE must continue to leverage the products it designed and brought to market.

We believe HPE will continue to make more acquisitions in 2024 and 2025, but they will likely not be as large as the proposed Juniper acquisition. Indeed, the company will likely continue buying smaller firms, including startups, with targeted solutions in the data center and cloud market. This is a long-established pattern for HPE, IBM, Microsoft, and other large firms that want to speed up product development – and to bring new technologies to the marketplace as quickly as possible.

What does this Acquisition Bring to the Table?

HPE is looking to strengthen its position in networking, end-to-end computing, distributed and multi-cloud deployments for enterprise, high-performance computing (HPC), and AI workloads. With the acquisition of Juniper, it plans to help enterprise customers “scale up” their AI-enabled workloads, just as it has done with HPC-enabled workloads in recent years following its acquisitions of SGI and Cray.

Until now, HPE’s networking portfolio has not generated enough networking revenue to keep pace with Cisco’s. With this acquisition, HPE would move toward closing that gap; Juniper had an estimated annual networking revenue of $2.6 billion in 2023. An essential Juniper product is the Mist AI platform for AI-powered network management software, which strengthens HPE’s portfolio and aligns with its focus on network management, automation, and orchestration.

We note here that HP OpenView, a consolidated product that leveraged HP’s acquisitions of Radia, Peregrine Systems, Mercury Interactive, and Opsware, also brought revenue growth. OpenView eventually became part of HPE Software, which HPE sold to Micro Focus. HPE has to fill the gap in its networking software portfolio left by the Micro Focus spinoff, and the Juniper acquisition will help HPE achieve that goal. It will also enhance HPE’s cloud offering for multi-site, data-based workloads – HPE GreenLake.

HPE GreenLake

In the cloud-computing space, HPE is adding value with its GreenLake software and partnering with the CSPs, including Amazon AWS, Microsoft Azure, and Google Cloud. The company launched HPE GreenLake in 2017, offering “pay-per-use” IT solutions for top customer workloads. Since then, GreenLake has evolved to support data warehouses, lakehouses, and other data resources accessed by end-to-end cloud services solutions.

GreenLake is an as-a-service (aaS) offering that brings cloud-like flexibility to data centers and other locations, such as satellite and remote offices. When customers sign up for GreenLake, HPE delivers a complete and preconfigured system that includes all the hardware and software necessary to be up and running almost immediately. Importantly, it helps convert a customer’s CapEx to OpEx because HPE manages the system throughout its entire lifecycle. In exchange, customers pay a monthly subscription fee based on a pay-for-use pricing structure similar to many cloud services.

However, HPE and Juniper must overcome several challenges if the deal closes.

HPE and Juniper’s Challenges in Datacenter Networking

Perhaps the biggest challenge is that Juniper’s growth has stalled recently. Juniper’s revenue was $4.99 billion in 2016 and grew only about 1% per year, reaching $5.3 billion in fiscal 2022. The company may not have been large enough to fund large-scale marketing campaigns and outreach to gain new customers and quickly bring new offers to its enterprise and cloud customers.

For its part, HPE has had challenges competing with the networking incumbents – but it looks to gain share in a blended market that will deliver end-to-end IT infrastructure supporting both enterprise and cloud applications and data. Further, HPE could add Juniper’s technology to existing HPE high-availability and backup/restore software offers, adding value to its current compute and storage solutions for data protection and data availability.

Key Takeaways

If regulatory authorities approve the acquisition in the US, the EU, and Asia, HPE will be solidifying its position with global enterprise companies and major CSPs as a provider of high-performance IT technology and services. The rapid growth of AI demands optimized computing, storage, and networking performance for the world’s largest networks. For customers, keeping pace with AI’s data, networking, and power/cooling requirements is critical to “scaling up” AI across corporate networks and the cloud in multi-cloud deployments.

 To deliver data services to their end users, both categories of customers (enterprise customers and cloud providers) need to access and update large “availability zones” around the world to maintain response times for enterprise and cloud applications and data. Any glitches or interruptions in the delivery of those round-the-world services will be noticeable – and noted – by the most valuable customer sets in the world.

We will eagerly await the outcome of HPE’s acquisition of Juniper, following a cycle of regulatory reviews by various world bodies, and we will provide an update when the deal is finalized.

 

AI Governance

by
Jean S. Bozman, President,

Cloud Architects Advisors LLC
and
M. R. Pamidi, Ph.D.,
Cabot Partners

AI and GenAI are still “center-stage” for many companies, large and small, in the wake of a global wave of rapid adoption for GenAI software that accelerated in 2023. Chatbots and large language models (LLMs) soared in popularity in 2023 following the release of GenAI products, including OpenAI’s ChatGPT, Google’s Bard, and Microsoft’s Copilot chatbot, among others.

Business leaders worldwide are envisioning how they will apply AI to solving business challenges. They are deploying AI-enabled business solutions in retail, manufacturing, health care, and many other industry sectors. However, securing and managing AI across business units in large enterprises are critical challenges they must address. In the wake of recent AI deployments, improved security can be an afterthought – or an add-on – for some customers.

Now, AI-enabled silos must be monitored and managed as part of a company-wide tracking ability designed to manage applications securely and consistently across hybrid clouds and multi-clouds.

Proliferation, then Consistency Across the Organization

Building a consistent approach to AI deployments is a “must” for businesses: it ensures that policies and security guardrails (standards and best practices) are in place and applied throughout the organization. This approach will allow later audits for end-to-end data integrity, consistency, and management.

AI governance is not a one-size-fits-all solution. Instead, it is a context-specific and dynamic process that requires continuous monitoring and evaluation. In a governance model, all the managed AI systems should be viewable from IT’s central management dashboards. Ideally, governance should involve a diverse and inclusive range of stakeholders – developers, ethicists, policymakers, and business end users – to ensure that a broad range of perspectives and interests are considered and balanced.

There is much homework to do before deploying AI governance. As a matter of design, AI governance should also cover the entire AI lifecycle – spanning the following elements: data collection and processing, model development and testing, deployment and maintenance, and, ultimately, the decommissioning of specified AI systems at their end of life (EOL).

Some of the key components of AI governance are listed below; a minimal code sketch follows the list:

  • Data governance: The management of data quality, security, privacy, and ethics to ensure that the data used to train and test AI models are accurate, reliable, representative, and respectful of human rights.
  • Ethical governance: The administration of ethical principles, values, and norms to ensure that the AI systems align with the moral and social expectations of the stakeholders and society at large.
  • Model governance: The management of model performance, robustness, fairness, and explainability to ensure that the AI models produce valid, reliable, unbiased, and interpretable results and decisions.
  • Operational governance: The tracking of operational risks, compliance, and auditability to ensure that the AI systems are deployed and used in a safe, secure, and lawful manner – and that their impacts and outcomes are monitored and evaluated.
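
To make the operational-governance component concrete, here is a minimal, hypothetical sketch of a registry audit that flags governance gaps before a model is deployed. All names here (ModelRecord, audit, the field names) are our own illustrations, not any vendor’s API.

```python
# A minimal, hypothetical operational-governance audit (requires Python 3.10+).
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelRecord:
    name: str
    owner: str = ""                                  # accountable person or team
    data_sources: list[str] = field(default_factory=list)
    bias_review_date: date | None = None             # last fairness/bias review
    eol_date: date | None = None                     # planned decommission date

def audit(record: ModelRecord) -> list[str]:
    """Return the governance gaps found for one registered model."""
    gaps = []
    if not record.owner:
        gaps.append("no accountable owner")
    if not record.data_sources:
        gaps.append("data lineage not documented")
    if record.bias_review_date is None:
        gaps.append("no fairness/bias review on file")
    if record.eol_date and record.eol_date < date.today():
        gaps.append("past end-of-life; decommission required")
    return gaps

chatbot = ModelRecord(name="support-chatbot", owner="CX team")
print(audit(chatbot))
# ['data lineage not documented', 'no fairness/bias review on file']
```

In practice, such a check would run automatically in a deployment pipeline, blocking any model whose audit returns a non-empty list of gaps.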

The Whys of AI Governance

As noted above, securing and managing AI across business units in large enterprises are essential challenges when business applications and data leverage AI. Enterprise-wide AI governance helps organizations do the following effectively:

  • Comply with existing and emerging regulations and standards that apply to AI, such as the EU’s General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), and the Artificial Intelligence Research, Innovation, and Accountability Act of 2023 (AIRIA), introduced in the U.S. Senate in November 2023.
  • Build trust and confidence among a range of stakeholders, such as customers, employees, partners, and regulators, by demonstrating transparency, “explainability” regarding data sources, lack of bias, and, ultimately, accountability for their AI systems.
  • Achieve their strategic goals and objectives by ensuring their AI systems align with an organization’s values, corporate mission, and strategic vision.

IBM and Enterprise AI

As announced at IBM’s Think conference in May 2023, IBM is taking an enterprise-wide approach to AI governance. IBM released its watsonx.ai development toolkit and data-monitoring software, watsonx.data, in 2023. In December 2023, IBM released its watsonx.governance software into general availability (GA), rounding out the portfolio for customers heading into 2024.

Many readers will recall the IBM Watson AI software portfolio released a decade ago. The new watsonx portfolio targets today’s business and IT environments, including new functionality for AI, hybrid cloud, and multi-cloud deployments. The new watsonx software portfolio, as announced last spring, has three deliverables:

  • IBM watsonx.ai enterprise studio platform for software developers;
  • IBM watsonx.data, which catalogs enterprise data in data warehouses, data lakehouses, and corporate data stores;
  • IBM watsonx.governance software to monitor and manage AI across the enterprise.

The release of watsonx.governance software supports an enterprise-wide view of AI activity and use cases – and a view of how their data center and cloud resources support the AI systems within the enterprise. This capability will help businesses to “scale up” their AI infrastructure consistently across the enterprise.

The December release expanded capabilities for viewing LLM development, monitoring LLM metrics, and managing LLMs throughout the software lifecycle. The software tools help avoid bias and data inaccuracies, and help prevent AI “drift” as the LLMs operate over months and years. IBM said it plans to expand its governance software to add support for third-party AI models from other vendors, starting in Q1 2024.
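
The notion of “drift” can be made concrete with a small, generic sketch – our own illustration, not IBM’s watsonx API – using the population stability index (PSI), a common statistic for comparing the distribution a model was trained on with the distribution it sees in production:

```python
# A generic drift check: the population stability index (PSI) compares a
# training-time feature distribution with the production distribution.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population stability index between two samples of one feature."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    p, _ = np.histogram(expected, bins=edges)
    q, _ = np.histogram(actual, bins=edges)
    p = np.clip(p / p.sum(), 1e-6, None)   # clip to avoid log(0)
    q = np.clip(q / q.sum(), 1e-6, None)
    return float(np.sum((p - q) * np.log(p / q)))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 10_000)    # what the model was trained on
prod = rng.normal(0.4, 1.2, 10_000)     # what it sees months later
print(f"PSI = {psi(train, prod):.3f}")  # values above ~0.2 suggest drift
```

A governance tool would compute such signals continuously and alert when a monitored model’s inputs or outputs cross an agreed threshold.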

Kareem Yusuf, Ph.D., Senior Vice President, Product Management and Growth, IBM Software, described IBM watsonx.governance as a unified solution for businesses deploying and managing LLM and ML models. Governance software provides tools for increased visibility, providing software to “automate AI governance processes, monitor their models, and take corrective action.” Quoted in an IBM press release, Yusuf summed up the business value of customers’ governance-software deployments. “Company boards and CEOs are looking to reap the rewards from today’s more powerful AI models, but the risks due to a lack of transparency and inability to govern these models have been holding them back,” he said.

Analyst Outlook for 2024

Across all industries and sectors, AI and GenAI are gaining attention and adoption in 2024 as customers think about how they will coordinate and secure AI deployment throughout their organizations.

Undoubtedly, AI is gaining traction to automate business processes and to identify the processes that create bottlenecks or delays in supply chains that affect end customers. We note here that IBM is not alone in offering a governance model for AI. Competitors offering AI governance tools – not just advisory and consulting services – include Amazon SageMaker, Credo AI, the Dataiku DSS MLOps platform, Google Vertex AI, Holistic AI, Microsoft AI Content Safety, Microsoft Azure Machine Learning, Minitaur, and Qlik Staige. However, IBM’s decades-long experience in enterprise-wide data management and network management will be leveraged in its new AI software tools, helping IBM stay on the short list of end-to-end AI software management providers.

The excitement around AI, which exploded in 2023, is increasing the need among enterprise customers worldwide to harness, monitor, and manage AI capabilities at the data center Core, in the Cloud, and at the Edge. The ability to monitor and manage AI across sites will be vital to enterprise customers of all sizes. They must reduce business risk while scaling up business processes, even as regulations governing AI use and legal compliance proliferate alongside customers’ rapid adoption of AI across the world’s geographic regions in 2024.

IT Infrastructure Predictions for 2024
by
Jean S. Bozman, President,

Cloud Architects Advisors LLC
and
Srini Chari, Ph.D., MBA and M. R. Pamidi, Ph.D.,
Cabot Partners

This year’s Cabot Partners’ Predictions forecast a more varied and heterogeneous computing environment in 2024 to address the incredible growth in AI and Generative AI (GenAI).

AI will be a game-changer that will cause business and IT managers to “think differently” about how they will provide system infrastructure to new AI-enabled applications. Equally important is how AI is causing business units and IT organizations to reconsider, reconstruct, and redeploy workloads into a mixed environment of on-premises systems and off-premises cloud services.

The chief characteristics of this evolving computing/storage environment are:

  • AI’s explosive growth will continue in 2024 with advancements in AI-powered automation, personalization, and predictive analytics, often overshadowed by the sheer excitement and hype surrounding GenAI.
  • Large Language Model (LLM) software will continue to grow in 2024. ChatGPT, introduced by OpenAI in November 2022, will face increasing competition from Google Gemini, Meta Llama, Anthropic, Cohere, and others. At the same time, Microsoft, which partnered with OpenAI, has made it clear that ChatGPT will be part of many Microsoft software products as a co-pilot for key functionality. We expect that GenAI, as a software category, will not replace writers and graphic designers; rather, they will use it to speed up their creative process through faster content development. AI software will continue to grow and be used for a wide range of purposes (e.g., software development, application functionality, and data management) throughout the IT infrastructure.
  • Software-defined infrastructure will play a dominant role in managing hybrid clouds and multi-cloud deployments. Powerful management software will oversee the increasingly complex infrastructure – spanning the Core, the Cloud, and the Edge – ensuring that end-to-end security is in place, protecting data across a corporation, government agency, or organization. Importantly, AI and GenAI will bring senior business management and IT management closer together so that new systems are not “silos” – but rather part of a more comprehensive end-to-end design.
  • The worlds of on-premises computing and Cloud Computing will merge in 2024. Cloud providers paved the way for distributed computing, linking dense computing hubs. This approach to scalable, distributed IT will now spread to a wider variety of customer sites, especially those hosting on-prem-only compute resources for security and compliance reasons.
  • Heterogeneous, mixed-chip systems will gain ground. In 2024, more customers will leverage mixed-processing platforms for their public and private cloud deployments. Workloads will gravitate to the CPUs, DPUs, GPUs, IPUs, NPUs, and TPUs that run the applications best – so customers won’t standardize on any single platform. Instead, they will choose multiple platforms, each selected to run specific job types (see the toy placement sketch after this list).
  • Faster interconnects will be a “must” in this evolving world of AI and multi-cloud computing. These include new “connectors” – optical interconnects and the Ethernet-plus interconnects now under discussion in open-standards industry consortia. Without faster interconnects, multi-cloud computing would be too slow to support the rapid data transfers needed for end-to-end delivery of data-based services across the entire enterprise.
  • Chiplets will be key to the new multi-vendor infrastructure. A chiplet is a tiny integrated circuit containing a well-defined subset of functionality, designed to be combined with other chiplets that reside on the same system board. The move to chiplets is gaining steam and will accelerate in 2024. Chiplets will link function-specific compute tasks. We expect their use will grow in vendors’ products, supported by new foundry processes in semiconductor factories (e.g., AMD, ARM, Intel, NVIDIA, and TSMC). More focus will be on leveraging chiplets, which support a more flexible infrastructure. One often-overlooked benefit is that adding new system features via chiplets can help reduce yield problems for chip manufacturers and foundries making very large integrated circuits.
  • Data, data, data – a key foundation for the AI world. Data placement, integrity, and optimization will be a focus for multi-cloud systems, guided by customers’ needs to support AI, GenAI, and HPC. Tapping reservoirs, or piles, of data across modular platforms will help reduce power/cooling requirements for those workloads that traditionally have been running on monolithic or standalone systems.
  • Data privacy and security breaches will continue to cause havoc in all business sectors as cyberattacks and ransomware attacks become more sophisticated. Preventive measures and countermeasures are being adopted in the private and public sectors – but they did not prevent disruptions to business, and financial damage, in the 2023 attacks. To counter these threats, we expect trends like AI-powered threat detection and prevention, increased focus on data privacy and compliance, and zero-trust security models to grow in 2024.
  • Liquid Cooling, a familiar feature in mainframes and Cray supercomputers, is being reborn for the AI age. Providers of systems and racks are using clear solvents rather than water, which can damage electronic circuits over time. We believe a new generation of liquid-cooling designs will increasingly be part of customer and Cloud Service Provider (CSP) data centers. Liquid cooling will not be a universal “fix,” so it will appear first for the densest computing and storage infrastructure. As we have seen in 2023 intros by large systems vendors (e.g., Dell Technologies, Lenovo, HPE, and IBM), liquid-cooled racks will be deployed alongside air-cooled racks, given the multiple types of workloads customers will be running. Major oil companies, including BP and ExxonMobil, have jumped on the liquid-cooling bandwagon by developing and marketing their brand of liquid-cooling solvents for use in liquid-cooling racks.
  • Co-opetition – long a feature of open systems – will be evident across the tech industry as a strategy to make multi-cloud computing “work” while supporting high-performance workloads (e.g., AI, Gen AI, and HPC). Unusual combinations will result, as has already been the case for Microsoft + Red Hat, Oracle + VMware, Intel + IBM, and the like. Enterprise customers like co-opetition because it prevents or reduces “vendor lock-in” that would otherwise reduce choice and increase prices for IT solutions.
  • Quantum Computing will steadily gain adoption in 2024. One driver for adoption will be to ensure that traditional security methods won’t get compromised by someone else’s quantum computers – even though wide adoption in enterprise organizations is not expected before 2030. It’s important to note that the special physical “environment” that enables quantum computing (very low-temperature cooling and an expensive and high-maintenance physical environment) will lead to quantum-as-a-service – supported by fast connections to quantum-compute “clouds” accessing supercooled quantum infrastructure.
  • Variations on the “as-a-service” theme will gain ground as vendors offer AI as a Service, Data as a Service, Quantum as a Service, and Storage as a Service (StaaS). Given the rapid introduction of new hardware and hardware specifications (e.g., processors and interconnects), as-a-service offers will reduce update pressures for on-prem enterprise data centers by reducing CapEx costs for system acquisition and system maintenance.
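
To make the mixed-chip prediction above concrete, here is a toy sketch – our own illustration, not any vendor’s scheduler – of the placement idea: route each class of work to the engine that runs it best, with the CPU as the general-purpose fallback.

```python
# Toy illustration of heterogeneous placement across compute engine types.
PLACEMENT = {
    "llm_training":  "GPU",   # dense matrix math at scale
    "llm_inference": "NPU",   # low-latency, power-efficient inference
    "packet_path":   "DPU",   # network and data-path offload
    "batch_etl":     "CPU",   # branchy, general-purpose code
}

def place(job_class: str) -> str:
    """Choose a compute engine for a job class; default to CPU."""
    return PLACEMENT.get(job_class, "CPU")

for job in ("llm_training", "llm_inference", "nightly_report"):
    print(job, "->", place(job))   # nightly_report falls back to CPU
```

Real schedulers weigh cost, availability, and data locality as well, but the principle is the same: no single platform wins every job type.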

Analysis
Business customers are eager to update their blended enterprise data centers’ IT environment – adding new technology to modernize existing systems and infusing all systems with distributed software-defined cloud computing. Older systems will become more cloud-like by supporting software-defined infrastructure that allows them to scale up applications and capacity across IT resources. So, private clouds may reside on-premises, but they will undoubtedly coordinate workloads with public cloud services – with increased cloud deployments over a five-year IT time horizon.

As a result, IT silos will likely decrease in 2024, while multi-cloud deployments expand in many enterprise organizations. Of course, these technology changes must be virtually “transparent” to business users and business units. Consequently, 2024 will see business units and IT organizations develop faster, better “human” communications, especially around GenAI capabilities that, quite literally, are opening access to corporations’ mission-critical data stores. Customers’ existing data resources, including data lakehouses, data warehouses, and centralized databases, will no longer be sufficient on their own to coordinate GenAI data access across entire businesses and governmental organizations.

A “re-think” of data security in the AI age will be a high priority in 2024. Security threats will remain a top concern for business and IT executives. In most enterprises, traditional DevOps has morphed into DevSecOps because IT Security staffers have realized that security must be baked into the infrastructure. (By analogy, eggs are baked into a cake as an integral ingredient, while the cake’s icing may be added as a final afterthought). Integrating security into multi-tier deployments will avoid the kind of disruptions in everyday IT operations that make news headlines and slow down mission-critical business processes – impacting company revenues and profits.

Enabling this “blended” IT infrastructure makeover will require careful planning, coordination between business and IT, and enterprise-wide adoption of networking, storage, and security standards and policies. In short, getting more out of IT in 2024 will require an intensive upfront planning process and frequent communications between IT organizations and the business units that IT supports. Otherwise, the resulting infrastructure would be inefficient, producing uneven and inaccurate business results that could harm the entire enterprise.

Summary
Why embark on this journey to leverage AI, modernize infrastructure, and migrate enterprise applications to multi-cloud environments? Because the ability to do so will support business efficiency, enable IT flexibility, and improve business agility – using AI and multi-cloud to produce better business results.

Combining both “styles” of computing – on-premises and cloud-provided infrastructure – will be prevalent worldwide in 2024. By modernizing existing IT infrastructure and leveraging cloud infrastructure, customers plan to contain their operational costs (OpEx) while limiting their capital expenditures (CapEx) for acquiring and maintaining systems.

As 2024 begins, businesses must select which corporate workloads will remain on-site to comply with corporate policies and governmental regulations – and which will migrate to CSPs. From now on, across the enterprise and in multi-cloud deployments, IT infrastructure changes must improve efficiency, flexibility, management/control, and scalability to support an ever-expanding set of customer use cases.

Broadcom Completes Its Acquisition of VMware Inc.

by

Jean S. Bozman, President,

Cloud Architects Advisors LLC

and

M. R. Pamidi, Ph.D., Principal Analyst

Cabot Partners

 

Broadcom, Inc. has completed its acquisition of VMware Inc., the software firm that came to prominence during the virtualization wave of the early 2000s. Now, Broadcom plans to leverage VMware’s broad installed base, its cloud-enablement software, and its ecosystem of partners to focus on customers’ application modernization and digital transformation projects.

The move is seen as an expansion of Broadcom’s enterprise strategy to create value through integration with enterprise and cloud-enabling software. Broadcom previously acquired CA Technologies (formerly Computer Associates) in 2018 and Symantec’s enterprise security business in 2019; both were software companies focused on enterprise workloads.

The combined company will have more than $45 billion in annual revenue. Broadcom reported that it generated $33.2 billion in annual revenue for the year ended in June 2023, and VMware generated $13.4 billion over the same period. Media and analysts had expected the news to come by October 30, but finalization of the deal awaited regulatory approval by China, following previous approvals by the European Union (E.U.) and the United States.

Following the merger finalization, VMware CEO Raghu Raghuram announced his resignation on November 22, 2023. As VMware’s CEO, he had followed VMware’s original CEO, Diane Greene, who co-founded the company in 1998 and stayed until 2008; Paul Maritz (CEO from 2008 to 2012); and Pat Gelsinger (CEO from 2012 to 2021), who left to become CEO of Intel.

 Analysis

 A Go-Forward Strategy Aimed at Application Modernization

Broadcom’s goal is to leverage its hardware and software solutions to work with customers that are modernizing IT infrastructure for use across Core, Cloud, and Edge enterprise-wide deployments. VMware – with its Tanzu software for Kubernetes-based enablement of VMware’s vSphere platform – is a key asset in that application-modernization journey, linking customers’ legacy applications with new cloud-native capabilities in multi-cloud deployments.

VMware’s broad customer base in enterprises, worldwide, made it very attractive to Broadcom, with Broadcom viewing the VMware software assets as cash-rich opportunities in customers’ modernization and digital transformation activities. Even so, Broadcom will need to make the case that it will amplify VMware’s initiatives in the application-modernization and multi-cloud market spaces.

Modernization initiatives are becoming even more compelling to customers, who realize that their legacy applications are preventing them from becoming more efficient and from growing revenue and profits. That realization is driving IT makeovers worldwide as customers’ post-pandemic IT budgets get approved in the U.S., Europe, and Asia.

These customer moves to a cloud-native, multi-cloud world aren’t easy: many involve conversion of aging COBOL applications to Java applications, or a direct replacement of monolithic applications with distributed, modular software. Broadcom knows that VMware is already present in many of these enterprise accounts, often using Tanzu and the VMware Cloud Foundation (VCF) software platform as a launch point for customers’ multi-cloud deployments.

A Long Global Approval Cycle for This Merger

It was not surprising that approval for the merger took as long as it did, given that it had to gain approval from governmental oversight agencies around the world. The entire effort – from Broadcom’s initial announcement to closing the deal – took 18 months, spanning May 2022 to November 2023.

This ambitious merger of Broadcom and VMware builds on a vision of empowering customers to update their IT infrastructure – including hardware and software – in a far-reaching and comprehensive way. Hock Tan, President and CEO of Broadcom, said of the merger’s strategy: “With a shared focus on customer success, together we are well positioned to enable global enterprises to embrace private and hybrid cloud environments, making them more secure and resilient. Broadcom has a long track record of investing in the businesses we acquire to drive sustainable growth, and that will continue with VMware for the benefit of the stakeholders we serve.”

What’s Ahead

Now that the deal has closed, Broadcom is free to market and sell its modernization solutions – and it will be competing with other large companies engaged in similar modernization strategies. The list of competitors for modernization includes IBM, which acquired Red Hat (RHEL, OpenShift, and Ansible) in 2019; HPE, with its GreenLake software portfolio; Dell Technologies, with its application-modernization services and Apex as-a-Service offering; and Nutanix, with its Nutanix Cloud Platform.

And here’s one more notable point: Due to ongoing co-opetition in the marketplace, systems vendors may already have VMware solutions in their modernization and multi-cloud offers to customers.

With this merger, Broadcom’s traditional enterprise business will be changing – and it will be happening quickly. The migration of enterprise applications to the Cloud and the creation of net-new applications at the Edge of corporate networks are expected to drive IT investments in 2024/25. Broadcom’s introductory press release said that VMware’s services will be an important component of those modernization and transformation projects with customers.

Specifically, as the company announced: “VMware will offer a rich catalog of services to modernize and optimize cloud and edge environments, including VMware Tanzu [for application developers] to help accelerate deployment of applications, as well as Application Networking (Load Balancing) and Advanced Security services, and VMware Software-Defined Edge for Telco and enterprise edges.”

Following the merger, Broadcom will also likely trim the combined workforce, as is often the case when two large organizations merge. Clearly, Broadcom should make every effort to retain VMware software-innovation staffers, including key product developers and managers for the most widely deployed VMware app-dev platforms and tools.

Customers will be reviewing a long list of competitive offers to achieve their organization’s IT modernization and digital transformation goals. That’s why Broadcom must differentiate its modernization and transformation services – and make them crystal-clear to the broad marketplace of enterprise customers. This is an achievable goal – and the new Broadcom must reach out to a customer base that is, suddenly, much larger than before to make its case. We’re confident that this will happen. We are also looking forward to tracking the trajectory of this large, and impactful, combination of technologies for customers’ digital transformation.

Intel OpenVINO Focuses on Sharpening Computer-Vision for Cloud and Edge Applications

By Jean S. Bozman and M.R. Pamidi

Expanding on our analysis of the Intel® Distribution of OpenVINO™ Toolkit, this post continues our discussion about this development framework, its support for widely used AI LLMs (large language models), and its “fit” with the Intel Geti platform for developing AI applications that include computer vision.

Intel is best known for its microprocessors, fabs, and high-speed interconnects. But in the wake of the ChatGPT generative AI (genAI) announcements of November 2022 – just one year ago – Intel has been describing, documenting, and demonstrating its AI application-development toolkit, which it calls OpenVINO. This research note takes another look at OpenVINO – its use cases, and its planned trajectory in the AI marketplace.

At the Intel Innovation 2023 conference, held in San Jose, CA, in September, Intel demonstrated the development of AI applications based on its OpenVINO toolkit. As demonstrated at that event, OpenVINO was paired with the Intel Geti platform for customized computer-vision model development.

By using the two products together, customers worked to optimize LLMs for the automation of chatbots and factory-automation sequences. Using the Intel Geti platform and OpenVINO together can speed up model development, enabling fine-tuning of optimized workflows on Intel devices across Cloud and Edge deployments.

We believe that we will see this approach again, in 2024, as Intel doubles down on its AI-enabled product offers. A video that shows how OpenVINO and Intel Geti can be used together for Agriculture applications in the coffee industry can be viewed here.

Using a variety of Edge devices, applications developed with the OpenVINO framework produced prototypes for AI-controlled safety systems for pedestrians and cars, as shown at the conference.

This use case – controlling dynamic workloads through visualization – drove home the point that multi-vendor automation solutions are proliferating in Cloud and Edge applications, which Intel sees as a rapidly growing opportunity in this decade.

The best examples of this category of use cases are in manufacturing, telecommunications (telecoms), and device-driven workloads across many industry segments (e.g., healthcare, banking/finance, and retail) worldwide.

Focusing on Developers

The OpenVINO application-development framework, as described by Intel, is designed to allow developers to “run inference on a range of computing devices – including many at the Edge. This would include machine learning (ML) for device automation, computer vision, and factories running multi-step processes.”

Intel’s strategy here is to build on the current footprints for generative AI (genAI) that are already being utilized by customers – and to create new AI solutions based on Intel hardware and the Intel ecosystem’s software technologies. In doing so, Intel is asking customers to consider migrating apps to a new deployment environment, resulting in an emerging business model where competing app/dev frameworks are also being used – often in the same Cloud or Edge environments (e.g., TensorFlow in Google Cloud environments).

Through its support for multiple hardware types, OpenVINO allows customers to "convert" applications developed with widely used frameworks like Caffe and TensorFlow, allowing them to run on the Intel inference engine across a variety of CPUs, GPUs, and FPGAs (field-programmable gate arrays). Intel also provides AI model-training classes, along with online software showing OpenVINO being used in place of other, well-known frameworks.
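
For illustration, here is a minimal sketch of that convert-and-run flow, assuming OpenVINO's 2023.x Python API; the model file name and input shape are hypothetical:

```python
# Minimal sketch: convert a TensorFlow model and run inference with OpenVINO.
# Assumes the 2023.x Python API; "my_model.pb" and its input shape are hypothetical.
import numpy as np
import openvino as ov

core = ov.Core()

# Convert the framework model to OpenVINO's intermediate representation (IR).
model = ov.convert_model("my_model.pb")

# Compile for a target device; "CPU" could instead be "GPU" or "AUTO".
compiled = core.compile_model(model, device_name="CPU")

# Run inference on a dummy input matching the model's expected shape.
input_tensor = np.random.rand(1, 3, 224, 224).astype(np.float32)
result = compiled(input_tensor)[compiled.output(0)]
print(result.shape)
```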

This multi-vendor scenario maps to current patterns of customer use of a variety of hardware devices in Edge locations. For Intel, this is a pragmatic strategy that is aimed at addressing real-world scenarios that already leverage mixed-vendor deployments for Cloud and Edge applications. It also reflects a go-to-market strategy, in which Intel believes it can build on Intel Geti and OpenVINO, when used together, to grow the adoption of Intel-based AI solutions for new and emerging Cloud and Edge use cases.

This approach will become even more significant when Intel ships its AI PC, as it has hinted it will do next year (2024) – a future announcement that will highlight Intel’s AI strategy, as articulated at the Intel Innovation Conference.

How It Works

Intel says OpenVINO is based on convolutional neural networks (CNNs), which allows the toolkit to share application workloads across multiple devices – including both Intel and non-Intel devices. The toolkit supports faster performance and memory-use optimization, and it is designed to address rapidly growing genAI use cases.

A new release of the toolkit, OpenVINO 2023.1, arrived on September 18, 2023, providing expanded support for genAI. The top features of this release include:

  • A model optimizer to convert models from widely used frameworks, including Caffe, TensorFlow, Open Neural Network Exchange (ONNX), PyTorch, and Kaldi.
  • An inference engine that runs on heterogeneous hardware platforms, including CPUs, GPUs, FPGAs, and the Intel Neural Compute Stick 2 (Intel NCS2). Supported accelerators may include NVIDIA GPUs and the Intel Habana Gaudi platform, which Intel gained through its $2 billion acquisition of Israeli chipmaker Habana Labs in 2019.
  • A common application programming interface (API) for a variety of Intel-based hardware platforms, including fourth-generation Intel Xeon processors (Sapphire Rapids); a sketch below illustrates this common API.
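
As a rough illustration of that common API across heterogeneous hardware, the following sketch (assuming the same 2023.x Python API and a hypothetical IR file produced by the model optimizer) enumerates the devices available on a host and compiles one model for each:

```python
# Minimal sketch: one model, one API, multiple devices.
# Assumes OpenVINO's 2023.x Python API; "my_model.xml" is a hypothetical IR file.
import openvino as ov

core = ov.Core()
print(core.available_devices)  # e.g., ['CPU', 'GPU'], depending on the host

model = core.read_model("my_model.xml")
for device in core.available_devices:
    compiled = core.compile_model(model, device_name=device)
    print(f"Compiled for {device}")
```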

Summary

GenAI is breaking down barriers between IT and business units – with the benefit that genAI, plus easy-to-use chatbots, helps make AI understandable and visible to the business managers who approve budgets for AI systems. This aspect of genAI – bringing business and IT into dialog – isn't highlighted often enough in the industry.

GenAI’s widespread popularity in business units will likely give rise to broader adoption of genAI for close-to-the-customer applications running on CPUs, GPUs, and FPGAs. By offering specific app/dev solutions for AI, Intel is aiming to grow its share in the fast-growing AI segment.

Intel sees OpenVINO as an opportunity to encourage the use of genAI for computer vision. However, the overall opportunity is much broader: OpenVINO can be applied to other AI uses, including natural language processing (NLP) and customer inference use cases across multiple industry segments.

Next-generation Intel microprocessors, including the ones powering the Intel AI PC, are set to ship into general availability next year (2024).

Finally, Intel's focus on Cloud and Edge use cases makes sense for the OpenVINO framework, because it anticipates expanding market opportunities for genAI app-dev toolkits in those settings. Likewise, Intel Geti's computer-vision software fits with OpenVINO's rapid-development capabilities. In the highly competitive AI marketplace, we expect Intel to update its 2023 release in 2024, adding new features for an expanding list of industry-specific use cases.

IBM Announces New Alliance to Accelerate Open Innovation in AI

Srini Chari, Ph.D., MBA | M.R. Pamidi, Ph.D.

Cabot Partners

It has been just over a year since OpenAI's ChatGPT made a dramatic entry, disrupting the AI landscape. On December 5, 2023, IBM and Meta announced an industry-wide alliance, partnering with many hardware, processor, software, and services providers to accelerate open innovation in AI. Alliance members include AMD, Dell, Intel, Red Hat, ServiceNow, and many others, including top research institutions such as the University of California, Berkeley, and other academic institutions (Figure 1).

Figure 1: The IBM AI Alliance

The broadly stated objectives of this IBM-led alliance include creating common frameworks for evaluating AI algorithms, investing capital in AI research funds, ensuring safety, security, and equity, and collaborating on open-source models.

Another (unstated) purpose of this Alliance could be to challenge the perceived dominance of OpenAI, Microsoft, Google, Nvidia, and Amazon, all of which take proprietary approaches to AI solutions. Ironically, despite its name, even OpenAI's omnipresent ChatGPT system is closed.

This IBM-led alliance includes several prominent companies in the IT industry, many of which aspire to be perceived as top players in the rapidly growing AI and Generative AI (GenAI) market. The Alliance could allow these technology companies to strengthen their leadership positions in their respective categories. The big questions are whether some current leaders will join this Alliance, what's in it for them, and whether companies that are part of this Alliance will see value accrue.

Analysis and Point-of-View

Deploying AI and GenAI requires a collective effort across the technology ecosystem. No one vendor offers a complete, robust, and scalable technology stack—hardware, middleware, software, services—but the secret sauce is to partner with the right vendors to fill the gaps.

To analyze this AI ecosystem, Cabot Partners (“Cabot”) created an AI Blueprint in 2017 (Figure 2) and has been working on and improving this Blueprint since then.

This AI Blueprint assesses key ecosystem participants with in-depth analysis for every major component of each layer in the Blueprint: consulting, applications, APIs, software frameworks, algorithms, libraries, development tools, data infrastructure, operating systems, systems, storage, networks, and processors. The cloud is expected to be a significant delivery vehicle for AI. So, cloud providers are a critical part of this Blueprint.

Figure 2: Artificial Intelligence Blueprint by Technology Category

Cabot analysts and AI technical experts also identified key companies in each category (Figure 3), with the leader in each category listed at the top and the other firms in that category in descending order. The public companies in this Alliance (Red Hat is part of IBM) are highlighted in a golden box.

Figure 3: AI Companies Organized by Technology Category (Higher Position is Better)

Then, our team scored each publicly traded solution provider by category. Our criteria for evaluating each vendor considered both technology and market factors. Broadly, they include:

  • Technology for AI and High-Performance Computing (HPC): AI and HPC are tied in many ways, as they both involve processing large amounts of data and performing complex computations. AI can benefit from HPC’s parallel processing, scalability, and speed, while HPC can benefit from AI’s automation, intelligence, and optimization. AI development goes hand in hand with HPC advances. HPC can better support AI model training than traditional systems can. For instance, HPC clusters can run deep neural networks with millions of parameters and handle massive multi-dimensional data sets. AI can also help enhance the security and reliability of HPC systems by detecting and preventing cyberattacks.

From an HPC viewpoint, HPE is the leader in the systems category, especially since its Cray and Silicon Graphics acquisitions. Likewise, from a CPU perspective, AMD is the leader based on performance and current momentum, and Nvidia is the undisputed leader in graphics processing units (GPUs). Amazon, Microsoft, and Google have significant HPC capabilities being leveraged for AI.

  • Market Leadership: This includes the position of the company or product with the highest sales or market share in a given industry or market. Market leaders can also be the first to introduce new products or services (e.g., OpenAI), set the standards for quality and innovation, and influence the direction and trends of the market. Market leaders enjoy many advantages, such as brand loyalty, economies of scale, pricing power, and profitability.

If a company (e.g., IBM, Microsoft, or Google) has offerings in several categories, we took the average score across those categories. Figure 4 depicts this score for the public companies mentioned earlier.
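
As a simple illustration of that averaging step, a sketch like the following (with purely hypothetical category scores, not Cabot's actual data) shows the calculation:

```python
# Hypothetical category scores for a vendor with offerings in three categories.
# The averaging mirrors the methodology described above; the numbers are invented.
scores = {"software_frameworks": 8.5, "systems": 7.0, "cloud": 9.0}
average_leadership_score = sum(scores.values()) / len(scores)
print(f"Average leadership score: {average_leadership_score:.2f}")  # 8.17
```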

Figure 4: Average Leadership Score for Public Companies

Next, we looked at each vendor's year-over-year stock performance (Figure 5) as of December 4, 2023. This is one measure of a company's recent value accrual.

Figure 5: 52-Week Percentage Change in Stock Price

Interestingly, there is some correlation between a company's leadership score and its 52-week stock performance. For instance, NVIDIA and Meta have high leadership scores and large value accrual. A company's stock growth is a function of many variables; however, AI leadership could contribute significantly to a company's value. The big questions are, "Will this Alliance drive future value for its members?" and "Will this Alliance grow and thrive?" It is too early to tell.

Some Concluding Thoughts

While this Alliance includes many academic/research institutions and prominent IT companies, it does not include several current AI leaders, such as Nvidia, Microsoft, Google, HPE, and Amazon. It is unclear whether these AI leaders have significant incentives to join; they are already creating their own strategic partnerships. For example, Nvidia CEO Jensen Huang recently shared the stage with the CEOs of Amazon, Google Cloud, and Microsoft at their respective annual events to announce strategic AI initiatives.

Sustaining and growing alliances, especially large ones, is hard. Often, it requires small, careful, deliberate steps. Many partnerships fail for various reasons: poor communication, a lack of common goals and shared values, profit-driven motives, a lack of motivation and drive to succeed, different risk appetites, and an inability to depend on each other as partners. The IT industry is rife with failed alliances.

However, when alliances succeed (and there are many in the IT industry), they can be a significant value multiplier for the companies that participate. We wish this Alliance great success. In its Research divisions and its Consulting group, IBM has extraordinary AI capabilities to make this Alliance work and to help clients in their AI journeys, even when those clients use technologies from companies outside this Alliance.

For example, as a small step, IBM Consulting could partner with Nvidia to help enterprises deploy and scale GenAI. Companies such as Deloitte, HPE, Lenovo, and others are already doing this. This could help attract Nvidia to this Alliance. When leaders partner with leaders, the customer wins!

How Infinidat’s Recent Announcement Can Enhance and Accelerate Your ROI

M.R. Pamidi, Ph.D. | Srini Chari, Ph.D.

Cabot Partners

 

Nowadays, everyone is talking about Generative AI and its promise to fundamentally transform many industries, from Retail to Healthcare to Manufacturing to Financial Services. Yet the critical role of IT infrastructure deserves more discussion in the industry, especially the less glamorous part of the IT stack: Data and Storage.

 

As organizations gather, process, and store larger datasets from all sources, such as sensors, instruments, log files, and so on, their workloads are becoming more compute- and data-intensive. Traditional High-Performance Computing (HPC) is converging with AI (Figure 1), including Generative AI. This convergence places similarly extreme management, performance, and scale demands on IT infrastructures (particularly Storage), to the point where rapidly growing Generative AI workloads need an HPC infrastructure.

Figure 1: The Convergence of HPC with AI

By using proven HPC storage software and systems for AI deployments, organizations can reduce data costs, consolidate compute and storage silos, simplify system administration, improve efficiency, and more. But they must also ensure security, affordability, performance, scalability, compliance, and the flexibility to manage service level agreements (SLAs) and to support different configurations on-premises and in multi-cloud environments. Infinidat holds incredible promise to deliver significant ROI to its clients in these situations.

 

About Infinidat: Delivering Innovative High-Value Storage Solutions

We recently met Eric Herzog, Infinidat's CMO, at the Flash Memory Summit in California. Founded in 2011, Infinidat is an enterprise storage company headquartered in Waltham, MA, and Herzliya, Israel. In addition to its Series A investment, it has received a Series B investment of $150 million from TPG and a Series C investment of $95 million from Goldman Sachs – for a total of $325 million to date. The company claims over 25% of the Fortune 50 as customers, over 140 patents filed to date, and a 100% availability guarantee for its storage products. It has a global presence, with offices and enterprise customers in multiple countries, enhancing its reach and market share. Infinidat's management team of storage-industry veterans has a proven track record of success. The team is committed to developing innovative solutions that meet the needs of its customers, with regular announcements that help clients drive innovation and value.

 

Most Recent Infinidat Announcements Continue to Enhance Innovation and Value

On September 19, 2023, the company announced impressive enhancements to its already-strong product line, adding SSA Express SDS (Software-Defined Storage) and InfiniBox™ SSA II (Figure 2).

Figure 2: Latest Impressive Infinidat Product Announcements

Traditionally, customers looking to expand their storage infrastructure are often forced into forklift upgrades, which are both expensive and unwieldy. Infinidat's SSA Express SDS avoids these expensive upgrades by offering up to 320 TB of usable all-flash capacity and supporting over 95% of existing InfiniBox® systems; it can also be configured into new InfiniBox purchases. The solution expands an InfiniBox hybrid array by allowing customers to leverage the InfiniBox flash layer for performance-sensitive workloads – essentially akin to embedding an all-flash array inside the InfiniBox. The non-disruptive, free InfuzeOS 7.3 software upgrade for existing InfiniBox systems has zero downtime, reducing CapEx and OpEx with no new array-management interface and no need for a second vendor. SSA Express SDS also supports InfiniSafe® Cyber Detection software with a scanning option and storage-recovery guarantees.

 

InfiniBox™ SSA II offers twice the storage capacity in the same data center footprint, supporting up to 6.635 PB of storage with about 50% less power (Watts) per effective TB, plus reduced rack space, floor tiles, and power and cooling costs. Increased capacity and performance at scale help consolidate more all-flash and legacy hybrid arrays, resulting in lower storage-management costs. The InfiniBox SSA II carries several guaranteed SLAs: the InfiniSafe Cyber Resilience and Recovery guarantee, a performance guarantee, and a 100% availability guarantee. The product also scales up, adding 60%- and 80%-populated configurations to the existing 100%-populated option, so customers can start at a low entry point and scale up as needed.

 

With InfuzeOS with Neural Cache, InfiniSafe, InfiniOps, InfiniVerse, comprehensive enterprise data services, InfuzeOS Cloud Edition, autonomous automation, guaranteed performance and availability, and InfiniSafe Cyber Resilience and Recovery, customers can manage their storage IT operations on a "Set It and Forget It" model.

 

Infinidat has a growing customer base that includes some of the world's largest and most demanding organizations. The company's customers appreciate InfiniBox's performance, scalability, and reliability, as well as Infinidat's commitment to customer support. We believe this announcement will further strengthen its customer base. These enhancements should increase the total value of ownership (TVO) and ROI while lowering the total cost of ownership (TCO).

Using a TVO Framework to Quantify How Infinidat’s Recent Announcements Can Improve ROI

The TVO framework (Figure 3) categorizes the interrelated cost/value drivers (circles) for Storage solutions by each quadrant: Costs, Productivity/Quality, Revenue/Profits, and Risks. Along the horizontal axis, the drivers are arranged based on whether they are primarily Technology or Business drivers. Along the vertical axis, drivers are organized based on ease of measurability: Direct or Derived.

Figure 3: The TVO Framework for Storage with Cost and Value Drivers

Each cost/value driver for Storage solutions is shown as a circle whose size is proportional to its potential impact on a client's Total Value of Ownership (TVO), defined as Benefits minus Costs. The drivers are as follows:

 

  1. Total Costs of Ownership (TCO): Typical costs include one-time acquisition costs for the hardware and deployment, plus annual charges for software, maintenance, and operations. As described earlier, the latest Infinidat products can lower the TCO.
  2. Improved Productivity: The TVO model quantifies the value of productivity gains for administrators, end-users, and the organization. Productivity gains can be significant with the latest Infinidat solutions.
  3. Revenue and Profits: Faster time to value and more innovation capabilities for clients spur revenues and improve profits.
  4. Risk Mitigation: A streamlined process and improved governance/security lower system downtime and minimize cumbersome rework iterations.

 

The TVO and ROI will typically increase with Storage solution size, giving clients even better ROI and payback in the future as they deploy GenAI solutions that require them to manage petascale Storage. It will benefit both Infinidat and its clients to quantify and monitor the ROI of Infinidat Storage solutions as they continue scaling up their infrastructure to handle their most compute- and data-intensive workloads.
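
To make the framework's arithmetic concrete, here is a minimal sketch – with purely hypothetical numbers, not Infinidat or client data – of a TVO/ROI calculation across the four quadrants:

```python
# Minimal TVO/ROI sketch with purely hypothetical numbers, following the
# framework's definition above: TVO = Benefits - Costs.

tco = 1_200_000  # hypothetical 3-year TCO: acquisition + software + maintenance + ops

benefits = {
    "improved_productivity": 900_000,  # admin/end-user time saved (hypothetical)
    "revenue_and_profits": 750_000,    # faster time to value (hypothetical)
    "risk_mitigation": 400_000,        # avoided downtime and rework (hypothetical)
}

total_benefits = sum(benefits.values())
tvo = total_benefits - tco                # Total Value of Ownership
roi = tvo / tco                           # return relative to total cost
payback_years = 3 * tco / total_benefits  # simple payback, assuming even accrual

print(f"TVO: ${tvo:,}  ROI: {roi:.0%}  Payback: {payback_years:.1f} years")
```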

 

Cabot Partners' View on How Google Cloud Embraces the Duality of Generative AI and Enterprise AI

by

Jean S. Bozman, President, Cloud Architects Advisors LLC

and

M.R. Pamidi, Ph.D., Principal Analyst, Cabot Partners

Google Cloud Next '23 was a three-day conference in San Francisco, held August 29-31, 2023. The event brought together cloud computing professionals from around the world to learn about the latest Google Cloud Platform (GCP) innovations. Google Cloud is reinventing itself as it engages more closely with enterprise businesses that plan to reinvent their business models with AI. Many consider Generative AI a user-oriented tool for efficient searches; however, the enterprise focus of GCP was apparent from the start of the three-day event.

 

Google Cloud CEO Thomas Kurian opened the keynote with significant announcements that may help the tech giant keep up with its peers—Amazon and Microsoft—in the still-evolving cloud market. Generative AI was the central theme at the conference, as AI powers many of the latest advancements and features. Google continues to face acute pressures to increase its AI offerings as competition from its rivals heats up.


Figure 1: Google Cloud Next 2023 Key Announcements

 

Some of the key announcements (Figure 1) from the event included:

  • General availability of Duet AI, a new natural language processing (NLP) technology that can help businesses generate more creative and engaging content.
  • Preview of Vertex AI Workbench, a new integrated development environment (IDE) for building and deploying machine learning (ML) models.
  • Launch of Cloud TPU v5e, a new machine learning accelerator that can train models up to three times faster than previous generations.
  • Availability of A3 Virtual Machines (VMs) based on NVIDIA's H100 GPUs, offering up to 80% better performance for ML workloads than previous generations.
  • Introduction of Cross-Cloud Network, a new networking platform that makes connecting and securing applications across multiple clouds easier.
  • Expansion of the Vertex AI ecosystem through new partnerships with DocuSign, Box, Canva, and Salesforce.

 

The event showcased Google’s commitment to innovation in the cloud computing space. It highlighted the company’s latest technologies for building, deploying, and managing applications and its growing ecosystem of partners.

 

In addition to the keynote presentations and breakout sessions, the event also featured several hands-on labs and workshops. These sessions allowed attendees to learn about GCP technologies and try them out for themselves.

 

Google emphasized the duality of personal AI and enterprise AI throughout the three-day conference, along with customer examples from some of GCP's largest customers, including General Motors, Mahindra Motors, and US Steel. These customers run end-to-end enterprise applications for mission-critical workloads on Google Cloud infrastructure – in global communications (GM's OnStar), retail sales (Mahindra), and manufacturing (US Steel). The cloud company currently has a $32 billion annual run rate, the largest in its history, enabled in part by enterprise AI and the capabilities of the Google Cloud infrastructure.[1]

 

Kurian, who took the helm at Google Cloud in 2019, has deep experience with enterprise infrastructure; he spent 1996 to 2018 at Oracle Corp., most recently as its President, reporting to Oracle CEO Larry Ellison and, later, to CEOs Mark Hurd and Safra Catz, and engaging with Oracle's enterprise customers worldwide.

 

Personal GenAI and Enterprise-Wide GenAI

"We're announcing important new products to help each of you use generative AI," said Google Cloud CEO Thomas Kurian. "First, to help the world's leading customers and developers to build AI with our AI-optimized infrastructure and our AI platform for developers, Vertex AI." "Generative AI (Figure 2) will likely spark greater customer requirements for performance, scalability, and reliability," Kurian added, relating user-based AI usage to enterprise-wide AI for corporate applications. The new Duet AI product is positioned as an assistant and collaborative software for end-users.

Figure 2: Google’s Generative AI Stack

 

Google Cloud's generative AI product for business users, Duet AI, is positioned as a collaborative assistant, while Vertex AI targets developers – and end-to-end enterprise AI benefits from Google Cloud's scalability, performance, and reliability, supporting millions of users across 38 geographic regions worldwide. That network of Google Cloud operations reaches the Americas (North America and South America), Europe, Asia/Pacific, and Australia – and will likely continue to expand as GCP competes with AWS and Azure for more business across all continents.

 


Kurian expects that broader use of generative AI will lead to faster cloud adoption for transactional and corporate applications that have not yet moved to the cloud.

 

Google and AI

Google has long been known as an inventor and developer of new AI technologies. Now, following the late-2022 release of ChatGPT from Microsoft-backed OpenAI, Google Cloud is showing its Vertex AI, Duet AI, and GCP AI infrastructure as a competitive offer to enterprises. In many cases, those enterprises are working to update aging on-premises applications – often with reduced IT staffing as they consolidate their corporate data centers – and to speed the adoption of AI-based technology throughout their business operations (Core to Cloud to Edge) to achieve better business results.

 

AI will “touch every sector, every industry, every business function and significantly change the way we live and work,” said Alphabet and Google CEO Sundar Pichai, pointing to Google Cloud’s enterprise focus from the start of the three-day conference in San Francisco.

 

Business executives, Pichai said, “want a partner that’s been on the cutting edge of technology breakthroughs—be it from the desktop to mobile, to the cloud, or now to AI—and a partner who can help navigate and lead them” through the next phase of digital transformation driven by AI.

 

GCP Investment and Adoption

Google and Google Cloud, Pichai said, have invested heavily “in the tooling, foundation models and infrastructure” to make that happen, starting with specialized TPU and GPU processors and continuing with software that can run customers’ containers and virtual machines on the GCP.

 

Pichai cited three customers who are leveraging Google's GenAI software, which was introduced in March: General Motors (GM), which applied conversational AI to its OnStar-connected vehicles; HCA Healthcare, which is working with Google to create Med-PaLM foundation models for scalable AI; and US Steel, which is using Google AI to summarize and visualize instruction manuals.

 

Below is a detailed look at everything Google announced at its event.

 

Vertex AI Gets Better

Figure 3: Vertex AI Architecture

 

About two years ago, Google unveiled Vertex AI, a unified artificial intelligence platform that brings Google's cloud ML services together under one roof. With Vertex AI, developers can build ML models or deploy and scale them easily using pre-trained and custom tooling. It was followed by Vertex AI Vision, Google Cloud's machine-learning platform-as-a-service (ML PaaS) offering. Since the general availability of generative AI services based on Vertex AI earlier this year, developers have been able to use several new tools and models, such as the text-completion model driven by PaLM 2, the Embeddings API for text, and other foundation models. Google is adding Meta's Llama 2 and Technology Innovation Institute's (TII) royalty-free Falcon 40B, the UAE's leading large-scale open-source AI model. This makes Google Cloud the only cloud provider to support first-party, open-source, and third-party models. APIs for these models will be accessible through a new tool in its Cloud Platform named Model Garden.
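
As a brief illustration of the developer experience described above, here is a minimal sketch that calls a PaLM 2-based text model through the Vertex AI Python SDK; the project ID and prompt are hypothetical, and exact model names may differ by release:

```python
# Minimal sketch: text generation via the Vertex AI Python SDK
# (google-cloud-aiplatform). Project ID and prompt are hypothetical.
import vertexai
from vertexai.language_models import TextGenerationModel

vertexai.init(project="my-gcp-project", location="us-central1")

model = TextGenerationModel.from_pretrained("text-bison@001")
response = model.predict(
    "Summarize the benefits of running ML workloads on a managed AI platform.",
    max_output_tokens=128,
)
print(response.text)
```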

 

Duet AI For Google Workspace

At I/O 2023, Google announced "Duet AI" as the branding for generative AI features in Workspace. At that time, its availability was limited to trusted testers by invitation only. Now, however, Google has made Duet AI for Google Workspace available to all users with a no-cost trial; Google will charge $30 per user per month for access to Duet, according to CNBC.

 

Duet AI spans two fronts: it is an AI-powered development interface that includes code and chat assistance for developers on Google Cloud's platform, and it covers a range of generative AI tools for Google's productivity apps, including Gmail, Drive, Slides, Docs, and more. Google is essentially taking on Microsoft's Copilot.

 

Google wants to make Gmail, Docs, Sheets, Slides, and Meet more helpful by using generative AI.

At I/O, Gmail got a new feature called "Help me write," allowing people to use generative AI to send auto-replies to emails and modify them to best meet their needs. Meanwhile, Sheets has a "Help me organize" feature, where users can ask for information to be organized for them in the sheet through a simple word prompt. Similarly, Google Slides has a new "Help me visualize" entry, where users can use prompts to get AI-generated images.

 

Google also announced further Workspace AI integration in Google's other core apps, such as Meet and Chat. Within Meet, Google's new AI features include real-time note-taking: when users click "take notes for me," the app captures a summary and action items as the meeting goes on. Google can also show a mid-meeting summary, so that late joiners can catch up on what has happened.

 

Another new Meet feature allows Duet to "attend" a meeting on a user's behalf. Users click the "attend for me" button on a meeting invite, and Google auto-generates text about the topics they might want discussed. Those notes are viewable by attendees during the meeting, so they can discuss them in real time.

 

Other new features include dynamic tiles and face detection, which give users their own video tile, labeled with their name, in a meeting room. An automatic translated-captions feature will detect when another language is spoken and display the translation on-screen, supporting up to 18 of the world's languages.

 

Lastly, Google is integrating Duet AI into Google Chat. Now, users can chat directly with Duet AI to ask questions about their content, get a summary of documents in a space, and catch up on missed conversations. A new interface and a new shortcut option make Google Chat easier to use, and Google is integrating "smart canvas" capabilities, such as smart chips, inside Google Chat.

Google Chat now lets users add up to 50,000 members to a space – a significant increase from the previous limit of 8,000 members. In addition, a new feature called Huddles is coming to Google Chat: instead of jumping out of the conversation into a meeting, the meeting is integrated directly into the chat experience. Google says Huddles will be available in public preview by the end of the year. In the coming weeks, Google will also add support for third-party apps in Chat, including products from Zoho, Workday, and Loom.

 

Gen AI Unicorns and Google Cloud

During the event, Google said 70 percent of Generative AI unicorns – and more than half of all funded AI startups – are Google Cloud customers. This year's Google Cloud Next '23 summit illustrated how Google offers AI-optimized infrastructure to host and run AI models. That is intended to be a selling point for business decision-makers who have hesitated to migrate mission-critical applications to cloud service providers (CSPs).

 

Our Analysis

When the pandemic began, the pace of cloud migration accelerated. Still, some applications never moved from corporate data centers, because IT organizations worried that cloud service providers could not meet legacy applications' security, availability, and governance requirements.

 

The most widely used CSPs, including AWS, Microsoft Azure, and Google Cloud Platform, benefited from those cloud migrations. Even so, many more applications have not yet moved to the cloud: IDC has reported that many enterprise customers subscribe to two, three, or more cloud providers, yet nearly half of all applications remain in corporate data centers.

 

That's why Google Cloud executives and leading customers spoke about Google's heavy investments in infrastructure in recent years. "Our ultra-scale, highly reliable, AI supercomputing systems combine TPU and GPU accelerators with high-performance AI-optimized storage, scalable networking, offloads, and water-cooling," Kurian said. The firm's compilers and software tools optimize AI models "to deliver the best performance, latency, and overall throughput."

 

Kurian cited those customers – and more – including Yahoo!, which is migrating 500 million mailboxes, with nearly 500 petabytes of data, to run on Google Cloud. He specifically cited Google Cloud infrastructure as a differentiator, given its development of Google Kubernetes Engine (GKE), its user-focused AI products – Vertex AI for developers and Duet AI for users – and its optimized cloud infrastructure hardware for end-to-end management of AI resources. Beyond that, Google Cloud is building an AI ecosystem to engage with enterprise customers. Now, Google Cloud must reach out to more business decision-makers – including CXOs and finance executives – to convince them that now is the time to move the next wave of business workloads to the cloud, using Google Cloud services to do so.

[1] "Google Cloud Begins Profitability Era: 5 Huge Q2 Earnings Takeaways," CRN Cloud News, July 26, 2023.