IT Infrastructure Predictions for 2024

Published by: Srini Chari

Jean S. Bozman, President, Cloud Architects Advisors LLC
Srini Chari, Ph.D., MBA and M. R. Pamidi, Ph.D., Cabot Partners

This year’s Cabot Partners’ Predictions forecast a more varied and heterogeneous computing environment in 2024 to address the incredible growth in AI and Generative AI (GenAI).

AI will be a game-changer that will cause business and IT managers to “think differently” about how they will provide system infrastructure to new AI-enabled applications. Equally important is how AI is causing business units and IT organizations to reconsider, reconstruct, and redeploy workloads into a mixed environment of on-premises systems and off-premises cloud services.

The chief characteristics of this evolving computing/storage environment are:

  • AI’s explosive growth will continue in 2024 with advancements in AI-powered automation, personalization, and predictive analytics, often overshadowed by the sheer excitement and hype surrounding GenAI.
  • Large Language Model (LLM) software will continue to grow in 2024. ChatGPT, introduced by OpenAI in November 2022, will face increasing competition from Google's Gemini, Meta's Llama, and models from Anthropic, Cohere, and others. At the same time, Microsoft, which partnered with OpenAI, has made it clear that ChatGPT-based technology will be part of many Microsoft software products as a copilot for key functionality. We expect that GenAI, as a software category, will not replace writers and graphic designers; rather, they will use it to speed up their creative process through faster content development. AI software will continue to grow and be used for a wide range of purposes (e.g., software development, application functionality, and data management) throughout the IT infrastructure.
  • Software-defined infrastructure will play a dominant role in managing hybrid clouds and multi-cloud deployments. Powerful management software will oversee the increasingly complex infrastructure – spanning the Core, the Cloud, and the Edge – ensuring that end-to-end security is in place, protecting data across a corporation, government agency, or organization. Importantly, AI and GenAI will bring senior business management and IT management closer together so that new systems are not “silos” – but rather part of a more comprehensive end-to-end design.
  • The worlds of on-premises computing and Cloud Computing will merge in 2024. Cloud providers paved the way for distributed computing, linking dense computing hubs. This approach to scalable, distributed IT will now spread to a wider variety of customer sites, especially those hosting on-prem-only compute resources for security and compliance reasons.
  • Heterogeneous, mixed-chip systems will gain ground. In 2024, more customers will leverage mixed-processing platforms for their public and private cloud deployments. Workloads will gravitate to the CPUs, DPUs, GPUs, IPUs, NPUs, and TPUs that run the applications best – so customers won't select any single platform. Instead, they will adopt multiple platforms, each selected to run specific job types.
  • Faster interconnects will be a "must" in this evolving world of AI and multi-cloud computing. This world will include new "connectors" – including optical interconnects and the Ethernet-plus interconnects now under discussion in open-standards industry consortia. Without faster interconnects, multi-cloud computing would be too slow to support the rapid data transfers needed for end-to-end delivery of data-based services across the entire enterprise.
  • Chiplets will be key to the new multi-vendor infrastructure. A chiplet is a small integrated circuit containing a well-defined subset of functionality, designed to be combined with other chiplets in the same package. The move to chiplets is gaining steam and will accelerate in 2024. Chiplets will link function-specific compute tasks, and we expect their use to grow in vendors' products, supported by new packaging and foundry processes at semiconductor companies and factories (e.g., AMD, Arm, Intel, NVIDIA, and TSMC). More focus will be on leveraging chiplets, which support a more flexible infrastructure. One often-overlooked benefit is that adding new system features via chiplets can help reduce yield problems for chip manufacturers and foundries, because several small dies yield better than one very large integrated circuit.
  • Data, data, data – a key foundation for the AI world. Data placement, integrity, and optimization will be a focus for multi-cloud systems, guided by customers' needs to support AI, GenAI, and HPC. Tapping reservoirs of data across modular platforms will help reduce power/cooling requirements for workloads that have traditionally run on monolithic or standalone systems.
  • Data privacy and security breaches will continue to cause havoc in all business sectors as cyberattacks and ransomware attacks become more sophisticated. Preventive measures and countermeasures are being adopted in the private and public sectors – but they did not prevent the business disruptions and financial damage caused by the attacks of 2023. To counter these threats, we expect trends such as AI-powered threat detection and prevention, an increased focus on data privacy and compliance, and zero-trust security models to grow in 2024.
  • Liquid Cooling, a familiar feature in mainframes and Cray supercomputers, is being reborn for the AI age. Providers of systems and racks are using dielectric fluids rather than water, which can damage electronic circuits over time. We believe a new generation of liquid-cooling designs will increasingly be part of customer and Cloud Service Provider (CSP) data centers. Liquid cooling will not be a universal "fix," so it will appear first for the densest computing and storage infrastructure. As we saw in 2023 introductions by large systems vendors (e.g., Dell Technologies, Lenovo, HPE, and IBM), liquid-cooled racks will be deployed alongside air-cooled racks, given the multiple types of workloads customers will be running. Major oil companies, including BP and ExxonMobil, have jumped on the liquid-cooling bandwagon by developing and marketing their own liquid-cooling fluids for use in liquid-cooled racks.
  • Co-opetition – long a feature of open systems – will be evident across the tech industry as a strategy to make multi-cloud computing “work” while supporting high-performance workloads (e.g., AI, Gen AI, and HPC). Unusual combinations will result, as has already been the case for Microsoft + Red Hat, Oracle + VMware, Intel + IBM, and the like. Enterprise customers like co-opetition because it prevents or reduces “vendor lock-in” that would otherwise reduce choice and increase prices for IT solutions.
  • Quantum Computing will steadily gain adoption in 2024. One driver for adoption will be ensuring that traditional security methods won't be compromised by someone else's quantum computers – even though wide adoption in enterprise organizations is not expected before 2030. It's important to note that the special physical environment that enables quantum computing (cryogenic cooling that is expensive to build and maintain) will lead to quantum-as-a-service – supported by fast connections to quantum-compute "clouds" that access supercooled quantum infrastructure.
  • Variations on the "as-a-service" theme will gain ground as vendors offer AI as a Service, Data as a Service, Quantum as a Service, and Storage as a Service (StaaS). Given the rapid introduction of new hardware and hardware specifications (e.g., processors and interconnects), as-a-service offerings will ease update pressures on on-prem enterprise data centers by lowering the CapEx required for system acquisition and maintenance.

Business customers are eager to update their blended enterprise data centers’ IT environment – adding new technology to modernize existing systems and infusing all systems with distributed software-defined cloud computing. Older systems will become more cloud-like by supporting software-defined infrastructure that allows them to scale up applications and capacity across IT resources. So, private clouds may reside on-premises, but they will undoubtedly coordinate workloads with public cloud services – with increased cloud deployments over a five-year IT time horizon.

As a result, IT silos will likely decrease in 2024, while multi-cloud deployments will expand in many enterprise organizations. Of course, these technology changes must be virtually "transparent" to business users and business units. Accordingly, 2024 will see business units and IT organizations develop faster, better "human" communications, especially around GenAI capabilities that, quite literally, are opening access to corporations' mission-critical data stores. Customers' existing data resources, including data lakehouses, data warehouses, and centralized databases, will no longer be sufficient on their own to coordinate GenAI data access across entire businesses and governmental organizations.

A “re-think” of data security in the AI age will be a high priority in 2024. Security threats will remain a top concern for business and IT executives. In most enterprises, traditional DevOps has morphed into DevSecOps because IT Security staffers have realized that security must be baked into the infrastructure. (By analogy, eggs are baked into a cake as an integral ingredient, while the cake’s icing may be added as a final afterthought). Integrating security into multi-tier deployments will avoid the kind of disruptions in everyday IT operations that make news headlines and slow down mission-critical business processes – impacting company revenues and profits.

Enabling this “blended” IT infrastructure makeover will require careful planning, coordination between business and IT, and enterprise-wide adoption of networking, storage, and security standards and policies. In short, getting more out of IT in 2024 will require an intensive upfront planning process and frequent communications between IT organizations and the business units that IT supports. Otherwise, the resulting infrastructure would be inefficient, producing uneven and inaccurate business results that could harm the entire enterprise.

Why embark on this journey to leverage AI, modernize infrastructure, and migrate enterprise applications to multi-cloud environments? Because the ability to do so will support business efficiency, enable IT flexibility, and improve business agility – using AI and multi-cloud to produce better business results.

Combining both “styles” of computing – on-premises and cloud-provided infrastructure – will be prevalent worldwide in 2024. By modernizing existing IT infrastructure and leveraging cloud infrastructure, customers plan to contain their operational costs (OpEx) while limiting their capital expenditures (CapEx) for acquiring and maintaining systems.

As 2024 begins, businesses must select which corporate workloads will remain on-site to comply with corporate policies and governmental regulations – and which will migrate to CSPs. From now on, across the enterprise and in multi-cloud deployments, IT infrastructure changes must improve efficiency, flexibility, management/control, and scalability to support an ever-expanding set of customer use cases.
