How Infinidat’s Recent Announcement Can Enhance and Accelerate Your ROI

M.R. Pamidi, Ph.D. | Srini Chari, Ph.D.

Cabot Partners

Nowadays, everyone is talking about Generative AI and its promise to fundamentally transform many industries, from Retail to Healthcare to Manufacturing to Financial Services. Yet the critical role of IT infrastructure receives far less attention, especially the less glamorous part of the IT stack: data and storage.


As organizations gather, process, and store larger datasets from all sources, such as sensors, instruments, and log files, their workloads are becoming more compute- and data-intensive. Traditional High-Performance Computing (HPC) is converging with AI (Figure 1), including Generative AI. This convergence places similar extreme management, performance, and scale demands on IT infrastructures (particularly Storage), to the point where rapidly growing Generative AI workloads need an HPC infrastructure.

Figure 1: The Convergence of HPC with AI

By using proven HPC storage software and systems for AI deployments, organizations can reduce data costs, consolidate compute and storage silos, simplify system administration, improve efficiency, and more. But they must also ensure security, affordability, performance, scalability, compliance, and the flexibility to manage service level agreements (SLAs) and support different configurations on-premises and in multi-cloud environments. Infinidat is well positioned to deliver a significant ROI to its clients in these situations.


About Infinidat: Delivering Innovative High-Value Storage Solutions

We recently met Eric Herzog, Infinidat’s CMO, at the Flash Memory Summit in California. Founded in 2011, Infinidat is an enterprise storage company headquartered in Waltham, MA, and Herzliya, Israel. In addition to its Series A investment, it has received a Series B investment of $150 million from TPG and a Series C investment of $95 million from Goldman Sachs—for a total of $325 million to date. The company claims over 25% of the Fortune 50 as its customers, over 140 patents filed to date, and a 100% availability guarantee for its storage products. It has a global presence, with offices and enterprise customers in multiple countries, enhancing its reach and market share. Infinidat’s management team of storage industry veterans has a proven track record of success. The team is committed to developing innovative solutions that meet the needs of its customers, with regular announcements that help clients drive innovation and value.


Most Recent Infinidat Announcements Continue to Enhance Innovation and Value

On September 19, 2023, the company announced impressive enhancements to its already-strong product line by adding SSA Express SDS (Software-Defined Storage) and InfiniBox™ SSA (Figure 2).

Figure 2: Latest Impressive Infinidat Product Announcements

Traditionally, customers looking to expand their storage infrastructure are often forced into forklift upgrades, which are both expensive and unwieldy. Infinidat’s SSA Express SDS avoids these expensive upgrades by offering up to 320 TB of usable all-flash capacity and supporting over 95% of existing InfiniBox® systems; it can also be easily configured into new InfiniBox purchases. This solution expands an InfiniBox hybrid array by allowing customers to leverage the InfiniBox flash layer for performance-sensitive workloads, essentially embedding an all-flash array inside the InfiniBox. The non-disruptive, free InfuzeOS 7.3 software upgrade of existing InfiniBox systems requires zero downtime, reducing CapEx and OpEx with no new array management interface and no need for a second vendor. SSA Express SDS also supports InfiniSafe® Cyber Detection software with a scanning option and storage recovery guarantees.


InfiniBox™ SSA II offers twice the storage capacity in the same data center footprint, supporting up to 6.635 PB of storage with about 50% less power (watts) per effective TB, along with reduced rack space, floor tiles, and power and cooling costs. Increased capacity and performance at scale help consolidate more all-flash and legacy hybrid arrays, resulting in lower storage management costs. The InfiniBox SSA has several guaranteed SLAs: the InfiniSafe Cyber Resilience and Recovery guarantee, a performance guarantee, and a 100% availability guarantee. This product also provides scale-up storage systems, adding 60% and 80% partially populated configurations to the existing 100% populated option. Thus, customers can start at a low entry point and scale up as needed.


Using InfuzeOS with Neural Cache, InfiniSafe, InfiniOps, InfiniVerse, comprehensive enterprise data services, InfuzeOS Cloud Edition, autonomous automation, guaranteed performance and availability, and InfiniSafe Cyber Resilience and Recovery, customers can manage their storage IT operations on a “Set It and Forget It” model.


Infinidat has a growing customer base that includes some of the world’s largest and most demanding organizations. The company’s customers appreciate InfiniBox’s performance, scalability, and reliability, as well as Infinidat’s commitment to customer support. We believe this announcement will further strengthen its customer base. These enhancements will increase the total value of ownership (TVO) and ROI while lowering the total cost of ownership (TCO).

Using a TVO Framework to Quantify How Infinidat’s Recent Announcements Can Improve ROI

The TVO framework (Figure 3) categorizes the interrelated cost/value drivers (circles) for Storage solutions by each quadrant: Costs, Productivity/Quality, Revenue/Profits, and Risks. Along the horizontal axis, the drivers are arranged based on whether they are primarily Technology or Business drivers. Along the vertical axis, drivers are organized based on ease of measurability: Direct or Derived.

Figure 3: The TVO Framework for Storage with Cost and Value Drivers

The cost/value drivers for Storage solutions (each a circle whose size is proportional to its potential impact on a client’s Total Value of Ownership, or TVO, defined as Benefits – Costs) are as follows:


  1. Total Costs of Ownership (TCO): Typical costs include one-time acquisition costs for the hardware and deployment and annual charges for software, maintenance, and operations. As described earlier, the latest Infinidat products can lower the TCO.
  2. Improved Productivity: The TVO model quantifies the value of productivity gains of administrators, end-users, and the organization. Productivity gains can be significant with the latest Infinidat solutions.
  3. Revenue and Profits: Faster time to value and more innovation capabilities for clients spur revenues and improve profits.
  4. Risk Mitigation: A streamlined process and improved governance/security lower system downtime and minimize rework.
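To illustrate how these four quadrants combine, the TVO and ROI arithmetic can be sketched as below. All figures are hypothetical examples, not Infinidat or client data:

```python
# Illustrative TVO/ROI sketch with hypothetical numbers -- not actual
# Infinidat or client figures.

def tvo_roi(costs, productivity, revenue_profit, risk_mitigation):
    """Total Value of Ownership = total benefits - total costs;
    ROI = TVO / total costs."""
    benefits = productivity + revenue_profit + risk_mitigation
    tvo = benefits - costs
    roi = tvo / costs
    return tvo, roi

# Hypothetical 3-year figures (in $K) for the four quadrants:
tco = 500            # acquisition + software + maintenance + operations
productivity = 400   # admin, end-user, and organizational gains
revenue = 350        # faster time to value, new revenue
risk = 150           # avoided downtime and rework

tvo, roi = tvo_roi(tco, productivity, revenue, risk)
print(f"TVO = ${tvo}K, ROI = {roi:.0%}")
```

A real engagement would estimate each quadrant from measured client data; the structure of the calculation, however, stays the same.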


The TVO and ROI will typically increase with Storage solution size, giving clients even better ROI and Payback in the future as they deploy Gen AI solutions that require them to manage petascale storage. It will benefit Infinidat and its clients to quantify and monitor the ROI of Infinidat Storage solutions as they continue scaling up their infrastructure to handle their most compute- and data-intensive workloads.



Icons by Flaticon

Google Cloud Embraces Duality of Generational AI and Enterprise AI

Jean S. Bozman, President, Cloud Architects Advisors LLC
M. R. Pamidi, Ph. D., Principal Analyst, Cabot Partners

Google Cloud Next ’23 was a three-day conference held in San Francisco from August 29-31, 2023. The event brought together cloud computing professionals from around the world to learn about the latest Google Cloud Platform (GCP) innovations. Google Cloud is reinventing itself as it engages more closely with enterprise businesses that plan to reinvent their business models with AI. Many consider Generative AI a user-oriented tool for efficient searches. However, the enterprise focus of GCP was apparent from the start of the three-day event.

Google Cloud CEO Thomas Kurian opened the keynote with significant announcements that may help the tech giant keep up with its peers—Amazon and Microsoft—in the still-evolving cloud market. Google Cloud reported $7.4 billion in cloud service revenues for 1Q23. Generative AI was the central theme of the conference, as AI powers many of the latest advancements and features. Google continues to face acute pressure to expand its AI offerings as competition from its rivals heats up.

Figure 1: Google Cloud Next 2023 Key Announcements

Some of the key announcements (Figure 1) from the event included:

  • General availability of Duet AI, a new natural language processing (NLP) technology that can help businesses generate more creative and engaging content.
  • Preview of Vertex AI Workbench, a new integrated development environment (IDE) for building and deploying machine learning (ML) models.
  • Launch of Cloud TPU v5e, a new machine learning accelerator that can train models up to three times faster than previous generations.
  • Availability of A3 Virtual Machines (VMs) based on NVIDIA’s H100 GPUs, which offer up to 80% better performance for ML workloads than previous generations.
  • Introduction of Cross-Cloud Network, a new networking platform that makes connecting and securing applications across multiple clouds easier.
  • Expansion of the Vertex AI ecosystem with new partnerships with DocuSign, Box, Canva, and Salesforce.

The event showcased Google’s commitment to innovation in the cloud computing space. It highlighted the company’s latest technologies for building, deploying, and managing applications and its growing ecosystem of partners.

In addition to the keynote presentations and breakout sessions, the event also featured several hands-on labs and workshops. These sessions allowed attendees to learn about GCP technologies and try them out for themselves.

Google emphasized the duality of personal AI and enterprise AI throughout the three-day conference, along with customer examples from some of GCP’s largest customers, including General Motors, Mahindra Motors, and US Steel. These customers are running end-to-end enterprise applications for mission-critical workloads on Google Cloud infrastructure, in areas such as global communications (GM’s OnStar), retail sales (Mahindra), and manufacturing (US Steel). The cloud company currently has a $32 billion annual run rate, the largest in its history, enabled by enterprise AI and the capabilities of the Google Cloud infrastructure*.

Kurian, who took the helm at Google Cloud in 2019, has deep experience with enterprise infrastructure; he worked at Oracle Corp. from 1996 to 2018, rising to President, reporting to Oracle CEO Larry Ellison and, later, to CEOs Mark Hurd and Safra Catz, and engaging with Oracle’s enterprise customers worldwide.

Personal GenAI and Enterprise-Wide GenAI

“We’re announcing important new products to help each of you use generative AI,” said Google Cloud CEO Thomas Kurian, “First, to help the world’s leading customers and developers to build AI with our AI-optimized infrastructure and our AI platform for developers, Vertex AI.” “Generative AI (Figure 2) will likely spark greater customer requirements for performance, scalability, and reliability,” Kurian said, relating user-based AI usage to enterprise-wide AI for corporate applications. The new Duet AI product is positioned as an assistant and collaborative software for end-users.


Figure 2: Google’s Generative AI Stack

Google Cloud’s generative AI product for business users, Duet AI, is positioned as a collaborative assistant, while Vertex AI targets developers – and end-to-end enterprise AI benefits from Google Cloud’s scalability, performance, and reliability, supporting millions of users across 38 geographic regions worldwide. That network of Google Cloud operations reaches the Americas (North and South America), Europe, Asia/Pacific, and Australia – and will likely continue to expand as GCP competes with AWS and Azure for more business across all continents.


Kurian expects that broader use of generative AI will lead to faster cloud adoption for transactional and corporate applications that have not yet moved to the cloud.

Google and AI

Google has long been known as an inventor and developer of new AI technologies. Now, following the fall release of OpenAI’s ChatGPT (backed by Microsoft), Google Cloud is positioning its Vertex AI, Duet AI, and GCP AI infrastructure as a competitive offering for enterprises. In many cases, those enterprises are working to update aging on-premises applications – often with reduced IT staffing as they consolidate their corporate data centers – and to speed the adoption of AI-based technology throughout their business operations (Core to Cloud to Edge) to achieve better business results.

AI will “touch every sector, every industry, every business function and significantly change the way we live and work,” said Alphabet and Google CEO Sundar Pichai, pointing to Google Cloud’s enterprise focus from the start of the three-day conference in San Francisco.

Business executives, Pichai said, “want a partner that’s been on the cutting edge of technology breakthroughs—be it from the desktop to mobile, to the cloud, or now to AI—and a partner who can help navigate and lead them” through the next phase of digital transformation driven by AI.

GCP Investment and Adoption

Google and Google Cloud, Pichai said, have invested heavily “in the tooling, foundation models and infrastructure” to make that happen, starting with specialized TPU and GPU processors and continuing with software that can run customers’ containers and virtual machines on the GCP.

Pichai cited three customers who are leveraging Google’s GenAI software, which was introduced in March: General Motors (GM), which applied conversational AI to its OnStar-connected vehicles; HCA HealthCare, which is working with Google to create Med-PaLM foundation models for scalable AI; and US Steel, which is using Google AI to summarize and visualize instruction manuals.

Below is a detailed look at everything Google announced at its event.

Vertex AI Gets Better

Figure 3: Vertex AI Architecture

About two years ago, Google unveiled Vertex AI, a unified artificial intelligence platform that brings Google’s cloud ML services under one roof. With Vertex AI, you can build ML models or deploy and scale them easily using pre-trained and custom tooling. It was followed by Vertex AI Vision, the machine learning platform-as-a-service (ML PaaS) offering from Google Cloud. Since the general availability of generative AI services based on Vertex AI earlier this year, developers can use several new tools and models, such as the text completion model driven by PaLM 2, the Embeddings API for text, and other foundation models. Google is adding Meta’s Llama 2 and Technology Innovation Institute’s (TII) royalty-free Falcon 40B, the UAE’s leading large-scale open-source AI model. This enables Google Cloud to support first-party, open-source, and third-party models. The APIs of these models will be accessible through a new tool in its Cloud Platform named Model Garden.

Duet AI For Google Workspace

At I/O 2023, Google announced “Duet AI” as the branding for generative AI features in Workspace. At that time, its availability was limited to trusted testers by invitation only. Now, however, Google has made Duet AI for Google Workspace available to all users with a no-cost trial. Google will charge $30 per user per month for access to Duet, per CNBC.

Duet AI is an AI-powered development interface that includes code and chat assistance for developers on Google Cloud’s platform; it also covers a range of generative AI tools for Google’s productivity apps, including Gmail, Drive, Slides, Docs, and more. Google is essentially taking on Microsoft’s Copilot.

Google wants to make Gmail, Docs, Sheets, Slides, and Meet more helpful with the help of generative AI.
At I/O, Gmail got a new feature called “Help me write,” allowing people to use generative AI to send auto-replies to emails and modify them to meet their needs best. Meanwhile, Sheets has a “Help me organize” feature where users can ask for information to be organized for them in the sheet through a simple word prompt. Similarly, in Google Slides, there’s a new “Help me visualize” entry where users can use prompts to get AI-generated images.

Google also announced further Workspace AI integration in Google’s other core apps, such as Meet and Chat. Within Meet, Google’s new AI features include real-time note-taking: when users click “take notes for me,” the app will capture a summary and action items as the meeting goes on. Google will also be able to show users a mid-meeting summary so that they can catch up on what happened.

Another new Meet feature allows Duet to “attend” a meeting on your behalf. Users click on the “attend for me” button on a meeting invite, and Google can auto-generate some text about topics the users might want to discuss. Those notes will be viewable to attendees during the meeting so that they can discuss them in real time.

Other new features include dynamic tiles and face detection, giving each user in a meeting room their own video tile labeled with their name. A new automatic translated-captions feature will detect when another language is spoken and display the translation on-screen, supporting up to 18 of the world’s languages.

Lastly, Google is integrating Duet AI into Google Chat. Now, users can chat directly with Duet AI and ask questions about their content, get a summary of documents in a space, and catch up on missed conversations. It’s easier to use Google Chat because of the new interface and a new shortcut option. Google also integrates “smart canvas” capabilities, such as smart chips, inside Google Chat. Google Chat now lets users add up to 50,000 members to a space. The change marks a significant increase from the previous limit of 8,000 members. In addition, a new feature called Huddles is coming to Google Chat. With Huddles, instead of jumping out of the conversation into a meeting, the meeting is integrated directly into the chat experience. Google says Huddles will be available in public preview by the end of the year. In the coming weeks, Google will add support for third-party apps to Chat, including products from Zoho, Workday, and Loom.

Gen AI Unicorns and Google Cloud

During the event, Google said 70 percent of Generative AI unicorns—and more than half of all funded AI startups—are Google Cloud customers. This year’s Google Cloud Next ’23 summit illustrated how Google offers AI-optimized infrastructure to host and run AI models. That is intended to be a selling point for business decision-makers who have hesitated to migrate mission-critical applications to cloud service providers (CSPs).

Our Analysis

When the pandemic began, the pace of cloud migration accelerated. Still, some applications never moved from corporate data centers, as IT organizations worried that cloud service providers could not meet legacy applications’ security, availability, and governance requirements.

The most widely used Cloud Service Providers, including AWS, Microsoft Azure, and Google Cloud Platform, benefited from cloud migrations. Still, many more applications have not yet moved to the cloud. IDC has reported that many enterprise customers subscribe to two, three, or more cloud providers, yet nearly half of all applications remain in corporate data centers.

That’s why Google Cloud executives and leading customers spoke about Google’s heavy investments in infrastructure in recent years. “Our ultra-scale, highly reliable, AI supercomputing systems combine TPU and GPU accelerators with high-performance AI-optimized storage, scalable networking, offloads, and water-cooling,” Kurian said. The firm’s compilers and software tools optimize AI models “to deliver the best performance, latency, and overall throughput.”

Kurian cited those customers—and more—including Yahoo!, which is migrating 500 million mailboxes, with nearly 500 petabytes of data, to run on Google Cloud. He specifically cited Google Cloud infrastructure as a differentiator, given its development of the GKE Kubernetes engine, user-focused products for AI—Vertex AI for developers and Duet AI for users—and optimized cloud infrastructure hardware for end-to-end management of AI resources. Beyond that, Google Cloud is building an AI ecosystem to engage with enterprise customers. Now, Google Cloud must reach out to more business decision-makers – including CXOs and finance executives – to convince them that now is the time to move the next wave of business workloads to the cloud, using Google Cloud services to do so.


* “Google Cloud Begins Profitability Era: 5 Huge Q2 Earnings Takeaways,” CRN Cloud News, July 26, 2023.

Estimating the Ginormous Growth of Genomics Data and Storage


By Srini Chari, Ph.D., MBA | Ravi Shankar, Ph.D., MBA

Genomic data science is a field of study that enables researchers to use powerful computational and statistical methods to decode the functional information hidden in DNA sequences. Our ability to sequence DNA has far outpaced our ability to decipher the information it contains, so genomic data science will be a vibrant field of research for many years to come.

Researchers are now generating more genomic data than ever before to understand how the genome functions and affects human health and disease. These data are coming from millions of people in various populations across the world. Much of the data generated from genome sequencing research must be stored, even if, over time, older information is discarded. New and expanding areas of life sciences research make the demand for storage colossal.

New areas of life sciences research are driving up data volumes

We identified five major life sciences research areas that are generating large volumes of data. For each area, we estimated the demand for the instruments producing this data, and then the average data generated per year by each instrument. The total storage demand over the study period is obtained by multiplying the number of sequencing instruments by the average data generated per instrument.

The following areas of life sciences were considered in the data generation estimates:

Next-generation sequencing (NGS) is a massively parallel sequencing technology that offers ultra-high throughput, scalability, and speed.

RNA sequencing (RNA-Seq) uses the capabilities of high-throughput sequencing methods to provide insight into the transcriptome of a cell.

Spatial transcriptomics is a set of methods designed to assign cell types (identified by their mRNA readouts) to their locations in histological sections. These methods can also be used to determine the subcellular localization of mRNA molecules.

Cryogenic electron microscopy (cryo-EM) is a microscopy technique applied to samples cooled to cryogenic temperatures. This approach has attracted wide attention as an alternative to X-ray crystallography or NMR spectroscopy for macromolecular structure determination without the need for crystallization.

Single-cell sequencing technologies refer to the sequencing of a single cell’s genome or transcriptome to obtain genomic, transcriptomic, or other multi-omics information that reveals cell population differences and cellular evolutionary relationships.


How to quantify the colossal storage needs?

We made the following assumptions to estimate the storage needs for the years 2021 through 2028.

  1. We used the information provided in the 2015 paper[1] “Big Data: Astronomical or Genomical?” as the basis of our calculation. Working with various subject matter experts, its authors estimated that in 2015 there were 2,500 gene-sequencing instruments and that the storage need at that time was 9 Petabytes (PB).
  2. Based on their and other research studies, they considered two possibilities:
    1. Storage needs will double every 12 months (conservative case).
    2. Storage needs will double every 7 months (aggressive case).
  3. In our estimation exercise, we assumed the growth rate will be the average of the above two cases.
  4. Based on our discussions with experts, we assumed the storage needs per RNA sequencing, Spatial transcriptomics, and Single-cell sequencing instrument will be 4, 8, and 100 times that of an NGS sequencing instrument, respectively.
  5. We worked with Grand View Research, a market research firm, to estimate the global volumes of NGS sequencing, RNA sequencing, Spatial transcriptomics, Single-cell sequencing, and Cryo-EM instruments for the period from 2021 through 2028.
  6. The volume of each instrument type multiplied by the unit storage requirement for each category provides the total storage need. We also assumed that each year, 80% of the data will be retained (i.e., 20% discarded).
  7. The following table provides a summary of our estimates.



                            Volume of Units      Cumulative Storage 2021 to 2028 (Exabytes)
Sequencing Platform          2021      2028            2021        2028
NGS Sequencing             11,642    33,840            1.22       3,265
RNA Sequencing              8,484    19,921            3.55       7,802
Spatial Transcriptomics       479     1,009            0.40         804
Single Cell Sequencing        128     5,744           22.41      55,636
Cryo EM                       138     1,027            0.12        2.98

Cumulative Storage in Exabytes (sequencing platforms): 67,507
Cumulative Storage Retained in Exabytes (at 80% retention): 54,006

From the table, we see that the cumulative storage need for sequencing through 2028 is around 54,000 Exabytes (assuming 80% retention). For this exercise, we have made many assumptions regarding growth, sequencing technologies, retention, and size based on what we know today. One thing is certain: storage needs for life sciences and genomics will only increase, and they will be ginormous.
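The doubling-rate assumptions behind these estimates can be sketched as follows. This is a minimal illustration of the growth-rate math only: it projects aggregate storage from the 2015 baseline (2,500 instruments, 9 PB) rather than reproducing the per-category, instrument-count-based estimates in the table.

```python
# Sketch of the storage-growth assumptions: average of the conservative
# (doubling every 12 months) and aggressive (doubling every 7 months)
# cases, with 80% of each year's data retained.

conservative = 2 ** (12 / 12)   # annual growth factor if doubling every 12 mo (2.0x)
aggressive = 2 ** (12 / 7)      # annual growth factor if doubling every 7 mo (~3.28x)
growth = (conservative + aggressive) / 2   # averaged, ~2.64x per year

retention = 0.80                # 80% of each year's data is kept

storage_pb = 9.0                # 2015 baseline: ~9 PB across 2,500 instruments
retained_pb = storage_pb
for year in range(2016, 2029):  # project year by year through 2028
    storage_pb *= growth
    retained_pb = retained_pb * growth * retention

print(f"Average annual growth factor: {growth:.2f}x")
print(f"Projected 2028 storage (no retention): {storage_pb / 1000:.0f} EB")
print(f"Projected 2028 storage (80% retention): {retained_pb / 1000:.0f} EB")
```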


Quantifying the Value of Integration

Integration is a vital part of the growth of any business. It fosters collaboration and helps business processes run better by automating many critical tasks.

Data and API Integration

There are several types of integration, but the two important ones are Data integration and API (Application Programming Interface) integration. Data integration is when data from multiple sources is connected in a single centralized or virtual federated location, e.g., a data warehouse. It allows businesses to organize different types of data from different sources into a digital warehouse or federated repository. This makes it easier to view, access, comprehend, and manage data. Data integration fosters collaboration and streamlines many business processes, making it easier to get insights to grow the business. However, it must be able to handle large volumes and varieties of data.

API integration is a connection between two or more applications using their APIs. It lets systems and applications exchange data with each other and allows large amounts of data to be transferred without the errors common in manual transfers. API integration also enhances collaboration and boosts innovation.

These two forms of integration increase automation in business processes, reduce errors, particularly in large, complex data transfers, and make it easier for businesses to organize, view, and share data among employees and with each other. As a result, businesses can save time and boost efficiency and innovation.
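As a minimal illustration of data integration (all source names and fields here are hypothetical), records from two separate systems can be normalized and joined into one unified repository:

```python
# Minimal data-integration sketch: records from two hypothetical sources
# (a CRM export and a billing system) are joined into one unified view.

crm_records = [
    {"CustomerID": 1, "Name": "Acme Corp"},
    {"CustomerID": 2, "Name": "Globex"},
]
billing_records = [
    {"cust_id": 1, "balance": 1200.0},
    {"cust_id": 2, "balance": 450.0},
]

def integrate(crm, billing):
    """Join the two sources on customer ID into a single unified view."""
    balances = {r["cust_id"]: r["balance"] for r in billing}
    return [
        {"id": r["CustomerID"],
         "name": r["Name"],
         "balance": balances.get(r["CustomerID"], 0.0)}
        for r in crm
    ]

warehouse = integrate(crm_records, billing_records)
print(warehouse)
```

In practice, an API integration would fetch `crm_records` and `billing_records` programmatically from each system’s API rather than from static exports; the normalization step stays the same.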

The Value of Integration is Growing

Integration is becoming more widely used. Many businesses are implementing integration in their business processes to improve how they process and view data. Artificial Intelligence (AI) and Cloud Computing techniques are being used to make integration more effective and valuable, and to improve overall quality.

At Cabot Partners, we understand the importance of integration and how vital it is to the growth of any business. Investing in the latest IT technologies, such as Cloud Computing and AI, helps automate business processes and increase integration maturity, enabling businesses to operate at a higher level.

Integration Maturity Model and Quality

We describe three levels of integration in the maturity model: Basic, Advanced, and Expert. Basic integration is the simplest, with the lowest level of integration: there is little automation and no self-service capability. The next level is Advanced: at this level, a business has some automation and self-service, but not 100 percent. The final level is Expert: at this level, a business is fully integrated, with full self-service and full automation. Business processes are all automated using the best technology available, so almost no manual processes are needed.

As a business rises through the integration maturity levels, its quality and value increase. One way to measure quality is to look at the seven ISO 9001 quality principles: customer focus, leadership, employee engagement, process approach, continuous improvement, relationship management, and technology-enabled decisions/operations. These seven principles are critical to measuring the quality of integration in a company. If the integration doesn’t help a business meet its customers’ expectations, improve leadership, and increase employee engagement, then the integration implemented in the business is not of high quality. The same holds for improving business processes and operations: if these are not better as a result of integration, then the quality of the integration is low. The ISO 9001 principles and model show what aspects to look at when measuring the quality of integration in your business.
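One simple way to operationalize this assessment is to score each principle and map the total to a maturity level. The 1-5 scale and level thresholds below are our own illustration, not part of ISO 9001 or the maturity model itself:

```python
# Illustrative scoring of integration quality against the seven ISO 9001
# principles; the 1-5 scale and level thresholds are hypothetical.

PRINCIPLES = [
    "customer focus", "leadership", "employee engagement",
    "process approach", "continuous improvement",
    "relationship management", "technology-enabled decisions",
]

def maturity_level(scores):
    """Map per-principle scores (1-5 each) to Basic/Advanced/Expert."""
    assert set(scores) == set(PRINCIPLES)
    avg = sum(scores.values()) / len(scores)
    if avg >= 4.5:
        return "Expert"      # fully automated, full self-service
    if avg >= 3.0:
        return "Advanced"    # partial automation and self-service
    return "Basic"           # little automation, no self-service

sample = {p: 4 for p in PRINCIPLES}  # uniform score of 4 on each principle
print(maturity_level(sample))
```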

These techniques are used to measure the quality of IBM Cloud Pak for Integration. To learn more please download this paper.

Emerging Computing Technology Trends for 2022

M. R. Pamidi, Ph. D.
Happy New Year to all. Here are some thoughts on key emerging computing technology trends for 2022:
We see AI expanding in a variety of ways.
  1. Reliable data: Organizations, continuing their efforts to build digital-first business models, will further explore using AI to enhance customer acquisition, improve customer experience, and expand customer retention. To achieve these goals, they need reliable data that is both clean and structured. Today, these tasks are accomplished by highly paid data scientists, who maintain their notebooks with very little shared infrastructure. We expect AI to take over many of these activities and create pools of priceless data and pipelines of valuable dataflow.
  2. Conversational and ethical AI: Debates will continue. Recent (ab)use of AI by major social media companies and state agencies involving face recognition, monopoly, and privacy issues has caught the attention of politicians and lawmakers worldwide, and many countries have imposed hefty fines on these companies. This will force media companies to adopt more ethical AI.
  3. AI for all: We’ve often heard the phrase “data is the new oil.” This may be true, but remember that OPEC and a few other countries control oil extraction, production, and distribution, whereas data is much more democratic. Even oil-scarce smart countries (e.g., Israel) in a flat world can exploit AI and show their prowess.
  4. Improving lifestyle: Researchers have developed a machine-learning (ML) program that can be connected to a human brain and used to command a robot. The program adjusts the robot’s movements based on electrical signals from the brain. With this invention, tetraplegic patients will hopefully be able to carry out more day-to-day activities on their own.
  5. Entertainment: On a more entertaining side, classical-music lovers know that Beethoven composed nine symphonies and was reportedly working on his 10th when he died in 1827, leaving behind 40 sketches for the symphony. A few years ago, music lovers, musicologists, academics, and AI experts, mainly from Europe and the U.S., got together and decided to complete Beethoven’s unfinished 10th using AI, in a project sponsored by Deutsche Telekom.[1] The work had its premiere in Bonn on October 9, 2021.[2] This is definitely one of the most creative and fascinating applications of AI.
  6. Patents: In an excellent article on AI in The Wall Street Journal, the reporter notes that in July 2021 South Africa granted a patent to an invention that listed an AI system as the inventor.[3] The system came up with an idea for a beverage container based on fractal geometry. It was the first time a government awarded a patent for an invention made by AI. The U.S. grants patents only to human beings, or “natural persons.”
Cloud Computing
  1. Continuing growth: Cloud Computing (CC) continues to make deeper inroads into enterprises, and spending on CC is expected to surpass non-cloud spending before 2025. CC has traditionally been a technology disruptor and will eventually morph into a business disruptor in many areas, e.g., bio-pharma, the public sector, consumer goods, banking and financial services, oil and gas, energy, and technology, to name a few. We expect the current leaders (Amazon Web Services, Microsoft Azure, and Google GCP) to maintain their strong positions with continued growth (Figure 1), although Google and Microsoft are gaining market share at the expense of Amazon.[4]
Figure 1. Public Cloud Market Shares
  2. Security and privacy: As CC proliferates, so will concerns about cybersecurity. Traditionally, security has been an afterthought in enterprises; they will soon realize that if IT is a cake, security should be baked in like eggs, not just brushed on later as icing. DevOps will gradually be replaced by DevSecOps, and we expect public clouds to begin distributing to different physical locations with due consideration for geo-fencing and privacy laws, such as Brazil’s Lei Geral de Proteção de Dados (LGPD), the California Consumer Privacy Act/California Privacy Rights Act of 2020 (CCPA/CPRA), the EU’s GDPR, and South Africa’s Protection of Personal Information (POPI) Act. The U.S. is still kind of loosey-goosey on privacy issues and appears to echo what Scott McNealy of Sun Microsystems said over 20 years ago: “You have zero privacy anyway. Get over it.” We hope the rest of the U.S. learns from California, which has long led the nation on these issues.
  3. Complements AI: AI and CC will complement one another, because AI with ML and Deep Learning requires large amounts of computing resources (CPUs/GPUs/IPUs/TPUs, speed, storage, and network bandwidth), and CC can easily deliver these to those in need. AI will get smarter and more resourceful, creating its own algorithms as it ‘learns’ from experience, with very little help from humans.
  4. Serverless computing: “Serverless Computing”, a buzz phrase for the past few years, will make a deeper footprint through AWS Lambda, Microsoft Azure Functions, and IBM Cloud Functions. Serverless means enterprises are not acquiring or leasing servers but are using a cloud provider on a pay-as-you-go basis. So “serverless” is really a misnomer; someone out there pays for and owns those servers. It’s more like “less-server” or, to please our grammarian readers, “Fewer-Servers Computing.”
  5. Streaming: Finally, with the increasing emergence and embrace of 5G and Wi-Fi 6E, not only more, but new kinds of, data, such as streams from the Amazon Luna and Google Stadia gaming platforms, will be flowing over networks. Only CC can accommodate such burst-load spikes, as it has successfully done on Black Fridays and Cyber Mondays in recent years.
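The pay-as-you-go model described in item 4 can be illustrated with a tiny event handler in the common AWS Lambda Python style. The function name and event fields here are illustrative only; locally we simply simulate the platform invoking the handler once per event.

```python
# A minimal sketch of the "serverless" model: the provider invokes a
# handler function per event and bills per invocation and duration.
# Handler shape follows the common AWS Lambda Python convention.

def handler(event, context=None):
    """Process one event; the cloud provider owns and scales the servers."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Locally, we simulate what the platform does on each trigger:
print(handler({"name": "cloud"})["body"])  # -> Hello, cloud!
```

The enterprise never sees the servers behind `handler`; it only pays for the invocations.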
High-Performance Computing
  1. Accelerated computing: Years ago, High-Performance Computing (HPC), born in traditional on-premises datacenters, was done using expensive water-cooled supercomputers[5] and parallel processing techniques to execute multiple time-consuming tasks simultaneously. However, edge computing and AI have redefined HPC, which can now deliver these tasks very inexpensively. What has made this possible is a combination of AI, new kinds of processors beyond traditional CPUs, such as GPUs (Nvidia), TPUs (Google), and IPUs (Graphcore), and improvements in traditional ASICs and FPGAs.
  2. Mainstreaming: AI, CC, and HPC complement one another: AI, as noted above, drives the HPC engine, and CC democratizes IT infrastructure and delivers a level playing field. Once the domain mainly of academia, national labs, and defense, HPC has been widely embraced by aerospace, bio-pharma, energy, healthcare, oil and gas, Wall Street, and other industries. With CC delivering HPC as a Service (HPCaaS), edge computing will further extend HPC’s footprint. These trends will continue as Exascale Computing appears on the horizon, with performance measured in exaFLOPS (1 quintillion, or 10^18, FLOPS). But we are still far from achieving the late, great Seymour Cray’s vision of 4-T Computing: terahertz chip speed, terabit bandwidth, terabyte memory (achieved), and terabyte storage (achieved).
Quantum Computing
The concept of Quantum Computing (QC) was first posited by the Nobel Prize-winning physicist Richard Feynman, who explained that classical computers could not process calculations that describe quantum phenomena and that a quantum computing method was needed for these complex problems.[6] Since then, QC has made significant strides, and established companies and nations are investing heavily to gain leadership positions in this field.
On the commercial front:
  1. Honeywell recently completed the previously announced combination of its Honeywell Quantum Solutions division with Cambridge Quantum to form a new company, Quantinuum, in which Honeywell owns a majority stake. Honeywell and IBM were both prior investors in Cambridge Quantum. Jointly headquartered in Cambridge, U.K., and Broomfield, CO, Quantinuum plans to launch a quantum cybersecurity product this year, followed later in the year by an enterprise software package that applies quantum computing to solve complex scientific problems in pharmaceuticals, materials science, specialty chemicals, and agrochemicals.
  2. PlatformE, the fashion technology company enabling on-demand production for top brands, has acquired Catalyst AI, an artificial intelligence company based in Cambridge, UK. The deal will see Catalyst AI’s ML tools for optimizing fashion supply chains bolster PlatformE’s services for efficient on-demand and made-to-order fashion.
  3. IBM recently announced[7] its new 127-qubit (quantum bit) ‘Eagle’ processor at the IBM Quantum Summit 2021, its annual event to showcase milestones in quantum hardware, software, and the growth of the quantum ecosystem. IBM measures progress in quantum computing hardware through three performance attributes:
  • Scale, measured by the number of qubits on a quantum processor, determines how large a quantum circuit can be run.
  • Quality, measured by Quantum Volume, describes how accurately quantum circuits run on a real quantum device.
  • Speed, measured by CLOPS (Circuit Layer Operations Per Second), a metric IBM introduced in November 2021, captures the feasibility of running real calculations composed of a large number of quantum circuits.
“IBM’s Quantum System Two offers a glimpse into the future quantum computing datacenter, where modularity and flexibility of system infrastructure will be key towards continued scaling,” said Dr. Jay Gambetta, IBM Fellow and VP of Quantum Computing. “System Two draws on IBM’s long heritage in both quantum and classical computing, bringing in new innovations at every level of the technology stack.”
Expected to be up and running in 2023, IBM Quantum System Two is designed to work with IBM’s future 433-qubit and 1,121-qubit processors and is based on the concepts of flexibility and modularity. The control hardware has the flexibility and resources necessary to scale, including control electronics, allowing users to manipulate the qubits, and cryogenic cooling, keeping the qubits at a temperature low enough for their quantum properties to manifest.
QC will not replace traditional computing anytime soon, but will coexist with it. When it does mature, QC applications will be widespread in climate-change studies, new drug discoveries, revolutionary agriculture resulting in reduced carbon emissions, systems biology, and cognitive computing processes—involving programs that are capable of learning and becoming better at their jobs—using vast neural networks. Quantum-powered AI will yield machines that are able to think and learn more quickly than ever, although machines may never equal humans in creative and emotional aspects.
Security
Cybercrime reportedly caused damages totaling US$6 trillion globally in 2021; if it were measured as a country, cybercrime would be the world’s third-largest economy after the U.S. and China. These damages are expected to grow at a 15% CAGR, reaching US$10.5 trillion by 2025.
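The projection is consistent with simple compound-growth arithmetic, taking 2021 as the base year:

```python
# Verify the cybercrime cost projection: US$6 trillion in 2021,
# compounding at a 15% annual growth rate (CAGR) through 2025.
base, cagr = 6.0, 0.15                      # trillions of US$, growth rate
projection_2025 = base * (1 + cagr) ** 4    # four compounding years, 2021 -> 2025
print(round(projection_2025, 2))            # ~10.49, i.e. roughly US$10.5 trillion
```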
  1. Security, like CC, is a journey and not a destination, and security threats from hackers, fraudsters, phishers, and scammers are only expected to get worse and more frequent. Ransomware attacks, for instance, were three times higher in the first quarter of 2021 than they were during all of 2019, according to the UK National Cyber Security Centre, and sixty-one percent of respondents to a PwC research survey expect ransomware attacks to increase in 2022. Ransomware locks files behind hard-to-break encryption and threatens to wipe them all if a ransom is not paid. Not only organizations but also individuals have become targets. AI, again, is coming to the rescue of cybersecurity professionals, as it did in financial fraud detection involving money-laundering schemes: AI can identify unusual patterns of behavior in systems dealing with hundreds of thousands of events per second. But as IT security professionals encourage companies to invest in AI, cybercriminals are equally adept and aware of AI’s benefits and will try to outsmart IT. In fact, they have already developed new threats using ML technology to bypass cybersecurity defenses (think of ‘sandbox’ evasion). Again, it will be a battle of good vs. evil using the same technology, AI, and the savvy ones will win. This is not to discourage security spending, but to encourage spending it wisely.
  2. Phishing or spear phishing, either in the form of employees tempted to click on an innocent-looking link, thus welcoming malware, or via USB devices that employees pick up for free at trade shows, is also becoming more common. Stuxnet is one of the most well-known attacks of the latter kind.
  3. Finally, the Internet of Things (IoT), with about 18 billion devices expected to be connected by 2022, is another attractive target for cybercriminals. The targets include billions of smart appliances, light bulbs, autonomous vehicles, and plant and control systems (chemical, electric power, manufacturing, traffic, oil and gas, water supply…). Thus, IoT may have to be rechristened IoVT—Internet of Vulnerable Things.
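The pattern-spotting idea in item 1 (AI flagging unusual behavior among huge event volumes) can be sketched with a deliberately simple statistical detector. Production systems use far richer ML models over many features; this is a minimal z-score sketch, and the login counts below are made up.

```python
# A toy illustration of anomaly detection on event streams: learn a
# baseline from normal activity, then flag statistical outliers.
from statistics import mean, stdev

def fit_baseline(samples):
    """Learn mean/std of a normal-behavior metric (e.g. logins per minute)."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from normal."""
    mu, sigma = baseline
    return abs(value - mu) > threshold * sigma

normal_logins = [48, 52, 50, 47, 53, 51, 49, 50, 52, 48]  # invented data
baseline = fit_baseline(normal_logins)
print(is_anomalous(51, baseline))   # typical activity -> False
print(is_anomalous(500, baseline))  # sudden spike -> True
```

Real deployments replace the z-score with learned models, but the shape is the same: fit on normal behavior, score new events in real time.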
The IT industry is never dull and 2022 will be no different.
AI will invade more fields and also attract the attention of central governments worldwide concerning privacy, racial profiling, and facial recognition.
CC will continue to grow, fueled by its leaders’ growth. New players will face daunting challenges from established vendors.
HPC, aided by AI and CC, will become cheaper to embrace and expand its footprint by entering new fields.
QC is still in its early stages and, but for a few marquee use cases, may take 5 to 10 years to reach practical implementations.
Security will face more challenges, with hacksters (hackers + fraudsters) trying to outsmart cybersecurity experts. Central governments must play a key role in preventing infrastructure meltdowns caused by individuals (seeking fun, money, or both) or state-sponsored actors. While our defense brass is stuck in 20th-century warfare (mass killings, carpet bombing), the 21st century will face cyber warfare. Einstein is famously reported to have said, “I do not know with what weapons World War III will be fought, but World War IV will be fought with sticks and stones.” We beg to disagree with probably the greatest scientist and humanitarian of all time and state: the next World War will be fought with ‘0’s and ‘1’s. It will be a cyber war. The mass destruction of past wars will be replaced by mass disruption.
[1] “Beethoven’s 10th Symphony Completed By AI: Premiere October 2021,”
[2] “Welturaufführung: Beethoven X,” October 9, 2021.
[3] “For AI, 2021 Brought Big Events,” John McCormick, The Wall Street Journal, December 30, 2021.
[4] “Rivals Tap Cash Piles To Win In Cloud,” Tripp Mickle and Aaron Tilley, The Wall Street Journal, December 30, 2021 (may need subscription for access).
[5] Seymour Cray, often called the Father of Supercomputing, once quipped, “I’m an overpaid plumber.”
[6] W. Knight, “Serious Quantum Computers Are Here. What Are We Going To Do With Them?”, MIT Technology Review, February 2018.
[7] IBM Unveils Breakthrough 127-Qubit Quantum Processor, November 16, 2021.

As the world gathers to fight climate change, let’s recognize the critical role of HPC, AI and Cloud Computing

This week in Glasgow, the COP26 summit will bring global leaders together to accelerate action towards combating climate change. This is happening as energy consumption, mostly from polluting fossil fuels, is at an all-time high. This may seem good for the oil, gas, and coal industries, but it isn’t. The grave reality is that the fossil fuel industry is under immense pressure to mitigate climate change and decarbonize, since high levels of energy consumption are causing unsustainable levels of CO2 and other greenhouse gases.


So, the energy industry is undergoing a profound transition from fossil fuels to renewable and clean energy sources. As oil and gas companies decarbonize, they are looking into new and economically viable solutions, including potentially becoming carbon-neutral energy companies. For this, they are investing heavily in physical infrastructure and boosting investments in High-Performance Computing (HPC) and artificial intelligence (AI) to innovate and quickly solve critical problems in this transition.


Oil and gas companies have used HPC (for seismic processing and reservoir simulation) and certain forms of AI (analytics) for decades to improve decision-making in exploration and production, and to reduce investment risks, particularly as fossil fuels become harder to extract. Now, as they transition, they must be able to handle newer interdisciplinary workloads in geophysics, computer-aided engineering (CAE), life sciences, combustion/turbulence modeling, weather modeling, materials science, computational chemistry, nuclear engineering, and advanced optimization.


For this, oil and gas companies are increasing their investments in HPC, AI, and other innovative and agile solutions to handle exploding compute and storage requirements. As the use of AI and HPC continues to grow in the energy industry, cloud computing is making it easier to process spiky workloads and improve the overall user experience. So, many oil and gas companies are using hybrid cloud solutions with several deployment options.


The harmful impact of climate change is becoming more severe day by day, threatening the survival of the planet. The entire world is coming together to fight it.  Oil and gas companies are doing their part and transitioning to renewable energy sources. This transition is hard and expensive but is made easier with novel applications and extensions of proven HPC and AI solutions oil and gas companies have used for years. As the world focuses this week on climate change, let’s also recognize the critical role HPC and AI play in solving one of mankind’s most pressing challenges.


You can learn more by reading this Hewlett Packard Enterprise whitepaper that Cabot Partners recently helped create.

HPC and AI enable breakthroughs in genomics for better healthcare

Many Life Sciences organizations are using digital technologies to meet the needs and expectations of patients. These technologies help treat and manage diseases in new ways. HPC and AI solutions are at the forefront of this. They are needed to accelerate breakthroughs in large-scale genomics.

Genomics is a sub-discipline of molecular biology that focuses on the structure, function, evolution, mapping, and editing of genomes. It is a vital and growing field because it can improve lifestyles and outcomes for patients. In the next five years, the economic impact of genomics is estimated to be in the hundreds of billions to a few trillion dollars a year.

Next-generation Sequencing (NGS), Translational and Precision Medicine

High-Performance Computing (HPC) and Artificial Intelligence (AI) are essential for genomics primarily because they help speed up NGS, which processes and reduces raw data into a usable format. NGS helps determine the sequence of DNA or RNA to study genetic variations associated with diseases or other biological phenomena.
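As a toy illustration of this “reduce raw data into a usable format” step (real pipelines use dedicated FASTQ parsers, aligners, and variant callers at vastly larger scale; the reads and quality scores below are invented):

```python
# A toy sketch of an early NGS processing step: filter out low-quality
# reads, then summarize the base content of what survives.

def mean_quality(scores):
    """Average of per-base quality scores for one read."""
    return sum(scores) / len(scores)

def filter_and_summarize(reads, min_quality=30):
    """Keep reads whose mean Phred-like score passes the threshold,
    then report the GC content of the surviving bases."""
    kept = [seq for seq, scores in reads if mean_quality(scores) >= min_quality]
    bases = "".join(kept)
    gc = (bases.count("G") + bases.count("C")) / len(bases)
    return kept, gc

raw_reads = [
    ("ACGTGC", [35, 36, 34, 33, 35, 36]),  # high quality, kept
    ("GGCCAA", [12, 10, 15, 11, 9, 14]),   # low quality, discarded
]
kept, gc_content = filter_and_summarize(raw_reads)
print(len(kept), round(gc_content, 2))  # -> 1 0.67
```

HPC matters because the same logic must run over billions of reads, not two.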

After this NGS step, it is important to establish the relationship between genotypes and phenotypes to understand the influence individual DNA variances have on disease and medical outcomes. This is translational medicine. The final step is precision or personalized medicine, which customizes disease prevention and treatment for an individual based on their genetic makeup, environment, and lifestyle. This last step uses all the data collected using HPC and AI technology to create a personalized approach for the patient.

How HPC and AI help accelerate genomics

Genomics is difficult, but new HPC technology is making the process easier. NGS involves complex algorithms that require large amounts of memory for assembly and analysis. HPC and AI solutions help speed up this process and make it cheaper and more accurate. Using the cloud and new IT solutions, sequence data analysis also becomes easier and can be done at a much larger scale.

Translational medicine requires HPC solutions that can process large amounts of data efficiently. This step looks for relationships among many genes, DNA variants, and diseases, making it possible to provide personalized treatment for a patient.

HPC and AI are game changers in life sciences. They make genomics easier and faster for healthcare providers, so they can provide highly effective personalized care for their patients. We expect HPC and AI in Life Sciences to continue to grow even more rapidly, especially in the post-COVID era, by enabling breakthroughs in vaccines, personalized medicine, and healthcare.

You can learn more by reading this Hewlett Packard Enterprise and NVIDIA whitepaper that Cabot Partners recently helped create.

The Promise and Peril of RPA

RPA or Robotic Process Automation emulates human activity when interacting with digital software. It automates tedious and mundane business processes. Artificial Intelligence (AI) when integrated with RPA increases business value. AI can directly be used in bots to execute tasks without human intervention. This results in better efficiency, and improved customer and employee experiences.

RPA software revenue is growing rapidly despite economic disruptions caused by the COVID-19 pandemic and is projected to reach $1.89 billion in 2021, with double-digit growth rates through 2024.

Automating processes with RPA seems like a great solution in theory, but in practice this isn’t always the case. RPA has been successful for some but disappointing for others. While many organizations are relatively happy with their automation investment, most haven’t fully realized the ROI promised by RPA software vendors. For this reason, clients need to carefully evaluate the various RPA vendors before making this strategic investment.

Read this Cabot Partners paper for more details.

A Fresh Look at the Latest AMD EPYC 7003 Series Processors for EDA and CAE Workloads

When it comes to high-performance computing (HPC), engineers can never get enough performance. Even minor improvements at the chip level can have dramatic financial impacts in hyper-competitive industries such as computer-aided engineering (CAE) for manufacturing and electronic design automation (EDA).

With their respective x86 processor lineups, Intel and AMD continue to battle for bragging rights, leapfrogging one another in terms of absolute performance and price-performance. Both Intel and AMD provide a comprehensive set of processor SKUs optimized for various HPC workloads.

In March of 2021, AMD “upped the ante” with the introduction of their 3rd Gen AMD EPYC™ processors. Dubbed the world’s highest-performing server processor, AMD 7003 series processors deliver up to 19% more instructions per clock (IPC) than the previous generation. The new “Zen 3” processor cores deliver industry-leading amounts of cache per core, a faster Infinity Fabric™, and industry-leading memory bandwidth of 3200 MT/s across eight channels of DDR4 memory. HPC users are particularly interested in the recently announced 7xF3 high-frequency SKUs with a boost speed of up to 4.1 GHz.
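The quoted bandwidth follows from simple arithmetic: DDR4-3200 performs 3,200 mega-transfers per second over a 64-bit (8-byte) channel, and eight channels give the theoretical per-socket peak (actual sustained bandwidth is lower):

```python
# Back-of-envelope check of the eight-channel DDR4-3200 memory bandwidth.
transfers_per_sec = 3200e6   # DDR4-3200: 3200 mega-transfers per second
bytes_per_transfer = 8       # 64-bit DDR4 channel width
channels = 8                 # memory channels per socket
bandwidth_gb_s = transfers_per_sec * bytes_per_transfer * channels / 1e9
print(bandwidth_gb_s)        # -> 204.8 (GB/s theoretical peak per socket)
```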

In two recently published whitepapers sponsored by AMD, Cabot Partners looked at the latest AMD EPYC 7003 series processors (aka “Milan”) in HPE Apollo and HPE ProLiant server platforms, characterizing their performance for various CAE and EDA workloads. Among the headlines were that EPYC 7003 series processors deliver 36% better throughput and up to 60% more simultaneous simulations per server than previous 2nd Gen EPYC processors.

These performance gains benchmarked on the latest HPE servers make these processors worth a look. Readers can download the recently published whitepapers here:

TVO Analysis of Federated Learning with IBM Cloud Pak for Data

Analytics and AI are profoundly transforming how businesses and governments engage with consumers and citizens. Across many industries, high value transformative use cases in personalized medicine, predictive maintenance, fraud detection, cybersecurity, logistics, customer engagement and more are rapidly emerging. In fact, AI adoption alone has grown an astounding 270% in the last four years and 40% of organizations expect it to be the leading game changer in business[1]. However, for analytics and AI to become an integral part of an organization, numerous deployment challenges with data and infrastructure must be overcome – data volumes (50%), data quality and management (47%) and skills (44%)[2].

In addition, many companies are beginning to use hybrid cloud and multi-cloud computing models to knit together services to reach higher levels of productivity and scale. Today, large organizations leverage almost five clouds on average. About 84% of organizations have a strategy to use multiple clouds[3].

IBM Cloud Pak for Data is an end-to-end Data and AI platform that reduces complexity, increases scalability, accelerates time to value, and maximizes ROI with seamless procedures to extend to multiple clouds. While Cloud Pak for Data can run on any public or private cloud, it is also modular and composable, allowing enterprises to embrace just the capabilities they need on-premises. So, it is truly a hybrid, multi-cloud platform.

Recently, IBM announced enhancements to IBM Cloud Pak for Data (Version 3.5). These enhancements can be broadly grouped into two key themes: cost reduction and innovation to drive digital transformation. Customers can drive down costs through automation, consolidated management, and an integrated platform. On the innovation front, accelerated AI, Federated Learning, improved governance and security, and an expanded ecosystem are the key focus areas. In this blog, we primarily focus on the value of Federated Learning.

Federated learning (also known as collaborative learning) is a machine learning technique that trains an algorithm across multiple decentralized edge devices or servers holding local datasets, without transferring them (Figure 1). The data stays local, allowing deep learning algorithms to execute while preserving privacy and security. This approach differs from traditional centralized machine learning, where all the local datasets are uploaded to one server and ML algorithms are executed on the aggregated dataset.

Figure 1: Comparison of Federated Learning and a Standard Approach

Federated learning enables multiple actors to build a common, robust machine learning model without sharing data, thus maintaining data privacy, data security, and data access rights while providing access to heterogeneous data. Many industries, including defense, telecommunications, IoT, healthcare, manufacturing, and retail, use federated learning and are getting significant additional value from their AI/ML initiatives.
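The decentralized training described above can be sketched as a toy federated averaging (FedAvg) loop: only model weights travel to the server, never the raw data. The 1-D linear model and client datasets below are invented for illustration and do not represent IBM's implementation.

```python
# A minimal federated averaging (FedAvg) sketch: each site trains on its
# local data and only weights are sent to the server; the raw datasets
# never leave the clients. Toy 1-D linear model y = w * x.

def local_update(w, data, lr=0.01, epochs=50):
    """Gradient-descent steps on one client's private (x, y) pairs."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_round(global_w, client_datasets):
    """Server averages client weights, weighted by local dataset size."""
    updates = [(local_update(global_w, d), len(d)) for d in client_datasets]
    total = sum(n for _, n in updates)
    return sum(w * n for w, n in updates) / total

clients = [                      # both sites secretly follow y = 3x
    [(1.0, 3.0), (2.0, 6.0)],    # hospital A's private data
    [(3.0, 9.0), (4.0, 12.0)],   # hospital B's private data
]
w = 0.0
for _ in range(10):              # ten communication rounds
    w = federated_round(w, clients)
print(round(w, 2))  # -> 3.0 (converges without pooling the data)
```

The server learns the shared relationship (w = 3) even though it never sees either hospital's records, which is the privacy property the text describes.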

For IBM Cloud Pak for Data, this additional value can be quantified using the Cabot Partners Total Value of Ownership (TVO) Framework.

High Level TVO Framework for Federated learning

A TVO analysis is an ideal way to quantify and compare the value of Federated Learning against the standard approach to machine learning. In the TVO analysis, the Total Value (Total Benefits - Total Costs) of the IBM Cloud Pak for Data solution with Federated Learning is compared against the IBM Cloud Pak for Data solution without Federated Learning.

The TVO framework (Figure 2) categorizes the interrelated cost/value drivers (circles) for Analytics by each quadrant:  Costs, Productivity, Revenue/Profits and Risks. Along the horizontal axis, the drivers are arranged based on whether they are primarily Technical or Business drivers. Along the vertical axis, drivers are arranged based on ease of measurability: Direct or Derived.

The cost/value drivers for Analytics are depicted as circles whose size is proportional to the potential impact on a client’s Total Value (Benefits - Costs) of Ownership, or TVO, as follows:

  • Total Costs of Ownership (TCO): Typical costs include one-time acquisition costs for hardware and deployment, and annual costs for software, maintenance, and operations. For the case without Federated Learning, the costs associated with transferring data to a central repository must also be considered.

Figure 2: TVO Framework for Federated Learning with Cost/Value Drivers

  • Improved Productivity: The TVO model quantifies the value of productivity gains for data scientists, data engineers, application developers, and the organization. It should also consider the value of the additional heterogeneous data made available by Federated Learning. For example, Federated Learning enables mobile phones to collaboratively learn a shared prediction model while keeping all the training data on the device, decoupling the ability to do machine learning from the need to store the data in the cloud; the value of this innovation should be considered for applicable cases.
  • Revenue/Profits: A key benefit of Federated Learning is access to a larger pool of data, resulting in better machine learning performance while respecting data ownership and privacy. Faster time to value and better performance yield greater innovation and better decision-making, which spur growth, increase revenues, and improve profits.
  • Risk Mitigation: Federated Learning enables multiple actors to build a common, robust machine learning model without sharing data, allowing users to address critical issues such as data privacy, data security, and data access rights, which in turn improves governance and compliance.

The above framework is a simplified pictorial view of a TVO analysis. In a rigorous TVO analysis, which is a major offering of Cabot Partners, the elements of the framework are quantified and expressed in easily understandable business terms. In addition, the analysis can be expanded to include other innovation features.
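As a purely illustrative sketch of how the framework's elements get quantified (all dollar figures below are hypothetical placeholders, not Cabot Partners or IBM estimates), the with/without comparison reduces to Total Benefits minus Total Costs for each scenario:

```python
# Toy TVO comparison: Total Value = Total Benefits - Total Costs.
# The no-Federated-Learning case carries an extra cost for moving data
# to a central repository; all figures are hypothetical placeholders.

def total_value(benefits, costs):
    return sum(benefits.values()) - sum(costs.values())

common_costs = {"acquisition": 300_000, "annual_sw_maint_ops": 200_000}

with_fl = total_value(
    benefits={"productivity": 400_000, "revenue_uplift": 600_000,
              "risk_mitigation": 200_000},
    costs=common_costs,
)
without_fl = total_value(
    benefits={"productivity": 300_000, "revenue_uplift": 450_000,
              "risk_mitigation": 50_000},
    costs={**common_costs, "central_data_transfer": 150_000},
)
print(with_fl, without_fl, with_fl - without_fl)  # -> 700000 150000 550000
```

A rigorous engagement replaces these placeholders with measured client data, but the structure of the calculation is the same.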


IBM recently announced enhancements to IBM Cloud Pak for Data (Version 3.5). The enhancements focus primarily on cost reduction and innovation to drive digital transformation. A major element of the innovation is Federated Learning. As detailed above, Federated Learning amplifies the value of IBM Cloud Pak for Data through:

  • Lower costs: no costs associated with migrating data to a central repository
  • Better models: the availability of heterogeneous data improves the quality of ML models
  • Increased ML performance: access to a larger pool of data
  • Improved security
  • Collaboration without data sharing: multiple actors can build a common, robust ML model, addressing critical issues such as data privacy and data access rights


[2] Ritu Jyoti, “Accelerate and Operationalize AI Deployments Using AI – Optimized Infrastructure”, IDC Technology Spotlight, June 2018  

[3] RightScale® “State of the Cloud Report 2019,” Flexera™.