HPC and AI enable breakthroughs in genomics for better healthcare

Many Life Sciences organizations are using digital technologies to meet the needs and expectations of patients and to treat and manage diseases in new ways. HPC and AI solutions are at the forefront of this shift, and they are needed to accelerate breakthroughs in large-scale genomics.

Genomics is a sub-discipline of molecular biology that focuses on the storage, function, evolution, mapping, and editing of genomes. It is a vital and growing field because it can improve lifestyles and outcomes for patients. Over the next five years, the economic impact of genomics is estimated to be in the hundreds of billions to a few trillion dollars a year.

Next-generation Sequencing (NGS), Translational and Precision Medicine

High-performance computing (HPC) and Artificial Intelligence (AI) are essential for genomics primarily because they accelerate NGS, which processes raw sequencing data and reduces it into a usable format. NGS determines the sequence of DNA or RNA to study genetic variations associated with diseases or other biological phenomena.

After this NGS step, it is important to establish the relationship between genotypes and observed outcomes to understand the influence individual DNA variants have on disease and medical outcomes. This is Translational medicine. The final step is Precision or Personalized medicine, which customizes disease prevention and treatment for an individual based on their genetic makeup, environment, and lifestyle. This last step uses all the data collected with HPC and AI technology to create a personalized approach for the patient.

How HPC and AI help accelerate genomics

Genomics is computationally demanding, but new HPC technology is making the process easier. NGS relies on complex assembly and alignment algorithms that require large amounts of memory. HPC and AI solutions speed up this process and make it cheaper and more accurate. With the cloud and new IT solutions, sequence data analysis also becomes easier and can be done at a much larger scale.

Translational medicine requires HPC solutions that can process large volumes of data efficiently. This step looks for relationships among many genes, DNA variants, and diseases, making it possible to provide personalized treatment for a patient.

HPC and AI are game changers in life sciences. They make genomics easier and faster for healthcare providers, so they can provide highly effective personalized care for their patients. We expect HPC and AI in Life Sciences to continue to grow even more rapidly, especially in the post-COVID era, by enabling breakthroughs in vaccines, personalized medicine, and healthcare.

You can learn more by reading this Hewlett Packard Enterprise and NVIDIA whitepaper that Cabot Partners recently helped create.

The Promise and Peril of RPA

RPA, or Robotic Process Automation, emulates human activity when interacting with digital software and automates tedious and mundane business processes. Artificial Intelligence (AI), when integrated with RPA, increases business value: AI can be used directly in bots to execute tasks without human intervention. This results in better efficiency and improved customer and employee experiences.

RPA software revenue is growing rapidly despite the economic disruptions caused by the COVID-19 pandemic and is projected to reach $1.89 billion in 2021, with double-digit growth rates expected through 2024.

Automating processes with RPA seems like a great solution in theory, but in practice results have been mixed. RPA has been successful for some but disappointing for others. While many organizations are relatively happy with their automation investment, most haven’t fully realized the ROI promised by RPA software vendors. For this reason, clients need to carefully evaluate the various RPA vendors before making this strategic investment.

Read this Cabot Partners paper for more details.

A Fresh Look at the Latest AMD EPYC 7003 Series Processors for EDA and CAE Workloads

When it comes to high-performance computing (HPC), engineers can never get enough performance. Even minor improvements at the chip level can have dramatic financial impacts in hyper-competitive industries such as computer-aided engineering (CAE) for manufacturing and electronic design automation (EDA).

With their respective x86 processor lineups, Intel and AMD continue to battle for bragging rights, leapfrogging one another in terms of absolute performance and price-performance. Both Intel and AMD provide a comprehensive set of processor SKUs optimized for various HPC workloads.

In March of 2021, AMD “upped the ante” with the introduction of its 3rd Gen AMD EPYC™ processors. Dubbed the world’s highest-performing server processors, AMD 7003 series processors deliver up to 19% more instructions per clock (IPC) than the previous generation. The new “Zen 3” processor cores deliver industry-leading amounts of cache per core, a faster Infinity Fabric™, and memory speeds of 3200 MT/s across eight channels of DDR4 memory. HPC users are particularly interested in the recently announced 7xF3 high-frequency SKUs with a boost speed of up to 4.1 GHz.
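For context, eight channels of DDR4-3200 work out to roughly 205 GB/s of theoretical peak memory bandwidth per socket. The short sketch below shows the back-of-the-envelope arithmetic; it is an illustrative calculation, not a measured or vendor-published figure, and sustained bandwidth in practice is lower.

```python
# Back-of-the-envelope peak memory bandwidth for 8 channels of DDR4-3200.
# Assumes a 64-bit (8-byte) data bus per channel; sustained bandwidth is lower in practice.
channels = 8
transfers_per_sec = 3200e6   # 3200 MT/s per channel
bytes_per_transfer = 8       # 64-bit DDR4 channel width

peak_bw_gb_s = channels * transfers_per_sec * bytes_per_transfer / 1e9
print(f"Theoretical peak: {peak_bw_gb_s:.1f} GB/s per socket")  # ~204.8 GB/s
```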

In two recently published whitepapers sponsored by AMD, Cabot Partners looked at the latest AMD EPYC 7003 series processors (aka “Milan”) in HPE Apollo and HPE ProLiant server platforms, characterizing their performance for various CAE and EDA workloads. Among the headline findings: EPYC 7003 series processors deliver 36% better throughput and up to 60% more simultaneous simulations per server than previous 2nd Gen EPYC processors.

These performance gains benchmarked on the latest HPE servers make these processors worth a look. Readers can download the recently published whitepapers here:

TVO Analysis of Federated Learning with IBM Cloud Pak for Data

Analytics and AI are profoundly transforming how businesses and governments engage with consumers and citizens. Across many industries, high value transformative use cases in personalized medicine, predictive maintenance, fraud detection, cybersecurity, logistics, customer engagement and more are rapidly emerging. In fact, AI adoption alone has grown an astounding 270% in the last four years and 40% of organizations expect it to be the leading game changer in business[1]. However, for analytics and AI to become an integral part of an organization, numerous deployment challenges with data and infrastructure must be overcome – data volumes (50%), data quality and management (47%) and skills (44%)[2].

In addition, many companies are beginning to use hybrid cloud and multi-cloud computing models to knit together services to reach higher levels of productivity and scale. Today, large organizations leverage almost five clouds on average. About 84% of organizations have a strategy to use multiple clouds[3].

IBM Cloud Pak for Data is an end-to-end Data and AI platform that reduces complexity, increases scalability, accelerates time to value, and maximizes ROI with seamless procedures to extend to multiple clouds. While Cloud Pak for Data can run on any public or private cloud, it is also modular and composable, allowing enterprises to embrace just the capabilities that they need on-premises. So, it is truly a hybrid multi-cloud platform.

Recently, IBM announced enhancements to IBM Cloud Pak for Data (Version 3.5). These enhancements can be broadly grouped into two key themes: Cost Reduction and Innovation to drive digital transformation. Customers can drive down costs through automation, consolidated management, and an integrated platform. On the innovation front, Accelerated AI, Federated Learning, improved governance and security, and an expanded ecosystem are the key focus areas. In this blog, we primarily focus on the value of Federated Learning.

Federated learning (also known as collaborative learning) is a machine learning technique that trains an algorithm across multiple decentralized edge devices or servers holding local datasets, without transferring those datasets (Figure 1). The data stays local, and deep learning algorithms can be executed while preserving privacy and security. This approach differs from traditional centralized machine learning, where all the local datasets are uploaded to one server and the learning algorithms are executed on the aggregated dataset.
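To make the contrast concrete, here is a minimal federated averaging sketch in Python: each site trains on its own data and only the model weights are shared and averaged by a coordinator. This is an illustrative toy with hypothetical data and a simple linear model; it is not the Cloud Pak for Data implementation.

```python
import numpy as np

# Toy federated averaging: each site fits a local linear model on its own data
# and shares only the fitted weights, never the raw records.

def local_train(X, y):
    # Ordinary least squares on the site's private data
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def federated_round(site_data):
    # Each participant trains locally; only the weights leave each site.
    local_weights = [local_train(X, y) for X, y in site_data]
    # The coordinator averages the weights into a shared global model
    # (sites hold equal amounts of data here; weighted averaging is typical otherwise).
    return np.mean(local_weights, axis=0)

# Hypothetical private datasets held by three separate organizations
rng = np.random.default_rng(0)
sites = []
for _ in range(3):
    X = rng.normal(size=(100, 4))
    y = X @ np.array([1.0, -2.0, 0.5, 3.0]) + rng.normal(scale=0.1, size=100)
    sites.append((X, y))

global_model = federated_round(sites)
print("Global model weights:", np.round(global_model, 2))
```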

Figure 1: Comparison of Federated Learning and a Standard Approach

Federated learning enables multiple actors to build a common, robust machine learning model without sharing data, thus maintaining data privacy, data security, data access rights, and access to heterogeneous data. Many industries, including defense, telecommunications, IoT, healthcare, manufacturing, retail, and others, use federated learning and are getting significant additional value from their AI/ML initiatives.

For IBM Cloud Pak for Data, this additional value can be quantified using the Cabot Partners Total Value of Ownership (TVO) framework.

High Level TVO Framework for Federated learning

TVO analysis is an ideal avenue to quantify the value of Federated Learning compared to the standard approach for Machine Learning. In the TVO analysis, the Total Value (Total Benefits – Total Costs) of the IBM Cloud Pak for Data solution with Federated Learning is compared against the IBM Cloud Pak for Data solution without Federated Learning.
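As a simple illustration, the sketch below computes Total Value for the two scenarios from hypothetical benefit and cost figures; all numbers are placeholders chosen only to show the arithmetic, and a real TVO engagement would quantify each driver in the framework below.

```python
# Illustrative Total Value of Ownership comparison (all figures are hypothetical placeholders).
# Total Value = Total Benefits - Total Costs.

def total_value(benefits, costs):
    return sum(benefits.values()) - sum(costs.values())

with_fl = {
    "benefits": {"productivity": 1.2e6, "revenue_uplift": 0.9e6, "risk_reduction": 0.4e6},
    "costs":    {"software": 0.6e6, "operations": 0.3e6},            # no central data-transfer cost
}
without_fl = {
    "benefits": {"productivity": 1.0e6, "revenue_uplift": 0.7e6, "risk_reduction": 0.2e6},
    "costs":    {"software": 0.6e6, "operations": 0.3e6, "data_transfer": 0.25e6},
}

for name, s in [("With Federated Learning", with_fl), ("Without Federated Learning", without_fl)]:
    print(f"{name}: Total Value = ${total_value(s['benefits'], s['costs']):,.0f}")
```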

The TVO framework (Figure 2) categorizes the interrelated cost/value drivers (circles) for Analytics by each quadrant:  Costs, Productivity, Revenue/Profits and Risks. Along the horizontal axis, the drivers are arranged based on whether they are primarily Technical or Business drivers. Along the vertical axis, drivers are arranged based on ease of measurability: Direct or Derived.

The cost/value drivers for Analytics are depicted as circles whose size is proportional to the potential impact on a client’s Total Value (Benefits – Cost) of Ownership or TVO as follows:

  • Total Costs of Ownership (TCO): Typical costs include one-time acquisition costs for the hardware and deployment, and annual costs for software, maintenance, and operations. For the case without Federated Learning, the costs associated with data transfer to a central repository also need to be considered.

Figure 2: TVO Framework for Federated Learning with Cost/Value Drivers

  • Improved Productivity: The TVO model quantifies the value of productivity gains for data scientists, data engineers, application developers, and the organization. It should also consider the value associated with the availability of additional heterogeneous data due to Federated Learning. For example, Federated Learning enables mobile phones to collaboratively learn a shared prediction model while keeping all the training data on device, decoupling the ability to do machine learning from the need to store the data in the cloud; the value associated with this innovation needs to be considered for applicable cases.
  • Revenue/Profits: A key benefit of Federated Learning is access to a larger pool of data, resulting in increased machine learning performance while respecting data ownership and privacy. Faster time to value with better performance results in greater innovation and better decision-making capabilities, which spur growth, increase revenues, and improve profits.
  • Risk Mitigation: Federated Learning enables multiple actors to build a common, robust machine learning model without sharing data, allowing users to address critical issues such as data privacy, data security, and data access rights, which also improves governance and compliance.

The above framework is a simplified pictorial view of the TVO analysis. In a rigorous TVO analysis, which is a major offering of Cabot Partners, the elements of the framework are quantified and expressed in easily understandable business terms. In addition, the analysis can be expanded to include other innovation features.


IBM recently announced enhancements to IBM Cloud Pak for Data (Version 3.5). The enhancements focus primarily on cost reduction and innovation to drive digital transformation. A major element of innovation is Federated Learning. As detailed above, Federated Learning amplifies the value of IBM Cloud Pak for Data through:

  • Lower costs – no costs associated with data migration to a central database location
  • Availability of heterogeneous data, which improves the quality of ML models
  • Access to a larger pool of data, resulting in increased ML performance
  • Improved security
  • The ability for multiple actors to build a common, robust ML model without sharing data, thus addressing critical issues such as data privacy and data access rights

[1] https://futureiot.tech/gartner-ai-adoption-growing-despite-skills-shortage/

[2] Ritu Jyoti, “Accelerate and Operationalize AI Deployments Using AI – Optimized Infrastructure”, IDC Technology Spotlight, June 2018  

[3] RightScale® STATE OF THE CLOUD REPORT 2019 from Flexera™

IBM Storage Simplified for Multi-cloud and AI

A profound digital transformation is underway as High-Performance Computing (HPC) and Analytics converge with Artificial Intelligence/Machine Learning/Deep Learning (AI/ML/DL). Across every industry, this convergence is accelerating innovation and improving a company’s competitive position, and the quality and effectiveness of its products/services, operations, and customer engagement. Consequently, with 2018 revenues of $28.1 billion, the relatively new AI market is growing rapidly at 35.6% annually[1].

As the volume, velocity, and variety of data continue to explode, spending on storage systems and software just for AI initiatives is already almost $5 billion a year and expected to grow rapidly.[2] In addition, many companies are beginning to use hybrid cloud and multi-cloud computing models to knit together services to reach higher levels of productivity and scale. Today, large organizations leverage almost five clouds on average. About 84% of organizations have a strategy to use multiple clouds[3] and 56% plan to increase the use of containers[4].

What’s needed to handle the data explosion challenges are simple, high-performance and affordable storage solutions that work on hybrid multi-cloud environments (Figure 1).

Figure 1: Data Challenges, Storage Requirements and Solutions for HPC, Analytics and AI


Key Storage Requirements

Scalable and affordable: These two attributes don’t always co-exist in enterprise storage. Historically, highly scalable systems have been more expensive on a cost/capacity basis. However, newer architectures allow computing and storage to be integrated more pervasively and cost-effectively throughout the AI workflow.

Intelligent software: This helps with cumbersome curation and data-cleansing tasks, and helps run and monitor compute- and data-intensive workloads efficiently and reliably from the edge to the core to multiple clouds. It also greatly improves the productivity of highly skilled Data Scientists, Data Engineers, Data Architects, Data Stewards, and others throughout the AI workflow.

Data integration/gravity: This provides the flexibility to simplify and optimize complex data flows for performance even with data stored in multiple geographic locations and environments. Wherever possible, moving the algorithms to where the data resides can accelerate the AI workflow and eliminate expensive data movement costs especially when reusing the same data iteratively.


Storage Solutions Attributes

Parallel: As clients add more storage capacity (including Network Attached Storage – NAS), they are realizing that the operating costs (including downtime and productivity loss) of integrating, managing, securing and analyzing exploding data volumes are escalating. To reduce these costs, many clients are using high performance scalable storage with parallel file systems which can store data across multiple networked servers. These systems facilitate high-performance access through concurrent, coordinated input/output operations between clients and storage nodes across multiple sites/clouds.

Hybrid: Different data types and stages in an AI workflow have varying performance requirements. The right mix of storage systems and software is needed to meet the simultaneous needs for scalability, performance and affordability, on premises and on the cloud. A hybrid storage architecture combines file and object storage to achieve an optimal balance between performance, archiving, and data governance and protection requirements throughout the workflow.

Software-defined: It is hard to support and unify many siloed storage architectures and optimize data placement to ensure the AI workflow runs smoothly with the best performance from ingest to insights. With no dependencies on the underlying hardware, Software-defined Storage (SDS) provides a single administrative interface and a policy-based approach to aggregate and manage storage resources with data protection and scale out the system across servers. It also provides data-aware intelligence to dynamically adapt to real-time needs and orchestrate IT resources to meet critical service level agreements (SLAs) in parallel, virtual, and hybrid multi-cloud environments. SDS is typically platform agnostic and supports the widest range of hardware, AI frameworks, and APIs.

Integrated: A lot of AI innovation is occurring in the cloud. So, regardless of where the data resides, on-premises storage systems with cloud integration will provide the greatest flexibility to leverage cloud-native tools. Since over 80% of clients are expected to use two or more public clouds[5], there will be a need for smooth and integrated data flow to and from multisite/multi-cloud environments. This requires more intelligent storage software for metadata management and for integrating physically distributed, globally addressable storage systems.

IBM Spectrum Storage provides these attributes and accelerates the journey to AI from ingest to insights.

IBM Spectrum Storage and Announcements – October 27, 2020

IBM Spectrum Storage is a comprehensive SDS portfolio that helps to affordably manage and integrate all types of data in a hybrid, on-premises, and/or multi-cloud environment with parallel features that increase performance and business agility. Already proven in HPC, IBM Spectrum Storage software comes with licensing options that provide unique differentiation and value at every stage of the AI workflow from ingest to insights.

On October 27, 2020, IBM announced new capabilities and enhancements to its storage and modern data protection solutions that are designed to:

  • Enrich protection for containers, and expand cloud options for modern data protection, disaster recovery, and data retention
  • Expand support for container-native data access on Red Hat OpenShift
  • Increase container app flexibility with object storage.

These enhancements are primarily designed to support the rapidly expanding container and Kubernetes ecosystem, including Red Hat OpenShift, and to accelerate clients’ journeys to hybrid cloud. This announcement further extends an enterprise’s capabilities to fully adopt containers, Kubernetes, and Red Hat OpenShift as standards across physical, virtual, and cloud platforms.

IBM announced the following new capabilities designed to advance its storage for containers offerings:

  • The IBM Storage Suite for Cloud Paks is designed to expand support for container-native data access on OpenShift. This suite aims to provide more flexibility for continuous integration and continuous delivery (CI/CD) teams, who often need file, object, and block software-defined storage. This is an enhancement with new Spectrum Scale capabilities.
  • Scheduled to be released in 4Q 2020, IBM Spectrum Scale, a leading filesystem for HPC and AI, adds a fully containerized client and run-time operators to provide access to an IBM Spectrum Scale data lake, which could be IBM Elastic Storage systems or an SDS deployment. In addition, IBM Cloud Object Storage adds support for the open source s3fs file-to-object storage interface bundled with Red Hat OpenShift.
  • For clients who are evaluating container support in their existing infrastructure, IBM FlashSystem provides low-latency, high-performance, and high-availability storage for physical, virtual, or container workloads with broad CSI support. The latest release in 4Q 2020 includes updated Ansible scripts for rapid deployment, enhanced support for storage-class memory, and improvements in snapshot and data efficiency.
  • IBM Storage has outlined plans for adding integrated storage management in a fully container-native software-defined solution. This solution will be managed by the Kubernetes administrator, and is designed to provide the performance and capacity scalability demanded by AI and ML workloads in a Red Hat OpenShift environment.
  • IBM intends to enhance IBM Spectrum Protect Plus to protect Red Hat OpenShift environments in 4Q 2020. Enhancements include ease of deployment with the ability to deploy the IBM Spectrum Protect Plus server as a container using a certified Red Hat OpenShift operator; the ability to protect metadata, which enables recovery of applications, namespaces, and clusters to a different location; and expanded container-native and container-ready storage support. IBM also announced the availability of a beta of IBM Spectrum Protect Plus on Microsoft Azure Marketplace.


As HPC and Analytics grow and converge, clients can continue to leverage these new IBM storage capabilities to overcome the many challenges with deploying and scaling AI across their enterprise. These simple storage solutions, on-premises or on hybrid multi-clouds, can accelerate their AI journey from ingest to insights.

[1] https://www.idc.com/getdoc.jsp?containerId=US45334719

[2] http://www.ibm.com/downloads/cas/DRRDZBL2

[3] RightScale® STATE OF THE CLOUD REPORT 2019 from Flexera™


[4] https://www.redhat.com/cms/managed-files/rh-enterprise-open-source-report-detail-f21756-202002-en.pdf

[5] https://www.gartner.com/smarterwithgartner/why-organizations-choose-a-multicloud-strategy/

Cloudera Introduces Analytic Experiences for Cloudera Data Platform

Cloudera recently announced new enterprise data cloud services on Cloudera Data Platform (CDP): CDP Data Engineering; CDP Operational Database; and CDP Data Visualization. The new services include key capabilities to help data engineers, data analysts, and data scientists collaborate across the entire analytics workflow and work smarter and faster. CDP enterprise data cloud services are purpose-built to enable data specialists to navigate the exponential data growth and siloed data analytics operating across multiple public and private clouds.

Data lifecycle integration enables data engineers, data analysts and data scientists to work on the same data securely and efficiently, no matter where that data may reside or where the analytics run. CDP not only helps to improve individual data specialist productivity, it also helps data teams work better together, through its unique hybrid data architecture that integrates analytic experiences across the data lifecycle and across public and private clouds. Effectively managing and securing data collection, enrichment, analysis, experimentation and analytics visualization is fundamental to navigating the data deluge. The result is data scientists and engineers can collaborate better and more rapidly deliver data-driven use cases. Following are the new enterprise cloud services announcements:

CDP Data Engineering: A powerful Apache Spark service on Kubernetes that includes key productivity-enhancing capabilities typically not available with basic data engineering services. Preparing data for analysis and production use cases across the data lifecycle is critical for transforming data into business value. CDP Data Engineering is a purpose-built data engineering service to accelerate enterprise data pipelines from collection and enrichment to insight, at scale.

CDP Operational Database: A high-performance NoSQL database service that provides scale and performance for business-critical operational applications. It offers evolutionary schema support to leverage the power of data while preserving flexibility in application design by allowing changes to underlying data models without having to make changes to the application. In addition, it provides auto-scaling based on the workload utilization of the cluster to optimize infrastructure utilization.

CDP Data Visualization: Simplifies the curation of rich, visual dashboards, reports, and charts to provide agile analytical insight in the language of business, democratizing access to data and analytics across the organization at scale. It allows technical teams to rapidly share analysis and machine learning models using drag-and-drop custom interactive applications. It provides business teams and decision makers the data insights to make trusted, well-informed business decisions.

These data cloud services, in combination with CDP, are purpose-built for data specialists. They deliver rapid, real-time business insights with enterprise-grade security and governance, and will permit Cloudera to continue to be a leader in data science.

Why Deploy an Enterprise Data Warehouse on a Hybrid Cloud Architecture?


Analytics and artificial intelligence (AI) solutions are profoundly transforming how businesses and governments engage with consumers and citizens. Across many industries, high-value transformative use cases in personalized medicine, predictive maintenance, fraud detection, cybersecurity, logistics, customer engagement, geospatial analytics, and more are rapidly emerging.

Deploying and scaling AI across the enterprise is not easy especially as the volume, velocity, and variety of data continue to explode. What’s needed is a well-designed, agile, scalable, high-performance, modern, and cloud-native data and AI platform that allows clients to efficiently traverse the AI space with trust and transparency. An enterprise data warehouse (EDW) is a critical component of this platform.

EDWs are central repositories of integrated data from many sources. They store current and historical data used extensively by organizations for analysis, reporting, and better insights and decision-making. Historically, data warehouse appliances (DWAs) have delivered high query performance and scalability, but are now struggling to transform data into timely, actionable insights with the data explosion.

A hybrid, open, multi-cloud platform allows organizations to take advantage of their data and applications wherever they reside, on-premises, and across many clouds. Here are some key pros and cons of deploying EDWs over on-premises, hybrid, or public clouds (Figure 1):


Figure 1: Comparing Enterprise Data Warehouses on On-Premises, Public and Hybrid Cloud

  • Strategic for the long term: About 80% of enterprise workloads are still on-premises[1] and remain strategic; however, the public/hybrid cloud is even more strategic, driving most of the innovation, growth, and investment in analytics.
  • Total long-term costs: On-premises costs are predictable and become more favorable with greater utilization. Public cloud costs are less predictable; they suit short, infrequent, spiky workloads, and consumption-based pricing produces greater accountability among the user population. However, these costs grow steeply at the higher utilization typical of most EDWs today. In addition, there are many other hidden costs such as long-term contracts and incremental, supplementary licensing fees.

With hybrid cloud EDWs, customers can prudently optimize costs by using on-premises assets for predictable workloads and offloading spiky workloads to the public cloud. This is very effective for the long term, as a smaller on-premises hardware footprint can meet immediate requirements while incremental resource needs during peaks are satisfied by the public cloud. Key components of the total costs include (a simple cost sketch follows this list):

  • Data Transfer/Migration Costs: For on-premises, these are negligible since most of the data for the entire analytics workflow typically resides on-premises. They are significant for public clouds since many analytics workflows require substantial movement of data to and from the public cloud. Often enterprises are limited in their ability to move datasets from the cloud back to their on-premises equipment or to another cloud. Moreover, cloud providers charge fees for transferring data out of their cloud environment, which dramatically increases costs – particularly as datasets continue to grow. Also, migrating on-premises workloads to the public cloud is hard and time-consuming.

In hybrid clouds, there is limited movement of data throughout the analytics workflow to and from the public cloud, and so these costs are low to medium. With consistent cloud-native architectures, migrating workloads from on-premises to public clouds is also relatively easy and less expensive.

  • Capital Costs: Significant capital investment in on-premises IT infrastructure is needed to handle peak loads, which may result in lower, sub-optimal utilization under normal operations. For public clouds, customer capital costs are negligible. For hybrid clouds, some capital investment in IT infrastructure is needed for certain critical analytics workloads to run on-premises, with the rest offloaded to the public cloud. This may result in better utilization and lower capital costs compared to the all on-premises alternative.
  • Upgrade Costs: Significant capital expense for hardware upgrades is needed over time to modernize on-premises IT infrastructure and drive innovation. For public clouds, the customer incurs a negligible capital expense for hardware upgrades since the provider is responsible for the infrastructure. For hybrid clouds, a modest capital expense for hardware upgrades over time is needed to modernize infrastructure.
  • Operating Costs: Since the customer typically owns and operates on-premises assets, costs are predictable, and high-utilization environments provide better economics than public clouds, which are better suited to short, spiky workloads. With a hybrid cloud, the customer can prudently minimize costs by largely using on-premises assets for predictable workloads and offloading spiky workloads to the public cloud.
  • Deployment Costs (no Integration/Customization): Significant for on-premises since provisioning and deploying resources and analytics workflows take more time and effort. Whereas costs are low on public clouds with faster provisioning and deployment as the process is automated. On hybrid clouds, costs are significant since connectivity between on-premises and public cloud and maintaining two environments could add another layer of complexity. However, this could be alleviated with a consistent cloud-native containerized architecture.
  • Management/Maintenance: Moderately hard for on-premises since customers must invest in scarce skills and resources to maintain and operate these environments. Much easier with public clouds since customers typically can use a centralized portal with process automation. For hybrid clouds, it is relatively straightforward for customers to maintain and operate with the right pre-determined operating policies and procedures for workload placement on-premises or on-the-cloud.
  • Integration/Customization: Easier for on-premises customers to customize and integrate newer solutions with their legacy solutions. This is harder to do on public clouds. On hybrid clouds, it is easier to integrate legacy systems with newer custom solutions from the edge to multiple clouds seamlessly.
  • Business Continuity/Serviceability: It can be tailored to provide higher service level agreements (SLAs) for on-premises customers. It is harder to do for public clouds, but they can deliver excellent business continuity. Hybrid clouds can provide high SLAs and excellent business continuity even with disasters.
  • Performance/Scalability: EDWs offer excellent performance on-premises with hardware accelerators, faster storage, and proximity to data, but they are harder to scale to address new business requirements. Performance for large-scale analytics is lower on public clouds since maintaining data proximity is hard and optimized storage and computing infrastructure are typically not available. Public clouds can easily scale to meet new business requirements for smaller data sizes; however, as data sets continue to grow exponentially beyond a few hundred terabytes, these environments have limited elasticity. Hybrid EDWs deliver excellent performance with hardware accelerators, faster storage, and proximity to data either on-premises or on the cloud, and can also easily scale to meet new business requirements.
  • Governance/Compliance: Excellent for on-premises since these operations can be tailored to meet individual enterprise and regulatory requirements. Public clouds have limited ability to tailor these operations for individual customers since they are set broadly by the cloud provider. Hybrid clouds are excellent since these operations can be tailored to meet individual enterprise and regulatory requirements consistently end-to-end.
  • Data Protection/Security: On-premises and hybrid clouds are excellent since sensitive data can be stored and managed for individual customer requirements and protocols. Public clouds are somewhat vulnerable since their infrastructure is shared and many enterprises are reluctant to part with their mission-critical data.
  • Vendor Lock-in: Strong for on-premises and public clouds especially with the underlying software infrastructure. Also, data migration to an alternate solution is complex and expensive.
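
To make the cost trade-off above concrete, here is a simple annual cost sketch comparing all on-premises, all public cloud, and hybrid deployments. Every number in it is a hypothetical placeholder chosen only to illustrate the reasoning (on-premises favors steady high utilization, the public cloud favors bursts); real costs vary widely by workload, provider, and contract.

```python
# Illustrative annual cost comparison (all numbers are hypothetical placeholders).
# On-premises: fixed capacity sized for peak demand. Public cloud: pay per core-hour.
# Hybrid: on-premises sized for the steady baseline, with cloud bursting for peaks.

HOURS_PER_YEAR = 8760
baseline_cores = 800            # steady EDW demand
peak_cores = 1200               # demand during periodic peaks
peak_hours = 800                # hours per year spent at peak demand

onprem_cost_per_core_year = 900     # amortized hardware, power, facilities, staff
cloud_cost_per_core_hour = 0.25     # on-demand rate, including data-egress overhead

all_onprem = peak_cores * onprem_cost_per_core_year
all_cloud = (baseline_cores * HOURS_PER_YEAR
             + (peak_cores - baseline_cores) * peak_hours) * cloud_cost_per_core_hour
hybrid = (baseline_cores * onprem_cost_per_core_year
          + (peak_cores - baseline_cores) * peak_hours * cloud_cost_per_core_hour)

for label, cost in [("All on-premises", all_onprem),
                    ("All public cloud", all_cloud),
                    ("Hybrid", hybrid)]:
    print(f"{label}: ${cost:,.0f} per year")
```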


A hybrid multi-cloud environment empowers customers to experiment with and choose the tools, programming languages, algorithms, and infrastructure to build data pipelines, train and make analytics/AI models ready for production in a governed way for the enterprise, and share insights throughout the workflow.

[1] Nagendra Bommadevara, Andrea Del Miglio, and Steve Jansen, “Cloud adoption to accelerate IT modernization”, McKinsey & Company, 2018


Total Value of Ownership (TVO) of IBM Cloud Pak for Data

The speed and scope of the business decision-making process is accelerating because of several emerging technology trends – Cloud, Social, Mobile, the Internet of Things (IoT), Analytics and Artificial Intelligence/Machine Learning (AI/ML). To obtain faster actionable insights from this growing volume and variety of data, many organizations are deploying Analytics solutions across the entire workflow.

For strategic reasons, IT leaders are focused on moving existing workloads to the cloud or building new workloads on the cloud and integrating those with existing workloads. Quite often, the need for data security and privacy makes some organizations hesitant about migrating to the public cloud. The business model for cloud services is evolving to enable more businesses to deploy a hybrid cloud, particularly in the areas of big data and analytics solutions.

IBM Cloud Pak for Data is an integrated data science, data engineering, and app-building platform built for hybrid cloud: it provides all the benefits of cloud computing inside the client’s firewall and a migration path should the client want to leverage public clouds. IBM Cloud Pak for Data clients can get significant value because of unique capabilities to connect their data (no matter where it is), govern it, find it, and use it for analysis. IBM Cloud Pak for Data also enables users to collaborate from a single, unified interface, and their IT staff doesn’t need to deploy and connect multiple applications manually.

These IBM Cloud Pak for Data differentiators enable quicker deployments, faster time to value, lower risks of failure and higher revenues/profits. They also enhance the productivity of data scientists, data engineers, application developers and analysts; allowing clients to optimize their Total Value of Ownership (TVO), which is Total Benefits – Total Costs.

The comprehensive TVO analysis presented in a recent Cabot Partners paper compares the IBM Cloud Pak for Data solution with a corresponding In-house solution alternative for three configurations – small, medium and large. This cost-benefit analysis framework considers cost/benefit drivers in a 2 by 2 continuum: Direct vs. Derived and Technology vs. Business mapped into four quantified quadrants: Costs, Productivity, Revenues/Profits and Risks.

Compared to using an In-house solution, IBM Cloud Pak for Data can improve the three-year ROI for all three configurations. Likewise, the Payback Period (PP) for the IBM Cloud Pak for Data solution is shorter than for the In-house solution, providing clients faster time to value. In fact, these ROI/PP improvements grow with configuration size, offering clients better investment protection as they progress in their Analytics and AI/ML journey and as data volumes and Analytics model complexities continue to grow.
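For readers who want the arithmetic behind those two metrics, the sketch below shows how a three-year ROI and a Payback Period are typically computed from benefit and cost streams. The figures are hypothetical placeholders, not results from the Cabot Partners paper.

```python
# Illustrative ROI and Payback Period calculation (hypothetical figures,
# not results from the Cabot Partners TVO paper).

upfront_cost = 1.0e6          # one-time acquisition and deployment
annual_run_cost = 0.4e6       # software, support, operations per year
annual_benefit = 1.2e6        # productivity, revenue, and risk-mitigation gains per year
years = 3

total_costs = upfront_cost + annual_run_cost * years
total_benefits = annual_benefit * years
roi = (total_benefits - total_costs) / total_costs
print(f"Three-year ROI: {roi:.0%}")

# Payback Period: months until cumulative net benefits recover the upfront cost,
# assuming benefits and run costs accrue evenly across each year.
monthly_net = (annual_benefit - annual_run_cost) / 12
payback_months = upfront_cost / monthly_net
print(f"Payback Period: ~{payback_months:.0f} months")
```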

You can access the full report here.

IBM – Building Momentum to Win the Hybrid Cloud Platform War


By Ravi Shankar and Srini Chari, PhD., MBA – Cabot Partners, May 8, 2020.

As the evolving impacts of COVID-19 ripple globally through our communities, the new IBM CEO Arvind Krishna kicked off the virtual IBM Think conference on May 5, 2020 with an apt assertion: “There's no question this pandemic is a powerful force of disruption and an unprecedented tragedy, but it is also a critical turning point.” Krishna added, “History will look back on this as the moment when the digital transformation of business and society suddenly accelerated, and together, we laid the groundwork for the post-COVID world.”

Originally set to be held in San Francisco, the IBM 2020 Think Digital Experience quickly became one of the many tech events held online. The attendance was very large. About 100,000 non-IBM participants registered and over 170,000 unique visitors attended sessions and consumed content. On average, IBM clients and Business Partners joined 6.5 sessions and watched most of those sessions. This was 3 times the number of clients and 2 times the number of Business Partners compared to last year.

Think Digital featured many key announcements and offered virtual attendees an exciting array of speaker sessions, real-time Q&As and technical training highlighting how hybrid cloud and artificial intelligence (AI) are galvanizing digital transformation, and how IBM is building an agile and scalable platform for developers, partners and clients to overcome data and applications migration challenges from the edge to the cloud.

Lift and Shift to the Public Cloud is Inadequate for Most Enterprise Workloads[1]

The simplest enterprise workloads – about 20% of all enterprise workloads – have already been moved to the public cloud and have benefited from greater agility and scalability. However, the remaining 80% of workloads remain on-premises.[1] Contrary to what many public cloud providers proclaim, traditional Lift and Shift (Figure 1) cloud transformation is not always economical and easy, especially for the analytics and AI journey.

Figure 1: Traditional Lift and Shift of On-Premises Data and Applications to the Public Cloud is Inadequate

If public cloud migration were easy for many enterprise applications and data, most businesses would have already migrated most of their workloads to the public cloud and realized the associated benefits (Table 1). On-premises solutions still provide many benefits especially for analytics and AI (Table 1).

Public Cloud

  • High scalability and flexibility for unpredictable workload demands with varying peaks/valleys
  • Rapid software development, test and proof of concept pilot environments
  • No capital investments required to deploy and maintain infrastructure
  • Faster provisioning time and reduced requirements on IT expertise as this is managed by the cloud vendor

On-Premises

  • Bring compute to where data resides since it is hard to move existing data lakes into the cloud because of large data volumes
  • Supports analytics at the edge and other distributed environments to make immediate decisions
  • Provides dedicated and secure environments for compliance with stringent regulations and/or unique workload requirements
  • High/custom SLA performance and efficiency
  • Retains the value of investments in existing solutions

Table 1: Benefits of Public Cloud and On-Premises Infrastructures for Analytics/AI

To improve the agility and scalability of the remaining 80% of enterprise applications and data, it is possible, with hybrid clouds, to combine the benefits of a public cloud with those of an on-premises infrastructure.

The Hybrid Multi-Cloud Platform is the New Battleground

Over the last decade, the term “cloud wars” has been used to describe the competition between public cloud providers: AWS, Microsoft Azure, Google Cloud, IBM Cloud, and a few others. But with 80% of workloads still on-premises, this is more a rift or a squabble than a war. The real “cloud war” is only beginning for the rapidly growing hybrid cloud platform – particularly for analytics and AI.

The worldwide market for hybrid cloud data services is expected to grow at a healthy CAGR of 20.53% from 2016 to 2021[2] as enterprises prioritize a balance of public and private infrastructure. Only 31% of enterprises see public cloud as their top priority, while a combined 45% of enterprises see hybrid cloud as the future state.[2]

Today, large organizations leverage almost five clouds on average. The percentage of enterprises with a strategy to use multiple clouds is 84%[3], and 56% of organizations plan to increase the use of containers.[4]

Hybrid cloud platforms that support a multi-cloud architecture will be the winning platform in the future, especially as more data is ingested at the edge with the transition to 5G and stored on-premises or in the cloud.

Lift, Sift and Shift Data and Applications for Swift Connect from the Edge to Multi-cloud

In order to win the impending hybrid cloud war, the following five elements must be in place:

  • Lift: Ability to move/process data and applications all the way from the edge to the enterprise or to a multi-cloud environment and migrate workloads efficiently
  • Sift: Automate the current, tedious semi-manual, error-prone processes used to cleanse and prepare data, remove bias, prioritize it for analysis, and provide clear traceability
  • Shift: Ability to move compute to where the data resides to minimize data movement costs and improve performance
  • Swift: Multi-directionally execute all the above operations at scale with agility, flexibility and high-performance so that data can move between public, private and hybrid clouds, on-premises and edge installations, and can be updated as needed
  • Connect: Seamlessly connect the edge, on-premises, and multi-cloud implementations into a cohesive and agile environment with a single dashboard for centralized observation across all platform entities.

Figure 2 depicts this hybrid cloud platform which empowers customers to experiment with and choose the programming languages, tools, algorithms and infrastructure to build data pipelines, train and productionize analytics/AI models in a governed way for the enterprise and share insights throughout the organization from the edge to the cloud.

Figure 2: An Agile Hybrid Cloud Platform for Analytics/AI that Scales from the Edge to Multi-cloud

To win the hybrid cloud war, the platform must scale, be compliant, resilient and agile, and support open standards for interoperability. This gives clients the flexibility to adapt quickly to changing business needs and to choose the best components from multiple providers in the ecosystem.  

The IBM Hybrid Multi-cloud Vision and Key Think 2020 Announcements  

With the acquisition of Red Hat, IBM has laid the foundation to win the hybrid cloud war by enabling clients to avoid the pitfalls of single-vendor reliance. Clients can scale workloads across multiple systems and cloud vendors with increased agility through containers and unify the entire infrastructure from the edge to the data center to the cloud (Figure 3).

Red Hat provides open source technologies to bring a consistent foundation from the edge to on-premises or to any cloud deployment: public, private, hybrid, or multi-cloud:

  • Red Hat OpenShift is a complete container application platform built on Kubernetes – an open source platform that automates Linux container operations and management
  • Red Hat Enterprise Linux and Red Hat OpenShift bring more security to every container and better consistency across environments.
  • Red Hat Cloud Suite combines a container-based development platform, private infrastructure, public cloud interoperability, and a common management framework into a single, easily deployed solution for clients who need a cloud and a container platform.

Figure 3:  IBM Hybrid Multi-cloud Vision and Key New Think 2020 Announcements

For decades, IBM’s core competency has been as a trusted technology provider for enterprise customers running mission critical applications. IBM delivers enterprise systems, software, network and services. Key hybrid cloud offerings include:

  • IBM Cloud Paks (Figure 3) are enterprise-ready, containerized services that give clients an open, faster, and more secure way to move core business applications to any cloud. Each of the six IBM Cloud Paks includes containerized IBM middleware and common cloud services for development and management, on top of a common integration layer, and runs wherever Red Hat OpenShift runs.
  • IBM Cloud is built on open standards, with a choice of many cloud models: public, dedicated, private and managed, so clients can run the right workload on the right cloud model without vendor lock-in.
  • IBM Systems deliver reliable, flexible and secure compute, storage and operating systems solutions.
  • IBM Services help organizations by bringing deep industry expertise to accelerate their cloud journeys and modernize their environments.

Reinforcing the strength of its existing portfolio of products and offerings, IBM launched several AI and hybrid cloud offerings backed by a broad ecosystem of partners to help enterprises and telecommunications companies speed their transition to edge computing in the 5G era (Figure 3):

  • IBM Cloud Satellite gives the customer the ability to use IBM Cloud services anywhere — on IBM Cloud, on premises or at the edge — delivered as-a-service from a single pane of glass controlled through the public cloud. IBM Cloud Satellite specifically extends the IBM Public Cloud with a generalized IaaS and PaaS environment, including support for cloud native apps and DevOps, while providing access to IBM Public Cloud Services in the location that works best for individual solutions.
  • IBM Watson AIOps uses AI to automate how enterprises self-detect, diagnose and respond to IT anomalies in real time to better predict and shape future outcomes, focus resources on higher-value work and build more responsive and intelligent networks that can stay up and running longer.
  • IBM Edge Application Manager is an autonomous management solution designed to enable AI, analytics and IoT enterprise workloads to be deployed and remotely managed, delivering real-time analysis and insight at scale. The solution enables the management of up to 10,000 edge nodes simultaneously by a single administrator.
  • IBM Telco Network Cloud Manager runs on Red Hat OpenShift, to deliver intelligent automation capabilities to orchestrate virtual and container network functions in minutes. Service providers will have the ability to manage workloads on both Red Hat OpenShift and on the Red Hat OpenStack Platform, which will be critical as telcos increasingly look for ways to modernize their networks for greater agility and efficiency, and to provide new services today and as 5G adoption expands.

With the Hybrid Cloud Strategy in Place, the Focus is on Execution

The new product announcements and initiatives launched during IBM’s 2020 Think Digital event reinforce IBM’s intent – even during this pandemic – to march full steam ahead to execute on its hybrid multi-cloud vision: Any application can run anywhere on any platform, at scale, wherever data resides, with resilience, agility and interoperability across all clouds on an open, secure and governed enterprise-grade environment.

IBM has also issued the Call for Code challenge to address the current pandemic.   This global challenge encourages innovators to create practical, effective, and high-quality applications based on one or more IBM Cloud services (for example, web, mobile, data, analytics, AI, IoT, or weather) that can have an immediate and lasting impact on humanitarian issues. Teams of developers, data scientists, designers, business analysts, subject matter experts and more are challenged to build solutions to mitigate the impact of COVID-19 and climate change.

With this compelling hybrid strategy and continuing focus of collaboratively solving challenging problems, we believe IBM is well-positioned to execute to win the hybrid cloud war because:

  1. A strong technical, business and sales savvy leadership is in place with Arvind Krishna and Jim Whitehurst (President of IBM).
  2. This pragmatic strategy builds on IBM’s traditional strengths of serving enterprise customers on their cloud journey by providing the much-needed technologies and momentum for modernization.  
  3. In addition to IBM Research, the autonomy that Red Hat enjoys will ensure that its entrepreneurial growth culture continues to catalyze IBM innovation.
  4. With the focus on an open platform, clients and the partner ecosystem will have the assurance to co-create high-value offerings and services to meet future challenges.

Last and perhaps most important, the technology industry is constantly being disrupted, with new billion-dollar businesses emerging rapidly. This century’s first decade witnessed the rise of social media, mobile, and cloud computing. As keen observers of IBM both from the inside and outside, we believe this is probably the first time in recent decades that IBM is endowed with a technology- and business-savvy leadership team that has a track record of rapidly growing large billion-dollar businesses. As the cloud wars rage in the next decade, there will undoubtedly be many disruptions. Perhaps now more than at any time in the recent past, IBM will not only spot these opportunities but also boldly act to galvanize its incredible human resources and its vast ecosystem to build these next-generation hybrid cloud and AI businesses.

[1] Nagendra Bommadevara, Andrea Del Miglio, and Steve Jansen, “Cloud adoption to accelerate IT modernization”, McKinsey & Company, 2018

[2] https://www.ibm.com/downloads/cas/V93QE3QG

[3] RightScale® STATE OF THE CLOUD REPORT 2019 from Flexera™

[4] https://www.redhat.com/cms/managed-files/rh-enterprise-open-source-report-detail-f21756-202002-en.pdf


Cabot Partners is a collaborative consultancy and an independent IT analyst firm. We specialize in advising technology companies and their clients on how to build and grow a customer base, how to achieve desired revenue and profitability results, and how to make effective use of emerging technologies including HPC, Cloud Computing, Analytics and Artificial Intelligence/Machine Learning. To find out more, please go to www.cabotpartners.com.


Copyright © 2020. Cabot Partners Group, Inc. All rights reserved. Other companies’ product names, trademarks, or service marks are used herein for identification only and belong to their respective owners. All images and supporting data were obtained from IBM or from public sources. The information and product recommendations made by the Cabot Partners Group are based upon public information and sources and may also include personal opinions of both Cabot Partners Group and others, all of which we believe to be accurate and reliable. However, as market conditions change and are not within our control, the information and recommendations are made without warranty of any kind. The Cabot Partners Group, Inc. assumes no responsibility or liability for any damages whatsoever (including incidental, consequential or otherwise) caused by your or your client’s use of, or reliance upon, the information and recommendations presented herein, nor for any inadvertent errors which may appear in this blog. This blog was developed with IBM funding. Although the blog may utilize publicly available material from various vendors, including IBM, it does not necessarily reflect the positions of such vendors on the issues addressed in this document.




Highlights from the Strata AI Conference in New York City

Last month (September 25 and 26), I attended the Strata AI Conference in New York City. The Strata AI Conference continues to provide an informative and comprehensive overview of artificial intelligence and its accelerating transition from research to industrialization. Sessions covered a broad spectrum of AI topics, including cutting-edge research, open source tools, regulatory considerations, use cases and best practices for implementation.

Over 5,000 people attended the conference, with over 125 exhibitors and about 170 breakout sessions covering all aspects of AI, establishing Strata as one of the premier gatherings in the area of Cloud/AI/ML/DL.

Considering the number of keynote speeches and breakout sessions, in the interest of space we will highlight only a few of the topics.

The road to an enterprise cloud

Mick Hollison (CMO, Cloudera) discussed the essential elements of an enterprise cloud and how Cloudera and its strategic partner IBM are working together to assist customers in building a true enterprise cloud. He stated, “No one that we work with does more to force the issue around hybrid and multicloud, and to really drive that message home, than our most strategic partner in the world, IBM.” He noted that the IBM + Cloudera strategic partnership reinforces a combined commitment to open source and cloud for Analytics/AI initiatives. It offers clients an industry-leading, enterprise-grade Hadoop distribution plus an ecosystem of integrated products and services – all designed to help organizations industrialize Analytics/AI.

Readers may be interested in the latest Cabot Partners publication that describes the strengths of the IBM and Cloudera alliance – Greater Choice and Value for Advanced Analytics and AI – https://www.cabotpartners.com/wp-content/uploads/2018/07/IBM-Cloudera-Alliance-September-2019.pdf.

AI Ladder

Here are highlights of an interesting discussion between Rob Thomas, General Manager of IBM Data and Watson AI, and Tim O’Reilly, Chairman of O’Reilly Media.

AI Ladder: Breaking an AI strategy down into pieces, or rungs of a ladder, serves as a guiding principle for organizations to transform their business. It provides four key areas to consider: how they collect data, organize data, analyze data, and ultimately infuse AI into the organization. By using the ladder to AI as a guiding framework, enterprises can build the foundation for a governed, efficient, agile, and future-proof approach to AI.

AI challenges: The challenges companies face can be categorized as follows:

  • Lack of understanding – because of the increasing popularity of AI, organizations assume it will fix any problem, which is not true.
  • Getting a handle on their data – good data is essential for a successful AI implementation. Organizations suffer from a combination of lack of data, too much data, and bad data.
  • Lack of relevant skills – AI skills are rare and therefore in high demand, and there is a shortage of skilled workers.
  • Trust – as more applications make use of AI, businesses need visibility into the recommendations made by their AI applications. Traceability and explainability are very important.
  • Culture and business model change – these are required to take advantage of the opportunity the new technology provides.

Concluding thoughts: the fear factor is real – managers who use AI will replace those who haven’t gotten past the hype phase of AI. Now is the time for AI.

Top blunders in Big Data

Michael Stonebraker, computer scientist and Turing Award winner, had an interesting take on the top ten blunders of Big Data (in the interest of space, we combined a few of them). One need not agree with his list; however, it stimulates thinking and good discussion.

  1. Not planning for AI/ML
  2. Not solving your real data science problem – the typical data scientist spends 90% of the time on data discovery and data cleaning
  3. Belief that traditional data integration will be solved by data science
  4. Belief that data warehouses, data lakes and Hadoop/Spark will solve all your data science problems
  5. Succumbing to the Innovator’s dilemma – you have to reinvent yourself
  6. Not paying for a few rocket scientists
  7. Outsourcing to an external service provider
  8. Not moving everything to the cloud


AI is one of the greatest challenges and opportunities of our time. It will transform entire industries and the way enterprises operate. Innovation continues to accelerate at a phenomenal pace. Events like the Strata AI Conference can provide analytics leaders with valuable insights and an understanding of the future.