Intel OpenVINO Focuses on Sharpening Computer-Vision for Cloud and Edge Applications

Published by: M.R. Pamidi

By Jean S. Bozman and M.R. Pamidi

Expanding on our analysis of the Intel® Distribution of OpenVINO™ Toolkit, this post continues our discussion of this development framework, its support for widely used large language models (LLMs), and its “fit” with the Intel Geti platform for developing AI applications that include computer vision.

Intel is best known for its microprocessors, fabs, and high-speed interconnects. But in the wake of the ChatGPT generative AI (genAI) announcements of November 2022 – just one year ago – Intel has been describing, documenting, and demonstrating its AI application-development toolkit, which it calls OpenVINO. This research note takes another look at OpenVINO – its use cases, and its planned trajectory in the AI marketplace.

At the Intel Innovation 2023 conference, held in San Jose, CA, in September, Intel demonstrated the development of AI applications based on its OpenVINO toolkit. As demonstrated at that event, OpenVINO was paired with the Intel Geti platform for customized computer-vision model development.

By using the two products together, customers worked to optimize LLMs for chatbot automation and factory-automation sequences. Using the Intel Geti platform and OpenVINO together can speed up model development, enabling fine-tuning of optimized workflows on Intel devices across Cloud and Edge deployments.

We believe that we will see this approach again in 2024, as Intel doubles down on its AI-enabled product offerings. A video showing how OpenVINO and Intel Geti can be used together for agriculture applications in the coffee industry can be viewed here.

At the conference, Intel showed prototypes of AI-controlled safety systems for pedestrians and cars – applications developed with the OpenVINO framework and running on a variety of Edge devices.

This use case – controlling dynamic workloads through computer vision – drove home the point that multi-vendor automation solutions are growing in Cloud and Edge applications, which Intel sees as a rapidly growing opportunity in this decade.

The best examples of this category of use cases are in manufacturing, telecommunications (telecom), and device-driven workloads across many industry segments (e.g., healthcare, banking/finance, and retail) worldwide.

Focusing on Developers

The OpenVINO application-development framework, as described by Intel, is designed to allow developers to “run inference on a range of computing devices – including many at the Edge. This would include machine learning (ML) for device automation, computer vision, and factories running multi-step processes.”
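To make that developer workflow concrete, here is a minimal sketch of loading an already-converted model and running one inference with the OpenVINO Runtime Python API. This is not Intel's sample code; the model file name and input shape are placeholders chosen for illustration.

```python
# Minimal OpenVINO inference sketch (illustrative only; file names are placeholders).
import numpy as np
from openvino.runtime import Core

core = Core()

# Load a model that has already been converted to OpenVINO's IR format (.xml + .bin).
model = core.read_model("person_detection.xml")  # hypothetical model file

# Compile the model for a target device; "CPU" here, but an Edge deployment
# might target "GPU" or another supported accelerator instead.
compiled_model = core.compile_model(model, device_name="CPU")

# Run a single synchronous inference on dummy data shaped like a typical image input.
input_tensor = np.random.rand(1, 3, 224, 224).astype(np.float32)
results = compiled_model([input_tensor])

# Results are keyed by the model's output nodes.
print(results[compiled_model.output(0)].shape)
```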

Intel’s strategy here is to build on the genAI footprints that customers are already using – and to create new AI solutions based on Intel hardware and the Intel ecosystem’s software technologies. In doing so, Intel is asking customers to consider migrating apps to a new deployment environment, resulting in an emerging business model in which competing app/dev frameworks are also being used – often in the same Cloud or Edge environments (e.g., TensorFlow in Google Cloud environments).

Through its support for multiple hardware types, OpenVINO allows customers to “convert” applications developed with widely used frameworks like Caffe and TensorFlow, so that they can run on the Intel inference engine across a variety of CPUs, GPUs, and FPGAs (field-programmable gate arrays). Intel also provides AI model-training classes and online demonstrations showing OpenVINO being used in place of other well-known frameworks.
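As a rough illustration of that conversion step, the sketch below uses the Python conversion API exposed by recent OpenVINO releases. The ONNX file name is a placeholder, and conversion entry points vary by toolkit version, so treat this as an assumption-laden example rather than the canonical workflow.

```python
# Hedged sketch of converting a model into OpenVINO's Intermediate Representation (IR).
import openvino as ov

# Convert a model exported from another framework (ONNX here) into an in-memory ov.Model.
ov_model = ov.convert_model("classifier.onnx")  # hypothetical source model

# Save the converted model as an IR pair (classifier.xml + classifier.bin),
# ready to be loaded later with Core().read_model().
ov.save_model(ov_model, "classifier.xml")
```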

This multi-vendor scenario maps to current patterns of customer use of a variety of hardware devices in Edge locations. For Intel, this is a pragmatic strategy that is aimed at addressing real-world scenarios that already leverage mixed-vendor deployments for Cloud and Edge applications. It also reflects a go-to-market strategy, in which Intel believes it can build on Intel Geti and OpenVINO, when used together, to grow the adoption of Intel-based AI solutions for new and emerging Cloud and Edge use cases.

This approach will become even more significant when Intel ships its AI PC, as it has hinted it will do next year (2024) – a future announcement that will highlight Intel’s AI strategy, as articulated at the Intel Innovation Conference.

How It Works

Intel says OpenVINO is optimized for convolutional neural networks (CNNs), and that the toolkit can share application workloads across multiple devices – including both Intel and non-Intel devices. The toolkit supports faster performance and memory-use optimization, and it is designed to address rapidly growing genAI use cases.
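A hedged sketch of how that workload sharing can look in practice: OpenVINO's "AUTO" and "MULTI" device plugins let the runtime select, or spread requests across, the hardware it finds. The model file and device names below are assumptions for illustration; available devices differ per machine.

```python
# Sketch of device selection and workload sharing (illustrative; file names are placeholders).
from openvino.runtime import Core

core = Core()
model = core.read_model("vision_model.xml")  # hypothetical IR model

# Automatic device selection, with a throughput-oriented performance hint.
compiled_auto = core.compile_model(
    model,
    device_name="AUTO",
    config={"PERFORMANCE_HINT": "THROUGHPUT"},
)

# Alternatively, explicitly spread inference requests across a GPU and the CPU.
compiled_multi = core.compile_model(model, device_name="MULTI:GPU,CPU")
```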

A new version of the toolkit, OpenVINO 2023.1, was released on September 18, 2023, providing expanded support for genAI. The top features of this release include:

  • A model optimizer to convert models from widely used frameworks, including Caffe, TensorFlow, Open Neural Network Exchange (ONNX), PyTorch, and Kaldi.
  • An inference engine that runs on heterogeneous hardware platforms, including CPUs, GPUs, FPGAs, and the Intel Neural Compute Stick 2 (Intel NCS2). GPUs may include NVIDIA GPUs, as well as the Gaudi AI accelerators that Intel gained from its $2 billion acquisition of Israeli chip maker Habana Labs in 2019.
  • A common application programming interface (API) for a variety of Intel-based hardware platforms, including 4th Gen Intel Xeon Scalable processors (code-named Sapphire Rapids) – as shown in the sketch after this list.
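The sketch below illustrates that "one API, many targets" idea: the same compile call is reused for whatever devices the OpenVINO runtime discovers on a given machine. The model file name is a placeholder, and the discovered device list will differ per system.

```python
# Sketch of using one API across heterogeneous devices (illustrative only).
from openvino.runtime import Core

core = Core()
model = core.read_model("detector.xml")  # hypothetical IR model

# Enumerate whatever devices OpenVINO finds on this machine (e.g., ['CPU', 'GPU']).
for device in core.available_devices:
    full_name = core.get_property(device, "FULL_DEVICE_NAME")
    compiled = core.compile_model(model, device_name=device)
    print(f"Compiled for {device}: {full_name}")
```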

Summary

GenAI is breaking down barriers between IT and business units, with the benefit that genAI and easy-to-use chatbots help make AI understandable and visible to the business managers who approve budgets for AI systems. This aspect of genAI – bringing business and IT into dialog – isn’t highlighted often enough in the industry.

GenAI’s widespread popularity in business units will likely give rise to broader adoption of genAI for close-to-the-customer applications running on CPUs, GPUs, and FPGAs. By offering specific app/dev solutions for AI, Intel is aiming to grow its share in the fast-growing AI segment.

Intel sees OpenVINO as an opportunity to encourage the use of genAI for computer vision. However, the overall opportunity is much broader: OpenVINO can be used for other AI workloads, including natural language processing (NLP) and customer inference use cases across multiple industry segments.

Hardware support for next-generation Intel microprocessors, including those powering the Intel AI PC, is set to ship into general availability next year (2024).

Finally, Intel’s focus on Cloud and Edge use cases makes sense for the OpenVINO framework, because it anticipates expanding market opportunities for genAI app/dev toolkits in Cloud and Edge deployments. Likewise, the Intel Geti platform for computer-vision development fits well with OpenVINO’s rapid-development capabilities. In the highly competitive AI marketplace, we expect Intel to update its 2023 release in 2024, adding new features for an expanding list of industry-specific use cases.
