PARIS — Intel Corp. built its formidable reputation as a hardware company, impressing the market with the speed and power of its processor architectures and delivering ever-finer process geometries that let it ride Moore’s Law.
This is a tried-and-true checklist that the industry typically uses to size up a CPU company.
But if the world is indeed moving to apply more and more artificial intelligence (AI)-based algorithms to data processing, the yardstick for the success of processors — and of those who develop them — will inevitably change.
At least one expert, Google’s platform architect Sheng Li, who was previously a researcher at Intel Labs, is now saying that the abstraction layer that used to separate software from hardware architecture has begun to collapse in the world of AI.
If so, hardware performance won’t be the only consideration in judging a company’s AI strategy. The more important questions become: Is the company offering AI-aware hardware, and does it provide software that is cognizant of many different types of hardware?
AI is “bringing [to the industry] a new paradigm,” Kevin Krewell, principal analyst at Tirias Research, told us. It is “changing a whole computer system.” CPUs will need “a learning process,” he said, or “a machine-learning roadmap.”
In a recent phone interview with EE Times, Remi El-Ouazzane, chief operating officer of Intel’s AI Products Group, spent little time pitching Intel’s specific hardware architecture — such as Myriad X.
Myriad X, unveiled a year ago by Intel’s Movidius group, is a vision processing unit with a dedicated neural compute engine for accelerating deep-learning inferences at the edge. Its ability to deliver more than 4 TOPS is impressive.
But during our discussion, El-Ouazzane passed quickly over Movidius’ latest VPU. Instead, he stayed “on message” with Intel’s AI work, including nGraph, a framework-independent deep neural network (DNN) model compiler, and a new toolkit called “OpenVINO” (Open Visual Inference & Neural Network Optimization) designed for application developers.
Intel has good reason to emphasize the significance of its software offerings.
In the last several years, gunning for the AI market, Intel has acquired four companies: Nervana, Movidius, Mobileye, and Altera. Intel now has a broad AI hardware portfolio ranging from CPUs and GPUs to VPUs (Movidius) and FPGAs (Altera).
Intel pitches this AI portfolio diversity as its strength. El-Ouazzane noted during the interview, “At Intel, we’ve concluded that there are no one-size-fits-all solutions for AI.”
While that may be true, this diversity won’t be turned into gold unless Intel develops a software strategy that unifies all of its hardware offerings and helps customers choose and implement what they need.
In El-Ouazzane’s mind, that’s where nGraph and OpenVINO come in.
For instance, nGraph is a “framework-neutral” DNN model compiler. Using nGraph, data scientists can bring their favorite deep-learning framework with them and compile and run their models on whichever deep-learning compute device is best optimized for the job. In other words, Intel designed nGraph to offer “framework abstraction,” said El-Ouazzane.
Intel’s nGraph (Source: Intel)
Presumably, such a compiler lets data scientists create deep-learning models without having to think about how that model must be adjusted across different frameworks.
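To make the idea concrete, here is a toy sketch of what a framework-neutral model compiler does: it takes a framework-agnostic intermediate representation of a model and lowers it to an executable tuned for a target device. All names below are invented for illustration; this is not nGraph’s actual API.

```python
# A tiny "intermediate representation": each node names an op,
# an output name, and its inputs. A real front end would produce
# this IR from a TensorFlow, MXNet, or PyTorch model.
GRAPH = [
    ("input", "x", None),
    ("mul",   "y", ("x", 2.0)),   # y = x * 2
    ("add",   "z", ("y", 1.0)),   # z = y + 1
]

def compile_graph(graph, target="CPU"):
    """Lower the framework-neutral graph to a plain Python callable.
    A real compiler would emit device-specific kernels for `target`."""
    def run(x):
        env = {}
        for op, name, args in graph:
            if op == "input":
                env[name] = x
            elif op == "mul":
                src, const = args
                env[name] = env[src] * const
            elif op == "add":
                src, const = args
                env[name] = env[src] + const
        return env[graph[-1][1]]   # value of the final node
    return run

model = compile_graph(GRAPH)
print(model(3.0))  # 7.0: (3 * 2) + 1
```

The point of the abstraction is that the IR, not the framework, is what the compiler understands — so the same model definition can be retargeted to a CPU, GPU, or accelerator without the data scientist rewriting it.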
With OpenVINO, Intel goes a step further. El-Ouazzane describes it as a toolkit to address “application domains.” The goal of OpenVINO is to help clients develop computer vision apps much faster. Based on convolutional neural networks (CNNs), the toolkit extends workloads across Intel hardware and maximizes performance.
Intel’s OpenVINO (Source: Intel)
Those customers may be developing drones, video surveillance systems, or robotics. By leveraging OpenVINO, they can promptly develop CNN-based deep-learning inference on the edge.
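The dispatch idea behind such a toolkit can be sketched in a few lines: the application makes one inference call, and the runtime routes it to whichever Intel device is present, falling back to the CPU otherwise. The names below are invented for this sketch and are not OpenVINO’s real API.

```python
# Conceptual sketch only: one inference entry point, multiple back ends.
# Each back end stands in for a device-specific CNN inference kernel.
BACKENDS = {
    "CPU":  lambda frame: f"cpu-inference({frame})",
    "VPU":  lambda frame: f"vpu-inference({frame})",
    "FPGA": lambda frame: f"fpga-inference({frame})",
}

def infer(frame, preferred=("VPU", "FPGA", "CPU"), available=("CPU",)):
    """Run inference on the first preferred device that is present,
    falling back through the preference list."""
    for device in preferred:
        if device in available:
            return BACKENDS[device](frame)
    raise RuntimeError("no usable device")

# On a box with only a CPU, the same call still works:
print(infer("frame-0"))                               # cpu-inference(frame-0)
# On an edge device with a Myriad-class VPU, it targets the VPU:
print(infer("frame-0", available=("CPU", "VPU")))     # vpu-inference(frame-0)
```

This is why “extends workloads across Intel hardware” matters to a drone or surveillance-camera maker: the application code stays the same whether the product ships with a CPU alone or adds a VPU or FPGA later.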