What’s the difference between edge and endpoint? It depends on your perspective.
When new industry buzzwords come up, the challenge for those of us who write about the topic is figuring out what exactly a company means, especially when it uses the term to fit its own marketing objectives. The latest buzzword is actually a phrase: edge AI. Thanks to the proliferation of the internet of things and the ability to add enough compute power to make IoT devices intelligent, the “edge” can be quite wide, spanning anything from a gateway to an endpoint.
We set out in search of industry consensus on the definitions of edge and endpoint, who might want to add artificial intelligence at the edge, and how much “smartness” could be added. We discovered that the answers depend on your perspective. But, for starters, essentially anything not in the cloud can be defined as the edge.
Many ‘edges,’ but just one endpoint
Probably the clearest definitions of edge and endpoint came from Wolfgang Furtner, Infineon Technologies’ senior principal for concept and system engineering. “The term ‘edge AI’ inherits its vagueness from the term ‘edge’ itself,” he said. “Some people call a car an edge device, and others are using the term for a small energy-harvesting sensor with low-power wireless connectivity. Edge is used in relative ways and distinguishes the more local from the more central.
“But indeed, there is a need to distinguish between the various kinds of things that you find at the edge. Sometimes, you hear terms like ‘edge of the edge’ or ‘leaf nodes.’ Edge AI can be many things, including a compute server in a car.” The key, he said, is that “endpoint AI resides at the location where the virtual world of the network hits the real world, where sensors and actuators are close.”
It’s all about semantics and where you draw the boundary, according to Markus Levy, director of machine-learning technologies at NXP Semiconductors. “Edge machine learning [ML] is the same as ‘endpoint’ machine learning, except edge ML can also include ML that takes place in a gateway or even a fog compute environment,” said Levy. “Endpoint ML is typically related to distributed systems — for example, where our customers are adding intelligence even down to the sensor level. Another example is a home automation system, where there are ‘satellite’ devices, such as a thermostat, doorbell camera, security cameras, or other types of connected devices. While these can independently perform machine-learning functions, they might also feed into a gateway where more advanced ML processing occurs.”
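The satellite-and-gateway pattern Levy describes can be sketched in a few lines. This is a hypothetical illustration, not NXP code: the endpoint runs a lightweight model locally and escalates only low-confidence readings to a heavier model on the gateway. The function names, threshold, and toy “models” are all assumptions made for the example.

```python
# Hypothetical sketch of the endpoint-to-gateway ML pattern: a "satellite"
# device decides locally when it is confident, and defers to the gateway
# otherwise. The models here are stand-in threshold functions.

def endpoint_infer(reading, threshold=5.0):
    """Tiny stand-in for an on-device model: returns (label, confidence)."""
    confidence = min(abs(reading) / 10.0, 1.0)  # toy confidence score
    label = "motion" if reading > threshold else "idle"
    return label, confidence

def gateway_infer(reading):
    """Stand-in for the more advanced model running on the gateway."""
    return "motion" if reading > 4.5 else "idle"

def classify(reading, min_confidence=0.8):
    """Decide locally when confident; otherwise escalate to the gateway."""
    label, confidence = endpoint_infer(reading)
    if confidence >= min_confidence:
        return label, "endpoint"              # decided on the satellite device
    return gateway_infer(reading), "gateway"  # escalated for heavier processing
```

The design point is the one Levy makes: the same device can both act independently and feed a gateway, so where the “intelligence” lives depends on the reading, not on a fixed boundary.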
Chris Bergey, general manager and vice president of the infrastructure business at Arm, had a somewhat different perspective, citing the increasing levels of intelligence in both edge servers and endpoints. “Basic devices such as network bridges and switches have given way to powerful edge servers that add data-center–level hardware into the gateway between endpoint and cloud,” he said. “Those powerful new edge servers making their way into 5G base stations are plenty powerful enough to perform sophisticated AI processing — not only ML inference but training, too.”
How is that different from endpoint AI? Bergey explained by offering an example: “Due to their powerful internal hardware, smartphones have long been a fertile testbed for endpoint AI. As the IoT intersects with AI advancements and the rollout of 5G, more on-device intelligence means that smaller, cost-sensitive devices can be smarter and more capable while benefiting from greater privacy and reliability due to less reliance on the cloud or internet.
“As this evolution of bringing more intelligence to endpoints continues, the boundaries of where exactly the intelligence takes place will also begin to blend from endpoint to edge, stressing the need for a heterogeneous compute infrastructure.”
There are others for whom edge is everything that’s not in the cloud. For example, Jeff Bier, founder of the Edge AI and Vision Alliance, said that the group defines edge AI “as any AI that is implemented — in whole or in part — outside the data center. The intelligence might be right next to the sensor — for example, in a smart camera — or a bit farther away, such as an equipment closet in a grocery store, or even farther away, such as in a cellular base station. Or [it might be in] some combination or variation of these.”
Xilinx takes a similar position. “Edge AI is basically a self-sufficient intelligence deployed in the field without reliance on a data center,” said Nick Ni, the company’s director of product marketing for AI, software and ecosystem. “It is essential for applications that require real-time response, security — for example, not sending confidential data to the data center — and low power consumption, which is most of the devices out there.
“Just as humans don’t rely on a data center to make countless decisions daily, edge AI will dominate the market in applications like semi-autonomous cars and smart-retail systems in coming years.”
Andrew Grant, senior director for artificial intelligence at Imagination Technologies, affirmed that idea. “It’s all edge as far as we are concerned; it’s the customer who decides where it goes,” he said. “We’ll see very much a hybrid approach, and there’s absolutely a role for the cloud and data centers in this, too.”
Grant added that “the speed with which the market is moving [to the edge] is phenomenal. There’s been a wave of movement to the edge, but for many applications, it takes time for the silicon to materialize. We were talking to a traffic management company in China; they are moving data back and forth from the cloud. When I explained to them what we do, they immediately saw the benefit of not having to take the data to the cloud if the traffic lights themselves can determine whether a car is moving or not.”
Embedded systems provider Adesto Technologies doesn’t necessarily differentiate between edge and endpoint, given that the company provides devices for IoT edge servers as well as IoT edge devices. “While we don’t tend to use the word ‘endpoint’ in our own communications, perhaps definitionally, ‘endpoint’ is aligned with the edge devices,” said Adesto CTO Gideon Intrater.
“AI in these devices would typically be some amount of local inference, with the algorithms running as a program on a processor, using a dedicated accelerator, through near-memory processing or in-memory computing.”
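The simplest of the options Intrater lists — inference “running as a program on a processor” — amounts to a small quantized kernel executed in plain software. As a hedged illustration (the weights, bias, and scale below are invented, and real deployments would use a framework such as TensorFlow Lite Micro rather than hand-rolled code), a single int8-style fully connected layer looks like this:

```python
# Hypothetical sketch of local inference as an ordinary program: one
# quantized fully connected layer, the kind of kernel a small edge
# device might run with no dedicated accelerator. All values are
# illustrative, not taken from any real model.

def quantized_dense(inputs, weights, bias, scale):
    """Integer multiply-accumulates, then rescale to real units + ReLU."""
    outputs = []
    for i, row in enumerate(weights):     # one output neuron per weight row
        acc = bias[i]
        for x, w in zip(inputs, row):
            acc += x * w                  # integer MAC, as in int8 inference
        outputs.append(max(acc * scale, 0.0))  # dequantize and apply ReLU
    return outputs
```

Keeping the arithmetic in integers until the final rescale is what makes this practical on the kind of low-power endpoint hardware the article discusses; the accelerator and in-memory-computing options Intrater mentions speed up exactly this multiply-accumulate loop.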
He added that edge AI “is becoming a reality across just about every application. We see a great opportunity in industrial and building implementations where AI can provide benefits through predictive and preventive maintenance, quality control in manufacturing, and many other areas. The industry is just getting started, and every day that passes, we expect AI to do more for us. When our older devices without AI don’t intuitively understand our needs, we often get frustrated because we have other devices that will provide intuitive capability. The end consumer doesn’t know what goes into making an AI solution work; they just expect it to work.”