The network-centric security and surveillance industry enabled by IP-based cameras has been steadily progressing over the years, and now the advent of the Internet of Things (IoT) promises to turn this segment into a mass surveillance infrastructure. However, the crossover between IoT and surveillance also demands that edge devices like security cameras become smart as well as connected.
In other words, more imaging and video analytics must move to the camera so that information is processed directly inside the smart camera itself. So, in this facet of IoT, where surveillance machines are becoming part of the network of connected devices, it is imperative that edge devices like security cameras acquire some level of intelligence even as some of the data is sent to cloud servers.
Take the use case of object recognition in the context of home security and surveillance. First, an object, for example a person, is recognized. Next, the camera has to determine whether the person is on the list of people approved to access the home or building. Then, the camera must assess the situation; for instance, whether the person has fallen or has entered an area that is off-limits to them.
The camera system can then issue a notification in the form of a message or a call. The cloud alone is hard-pressed to respond to all this data quickly enough, because data transfer isn't always fast; nor is data transfer in the cloud environment real-time, as some might have believed. Sometimes, the network link to the cloud is down altogether.
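The three steps above amount to a simple decision pipeline that can run entirely on the camera. The sketch below is purely illustrative; the function names, whitelist and zone coordinates are assumptions, not any real camera API, and the upstream person recognition is taken as given:

```python
# Hypothetical edge-camera decision pipeline. All names, the whitelist
# and the zone coordinates are illustrative assumptions.

APPROVED_PEOPLE = {"alice", "bob"}        # access whitelist
RESTRICTED_ZONE = (200, 200, 400, 400)    # x1, y1, x2, y2 in pixels

def in_zone(position, zone):
    """Return True if an (x, y) position falls inside a rectangular zone."""
    x, y = position
    x1, y1, x2, y2 = zone
    return x1 <= x <= x2 and y1 <= y <= y2

def handle_frame(person_id, position, has_fallen):
    """Run the three-step check locally; only alerts ever leave the camera."""
    # Step 1: an object (a person) has already been recognized upstream.
    # Step 2: check the person against the approved list.
    if person_id not in APPROVED_PEOPLE:
        return "alert: unknown person"
    # Step 3: assess the situation.
    if has_fallen:
        return "alert: person has fallen"
    if in_zone(position, RESTRICTED_ZONE):
        return f"alert: {person_id} entered restricted area"
    return None  # nothing to report; no cloud traffic generated
```

Returning `None` for the common case is the point: most frames produce no message at all, so the cloud link carries only the rare alert rather than a continuous stream.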
IoT surveillance apps like scene analysis demand local intelligence inside cameras
Birth of the Smart Camera
Now consider another use case: the smart city. Image and video resolutions are going up thanks to the proliferation of cheap cameras, but higher resolution also demands more bandwidth. A smart city can easily deploy 1,000 cameras, and connecting all of them to the cloud consumes an enormous amount of bandwidth, since every camera must transfer, handle and store massive quantities of data.
So moving more video analytics into the camera can reduce cloud server processing. Moreover, local analytics power in end devices like surveillance cameras limits data traffic. It's worth noting that the cloud can't do everything, no matter how good its processing engines and algorithms are. Power is another concern: a device like a smart meter would drain too much power transmitting all of its data to the cloud.
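A back-of-the-envelope calculation makes the bandwidth argument concrete. The per-camera bitrates below are illustrative assumptions for typical compressed video streams, not figures from the article:

```python
# Rough aggregate-bandwidth estimate for a city-wide camera deployment.
# Per-camera bitrates are illustrative assumptions, not measured values.

CAMERAS = 1_000
FULL_STREAM_MBPS = 4.0   # assumed bitrate of one continuous 1080p feed
EVENT_ONLY_MBPS = 0.1    # assumed average if only event clips/metadata go up

def aggregate_gbps(cameras, mbps_each):
    """Total uplink needed when every camera sends this bitrate to the cloud."""
    return cameras * mbps_each / 1000  # Mbps -> Gbps

full = aggregate_gbps(CAMERAS, FULL_STREAM_MBPS)   # continuous streaming
edge = aggregate_gbps(CAMERAS, EVENT_ONLY_MBPS)    # edge-filtered traffic

print(f"raw streaming: {full:.1f} Gbps, edge-filtered: {edge:.1f} Gbps")
```

Under these assumptions, streaming everything needs a sustained multi-gigabit uplink, while edge-side filtering cuts the aggregate load by more than an order of magnitude.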
There could also be regulation of cloud usage at some stage in the future amid privacy issues. Monitoring the elderly at home, for instance, may require nothing more than a phone call after an alarm signal is transmitted. Moreover, people may hesitate to buy cloud-centric safety products because of privacy concerns.
Consider devices like Amazon's Echo, a smart speaker cum personal assistant, and Dyson's 360 Eye, a robot that cleans floors without human help. Amazon Echo is a cloud-connected wireless speaker that employs local audio analytics to carry out voice-control tasks such as voice recognition and speaker identification.
Likewise, Dyson's robotic vacuum cleaner incorporates video analytics in its camera to observe and interpret its surroundings from all angles and avoid collisions with fixed and mobile obstacles such as furniture, walls and pets. These two use cases show how smart devices can send some data to the cloud while embedding a degree of local intelligence to reduce data transfer and overall reliance on the cloud.
Dyson's 360 Eye robot demonstrates that vision analytics can be embedded locally
Chips That Can See
Chipmakers are now vying to put pre-analytics into security and surveillance cameras so that these edge devices are smart as well as connected. Take VATICS, a fabless supplier of SoCs for surveillance and smart network cameras, which has acknowledged the need for local intelligence in edge devices by licensing CEVA's imaging and vision DSP solution.
The Taipei, Taiwan–based chipmaker for multimedia applications plans to leverage the dedicated DSP solution to embed computer vision and scene analytics capabilities while keeping power consumption in check. VATICS is a spin-off from the semiconductor division of VIVOTEK, one of the largest manufacturers of IP cameras for security and surveillance.
Earlier, in November 2014, another chipmaker from Taiwan, Novatek, adopted CEVA's imaging and vision DSP solution for its SoCs targeted at the surveillance, action camera and automotive markets. The fabless chip design firm is using the DSP horsepower to incorporate intelligent algorithms for machine vision, scene analysis, depth mapping and object detection into its imaging SoCs in a power-efficient manner.
The CEVA-XM4—the company's fourth-generation image and vision processor IP for surveillance and other IoT applications—provides distributed and automatic intelligence, so that applications like scene analysis can be embedded into the onsite camera module while it stores only relevant information.
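"Storing only relevant information" can be as simple as keeping frames only when local scene analysis flags a change. The sketch below uses a crude per-pixel difference as a stand-in for real scene analysis; all names and the threshold are illustrative assumptions, and the article does not describe CEVA's actual implementation:

```python
# Sketch of event-triggered storage: frames are kept only when a local
# analysis step flags significant change. Names/threshold are illustrative.

def motion_score(prev_frame, frame):
    """Mean absolute per-pixel difference, a crude scene-change measure."""
    return sum(abs(a - b) for a, b in zip(prev_frame, frame)) / len(frame)

def filter_frames(frames, threshold=10.0):
    """Return only the frames whose motion score exceeds the threshold."""
    kept = []
    prev = frames[0]
    for frame in frames[1:]:
        if motion_score(prev, frame) > threshold:
            kept.append(frame)  # "relevant" frame: store or upload it
        prev = frame
    return kept
```

A static scene thus produces no stored frames at all, which is exactly the property that cuts both onboard storage and uplink traffic.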
Phi Algorithm Solutions has implemented a pedestrian recognition engine using the CEVA-XM4 vision processor
A case in point is Phi Algorithm Solutions, a Toronto, Canada–based firm that has optimized its object detection engine using the CEVA-XM4 DSP solution. The CEVA-XM4 is a powerful yet energy-efficient programmable vision DSP whose instruction set architecture is well tuned for vision algorithms.
Find out more about surveillance and other computer vision applications, and how an intelligent image and vision processor can facilitate tasks like object recognition and scene analysis, in a white paper from CEVA Inc.
Yair Siegel is Director of Product Marketing, Imaging & Vision, at CEVA, where he focuses on expanding CEVA's imaging and vision product line into a wide variety of camera-enabled devices. Yair and his team collaborate with leading computer vision companies to bring new technologies to market.