Can we trust the cloud for video analytics?

January 03, 2017

Video analytics uses deep learning and computer vision algorithms to examine objects, people, situations, and more, and then to draw conclusions from them. Many implementations send footage to the cloud for processing, but is that such a good idea? The cloud is a useful and powerful resource for storing and processing data, but as video cameras become more ubiquitous and more intelligent, the question of whether video analytics should be performed locally or remotely must be addressed. Issues of privacy, safety, security, and cost are strong grounds for edge processing, meaning performing the video analysis onsite (i.e., edge analytics). With CES 2017 starting in just a couple of days at the time of this writing, a variety of cutting-edge prototypes will be showcased. Which trend will take the crown?

Every fraction of a second counts in ADAS
One of the most pervasive uses of video analytics is in the automotive industry, in advanced driver assistance systems (ADAS) for highly automated vehicles (HAVs). The vision systems on HAVs use multiple cameras to identify traffic signals, vehicles, pedestrians, and other indicators, and then respond accordingly. This requires split-second response times; any delay is intolerable, so relying on a remote server to process the data is not an option. Even if the communication speed were theoretically adequate, the data from cameras covering every angle of the vehicle, multiplied across a growing number of camera-equipped vehicles, could stress available bandwidth and cause unacceptable delays.
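A back-of-the-envelope calculation makes the point concrete. The sketch below estimates how far a vehicle travels while waiting on each processing path; every latency figure is an illustrative assumption, not a measurement of any real network or processor:

    # Illustrative arithmetic only; the latency figures below are
    # assumptions, not measurements of any real network or processor.
    speed_kmh = 100.0
    speed_ms = speed_kmh * 1000 / 3600   # ~27.8 m/s

    latencies_ms = {
        "on-board vision processor": 30,     # assumed local inference time
        "cloud round trip (good LTE)": 130,  # assumed network + server time
        "cloud round trip (congested)": 400,
    }

    for path, latency in latencies_ms.items():
        distance = speed_ms * latency / 1000
        print(f"{path:30s} {latency:4d} ms -> {distance:5.1f} m traveled blind")

Even under these optimistic assumptions, the cloud path costs several extra meters of travel before the vehicle can react, which is exactly why the processing has to stay on board.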


Driver monitoring systems ensure the driver is alert when taking back control from automated features (Source: Unsplash)

Since most HAVs aren't fully autonomous and rely on a human driver, many systems include inward-directed cameras to assist with the handoff between the ADAS features and the human. One such driver monitoring system (DMS) is CoDriver by Jungo Connectivity, which has been named a CES 2017 Innovation Awards Honoree. This solution uses deep learning, machine learning, and computer vision algorithms to monitor the people inside the car and thus reduce crashes caused by drowsy or distracted drivers. According to ABI Research, camera-based DMSs like this one will reach 17.5 million annual shipments within a decade.
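Jungo hasn't published CoDriver's internals, but a widely used building block for camera-based drowsiness detection is the eye aspect ratio (EAR) computed from facial landmarks: when the eyes stay closed, the ratio collapses. Here is a minimal sketch of that general idea, assuming an upstream landmark detector (e.g., dlib) already supplies the six standard landmarks per eye:

    import numpy as np

    # Sketch of the eye-aspect-ratio (EAR) drowsiness cue. CoDriver's
    # actual algorithms are not public; this only shows a common
    # technique. Assumes a face-landmark detector (e.g., dlib) already
    # provides six (x, y) landmarks around each eye.

    def eye_aspect_ratio(eye: np.ndarray) -> float:
        """eye: 6x2 array of landmarks; low values mean a closed eye."""
        v1 = np.linalg.norm(eye[1] - eye[5])   # first vertical distance
        v2 = np.linalg.norm(eye[2] - eye[4])   # second vertical distance
        h = np.linalg.norm(eye[0] - eye[3])    # horizontal eye width
        return (v1 + v2) / (2.0 * h)

    EAR_THRESHOLD = 0.21   # assumed tuning value
    CLOSED_FRAMES = 45     # ~1.5 s at 30 fps before raising an alert
    closed_run = 0

    def update(eye_landmarks: np.ndarray) -> bool:
        """Returns True once the eyes have stayed closed too long."""
        global closed_run
        if eye_aspect_ratio(eye_landmarks) < EAR_THRESHOLD:
            closed_run += 1
        else:
            closed_run = 0
        return closed_run >= CLOSED_FRAMES

Because the landmarks and the ratio are computed on the device, an alert can fire within a frame or two of the eyes closing, with nothing sent off the vehicle.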

Virtual assistants will be able to see everything
Always-listening virtual assistants are already present in millions of homes. The next step toward making virtual assistants even smarter and more helpful is the ability to see. An example of this is the always-seeing Koova camera robot, another CES 2017 Innovation Awards Honoree. This small portable camera is capable of tracking movement, detecting and recognizing faces, and sending alerts or notifications according to customized settings. It allows the user to define specific activity zones for greater protection, as well as "blockout zones" that are excluded from monitoring (presumably to address privacy concerns). But is that enough? With each advance in technology, new privacy concerns arise.
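Amaryllo doesn't document how Koova enforces its zones, but a plausible edge-side approach, sketched below under that assumption, is to black out the excluded pixels before any analysis runs, so the masked regions never leave the device:

    import numpy as np

    # Sketch of edge-side "blockout zones": excluded regions are zeroed
    # before any analytics run or any pixel leaves the device. The real
    # Koova implementation is not documented; the rectangles below are
    # arbitrary examples.

    BLOCKOUT_ZONES = [(0, 0, 320, 240), (900, 500, 200, 150)]  # (x, y, w, h)

    def apply_blockout(frame: np.ndarray) -> np.ndarray:
        """Blacks out all configured blockout zones in a BGR frame."""
        for x, y, w, h in BLOCKOUT_ZONES:
            frame[y:y + h, x:x + w] = 0
        return frame

    # Usage: every captured frame is scrubbed first, then analyzed.
    frame = np.random.randint(0, 255, (720, 1280, 3), dtype=np.uint8)
    safe = apply_blockout(frame)
    assert safe[100, 100].sum() == 0   # pixels inside a zone are gone

The key design point is ordering: if masking happens on the device before anything is uploaded, the blockout is a hard guarantee rather than a server-side policy.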


Koova 2 portable camera robot (Source: Amaryllo International)

Always-listening assistants like Amazon's Alexa were criticized for sending everything uttered in the privacy of the home to the cloud for analysis. Still, the Amazon Echo was purchased by millions of consumers, regardless of those concerns. Will visual recordings receive the same acceptance? One example of the value of edge analytics is the Arlo camera from NETGEAR, a CES 2015 award winner. In addition to features like HD video, night vision, and motion detection, the camera comes with free access to seven days of audio- and motion-triggered recordings in the cloud. By performing the activity detection onsite, the company uploads only the desired footage to its servers. Thus, instead of recording 24 hours of video per camera, it stores an average of only a few minutes or less per day. Think of the server space cost reduction!
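NETGEAR hasn't published Arlo's detection code, but the economics are easy to reproduce with classic frame differencing: compare consecutive frames locally and hand a clip to the uploader only when enough pixels change. A minimal sketch, with assumed thresholds and a hypothetical queue_for_upload() hook:

    import cv2

    # Sketch of edge-side motion-triggered recording. Arlo's real
    # detector is not public; thresholds are assumptions, and
    # queue_for_upload() is a hypothetical uploader hook.

    MOTION_PIXELS = 1500   # assumed: changed pixels needed to count as motion

    def has_motion(prev_gray, gray):
        diff = cv2.absdiff(prev_gray, gray)
        diff = cv2.GaussianBlur(diff, (5, 5), 0)
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        return cv2.countNonZero(mask) > MOTION_PIXELS

    cap = cv2.VideoCapture(0)   # local camera
    ok, frame = cap.read()
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if has_motion(prev, gray):
            queue_for_upload(frame)   # only these frames reach the cloud
        prev = gray

At 30 frames per second, a camera that sees only a few motion events a day uploads a tiny fraction of what continuous streaming would send.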

Recent events indicate that the Arlo camera is on the brink of becoming a whole lot smarter. This anticipation stems from NETGEAR's acquisition of Placemeter, a tech company specializing in computer vision analytics. Placemeter's technology will enable the security camera to distinguish between motion and background scenery, and to classify moving objects as people, bikes, motorcycles, cars, or larger vehicles. These features, like those of the Koova described above, are part of a strong trend toward intelligent home security cameras.


Arlo wire-free and weatherproof security camera (Source: Arlo)
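Placemeter's pipeline isn't public, but distinguishing moving objects from background scenery is classically done with a learned background model, with a separate classifier labeling each moving blob. A textbook-shaped sketch using OpenCV's MOG2 background subtractor (the classifier is stubbed out):

    import cv2

    # Sketch of the classic two-stage pipeline described above:
    # 1) a background model separates moving foreground from static scenery;
    # 2) each moving blob goes to a classifier (stubbed out here).
    # Placemeter's actual technology is not public.

    subtractor = cv2.createBackgroundSubtractorMOG2(history=500,
                                                    detectShadows=True)

    def moving_objects(frame, min_area=800):
        """Yields bounding boxes of blobs that differ from the background."""
        mask = subtractor.apply(frame)
        _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)  # drop shadows
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) >= min_area:
                yield cv2.boundingRect(c)   # (x, y, w, h)

    def classify(crop):
        """Hypothetical stand-in for a trained model returning a label
        such as 'person', 'bike', 'motorcycle', 'car', or 'truck'."""
        raise NotImplementedError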

Eye-tracking for hands-free communication
In the category of tech for a better world, the Tobii Dynavox PCEye Mini was named a CES 2017 Best of Innovation Awards Honoree. This eye tracker is designed for individuals who do not have use of their hands due to physical or cognitive disabilities. Using cameras, an illuminator, and image processing algorithms, it enables users to control a computer, laptop, or tablet with only their eyes. This type of technology has the potential to significantly enrich lives. Here, though, both time delay and privacy are critical. No one would want their ability to communicate to depend on the availability of an Internet connection. Also, for users who truly have no other way to communicate, cloud processing means that everything they ever say is relayed to a remote server. That is a far cry from an always-listening virtual assistant that can easily be turned off whenever the user wants some real privacy.
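Tobii doesn't disclose PCEye's algorithms, but the control loop of any such tracker can be sketched: a calibration step fits a mapping from measured pupil positions to screen coordinates, and every subsequent measurement moves the cursor. A minimal linear-calibration sketch, with all numbers made up for illustration (real trackers use richer models built on glints from the illuminator):

    import numpy as np

    # Minimal eye-tracker calibration sketch: fit an affine map from
    # pupil positions to screen coordinates, then place the cursor.
    # Tobii's actual algorithms are not public; all values are made up.

    # Calibration: the user looks at known screen points while the
    # camera records pupil positions.
    pupil = np.array([[102, 85], [310, 88], [105, 240], [305, 236]], float)
    screen = np.array([[0, 0], [1920, 0], [0, 1080], [1920, 1080]], float)

    # Solve screen ~= [pupil, 1] @ A for the 3x2 affine matrix A.
    X = np.hstack([pupil, np.ones((len(pupil), 1))])
    A, *_ = np.linalg.lstsq(X, screen, rcond=None)

    def gaze_to_cursor(pupil_xy):
        """Maps a new pupil measurement to an (x, y) cursor position."""
        x, y = np.array([*pupil_xy, 1.0]) @ A
        return int(x), int(y)

    print(gaze_to_cursor((200, 160)))   # lands roughly mid-screen

Everything here runs comfortably on-device, which is the point: the user's words need never touch a network.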

The power of edge processing
It seems there is a very strong case for edge processing across many different use cases of video analytics. Delayed response times, vulnerability of sensitive data, and increased data traffic are some of the impediments of remote cloud processing that tip the balance in favor of an edge-based solution. However, while edge processing reduces data transmission, it also increases the workload on the edge device. This means that small, portable, battery-operated devices will end up running some heavy-duty algorithms. For this to be effective, the vision processor must be highly efficient: it should be able to perform the intense processing required by deep learning and vision algorithms while consuming extremely little power. If these criteria are met, the edge solution becomes more beneficial than the remote one.
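A rough energy comparison shows why processor efficiency decides the question. Every constant below is an assumed, order-of-magnitude figure for illustration only; real radios and processors vary widely:

    # Back-of-the-envelope energy budget for a battery-powered camera.
    # All constants are assumed, order-of-magnitude figures.

    FRAME_BITS = 1280 * 720 * 12      # bits in one lightly compressed frame (assumed)
    RADIO_J_PER_BIT = 100e-9          # assumed radio energy cost, joules/bit
    LOCAL_INFERENCE_J = 5e-3          # assumed joules per on-device inference

    tx_j = FRAME_BITS * RADIO_J_PER_BIT
    print(f"transmit one frame to the cloud: {tx_j * 1e3:7.1f} mJ")
    print(f"analyze the frame on the device: {LOCAL_INFERENCE_J * 1e3:7.1f} mJ")

Under these assumptions, shipping raw frames off-device costs orders of magnitude more energy than analyzing them locally on an efficient vision processor, which is why the battery-life argument also lands on the side of the edge.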

Yair Siegel is Director of Segment Marketing, CEVA. His focus is on computer vision, deep learning, audio, voice, and always-on technologies for use in mobile phones, virtual reality, augmented reality, and other consumer devices. One example of these use cases comes from a recent collaboration between Himax Imaging, emza Visual Sense, and CEVA. The companies have developed an always-on, low-power visual sensor able to detect, track, and classify objects on the edge device, enabling real-time alerts while minimizing the amount of data transmitted to the cloud. The companies will showcase this WiseEye IoT sensor at the Consumer Electronics Show (CES) 2017, January 5-8 in Las Vegas.
