Our electronics systems are slowly opening their eyes
Before you start reading this, I would like to ask you to close your eyes for 10 seconds. Please do this now.
It's tough, right? We really need our eyes and are pretty powerless without them. The first thing we do when we wake up in the morning is to open them, and the last thing we do in the evening when we go to sleep is to close them.
The history of our eyes started more than 500 million years ago. The trilobite was among the first animals to develop this compelling new sensing mechanism: the power of sight. It turned out to be very potent. Using the power of sight, it was easier for the trilobite to search for food, keep an eye on its family, and flee from predators. This new sense was possibly one of the driving forces behind the Cambrian Explosion, a period in which many new species originated in a relatively short period of time.
We are now on the eve of a new revolution in which our electronics will also adopt the power of sight. A large part of this new sensing component has already been widely adopted by our electronics in the form of their cameras. At home in my family of five, I count about thirty. Two cameras per mobile phone and tablet; one in each laptop; we also have a few digital still cameras; and then there are some cameras gathering dust as part of our old cell phones that haven't yet been recycled.
So the eyes are already there. They're small -- the selfie camera of a phone is just a few square millimeters -- capture a pretty high-quality image, and cost just a few dollars. Today, these cameras are primarily used to capture, store, and distribute images. The next big step is to build devices that actually understand what they're seeing, or -- if not fully understand what they're seeing -- at least interpret the images and use this information to take appropriate action.
We're not really sure how to do this though. How do people recognize a picture of their niece in a fraction of a second? How are we capable of reading text in any font? How do we drive a car when it's raining and dark, or drive it slowly through a crowd when it's busy in the city?
Fortunately, much progress has recently been made in this area. Previously, image recognition algorithms had to be carefully hand-crafted -- a labor-intensive task with non-stellar results. But now there's been a breakthrough with the use of deep neural networks. One key advantage of this technique is that you train the networks simply by showing them many examples. More importantly, these networks are much better at generalizing, and recognize images with much higher accuracy than was possible just a few years ago, even rivaling humans at some image recognition tasks. Combining this technology with image processing techniques that are more grounded in classical math, such as optical flow or SLAM algorithms that deduce the locations and movement of objects, we can build highly capable intelligent systems that understand what they see.
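To make "training by showing examples" concrete, here's a minimal sketch of the idea at its smallest scale: a single artificial neuron (logistic regression) that learns a toy classification rule purely from labeled examples via gradient descent. The dataset and all names here are invented for illustration -- a deliberately tiny stand-in for the deep networks described above, not a real vision model:

```python
import numpy as np

# Toy training set: 2-D points, labeled 1 if x + y > 1, else 0.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 2))
y = (X[:, 0] + X[:, 1] > 1.0).astype(float)

# A single artificial neuron trained by gradient descent --
# it is never told the rule, only shown examples.
w = np.zeros(2)
b = 0.0
lr = 0.5
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid activation
    grad_w = X.T @ (p - y) / len(y)         # cross-entropy gradient
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
accuracy = np.mean(pred == y)
```

After training, the neuron classifies the points it was shown with high accuracy; deep networks stack many thousands of such units, but the learning principle is the same.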
These image processing techniques, however, require huge amounts of computing power and storage. According to neurologists, humans devote about half of their brain power to the sole task of visual processing. Fortunately, Moore's law helps, and we can now extract teraops of compute power from silicon. Even so, neural networks can still bring a large compute cluster to its knees. The next step is no longer one of research, but of good engineering. To solve the compute problem, we need special embedded vision processors that work their transistors much more efficiently than standard CPUs or GPUs. We need design tools that shrink and optimize neural networks for minimal compute and memory usage while preserving their ability to perform their detection tasks well. And we need software tools that make it easy to implement such image processing software on low-power, low-cost silicon. Making efficient silicon and tools for visual processing is exactly what we at videantis have been working on for many years, and we're excited that our technology is now rapidly being adopted by the market.
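One common way such design tools shrink a network for embedded silicon is quantization: storing weights as 8-bit integers instead of 32-bit floats, cutting weight memory four-fold and enabling cheaper integer arithmetic. Below is a hedged sketch of symmetric int8 quantization; the function names and random weights are illustrative, not any particular tool's API:

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric linear quantization of float weights to int8.

    Returns the int8 tensor and the scale needed to recover
    approximate float values (dequantization).
    """
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

# Illustrative stand-in for a trained layer's weight matrix.
rng = np.random.default_rng(1)
w = rng.normal(0, 0.1, size=(64, 64)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# int8 storage is 4x smaller than float32; the rounding error
# per weight is bounded by half the quantization step.
max_err = np.max(np.abs(w - w_hat))
```

Real deployment tools combine this with techniques like per-channel scales, pruning, and retraining to keep detection accuracy high, but the core memory/compute trade-off is visible even in this toy version.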
But we're still only at the beginning, and there are still many good product ideas and strategies to come that will use this new electronic power of sight. Of course, we know that with a proper set of eyes our cars can drive themselves, and drones can automatically deliver our packages or food, but what about cameras that keep an eye on us all day to see if we're still healthy? How about a washing machine that really knows when the laundry is clean, or an oven that prevents the potatoes from burning? The possibilities are endless.
Electronics that can see -- I've got my eyes set on that!