PARIS – For decades, the “vision” technology most commonly used by consumers was an ordinary camera, whether a point-and-shoot, a digital SLR, or a smartphone.
But what if consumers could get their hands on more computationally intensive embedded vision technologies?
For example, the US Army has a device that senses and records in a 360-degree arc, capable of capturing images in front, behind, and to every side.
Or, how about a pen that sees, follows, and digitizes every movement of your handwriting, and wirelessly transmits that data in real time to a smartphone or tablet? This “smart” pen, by the way, can also simultaneously capture and record samples of the real-life environment (voice, music, death threats, gunshots, screams, dripping blood, dying words, death rattles, random noises, etc.).
These are examples of new-generation vision technologies that are coming soon to your neighborhood, as consumer products, for a few hundred dollars.
The 360-degree camera described above is being redesigned as the first 4K-resolution panoramic camera by Centr Camera, a San Francisco-based startup founded by ex-Apple engineers. The doughnut-shaped camera is small, evocative of a roll of duct tape. The project is currently being pitched on Kickstarter.
The “smart” pen mentioned above is called Neo 1, developed by Korea-based NeoLab.
“The common thread (of those new products) is that we're now at the point where we can build sophisticated vision capabilities into small, low cost, low power systems. This was not the case a few years ago,” Jeff Bier, founder of Embedded Vision Alliance, told EE Times.
Bier calls this sort of technology “a game-changer” based on his conviction that “machines that see can be safer, more responsive, more capable, and more efficient than their sight-deprived predecessors.”
In essence, a slew of new companies contend that merely taking a photo of what's in front of you is just too 20th century. They hope to deploy embedded vision technologies to capture the environment as a whole, or what you are experiencing as a whole, so that you can extract the “value” of vision.
Of course, none of this is possible if consumer vision products depend on CMOS image sensors alone. Also critical are powerful vision processors capable of computing what image sensors are seeing. Centr's 360-degree camera deploys a Movidius chip, which Google's Project Tango also uses. The smart pen pairs CogniVue's APEX Image Cognition Processor (ICP) with two ARM9 processors.
Panoramic video, for example, is already possible today by combining raw footage from four separate camcorders. But the process demands hours of patient editing: off-line processing, frame-by-frame scrutiny, and careful splicing.
In contrast, Centr's 360-degree camera does the whole capture-and-stitch job in real time. The startup developed algorithms that identify and mark key points on every frame, so that the embedded vision system can detect overlapping regions and determine which areas to stitch together, all while calibrating colors and white balance to equalize video shot by four different cameras, Paul Alioshin, CTO at Centr, told EE Times in a recent interview.
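To give a feel for what such a stitching pipeline does, here is a deliberately simplified sketch in Python with NumPy. It is not Centr's algorithm: real systems match detected key points across lens boundaries and correct color and white balance per camera. This toy version finds the overlap between two adjacent frames by brute-force comparison of candidate overlap widths, then blends the shared strip with a linear feather.

```python
import numpy as np

def find_overlap(left, right, max_overlap):
    """Estimate how many columns the right edge of `left` shares with
    the left edge of `right`, by minimizing mean squared difference."""
    best_w, best_err = 1, float("inf")
    for w in range(1, max_overlap + 1):
        err = np.mean((left[:, -w:] - right[:, :w]) ** 2)
        if err < best_err:
            best_err, best_w = err, w
    return best_w

def stitch(left, right, max_overlap):
    """Join two overlapping frames into one wider frame."""
    w = find_overlap(left, right, max_overlap)
    # Linear feathering across the overlap: a crude stand-in for the
    # color and white-balance equalization a real pipeline performs.
    alpha = np.linspace(1.0, 0.0, w)
    blended = left[:, -w:] * alpha + right[:, :w] * (1 - alpha)
    return np.hstack([left[:, :-w], blended, right[:, w:]])

# Simulate one panorama captured as two frames with a 3-column overlap.
panorama = np.arange(60, dtype=float).reshape(6, 10)
left_frame = panorama[:, :7]
right_frame = panorama[:, 4:]
result = stitch(left_frame, right_frame, max_overlap=5)
```

A production system would do this per frame, for four cameras at once, in hardware-accelerated fashion; the brute-force search here is only meant to show the overlap-detection idea.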