TOKYO — Imaging technology is no longer just about the never-ending megapixel race among CMOS image sensors. As market focus shifts to "vision" processing, the industry has drawn a new battle line — over how fast and how accurately a processor can capture, dissect, and interpret data in a manner comprehensible to an embedded system.
In short, the whole concept of who's watching whom has flipped.
In the embedded vision world, what matters is not so much you, the photographer, who wants to take better photos; instead, the technology now exists to cater to embedded systems that need to watch you, recognize who you are, analyze your behavior, and process data they think you need.
You might call this just the plain reality of technological progress in machine vision or computer vision. Maybe so. But I confess that some of the embedded vision plots hatched by marketers today are disturbing enough to make me cringe.
None of this stuff, of course, is more worrisome than the NSA's electronic spying programs. But the very notion of a bunch of sensors physically watching me — solely to make a commercial gain at my expense — gives me, at least, a slight case of the willies. At worst, it's a reminder of the increasingly Orwellian society we already live in.
Over a cup of coffee in Tokyo, I recently sat down with Tom Wilson, vice president of business development at CogniVue, a Quebec-based embedded vision technology developer. Wilson tried to convince me that automotive isn't the only market being targeted by vision processing technology developers like CogniVue.
Here are a few examples he shared with me of what comes next with embedded vision:
- Drive a car on a deserted road in the dark. Street lamps — normally switched off — light up the road just in front of your car, as you move forward. As soon as they sense your car is leaving, they go off. (Yeah, I know: an evening's drive through The Twilight Zone.)
- Walk in front of a digital sign — a gigantic electronic display in a public space. The sign, even before you notice it, recognizes your gender and age, then quickly changes the ad message — to fit your demographic profile — as you look at it. (Yeah, I know: shades of Minority Report.)
- Smartphones that can recognize your hand gestures, or that can do face recognition to help you tag images (by informing you who you are seeing, whose pictures you are taking, and even uploading them to social networks).
- A set-top box embedded with eyes in your living room identifies who is watching what program. It sends the information to a backend server, triggering a digital product placement in a TV program. (Right. Saw that in Fahrenheit 451.)
Among these examples, what ticked me off was the last item, about a set-top box with eyes. Of course, for someone familiar with Kinect (a motion-sensing input device by Microsoft for the Xbox 360 video game console and Windows PCs), I probably shouldn't have been so surprised. But I needed further clarification about what exactly it does.
"Say you are watching Friends. The set-top box knows you're watching it and you actually like Pepsi instead of Coke," explained CogniVue's Wilson. The backend server, then, can digitally insert a Pepsi can, replacing a Coke, in Monica's living room.
Wilson pointed out that Mirriad, a developer of ad platforms, is one company working on such a project. "The plan is to couple this type of ad insertion with viewer preference," he explained. In fact, a set-top box with eyes isn't such a far-fetched idea. Mirriad recently signed a deal with Pace, a set-top box vendor, to trial this in the UK, according to Wilson.
While explaining the digital product placement scheme, Wilson joked that this is partly why he doesn't own a TV. But he made sure that I understood the far-reaching ramifications of embedded vision applications and how the competition among embedded vision IP vendors — both software and hardware — has been escalating in recent years.
To read more, go to “Hard problems to solve.”