
Prepare to be terrified at ESC Silicon Valley 2017


Max The Magnificent, November 16, 2017

I don’t know about you, but some of the things I'm seeing and hearing are beginning to make me a tad uneasy as to what the future holds for us.

The optimistic side of me -- the part that loves technology -- envisages a time when we all enjoy the benefits of things like 3D holographic computer interfaces, tactile displays, and augmented reality (AR) systems that make me squeal in delight.

By comparison, the pessimistic side of me looks at some of the technologies that are starting to appear and thinks, "But what if..."



(Source: pixabay.com)

For example, I love my Amazon Echo, but I have a niggling feeling of disquiet about having something that's constantly listening to what's going on in my home. It's not that I think the folks at Amazon (or Google, or Apple) are interested in accessing my conversations -- at least, not at the moment -- but who is to say what tomorrow will bring? Theoretically, the only time anything is transmitted into the cloud is when I use the keyword "Alexa," but what if someone did decide to spy on my family at some time in the future?

The way artificial intelligence (AI) is going, will it really be all that long before someone can create an AI agent that can locate and take over someone's Echo and start listening to everything that's going on?

And this won’t stop with smart speakers like the Amazon Echo and the Google Home. Pretty soon, just about everything from your electric toaster to your dishwasher to your television will be speech-enabled, which also means they will be cognizant of your every utterance.

Do you recall my recent article, "XMOS + Setem could be a game-changer for embedded speech"? In that column, we discussed how the folks at XMOS now have the ability to disassemble a sound space into its individual sources. Take a roomful of people chatting, for example: in addition to identifying and resolving the locations of the individual speakers, the system can listen to all of them simultaneously, all of the time.
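Just to give a feel for the underlying principle, here's a minimal sketch in Python. I should stress that this is not the XMOS/Setem algorithm; it simply uses classic independent component analysis (FastICA from scikit-learn) on two made-up "speakers" to show how multiple microphone feeds can be un-mixed back into individual sources.

```python
# Illustrative sketch only -- NOT XMOS/Setem's actual algorithm. Classic
# independent component analysis (FastICA) is used here to show the basic
# idea of pulling individual voices back out of a mixed sound field that
# was captured by multiple microphones.
import numpy as np
from sklearn.decomposition import FastICA

fs = 16_000                      # sample rate (Hz)
t = np.arange(fs * 2) / fs       # two seconds of audio

# Stand-ins for two "speakers" in the room (real voices would go here).
speaker_a = np.sin(2 * np.pi * 220 * t)
speaker_b = np.sign(np.sin(2 * np.pi * 333 * t))
sources = np.c_[speaker_a, speaker_b]

# Each microphone hears a different mix of the two speakers.
mixing = np.array([[0.8, 0.3],
                   [0.4, 0.7]])
mic_signals = sources @ mixing.T

# Un-mix: recover one estimated signal per speaker from the mic feeds.
ica = FastICA(n_components=2, random_state=0)
separated = ica.fit_transform(mic_signals)   # shape: (samples, speakers)
```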

Now imagine you are wearing an AR headset. Suppose you focus your attention on two people chatting at the far side of the room, and that your headset has the ability to "wind down" all of the other voices to a background hum while amplifying the voices of the people you are observing. This could be very useful or very invasive, depending on who is doing what to whom.
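Continuing the sketch above (and, again, this is purely my own illustration; the function name and gain values are invented), "focusing" your attention becomes little more than applying a per-source gain before re-mixing the audio for the headset:

```python
# Hypothetical follow-on to the separation sketch above: once the room has
# been split into per-speaker signals, focusing on two people is just a
# per-source gain applied before re-mixing for the headset's earpieces.
import numpy as np

def focus_mix(separated_sources, focused_ids, focus_gain=1.5, background_gain=0.05):
    """separated_sources: array of shape (samples, n_speakers)."""
    gains = np.full(separated_sources.shape[1], background_gain)
    gains[list(focused_ids)] = focus_gain
    return separated_sources @ gains          # one mono stream for playback

# Wind everyone down to a background hum except speakers 0 and 3.
# headset_audio = focus_mix(separated, focused_ids=[0, 3])
```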

Did you see the recent Gizmodo article, "How Facebook Figures Out Everyone You've Ever Met"? It turns out that, behind the profile you build for yourself, Facebook is constantly evolving a "shadow profile" (a term the company doesn't like) for you, based on the contents of the inboxes and mobile devices of other Facebook users.

The bottom line is that Facebook and Amazon and Google know more about you than you think they know. I remember hearing some time ago that one or more of these companies were working on building avatars for each of their users. I'm not talking about visual representations of the users here; I'm talking about embodiments or personifications in the form of artificial neural networks (ANNs).

Do you remember the way it used to be when you were looking at a book on Amazon? The system would tell you "A lot of people who bought this book also bought..." and it would give you a couple of other suggestions. Although it was clever at the time, this was really based on simple number-crunching. Now the system is much more predictive, offering proposals and recommendations based on all sorts of factors, and we're still in the early days of what's possible.
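For what it's worth, that old-style recommendation really can be boiled down to a few lines of co-occurrence counting. The following Python sketch uses item names and orders I've made up, but it captures the "people who bought this also bought..." logic with no machine learning in sight:

```python
# A toy version of the old "people who bought this also bought..." logic:
# plain co-occurrence counting over past orders. Item names are invented.
from collections import Counter
from itertools import combinations

orders = [
    {"bebop_book", "soldering_iron"},
    {"bebop_book", "logic_analyzer", "soldering_iron"},
    {"bebop_book", "logic_analyzer"},
]

co_counts = Counter()
for order in orders:
    for a, b in combinations(sorted(order), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def also_bought(item, top_n=2):
    """Return the items most often bought alongside the given item."""
    ranked = [(other, n) for (first, other), n in co_counts.items() if first == item]
    return sorted(ranked, key=lambda pair: -pair[1])[:top_n]

print(also_bought("bebop_book"))   # e.g. [('soldering_iron', 2), ('logic_analyzer', 2)]
```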

The idea behind these avatars is that your "Mini-Me" would be trained using all the items you've previously bought, along with all the items you've looked at and rejected. Over time, your avatar would continue to be educated based on the items you search for, the links and images you click, the pages you look at, and the amount of time you spend there.

The goal, of course, is to be able to sell you more things more efficiently. Rather than showing you so many offerings that you become annoyed, the system will instead present tens or hundreds of thousands of items to your avatar. The only ones you ever see will be the ones to which your avatar gives the equivalent of a "thumbs up."
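As a thought experiment (the feature names, items, and thresholds here are all my own inventions, not anything Amazon has described), such an avatar could be as simple as a little classifier trained on what you bought versus what you looked at and rejected, with the store scoring its catalog against the classifier rather than against you:

```python
# Sketch of the "avatar" idea under my own assumptions: a tiny logistic
# regression trained on items you bought (label 1) versus items you looked
# at and rejected (label 0), then used to pre-screen a huge catalog so you
# only ever see the items it would give a "thumbs up."
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [price, is_electronics, pages_viewed_in_category]
seen_items = np.array([
    [12.0, 1, 9],
    [80.0, 0, 1],
    [25.0, 1, 7],
    [150.0, 0, 0],
])
bought = np.array([1, 0, 1, 0])           # past behavior becomes training labels

avatar = LogisticRegression().fit(seen_items, bought)

# The store scores thousands of candidates against the avatar, not against you.
candidates = np.array([
    [19.0, 1, 8],
    [200.0, 0, 1],
])
thumbs_up = avatar.predict_proba(candidates)[:, 1] > 0.5
print(candidates[thumbs_up])              # only these ever reach your screen
```

In reality, of course, any such avatar would be vastly more sophisticated than a two-feature classifier, but the filtering principle is the same.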

None of the above is bad in and of itself, but it does make me wonder if we really know what we are doing and where we are headed with our latest and greatest technologies. In fact, these are a few of the topics I will be discussing in my Advanced Technologies for 21st Century Embedded Systems session at the forthcoming Embedded Systems Conference (ESC) Silicon Valley, which will take place December 5-7, 2017, at the San Jose Convention Center in San Jose, California.

Happily, this talk will take place in the ESC Engineering Theater, which means it will be open for anyone to attend so long as they are flaunting a Free Expo Pass (you do have to register, though). I'll be the one in the Hawaiian shirt. As always, all you have to do is shout "Max, Beer!" or "Max, Bacon!" to be assured of my undivided attention.
