
The challenges and opportunities for machine learning in the IoT

June 04, 2018

Jennifer Prendki


Model Development

One of the main factors behind today's impressive advances in artificial intelligence is the emergence of better hardware, such as GPUs, that enables faster data processing. Machine learning for IoT has given rise to an interesting conundrum: while the best models need to be trained with a lot of data, most IoT devices are still limited in storage space and processing power. For that reason, the ability to safely and efficiently transfer large amounts of data from devices to a server or to the cloud, and vice versa, is key to the development of AI applications. In the age of cloud computing, a natural solution is to export the data to the cloud, where models are developed, and to export the models back onto the device once they are ready for use. This is particularly appealing since 94% of all generated data is expected to be processed in the cloud by 2021, which makes it possible to also capitalize on other sources of data, whether historical or originating from other IoT devices.

However, storing complex models back onto a memory-constrained device can in itself be a challenge, as sophisticated models with large numbers of parameters, such as deep learning models, are often very large themselves. On the other hand, the alternative of sending data from the device to a model hosted in the cloud for the inference step can also be suboptimal, especially in cases where latency needs to be very low.
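To make the "train in the cloud, deploy on the device" workflow concrete, here is a minimal sketch using TensorFlow Lite post-training quantization to shrink a cloud-trained model so it fits on a memory-constrained device; the saved-model directory and file names are hypothetical placeholders, not part of the original article.

```python
# Minimal sketch: compress a cloud-trained model for a memory-constrained
# IoT device using TensorFlow Lite post-training quantization.
# "cloud_model/" is a hypothetical SavedModel exported after cloud training.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("cloud_model/")
# Enable default optimizations (weight quantization), trading a little
# accuracy for a much smaller binary that fits on the device.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# Ship the compact model back to the device for local, low-latency inference.
with open("device_model.tflite", "wb") as f:
    f.write(tflite_model)
```

On the device, the compact model can then be loaded with a lightweight runtime such as tf.lite.Interpreter, keeping the inference step local and avoiding the latency of a round trip to the cloud.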

Another challenge comes from the fact that IoT devices might not be continuously connected to the cloud, and might therefore require some local reference data for offline processing, as well as the capability to function in standalone mode. This is where an edge-computing architecture becomes interesting, as it enables data to be initially processed at the level of the edge devices. This approach is particularly attractive when enhanced security is desired; it is also advantageous because such edge devices are capable of filtering data, reducing noise, and improving data quality on the spot.
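As an illustration of this kind of edge-side pre-processing, the sketch below smooths noisy sensor readings on the device and only forwards significant changes upstream; the sensor values, window size, and upload threshold are hypothetical, and the point is the pattern of filtering and denoising before anything leaves the device.

```python
# Minimal sketch of edge pre-processing: smooth noisy readings locally and
# forward only meaningful changes, so less (and cleaner) data reaches the cloud.
# Window size and threshold are hypothetical values for illustration.
from collections import deque

WINDOW = 10          # moving-average window (hypothetical)
THRESHOLD = 0.5      # minimum change worth uploading (hypothetical)

buffer = deque(maxlen=WINDOW)
last_sent = None

def process_reading(raw_value, upload):
    """Denoise a raw reading at the edge and upload only meaningful changes."""
    global last_sent
    buffer.append(raw_value)
    smoothed = sum(buffer) / len(buffer)   # simple moving-average filter
    if last_sent is None or abs(smoothed - last_sent) >= THRESHOLD:
        upload(smoothed)                   # e.g., queue for the cloud when online
        last_sent = smoothed
    return smoothed
```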

Unsurprisingly, AI engineers have been trying to get the best of both worlds, which eventually led to fog computing, a decentralized computing infrastructure. In this approach, data, compute power, storage, and applications are distributed in the most logical way between the device and the cloud, ultimately leveraging their respective advantages by bringing them closer together.
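One way to picture such a distribution of work is a simple routing policy that serves a request locally when the on-device model can answer within the latency budget, and falls back to the richer cloud model otherwise. The sketch below is purely hypothetical: the objects and methods are placeholders, not a real fog-computing API.

```python
# Hypothetical sketch of a fog-style routing policy. `local_model` and
# `cloud_client` are placeholder objects, not a real library interface.
def route_inference(request, local_model, cloud_client,
                    latency_budget_ms=50, connected=True):
    # Serve locally when the on-device model is applicable and fast enough.
    if (local_model.supports(request)
            and local_model.estimated_latency_ms() <= latency_budget_ms):
        return local_model.predict(request)    # fast, private, works offline
    # Otherwise offload to the heavier cloud model when connectivity allows.
    if connected:
        return cloud_client.predict(request)
    return local_model.predict(request)        # degrade gracefully offline
```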

Transfer Learning

We have seen that IoT devices are capable of generating Big Data, but in practice, it is not uncommon to use external, historical datasets to develop intelligent applications for IoT. This implies that it is possible to rely either on the data generated by an ensemble of multiple IoT devices (typically, the same type of device across multiple users), or on an entirely different source of data. The more specific and unique the application, the less likely it is that an existing dataset will be available for use – this would be the case, for example, when the device captures a very specific type of image with no similarity to open source image datasets such as ImageNet. That being said, it is very common for IoT applications to actually be a clever blend of several existing off-the-shelf models. This makes transfer learning well adapted to the development of intelligent applications in the context of IoT.

The transfer learning paradigm consists of training a model on one dataset (generally a gold-standard one) and using it to make inferences on another dataset. Alternatively, it is possible to use the parameters computed during the generation of this model as a starting point when training a model on the actual dataset, instead of initializing the model to random values. In this case, we refer to the original model as a “pre-trained” model, which we fine-tune on the data specific to the application. This approach can speed up the training phase by several orders of magnitude. With the same paradigm, it is possible to train a general model that is then refined and optimized on a case-by-case basis, using the data directly generated by the end user.
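As a concrete sketch of this paradigm, the snippet below starts from an ImageNet pre-trained MobileNetV2, freezes its weights, and fine-tunes a small classification head on application-specific data; the dataset variable, number of classes, and hyperparameters are hypothetical and chosen only for illustration.

```python
# Minimal transfer-learning sketch: reuse an ImageNet pre-trained MobileNetV2
# and fine-tune a small head on device-specific data.
# `train_ds` is a hypothetical tf.data.Dataset of (image, label) pairs.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights="imagenet")
base.trainable = False   # keep the pre-trained features frozen

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(3, activation="softmax"),  # 3 hypothetical classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)   # far faster than training from scratch
```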

Security and Privacy Concerns

Because connected-device technology extends the current Internet by providing connectivity between the physical and cyber worlds, the data it generates is highly versatile but is also the cause of major privacy concerns. In fact, about 50% of organizations involved in IoT consider security the biggest hindrance to IoT deployments. And considering that about two-thirds of IoT devices are in the consumer space, and how personal some of the shared data can be, it is easy to understand why. These concerns, coupled with the risks linked to frequent data transfers to the cloud, explain why users are demanding guarantees regarding the protection of their data.

Yet things get even more insidious when those IoT applications are powered by “federated” data (i.e., data generated by multiple users): not only can user data be leaked directly, it can also be exposed indirectly through side-channel attacks, in which malicious agents reverse-engineer the output of a machine learning algorithm to infer private information. For these reasons, there is a clear necessity for data protection laws to evolve alongside the technology and the applications themselves.

IoT Machine Learning Is Human-Centered Machine Learning

Because IoT devices bring the Internet closer to their users and touch all aspects of human life, they often make it possible to collect highly contextual and personal data. IoT data tells the story of its users’ lives and makes it more achievable than ever to understand a user’s needs, desires, history, and preferences. This makes IoT data the perfect data for building personalized applications tailored to a user’s personality.

And because IoT touches our lives so intimately, both by collecting highly personal data and by offering highly personalized applications and services, IoT machine learning can truly be described as human-centered machine learning par excellence.

Jennifer Prendki is currently the VP of Machine Learning at Figure Eight, the Human-in-the-Loop AI category leader. She has spent most of her career creating a data-driven culture wherever she went, succeeding in sometimes highly skeptical environments. She is particularly skilled at building and scaling high-performance Machine Learning teams, and is known for enjoying a good challenge. Trained as a particle physicist, she likes to use her analytical mind not only when building complex models, but also as part of her leadership philosophy. She is pragmatic yet detail-oriented. Jennifer also takes great pleasure in addressing both technical and non-technical audiences at conferences and seminars, and is passionate about attracting more women to careers in STEM.
