Conference ponders AI advances

Editor’s note: The industry’s brightest minds in AI research gathered this fall at IBM T.J. Watson Research Center for the second AI Compute Symposium, held in collaboration with IEEE’s Circuits and Systems Society (CAS) and Electron Devices Society (EDS). EE Times invited IBM Academy of Technology member Rajiv Joshi, a coordinator of the AI event, to write up the highlights. Joshi co-authored this report with IBM Research colleagues Matt Ziegler and Arvind Kumar.

The world’s AI researchers zoomed in on topics ranging from natural language processing and AI hardware architectures to machine learning for social-network platforms, quantum computing, and inference on sensory edge devices.

Together with the IEEE Circuits and Systems Society and Electron Devices Society, IBM Research organized the second AI Compute Symposium at the IBM T.J. Watson Research Center THINKLab in Yorktown Heights, N.Y., on Oct. 17. More than 200 distinguished academics, renowned thinkers, students, and innovators from across industry and academia assembled for the one-day symposium, which showcased research leadership and advances in AI compute, from pervasive to general AI. The free event featured three keynotes, three invited talks, a student poster session, and a panel discussion.


Committee and invited speakers (l.–r.): Jin-Ping Han (IBM), Donhee Ham (Harvard/Samsung), Xin Zhang (IBM), Hsien-Hsin (Sean) Lee (Facebook), Matt Ziegler (IBM), Arvind Kumar (IBM), Carmen G. Almudéver (Delft University), Eduard Alarcon (UPC Barcelona Tech), Rajiv Joshi (IBM), Naresh Shanbhag (UIUC), Luis Lastras (IBM), Wen-mei Hwu (UIUC), Krishnan Kailas (IBM), Anna Topol (IBM)

The keynoters were Dr. Luis Lastras, a researcher with IBM; Professor Wen-mei Hwu of the University of Illinois at Urbana-Champaign (UIUC); and Harvard University/Samsung Fellow Donhee Ham.

Lastras provided an exciting overview of research projects from IBM related to natural language processing and its evolution. He noted that IBM has been at the forefront of language and speech research for decades. Examples include the famous research program on statistical speech processing from the 1970s that led to the powerful speech recognition systems in widespread use today; the inception of BLEU, a metric widely used to measure the performance of translation systems; the well-known IBM Watson Jeopardy system that was able to defeat the world’s Jeopardy champions; and, more recently, the demonstration of a system that can debate with a human by drawing on new computational argumentation capabilities. These are innovations with purpose, focusing on technologies that address targeted business problems and demonstrating world-class performance in strategic shared tasks.
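
For readers unfamiliar with BLEU, the sketch below illustrates its core idea: clipped (modified) n-gram precision combined with a brevity penalty. This is a simplified illustration, not the reference implementation; production evaluations typically use a tool such as sacreBLEU, which adds smoothing and standardized tokenization.

```python
# Simplified BLEU sketch: modified n-gram precision with a brevity penalty.
# Illustration only -- not the reference implementation.
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clip each candidate n-gram count by its count in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)  # avoid log(0)
    # Brevity penalty discourages overly short candidate translations.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

print(bleu("the cat sat on the mat", "the cat is on the mat", max_n=2))
```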

Hwu followed with a keynote address describing the architecture needed for AI. GPU/accelerator architectures have greatly sped up both the training and inferencing for neural-network-based machine learning models. As major industry players race to develop ambitious applications such as self-driving vehicles, unstructured data analytics, human-level interactive systems, and human intelligence augmentation, major challenges remain in computational methods as well as hardware/software infrastructures required for these applications to be effective, robust, responsive, accountable, and cost-effective.

These applications impose much more stringent requirements for data storage capacity, access latency, energy efficiency, and throughput. Hwu presented a vision for building a new generation of computing components and systems for such applications.

Following the first two keynotes, Dr. Hsien-Hsin (Sean) Lee of Facebook gave an invited talk during the “Industry Perspectives” session about machine learning for social-network platforms. Social networks are deeply woven into our everyday lives. These internet platforms host a plethora of real-time services to keep people connected, provide customized information to users, and preserve information transparency.

Underneath their infrastructure, the adoption of machine learning (ML) techniques is rapidly becoming omnipresent in both data centers and end users’ devices, powering a rich feature set that enhances the effectiveness of users’ communication and improves the quality of online experiences. In achieving those objectives, however, ML consumes enormous computing resources and requires meticulous resource design, provisioning, and management. Lee discussed state-of-the-art machine learning approaches for production-scale DNN-based personalized recommendation models for content ranking and the computing challenges that lie ahead.
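
Lee did not share production code, but the general shape of such a model, embedding tables for sparse categorical features combined with MLPs over dense features (as in the publicly documented DLRM family), can be sketched as follows. All layer sizes and feature counts here are illustrative assumptions:

```python
# Illustrative sketch of a DNN-based personalized recommendation model:
# embedding tables for sparse (categorical) features, an MLP for dense
# features, and a predictor over their combination. Dimensions are invented.
import torch
import torch.nn as nn

class TinyRecModel(nn.Module):
    def __init__(self, num_users=1000, num_items=5000, dim=16, dense_in=4):
        super().__init__()
        self.user_emb = nn.Embedding(num_users, dim)   # sparse feature: user ID
        self.item_emb = nn.Embedding(num_items, dim)   # sparse feature: item ID
        self.dense_mlp = nn.Sequential(nn.Linear(dense_in, dim), nn.ReLU())
        self.top_mlp = nn.Sequential(nn.Linear(3 * dim, 32), nn.ReLU(),
                                     nn.Linear(32, 1))

    def forward(self, user_ids, item_ids, dense_feats):
        u = self.user_emb(user_ids)
        v = self.item_emb(item_ids)
        d = self.dense_mlp(dense_feats)
        # Concatenate representations and predict a click/engagement score.
        return torch.sigmoid(self.top_mlp(torch.cat([u, v, d], dim=-1)))

model = TinyRecModel()
score = model(torch.tensor([3]), torch.tensor([42]),
              torch.randn(1, 4))  # score one (user, item) candidate for ranking
print(score.item())
```

At production scale, the embedding tables dominate: they can reach terabytes, which is what makes the memory-capacity and provisioning challenges Lee described so acute.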

Next, Donhee Ham, a Harvard professor and Samsung Fellow, gave a keynote talk on the “Reconstruction of the Brain.” The artificial neural net made a brilliant comeback with deep learning and has since been revolutionizing a broad range of technologies in the big-data era. On the other hand, deciphering the natural neuronal network of the biological brain (how it forms circuits and processes information for its higher functions) is one of the most celebrated unsolved problems in all of science. Ham described his group’s continuing effort to develop a semiconductor interface with a biological neuronal network, which might help map and uncover its circuit and function. He said that this study might not only contribute to fundamental neurobiology but also help develop the next-generation artificial neural net, for which several interesting possibilities were discussed.

Professor Naresh Shanbhag of UIUC talked about “Bringing Artificial Intelligence to the Edge.” Much of AI today is deployed in the cloud, primarily because of the high complexity of machine learning algorithms. Realizing inference functionality on sensory edge devices requires finding ways to operate at the other edge, i.e., at the limits of energy efficiency, latency, and accuracy, in nanoscale semiconductor technologies.

Shanbhag described a Shannon-inspired model of computing (Proceedings of the IEEE, January 2019) to accomplish this objective. The framework comprises low signal-to-noise ratio (SNR) circuit fabrics (the channel) with engineered error statistics, coupled with efficient techniques to compensate for computational errors (encoder and decoder).
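
The Proceedings of the IEEE paper develops the full framework; as a toy numerical illustration of the idea only, the sketch below treats an unreliable dot-product unit as the low-SNR channel and uses simple redundancy (a median over repeated evaluations) as the decoder. The noise model and compensation scheme are invented for illustration and are not Shanbhag’s specific techniques:

```python
# Toy illustration of Shannon-inspired computing: a low-SNR "channel"
# computes dot products unreliably, and a simple decoder (median over
# redundant noisy evaluations) compensates for the errors.
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(256)  # weights of a linear classifier
x = rng.standard_normal(256)  # input feature vector

def noisy_dot(w, x, snr_db=6.0):
    """Dot product corrupted by additive noise at a given SNR (invented model)."""
    y = w @ x
    noise_power = y**2 / (10 ** (snr_db / 10))
    return y + rng.normal(0.0, np.sqrt(noise_power))

exact = w @ x
single = noisy_dot(w, x)                                  # raw low-SNR fabric
decoded = np.median([noisy_dot(w, x) for _ in range(7)])  # error compensation
print(f"exact={exact:.3f}  single={single:.3f}  decoded={decoded:.3f}")
```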

According to Shanbhag, a low-SNR circuit fabric referred to as deep in-memory architecture (DIMA) breaches the long-standing “memory wall” in von Neumann architectures by embedding analog computations in the periphery of the memory array, thereby achieving >100× energy-delay–product gains in laboratory prototypes over custom digital architectures implementing the same inference function. Other examples of Shannon-inspired design methods include designing deep-learning systems in fixed-point arithmetic, energy-efficient subthreshold ECG classifier ICs, and STT-RAM–based all-spin logic competitive with CMOS.
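
As a small, generic illustration of the fixed-point design point, the sketch below quantizes the weights and activations of a dot product to 8-bit integers and compares the result with the floating-point reference. This is standard symmetric linear quantization, not Shanbhag’s group’s particular method:

```python
# Generic fixed-point (8-bit) quantization of a dot product, illustrating
# reduced-precision inference. Standard symmetric quantization scheme.
import numpy as np

rng = np.random.default_rng(1)
w = rng.standard_normal(128).astype(np.float32)
x = rng.standard_normal(128).astype(np.float32)

def quantize(v, bits=8):
    """Symmetric linear quantization to signed integers."""
    scale = np.max(np.abs(v)) / (2 ** (bits - 1) - 1)
    q = np.clip(np.round(v / scale), -(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return q.astype(np.int32), scale

qw, sw = quantize(w)
qx, sx = quantize(x)
# Integer multiply-accumulate, rescaled back to the real-valued domain.
approx = (qw @ qx) * sw * sx
print(f"float={w @ x:.4f}  fixed-point={approx:.4f}")
```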

Professor Carmen G. Almudéver of Delft University of Technology presented current challenges in quantum computing and described how multidisciplinary science is working toward solutions. Quantum computers promise to solve a certain set of hard problems that are intractable for even the most powerful current supercomputers. Remarkable progress has been made in recent years in quantum hardware, and quantum computation in the cloud is already a reality, offering small quantum processors that are capable of handling basic quantum algorithms. IT behemoths such as Google, Intel, Microsoft, and IBM, along with numerous research groups, are working on building the first universal quantum computer. Building such a quantum system requires bridging quantum algorithms and quantum processors, the professor noted.

Almudéver first presented the state of the art in quantum computing, emphasizing the main challenges, which include improvement and scalability of quantum processors, classical control electronics at (possibly) cryogenic temperatures, and definition of a heterogeneous quantum computer architecture. She then discussed the system architecture, focusing on making quantum computing fault-tolerant, and the compilation of quantum circuits. In the last part of the talk, she presented her vision of how the research community could accelerate the process toward building such a scalable quantum machine, potentially through structured, vertical cross-layer co-design methodologies, and discussed possible applications, particularly quantum-enhanced deep-learning co-processors.
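
As a concrete anchor for the “basic quantum algorithms” that today’s small cloud-accessible processors can run, the sketch below simulates a two-qubit circuit (a Hadamard followed by a CNOT, preparing an entangled Bell state) directly on its state vector:

```python
# Minimal state-vector simulation of a two-qubit circuit (Hadamard + CNOT),
# preparing the entangled Bell state (|00> + |11>)/sqrt(2) -- the kind of
# basic circuit small cloud quantum processors can run today.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],                 # control = qubit 0,
                 [0, 1, 0, 0],                 # target = qubit 1,
                 [0, 0, 0, 1],                 # in |q0 q1> basis ordering
                 [0, 0, 1, 0]])

state = np.array([1, 0, 0, 0], dtype=complex)  # start in |00>
state = np.kron(H, I) @ state                  # Hadamard on qubit 0
state = CNOT @ state                           # entangle the two qubits
probs = np.abs(state) ** 2
print({f"{i:02b}": round(float(p), 3) for i, p in enumerate(probs)})
# -> {'00': 0.5, '01': 0.0, '10': 0.0, '11': 0.5}
```

Brute-force simulation like this scales exponentially in the number of qubits, which is precisely why the compilation and architecture work Almudéver described targets real quantum hardware rather than classical emulation.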

At the symposium’s well-attended student poster session, about 30 students presented compelling research spanning numerous topics in AI computing. Awards for the three best poster presentations went to:

  • Sohum Datta, Yubin Kim, and Jan Rabaey, “Statistics-inspired Architectures for the Cosine Hyper-dimensional Processor,” University of California, Berkeley.
  • Abhishek Khanna, Sourav Dutta, Jorge Gomez, Wriddhi Chakraborty, Siddharth Joshi, and Suman Datta, “Spatio-Temporal Pattern Learning and Classification Using Coupled Nano-oscillators,” University of Notre Dame.
  • Sarunya Pumma, Daniele Buono, Fabio Checconi, Xinyu Que, and Wu-chun Feng, “Optimizing Large-Scale Deep Learning by Minimizing Resource Contention for Data Processing,” Virginia Tech and IBM.

The symposium closed with a panel discussion that posed two questions: (1) What problems would we like to solve with AI that we cannot? (2) What innovations are needed to solve them?

One of the toughest challenges in AI is getting enough training data to sufficiently generalize the models. A second challenge centers on explainability of the model at inference time, particularly when critical decision-making is involved. Accuracy falls short of 100% and, even more troubling, can be severely degraded by minor changes to the input (so-called adversarial examples).
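
That sensitivity to small input changes is easy to demonstrate. The toy sketch below applies a fast-gradient-sign-style perturbation to an invented linear classifier; a bounded per-feature change can swing the prediction dramatically:

```python
# Toy demonstration of input sensitivity: a small, gradient-aligned
# perturbation swings a logistic classifier's prediction. The model and
# data are invented for illustration (fast-gradient-sign style).
import numpy as np

rng = np.random.default_rng(2)
w = rng.standard_normal(64)            # a "trained" linear model
x = rng.standard_normal(64)            # a clean input

def predict(x):
    return 1 / (1 + np.exp(-(w @ x)))  # P(class 1)

# For a linear model, the gradient of the logit w.r.t. the input is w;
# step against the current prediction along sign(w).
eps = 0.15                             # max change allowed per feature
direction = -np.sign(w) if predict(x) > 0.5 else np.sign(w)
x_adv = x + eps * direction

print(f"clean:     P(class 1) = {predict(x):.3f}")
print(f"perturbed: P(class 1) = {predict(x_adv):.3f}  (eps = {eps})")
```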

The panelists noted that solving these problems requires training models on concepts in addition to decisions. AI robustness, accuracy, complexity, fairness, ethics, security, and privacy are all pivotal challenges.

Rajiv Joshi’s welcoming introduction

The question of whether AI can help solve big humanitarian problems such as climate change was also discussed. Nearer term, the panelists considered the practicality of autonomous driving vehicles, real-time wearable language translators, and understanding text and video. Solving such problems requires innovations in natural-language processing, new methods in image classification and signal processing, and high compute efficiency.

The inquisitive comments from the audience following the panel discussion made it clear that AI is not just hype, but offers hope.

The consensus of attendees, speakers, and organizers was that the day’s events had provided an important platform for sharing insights and information on the most current and compelling topics in the computing field. Additional publications (book, journal papers, etc.) based on the symposium’s technical content are planned to provide educational resources for anyone interested.

Further events centering on AI compute are in the planning stages at IBM and IEEE. Please see the website for updates.

>> This article was originally published on our sister site, EE Times.
