Chip show features machine learning, fast networks, and fat memories

SAN JOSE, Calif. — Forget Moore’s Law and screaming microprocessors. This year’s International Solid-State Circuits Conference (ISSCC) is all about the age of data, in which machine learning, fast networks, and fat memories are king.

Samsung and Intel will detail 5G/LTE combo chips. In NAND flash, Toshiba will describe a 1.33-Tbit chip, and Western Digital will talk about a 128-layer one. In DRAM, SK Hynix and Samsung will report on DDR5 and LPDDR5. Separately, Samsung will present a deep-learning accelerator for smartphones.

This year’s event has no papers on 5-nm SRAMs or test chips, although it does include talks on a handful of fast 7-nm networking chips. In another break from the past, there are no papers on flagship CPUs.

“We do not believe this will be a continuing trend but, rather, an indication of where the industry is in terms of product cycles,” wrote an organizer of the microprocessor session.

In their place, ISSCC invited IBM engineers to describe the Summit and Sierra supercomputers, currently the most powerful systems in the world. The processor session also hosts an interesting paper on a robot controller that scales from 37 to 238 mW at 80–365 MHz using an Intel 22-nm process intended as a rival to fully depleted silicon-on-insulator.

Staking out the new reality, Yann LeCun, director of AI research at Facebook, will give the opening keynote. The father of convolutional neural networks will describe the road to unsupervised learning, where machines learn like people do from their environment.

AI and 5G will also be the topics of all-day tutorials and short courses that bookend the conference.

Organizers charted the efficiency and throughput of the deep-learning processors for CNNs/DNNs presented at ISSCC 2019 against the 2018 state of the art. (Source: ISSCC)

Samsung’s mobile accelerator for deep learning delivers up to 11.5 tera-operations/second (TOPS) at 0.5 V and fits into 5.5 mm² in an 8-nm process. It packs 1,024 multiply-accumulate units in a dual-core design and delivers a tenfold performance boost over the previous state of the art, said ISSCC organizers.
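As a back-of-envelope sanity check on those numbers, the peak figure can be divided across the reported MAC array. This is only a sketch: vendors differ on what counts as an "operation" (a MAC is often counted as two ops, and low-precision ops may be counted several per cycle), so the implied clock below is an estimate, not a claim about the actual design.

```python
# Back-of-envelope check of the reported Samsung accelerator figures.
# Assumption (not from the article): 1 MAC = 2 ops (multiply + add).
peak_tops = 11.5          # reported peak throughput, tera-ops/s
mac_units = 1024          # reported multiply-accumulate units

ops_per_second = peak_tops * 1e12
ops_per_mac = ops_per_second / mac_units        # throughput per MAC unit
implied_clock_ghz = ops_per_mac / 2 / 1e9       # if each MAC does 2 ops/cycle

print(f"per-MAC throughput: {ops_per_mac / 1e9:.1f} Gop/s")
print(f"implied clock at 2 ops/MAC/cycle: {implied_clock_ghz:.1f} GHz")
```

The implied multi-gigahertz clock at 0.5 V is implausible for a mobile part, which suggests the TOPS figure counts multiple low-precision operations per MAC per cycle — a common convention for these accelerators.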

Toshiba will describe a 16-nm SoC for robocars that delivers 20.5 TOPS in a 94.52-mm² die that includes 10 processors, four DSPs, and eight accelerators. It performs ASIL-B–compliant image recognition and ASIL-D–compliant control processes.

In a session on in-memory computing, a hot approach in AI acceleration, National Tsing Hua University will detail a chip that delivers 53.17 TOPS/W in binary mode using resistive RAM. The device sports an operation latency of 14.6 ns.
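An efficiency figure like 53.17 TOPS/W can be inverted into energy per operation, which is how in-memory-computing results are often compared. A minimal sketch of that conversion, using only the number reported above:

```python
# Convert the reported efficiency into energy per operation.
tops_per_watt = 53.17                 # reported binary-mode efficiency

ops_per_joule = tops_per_watt * 1e12  # 1 W = 1 J/s, so TOPS/W = Tops/J
energy_per_op_fj = 1e15 / ops_per_joule  # femtojoules per operation

print(f"energy per op: {energy_per_op_fj:.1f} fJ")
```

At roughly 19 fJ per binary operation, the appeal of computing inside resistive RAM rather than shuttling data to a separate ALU is easy to see.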

> This article was originally published on our sister site, EE Times: “AI, 5G, Big Memories Define ISSCC.”
