Enhanced SSDs support AI workloads

TORONTO – There’s been much talk about the memories and architectures necessary for artificial intelligence (AI) and machine learning workloads, and Micron Technology’s latest high-performance, high-capacity solid-state drives (SSDs) put flash firmly in the mix.

The company just unveiled its 9300 series of NVM Express (NVMe) SSDs, aimed at data-intensive applications, with 3.5 GB/s of throughput on both reads and writes.

“Latency is becoming much more important in the enterprise and cloud work space, where the response time for the application is pretty important, so that your infrastructure can respond to more user requests on a given server storage platform,” Cliff Smith, Micron’s product line manager, told EE Times.

Broadly, that’s the market the 9300 series is aimed at. Aside from performance, other selling points for these customers include 28 percent less power consumption than the company’s previous generation of NVMe SSDs and capacities as high as 15.36 TB, with 32 NVMe namespaces to use the storage space as efficiently as possible.
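As a rough back-of-envelope illustration (the arithmetic here is ours, not Micron’s), the Python sketch below combines those two headline numbers: how long one sequential pass over a full 15.36 TB drive would take at a sustained 3.5 GB/s, and how much space each namespace would get under a purely hypothetical even 32-way split.

```python
# Back-of-envelope math on the 9300's published numbers.
# Assumes the full 3.5 GB/s is sustained end to end and that the
# 32 namespaces are split evenly -- both simplifications.

CAPACITY_TB = 15.36      # top-end capacity
THROUGHPUT_GB_S = 3.5    # sequential read/write throughput, GB/s
NAMESPACES = 32          # maximum NVMe namespaces

capacity_gb = CAPACITY_TB * 1000                  # decimal TB -> GB
pass_minutes = capacity_gb / THROUGHPUT_GB_S / 60
per_namespace_gb = capacity_gb / NAMESPACES

print(f"One full sequential pass: {pass_minutes:.0f} minutes")
print(f"Even 32-way namespace split: {per_namespace_gb:.0f} GB each")
```

That works out to roughly 73 minutes to stream the whole drive once, and about 480 GB per namespace under an even split.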

But more specifically, the performance and capacity put the 9300 series in a position to address the demands of AI and machine learning. Smith said the throughput and the capacity enable the SSD to ingest large datasets quickly. “When you’re loading up a workload for a learning algorithm, you’re just writing. We’re going to be able to write this data very quickly,” he said. Once that dataset is in, the learning algorithm takes over, constantly reading and training. “There’s a trick to developing the training so it’s constantly reading that dataset.”
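Smith’s description boils down to a write-once, read-many access pattern. The sketch below is a minimal, generic illustration of it; the path, chunk size, and epoch count are hypothetical placeholders, not Micron tooling.

```python
import os

DATASET_PATH = "/mnt/nvme0/train.bin"  # hypothetical mount on the SSD
CHUNK_SIZE = 4 * 1024 * 1024           # 4 MiB sequential chunks
NUM_CHUNKS = 64
EPOCHS = 3

# Ingest phase: "you're just writing" -- one sequential streaming
# pass that lands the dataset on flash at full write bandwidth.
with open(DATASET_PATH, "wb") as f:
    for _ in range(NUM_CHUNKS):
        f.write(os.urandom(CHUNK_SIZE))  # stand-in for real data

# Training phase: the learning algorithm "is constantly reading" --
# every epoch re-reads the same dataset sequentially from the drive.
for epoch in range(EPOCHS):
    with open(DATASET_PATH, "rb") as f:
        while chunk := f.read(CHUNK_SIZE):
            pass  # stand-in for a training step on this chunk
```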

One ability that Micron is still trying to perfect is parallelizing the extract, transform and load (ETL) process with training, making it possible to move vast amounts of information from data lakes to the faster SSD and then into the GPU complex. This will speed up learning and the generation of a model that can be put into production to do inference. However, there will be one drawback, said Smith: data scientists won’t be able to take long coffee breaks, thanks to the parallel processes. “These two processes are sequential today for performance reasons and, at the end of the day, some software reasons.”


Today, machine learning is sequential: the ETL process caches the dataset onto an SSD and then feeds it to the GPU, but Micron sees a near future where this can be done in parallel.
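A minimal sketch of the overlap Micron is describing, assuming nothing about its actual software: a bounded producer-consumer queue lets the ETL stage keep staging batches while training consumes the ones already on the fast tier. Here etl_fetch and train_step are hypothetical stand-ins.

```python
import queue
import threading

def etl_fetch(batch_id: int) -> bytes:
    """Stand-in: extract/transform one batch from the data lake."""
    return bytes(1024)

def train_step(batch: bytes) -> None:
    """Stand-in: consume one staged batch in the GPU complex."""

NUM_BATCHES = 100
staging = queue.Queue(maxsize=8)  # bounded buffer on the fast SSD tier

def producer() -> None:
    # ETL keeps loading while training runs, rather than finishing first.
    for i in range(NUM_BATCHES):
        staging.put(etl_fetch(i))
    staging.put(None)  # sentinel: no more batches

t = threading.Thread(target=producer)
t.start()
while (batch := staging.get()) is not None:
    train_step(batch)  # overlaps with the next ETL fetch
t.join()
```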

The latency and throughput of the NVMe 9300 series enable the ultra-fast feeding of data to the high-throughput, low-latency specialty memory sitting on the GPUs, such as GDDR, said Jason Echols, senior technical marketing manager for Micron. “The more you can keep those fed, the more you can keep all those parallel cores fed.”
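One generic way to “keep those fed” is double buffering: prefetch the next batch from the SSD while the current one is being consumed. The iterator below sketches that idea under our own assumptions; load_batch is a hypothetical callable, and this is not Micron’s or any GPU vendor’s API.

```python
from concurrent.futures import ThreadPoolExecutor

def prefetched(load_batch, num_batches):
    """Yield batches while the next one is already loading.

    `load_batch(i)` is a hypothetical callable that reads batch i
    from the SSD; batch i+1 is fetched in the background while the
    caller is still consuming batch i.
    """
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(load_batch, 0)
        for i in range(1, num_batches):
            next_future = pool.submit(load_batch, i)
            yield future.result()  # hand over the current batch
            future = next_future
        yield future.result()

# Usage with a trivial stand-in loader:
for batch in prefetched(lambda i: f"batch-{i}", num_batches=4):
    pass  # stand-in for a GPU step consuming the batch
```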

Gregory Wong, principal analyst with Forward Insights, said the 9300 series could tackle two AI-related workloads: machine learning and inferencing. With the former, it might be churning through large datasets where there are “tons and tons” of faces to learn from. Facial recognition is the inferencing part, and that’s an example of an AI workload that might need to happen at the edge. “The machine has already learned from this huge data set and now they have to put it to use,” he said. If it’s a surveillance application, you want that inference made quickly and on-site; the system can’t be querying the cloud to do face matching.

Wong said the capacity and very low latency of the Micron 9300 series are what AI workloads need, although its latency falls short of what an Intel Optane-based SSD can provide. But given that Optane is price-prohibitive for large datasets, the Micron offering makes a lot of sense when there’s lots of data to churn through.

The high capacity could be a drawback in some instances, in that the drive would take a long time to rebuild if it ran into problems; you wouldn’t want an excessively large SSD doing the inference in an autonomous vehicle, Wong said. Nor can the system be shipping data to the cloud and back to make real-time decisions, even with 5G networks, as there is always the potential for a lag or even an outage. “The car has to be able to react because it’s not just the latency, it’s also the quality of the connection.”

It will be several years before we’re at that level of autonomous driving on the roadways. In the meantime, said Wong, there are plenty of AI workloads that need to be addressed by various memory and storage technologies, including SSDs, both in the data center and at the edge, in scenarios where an immediate inference is required.

Gary Hilson is a general contributing editor with a focus on memory and flash technologies for EE Times.

>> This article was originally published on our sister site, EE Times: “Micron Puts SSD into AI Mix.”
