Solid Sands: SuperTest helping Graphcore accelerate the future of AI

Machine intelligence processor maker Graphcore (Bristol, UK) develops massively parallel computing platforms that let innovators create next-generation AI products. As part of its compiler development toolchain, the company uses SuperTest to test and validate compilers for its unique Intelligence Processing Unit architecture.

Several factors compel the need for exhaustive compiler testing. The first is building a compiler for a proprietary target architecture, where you cannot draw on the wealth of experience already acquired by the developer community. The second is a compiler that must perform complex, application-dependent optimizations, reordering or modifying code to make maximum use of target-architecture features and instructions.

For Graphcore, a UK-based chip-maker that develops silicon platforms for accelerating Artificial Intelligence applications, both these conditions apply. The instruction set and memory architecture of its massively parallel Intelligence Processing Unit (IPU), which integrates over 1200 processor cores on a single piece of silicon, are uniquely designed to accelerate today's AI applications while also allowing engineers to create new categories of AI models and products. The back-end of its software stack, called Poplar, employs multi-layered optimizations to distribute tasks between the cores and ensure they run at optimum speed and efficiency.

To promote exploration and innovation in the AI application space, the company is also committed to allowing application developers to program its IPU at the C++ level, which makes it extremely important to offer them a highly robust compiler that can handle anything they throw at it.

User confidence and convenience give further reasons why the C++ compiler for Graphcore's IPU needs rigorous testing and validation. For example, in addition to allowing programming at the C++ level, the Poplar software development stack accepts input from highly abstracted deep-learning AI frameworks, which introduces additional layers of translation between the abstracted model and the target hardware machine code.
