The size and complexity of AI models is growing at an astounding pace. Demand for the compute needed to train state-of-the-art models, which often exceed tens of billions of parameters, is straining the supply and capabilities of existing chips.
As a cloud platform built to power cutting-edge accelerated applications where every ounce of performance counts, Paperspace has gained unique insight into the exciting and fast-growing AI chip space. In particular, we have been watching Graphcore emerge as a key player, delivering on its vision of increasing the efficiency and performance of AI systems with its Intelligence Processing Unit, or IPU.
As Graphcore CEO Nigel Toon discussed in a recent interview, a well-designed AI software stack is needed to truly unlock the raw performance advancements afforded by purpose-built AI silicon.
You can build all kinds of exotic hardware, but if you can't actually build the software that can translate from a person's ability to describe at a very simple level into hardware, you're not really producing a solution.
– Nigel Toon, Graphcore CEO
Paperspace enthusiastically shares this perspective: developers need a simple access layer that abstracts away the many complexities of machine learning infrastructure management. In fact, our MLOps platform, Gradient, was built to do just that.
Today, we are thrilled to announce the integration of Graphcore IPUs into Gradient Notebooks. This new machine type in Gradient provides free access to IPU-POD16 Classic machines delivering 4 petaFLOPS of AI compute.
Any Paperspace user can now effortlessly run state-of-the-art models on Graphcore IPUs, including getting started with a pre-configured Transformers sample project developed in partnership with Hugging Face. You can learn more in the announcement here.
More to come
Paperspace and Graphcore will continue to collaborate on integrating IPUs into the full end-to-end Gradient platform, offering ML pipeline and inference capabilities that are becoming increasingly important as the AI industry matures.