Introducing the next iteration of Gradient: Run on-prem, in your cloud, or hybrid

Gradient provides a production-ready platform as a service that accelerates the development of AI applications. Platform capabilities let ML teams run Jupyter Notebooks, perform distributed training, tune hyperparameters, and deploy models as RESTful APIs.


By Dillon

Update: This launch was featured in TechCrunch

Over the last few years, we've fielded countless requests to run Gradient, our MLOps SaaS platform, on existing infrastructure, ranging from local bare-metal servers to AWS VMs to Kubernetes clusters. Today, we are excited to share that the latest iteration of Gradient can run across multi-cloud, on-prem, and hybrid environments and is fully infrastructure-agnostic. The new release also introduces GradientCI, the industry's first comprehensive CI/CD engine for building, training, and deploying deep learning models.

Today, companies deploy vast resources to manage their own internal ML pipelines, which require ongoing support and maintenance. Gradient eliminates this headache by providing a production-ready platform as a service that accelerates the development of AI applications. Platform capabilities let ML teams run Jupyter Notebooks, perform distributed training, tune hyperparameters, and deploy models as RESTful APIs. Gradient enables developers to construct sophisticated end-to-end pipelines that stretch across heterogeneous infrastructure, all from a single hub.

Core benefits include:

  • An end-to-end platform for developing, training, and deploying models
  • Continuous integration between your Git repositories and Gradient
  • Workflow automation with powerful pipelining and deterministic processes
  • Cross-team collaboration: add team members, control permissions, and increase visibility across the organization
  • Leverage fully-managed and optimized Intel® Nervana™ NNP-T accelerators or transform existing infrastructure into a powerful MLOps platform
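To make the last capability concrete, here is a minimal sketch of what serving a trained model behind a RESTful endpoint looks like. This example is illustrative only and does not use Gradient's actual SDK or CLI; the route name, payload shape, and placeholder "model" (a hard-coded linear function standing in for a loaded model artifact) are all assumptions.

```python
# Minimal sketch of exposing a model as a RESTful prediction API.
# In practice the model would be a trained artifact loaded from disk;
# here a fixed linear function stands in for it.
from flask import Flask, request, jsonify

app = Flask(__name__)

# Placeholder weights standing in for a trained model's parameters.
WEIGHTS = [0.5, -1.2, 2.0]

def predict(features):
    """Compute a score as a weighted sum of the input features."""
    return sum(w * x for w, x in zip(WEIGHTS, features))

@app.route("/predict", methods=["POST"])
def predict_endpoint():
    # Expect a JSON body like {"features": [1.0, 1.0, 1.0]}.
    payload = request.get_json(force=True)
    score = predict(payload["features"])
    return jsonify({"prediction": score})
```

A client would POST a JSON feature vector to `/predict` and receive a prediction back; a platform like Gradient handles the packaging, scaling, and routing around such an endpoint so teams do not maintain that plumbing themselves.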

Paperspace is proud to be working with Intel to provide a software abstraction layer for their new AI chips.  

"Paperspace helps inspire the next generation of AI developers; they are also poised to help unlock the power of the upcoming Intel® Nervana™ Neural Network Processors. These new chips are set to deliver groundbreaking performance and Paperspace will help companies quickly and efficiently operationalize this amazing new AI hardware."
— Carlos Morales, GM, AI Software, AI Products Group at Intel.

For more info about this version of Gradient, please contact our Sales team.
