Custom Diffusion with Paperspace
Learn how to customize your diffusion model images with multiple concepts!
James is an ML Engineer and Head of Engagement Marketing at Paperspace, with four years' experience as a machine learning marketing specialist. He holds a BSc in Psychology from the University of St Andrews, Scotland.
In this article, we look at the steps for creating and updating a container for the Stable Diffusion Web UI, detail how to deploy the Web UI with Gradient, and discuss the newer features from the Stable Diffusion Web UI that have been added to the application since our last update.
In this blog post, we examined the architecture and capabilities of the Versatile Diffusion framework. We then demonstrated the model within a Gradient Notebook, using it to perform txt2img, img2txt, image variation, text variation, dual-guided, and Latent Image to Text to Image synthesis.
This tutorial shows in detail how to train Textual Inversion for Stable Diffusion in a Gradient Notebook, and how to use the trained embedding with prompt control to generate samples that accurately reproduce the features of the training images.
In this article, we walked through each of the steps for creating a Dreambooth concept from scratch within a Gradient Notebook, generated novel images from input prompts, and showed how to export the concept as a model checkpoint.
Follow these step-by-step instructions to learn how to train YOLOv7 on custom datasets, and then test it with our sample demo, detecting objects in the Road Sign Detection dataset with Gradient's free GPU Notebooks.
In this tutorial, we show how Whisper can be used with MoviePy to automatically generate and overlay translated subtitles on any video sample. We then walk through setting up this process to run both within a Notebook context and from an application served with Gradient Deployments.
In this tutorial, we walked through the capabilities and architecture of OpenAI's Whisper, before showcasing two ways users can make full use of the model in just minutes, with demos running in Gradient Notebooks and Deployments.
This guide shows you how to set up the Stable Diffusion web UI in a Gradient Deployment, and get started synthesizing images in just moments with Gradient's powerful GPUs.