Inverting Images in Generative Models: An Overview
In this blog post, we take a deeper look at the inversion techniques used with GANs to embed images in the latent space and then manipulate their features.
Improve your images over raw Stable Diffusion with no extra work from the user: by adding self-attention guidance, images generated from text prompts become more realistic and more visually appealing.
In this tutorial, we look at the LLaMA model from Meta AI and show how to implement it in a Gradient Notebook, with lightning-fast access to the models via the Public Dataset.
In this article, we survey how GANs have improved over time, then walk through the revolutionary ProGAN paper to understand in depth how it works.
In this article, we look at the steps for creating and updating a container for the Stable Diffusion Web UI, detail how to deploy the Web UI with Gradient, and discuss the newer features from the Stable Diffusion Web UI that have been added to the application since our last update.
In this blog post, we examine the architecture and capabilities of the Versatile Diffusion framework. We then demonstrate the model within a Gradient Notebook, performing txt2img, img2txt, image variation, text variation, dual-guided, and Latent Image to Text to Image synthesis.
This tutorial shows in detail how to train Textual Inversion for Stable Diffusion in a Gradient Notebook, then use the learned embedding to generate samples that accurately represent the features of the training images, controlled via the prompt.
In this article, we walk through each of the steps for creating a Dreambooth concept from scratch within a Gradient Notebook, generate novel images from input prompts, and show how to export the concept as a model checkpoint.
In our newest article, we discuss autoencoders and convolutional autoencoders in the context of image data. We then show how to write custom autoencoders of our own with PyTorch, train them, and view our results in a Gradient Notebook.