Rodin: Roll-out Diffusion Network
This article covers the Rodin diffusion model for creating 3D avatars from neural radiance fields (NeRFs).
In this article, we take a deeper look at Neural Articulated Radiance Fields and examine their potential for 3D modeling.
In this blog post, we take a deeper look at the inversion techniques GANs use to embed features and then manipulate them in the latent space.
Improve on raw Stable Diffusion outputs with no extra work from the user: by adding self-attention guidance, images generated from text prompts become more realistic and visually appealing.
In this tutorial, we look at the LLaMA model from Meta AI and show how to implement it in a Gradient Notebook, with lightning-fast access to the models via the Public Dataset.
In this article, we survey improvements to GANs over time, then walk through the revolutionary ProGAN paper to understand in depth how it works.
In this article, we look at the steps for creating and updating a container for the Stable Diffusion Web UI, detail how to deploy the Web UI with Gradient, and discuss the newer features from the Stable Diffusion Web UI that have been added to the application since our last update.
In this blog post, we examine the architecture and capabilities of the Versatile Diffusion framework. We then demonstrate the model in a Gradient Notebook, performing txt2img, img2txt, image variation, text variation, dual-guided, and Latent Image to Text to Image synthesis.
This tutorial shows in detail how to train Textual Inversion for Stable Diffusion in a Gradient Notebook, then use it to generate samples that accurately represent the features of the training images, controlled through the prompt.