Rodin: Roll-out Diffusion Network
This article covers the Rodin diffusion model for creating 3D avatars from neural radiance fields (NeRFs).
In this article, we continue our look at the theory behind recent work on transferring the capabilities of 2D diffusion models to 3D-aware generative diffusion models.
Improve on raw Stable Diffusion with no extra work from the user: by adding self-attention guidance, images generated from text prompts become more realistic and more pleasant to look at.
In this article, we take a look at several useful tools for quickly generating high-quality 3D images with Python.
In this article, we take a look at GLIGEN, one of the latest techniques for controlling the outputs of txt2img models like Stable Diffusion, and show how to run the model in a Gradient Notebook.
Part 2 of our series examining techniques for adding control, guidance, and variety to your Stable Diffusion pipeline.
In this blog post, we review recent diffusion models that have been used not only for generic image generation but also for image editing. We go over each model and summarize its pros and cons.
In this article, we examine four new techniques for bringing greater control to your Stable Diffusion pipeline: T2I-Adapter, InstructPix2Pix, Attend-and-Excite, and MultiDiffusion.
In this article, we take an in-depth look at ControlNet, a new technique for imparting high levels of control over the shape of synthesized images, and demonstrate how to run it in a Gradient Notebook.