3D Generation Deep Learning Models
In this article, we take a look at several useful tools for generating fast, high-quality 3D images with Python.
In this article, we take a look at GLIGEN, one of the latest techniques for controlling the outputs of txt2img models like Stable Diffusion, and show how to run the model in a Gradient Notebook.
Part 2 of our series examining techniques for adding control, guidance, and variety to your Stable Diffusion pipeline.
In this blog post, we review recent diffusion models that have been used not only for generic image generation but also for image editing. We go over each model and conclude with the pros and cons of each.
In this article, we examine four new techniques for bringing greater control to your Stable Diffusion pipeline: T2I-Adapter, InstructPix2Pix, Attend-and-Excite, and MultiDiffusion.
In this article, we look in depth at ControlNet, a new technique for imparting a high degree of control over the shape of synthesized images, and demonstrate how to run it in a Gradient Notebook.
When it comes to image synthesis algorithms, we need a method to quantify the differences between generated images and real images in a way that corresponds with human judgment. In this article, we highlight some of these metrics that are commonly used in the field today.
Learn how to customize your diffusion model images with multiple concepts!
In this article, we look at the steps for creating and updating a container for the Stable Diffusion Web UI, detail how to deploy the Web UI with Gradient, and discuss the newer features from the Stable Diffusion Web UI that have been added to the application since our last update.