Data2Vec: Self-supervised general framework for vision, speech, and text
In this tutorial, we examine the Data2Vec model from Meta AI and show how to train your own model with a ready-to-use codebase in a Gradient Notebook.
In this article, we take a look at GLIGEN, one of the latest techniques for controlling the outputs of txt2img models like Stable Diffusion, and show how to run the model in a Gradient Notebook.
In this article, we take a look at some of the fundamental concepts required for constructing neural networks from scratch. This includes detailed explanations of NN layers, activation functions, and loss functions.
In this tutorial, we look at the LLaMA model from Meta AI and show how to implement it in a Gradient Notebook, with lightning-fast access to the models using the Public Dataset.
Boosting the performance and generalization of models by ensembling multiple neural networks.
Part 2 of our series examining techniques for adding control, guidance, and variety to your Stable Diffusion pipeline.
In this article, we examine four new techniques for bringing greater control to your Stable Diffusion pipeline: T2I-Adapter, InstructPix2Pix, Attend-and-Excite, and MultiDiffusion.
In this article, we look in depth at ControlNet, a new technique for imparting high levels of control over the shape of synthesized images, and demonstrate how to run it in a Gradient Notebook.
In this article, we explore a broad overview of epistemic uncertainty in deep learning classifiers and develop intuition about how an ensemble of models can be used to detect its presence for a particular image instance.