Text-guided video synthesis with background music and speech in Gradient
Check out our guide on creating a full video synthesis pipeline with audio and speech using VideoFusion, YourTTS, Riffusion, and MoviePy.
In this article, we take a look at several useful tools for generating fast, high quality 3D images with Python.
We're excited to announce the addition of the H100 GPU to the Paperspace platform.
In this tutorial, we break down the Data2Vec model from Meta AI and show how to train your own model with a ready-to-use codebase in a Gradient Notebook.
In this article, we take a look at GLIGEN, one of the latest techniques for controlling the outputs of txt2img models like Stable Diffusion, and show how to run the model in a Gradient Notebook.
In this article, we take a look at some of the fundamental concepts required for constructing neural networks from scratch. This includes detailed explanations of NN layers, activation functions, and loss functions.
In this tutorial, we look at the LLaMA model from Meta AI and show how to implement it in a Gradient Notebook, with lightning-fast access to the models via the Public Dataset.
Boosting the performance and generalization of models by ensembling multiple neural networks.
Part 2 of our series examining techniques for adding control, guidance, and variety to your Stable Diffusion pipeline.