Fine-tuning large language models in practice: LLaMA 2
In this tutorial, we show how to fine-tune the powerful LLaMA 2 model with Paperspace's Nvidia Ampere GPUs.
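As a preview of the workflow this tutorial covers, below is a minimal sketch of parameter-efficient fine-tuning of LLaMA 2 with LoRA via Hugging Face Transformers and PEFT. The checkpoint name, stand-in dataset, and hyperparameters are illustrative assumptions rather than the tutorial's exact recipe, and the gated LLaMA 2 weights require accepting Meta's license on Hugging Face first.

import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments)

# Gated checkpoint: accept Meta's license on Hugging Face before downloading.
base = "meta-llama/Llama-2-7b-hf"
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token  # LLaMA ships without a pad token

# bfloat16 keeps memory low and is well supported on Ampere GPUs.
model = AutoModelForCausalLM.from_pretrained(
    base, torch_dtype=torch.bfloat16, device_map="auto")

# Attach low-rank adapters to the attention projections; only these train.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

# A small public text dataset stands in for your own fine-tuning corpus.
ds = load_dataset("Abirate/english_quotes", split="train[:500]")

def tokenize(batch):
    out = tok(batch["quote"], truncation=True, padding="max_length",
              max_length=128)
    out["labels"] = out["input_ids"].copy()  # causal LM: labels = inputs
    return out

ds = ds.map(tokenize, batched=True, remove_columns=ds.column_names)

trainer = Trainer(
    model=model,
    train_dataset=ds,
    args=TrainingArguments(
        output_dir="llama2-lora", per_device_train_batch_size=1,
        gradient_accumulation_steps=8, num_train_epochs=1,
        learning_rate=2e-4, bf16=True, logging_steps=10),
)
trainer.train()
model.save_pretrained("llama2-lora")  # writes only the small adapter weights

Because only the adapter weights are updated and saved, the result is a few megabytes of deltas that can be merged into, or loaded alongside, the frozen base model at inference time.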
In this article, we walk through the steps for running MLPerf 3.0 on Paperspace GPUs to show how we achieve peak performance for AI training, comparable to Nvidia's own reported results.
In this tutorial, we walk through the DiffBIR technique for blind image restoration. This Stable Diffusion-based technique shows much promise, so follow along to launch DiffBIR and explore further!
In this tutorial, we show how to construct a fully trained transformer-based language model using TorchText in a Paperspace Notebook.
Our follow-up on the STAR Framework, this time showing how to solve algebra word problems in Python.
In this tutorial, we show how to clone voices with TorToise TTS, and discuss the steps needed to ensure high-quality cloning.
In this tutorial, we go over training a LoRA model on Stable Diffusion XL using your own images.
Follow this guide to create a conversational system with a pretrained LLM in Paperspace.
This tutorial shows how the LLaMA 2 model has improved upon previous versions of LLaMA, and details how to run it using a Jupyter Notebook.