LangChain: A Beginner's Guide to Harness the Power of Language Models

In this tutorial, we show how to get started with LangChain: a useful package for streamlining your Large Language Model pipelines.

4 months ago   •   7 min read

By Shaoni Mukherjee


In 2024, we are all aware of the possibilities that ChatGPT and other LLMs have brought into our lives, and of just how powerful ChatGPT is. If we want to build something of our own that serves a similar purpose, we can now do so easily by leveraging the power of LangChain.

LangChain is a framework for developing applications that use Large Language Models (LLMs). Large Language Models are advanced machine learning models trained on vast amounts of textual data, known as a corpus, to understand and generate human-like language. Examples include GPT-4 (Generative Pre-trained Transformer 4), developed by OpenAI, and BERT (Bidirectional Encoder Representations from Transformers) and PaLM, both developed by Google, among many others.

LangChain can help create applications for various purposes, including text summarization, chatbots, question answering, and many other functionalities.

Introduction to LangChain

LangChain serves as a robust framework for creating applications fueled by language models. These applications possess the capability to:

Embrace Context Awareness: Seamlessly integrate a language model with various sources of context, such as prompt instructions, few-shot examples, and contextual content. This allows the application to ground its responses in relevant information.

Enable Reasoning: Leverage the power of a language model to engage in reasoning processes, facilitating the ability to determine appropriate responses based on provided context. This extends to making informed decisions about actions to take within the application's functionality.

Why use LangChain?

One of the biggest questions that came to mind when we were first introduced to these concepts was: why use LangChain at all? We already have ChatGPT, which is incredibly powerful. So why do we need another framework?

In this article, we will cover an introduction to LangChain and the situations where we might need it.

LangChain is a versatile interface for various Large Language Models (LLMs), offering a centralized development environment. Its modular approach enables developers and data scientists to easily compare prompts and foundation models without extensive code rewriting. This flexibility allows for the integration of multiple LLMs in a single application, such as using one for interpreting user queries and another for generating responses.

ChatGPT, an application built on OpenAI's GPT-3.5 LLM, controls what can be done with it and may reject some queries. It also has other limitations: it has no recent data in its training corpus, and, thankfully, it has no access to our personal data. The consequence is that, until recently, it was very difficult to build a customized ChatGPT-style application for a specific task.

However, using the LangChain framework, we can build our own applications and deploy them as well.

LangChain supports a wide range of LLMs, making it simple to import them with just an API key. The LLM class ensures a standardized interface for all models. To use these LLMs through LangChain, users typically need to create accounts with providers to obtain API keys, sometimes with associated costs, particularly for providers like OpenAI. Here, we use a Hugging Face API key so we can use LangChain free of cost. That said, we can also use OpenAI's API key in future articles to build applications as desired.

LangChain demo


This article guides you through the process of constructing a text summarizer by utilizing the Hugging Face API and leveraging the Bart model for summarization. 

  1. Install the necessary modules
!pip install langchain openai tiktoken transformers accelerate cohere --quiet
!pip install -U huggingface_hub
  2. Set the environment variable using the Hugging Face token
import os

os.environ["HUGGINGFACEHUB_API_TOKEN"] = "your token"

The token is typically obtained by creating an account on the Hugging Face website and generating an API token from the account settings (under Settings → Access Tokens).

  3. Import the Hugging Face Hub wrapper
from langchain import HuggingFaceHub
  4. Initialize the text summarizer using the Hugging Face Hub.
summarizer = HuggingFaceHub(
    repo_id="facebook/bart-large-cnn",
    model_kwargs={"temperature": 0, "max_length": 180},
)
repo_id="facebook/bart-large-cnn": This parameter specifies the repository ID for the Hugging Face model to be used. In this case, it is set to "facebook/bart-large-cnn," indicating the BART (Bidirectional and Auto-Regressive Transformers) model by Facebook.

  5. Start using the model to summarize short articles.
ARTICLE = """ Seventy-five years on, it is crucial to remember the sacrifices of the millions who fought and perished during World War II. Memorials and museums worldwide stand as testaments to the enduring impact of the war on the collective human consciousness.

In retrospect, World War II serves as a stark reminder of the consequences of unchecked aggression and the importance of international collaboration. The lessons learned from this tumultuous period continue to shape global politics, emphasizing the imperative of maintaining peace and fostering understanding in our interconnected world."""
# Create a function to summarize any text
def summarize(llm, text) -> str:
    return llm(f"Summarize this: {text}!")

summarize(summarizer, ARTICLE)


'It is crucial to remember the sacrifices of the millions who fought and perished during World War II. Memorials and museums worldwide stand as testaments to the enduring impact of the war on the collective human consciousness. The lessons learned from this tumultuous period continue to shape global politics, emphasizing the imperative of maintaining peace and fostering understanding.'

To access the complete code, click the demo link and use the Paperspace platform with a free GPU.

If you encounter the error "TypeError: issubclass() arg 1 must be a class", run !pip install --force-reinstall typing-extensions==4.5.0 and restart the kernel.



Prompts

A language model prompt is a user-provided set of instructions or input designed to guide the model's response. This aids the model in understanding the context and producing relevant output, whether it involves answering questions, completing sentences, or participating in a conversation.
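The templating idea behind prompts can be sketched in plain Python. LangChain wraps this same pattern in its own prompt classes; the helper below is a minimal, hypothetical stand-in to show the concept, not LangChain's actual API.

```python
# Minimal sketch of prompt templating (hypothetical helper, not
# LangChain's real API): fill named placeholders in a template.
def build_prompt(template: str, **variables: str) -> str:
    return template.format(**variables)

qa_template = (
    "Answer the question using only the context below.\n"
    "Context: {context}\n"
    "Question: {question}"
)

prompt = build_prompt(
    qa_template,
    context="LangChain connects LLMs to external data and tools.",
    question="What does LangChain do?",
)
print(prompt)
```

Keeping the template separate from the variables is what makes it easy to swap prompts or reuse one prompt across different models.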

Chat Models

ChatModels play a central role in LangChain, which provides a standardized interface for engaging with various chat models. Specifically, this interface accepts a list of messages as input and outputs a single message.

The ChatModel class is intentionally designed to establish a consistent interface across multiple model providers such as OpenAI, Cohere, Hugging Face, and others. 
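The "list of messages in, one message out" shape of that interface can be illustrated with a toy model. The Message and EchoChatModel classes below are hypothetical stand-ins for illustration, not LangChain's real classes.

```python
from dataclasses import dataclass

# Toy illustration of the chat-model interface LangChain standardizes:
# a list of role-tagged messages goes in, a single message comes out.
@dataclass
class Message:
    role: str      # "system", "human", or "ai"
    content: str

class EchoChatModel:
    """Hypothetical stand-in for a provider-backed chat model."""
    def invoke(self, messages: list[Message]) -> Message:
        last_human = [m for m in messages if m.role == "human"][-1]
        return Message(role="ai", content=f"You said: {last_human.content}")

reply = EchoChatModel().invoke([
    Message("system", "You are a helpful assistant."),
    Message("human", "Hello!"),
])
print(reply.content)
```

Because every provider is wrapped behind the same interface, swapping OpenAI for Cohere or Hugging Face means changing which model class you construct, not how you call it.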


Agents

Agents, at their core, leverage a language model to make decisions about a sequence of actions to be taken. Unlike chains, where a predefined sequence of actions is hard-coded directly in the code, agents use an LLM as a reasoning engine to determine the actions to be taken and their order.
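The core loop of an agent can be sketched in a few lines. In a real agent the tool choice comes from an LLM; the rule-based chooser and the two toy tools below are hypothetical stand-ins for that reasoning step.

```python
# Toy sketch of the agent idea: a "reasoning engine" picks the next
# tool at runtime instead of following a hard-coded sequence. A real
# agent would ask an LLM; this rule-based chooser stands in for it.
def choose_tool(question: str) -> str:
    return "calculator" if any(ch.isdigit() for ch in question) else "search"

TOOLS = {
    "calculator": lambda q: f"calculator invoked for: {q}",
    "search": lambda q: f"search invoked for: {q}",
}

def run_agent(question: str) -> str:
    tool = choose_tool(question)   # the "reasoning" step
    return TOOLS[tool](question)   # execute the chosen action

print(run_agent("What is 2 + 2?"))
print(run_agent("Who wrote Hamlet?"))
```

The key contrast with a chain is that nothing in `run_agent` fixes the order or identity of the tools in advance; the decision happens per input.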


Chains

Chains form the backbone of LangChain's workflows, seamlessly integrating Large Language Models (LLMs) with other components to build applications through the execution of a series of functions.

The fundamental chain is the LLMChain, which straightforwardly invokes a model with a prompt template. For example, consider saving a prompt as "ExamplePrompt" and intending to run it with Flan-T5. By importing LLMChain from langchain.chains, you can define a chain_example like so: LLMChain(llm=flan_t5, prompt=ExamplePrompt). Executing the chain for a given input is as simple as calling chain_example.run("input").

For scenarios where the output of one function needs to serve as the input for the next, SimpleSequentialChain comes into play. Each function within this chain can employ diverse prompts, tools, parameters, or even different models, catering to specific requirements.
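The pipe-one-output-into-the-next-input behavior that SimpleSequentialChain automates can be sketched with plain functions. The steps below are hypothetical examples, not LangChain code.

```python
# Sketch of sequential chaining: each step's output feeds the next
# step's input, mirroring what SimpleSequentialChain automates.
def run_sequential(steps, text: str) -> str:
    for step in steps:
        text = step(text)
    return text

steps = [
    lambda t: t.strip(),                  # step 1: clean up the raw input
    lambda t: f"Summarize this: {t}",     # step 2: wrap it in a prompt
]

print(run_sequential(steps, "  World War II reshaped global politics.  "))
```

In a real chain, each step could be a different prompt, tool, or even a different model; the only requirement is that each step's output is valid input for the next.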


Memory

Memory, which is still in its beta phase, is an essential component of a conversation. It allows the application to refer back to information from past exchanges. Users have various options, including preserving the complete history of all conversations, summarizing the ongoing conversation, or retaining only the most recent n exchanges.
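The "retain the most recent n exchanges" strategy can be sketched with a bounded queue. The WindowMemory class below is a hypothetical illustration of the idea, not LangChain's Memory API.

```python
from collections import deque

# Sketch of windowed conversation memory: keep only the most recent
# n exchanges (hypothetical class, not LangChain's actual API).
class WindowMemory:
    def __init__(self, n: int) -> None:
        self.exchanges = deque(maxlen=n)   # older exchanges fall off

    def add(self, human: str, ai: str) -> None:
        self.exchanges.append((human, ai))

    def history(self) -> str:
        return "\n".join(f"Human: {h}\nAI: {a}" for h, a in self.exchanges)

memory = WindowMemory(n=2)
memory.add("Hi", "Hello!")
memory.add("What is LangChain?", "A framework for LLM-powered apps.")
memory.add("Thanks", "You're welcome!")
print(memory.history())   # only the two most recent exchanges remain
```

The rendered history string is what would be prepended to the next prompt, which is how a stateless LLM appears to "remember" the conversation.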

Memory process in Langchain (Source)


Applications made with LangChain provide great utility for a variety of use cases, from straightforward question-answering and text generation tasks to more complex solutions that use an LLM as a “reasoning engine.”-IBM

LangChain provides an amazing platform to build and deploy applications such as chatbots (the most prominent use case), text summarizers, question-answering systems, and much more. Additionally, the collaboration between Hugging Face and LangChain sets the stage for groundbreaking advancements in Natural Language Processing (NLP), offering the potential for more advanced language models and enhanced language comprehension across a multitude of applications and industries.

That's all for this article! In the future, we will bring more demos and tutorials on LangChain. This article was a small introduction to this amazing and vast framework.

We hope you enjoyed reading the article!

Thanks for reading.


For further reading on LLMs

Introduction | 🦜️🔗 Langchain — "LangChain is a framework for developing applications powered by language models."
Uniting Forces: Integrating Hugging Face with Langchain for Enhanced Natural Language Processing — a blog post by Ankush Singal on Hugging Face
