Chatbots have become an integral part of many applications over the past decade, providing users with instant responses and assistance. One of the key features that makes a chatbot feel more human-like is its ability to remember previous interactions within a conversation. In this blog post, we’ll explore how to build a chatbot using LangChain, a powerful framework for working with language models, and implement conversational memory to create a more engaging user experience.

What is LangChain?
LangChain is a framework designed to simplify the process of building applications with large language models (LLMs). It provides a set of tools and abstractions that make it easier to create complex, stateful applications that leverage the power of LLMs.
Setting Up the Environment
Before we dive into the code, make sure you have the necessary libraries installed:
pip install langchain langchain_openai python-dotenv openai
You’ll also need an OpenAI API key. Store it in a .env file in your project directory:
OPENAI_API_KEY=your_api_key_here
Building the Chatbot
Let’s break down the process of building our chatbot with conversational memory:
- Importing Dependencies
First, we’ll import the required modules:
import os
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory
- Setting Up the Language Model
We’ll use the OpenAI API to power our chatbot. Load the API key and initialize the model:
load_dotenv()
openai_api_key = os.getenv("OPENAI_API_KEY")
model = ChatOpenAI(model="gpt-4o-mini", openai_api_key=openai_api_key)
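One thing worth noticing before we add memory: the model on its own is stateless. A quick, optional check (using the imports above) makes this concrete:
# The bare model has no memory: each invoke() call is independent.
print(model.invoke([HumanMessage(content="My name is Alice.")]).content)
print(model.invoke([HumanMessage(content="What is my name?")]).content)
# The second answer will typically show the model has no idea,
# which is exactly the gap conversational memory fills.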
- Implementing Conversational Memory
To give our chatbot memory, we’ll use an in-memory chat history store:
store = {}
def get_session_history(session_id: str):
    if session_id not in store:
        store[session_id] = InMemoryChatMessageHistory()
    return store[session_id]
This function creates a new chat history for each unique session ID or returns an existing one.
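Here’s a quick illustration of that behavior (the session IDs are made up, just to show the isolation):
history_a = get_session_history("user-a")  # creates a fresh history
history_b = get_session_history("user-b")  # a second, independent history
history_a.add_user_message("Hello from session A")
print(len(history_a.messages))  # 1
print(len(history_b.messages))  # 0 -- session B is unaffected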
- Creating the Chatbot Prompt
We’ll define a prompt template that includes a system message and a placeholder for the conversation history:
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant. Answer all questions to the best of your ability."),
    MessagesPlaceholder(variable_name="messages")
])
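If you want to see exactly what the template produces, you can format it directly; a small sketch:
# The placeholder expands to whatever list of messages we pass in.
formatted = prompt.invoke({"messages": [HumanMessage(content="Hi there!")]})
print(formatted.messages)
# -> [SystemMessage(content='You are a helpful assistant...'), HumanMessage(content='Hi there!')]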
- Combining Components
Now, let’s combine the model, prompt, and message history:
chain = prompt | model
with_message_history = RunnableWithMessageHistory(
    chain,
    get_session_history,
    input_messages_key="messages"
)
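At this point the chatbot is fully functional. Before building the interactive loop, you can exercise it with a single invoke() call (a sketch with a made-up session ID):
# One-off call: the session_id in config selects which history to read and update.
response = with_message_history.invoke(
    {"messages": [HumanMessage(content="Hi, I'm Alice.")]},
    config={"configurable": {"session_id": "demo-session"}},
)
print(response.content)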
- Main Conversation Loop
Finally, we’ll create the main loop for our chatbot:
session_id = "abc123" # Unique session identifier
print("Chatbot: Hi! How can I assist you today?")
while True:
    user_input = input("You: ")
    if user_input.lower() in ["exit", "quit"]:
        print("Chatbot: Goodbye!")
        break
    print("Chatbot: ", end="", flush=True)
    for r in with_message_history.stream(
        {"messages": [HumanMessage(content=user_input)]},
        config={"configurable": {"session_id": session_id}}
    ):
        print(r.content, end="", flush=True)
    print()
This loop continuously prompts the user for input, sends it to the model along with the conversation history, and streams the response back to the user.
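A session demonstrating the memory might look something like this (illustrative only; your model’s wording will vary):
Chatbot: Hi! How can I assist you today?
You: My name is Alice.
Chatbot: Nice to meet you, Alice! How can I help?
You: What's my name?
Chatbot: Your name is Alice.
You: exit
Chatbot: Goodbye!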
How Conversational Memory Works
The key to our chatbot’s memory is the RunnableWithMessageHistory class. It wraps our language model chain and automatically manages the conversation history for each session. When a new message is sent, it:
- Retrieves the existing conversation history for the session.
- Adds the new message to the history.
- Sends the entire accumulated conversation history to the language model (nothing is trimmed by default, so very long conversations can eventually exceed the model’s context window).
- Stores the model’s response in the history for future reference.
This process allows the chatbot to maintain context across multiple turns of conversation, creating a more natural and engaging interaction.
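You can confirm this by peeking at the store after a few exchanges; a small sketch using the session ID from the loop above:
# After two turns, the history alternates human and AI messages:
# [HumanMessage, AIMessage, HumanMessage, AIMessage]
history = get_session_history("abc123")
for message in history.messages:
    print(f"{type(message).__name__}: {message.content}")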
Full Python Chatbot Code with Conversational Memory
import os
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory
# Load your OpenAI API key from a .env file
load_dotenv()
openai_api_key = os.getenv("OPENAI_API_KEY")
# Initialize the language model (using gpt-4o-mini)
model = ChatOpenAI(model="gpt-4o-mini", openai_api_key=openai_api_key)
# Create an in-memory chat history store
store = {}
def get_session_history(session_id: str):
    if session_id not in store:
        store[session_id] = InMemoryChatMessageHistory()
    return store[session_id]
# Create a prompt template for the chatbot
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant. Answer all questions to the best of your ability."),
    MessagesPlaceholder(variable_name="messages")
])
# Combine the model with the prompt and message history
chain = prompt | model
with_message_history = RunnableWithMessageHistory(
    chain,
    get_session_history,
    input_messages_key="messages"
)
# Conversation with history handling
session_id = "abc123" # Unique session identifier
# Start a conversation
print("Chatbot: Hi! How can I assist you today?")
while True:
    user_input = input("You: ")

    # Exit condition
    if user_input.lower() in ["exit", "quit"]:
        print("Chatbot: Goodbye!")
        break

    # Stream the response from the model token by token
    print("Chatbot: ", end="", flush=True)
    for r in with_message_history.stream(
        {"messages": [HumanMessage(content=user_input)]},
        config={"configurable": {"session_id": session_id}}
    ):
        print(r.content, end="", flush=True)
    print()  # Newline after streaming completes
Conclusion
Building a chatbot with conversational memory using LangChain is a powerful way to create more intelligent and context-aware conversational AI. By leveraging LangChain’s abstractions, we can easily implement complex features like stateful conversations and streaming responses.
This example serves as a starting point for creating more advanced chatbots. You can extend this further by implementing features such as:
- Persistent storage for chat histories (see the sketch after this list)
- More sophisticated memory management (e.g., summarization for very long conversations)
- Integration with external knowledge bases for more informed responses
- Multi-user support with separate conversation histories
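As one example of the first extension, here is a minimal sketch of a file-backed history that plugs into get_session_history unchanged. FileBackedChatHistory is a hypothetical name for this post; a production setup would more likely use a database-backed implementation:
import json
from pathlib import Path
from langchain_core.chat_history import BaseChatMessageHistory
from langchain_core.messages import BaseMessage, messages_from_dict, messages_to_dict

class FileBackedChatHistory(BaseChatMessageHistory):
    """Hypothetical sketch: persists each session's messages to a JSON file."""

    def __init__(self, file_path: str):
        self.file_path = Path(file_path)

    @property
    def messages(self) -> list[BaseMessage]:
        # Load the stored conversation, or start fresh if none exists yet.
        if not self.file_path.exists():
            return []
        return messages_from_dict(json.loads(self.file_path.read_text()))

    def add_messages(self, new_messages: list[BaseMessage]) -> None:
        # Append and rewrite the whole file (fine for a sketch, not for scale).
        combined = self.messages + list(new_messages)
        self.file_path.write_text(json.dumps(messages_to_dict(combined)))

    def clear(self) -> None:
        self.file_path.unlink(missing_ok=True)

def get_session_history(session_id: str):
    # Swap in the file-backed store; histories now survive restarts.
    return FileBackedChatHistory(f"history_{session_id}.json")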
As language models continue to evolve, frameworks like LangChain will play a crucial role in making these powerful tools more accessible and easier to integrate into real-world applications.