Building a Conversational AI with LangChain: A Hands-On Tutorial

Most AI assistants feel robotic. They answer one question at a time, forget what you just asked, and force you to start over. But with LangChain, we can do better.

In this tutorial, we’ll build an interactive AI that:

  • Handles live, back-and-forth conversations
  • Remembers past questions for smarter replies
  • Feels more like chatting with a human than querying a bot

No fluff—just practical code you can adapt for real projects.

Setting Up: The Basics

First, let’s install what we need:

bash

pip install langchain openai python-dotenv

(Pro tip: Store your OpenAI API key in a .env file to keep it secure.)
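For example, a minimal way to load that key (assuming your .env file contains a single line such as OPENAI_API_KEY=your-key-here) is the two lines below; we'll also call this at the top of the script in the next section:

python

from dotenv import load_dotenv

load_dotenv()  # Reads OPENAI_API_KEY from .env into the environment so the OpenAI client can find it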

A Simple Q&A Script

Here’s a barebones version that answers one question at a time:

python

from dotenv import load_dotenv
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

load_dotenv()  # Pull OPENAI_API_KEY from the .env file into the environment

llm = OpenAI(model="gpt-3.5-turbo-instruct", temperature=0.7)  # Slightly creative but focused

prompt = PromptTemplate(
    input_variables=["query"],
    template="You're a technical assistant. Keep answers under 3 sentences. Question: {query}"
)

qa = LLMChain(llm=llm, prompt=prompt)

response = qa.run({"query": "Explain Python decorators briefly"})
print(response)

Output:
“Python decorators are functions that modify other functions. They’re used to add functionality (like logging or timing) without changing the original code. Think of them as wrappers.”

Making It Interactive

Let’s upgrade this to a live chat loop. The magic? A simple while True loop.

python

while True:
    user_input = input("\nYou: ")
    if user_input.lower() in ["quit", "exit"]:
        print("AI: Catch you later!")
        break
    print(f"AI: {qa.run({'query': user_input})}")

Try it out:

text

You: How do I reverse a list in Python?

AI: Use my_list[::-1] or my_list.reverse().

 

You: What about sorting?

AI: my_list.sort() sorts in place; sorted(my_list) returns a new list.

 

You: quit

AI: Catch you later!

Adding Memory (So It Stops Forgetting)

Ever talked to a bot that can’t remember the last thing you said? Annoying, right? Let’s fix that with ConversationBufferMemory.

python

from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationChain

llm = OpenAI(model="gpt-3.5-turbo-instruct", temperature=0.5)  # Less randomness for consistency
memory = ConversationBufferMemory()  # Stores the full chat history

convo = ConversationChain(llm=llm, memory=memory)

while True:
    user_input = input("\nYou: ")
    if user_input.lower() in ["bye", "exit"]:
        print("AI: Later! Here's what we discussed:\n", memory.buffer)
        break
    print("AI:", convo.run(user_input))

Example Chat:

text

You: My name is Alex.

AI: Nice to meet you, Alex! What can I help you with today?

 

You: What did I just tell you?

AI: You said your name is Alex.

 

You: Write a haiku about coding.

AI: “Silent keys at night,

Bugs emerge in pale moonlight,

Fixes bring delight.”

 

You: exit

AI: Later! Here’s what we discussed:

Human: My name is Alex.

AI: Nice to meet you, Alex!… [rest of convo]

Why This Matters

  1. Natural Flow
    Unlike single-turn bots, this remembers context. Ask “What did I say earlier?” and it actually knows.
  2. Easy to Extend
    Want to add sentiment analysis or document lookups? Just chain more steps.
  3. Production-Ready
    Back ConversationBufferMemory with a Redis-hosted chat history (RedisChatMessageHistory) to scale beyond local scripts; a sketch follows this list.
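
As a rough sketch of that last point: ConversationBufferMemory can store its messages in Redis so conversations survive process restarts. The import path, constructor arguments, and session_id below reflect LangChain's RedisChatMessageHistory as I understand it; treat them as assumptions to verify against your installed version:

python

from langchain.memory import ConversationBufferMemory, RedisChatMessageHistory

# Assumption: a Redis server is reachable at the default local address.
history = RedisChatMessageHistory(
    session_id="alex-session",          # hypothetical per-user session identifier
    url="redis://localhost:6379/0",
)

memory = ConversationBufferMemory(chat_memory=history)  # Same memory API, now persisted in Redis

Drop this memory into the ConversationChain from the previous section and the rest of the loop stays the same.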

Your Next Steps

  • Experiment: Try integrating this with Discord/Slack using their APIs.
  • Level Up: Add document retrieval (e.g., “Search my notes for ‘meeting notes’”); a rough sketch appears after this list.
  • Optimize: Fine-tune the prompt for your use case (e.g., “You’re a sarcastic IT admin”).
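
For the document-retrieval idea, one minimal way to sketch it is a RetrievalQA chain over a small FAISS index. This assumes pip install faiss-cpu, reuses the llm from earlier, and uses made-up placeholder notes rather than your real files:

python

from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains import RetrievalQA

# Placeholder stand-ins for your real notes (assumption: you'd load these from disk instead).
notes = [
    "Meeting notes: we agreed to move the chatbot prototype to staging next sprint.",
    "Meeting notes: keep temperature at 0.5 until the rambling issue is resolved.",
]

vectorstore = FAISS.from_texts(notes, OpenAIEmbeddings())  # Embed and index the notes locally

retrieval_qa = RetrievalQA.from_chain_type(llm=llm, retriever=vectorstore.as_retriever())
print(retrieval_qa.run("Search my notes for 'meeting notes' about the chatbot"))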

Final Tip: If the bot starts rambling, lower the temperature or add max_tokens=150 to keep responses concise.
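
As a concrete (illustrative, not prescriptive) version of that tip, the exact numbers being just a starting point:

python

llm = OpenAI(model="gpt-3.5-turbo-instruct", temperature=0.3, max_tokens=150)  # Lower randomness, hard cap on reply length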

This isn’t just another “Hello World” tutorial—it’s the foundation for building assistants people actually enjoy using. Now go make something cool.

 
