How to Use LangChain and ChatGPT in Python – An Overview


Last Updated on April 10, 2023 by Jay

I want to show you how to use LangChain and ChatGPT using Python. Large Language Models (LLMs) are getting scarily good nowadays, and the potential is both exciting and worrying. We should learn how to work with them and control them, so there’s no time to waste!

What is LangChain?

LangChain is a powerful framework for interacting with language models such as ChatGPT. We can use LangChain to build applications powered by ChatGPT in Python.

What does that mean?

We know that an LLM such as ChatGPT can generate both natural language and code. However, it cannot “run” that code. LangChain can use ChatGPT to generate code and then execute it. This is exciting (and also a bit worrying), because the language model can now, in effect, operate our computers.

LangChain is very new; its first GitHub push was on January 15, 2023. Yet it has already collected 21,000 stars on GitHub as of today, April 5, 2023.

[Screenshot: the LangChain repository on GitHub]

LangChain can work with various language models, including ChatGPT from OpenAI. It’s also possible to use LangChain with a local language model such as Alpaca or LLaMA.

This article will focus on integration with ChatGPT.

OpenAI API Access

Go to OpenAI’s website and register for a free account: https://platform.openai.com/

Then, under USER, click on API Keys and generate a new secret key. Before clicking away, make sure you copy it, since you won’t be able to view it later. If you lose the key, just delete it and create a new one.

[Screenshot: the API Keys page in the OpenAI account settings]

Upon new account registration, we’ll get some credits that will last a long time.

It seems like we get different amounts based on location: I only got $5 in credits, but many users in other countries seem to get $18.

Code Setup

Let’s install a few Python libraries.

pip install openai        #openai official API
pip install langchain     #langchain library
pip install python-dotenv #for managing API keys

I use the python-dotenv library for storing and managing API keys. It’s not a must, but I highly recommend it.

It’s never good practice to save passwords or secret keys inside your code, hence we use this approach:

  1. Create a .env file inside the folder where your Python script lives.
  2. Open the .env file with any text editor and save your API keys there, as shown in the example below.
[Screenshot: storing API keys in a .env file]
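For example, the .env file could contain key-value pairs like these (the key names match what we read in the code below; replace the placeholder values with your own keys):

# .env file - never commit this to version control
openai_api_key = sk-................
serp_api_key = ................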

LangChain Basic Model – LLM

We start off by building a simple LangChain large language model powered by ChatGPT. By default, this LLM uses the “text-davinci-003” model. We can pass in the argument model_name='gpt-3.5-turbo' to use the ChatGPT model instead. It depends on what you want to achieve; sometimes the default davinci model works better than gpt-3.5-turbo.

The temperature argument (values from 0 to 2) controls the amount of randomness in the language model’s response. A value of 0 makes the response deterministic, meaning we’ll get the same response every time. Higher values make the output more random.

from langchain.llms import OpenAI
from dotenv import dotenv_values

# load the API keys from the .env file
api_keys = dotenv_values()
openai_api_key = api_keys['openai_api_key']

llm = OpenAI(openai_api_key=openai_api_key,
             model_name='gpt-3.5-turbo',
             temperature=1)

llm
OpenAIChat(cache=None, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x000002C57176A740>, client=<class 'openai.api_resources.chat_completion.ChatCompletion'>, model_name='gpt-3.5-turbo', model_kwargs={'temperature': 1}, openai_api_key='sk-...........m', max_retries=6, prefix_messages=[], streaming=False)

We can then pass a query into the llm object to interact with ChatGPT:

llm('which one is the more popular programming language, python or java?')


'As an AI language model, I am not capable of providing an opinion, but according to multiple surveys and statistics, Java is currently ranked as the more popular programming language. However, Python has been consistently rising in popularity in recent years and is widely used in fields such as data science, machine learning, and artificial intelligence.'
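To see the effect of temperature for yourself, try building two models with different settings (a minimal sketch; at temperature 0 repeated calls return essentially the same answer, while higher values vary between runs):

# deterministic: repeated calls give (nearly) identical responses
llm_precise = OpenAI(openai_api_key=openai_api_key, temperature=0)

# creative: responses vary noticeably between calls
llm_creative = OpenAI(openai_api_key=openai_api_key, temperature=1.5)

print(llm_precise('name one use of python'))
print(llm_creative('name one use of python'))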

LangChain Chat Model

Another way to communicate with ChatGPT is through the chat model. It’s a different API from the LLM and allows for finer control, but it’s similar in terms of functionality.

from langchain.chat_models import ChatOpenAI
from langchain.schema import (
    AIMessage,
    HumanMessage,
    SystemMessage
)

chat = ChatOpenAI(openai_api_key=openai_api_key,
                  model_name='gpt-3.5-turbo',
                  temperature=0,
                  verbose=True)
response = chat([SystemMessage(content='you are an expert Canadian tour guide'),
                 HumanMessage(content='whats the most popular city in canada?')])

And we can see that the response is an “AIMessage”.

[Screenshot: the chat model’s AIMessage response]
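If you want to inspect the response object in code rather than in a screenshot, an AIMessage stores the reply text in its content attribute:

# the chat model returns an AIMessage object
print(type(response))    # <class 'langchain.schema.AIMessage'>
print(response.content)  # the text of the AI's reply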

Prompt Templates

Oftentimes we won’t be passing a hard-coded query to the language model. This is where the Prompt Template can help.

A Prompt Template looks similar to a Python f-string, as shown below. It basically allows us to plug a variable into a string query and construct a new query.

from langchain import PromptTemplate


template = "i want you to act as a local tour guide for {country_city}"
prompt = PromptTemplate(input_variables = ["country_city"], template = template)


prompt.format(country_city='Toronto, Canada')
'i want you to act as a local tour guide for Toronto, Canada'

Of course, we can include multiple variables in a single Prompt Template. Note that we wrap each variable name in a set of curly brackets, but we do not put an f in front of the string.

Then, in the PromptTemplate constructor, we pass a list of variable names to the input_variables argument, so the constructor knows that these names in the query are variables and should be replaced with whatever values we pass in later.

template_mult_var = "you are a helpful assistant to translate the following sentence from {input_lang} to {output_lang}. here's the sentence: {sentence}"

prompt_translator = PromptTemplate(input_variables = ["input_lang", "output_lang", "sentence"], template = template_mult_var)


prompt_translator.format(input_lang = 'English', output_lang='French', sentence='i love python')

As shown, the prompt we constructed is:

"you are a helpful assistant to translate the following sentence from English to French. here's sentence: i love python"

We can then pass this prompt into llm() to communicate with ChatGPT.

llm(prompt_translator.format(input_lang='english', output_lang='French', sentence='i love python'))

And the model’s response is:

"j'aime python"

Add Memory to Model

If you’ve used the web-browser-based ChatGPT, you know that the AI seems to remember the conversation within the same session. We can add the same “memory” using LangChain by passing a ConversationBufferMemory object to the ConversationChain.

from langchain.memory import ConversationBufferMemory
from langchain.llms import OpenAI
from langchain.chains import ConversationChain

conversation = ConversationChain(
    llm=llm, 
    verbose=True, 
    memory=ConversationBufferMemory()
)

Note that when we turn on the verbose=True argument, it will display the bot’s “thought process”: basically, what prompts we pass to the bot and what actions it plans and takes.

conversation.predict(input="My name is Jay. What's your name?")
[Screenshot: verbose conversation output with the LangChain-generated prompt]

The green lines, “The following is a friendly conversation between a human and an AI…”, are not something I typed; the LangChain framework added them to the prompt to help the AI better answer our questions.

You can see my prompt is in the line “Human: My name is Jay. What’s your name?” And you can see the AI’s response at the end.

Next, I asked another question: “have you heard about stable diffusion?” I didn’t expect ChatGPT to know, because Stable Diffusion launched in late 2022, whereas ChatGPT was trained on data prior to September 2021.

conversation.predict(input="have you heard about stable diffusion?")
conversation.predict(input="do you remember my name?")
[Screenshot: the AI’s responses to the follow-up questions]

Following the Stable Diffusion question, I asked if the bot remembers my name, which I told it at the beginning of the conversation. As expected, it did.
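We can also peek at what the memory object has stored so far; ConversationBufferMemory keeps the running transcript in its buffer attribute:

# print the full conversation history that gets prepended to each prompt
print(conversation.memory.buffer)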

Feed the Language Model with Custom Data

We saw that ChatGPT has no knowledge of Stable Diffusion, so let’s teach it what Stable Diffusion is.

I copied all the text from the Stable Diffusion Wikipedia page and saved it into a text file named “sd_wiki.txt”.

We are going to use the TextLoader to load the content from the text file, then feed the information into a vector store index. A lot of things are happening under the hood here, but we’ll talk about the details in a later tutorial.

from langchain.indexes import VectorstoreIndexCreator
from langchain.document_loaders import TextLoader

# load the text file and build a searchable vector store index from it
# note: the default setup expects the OPENAI_API_KEY environment variable to be set
loader = TextLoader('sd_wiki.txt')
index = VectorstoreIndexCreator().from_loaders([loader])

Next, we can ask the model what Stable Diffusion is, and the AI will know.

index.query('what is stable diffusion? and what is it used for?')
index.query('what programming language is stable diffusion written in')
[Screenshot: the index’s answers about Stable Diffusion]

In this example we used a text file, but the possibilities are nearly unlimited: we can teach the AI information from PDFs, Excel files, websites, and more.
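For example, loading a PDF instead of a text file looks roughly like this (a sketch: PyPDFLoader requires the pypdf package, and the file name here is hypothetical):

from langchain.document_loaders import PyPDFLoader
from langchain.indexes import VectorstoreIndexCreator

# load and index a PDF the same way we indexed the text file
pdf_loader = PyPDFLoader('my_document.pdf')  # hypothetical file name
pdf_index = VectorstoreIndexCreator().from_loaders([pdf_loader])
pdf_index.query('summarize this document')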

Agents

The agent is the feature I’m most excited about. You can think of an agent as an autonomous bot that we can control using natural language.

We can create a simple agent using initialize_agent, and prepare the tools our agent is allowed to use with the load_tools() function. An agent cannot access any tool we haven’t given it.

from langchain.agents import load_tools
from langchain.agents import initialize_agent

tools = load_tools(['serpapi','requests'], llm=llm, serpapi_api_key=api_keys['serp_api_key'])
agent = initialize_agent(tools, llm, agent='zero-shot-react-description', verbose=True)

The “serpapi” tool is a search service similar to Google.

The “requests” tool is essentially the Python requests module, which lets the agent fetch web pages.
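To confirm what the agent received, each loaded tool is an object with a name and a description that the model reads when deciding which tool to use:

# list the tools the agent can choose from
for tool in tools:
    print(tool.name, '-', tool.description)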

Here’s the question I’m asking the agent: “tell me what is midjourney”. ChatGPT doesn’t have knowledge of this subject either, since Midjourney came out after September 2021.

Now do you see where this is going?

agent.run("tell me what is midjourney?")
[Screenshot: the agent’s verbose chain of thought and actions]

The bot didn’t know what Midjourney is, so it:

  1. Searched for Midjourney using serpapi. It found some information, but apparently wasn’t happy with it.
  2. Decided to see if there’s a website with more information.
  3. Scraped www.midjourney.com; in the output we can see the raw HTML from midjourney.com.

This experiment didn’t fully succeed, because the scraped text (HTML code) exceeded the maximum token limit allowed by ChatGPT, but you can see the capability.

This is mind-blowing, and I think it’s going to change the way we interact with computers and AI. I hope you can see that it’s possible to build really powerful applications using LangChain, ChatGPT, and Python.
