https://python.langchain.com/v0.2/docs/how_to/query_no_queries/
How to handle cases where no queries are generated
==================================================
Sometimes, a query analysis technique may allow for any number of queries to be generated - including no queries! In this case, our overall chain will need to inspect the result of the query analysis before deciding whether to call the retriever or not.
We will use mock data for this example.
Setup
---------------------------------------
#### Install dependencies
```python
# %pip install -qU langchain langchain-community langchain-openai langchain-chroma
```
#### Set environment variables
We'll use OpenAI in this example:
```python
import getpass
import os

os.environ["OPENAI_API_KEY"] = getpass.getpass()

# Optional, uncomment to trace runs with LangSmith. Sign up here: https://smith.langchain.com.
# os.environ["LANGCHAIN_TRACING_V2"] = "true"
# os.environ["LANGCHAIN_API_KEY"] = getpass.getpass()
```
### Create Index
We will create a vectorstore over fake information.
```python
from langchain_chroma import Chroma
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

texts = ["Harrison worked at Kensho"]
embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
vectorstore = Chroma.from_texts(
    texts,
    embeddings,
)
retriever = vectorstore.as_retriever()
```
**API Reference:**[OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html) | [RecursiveCharacterTextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.RecursiveCharacterTextSplitter.html)
Query analysis
------------------------------------------------------------------
We will use function calling to structure the output. However, we will configure the LLM such that it doesn't NEED to call the function representing a search query (should it decide not to). We will then also use a prompt to do query analysis that explicitly lays out when it should and shouldn't make a search.
```python
from typing import Optional

from langchain_core.pydantic_v1 import BaseModel, Field


class Search(BaseModel):
    """Search over a database of job records."""

    query: str = Field(
        ...,
        description="Similarity search query applied to job record.",
    )
```
```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI

system = """You have the ability to issue search queries to get information to help answer user information.

You do not NEED to look things up. If you don't need to, then just respond normally."""
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system),
        ("human", "{question}"),
    ]
)
llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)
structured_llm = llm.bind_tools([Search])
query_analyzer = {"question": RunnablePassthrough()} | prompt | structured_llm
```
**API Reference:**[ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) | [RunnablePassthrough](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html) | [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html)
We can see that by invoking this we get a message that sometimes - but not always - returns a tool call.
query_analyzer.invoke("where did Harrison Work")
AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_ZnoVX4j9Mn8wgChaORyd1cvq', 'function': {'arguments': '{"query":"Harrison"}', 'name': 'Search'}, 'type': 'function'}]})
query_analyzer.invoke("hi!")
AIMessage(content='Hello! How can I assist you today?')
Retrieval with query analysis
---------------------------------------------------------------------------------------------------------------
So how would we include this in a chain? Let's look at an example below.
```python
from langchain_core.output_parsers.openai_tools import PydanticToolsParser
from langchain_core.runnables import chain

output_parser = PydanticToolsParser(tools=[Search])
```
**API Reference:**[PydanticToolsParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.openai_tools.PydanticToolsParser.html) | [chain](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.chain.html)
```python
@chain
def custom_chain(question):
    response = query_analyzer.invoke(question)
    if "tool_calls" in response.additional_kwargs:
        query = output_parser.invoke(response)
        docs = retriever.invoke(query[0].query)
        # Could add more logic - like another LLM call - here
        return docs
    else:
        return response
```
custom_chain.invoke("where did Harrison Work")
Number of requested results 4 is greater than number of elements in index 1, updating n_results = 1
[Document(page_content='Harrison worked at Kensho')]
custom_chain.invoke("hi!")
AIMessage(content='Hello! How can I assist you today?')
https://python.langchain.com/v0.2/docs/how_to/output_parser_structured/
How to use output parsers to parse an LLM response into structured format
=========================================================================
Language models output text. But there are times where you want to get more structured information than just text back. While some model providers support [built-in ways to return structured output](/v0.2/docs/how_to/structured_output/), not all do.
Output parsers are classes that help structure language model responses. There are two main methods an output parser must implement:
* "Get format instructions": A method which returns a string containing instructions for how the output of a language model should be formatted.
* "Parse": A method which takes in a string (assumed to be the response from a language model) and parses it into some structure.
And then one optional one:
* "Parse with prompt": A method which takes in a string (assumed to be the response from a language model) and a prompt (assumed to be the prompt that generated such a response) and parses it into some structure. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so.
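To make these methods concrete, here is a minimal sketch of a custom parser; the class name and the comma-splitting behavior are purely illustrative, not a built-in parser:

```python
from typing import List

from langchain_core.output_parsers import BaseOutputParser


class CommaListParser(BaseOutputParser[List[str]]):
    """Hypothetical parser that turns 'a, b, c' into ['a', 'b', 'c']."""

    def get_format_instructions(self) -> str:
        # "Get format instructions": tell the model how to format its reply
        return "Your response should be a list of comma separated values, e.g. `foo, bar, baz`"

    def parse(self, text: str) -> List[str]:
        # "Parse": turn the raw model output into a structured value
        return [item.strip() for item in text.split(",")]
```

The `PydanticOutputParser` shown below implements these same methods, with its format instructions derived from a Pydantic schema.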
Get started
---------------------------------------------------------
Below we go over the main type of output parser, the `PydanticOutputParser`.
```python
from langchain_core.output_parsers import PydanticOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field, validator
from langchain_openai import OpenAI

model = OpenAI(model_name="gpt-3.5-turbo-instruct", temperature=0.0)


# Define your desired data structure.
class Joke(BaseModel):
    setup: str = Field(description="question to set up a joke")
    punchline: str = Field(description="answer to resolve the joke")

    # You can add custom validation logic easily with Pydantic.
    @validator("setup")
    def question_ends_with_question_mark(cls, field):
        if field[-1] != "?":
            raise ValueError("Badly formed question!")
        return field


# Set up a parser + inject instructions into the prompt template.
parser = PydanticOutputParser(pydantic_object=Joke)
prompt = PromptTemplate(
    template="Answer the user query.\n{format_instructions}\n{query}\n",
    input_variables=["query"],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)

# And a query intended to prompt a language model to populate the data structure.
prompt_and_model = prompt | model
output = prompt_and_model.invoke({"query": "Tell me a joke."})
parser.invoke(output)
```
**API Reference:**[PydanticOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.pydantic.PydanticOutputParser.html) | [PromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.prompt.PromptTemplate.html) | [OpenAI](https://api.python.langchain.com/en/latest/llms/langchain_openai.llms.base.OpenAI.html)
Joke(setup='Why did the chicken cross the road?', punchline='To get to the other side!')
LCEL
------------------------------------
Output parsers implement the [Runnable interface](/v0.2/docs/concepts/#interface), the basic building block of the [LangChain Expression Language (LCEL)](/v0.2/docs/concepts/#langchain-expression-language-lcel). This means they support `invoke`, `ainvoke`, `stream`, `astream`, `batch`, `abatch`, `astream_log` calls.
Output parsers accept a string or `BaseMessage` as input and can return an arbitrary type.
parser.invoke(output)
Joke(setup='Why did the chicken cross the road?', punchline='To get to the other side!')
Instead of manually invoking the parser, we also could've just added it to our `Runnable` sequence:
```python
chain = prompt | model | parser

chain.invoke({"query": "Tell me a joke."})
```
Joke(setup='Why did the chicken cross the road?', punchline='To get to the other side!')
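Because the parser and the composed chain are both Runnables, the other interface methods listed above are available too. A small sketch reusing the `chain` defined above (the `await` line assumes you are inside an async context):

```python
# Parse several queries in one call with `batch`
chain.batch([{"query": "Tell me a joke."}, {"query": "Tell me another joke."}])

# Or call the chain asynchronously with `ainvoke`
# await chain.ainvoke({"query": "Tell me a joke."})
```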
While all parsers support the streaming interface, only certain parsers can stream through partially parsed objects, since this is highly dependent on the output type. Parsers which cannot construct partial objects will simply yield the fully parsed output.
The `SimpleJsonOutputParser` for example can stream through partial outputs:
```python
from langchain.output_parsers.json import SimpleJsonOutputParser

json_prompt = PromptTemplate.from_template(
    "Return a JSON object with an `answer` key that answers the following question: {question}"
)
json_parser = SimpleJsonOutputParser()
json_chain = json_prompt | model | json_parser
```
**API Reference:**[SimpleJsonOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.json.SimpleJsonOutputParser.html)
list(json_chain.stream({"question": "Who invented the microscope?"}))
[{}, {'answer': ''}, {'answer': 'Ant'}, {'answer': 'Anton'}, {'answer': 'Antonie'}, {'answer': 'Antonie van'}, {'answer': 'Antonie van Lee'}, {'answer': 'Antonie van Leeu'}, {'answer': 'Antonie van Leeuwen'}, {'answer': 'Antonie van Leeuwenho'}, {'answer': 'Antonie van Leeuwenhoek'}]
While the PydanticOutputParser cannot:
list(chain.stream({"query": "Tell me a joke."}))
[Joke(setup='Why did the chicken cross the road?', punchline='To get to the other side!')]
https://python.langchain.com/v0.2/docs/how_to/toolkits/
How to use toolkits
===================
Toolkits are collections of tools that are designed to be used together for specific tasks. They have convenient loading methods. For a complete list of available ready-made toolkits, visit [Integrations](/v0.2/docs/integrations/toolkits/).
All Toolkits expose a `get_tools` method which returns a list of tools. You can therefore do:
```python
# Initialize a toolkit
toolkit = ExampleToolkit(...)

# Get list of tools
tools = toolkit.get_tools()

# Create agent
agent = create_agent_method(llm, tools, prompt)
```
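As a concrete sketch of the same pattern (the SQLite URI is a placeholder and the model choice is just an example), the SQL toolkit can be loaded like this:

```python
from langchain_community.agent_toolkits import SQLDatabaseToolkit
from langchain_community.utilities import SQLDatabase
from langchain_openai import ChatOpenAI

# Placeholder database URI - point this at your own database
db = SQLDatabase.from_uri("sqlite:///example.db")
llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)

toolkit = SQLDatabaseToolkit(db=db, llm=llm)
tools = toolkit.get_tools()  # a list of tools, ready to hand to an agent constructor
```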
https://python.langchain.com/v0.2/docs/how_to/functions/
How to run custom functions
===========================
Prerequisites
This guide assumes familiarity with the following concepts:
* [LangChain Expression Language (LCEL)](/v0.2/docs/concepts/#langchain-expression-language)
* [Chaining runnables](/v0.2/docs/how_to/sequence/)
You can use arbitrary functions as [Runnables](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable). This is useful for formatting or when you need functionality not provided by other LangChain components. Custom functions used as Runnables are called [`RunnableLambdas`](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableLambda.html).
Note that all inputs to these functions need to be a SINGLE argument. If you have a function that accepts multiple arguments, you should write a wrapper that accepts a single dict input and unpacks it into multiple arguments.
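A minimal sketch of that wrapper pattern (the function names here are hypothetical; the constructor example later in this guide embeds the same idea in a larger chain):

```python
from langchain_core.runnables import RunnableLambda


def _multiply_lengths(text1: str, text2: str) -> int:
    # Two positional arguments, so this can't be used as a Runnable directly
    return len(text1) * len(text2)


def multiply_lengths(inputs: dict) -> int:
    # Single-dict wrapper that unpacks into the real arguments
    return _multiply_lengths(inputs["text1"], inputs["text2"])


RunnableLambda(multiply_lengths).invoke({"text1": "foo", "text2": "gah"})  # -> 9
```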
This guide will cover:
* How to explicitly create a runnable from a custom function using the `RunnableLambda` constructor and the convenience `@chain` decorator
* Coercion of custom functions into runnables when used in chains
* How to accept and use run metadata in your custom function
* How to stream with custom functions by having them return generators
Using the constructor
---------------------------------------------------------------------------------------
Below, we explicitly wrap our custom logic using the `RunnableLambda` constructor:
```python
%pip install -qU langchain langchain_openai

import os
from getpass import getpass

os.environ["OPENAI_API_KEY"] = getpass()
```
```python
from operator import itemgetter

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableLambda
from langchain_openai import ChatOpenAI


def length_function(text):
    return len(text)


def _multiple_length_function(text1, text2):
    return len(text1) * len(text2)


def multiple_length_function(_dict):
    return _multiple_length_function(_dict["text1"], _dict["text2"])


model = ChatOpenAI()
prompt = ChatPromptTemplate.from_template("what is {a} + {b}")

chain1 = prompt | model

chain = (
    {
        "a": itemgetter("foo") | RunnableLambda(length_function),
        "b": {"text1": itemgetter("foo"), "text2": itemgetter("bar")}
        | RunnableLambda(multiple_length_function),
    }
    | prompt
    | model
)

chain.invoke({"foo": "bar", "bar": "gah"})
```
**API Reference:**[ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) | [RunnableLambda](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableLambda.html) | [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html)
AIMessage(content='3 + 9 equals 12.', response_metadata={'token_usage': {'completion_tokens': 8, 'prompt_tokens': 14, 'total_tokens': 22}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-73728de3-e483-49e3-ad54-51bd9570e71a-0')
The convenience `@chain` decorator
------------------------------------------------------------------------------------------------------------------------
You can also turn an arbitrary function into a chain by adding a `@chain` decorator. This is functionally equivalent to wrapping the function in a `RunnableLambda` constructor as shown above. Here's an example:
```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import chain

prompt1 = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
prompt2 = ChatPromptTemplate.from_template("What is the subject of this joke: {joke}")


@chain
def custom_chain(text):
    prompt_val1 = prompt1.invoke({"topic": text})
    output1 = ChatOpenAI().invoke(prompt_val1)
    parsed_output1 = StrOutputParser().invoke(output1)
    chain2 = prompt2 | ChatOpenAI() | StrOutputParser()
    return chain2.invoke({"joke": parsed_output1})


custom_chain.invoke("bears")
```
**API Reference:**[StrOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.string.StrOutputParser.html) | [chain](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.chain.html)
'The subject of the joke is the bear and his girlfriend.'
Above, the `@chain` decorator is used to convert `custom_chain` into a runnable, which we invoke with the `.invoke()` method.
If you are using tracing with [LangSmith](https://docs.smith.langchain.com/), you should see a `custom_chain` trace in there, with the calls to OpenAI nested underneath.
Automatic coercion in chains
------------------------------------------------------------------------------------------------------------
When using custom functions in chains with the pipe operator (`|`), you can omit the `RunnableLambda` or `@chain` constructor and rely on coercion. Here's a simple example with a function that takes the output from the model and returns the first five letters of it:
```python
prompt = ChatPromptTemplate.from_template("tell me a story about {topic}")
model = ChatOpenAI()

chain_with_coerced_function = prompt | model | (lambda x: x.content[:5])

chain_with_coerced_function.invoke({"topic": "bears"})
```
'Once '
Note that we didn't need to wrap the custom function `(lambda x: x.content[:5])` in a `RunnableLambda` constructor because the `model` on the left of the pipe operator is already a Runnable. The custom function is **coerced** into a runnable. See [this section](/v0.2/docs/how_to/sequence/#coercion) for more information.
Passing run metadata
------------------------------------------------------------------------------------
Runnable lambdas can optionally accept a [RunnableConfig](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.config.RunnableConfig.html#langchain_core.runnables.config.RunnableConfig) parameter, which they can use to pass callbacks, tags, and other configuration information to nested runs.
````python
import json

from langchain_core.runnables import RunnableConfig


def parse_or_fix(text: str, config: RunnableConfig):
    fixing_chain = (
        ChatPromptTemplate.from_template(
            "Fix the following text:\n\n```text\n{input}\n```\nError: {error}"
            " Don't narrate, just respond with the fixed data."
        )
        | model
        | StrOutputParser()
    )
    for _ in range(3):
        try:
            return json.loads(text)
        except Exception as e:
            text = fixing_chain.invoke({"input": text, "error": e}, config)
    return "Failed to parse"


from langchain_community.callbacks import get_openai_callback

with get_openai_callback() as cb:
    output = RunnableLambda(parse_or_fix).invoke(
        "{foo: bar}", {"tags": ["my-tag"], "callbacks": [cb]}
    )
    print(output)
    print(cb)
````
**API Reference:**[RunnableConfig](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.config.RunnableConfig.html) | [get\_openai\_callback](https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.manager.get_openai_callback.html)
```text
{'foo': 'bar'}
Tokens Used: 62
    Prompt Tokens: 56
    Completion Tokens: 6
Successful Requests: 1
Total Cost (USD): $9.6e-05
```
Streaming
---------------------------------------------------
note
[RunnableLambda](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableLambda.html) is best suited for code that does not need to support streaming. If you need to support streaming (i.e., be able to operate on chunks of inputs and yield chunks of outputs), use [RunnableGenerator](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableGenerator.html) instead as in the example below.
You can use generator functions (i.e., functions that use the `yield` keyword and behave like iterators) in a chain.
The signature of these generators should be `Iterator[Input] -> Iterator[Output]`. Or for async generators: `AsyncIterator[Input] -> AsyncIterator[Output]`.
These are useful for:
* implementing a custom output parser
* modifying the output of a previous step, while preserving streaming capabilities
Here's an example of a custom output parser for comma-separated lists. First, we create a chain that generates such a list as text:
```python
from typing import Iterator, List

prompt = ChatPromptTemplate.from_template(
    "Write a comma-separated list of 5 animals similar to: {animal}. Do not include numbers"
)
str_chain = prompt | model | StrOutputParser()

for chunk in str_chain.stream({"animal": "bear"}):
    print(chunk, end="", flush=True)
```
lion, tiger, wolf, gorilla, panda
Next, we define a custom function that will aggregate the currently streamed output and yield it when the model generates the next comma in the list:
```python
# This is a custom parser that splits an iterator of llm tokens
# into a list of strings separated by commas
def split_into_list(input: Iterator[str]) -> Iterator[List[str]]:
    # hold partial input until we get a comma
    buffer = ""
    for chunk in input:
        # add current chunk to buffer
        buffer += chunk
        # while there are commas in the buffer
        while "," in buffer:
            # split buffer on comma
            comma_index = buffer.index(",")
            # yield everything before the comma
            yield [buffer[:comma_index].strip()]
            # save the rest for the next iteration
            buffer = buffer[comma_index + 1 :]
    # yield the last chunk
    yield [buffer.strip()]


list_chain = str_chain | split_into_list

for chunk in list_chain.stream({"animal": "bear"}):
    print(chunk, flush=True)
```
```text
['lion']
['tiger']
['wolf']
['gorilla']
['raccoon']
```
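Piping the plain generator function into the chain coerces it into a runnable. If you prefer to be explicit, here is a sketch of the equivalent wrapping mentioned in the note above (reusing the same `split_into_list` function):

```python
from langchain_core.runnables import RunnableGenerator

# Equivalent to the coercion above: wrap the generator function explicitly
list_chain = str_chain | RunnableGenerator(split_into_list)
```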
Invoking it gives a full array of values:
list_chain.invoke({"animal": "bear"})
['lion', 'tiger', 'wolf', 'gorilla', 'raccoon']
Async version
---------------------------------------------------------------
If you are working in an `async` environment, here is an `async` version of the above example:
```python
from typing import AsyncIterator


async def asplit_into_list(
    input: AsyncIterator[str],
) -> AsyncIterator[List[str]]:  # async def
    buffer = ""
    async for (
        chunk
    ) in input:  # `input` is a `async_generator` object, so use `async for`
        buffer += chunk
        while "," in buffer:
            comma_index = buffer.index(",")
            yield [buffer[:comma_index].strip()]
            buffer = buffer[comma_index + 1 :]
    yield [buffer.strip()]


list_chain = str_chain | asplit_into_list

async for chunk in list_chain.astream({"animal": "bear"}):
    print(chunk, flush=True)
```
```text
['lion']
['tiger']
['wolf']
['gorilla']
['panda']
```
await list_chain.ainvoke({"animal": "bear"})
['lion', 'tiger', 'wolf', 'gorilla', 'panda']
Next steps
------------------------------------------------------
Now you've learned a few different ways to use custom logic within your chains, and how to implement streaming.
To learn more, see the other how-to guides on runnables in this section.
https://python.langchain.com/v0.2/docs/how_to/output_parser_yaml/
How to parse YAML output
========================
Prerequisites
This guide assumes familiarity with the following concepts:
* [Chat models](/v0.2/docs/concepts/#chat-models)
* [Output parsers](/v0.2/docs/concepts/#output-parsers)
* [Prompt templates](/v0.2/docs/concepts/#prompt-templates)
* [Structured output](/v0.2/docs/how_to/structured_output/)
* [Chaining runnables together](/v0.2/docs/how_to/sequence/)
LLMs from different providers often have different strengths depending on the specific data they are trained on. This also means that some may be "better" and more reliable at generating output in formats other than JSON.
This output parser allows users to specify an arbitrary schema and query LLMs for outputs that conform to that schema, using YAML to format their response.
note
Keep in mind that large language models are leaky abstractions! You'll have to use an LLM with sufficient capacity to generate well-formed YAML.
```python
%pip install -qU langchain langchain-openai

import os
from getpass import getpass

os.environ["OPENAI_API_KEY"] = getpass()
```
We use [Pydantic](https://docs.pydantic.dev) with the [`YamlOutputParser`](https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.yaml.YamlOutputParser.html#langchain.output_parsers.yaml.YamlOutputParser) to declare our data model and give the model more context as to what type of YAML it should generate:
```python
from langchain.output_parsers import YamlOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import ChatOpenAI


# Define your desired data structure.
class Joke(BaseModel):
    setup: str = Field(description="question to set up a joke")
    punchline: str = Field(description="answer to resolve the joke")


model = ChatOpenAI(temperature=0)

# And a query intended to prompt a language model to populate the data structure.
joke_query = "Tell me a joke."

# Set up a parser + inject instructions into the prompt template.
parser = YamlOutputParser(pydantic_object=Joke)
prompt = PromptTemplate(
    template="Answer the user query.\n{format_instructions}\n{query}\n",
    input_variables=["query"],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)

chain = prompt | model | parser

chain.invoke({"query": joke_query})
```
**API Reference:**[YamlOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.yaml.YamlOutputParser.html) | [PromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.prompt.PromptTemplate.html) | [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html)
Joke(setup="Why couldn't the bicycle find its way home?", punchline='Because it lost its bearings!')
The parser will automatically parse the output YAML and create a Pydantic model with the data. We can see the parser's `format_instructions`, which get added to the prompt:
parser.get_format_instructions()
'The output should be formatted as a YAML instance that conforms to the given JSON schema below.\n\n# Examples\n## Schema\n```\n{"title": "Players", "description": "A list of players", "type": "array", "items": {"$ref": "#/definitions/Player"}, "definitions": {"Player": {"title": "Player", "type": "object", "properties": {"name": {"title": "Name", "description": "Player name", "type": "string"}, "avg": {"title": "Avg", "description": "Batting average", "type": "number"}}, "required": ["name", "avg"]}}}\n```\n## Well formatted instance\n```\n- name: John Doe\n avg: 0.3\n- name: Jane Maxfield\n avg: 1.4\n```\n\n## Schema\n```\n{"properties": {"habit": { "description": "A common daily habit", "type": "string" }, "sustainable_alternative": { "description": "An environmentally friendly alternative to the habit", "type": "string"}}, "required": ["habit", "sustainable_alternative"]}\n```\n## Well formatted instance\n```\nhabit: Using disposable water bottles for daily hydration.\nsustainable_alternative: Switch to a reusable water bottle to reduce plastic waste and decrease your environmental footprint.\n``` \n\nPlease follow the standard YAML formatting conventions with an indent of 2 spaces and make sure that the data types adhere strictly to the following JSON schema: \n```\n{"properties": {"setup": {"title": "Setup", "description": "question to set up a joke", "type": "string"}, "punchline": {"title": "Punchline", "description": "answer to resolve the joke", "type": "string"}}, "required": ["setup", "punchline"]}\n```\n\nMake sure to always enclose the YAML output in triple backticks (```). Please do not add anything other than valid YAML output!'
You can and should experiment with adding your own formatting hints in the other parts of your prompt to either augment or replace the default instructions.
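For instance, here is a sketch that layers an extra hand-written hint on top of the parser's default instructions, reusing the `parser`, `model`, and `joke_query` defined above (the extra wording is only an example):

```python
prompt_with_hints = PromptTemplate(
    template=(
        "Answer the user query.\n{format_instructions}\n"
        # Extra, hand-written hint layered on top of the default instructions
        "Keep the YAML minimal and include only the keys from the schema.\n"
        "{query}\n"
    ),
    input_variables=["query"],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)

chain = prompt_with_hints | model | parser
chain.invoke({"query": joke_query})
```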
Next steps
------------------------------------------------------
You've now learned how to prompt a model to return YAML. Next, check out the [broader guide on obtaining structured output](/v0.2/docs/how_to/structured_output/) for other related techniques.
https://python.langchain.com/v0.2/docs/how_to/structured_output/
How to return structured data from a model
==========================================
Prerequisites
This guide assumes familiarity with the following concepts:
* [Chat models](/v0.2/docs/concepts/#chat-models)
* [Function/tool calling](/v0.2/docs/concepts/#functiontool-calling)
It is often useful to have a model return output that matches a specific schema. One common use-case is extracting data from text to insert into a database or use with some other downstream system. This guide covers a few strategies for getting structured outputs from a model.
The `.with_structured_output()` method
--------------------------------------------------------------------------------------------------------------------------------
Supported models
You can find a [list of models that support this method here](/v0.2/docs/integrations/chat/).
This is the easiest and most reliable way to get structured outputs. `with_structured_output()` is implemented for models that provide native APIs for structuring outputs, like tool/function calling or JSON mode, and makes use of these capabilities under the hood.
This method takes a schema as input which specifies the names, types, and descriptions of the desired output attributes. The method returns a model-like Runnable, except that instead of outputting strings or Messages it outputs objects corresponding to the given schema. The schema can be specified as a [JSON Schema](https://json-schema.org/) or a Pydantic class. If JSON Schema is used then a dictionary will be returned by the Runnable, and if a Pydantic class is used then Pydantic objects will be returned.
As an example, let's get a model to generate a joke and separate the setup from the punchline:
**OpenAI**

```bash
pip install -qU langchain-openai
```

```python
import getpass
import os

os.environ["OPENAI_API_KEY"] = getpass.getpass()

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo-0125")
```

**Anthropic**

```bash
pip install -qU langchain-anthropic
```

```python
import getpass
import os

os.environ["ANTHROPIC_API_KEY"] = getpass.getpass()

from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-3-sonnet-20240229")
```

**Azure**

```bash
pip install -qU langchain-openai
```

```python
import getpass
import os

os.environ["AZURE_OPENAI_API_KEY"] = getpass.getpass()

from langchain_openai import AzureChatOpenAI

llm = AzureChatOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"],
    openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],
)
```

**Google**

```bash
pip install -qU langchain-google-vertexai
```

```python
import getpass
import os

os.environ["GOOGLE_API_KEY"] = getpass.getpass()

from langchain_google_vertexai import ChatVertexAI

llm = ChatVertexAI(model="gemini-pro")
```

**Cohere**

```bash
pip install -qU langchain-cohere
```

```python
import getpass
import os

os.environ["COHERE_API_KEY"] = getpass.getpass()

from langchain_cohere import ChatCohere

llm = ChatCohere(model="command-r")
```

**FireworksAI**

```bash
pip install -qU langchain-fireworks
```

```python
import getpass
import os

os.environ["FIREWORKS_API_KEY"] = getpass.getpass()

from langchain_fireworks import ChatFireworks

llm = ChatFireworks(model="accounts/fireworks/models/mixtral-8x7b-instruct")
```

**Groq**

```bash
pip install -qU langchain-groq
```

```python
import getpass
import os

os.environ["GROQ_API_KEY"] = getpass.getpass()

from langchain_groq import ChatGroq

llm = ChatGroq(model="llama3-8b-8192")
```

**MistralAI**

```bash
pip install -qU langchain-mistralai
```

```python
import getpass
import os

os.environ["MISTRAL_API_KEY"] = getpass.getpass()

from langchain_mistralai import ChatMistralAI

llm = ChatMistralAI(model="mistral-large-latest")
```

**TogetherAI**

```bash
pip install -qU langchain-openai
```

```python
import getpass
import os

os.environ["TOGETHER_API_KEY"] = getpass.getpass()

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    base_url="https://api.together.xyz/v1",
    api_key=os.environ["TOGETHER_API_KEY"],
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",
)
```
If we want the model to return a Pydantic object, we just need to pass in the desired Pydantic class:
```python
from typing import Optional

from langchain_core.pydantic_v1 import BaseModel, Field


class Joke(BaseModel):
    """Joke to tell user."""

    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline to the joke")
    rating: Optional[int] = Field(description="How funny the joke is, from 1 to 10")


structured_llm = llm.with_structured_output(Joke)

structured_llm.invoke("Tell me a joke about cats")
```
Joke(setup='Why was the cat sitting on the computer?', punchline='To keep an eye on the mouse!', rating=None)
tip
Beyond just the structure of the Pydantic class, the name of the Pydantic class, the docstring, and the names and provided descriptions of parameters are very important. Most of the time `with_structured_output` is using a model's function/tool calling API, and you can effectively think of all of this information as being added to the model prompt.
We can also pass in a [JSON Schema](https://json-schema.org/) dict if you prefer not to use Pydantic. In this case, the response is also a dict:
```python
json_schema = {
    "title": "joke",
    "description": "Joke to tell user.",
    "type": "object",
    "properties": {
        "setup": {
            "type": "string",
            "description": "The setup of the joke",
        },
        "punchline": {
            "type": "string",
            "description": "The punchline to the joke",
        },
        "rating": {
            "type": "integer",
            "description": "How funny the joke is, from 1 to 10",
        },
    },
    "required": ["setup", "punchline"],
}
structured_llm = llm.with_structured_output(json_schema)

structured_llm.invoke("Tell me a joke about cats")
```
{'setup': 'Why was the cat sitting on the computer?', 'punchline': 'Because it wanted to keep an eye on the mouse!', 'rating': 8}
### Choosing between multiple schemas
The simplest way to let the model choose from multiple schemas is to create a parent Pydantic class that has a Union-typed attribute:
```python
from typing import Union


class ConversationalResponse(BaseModel):
    """Respond in a conversational manner. Be kind and helpful."""

    response: str = Field(description="A conversational response to the user's query")


class Response(BaseModel):
    output: Union[Joke, ConversationalResponse]


structured_llm = llm.with_structured_output(Response)

structured_llm.invoke("Tell me a joke about cats")
```
Response(output=Joke(setup='Why was the cat sitting on the computer?', punchline='To keep an eye on the mouse!', rating=8))
structured_llm.invoke("How are you today?")
Response(output=ConversationalResponse(response="I'm just a digital assistant, so I don't have feelings, but I'm here and ready to help you. How can I assist you today?"))
Alternatively, you can use tool calling directly to allow the model to choose between options, if your [chosen model supports it](/v0.2/docs/integrations/chat/). This involves a bit more parsing and setup but in some instances leads to better performance because you don't have to use nested schemas. See [this how-to guide](/v0.2/docs/how_to/tool_calling/) for more details.
### Streaming
We can stream outputs from our structured model when the output type is a dict (i.e., when the schema is specified as a JSON Schema dict).
info
Note that what's yielded is already aggregated chunks, not deltas.
```python
structured_llm = llm.with_structured_output(json_schema)

for chunk in structured_llm.stream("Tell me a joke about cats"):
    print(chunk)
```
```text
{}
{'setup': ''}
{'setup': 'Why'}
{'setup': 'Why was'}
{'setup': 'Why was the'}
{'setup': 'Why was the cat'}
{'setup': 'Why was the cat sitting'}
{'setup': 'Why was the cat sitting on'}
{'setup': 'Why was the cat sitting on the'}
{'setup': 'Why was the cat sitting on the computer'}
{'setup': 'Why was the cat sitting on the computer?'}
{'setup': 'Why was the cat sitting on the computer?', 'punchline': ''}
{'setup': 'Why was the cat sitting on the computer?', 'punchline': 'Because'}
{'setup': 'Why was the cat sitting on the computer?', 'punchline': 'Because it'}
{'setup': 'Why was the cat sitting on the computer?', 'punchline': 'Because it wanted'}
{'setup': 'Why was the cat sitting on the computer?', 'punchline': 'Because it wanted to'}
{'setup': 'Why was the cat sitting on the computer?', 'punchline': 'Because it wanted to keep'}
{'setup': 'Why was the cat sitting on the computer?', 'punchline': 'Because it wanted to keep an'}
{'setup': 'Why was the cat sitting on the computer?', 'punchline': 'Because it wanted to keep an eye'}
{'setup': 'Why was the cat sitting on the computer?', 'punchline': 'Because it wanted to keep an eye on'}
{'setup': 'Why was the cat sitting on the computer?', 'punchline': 'Because it wanted to keep an eye on the'}
{'setup': 'Why was the cat sitting on the computer?', 'punchline': 'Because it wanted to keep an eye on the mouse'}
{'setup': 'Why was the cat sitting on the computer?', 'punchline': 'Because it wanted to keep an eye on the mouse!'}
{'setup': 'Why was the cat sitting on the computer?', 'punchline': 'Because it wanted to keep an eye on the mouse!', 'rating': 8}
```
### Few-shot prompting
For more complex schemas it's very useful to add few-shot examples to the prompt. This can be done in a few ways.
The simplest and most universal way is to add examples to a system message in the prompt:
```python
from langchain_core.prompts import ChatPromptTemplate

system = """You are a hilarious comedian. Your specialty is knock-knock jokes. \
Return a joke which has the setup (the response to "Who's there?") and the final punchline (the response to "<setup> who?").

Here are some examples of jokes:

example_user: Tell me a joke about planes
example_assistant: {{"setup": "Why don't planes ever get tired?", "punchline": "Because they have rest wings!", "rating": 2}}

example_user: Tell me another joke about planes
example_assistant: {{"setup": "Cargo", "punchline": "Cargo 'vroom vroom', but planes go 'zoom zoom'!", "rating": 10}}

example_user: Now about caterpillars
example_assistant: {{"setup": "Caterpillar", "punchline": "Caterpillar really slow, but watch me turn into a butterfly and steal the show!", "rating": 5}}"""

prompt = ChatPromptTemplate.from_messages([("system", system), ("human", "{input}")])

few_shot_structured_llm = prompt | structured_llm
few_shot_structured_llm.invoke("what's something funny about woodpeckers")
```
**API Reference:**[ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html)
{'setup': 'Woodpecker', 'punchline': "Woodpecker goes 'knock knock', but don't worry, they never expect you to answer the door!", 'rating': 8}
When the underlying method for structuring outputs is tool calling, we can pass in our examples as explicit tool calls. You can check if the model you're using makes use of tool calling in its API reference.
```python
from langchain_core.messages import AIMessage, HumanMessage, ToolMessage

examples = [
    HumanMessage("Tell me a joke about planes", name="example_user"),
    AIMessage(
        "",
        name="example_assistant",
        tool_calls=[
            {
                "name": "joke",
                "args": {
                    "setup": "Why don't planes ever get tired?",
                    "punchline": "Because they have rest wings!",
                    "rating": 2,
                },
                "id": "1",
            }
        ],
    ),
    # Most tool-calling models expect a ToolMessage(s) to follow an AIMessage with tool calls.
    ToolMessage("", tool_call_id="1"),
    # Some models also expect an AIMessage to follow any ToolMessages,
    # so you may need to add an AIMessage here.
    HumanMessage("Tell me another joke about planes", name="example_user"),
    AIMessage(
        "",
        name="example_assistant",
        tool_calls=[
            {
                "name": "joke",
                "args": {
                    "setup": "Cargo",
                    "punchline": "Cargo 'vroom vroom', but planes go 'zoom zoom'!",
                    "rating": 10,
                },
                "id": "2",
            }
        ],
    ),
    ToolMessage("", tool_call_id="2"),
    HumanMessage("Now about caterpillars", name="example_user"),
    AIMessage(
        "",
        tool_calls=[
            {
                "name": "joke",
                "args": {
                    "setup": "Caterpillar",
                    "punchline": "Caterpillar really slow, but watch me turn into a butterfly and steal the show!",
                    "rating": 5,
                },
                "id": "3",
            }
        ],
    ),
    ToolMessage("", tool_call_id="3"),
]

system = """You are a hilarious comedian. Your specialty is knock-knock jokes. \
Return a joke which has the setup (the response to "Who's there?") \
and the final punchline (the response to "<setup> who?")."""

prompt = ChatPromptTemplate.from_messages(
    [("system", system), ("placeholder", "{examples}"), ("human", "{input}")]
)
few_shot_structured_llm = prompt | structured_llm
few_shot_structured_llm.invoke({"input": "crocodiles", "examples": examples})
```
**API Reference:**[AIMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html) | [HumanMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.human.HumanMessage.html) | [ToolMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.tool.ToolMessage.html)
{'setup': 'Crocodile', 'punchline': "Crocodile 'see you later', but in a while, it becomes an alligator!", 'rating': 7}
For more on few shot prompting when using tool calling, see [here](/v0.2/docs/how_to/function_calling/#Few-shot-prompting).
### (Advanced) Specifying the method for structuring outputs
For models that support more than one means of structuring outputs (i.e., they support both tool calling and JSON mode), you can specify which method to use with the `method=` argument.
JSON mode
If using JSON mode you'll still have to specify the desired schema in the model prompt. The schema you pass to `with_structured_output` will only be used for parsing the model outputs; it will not be passed to the model the way it is with tool calling.
To see if the model you're using supports JSON mode, check its entry in the [API reference](https://api.python.langchain.com/en/latest/langchain_api_reference.html).
```python
structured_llm = llm.with_structured_output(Joke, method="json_mode")

structured_llm.invoke(
    "Tell me a joke about cats, respond in JSON with `setup` and `punchline` keys"
)
```
Joke(setup='Why was the cat sitting on the computer?', punchline='Because it wanted to keep an eye on the mouse!', rating=None)
Prompting and parsing model directly
------------------------------------------------------------------------------------------------------------------------------------
Not all models support `.with_structured_output()`, since not all models have tool calling or JSON mode support. For such models you'll need to directly prompt the model to use a specific format, and use an output parser to extract the structured response from the raw model output.
### Using `PydanticOutputParser`
The following example uses the built-in [`PydanticOutputParser`](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.pydantic.PydanticOutputParser.html) to parse the output of a chat model prompted to match the given Pydantic schema. Note that we are adding `format_instructions` directly to the prompt from a method on the parser:
```python
from typing import List

from langchain_core.output_parsers import PydanticOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field


class Person(BaseModel):
    """Information about a person."""

    name: str = Field(..., description="The name of the person")
    height_in_meters: float = Field(
        ..., description="The height of the person expressed in meters."
    )


class People(BaseModel):
    """Identifying information about all people in a text."""

    people: List[Person]


# Set up a parser
parser = PydanticOutputParser(pydantic_object=People)

# Prompt
prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "Answer the user query. Wrap the output in `json` tags\n{format_instructions}",
        ),
        ("human", "{query}"),
    ]
).partial(format_instructions=parser.get_format_instructions())
```
**API Reference:**[PydanticOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.pydantic.PydanticOutputParser.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html)
Let’s take a look at what information is sent to the model:
```python
query = "Anna is 23 years old and she is 6 feet tall"

print(prompt.invoke(query).to_string())
```
````text
System: Answer the user query. Wrap the output in `json` tags
The output should be formatted as a JSON instance that conforms to the JSON schema below.

As an example, for the schema {"properties": {"foo": {"title": "Foo", "description": "a list of strings", "type": "array", "items": {"type": "string"}}}, "required": ["foo"]}
the object {"foo": ["bar", "baz"]} is a well-formatted instance of the schema. The object {"properties": {"foo": ["bar", "baz"]}} is not well-formatted.

Here is the output schema:
```
{"description": "Identifying information about all people in a text.", "properties": {"people": {"title": "People", "type": "array", "items": {"$ref": "#/definitions/Person"}}}, "required": ["people"], "definitions": {"Person": {"title": "Person", "description": "Information about a person.", "type": "object", "properties": {"name": {"title": "Name", "description": "The name of the person", "type": "string"}, "height_in_meters": {"title": "Height In Meters", "description": "The height of the person expressed in meters.", "type": "number"}}, "required": ["name", "height_in_meters"]}}}
```
Human: Anna is 23 years old and she is 6 feet tall
````
And now let's invoke it:
```python
chain = prompt | llm | parser

chain.invoke({"query": query})
```
People(people=[Person(name='Anna', height_in_meters=1.8288)])
For a deeper dive into using output parsers with prompting techniques for structured output, see [this guide](/v0.2/docs/how_to/output_parser_structured/).
### Custom Parsing
You can also create a custom prompt and parser with [LangChain Expression Language (LCEL)](/v0.2/docs/concepts/#langchain-expression-language), using a plain function to parse the output from the model:
````python
import json
import re
from typing import List

from langchain_core.messages import AIMessage
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field


class Person(BaseModel):
    """Information about a person."""

    name: str = Field(..., description="The name of the person")
    height_in_meters: float = Field(
        ..., description="The height of the person expressed in meters."
    )


class People(BaseModel):
    """Identifying information about all people in a text."""

    people: List[Person]


# Prompt
prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "Answer the user query. Output your answer as JSON that "
            "matches the given schema: ```json\n{schema}\n```. "
            "Make sure to wrap the answer in ```json and ``` tags",
        ),
        ("human", "{query}"),
    ]
).partial(schema=People.schema())


# Custom parser
def extract_json(message: AIMessage) -> List[dict]:
    """Extracts JSON content from a string where JSON is embedded between ```json and ``` tags.

    Parameters:
        text (str): The text containing the JSON content.

    Returns:
        list: A list of extracted JSON strings.
    """
    text = message.content
    # Define the regular expression pattern to match JSON blocks
    pattern = r"```json(.*?)```"

    # Find all non-overlapping matches of the pattern in the string
    matches = re.findall(pattern, text, re.DOTALL)

    # Return the list of matched JSON strings, stripping any leading or trailing whitespace
    try:
        return [json.loads(match.strip()) for match in matches]
    except Exception:
        raise ValueError(f"Failed to parse: {message}")
````
**API Reference:**[AIMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html)
Here is the prompt sent to the model:
```python
query = "Anna is 23 years old and she is 6 feet tall"

print(prompt.format_prompt(query=query).to_string())
```
````text
System: Answer the user query. Output your answer as JSON that matches the given schema: ```json
{'title': 'People', 'description': 'Identifying information about all people in a text.', 'type': 'object', 'properties': {'people': {'title': 'People', 'type': 'array', 'items': {'$ref': '#/definitions/Person'}}}, 'required': ['people'], 'definitions': {'Person': {'title': 'Person', 'description': 'Information about a person.', 'type': 'object', 'properties': {'name': {'title': 'Name', 'description': 'The name of the person', 'type': 'string'}, 'height_in_meters': {'title': 'Height In Meters', 'description': 'The height of the person expressed in meters.', 'type': 'number'}}, 'required': ['name', 'height_in_meters']}}}
```. Make sure to wrap the answer in ```json and ``` tags
Human: Anna is 23 years old and she is 6 feet tall
````
And here's what it looks like when we invoke it:
```python
chain = prompt | llm | extract_json

chain.invoke({"query": query})
```
[{'people': [{'name': 'Anna', 'height_in_meters': 1.8288}]}]
https://python.langchain.com/v0.2/docs/how_to/routing/
How to route between sub-chains
===============================
Prerequisites
This guide assumes familiarity with the following concepts:
* [LangChain Expression Language (LCEL)](/v0.2/docs/concepts/#langchain-expression-language)
* [Chaining runnables](/v0.2/docs/how_to/sequence/)
* [Configuring chain parameters at runtime](/v0.2/docs/how_to/configure/)
* [Prompt templates](/v0.2/docs/concepts/#prompt-templates)
* [Chat Messages](/v0.2/docs/concepts/#message-types)
Routing allows you to create non-deterministic chains where the output of a previous step defines the next step. Routing can help provide structure and consistency around interactions with models by allowing you to define states and use information related to those states as context to model calls.
There are two ways to perform routing:
1. Conditionally return runnables from a [`RunnableLambda`](/v0.2/docs/how_to/functions/) (recommended)
2. Using a `RunnableBranch` (legacy)
We'll illustrate both methods using a two step sequence where the first step classifies an input question as being about `LangChain`, `Anthropic`, or `Other`, then routes to a corresponding prompt chain.
Example Setup
---------------------------------------------------------------
First, let's create a chain that will identify incoming questions as being about `LangChain`, `Anthropic`, or `Other`:
```python
from langchain_anthropic import ChatAnthropic
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate

chain = (
    PromptTemplate.from_template(
        """Given the user question below, classify it as either being about `LangChain`, `Anthropic`, or `Other`.

Do not respond with more than one word.

<question>
{question}
</question>

Classification:"""
    )
    | ChatAnthropic(model_name="claude-3-haiku-20240307")
    | StrOutputParser()
)

chain.invoke({"question": "how do I call Anthropic?"})
```
**API Reference:**[ChatAnthropic](https://api.python.langchain.com/en/latest/chat_models/langchain_anthropic.chat_models.ChatAnthropic.html) | [StrOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.string.StrOutputParser.html) | [PromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.prompt.PromptTemplate.html)
'Anthropic'
Now, let's create three sub chains:
langchain_chain = PromptTemplate.from_template( """You are an expert in langchain. \Always answer questions starting with "As Harrison Chase told me". \Respond to the following question:Question: {question}Answer:""") | ChatAnthropic(model_name="claude-3-haiku-20240307")anthropic_chain = PromptTemplate.from_template( """You are an expert in anthropic. \Always answer questions starting with "As Dario Amodei told me". \Respond to the following question:Question: {question}Answer:""") | ChatAnthropic(model_name="claude-3-haiku-20240307")general_chain = PromptTemplate.from_template( """Respond to the following question:Question: {question}Answer:""") | ChatAnthropic(model_name="claude-3-haiku-20240307")
Using a custom function (Recommended)[](#using-a-custom-function-recommended "Direct link to Using a custom function (Recommended)")
-------------------------------------------------------------------------------------------------------------------------------------
You can also use a custom function to route between different outputs. Here's an example:
def route(info): if "anthropic" in info["topic"].lower(): return anthropic_chain elif "langchain" in info["topic"].lower(): return langchain_chain else: return general_chain
from langchain_core.runnables import RunnableLambdafull_chain = {"topic": chain, "question": lambda x: x["question"]} | RunnableLambda( route)
**API Reference:**[RunnableLambda](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableLambda.html)
full_chain.invoke({"question": "how do I use Anthropic?"})
AIMessage(content="As Dario Amodei told me, to use Anthropic, you can start by exploring the company's website and learning about their mission, values, and the different services and products they offer. Anthropic is focused on developing safe and ethical AI systems, so they have a strong emphasis on transparency and responsible AI development. \n\nDepending on your specific needs, you can look into Anthropic's AI research and development services, which cover areas like natural language processing, computer vision, and reinforcement learning. They also offer consulting and advisory services to help organizations navigate the challenges and opportunities of AI integration.\n\nAdditionally, Anthropic has released some open-source AI models and tools that you can explore and experiment with. These can be a great way to get hands-on experience with Anthropic's approach to AI development.\n\nOverall, Anthropic aims to be a reliable and trustworthy partner in the AI space, so I'd encourage you to reach out to them directly to discuss how they can best support your specific requirements.", response_metadata={'id': 'msg_01CtLFgFSwvTaJomrihE87Ra', 'content': [ContentBlock(text="As Dario Amodei told me, to use Anthropic, you can start by exploring the company's website and learning about their mission, values, and the different services and products they offer. Anthropic is focused on developing safe and ethical AI systems, so they have a strong emphasis on transparency and responsible AI development. \n\nDepending on your specific needs, you can look into Anthropic's AI research and development services, which cover areas like natural language processing, computer vision, and reinforcement learning. They also offer consulting and advisory services to help organizations navigate the challenges and opportunities of AI integration.\n\nAdditionally, Anthropic has released some open-source AI models and tools that you can explore and experiment with. These can be a great way to get hands-on experience with Anthropic's approach to AI development.\n\nOverall, Anthropic aims to be a reliable and trustworthy partner in the AI space, so I'd encourage you to reach out to them directly to discuss how they can best support your specific requirements.", type='text')], 'model': 'claude-3-haiku-20240307', 'role': 'assistant', 'stop_reason': 'end_turn', 'stop_sequence': None, 'type': 'message', 'usage': Usage(input_tokens=53, output_tokens=219)})
full_chain.invoke({"question": "how do I use LangChain?"})
AIMessage(content="As Harrison Chase told me, using LangChain involves a few key steps:\n\n1. **Set up your environment**: Install the necessary Python packages, including the LangChain library itself, as well as any other dependencies your application might require, such as language models or other integrations.\n\n2. **Understand the core concepts**: LangChain revolves around a few core concepts, like Agents, Chains, and Tools. Familiarize yourself with these concepts and how they work together to build powerful language-based applications.\n\n3. **Identify your use case**: Determine what kind of task or application you want to build using LangChain, such as a chatbot, a question-answering system, or a document summarization tool.\n\n4. **Choose the appropriate components**: Based on your use case, select the right LangChain components, such as agents, chains, and tools, to build your application.\n\n5. **Integrate with language models**: LangChain is designed to work seamlessly with various language models, such as OpenAI's GPT-3 or Anthropic's models. Connect your chosen language model to your LangChain application.\n\n6. **Implement your application logic**: Use LangChain's building blocks to implement the specific functionality of your application, such as prompting the language model, processing the response, and integrating with other services or data sources.\n\n7. **Test and iterate**: Thoroughly test your application, gather feedback, and iterate on your design and implementation to improve its performance and user experience.\n\nAs Harrison Chase emphasized, LangChain provides a flexible and powerful framework for building language-based applications, making it easier to leverage the capabilities of modern language models. By following these steps, you can get started with LangChain and create innovative solutions tailored to your specific needs.", response_metadata={'id': 'msg_01H3UXAAHG4TwxJLpxwuuVU7', 'content': [ContentBlock(text="As Harrison Chase told me, using LangChain involves a few key steps:\n\n1. **Set up your environment**: Install the necessary Python packages, including the LangChain library itself, as well as any other dependencies your application might require, such as language models or other integrations.\n\n2. **Understand the core concepts**: LangChain revolves around a few core concepts, like Agents, Chains, and Tools. Familiarize yourself with these concepts and how they work together to build powerful language-based applications.\n\n3. **Identify your use case**: Determine what kind of task or application you want to build using LangChain, such as a chatbot, a question-answering system, or a document summarization tool.\n\n4. **Choose the appropriate components**: Based on your use case, select the right LangChain components, such as agents, chains, and tools, to build your application.\n\n5. **Integrate with language models**: LangChain is designed to work seamlessly with various language models, such as OpenAI's GPT-3 or Anthropic's models. Connect your chosen language model to your LangChain application.\n\n6. **Implement your application logic**: Use LangChain's building blocks to implement the specific functionality of your application, such as prompting the language model, processing the response, and integrating with other services or data sources.\n\n7. 
**Test and iterate**: Thoroughly test your application, gather feedback, and iterate on your design and implementation to improve its performance and user experience.\n\nAs Harrison Chase emphasized, LangChain provides a flexible and powerful framework for building language-based applications, making it easier to leverage the capabilities of modern language models. By following these steps, you can get started with LangChain and create innovative solutions tailored to your specific needs.", type='text')], 'model': 'claude-3-haiku-20240307', 'role': 'assistant', 'stop_reason': 'end_turn', 'stop_sequence': None, 'type': 'message', 'usage': Usage(input_tokens=50, output_tokens=400)})
full_chain.invoke({"question": "whats 2 + 2"})
AIMessage(content='4', response_metadata={'id': 'msg_01UAKP81jTZu9fyiyFYhsbHc', 'content': [ContentBlock(text='4', type='text')], 'model': 'claude-3-haiku-20240307', 'role': 'assistant', 'stop_reason': 'end_turn', 'stop_sequence': None, 'type': 'message', 'usage': Usage(input_tokens=28, output_tokens=5)})
Using a RunnableBranch[](#using-a-runnablebranch "Direct link to Using a RunnableBranch")
------------------------------------------------------------------------------------------
A `RunnableBranch` is a special type of runnable that allows you to define a set of conditions and runnables to execute based on the input. It does **not** offer anything that you can't achieve in a custom function as described above, so we recommend using a custom function instead.
A `RunnableBranch` is initialized with a list of (condition, runnable) pairs and a default runnable. It selects a branch by passing the input it's invoked with to each condition, picks the first condition that evaluates to True, and runs the runnable corresponding to that condition on the input.
If no provided conditions match, it runs the default runnable.
Here's an example of what it looks like in action:
from langchain_core.runnables import RunnableBranchbranch = RunnableBranch( (lambda x: "anthropic" in x["topic"].lower(), anthropic_chain), (lambda x: "langchain" in x["topic"].lower(), langchain_chain), general_chain,)full_chain = {"topic": chain, "question": lambda x: x["question"]} | branchfull_chain.invoke({"question": "how do I use Anthropic?"})
**API Reference:**[RunnableBranch](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.branch.RunnableBranch.html)
AIMessage(content="As Dario Amodei told me, to use Anthropic, you should first familiarize yourself with our mission and principles. Anthropic is committed to developing safe and beneficial artificial intelligence that can help solve important problems facing humanity. \n\nTo get started, I recommend exploring the resources on our website, which cover our research, products, and approach to AI development. You can also reach out to our team to learn more about how Anthropic's technology and services can support your specific needs.\n\nThe key is to engage with us in a way that aligns with our values of transparency, ethical AI, and a commitment to the wellbeing of humanity. We're here to collaborate and help you harness the power of advanced AI responsibly.", response_metadata={'id': 'msg_0187BVnpniPDJnVvwf3M1LdY', 'content': [ContentBlock(text="As Dario Amodei told me, to use Anthropic, you should first familiarize yourself with our mission and principles. Anthropic is committed to developing safe and beneficial artificial intelligence that can help solve important problems facing humanity. \n\nTo get started, I recommend exploring the resources on our website, which cover our research, products, and approach to AI development. You can also reach out to our team to learn more about how Anthropic's technology and services can support your specific needs.\n\nThe key is to engage with us in a way that aligns with our values of transparency, ethical AI, and a commitment to the wellbeing of humanity. We're here to collaborate and help you harness the power of advanced AI responsibly.", type='text')], 'model': 'claude-3-haiku-20240307', 'role': 'assistant', 'stop_reason': 'end_turn', 'stop_sequence': None, 'type': 'message', 'usage': Usage(input_tokens=53, output_tokens=160)})
full_chain.invoke({"question": "how do I use LangChain?"})
AIMessage(content="As Harrison Chase told me, using LangChain involves several key steps. First, you'll need to install the LangChain library and import the necessary modules. Then, you'll want to define your language model, any data sources you plan to use, and the specific tasks you want to accomplish, such as question answering, text generation, or agent-based reasoning. \n\nLangChain provides a flexible framework for building applications that leverage large language models. It includes abstractions for things like retrievers, prompts, and chains, which allow you to compose different components together to create powerful workflows. \n\nThe documentation on the LangChain website is excellent and covers many common use cases in detail. I'd recommend starting there to get a solid understanding of the core concepts and how to apply them to your specific needs. And of course, feel free to reach out if you have any other questions - I'm always happy to share more insights from my conversations with Harrison.", response_metadata={'id': 'msg_01T1naS99wGPkEAP4LME8iAv', 'content': [ContentBlock(text="As Harrison Chase told me, using LangChain involves several key steps. First, you'll need to install the LangChain library and import the necessary modules. Then, you'll want to define your language model, any data sources you plan to use, and the specific tasks you want to accomplish, such as question answering, text generation, or agent-based reasoning. \n\nLangChain provides a flexible framework for building applications that leverage large language models. It includes abstractions for things like retrievers, prompts, and chains, which allow you to compose different components together to create powerful workflows. \n\nThe documentation on the LangChain website is excellent and covers many common use cases in detail. I'd recommend starting there to get a solid understanding of the core concepts and how to apply them to your specific needs. And of course, feel free to reach out if you have any other questions - I'm always happy to share more insights from my conversations with Harrison.", type='text')], 'model': 'claude-3-haiku-20240307', 'role': 'assistant', 'stop_reason': 'end_turn', 'stop_sequence': None, 'type': 'message', 'usage': Usage(input_tokens=50, output_tokens=205)})
full_chain.invoke({"question": "whats 2 + 2"})
AIMessage(content='4', response_metadata={'id': 'msg_01T6T3TS6hRCtU8JayN93QEi', 'content': [ContentBlock(text='4', type='text')], 'model': 'claude-3-haiku-20240307', 'role': 'assistant', 'stop_reason': 'end_turn', 'stop_sequence': None, 'type': 'message', 'usage': Usage(input_tokens=28, output_tokens=5)})
Routing by semantic similarity[](#routing-by-semantic-similarity "Direct link to Routing by semantic similarity")
------------------------------------------------------------------------------------------------------------------
One especially useful technique is to use embeddings to route a query to the most relevant prompt. Here's an example.
from langchain_community.utils.math import cosine_similarityfrom langchain_core.output_parsers import StrOutputParserfrom langchain_core.prompts import PromptTemplatefrom langchain_core.runnables import RunnableLambda, RunnablePassthroughfrom langchain_openai import OpenAIEmbeddingsphysics_template = """You are a very smart physics professor. \You are great at answering questions about physics in a concise and easy to understand manner. \When you don't know the answer to a question you admit that you don't know.Here is a question:{query}"""math_template = """You are a very good mathematician. You are great at answering math questions. \You are so good because you are able to break down hard problems into their component parts, \answer the component parts, and then put them together to answer the broader question.Here is a question:{query}"""embeddings = OpenAIEmbeddings()prompt_templates = [physics_template, math_template]prompt_embeddings = embeddings.embed_documents(prompt_templates)def prompt_router(input): query_embedding = embeddings.embed_query(input["query"]) similarity = cosine_similarity([query_embedding], prompt_embeddings)[0] most_similar = prompt_templates[similarity.argmax()] print("Using MATH" if most_similar == math_template else "Using PHYSICS") return PromptTemplate.from_template(most_similar)chain = ( {"query": RunnablePassthrough()} | RunnableLambda(prompt_router) | ChatAnthropic(model="claude-3-haiku-20240307") | StrOutputParser())
**API Reference:**[cosine\_similarity](https://api.python.langchain.com/en/latest/utils/langchain_community.utils.math.cosine_similarity.html) | [StrOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.string.StrOutputParser.html) | [PromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.prompt.PromptTemplate.html) | [RunnableLambda](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableLambda.html) | [RunnablePassthrough](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html) | [OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html)
print(chain.invoke("What's a black hole"))
Using PHYSICSAs a physics professor, I would be happy to provide a concise and easy-to-understand explanation of what a black hole is.A black hole is an incredibly dense region of space-time where the gravitational pull is so strong that nothing, not even light, can escape from it. This means that if you were to get too close to a black hole, you would be pulled in and crushed by the intense gravitational forces.The formation of a black hole occurs when a massive star, much larger than our Sun, reaches the end of its life and collapses in on itself. This collapse causes the matter to become extremely dense, and the gravitational force becomes so strong that it creates a point of no return, known as the event horizon.Beyond the event horizon, the laws of physics as we know them break down, and the intense gravitational forces create a singularity, which is a point of infinite density and curvature in space-time.Black holes are fascinating and mysterious objects, and there is still much to be learned about their properties and behavior. If I were unsure about any specific details or aspects of black holes, I would readily admit that I do not have a complete understanding and would encourage further research and investigation.
print(chain.invoke("What's a path integral"))
Using MATHA path integral is a powerful mathematical concept in physics, particularly in the field of quantum mechanics. It was developed by the renowned physicist Richard Feynman as an alternative formulation of quantum mechanics.In a path integral, instead of considering a single, definite path that a particle might take from one point to another, as in classical mechanics, the particle is considered to take all possible paths simultaneously. Each path is assigned a complex-valued weight, and the total probability amplitude for the particle to go from one point to another is calculated by summing (integrating) over all possible paths.The key ideas behind the path integral formulation are:1. Superposition principle: In quantum mechanics, particles can exist in a superposition of multiple states or paths simultaneously.2. Probability amplitude: The probability amplitude for a particle to go from one point to another is calculated by summing the complex-valued weights of all possible paths.3. Weighting of paths: Each path is assigned a weight based on the action (the time integral of the Lagrangian) along that path. Paths with lower action have a greater weight.4. Feynman's approach: Feynman developed the path integral formulation as an alternative to the traditional wave function approach in quantum mechanics, providing a more intuitive and conceptual understanding of quantum phenomena.The path integral approach is particularly useful in quantum field theory, where it provides a powerful framework for calculating transition probabilities and understanding the behavior of quantum systems. It has also found applications in various areas of physics, such as condensed matter, statistical mechanics, and even in finance (the path integral approach to option pricing).The mathematical construction of the path integral involves the use of advanced concepts from functional analysis and measure theory, making it a powerful and sophisticated tool in the physicist's arsenal.
Next steps[](#next-steps "Direct link to Next steps")
------------------------------------------------------
You've now learned how to add routing to your composed LCEL chains.
Next, check out the other how-to guides on runnables in this section.
| null
https://python.langchain.com/v0.2/docs/how_to/pydantic_compatibility/ |
How to use LangChain with different Pydantic versions
=====================================================
* Pydantic v2 was released in June, 2023 ([https://docs.pydantic.dev/2.0/blog/pydantic-v2-final/](https://docs.pydantic.dev/2.0/blog/pydantic-v2-final/))
* v2 contains a number of breaking changes ([https://docs.pydantic.dev/2.0/migration/](https://docs.pydantic.dev/2.0/migration/))
* Pydantic v2 and v1 are under the same package name, so both versions cannot be installed at the same time
LangChain Pydantic migration plan[](#langchain-pydantic-migration-plan "Direct link to LangChain Pydantic migration plan")
---------------------------------------------------------------------------------------------------------------------------
As of `langchain>=0.0.267`, LangChain will allow users to install either Pydantic V1 or V2.
* Internally LangChain will continue to [use V1](https://docs.pydantic.dev/latest/migration/#continue-using-pydantic-v1-features).
* During this time, users can pin their pydantic version to v1 to avoid breaking changes, or start a partial migration using pydantic v2 throughout their code, but avoiding mixing v1 and v2 code for LangChain (see below).
Users can either pin to pydantic v1 and upgrade their code in one go once LangChain has migrated to v2 internally, or they can start a partial migration to v2, but they must avoid mixing v1 and v2 code for LangChain.
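If it helps, here is a minimal sketch (not part of this guide) for checking which major Pydantic version is installed and importing from the matching namespace:
# A minimal sketch: check the installed Pydantic major version and import from the
# matching namespace. `pydantic.VERSION` is available in both v1 and v2.
import pydantic

print(pydantic.VERSION)  # e.g. "1.10.13" or "2.7.1"

if pydantic.VERSION.startswith("2"):
    # Pydantic v2 ships a v1 compatibility namespace; use it when subclassing
    # LangChain objects that are still on v1 internally.
    from pydantic.v1 import BaseModel, Field
else:
    from pydantic import BaseModel, Field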
Below are two examples showing how to avoid mixing pydantic v1 and v2 code in the case of inheritance and in the case of passing objects to LangChain.
**Example 1: Extending via inheritance**
**YES**
from pydantic.v1 import Field, root_validator, validatorfrom langchain_core.tools import BaseToolclass CustomTool(BaseTool): # BaseTool is v1 code x: int = Field(default=1) def _run(*args, **kwargs): return "hello" @validator('x') # v1 code @classmethod def validate_x(cls, x: int) -> int: return 1 CustomTool( name='custom_tool', description="hello", x=1,)
**API Reference:**[BaseTool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.BaseTool.html)
Mixing Pydantic v2 primitives with Pydantic v1 primitives can raise cryptic errors
**NO**
from pydantic import Field, field_validator # pydantic v2from langchain_core.tools import BaseToolclass CustomTool(BaseTool): # BaseTool is v1 code x: int = Field(default=1) def _run(*args, **kwargs): return "hello" @field_validator('x') # v2 code @classmethod def validate_x(cls, x: int) -> int: return 1 CustomTool( name='custom_tool', description="hello", x=1,)
**API Reference:**[BaseTool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.BaseTool.html)
**Example 2: Passing objects to LangChain**
**YES**
from langchain_core.tools import Toolfrom pydantic.v1 import BaseModel, Field # <-- Uses v1 namespaceclass CalculatorInput(BaseModel): question: str = Field()Tool.from_function( # <-- tool uses v1 namespace func=lambda question: 'hello', name="Calculator", description="useful for when you need to answer questions about math", args_schema=CalculatorInput)
**API Reference:**[Tool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.Tool.html)
**NO**
from langchain_core.tools import Toolfrom pydantic import BaseModel, Field # <-- Uses v2 namespaceclass CalculatorInput(BaseModel): question: str = Field()Tool.from_function( # <-- tool uses v1 namespace func=lambda question: 'hello', name="Calculator", description="useful for when you need to answer questions about math", args_schema=CalculatorInput)
**API Reference:**[Tool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.Tool.html)
| null
https://python.langchain.com/v0.2/docs/how_to/parent_document_retriever/ |
How to use the Parent Document Retriever
========================================
When splitting documents for retrieval, there are often conflicting desires:
1. You may want to have small documents, so that their embeddings can most accurately reflect their meaning. If documents are too long, their embeddings can lose meaning.
2. You want to have long enough documents that the context of each chunk is retained.
The `ParentDocumentRetriever` strikes that balance by splitting and storing small chunks of data. During retrieval, it first fetches the small chunks but then looks up the parent ids for those chunks and returns those larger documents.
Note that "parent document" refers to the document that a small chunk originated from. This can either be the whole raw document OR a larger chunk.
from langchain.retrievers import ParentDocumentRetriever
**API Reference:**[ParentDocumentRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.parent_document_retriever.ParentDocumentRetriever.html)
from langchain.storage import InMemoryStorefrom langchain_chroma import Chromafrom langchain_community.document_loaders import TextLoaderfrom langchain_openai import OpenAIEmbeddingsfrom langchain_text_splitters import RecursiveCharacterTextSplitter
**API Reference:**[InMemoryStore](https://api.python.langchain.com/en/latest/stores/langchain_core.stores.InMemoryStore.html) | [TextLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.text.TextLoader.html) | [OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html) | [RecursiveCharacterTextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.RecursiveCharacterTextSplitter.html)
loaders = [ TextLoader("paul_graham_essay.txt"), TextLoader("state_of_the_union.txt"),]docs = []for loader in loaders: docs.extend(loader.load())
Retrieving full documents[](#retrieving-full-documents "Direct link to Retrieving full documents")
---------------------------------------------------------------------------------------------------
In this mode, we want to retrieve the full documents. Therefore, we only specify a child splitter.
# This text splitter is used to create the child documentschild_splitter = RecursiveCharacterTextSplitter(chunk_size=400)# The vectorstore to use to index the child chunksvectorstore = Chroma( collection_name="full_documents", embedding_function=OpenAIEmbeddings())# The storage layer for the parent documentsstore = InMemoryStore()retriever = ParentDocumentRetriever( vectorstore=vectorstore, docstore=store, child_splitter=child_splitter,)
retriever.add_documents(docs, ids=None)
This should yield two keys, because we added two documents.
list(store.yield_keys())
['9a63376c-58cc-42c9-b0f7-61f0e1a3a688', '40091598-e918-4a18-9be0-f46413a95ae4']
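These keys are the ids under which the full parent documents are stored in the docstore. As a quick hedged sketch, you can fetch one back directly by key:
# A minimal sketch: look up a parent document in the docstore by its key.
keys = list(store.yield_keys())
parent_doc = store.mget([keys[0]])[0]  # mget returns a list of Documents (or None)
print(parent_doc.page_content[:200])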
Let's now call the vector store search functionality - we should see that it returns small chunks (since we're storing the small chunks).
sub_docs = vectorstore.similarity_search("justice breyer")
print(sub_docs[0].page_content)
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
Let's now retrieve from the overall retriever. This should return large documents - since it returns the documents where the smaller chunks are located.
retrieved_docs = retriever.invoke("justice breyer")
len(retrieved_docs[0].page_content)
38540
Retrieving larger chunks[](#retrieving-larger-chunks "Direct link to Retrieving larger chunks")
------------------------------------------------------------------------------------------------
Sometimes the full documents are too big to retrieve as-is. In that case, what we really want to do is first split the raw documents into larger chunks, and then split those into smaller chunks. We then index the smaller chunks, but on retrieval we return the larger chunks (though still not the full documents).
# This text splitter is used to create the parent documentsparent_splitter = RecursiveCharacterTextSplitter(chunk_size=2000)# This text splitter is used to create the child documents# It should create documents smaller than the parentchild_splitter = RecursiveCharacterTextSplitter(chunk_size=400)# The vectorstore to use to index the child chunksvectorstore = Chroma( collection_name="split_parents", embedding_function=OpenAIEmbeddings())# The storage layer for the parent documentsstore = InMemoryStore()
retriever = ParentDocumentRetriever( vectorstore=vectorstore, docstore=store, child_splitter=child_splitter, parent_splitter=parent_splitter,)
retriever.add_documents(docs)
We can see that there are now many more than two documents - these are the larger chunks.
len(list(store.yield_keys()))
66
Let's make sure the underlying vector store still retrieves the small chunks.
sub_docs = vectorstore.similarity_search("justice breyer")
print(sub_docs[0].page_content)
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
retrieved_docs = retriever.invoke("justice breyer")
len(retrieved_docs[0].page_content)
1849
print(retrieved_docs[0].page_content)
In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections. We cannot let this happen. Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. We’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.
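Since `ParentDocumentRetriever` implements the standard retriever interface, it can be dropped into a retrieval chain like any other retriever. Here is a minimal sketch; the prompt wording and the choice of `ChatOpenAI` are assumptions for illustration, not part of this guide:
# A minimal sketch of using the retriever in a simple RAG chain.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template(
    "Answer the question using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
)

def format_docs(docs):
    # Join the retrieved parent chunks into a single context string.
    return "\n\n".join(doc.page_content for doc in docs)

rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | ChatOpenAI(model="gpt-3.5-turbo-0125")
    | StrOutputParser()
)

rag_chain.invoke("What did the president say about Justice Breyer?")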
| null
https://python.langchain.com/v0.2/docs/how_to/tools_prompting/ |
How to add ad-hoc tool calling capability to LLMs and Chat Models
=================================================================
caution
Some models have been fine-tuned for tool calling and provide a dedicated API for tool calling. Generally, such models are better at tool calling than non-fine-tuned models, and are recommended for use cases that require tool calling. Please see the [how to use a chat model to call tools](/v0.2/docs/how_to/tool_calling/) guide for more information.
Prerequisites
This guide assumes familiarity with the following concepts:
* [LangChain Tools](/v0.2/docs/concepts/#tools)
* [Function/tool calling](https://python.langchain.com/v0.2/docs/concepts/#functiontool-calling)
* [Chat models](/v0.2/docs/concepts/#chat-models)
* [LLMs](/v0.2/docs/concepts/#llms)
In this guide, we'll see how to add **ad-hoc** tool calling support to a chat model. This is an alternative method to invoke tools if you're using a model that does not natively support [tool calling](/v0.2/docs/how_to/tool_calling/).
We'll do this by simply writing a prompt that will get the model to invoke the appropriate tools. Here's a diagram of the logic:
![chain](/v0.2/assets/images/tool_chain-3571e7fbc481d648aff93a2630f812ab.svg)
Setup[](#setup "Direct link to Setup")
---------------------------------------
We'll need to install the following packages:
%pip install --upgrade --quiet langchain langchain-community
If you'd like to use LangSmith, uncomment the below:
import getpassimport os# os.environ["LANGCHAIN_TRACING_V2"] = "true"# os.environ["LANGCHAIN_API_KEY"] = getpass.getpass()
You can select any of the given models for this how-to guide. Keep in mind that most of these models already [support native tool calling](/v0.2/docs/integrations/chat/), so using the prompting strategy shown here doesn't make sense for these models, and instead you should follow the [how to use a chat model to call tools](/v0.2/docs/how_to/tool_calling/) guide.
* OpenAI
* Anthropic
* Azure
* Google
* Cohere
* FireworksAI
* Groq
* MistralAI
* TogetherAI
pip install -qU langchain-openai
import getpassimport osos.environ["OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAImodel = ChatOpenAI(model="gpt-4")
pip install -qU langchain-anthropic
import getpassimport osos.environ["ANTHROPIC_API_KEY"] = getpass.getpass()from langchain_anthropic import ChatAnthropicmodel = ChatAnthropic(model="claude-3-sonnet-20240229")
pip install -qU langchain-openai
import getpassimport osos.environ["AZURE_OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import AzureChatOpenAImodel = AzureChatOpenAI( azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"], openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],)
pip install -qU langchain-google-vertexai
import getpassimport osos.environ["GOOGLE_API_KEY"] = getpass.getpass()from langchain_google_vertexai import ChatVertexAImodel = ChatVertexAI(model="gemini-pro")
pip install -qU langchain-cohere
import getpassimport osos.environ["COHERE_API_KEY"] = getpass.getpass()from langchain_cohere import ChatCoheremodel = ChatCohere(model="command-r")
pip install -qU langchain-fireworks
import getpassimport osos.environ["FIREWORKS_API_KEY"] = getpass.getpass()from langchain_fireworks import ChatFireworksmodel = ChatFireworks(model="accounts/fireworks/models/mixtral-8x7b-instruct")
pip install -qU langchain-groq
import getpassimport osos.environ["GROQ_API_KEY"] = getpass.getpass()from langchain_groq import ChatGroqmodel = ChatGroq(model="llama3-8b-8192")
pip install -qU langchain-mistralai
import getpassimport osos.environ["MISTRAL_API_KEY"] = getpass.getpass()from langchain_mistralai import ChatMistralAImodel = ChatMistralAI(model="mistral-large-latest")
pip install -qU langchain-openai
import getpassimport osos.environ["TOGETHER_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAImodel = ChatOpenAI( base_url="https://api.together.xyz/v1", api_key=os.environ["TOGETHER_API_KEY"], model="mistralai/Mixtral-8x7B-Instruct-v0.1",)
To illustrate the idea, we'll use `phi3` via Ollama, which does **NOT** have native support for tool calling. If you'd like to use `Ollama` as well follow [these instructions](/v0.2/docs/integrations/chat/ollama/).
from langchain_community.llms import Ollamamodel = Ollama(model="phi3")
**API Reference:**[Ollama](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.ollama.Ollama.html)
Create a tool[](#create-a-tool "Direct link to Create a tool")
---------------------------------------------------------------
First, let's create `add` and `multiply` tools. For more information on creating custom tools, please see [this guide](/v0.2/docs/how_to/custom_tools/).
from langchain_core.tools import tool@tooldef multiply(x: float, y: float) -> float: """Multiply two numbers together.""" return x * y@tooldef add(x: int, y: int) -> int: "Add two numbers." return x + ytools = [multiply, add]# Let's inspect the toolsfor t in tools: print("--") print(t.name) print(t.description) print(t.args)
**API Reference:**[tool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.tool.html)
--multiplyMultiply two numbers together.{'x': {'title': 'X', 'type': 'number'}, 'y': {'title': 'Y', 'type': 'number'}}--addAdd two numbers.{'x': {'title': 'X', 'type': 'integer'}, 'y': {'title': 'Y', 'type': 'integer'}}
multiply.invoke({"x": 4, "y": 5})
20.0
Creating our prompt[](#creating-our-prompt "Direct link to Creating our prompt")
---------------------------------------------------------------------------------
We'll want to write a prompt that specifies the tools the model has access to, the arguments to those tools, and the desired output format of the model. In this case we'll instruct it to output a JSON blob of the form `{"name": "...", "arguments": {...}}`.
from langchain_core.output_parsers import JsonOutputParserfrom langchain_core.prompts import ChatPromptTemplatefrom langchain_core.tools import render_text_descriptionrendered_tools = render_text_description(tools)print(rendered_tools)
**API Reference:**[JsonOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.json.JsonOutputParser.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) | [render\_text\_description](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.render_text_description.html)
multiply(x: float, y: float) -> float - Multiply two numbers together.add(x: int, y: int) -> int - Add two numbers.
system_prompt = f"""\You are an assistant that has access to the following set of tools. Here are the names and descriptions for each tool:{rendered_tools}Given the user input, return the name and input of the tool to use. Return your response as a JSON blob with 'name' and 'arguments' keys.The `arguments` should be a dictionary, with keys corresponding to the argument names and the values corresponding to the requested values."""prompt = ChatPromptTemplate.from_messages( [("system", system_prompt), ("user", "{input}")])
chain = prompt | modelmessage = chain.invoke({"input": "what's 3 plus 1132"})# Let's take a look at the output from the model# if the model is an LLM (not a chat model), the output will be a string.if isinstance(message, str): print(message)else: # Otherwise it's a chat model print(message.content)
{ "name": "add", "arguments": { "x": 3, "y": 1132 }}
Adding an output parser[](#adding-an-output-parser "Direct link to Adding an output parser")
---------------------------------------------------------------------------------------------
We'll use the `JsonOutputParser` for parsing our model's output to JSON.
from langchain_core.output_parsers import JsonOutputParserchain = prompt | model | JsonOutputParser()chain.invoke({"input": "what's thirteen times 4"})
**API Reference:**[JsonOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.json.JsonOutputParser.html)
{'name': 'multiply', 'arguments': {'x': 13.0, 'y': 4.0}}
info
🎉 Amazing! 🎉 We've now instructed our model on how to **request** that a tool be invoked.
Now, let's create some logic to actually run the tool!
Invoking the tool 🏃[](#invoking-the-tool- "Direct link to Invoking the tool 🏃")
----------------------------------------------------------------------------------
Now that the model can request that a tool be invoked, we need to write a function that can actually invoke the tool.
The function will select the appropriate tool by name, and pass to it the arguments chosen by the model.
from typing import Any, Dict, Optional, TypedDictfrom langchain_core.runnables import RunnableConfigclass ToolCallRequest(TypedDict): """A typed dict that shows the inputs into the invoke_tool function.""" name: str arguments: Dict[str, Any]def invoke_tool( tool_call_request: ToolCallRequest, config: Optional[RunnableConfig] = None): """A function that we can use to perform a tool invocation. Args: tool_call_request: a dict that contains the keys name and arguments. The name must match the name of a tool that exists. The arguments are the arguments to that tool. config: This is configuration information that LangChain uses that contains things like callbacks, metadata, etc. See LCEL documentation about RunnableConfig. Returns: output from the requested tool """ tool_name_to_tool = {tool.name: tool for tool in tools} name = tool_call_request["name"] requested_tool = tool_name_to_tool[name] return requested_tool.invoke(tool_call_request["arguments"], config=config)
**API Reference:**[RunnableConfig](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.config.RunnableConfig.html)
Let's test this out 🧪!
invoke_tool({"name": "multiply", "arguments": {"x": 3, "y": 5}})
15.0
Let's put it together[](#lets-put-it-together "Direct link to Let's put it together")
--------------------------------------------------------------------------------------
Let's put it together into a chain that creates a calculator with addition and multiplication capabilities.
chain = prompt | model | JsonOutputParser() | invoke_toolchain.invoke({"input": "what's thirteen times 4.14137281"})
53.83784653
Returning tool inputs[](#returning-tool-inputs "Direct link to Returning tool inputs")
---------------------------------------------------------------------------------------
It can be helpful to return not only tool outputs but also tool inputs. We can easily do this with LCEL by using `RunnablePassthrough.assign` to attach the tool output. This takes whatever the input to the `RunnablePassthrough.assign` component is (assumed to be a dictionary), adds a key to it, and still passes through everything that's currently in the input:
from langchain_core.runnables import RunnablePassthroughchain = ( prompt | model | JsonOutputParser() | RunnablePassthrough.assign(output=invoke_tool))chain.invoke({"input": "what's thirteen times 4.14137281"})
**API Reference:**[RunnablePassthrough](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html)
{'name': 'multiply', 'arguments': {'x': 13, 'y': 4.14137281}, 'output': 53.83784653}
What's next?[](#whats-next "Direct link to What's next?")
----------------------------------------------------------
This how-to guide shows the "happy path" when the model correctly outputs all the required tool information.
In reality, if you're using more complex tools, you will start encountering errors from the model, especially with models that have not been fine-tuned for tool calling and with less capable models.
You will need to be prepared to add strategies to improve the output from the model; e.g.,
1. Provide few-shot examples.
2. Add error handling (e.g., catch the exception and feed it back to the LLM to ask it to correct its previous output); a minimal sketch of this approach is shown below.
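Here is a minimal hedged sketch of that second strategy, reusing the `prompt`, `model`, and `invoke_tool` objects defined above; the helper name `invoke_with_error_feedback` is illustrative, not a LangChain API:
# A minimal sketch: if the model's output fails to parse as JSON, feed the error
# back and ask it to try again. Assumes `prompt`, `model`, `invoke_tool` from above.
from langchain_core.exceptions import OutputParserException
from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.output_parsers import JsonOutputParser

parser = JsonOutputParser()

def invoke_with_error_feedback(question: str, max_attempts: int = 2):
    messages = prompt.invoke({"input": question}).to_messages()
    for _ in range(max_attempts):
        raw = model.invoke(messages)
        text = raw if isinstance(raw, str) else raw.content
        try:
            tool_call_request = parser.parse(text)
            return invoke_tool(tool_call_request)
        except OutputParserException as e:
            # Show the model its own failed output and the parsing error.
            messages.append(AIMessage(content=text))
            messages.append(
                HumanMessage(content=f"Your last response could not be parsed: {e}. Try again.")
            )
    raise ValueError("Model did not produce a valid tool call.")

invoke_with_error_feedback("what's 3 plus 1132")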
| null
https://python.langchain.com/v0.2/docs/how_to/multimodal_prompts/ |
How to use multimodal prompts
=============================
Here we demonstrate how to use prompt templates to format multimodal inputs to models.
In this example we will ask a model to describe an image.
import base64import httpximage_url = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"image_data = base64.b64encode(httpx.get(image_url).content).decode("utf-8")
from langchain_core.prompts import ChatPromptTemplatefrom langchain_openai import ChatOpenAImodel = ChatOpenAI(model="gpt-4o")
**API Reference:**[ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) | [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html)
prompt = ChatPromptTemplate.from_messages( [ ("system", "Describe the image provided"), ( "user", [ { "type": "image_url", "image_url": {"url": "data:image/jpeg;base64,{image_data}"}, } ], ), ])
chain = prompt | model
response = chain.invoke({"image_data": image_data})print(response.content)
The image depicts a sunny day with a beautiful blue sky filled with scattered white clouds. The sky has varying shades of blue, ranging from a deeper hue near the horizon to a lighter, almost pale blue higher up. The white clouds are fluffy and scattered across the expanse of the sky, creating a peaceful and serene atmosphere. The lighting and cloud patterns suggest pleasant weather conditions, likely during the daytime hours on a mild, sunny day in an outdoor natural setting.
We can also pass in multiple images.
prompt = ChatPromptTemplate.from_messages( [ ("system", "compare the two pictures provided"), ( "user", [ { "type": "image_url", "image_url": {"url": "data:image/jpeg;base64,{image_data1}"}, }, { "type": "image_url", "image_url": {"url": "data:image/jpeg;base64,{image_data2}"}, }, ], ), ])
chain = prompt | model
response = chain.invoke({"image_data1": image_data, "image_data2": image_data})print(response.content)
The two images provided are identical. Both images feature a wooden boardwalk path extending through a lush green field under a bright blue sky with some clouds. The perspective, colors, and elements in both images are exactly the same.
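Depending on the provider, you can also reference the image by a plain URL rather than base64-encoded data (gpt-4o accepts this). A hedged sketch:
# A minimal sketch: pass the image by URL instead of base64 data.
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "Describe the image provided"),
        (
            "user",
            [{"type": "image_url", "image_url": {"url": "{image_url}"}}],
        ),
    ]
)

chain = prompt | model
response = chain.invoke({"image_url": image_url})
print(response.content)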
| null
https://python.langchain.com/v0.2/docs/how_to/qa_citations/ |
How to get a RAG application to add citations
=============================================
This guide reviews methods to get a model to cite which parts of the source documents it referenced in generating its response.
We will cover five methods:
1. Using tool-calling to cite document IDs;
2. Using tool-calling to cite document IDs and provide text snippets;
3. Direct prompting;
4. Retrieval post-processing (i.e., compressing the retrieved context to make it more relevant);
5. Generation post-processing (i.e., issuing a second LLM call to annotate a generated answer with citations).
We generally suggest using the first item of the list that works for your use-case. That is, if your model supports tool-calling, try methods 1 or 2; otherwise, or if those fail, advance down the list.
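As a preview of method 1, here is a minimal sketch of the kind of schema it relies on, bound to the `llm` we'll select in the setup below; the `CitedAnswer` name and fields are illustrative, and `.with_structured_output()` requires a model with tool-calling support:
# A minimal sketch of a tool-calling citation schema (illustrative names).
from typing import List

from langchain_core.pydantic_v1 import BaseModel, Field


class CitedAnswer(BaseModel):
    """Answer the user question based only on the given sources, and cite the sources used."""

    answer: str = Field(description="The answer to the user question.")
    citations: List[int] = Field(
        description="The integer IDs of the specific sources which justify the answer."
    )


structured_llm = llm.with_structured_output(CitedAnswer)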
Let's first create a simple RAG chain. To start we'll just retrieve from Wikipedia using the [WikipediaRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.wikipedia.WikipediaRetriever.html).
Setup[](#setup "Direct link to Setup")
---------------------------------------
First we'll need to install some dependencies and set environment vars for the models we'll be using.
%pip install -qU langchain langchain-openai langchain-anthropic langchain-community wikipedia
import getpassimport osos.environ["OPENAI_API_KEY"] = getpass.getpass()os.environ["ANTHROPIC_API_KEY"] = getpass.getpass()# Uncomment if you want to log to LangSmith# os.environ["LANGCHAIN_TRACING_V2"] = "true"# os.environ["LANGCHAIN_API_KEY"] = getpass.getpass()
Let's first select an LLM:
* OpenAI
* Anthropic
* Azure
* Google
* Cohere
* FireworksAI
* Groq
* MistralAI
* TogetherAI
pip install -qU langchain-openai
import getpassimport osos.environ["OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAIllm = ChatOpenAI(model="gpt-3.5-turbo-0125")
pip install -qU langchain-anthropic
import getpassimport osos.environ["ANTHROPIC_API_KEY"] = getpass.getpass()from langchain_anthropic import ChatAnthropicllm = ChatAnthropic(model="claude-3-sonnet-20240229")
pip install -qU langchain-openai
import getpassimport osos.environ["AZURE_OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import AzureChatOpenAIllm = AzureChatOpenAI( azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"], openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],)
pip install -qU langchain-google-vertexai
import getpassimport osos.environ["GOOGLE_API_KEY"] = getpass.getpass()from langchain_google_vertexai import ChatVertexAIllm = ChatVertexAI(model="gemini-pro")
pip install -qU langchain-cohere
import getpassimport osos.environ["COHERE_API_KEY"] = getpass.getpass()from langchain_cohere import ChatCoherellm = ChatCohere(model="command-r")
pip install -qU langchain-fireworks
import getpassimport osos.environ["FIREWORKS_API_KEY"] = getpass.getpass()from langchain_fireworks import ChatFireworksllm = ChatFireworks(model="accounts/fireworks/models/mixtral-8x7b-instruct")
pip install -qU langchain-groq
import getpassimport osos.environ["GROQ_API_KEY"] = getpass.getpass()from langchain_groq import ChatGroqllm = ChatGroq(model="llama3-8b-8192")
pip install -qU langchain-mistralai
import getpassimport osos.environ["MISTRAL_API_KEY"] = getpass.getpass()from langchain_mistralai import ChatMistralAIllm = ChatMistralAI(model="mistral-large-latest")
pip install -qU langchain-openai
import getpassimport osos.environ["TOGETHER_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAIllm = ChatOpenAI( base_url="https://api.together.xyz/v1", api_key=os.environ["TOGETHER_API_KEY"], model="mistralai/Mixtral-8x7B-Instruct-v0.1",)
from langchain_community.retrievers import WikipediaRetrieverfrom langchain_core.prompts import ChatPromptTemplatesystem_prompt = ( "You're a helpful AI assistant. Given a user question " "and some Wikipedia article snippets, answer the user " "question. If none of the articles answer the question, " "just say you don't know." "\n\nHere are the Wikipedia articles: " "{context}")retriever = WikipediaRetriever(top_k_results=6, doc_content_chars_max=2000)prompt = ChatPromptTemplate.from_messages( [ ("system", system_prompt), ("human", "{input}"), ])prompt.pretty_print()
**API Reference:**[WikipediaRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.wikipedia.WikipediaRetriever.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html)
================================ System Message ================================You're a helpful AI assistant. Given a user question and some Wikipedia article snippets, answer the user question. If none of the articles answer the question, just say you don't know.Here are the Wikipedia articles: {context}================================ Human Message ================================={input}
Now that we've got a model, retriever and prompt, let's chain them all together. We'll need to add some logic for formatting our retrieved Documents into a string that can be passed to our prompt. Following the how-to guide on [adding citations](/v0.2/docs/how_to/qa_citations/) to a RAG application, we'll make it so our chain returns both the answer and the retrieved Documents.
from typing import Listfrom langchain_core.documents import Documentfrom langchain_core.output_parsers import StrOutputParserfrom langchain_core.runnables import RunnablePassthroughdef format_docs(docs: List[Document]): return "\n\n".join(doc.page_content for doc in docs)rag_chain_from_docs = ( RunnablePassthrough.assign(context=(lambda x: format_docs(x["context"]))) | prompt | llm | StrOutputParser())retrieve_docs = (lambda x: x["input"]) | retrieverchain = RunnablePassthrough.assign(context=retrieve_docs).assign( answer=rag_chain_from_docs)
**API Reference:**[Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) | [StrOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.string.StrOutputParser.html) | [RunnablePassthrough](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html)
result = chain.invoke({"input": "How fast are cheetahs?"})
print(result.keys())
dict_keys(['input', 'context', 'answer'])
print(result["context"][0])
page_content='The cheetah (Acinonyx jubatus) is a large cat and the fastest land animal. It has a tawny to creamy white or pale buff fur that is marked with evenly spaced, solid black spots. The head is small and rounded, with a short snout and black tear-like facial streaks. It reaches 67–94 cm (26–37 in) at the shoulder, and the head-and-body length is between 1.1 and 1.5 m (3 ft 7 in and 4 ft 11 in). Adults weigh between 21 and 72 kg (46 and 159 lb). The cheetah is capable of running at 93 to 104 km/h (58 to 65 mph); it has evolved specialized adaptations for speed, including a light build, long thin legs and a long tail.\nThe cheetah was first described in the late 18th century. Four subspecies are recognised today that are native to Africa and central Iran. An African subspecies was introduced to India in 2022. It is now distributed mainly in small, fragmented populations in northwestern, eastern and southern Africa and central Iran. It lives in a variety of habitats such as savannahs in the Serengeti, arid mountain ranges in the Sahara, and hilly desert terrain.\nThe cheetah lives in three main social groups: females and their cubs, male "coalitions", and solitary males. While females lead a nomadic life searching for prey in large home ranges, males are more sedentary and instead establish much smaller territories in areas with plentiful prey and access to females. The cheetah is active during the day, with peaks during dawn and dusk. It feeds on small- to medium-sized prey, mostly weighing under 40 kg (88 lb), and prefers medium-sized ungulates such as impala, springbok and Thomson\'s gazelles. The cheetah typically stalks its prey within 60–100 m (200–330 ft) before charging towards it, trips it during the chase and bites its throat to suffocate it to death. It breeds throughout the year. After a gestation of nearly three months, females give birth to a litter of three or four cubs. Cheetah cubs are highly vulnerable to predation by other large carnivores. They are weaned a' metadata={'title': 'Cheetah', 'summary': 'The cheetah (Acinonyx jubatus) is a large cat and the fastest land animal. It has a tawny to creamy white or pale buff fur that is marked with evenly spaced, solid black spots. The head is small and rounded, with a short snout and black tear-like facial streaks. It reaches 67–94 cm (26–37 in) at the shoulder, and the head-and-body length is between 1.1 and 1.5 m (3 ft 7 in and 4 ft 11 in). Adults weigh between 21 and 72 kg (46 and 159 lb). The cheetah is capable of running at 93 to 104 km/h (58 to 65 mph); it has evolved specialized adaptations for speed, including a light build, long thin legs and a long tail.\nThe cheetah was first described in the late 18th century. Four subspecies are recognised today that are native to Africa and central Iran. An African subspecies was introduced to India in 2022. It is now distributed mainly in small, fragmented populations in northwestern, eastern and southern Africa and central Iran. It lives in a variety of habitats such as savannahs in the Serengeti, arid mountain ranges in the Sahara, and hilly desert terrain.\nThe cheetah lives in three main social groups: females and their cubs, male "coalitions", and solitary males. While females lead a nomadic life searching for prey in large home ranges, males are more sedentary and instead establish much smaller territories in areas with plentiful prey and access to females. The cheetah is active during the day, with peaks during dawn and dusk. 
It feeds on small- to medium-sized prey, mostly weighing under 40 kg (88 lb), and prefers medium-sized ungulates such as impala, springbok and Thomson\'s gazelles. The cheetah typically stalks its prey within 60–100 m (200–330 ft) before charging towards it, trips it during the chase and bites its throat to suffocate it to death. It breeds throughout the year. After a gestation of nearly three months, females give birth to a litter of three or four cubs. Cheetah cubs are highly vulnerable to predation by other large carnivores. They are weaned at around four months and are independent by around 20 months of age.\nThe cheetah is threatened by habitat loss, conflict with humans, poaching and high susceptibility to diseases. In 2016, the global cheetah population was estimated at 7,100 individuals in the wild; it is listed as Vulnerable on the IUCN Red List. It has been widely depicted in art, literature, advertising, and animation. It was tamed in ancient Egypt and trained for hunting ungulates in the Arabian Peninsula and India. It has been kept in zoos since the early 19th century.', 'source': 'https://en.wikipedia.org/wiki/Cheetah'}
print(result["answer"])
Cheetahs are capable of running at speeds of 93 to 104 km/h (58 to 65 mph). They have evolved specialized adaptations for speed, including a light build, long thin legs, and a long tail.
LangSmith trace: [https://smith.langchain.com/public/0472c5d1-49dc-4c1c-8100-61910067d7ed/r](https://smith.langchain.com/public/0472c5d1-49dc-4c1c-8100-61910067d7ed/r)
Function-calling[](#function-calling "Direct link to Function-calling")
------------------------------------------------------------------------
If your LLM of choice implements a [tool-calling](/v0.2/docs/concepts/#functiontool-calling) feature, you can use it to make the model specify which of the provided documents it's referencing when generating its answer. LangChain tool-calling models implement a `.with_structured_output` method which will force generation adhering to a desired schema (see for example [here](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html#langchain_openai.chat_models.base.ChatOpenAI.with_structured_output)).
### Cite documents[](#cite-documents "Direct link to Cite documents")
To cite documents using an identifier, we format the identifiers into the prompt, then use `.with_structured_output` to coerce the LLM to reference these identifiers in its output.
First we define a schema for the output. The `.with_structured_output` method supports multiple formats, including JSON schema and Pydantic. Here we will use Pydantic:
from langchain_core.pydantic_v1 import BaseModel, Fieldclass CitedAnswer(BaseModel): """Answer the user question based only on the given sources, and cite the sources used.""" answer: str = Field( ..., description="The answer to the user question, which is based only on the given sources.", ) citations: List[int] = Field( ..., description="The integer IDs of the SPECIFIC sources which justify the answer.", )
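If you prefer not to define a Pydantic class, a roughly equivalent JSON-schema dict can be passed to `.with_structured_output` instead (a sketch; when a dict schema is used, the model output is returned as a plain dict rather than a Pydantic object):

```python
# Hypothetical JSON-schema equivalent of the CitedAnswer Pydantic class above.
cited_answer_schema = {
    "title": "CitedAnswer",
    "description": "Answer the user question based only on the given sources, and cite the sources used.",
    "type": "object",
    "properties": {
        "answer": {
            "type": "string",
            "description": "The answer to the user question, based only on the given sources.",
        },
        "citations": {
            "type": "array",
            "items": {"type": "integer"},
            "description": "The integer IDs of the SPECIFIC sources which justify the answer.",
        },
    },
    "required": ["answer", "citations"],
}

# structured_llm = llm.with_structured_output(cited_answer_schema)
```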
Let's see what the model output is like when we pass in our functions and a user input:
structured_llm = llm.with_structured_output(CitedAnswer)example_q = """What is Brian's height?Source: 1Information: Suzy is 6'2"Source: 2Information: Jeremiah is blondeSource: 3Information: Brian is 3 inches shorter than Suzy"""result = structured_llm.invoke(example_q)result
CitedAnswer(answer='Brian\'s height is 5\'11".', citations=[1, 3])
Or as a dict:
result.dict()
{'answer': 'Brian\'s height is 5\'11".', 'citations': [1, 3]}
Now we structure the source identifiers into the prompt to replicate with our chain. We will make three changes:
1. Update the prompt to include source identifiers;
2. Use the `structured_llm` (i.e., `llm.with_structured_output(CitedAnswer)`);
3. Remove the `StrOutputParser`, to retain the Pydantic object in the output.
def format_docs_with_id(docs: List[Document]) -> str: formatted = [ f"Source ID: {i}\nArticle Title: {doc.metadata['title']}\nArticle Snippet: {doc.page_content}" for i, doc in enumerate(docs) ] return "\n\n" + "\n\n".join(formatted)rag_chain_from_docs = ( RunnablePassthrough.assign(context=(lambda x: format_docs_with_id(x["context"]))) | prompt | structured_llm)retrieve_docs = (lambda x: x["input"]) | retrieverchain = RunnablePassthrough.assign(context=retrieve_docs).assign( answer=rag_chain_from_docs)
result = chain.invoke({"input": "How fast are cheetahs?"})
print(result["answer"])
answer='Cheetahs can run at speeds of 93 to 104 km/h (58 to 65 mph). They are known as the fastest land animals.' citations=[0]
We can inspect the document at index 0, which the model cited:
print(result["context"][0])
page_content='The cheetah (Acinonyx jubatus) is a large cat and the fastest land animal. It has a tawny to creamy white or pale buff fur that is marked with evenly spaced, solid black spots. The head is small and rounded, with a short snout and black tear-like facial streaks. It reaches 67–94 cm (26–37 in) at the shoulder, and the head-and-body length is between 1.1 and 1.5 m (3 ft 7 in and 4 ft 11 in). Adults weigh between 21 and 72 kg (46 and 159 lb). The cheetah is capable of running at 93 to 104 km/h (58 to 65 mph); it has evolved specialized adaptations for speed, including a light build, long thin legs and a long tail.\nThe cheetah was first described in the late 18th century. Four subspecies are recognised today that are native to Africa and central Iran. An African subspecies was introduced to India in 2022. It is now distributed mainly in small, fragmented populations in northwestern, eastern and southern Africa and central Iran. It lives in a variety of habitats such as savannahs in the Serengeti, arid mountain ranges in the Sahara, and hilly desert terrain.\nThe cheetah lives in three main social groups: females and their cubs, male "coalitions", and solitary males. While females lead a nomadic life searching for prey in large home ranges, males are more sedentary and instead establish much smaller territories in areas with plentiful prey and access to females. The cheetah is active during the day, with peaks during dawn and dusk. It feeds on small- to medium-sized prey, mostly weighing under 40 kg (88 lb), and prefers medium-sized ungulates such as impala, springbok and Thomson\'s gazelles. The cheetah typically stalks its prey within 60–100 m (200–330 ft) before charging towards it, trips it during the chase and bites its throat to suffocate it to death. It breeds throughout the year. After a gestation of nearly three months, females give birth to a litter of three or four cubs. Cheetah cubs are highly vulnerable to predation by other large carnivores. They are weaned a' metadata={'title': 'Cheetah', 'summary': 'The cheetah (Acinonyx jubatus) is a large cat and the fastest land animal. It has a tawny to creamy white or pale buff fur that is marked with evenly spaced, solid black spots. The head is small and rounded, with a short snout and black tear-like facial streaks. It reaches 67–94 cm (26–37 in) at the shoulder, and the head-and-body length is between 1.1 and 1.5 m (3 ft 7 in and 4 ft 11 in). Adults weigh between 21 and 72 kg (46 and 159 lb). The cheetah is capable of running at 93 to 104 km/h (58 to 65 mph); it has evolved specialized adaptations for speed, including a light build, long thin legs and a long tail.\nThe cheetah was first described in the late 18th century. Four subspecies are recognised today that are native to Africa and central Iran. An African subspecies was introduced to India in 2022. It is now distributed mainly in small, fragmented populations in northwestern, eastern and southern Africa and central Iran. It lives in a variety of habitats such as savannahs in the Serengeti, arid mountain ranges in the Sahara, and hilly desert terrain.\nThe cheetah lives in three main social groups: females and their cubs, male "coalitions", and solitary males. While females lead a nomadic life searching for prey in large home ranges, males are more sedentary and instead establish much smaller territories in areas with plentiful prey and access to females. The cheetah is active during the day, with peaks during dawn and dusk. 
It feeds on small- to medium-sized prey, mostly weighing under 40 kg (88 lb), and prefers medium-sized ungulates such as impala, springbok and Thomson\'s gazelles. The cheetah typically stalks its prey within 60–100 m (200–330 ft) before charging towards it, trips it during the chase and bites its throat to suffocate it to death. It breeds throughout the year. After a gestation of nearly three months, females give birth to a litter of three or four cubs. Cheetah cubs are highly vulnerable to predation by other large carnivores. They are weaned at around four months and are independent by around 20 months of age.\nThe cheetah is threatened by habitat loss, conflict with humans, poaching and high susceptibility to diseases. In 2016, the global cheetah population was estimated at 7,100 individuals in the wild; it is listed as Vulnerable on the IUCN Red List. It has been widely depicted in art, literature, advertising, and animation. It was tamed in ancient Egypt and trained for hunting ungulates in the Arabian Peninsula and India. It has been kept in zoos since the early 19th century.', 'source': 'https://en.wikipedia.org/wiki/Cheetah'}
LangSmith trace: [https://smith.langchain.com/public/aff39dc7-3e09-4d64-8083-87026d975534/r](https://smith.langchain.com/public/aff39dc7-3e09-4d64-8083-87026d975534/r)
### Cite snippets[](#cite-snippets "Direct link to Cite snippets")
To return text spans (perhaps in addition to source identifiers), we can use the same approach. The only change will be to build a more complex output schema, here using Pydantic, that includes a "quote" alongside a source identifier.
_Aside: Note that if we break up our documents so that we have many documents with only a sentence or two instead of a few long documents, citing documents becomes roughly equivalent to citing snippets, and may be easier for the model because the model just needs to return an identifier for each snippet instead of the actual text. Probably worth trying both approaches and evaluating._
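As a rough illustration of that idea, the retrieved documents could be re-split into small chunks before formatting (a sketch; the splitter settings are hypothetical):

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Hypothetical sentence-level splitter: with chunks this small, citing a source
# ID is effectively citing a snippet.
snippet_splitter = RecursiveCharacterTextSplitter(
    chunk_size=200,
    chunk_overlap=0,
    separators=["\n\n", "\n", ".", " "],
    keep_separator=False,
)

# split_docs = snippet_splitter.split_documents(docs)  # then format as before
```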
class Citation(BaseModel): source_id: int = Field( ..., description="The integer ID of a SPECIFIC source which justifies the answer.", ) quote: str = Field( ..., description="The VERBATIM quote from the specified source that justifies the answer.", )class QuotedAnswer(BaseModel): """Answer the user question based only on the given sources, and cite the sources used.""" answer: str = Field( ..., description="The answer to the user question, which is based only on the given sources.", ) citations: List[Citation] = Field( ..., description="Citations from the given sources that justify the answer." )
rag_chain_from_docs = ( RunnablePassthrough.assign(context=(lambda x: format_docs_with_id(x["context"]))) | prompt | llm.with_structured_output(QuotedAnswer))retrieve_docs = (lambda x: x["input"]) | retrieverchain = RunnablePassthrough.assign(context=retrieve_docs).assign( answer=rag_chain_from_docs)
result = chain.invoke({"input": "How fast are cheetahs?"})
Here we see that the model has extracted a relevant snippet of text from source 0:
result["answer"]
QuotedAnswer(answer='Cheetahs can run at speeds of 93 to 104 km/h (58 to 65 mph).', citations=[Citation(source_id=0, quote='The cheetah is capable of running at 93 to 104 km/h (58 to 65 mph); it has evolved specialized adaptations for speed, including a light build, long thin legs and a long tail.')])
LangSmith trace: [https://smith.langchain.com/public/0f638cc9-8409-4a53-9010-86ac28144129/r](https://smith.langchain.com/public/0f638cc9-8409-4a53-9010-86ac28144129/r)
Direct prompting[](#direct-prompting "Direct link to Direct prompting")
------------------------------------------------------------------------
Many models don't support function-calling. We can achieve similar results with direct prompting. Let's try instructing a model to generate structured XML for its output:
xml_system = """You're a helpful AI assistant. Given a user question and some Wikipedia article snippets, \answer the user question and provide citations. If none of the articles answer the question, just say you don't know.Remember, you must return both an answer and citations. A citation consists of a VERBATIM quote that \justifies the answer and the ID of the quote article. Return a citation for every quote across all articles \that justify the answer. Use the following format for your final output:<cited_answer> <answer></answer> <citations> <citation><source_id></source_id><quote></quote></citation> <citation><source_id></source_id><quote></quote></citation> ... </citations></cited_answer>Here are the Wikipedia articles:{context}"""xml_prompt = ChatPromptTemplate.from_messages( [("system", xml_system), ("human", "{input}")])
We now make similar small updates to our chain:
1. We update the formatting function to wrap the retrieved context in XML tags;
2. We do not use `.with_structured_output` (e.g., because the model does not support it);
3. We use [XMLOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.xml.XMLOutputParser.html) in place of `StrOutputParser` to parse the answer into a dict.
from langchain_core.output_parsers import XMLOutputParserdef format_docs_xml(docs: List[Document]) -> str: formatted = [] for i, doc in enumerate(docs): doc_str = f"""\ <source id=\"{i}\"> <title>{doc.metadata['title']}</title> <article_snippet>{doc.page_content}</article_snippet> </source>""" formatted.append(doc_str) return "\n\n<sources>" + "\n".join(formatted) + "</sources>"rag_chain_from_docs = ( RunnablePassthrough.assign(context=(lambda x: format_docs_xml(x["context"]))) | xml_prompt | llm | XMLOutputParser())retrieve_docs = (lambda x: x["input"]) | retrieverchain = RunnablePassthrough.assign(context=retrieve_docs).assign( answer=rag_chain_from_docs)
**API Reference:**[XMLOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.xml.XMLOutputParser.html)
result = chain.invoke({"input": "How fast are cheetahs?"})
Note that citations are again structured into the answer:
result["answer"]
{'cited_answer': [{'answer': 'Cheetahs are capable of running at 93 to 104 km/h (58 to 65 mph).'}, {'citations': [{'citation': [{'source_id': '0'}, {'quote': 'The cheetah is capable of running at 93 to 104 km/h (58 to 65 mph); it has evolved specialized adaptations for speed, including a light build, long thin legs and a long tail.'}]}]}]}
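The `XMLOutputParser` returns nested lists of single-key dicts. If a flatter structure is more convenient, a small helper (a sketch; the function name is hypothetical, and real outputs should be validated since the model may omit fields) can collapse it into the same shape as the Pydantic objects above:

```python
def flatten_cited_answer(parsed: dict) -> dict:
    """Collapse the nested XMLOutputParser result into a flat dict."""
    flat = {"answer": None, "citations": []}
    for part in parsed["cited_answer"]:
        if "answer" in part:
            flat["answer"] = part["answer"]
        elif "citations" in part and part["citations"]:
            for item in part["citations"]:
                # Each <citation> parses to a list of single-key dicts.
                fields = {key: value for d in item["citation"] for key, value in d.items()}
                flat["citations"].append(fields)
    return flat


flatten_cited_answer(result["answer"])
# {'answer': 'Cheetahs are capable of running at 93 to 104 km/h (58 to 65 mph).',
#  'citations': [{'source_id': '0', 'quote': 'The cheetah is capable of running at 93 to 104 km/h ...'}]}
```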
LangSmith trace: [https://smith.langchain.com/public/a3636c70-39c6-4c8f-bc83-1c7a174c237e/r](https://smith.langchain.com/public/a3636c70-39c6-4c8f-bc83-1c7a174c237e/r)
Retrieval post-processing[](#retrieval-post-processing "Direct link to Retrieval post-processing")
---------------------------------------------------------------------------------------------------
Another approach is to post-process our retrieved documents to compress the content, so that the source content is already minimal enough that we don't need the model to cite specific sources or spans. For example, we could break up each document into a sentence or two, embed those and keep only the most relevant ones. LangChain has some built-in components for this. Here we'll use a [RecursiveCharacterTextSplitter](https://api.python.langchain.com/en/latest/text_splitter/langchain_text_splitters.RecursiveCharacterTextSplitter.html#langchain_text_splitters.RecursiveCharacterTextSplitter), which creates chunks of a specified size by splitting on separator substrings, and an [EmbeddingsFilter](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.document_compressors.embeddings_filter.EmbeddingsFilter.html#langchain.retrievers.document_compressors.embeddings_filter.EmbeddingsFilter), which keeps only the texts with the most relevant embeddings.
This approach effectively swaps our original retriever with an updated one that compresses the documents. To start, we build the retriever:
from langchain.retrievers.document_compressors import EmbeddingsFilterfrom langchain_core.runnables import RunnableParallelfrom langchain_openai import OpenAIEmbeddingsfrom langchain_text_splitters import RecursiveCharacterTextSplittersplitter = RecursiveCharacterTextSplitter( chunk_size=400, chunk_overlap=0, separators=["\n\n", "\n", ".", " "], keep_separator=False,)compressor = EmbeddingsFilter(embeddings=OpenAIEmbeddings(), k=10)def split_and_filter(input) -> List[Document]: docs = input["docs"] question = input["question"] split_docs = splitter.split_documents(docs) stateful_docs = compressor.compress_documents(split_docs, question) return [stateful_doc for stateful_doc in stateful_docs]new_retriever = ( RunnableParallel(question=RunnablePassthrough(), docs=retriever) | split_and_filter)docs = new_retriever.invoke("How fast are cheetahs?")for doc in docs: print(doc.page_content) print("\n\n")
**API Reference:**[EmbeddingsFilter](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.document_compressors.embeddings_filter.EmbeddingsFilter.html) | [RunnableParallel](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableParallel.html) | [OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html) | [RecursiveCharacterTextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.RecursiveCharacterTextSplitter.html)
Adults weigh between 21 and 72 kg (46 and 159 lb). The cheetah is capable of running at 93 to 104 km/h (58 to 65 mph); it has evolved specialized adaptations for speed, including a light build, long thin legs and a long tailThe cheetah (Acinonyx jubatus) is a large cat and the fastest land animal. It has a tawny to creamy white or pale buff fur that is marked with evenly spaced, solid black spots. The head is small and rounded, with a short snout and black tear-like facial streaks. It reaches 67–94 cm (26–37 in) at the shoulder, and the head-and-body length is between 1.1 and 1.5 m (3 ft 7 in and 4 ft 11 in)2 mph), or 171 body lengths per second. The cheetah, the fastest land mammal, scores at only 16 body lengths per second, while Anna's hummingbird has the highest known length-specific velocity attained by any vertebrateIt feeds on small- to medium-sized prey, mostly weighing under 40 kg (88 lb), and prefers medium-sized ungulates such as impala, springbok and Thomson's gazelles. The cheetah typically stalks its prey within 60–100 m (200–330 ft) before charging towards it, trips it during the chase and bites its throat to suffocate it to death. It breeds throughout the yearThe cheetah was first described in the late 18th century. Four subspecies are recognised today that are native to Africa and central Iran. An African subspecies was introduced to India in 2022. It is now distributed mainly in small, fragmented populations in northwestern, eastern and southern Africa and central IranThe cheetah lives in three main social groups: females and their cubs, male "coalitions", and solitary males. While females lead a nomadic life searching for prey in large home ranges, males are more sedentary and instead establish much smaller territories in areas with plentiful prey and access to females. The cheetah is active during the day, with peaks during dawn and duskThe Southeast African cheetah (Acinonyx jubatus jubatus) is the nominate cheetah subspecies native to East and Southern Africa. The Southern African cheetah lives mainly in the lowland areas and deserts of the Kalahari, the savannahs of Okavango Delta, and the grasslands of the Transvaal region in South Africa. In Namibia, cheetahs are mostly found in farmlandsSubpopulations have been called "South African cheetah" and "Namibian cheetah."In India, four cheetahs of the subspecies are living in Kuno National Park in Madhya Pradesh after having been introduced thereAcinonyx jubatus velox proposed in 1913 by Edmund Heller on basis of a cheetah that was shot by Kermit Roosevelt in June 1909 in the Kenyan highlands.Acinonyx rex proposed in 1927 by Reginald Innes Pocock on basis of a specimen from the Umvukwe Range in Rhodesia.
Next, we assemble it into our chain as before:
rag_chain_from_docs = ( RunnablePassthrough.assign(context=(lambda x: format_docs(x["context"]))) | prompt | llm | StrOutputParser())chain = RunnablePassthrough.assign( context=(lambda x: x["input"]) | new_retriever).assign(answer=rag_chain_from_docs)
result = chain.invoke({"input": "How fast are cheetahs?"})print(result["answer"])
Cheetahs are capable of running at speeds between 93 to 104 km/h (58 to 65 mph), making them the fastest land animals.
Note that the document content is now compressed, although the document objects retain the original content in a "summary" key in their metadata. These summaries are not passed to the model; only the condensed content is.
result["context"][0].page_content # passed to model
'Adults weigh between 21 and 72 kg (46 and 159 lb). The cheetah is capable of running at 93 to 104 km/h (58 to 65 mph); it has evolved specialized adaptations for speed, including a light build, long thin legs and a long tail'
result["context"][0].metadata["summary"] # original document
'The cheetah (Acinonyx jubatus) is a large cat and the fastest land animal. It has a tawny to creamy white or pale buff fur that is marked with evenly spaced, solid black spots. The head is small and rounded, with a short snout and black tear-like facial streaks. It reaches 67–94 cm (26–37 in) at the shoulder, and the head-and-body length is between 1.1 and 1.5 m (3 ft 7 in and 4 ft 11 in). Adults weigh between 21 and 72 kg (46 and 159 lb). The cheetah is capable of running at 93 to 104 km/h (58 to 65 mph); it has evolved specialized adaptations for speed, including a light build, long thin legs and a long tail.\nThe cheetah was first described in the late 18th century. Four subspecies are recognised today that are native to Africa and central Iran. An African subspecies was introduced to India in 2022. It is now distributed mainly in small, fragmented populations in northwestern, eastern and southern Africa and central Iran. It lives in a variety of habitats such as savannahs in the Serengeti, arid mountain ranges in the Sahara, and hilly desert terrain.\nThe cheetah lives in three main social groups: females and their cubs, male "coalitions", and solitary males. While females lead a nomadic life searching for prey in large home ranges, males are more sedentary and instead establish much smaller territories in areas with plentiful prey and access to females. The cheetah is active during the day, with peaks during dawn and dusk. It feeds on small- to medium-sized prey, mostly weighing under 40 kg (88 lb), and prefers medium-sized ungulates such as impala, springbok and Thomson\'s gazelles. The cheetah typically stalks its prey within 60–100 m (200–330 ft) before charging towards it, trips it during the chase and bites its throat to suffocate it to death. It breeds throughout the year. After a gestation of nearly three months, females give birth to a litter of three or four cubs. Cheetah cubs are highly vulnerable to predation by other large carnivores. They are weaned at around four months and are independent by around 20 months of age.\nThe cheetah is threatened by habitat loss, conflict with humans, poaching and high susceptibility to diseases. In 2016, the global cheetah population was estimated at 7,100 individuals in the wild; it is listed as Vulnerable on the IUCN Red List. It has been widely depicted in art, literature, advertising, and animation. It was tamed in ancient Egypt and trained for hunting ungulates in the Arabian Peninsula and India. It has been kept in zoos since the early 19th century.'
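Because metadata is carried through splitting and filtering, the provenance of each compressed snippet can still be surfaced without asking the model for citations (a minimal sketch):

```python
# List the source URL of each compressed snippet that was passed to the model.
for doc in result["context"]:
    print(doc.metadata.get("source"))
```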
LangSmith trace: [https://smith.langchain.com/public/a61304fa-e5a5-4c64-a268-b0aef1130d53/r](https://smith.langchain.com/public/a61304fa-e5a5-4c64-a268-b0aef1130d53/r)
Generation post-processing[](#generation-post-processing "Direct link to Generation post-processing")
------------------------------------------------------------------------------------------------------
Another approach is to post-process our model generation. In this example we'll first generate just an answer, and then we'll ask the model to annotate its own answer with citations. The downside of this approach, of course, is that it is slower and more expensive, because two model calls need to be made.
Let's apply this to our initial chain.
class Citation(BaseModel): source_id: int = Field( ..., description="The integer ID of a SPECIFIC source which justifies the answer.", ) quote: str = Field( ..., description="The VERBATIM quote from the specified source that justifies the answer.", )class AnnotatedAnswer(BaseModel): """Annotate the answer to the user question with quote citations that justify the answer.""" citations: List[Citation] = Field( ..., description="Citations from the given sources that justify the answer." )structured_llm = llm.with_structured_output(AnnotatedAnswer)
from langchain_core.prompts import MessagesPlaceholderprompt = ChatPromptTemplate.from_messages( [ ("system", system_prompt), ("human", "{question}"), MessagesPlaceholder("chat_history", optional=True), ])answer = prompt | llmannotation_chain = prompt | structured_llmchain = ( RunnableParallel( question=(lambda x: x["input"]), docs=(lambda x: x["input"]) | retriever ) .assign(context=(lambda x: format_docs_with_id(x["docs"]))) .assign(ai_message=answer) .assign( chat_history=(lambda x: [x["ai_message"]]), answer=(lambda x: x["ai_message"].content), ) .assign(annotations=annotation_chain) .pick(["answer", "docs", "annotations"]))
**API Reference:**[MessagesPlaceholder](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.MessagesPlaceholder.html)
result = chain.invoke({"input": "How fast are cheetahs?"})
print(result["answer"])
Cheetahs are capable of running at speeds between 93 to 104 km/h (58 to 65 mph). Their specialized adaptations for speed, such as a light build, long thin legs, and a long tail, allow them to be the fastest land animals.
result["annotations"]
AnnotatedAnswer(citations=[Citation(source_id=0, quote='The cheetah is capable of running at 93 to 104 km/h (58 to 65 mph); it has evolved specialized adaptations for speed, including a light build, long thin legs and a long tail.')])
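If desired, the answer and its annotations can be recombined for display (a sketch; the helper name is hypothetical):

```python
def render_annotated(result: dict) -> str:
    """Format the answer followed by its supporting quotes."""
    lines = [result["answer"], "", "Citations:"]
    for citation in result["annotations"].citations:
        lines.append(f'- Source {citation.source_id}: "{citation.quote}"')
    return "\n".join(lines)


print(render_annotated(result))
```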
LangSmith trace: [https://smith.langchain.com/public/bf5e8856-193b-4ff2-af8d-c0f4fbd1d9cb/r](https://smith.langchain.com/public/bf5e8856-193b-4ff2-af8d-c0f4fbd1d9cb/r)
On this page
How to add chat history
=======================
In many Q&A applications we want to allow the user to have a back-and-forth conversation, meaning the application needs some sort of "memory" of past questions and answers, and some logic for incorporating those into its current thinking.
In this guide we focus on **adding logic for incorporating historical messages.**
This is largely a condensed version of the [Conversational RAG tutorial](/v0.2/docs/tutorials/qa_chat_history/).
We will cover two approaches:
1. [Chains](/v0.2/docs/how_to/qa_chat_history_how_to/#chains), in which we always execute a retrieval step;
2. [Agents](/v0.2/docs/how_to/qa_chat_history_how_to/#agents), in which we give an LLM discretion over whether and how to execute a retrieval step (or multiple steps).
For the external knowledge source, we will use the same [LLM Powered Autonomous Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) blog post by Lilian Weng from the [RAG tutorial](/v0.2/docs/tutorials/rag/).
Setup[](#setup "Direct link to Setup")
---------------------------------------
### Dependencies[](#dependencies "Direct link to Dependencies")
We'll use OpenAI embeddings and a Chroma vector store in this walkthrough, but everything shown here works with any [Embeddings](/v0.2/docs/concepts/#embedding-models), [VectorStore](/v0.2/docs/concepts/#vectorstores), or [Retriever](/v0.2/docs/concepts/#retrievers).
We'll use the following packages:
%%capture --no-stderr%pip install --upgrade --quiet langchain langchain-community langchain-chroma bs4
We need to set the environment variable `OPENAI_API_KEY`, which can be done directly or loaded from a `.env` file like so:
import getpassimport osif not os.environ.get("OPENAI_API_KEY"): os.environ["OPENAI_API_KEY"] = getpass.getpass()# import dotenv# dotenv.load_dotenv()
### LangSmith[](#langsmith "Direct link to LangSmith")
Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls. As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent. The best way to do this is with [LangSmith](https://smith.langchain.com).
Note that LangSmith is not needed, but it is helpful. If you do want to use LangSmith, after you sign up at the link above, make sure to set your environment variables to start logging traces:
os.environ["LANGCHAIN_TRACING_V2"] = "true"if not os.environ.get("LANGCHAIN_API_KEY"): os.environ["LANGCHAIN_API_KEY"] = getpass.getpass()
Chains[](#chains "Direct link to Chains")
------------------------------------------
In a conversational RAG application, queries issued to the retriever should be informed by the context of the conversation. LangChain provides a [create\_history\_aware\_retriever](https://api.python.langchain.com/en/latest/chains/langchain.chains.history_aware_retriever.create_history_aware_retriever.html) constructor to simplify this. It constructs a chain that accepts keys `input` and `chat_history` as input, and has the same output schema as a retriever. `create_history_aware_retriever` requires as inputs:
1. LLM;
2. Retriever;
3. Prompt.
First we obtain these objects:
### LLM[](#llm "Direct link to LLM")
We can use any supported chat model:
* OpenAI
* Anthropic
* Azure
* Google
* Cohere
* FireworksAI
* Groq
* MistralAI
* TogetherAI
pip install -qU langchain-openai
import getpassimport osos.environ["OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAIllm = ChatOpenAI(model="gpt-3.5-turbo-0125")
pip install -qU langchain-anthropic
import getpassimport osos.environ["ANTHROPIC_API_KEY"] = getpass.getpass()from langchain_anthropic import ChatAnthropicllm = ChatAnthropic(model="claude-3-sonnet-20240229")
pip install -qU langchain-openai
import getpassimport osos.environ["AZURE_OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import AzureChatOpenAIllm = AzureChatOpenAI( azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"], openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],)
pip install -qU langchain-google-vertexai
import getpassimport osos.environ["GOOGLE_API_KEY"] = getpass.getpass()from langchain_google_vertexai import ChatVertexAIllm = ChatVertexAI(model="gemini-pro")
pip install -qU langchain-cohere
import getpassimport osos.environ["COHERE_API_KEY"] = getpass.getpass()from langchain_cohere import ChatCoherellm = ChatCohere(model="command-r")
pip install -qU langchain-fireworks
import getpassimport osos.environ["FIREWORKS_API_KEY"] = getpass.getpass()from langchain_fireworks import ChatFireworksllm = ChatFireworks(model="accounts/fireworks/models/mixtral-8x7b-instruct")
pip install -qU langchain-groq
import getpassimport osos.environ["GROQ_API_KEY"] = getpass.getpass()from langchain_groq import ChatGroqllm = ChatGroq(model="llama3-8b-8192")
pip install -qU langchain-mistralai
import getpassimport osos.environ["MISTRAL_API_KEY"] = getpass.getpass()from langchain_mistralai import ChatMistralAIllm = ChatMistralAI(model="mistral-large-latest")
pip install -qU langchain-openai
import getpassimport osos.environ["TOGETHER_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAIllm = ChatOpenAI( base_url="https://api.together.xyz/v1", api_key=os.environ["TOGETHER_API_KEY"], model="mistralai/Mixtral-8x7B-Instruct-v0.1",)
### Retriever[](#retriever "Direct link to Retriever")
For the retriever, we will use [WebBaseLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.web_base.WebBaseLoader.html) to load the content of a web page. Here we instantiate a `Chroma` vectorstore and then use its [.as\_retriever](https://api.python.langchain.com/en/latest/vectorstores/langchain_core.vectorstores.VectorStore.html#langchain_core.vectorstores.VectorStore.as_retriever) method to build a retriever that can be incorporated into [LCEL](/v0.2/docs/concepts/#langchain-expression-language) chains.
import bs4from langchain.chains import create_retrieval_chainfrom langchain.chains.combine_documents import create_stuff_documents_chainfrom langchain_chroma import Chromafrom langchain_community.document_loaders import WebBaseLoaderfrom langchain_core.output_parsers import StrOutputParserfrom langchain_core.prompts import ChatPromptTemplatefrom langchain_core.runnables import RunnablePassthroughfrom langchain_openai import OpenAIEmbeddingsfrom langchain_text_splitters import RecursiveCharacterTextSplitterloader = WebBaseLoader( web_paths=("https://lilianweng.github.io/posts/2023-06-23-agent/",), bs_kwargs=dict( parse_only=bs4.SoupStrainer( class_=("post-content", "post-title", "post-header") ) ),)docs = loader.load()text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)splits = text_splitter.split_documents(docs)vectorstore = Chroma.from_documents(documents=splits, embedding=OpenAIEmbeddings())retriever = vectorstore.as_retriever()
**API Reference:**[create\_retrieval\_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval.create_retrieval_chain.html) | [create\_stuff\_documents\_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.stuff.create_stuff_documents_chain.html) | [WebBaseLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.web_base.WebBaseLoader.html) | [StrOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.string.StrOutputParser.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) | [RunnablePassthrough](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html) | [OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html) | [RecursiveCharacterTextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.RecursiveCharacterTextSplitter.html)
### Prompt[](#prompt "Direct link to Prompt")
We'll use a prompt that includes a `MessagesPlaceholder` variable under the name "chat\_history". This allows us to pass in a list of Messages to the prompt using the "chat\_history" input key, and these messages will be inserted after the system message and before the human message containing the latest question.
from langchain.chains import create_history_aware_retrieverfrom langchain_core.prompts import MessagesPlaceholdercontextualize_q_system_prompt = ( "Given a chat history and the latest user question " "which might reference context in the chat history, " "formulate a standalone question which can be understood " "without the chat history. Do NOT answer the question, " "just reformulate it if needed and otherwise return it as is.")contextualize_q_prompt = ChatPromptTemplate.from_messages( [ ("system", contextualize_q_system_prompt), MessagesPlaceholder("chat_history"), ("human", "{input}"), ])
**API Reference:**[create\_history\_aware\_retriever](https://api.python.langchain.com/en/latest/chains/langchain.chains.history_aware_retriever.create_history_aware_retriever.html) | [MessagesPlaceholder](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.MessagesPlaceholder.html)
### Assembling the chain[](#assembling-the-chain "Direct link to Assembling the chain")
We can then instantiate the history-aware retriever:
history_aware_retriever = create_history_aware_retriever( llm, retriever, contextualize_q_prompt)
This chain prepends a rephrasing of the input query to our retriever, so that the retrieval incorporates the context of the conversation.
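For a quick sanity check (a sketch; the example history is made up), the history-aware retriever can be invoked on its own with a follow-up question that only makes sense in context:

```python
from langchain_core.messages import AIMessage, HumanMessage

example_history = [
    HumanMessage(content="What is Task Decomposition?"),
    AIMessage(content="Task decomposition breaks a complex task into smaller steps."),
]

docs = history_aware_retriever.invoke(
    {"input": "What are common ways of doing it?", "chat_history": example_history}
)
# Same output schema as the base retriever: a list of Documents.
print(len(docs))
```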
Now we can build our full QA chain.
As in the [RAG tutorial](/v0.2/docs/tutorials/rag/), we will use [create\_stuff\_documents\_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.stuff.create_stuff_documents_chain.html) to generate a `question_answer_chain`, with input keys `context`, `chat_history`, and `input`. It accepts the retrieved context alongside the conversation history and query to generate an answer.
We build our final `rag_chain` with [create\_retrieval\_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval.create_retrieval_chain.html). This chain applies the `history_aware_retriever` and `question_answer_chain` in sequence, retaining intermediate outputs such as the retrieved context for convenience. It has input keys `input` and `chat_history`, and includes `input`, `chat_history`, `context`, and `answer` in its output.
system_prompt = ( "You are an assistant for question-answering tasks. " "Use the following pieces of retrieved context to answer " "the question. If you don't know the answer, say that you " "don't know. Use three sentences maximum and keep the " "answer concise." "\n\n" "{context}")qa_prompt = ChatPromptTemplate.from_messages( [ ("system", system_prompt), MessagesPlaceholder("chat_history"), ("human", "{input}"), ])question_answer_chain = create_stuff_documents_chain(llm, qa_prompt)rag_chain = create_retrieval_chain(history_aware_retriever, question_answer_chain)
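Before adding memory, the chain can be exercised directly by passing an explicit (here, empty) chat history; this is just a sketch to confirm the input and output keys described above:

```python
response = rag_chain.invoke({"input": "What is Task Decomposition?", "chat_history": []})

print(response.keys())  # expect 'input', 'chat_history', 'context', and 'answer'
print(response["answer"])
```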
### Adding chat history[](#adding-chat-history "Direct link to Adding chat history")
To manage the chat history, we will need:
1. An object for storing the chat history;
2. An object that wraps our chain and manages updates to the chat history.
For these we will use [BaseChatMessageHistory](https://api.python.langchain.com/en/latest/chat_history/langchain_core.chat_history.BaseChatMessageHistory.html) and [RunnableWithMessageHistory](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.history.RunnableWithMessageHistory.html). The latter is a wrapper for an LCEL chain and a `BaseChatMessageHistory` that handles injecting chat history into inputs and updating it after each invocation.
For a detailed walkthrough of how to use these classes together to create a stateful conversational chain, head to the [How to add message history (memory)](/v0.2/docs/how_to/message_history/) LCEL how-to guide.
Below, we implement a simple example of the second option, in which chat histories are stored in a simple dict. LangChain manages memory integrations with [Redis](/v0.2/docs/integrations/memory/redis_chat_message_history/) and other technologies to provide more robust persistence.
Instances of `RunnableWithMessageHistory` manage the chat history for you. They accept a config with a key (`"session_id"` by default) that specifies what conversation history to fetch and prepend to the input, and append the output to the same conversation history. Below is an example:
from langchain_community.chat_message_histories import ChatMessageHistoryfrom langchain_core.chat_history import BaseChatMessageHistoryfrom langchain_core.runnables.history import RunnableWithMessageHistorystore = {}def get_session_history(session_id: str) -> BaseChatMessageHistory: if session_id not in store: store[session_id] = ChatMessageHistory() return store[session_id]conversational_rag_chain = RunnableWithMessageHistory( rag_chain, get_session_history, input_messages_key="input", history_messages_key="chat_history", output_messages_key="answer",)
**API Reference:**[ChatMessageHistory](https://api.python.langchain.com/en/latest/chat_history/langchain_core.chat_history.ChatMessageHistory.html) | [BaseChatMessageHistory](https://api.python.langchain.com/en/latest/chat_history/langchain_core.chat_history.BaseChatMessageHistory.html) | [RunnableWithMessageHistory](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.history.RunnableWithMessageHistory.html)
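The in-memory dict above is fine for demos, but it is lost when the process exits. As a sketch of a more durable setup (assuming a locally running Redis server; the URL is hypothetical), `get_session_history` could return a persistent message history instead:

```python
from langchain_community.chat_message_histories import RedisChatMessageHistory

REDIS_URL = "redis://localhost:6379/0"  # assumed local Redis instance


def get_session_history(session_id: str) -> BaseChatMessageHistory:
    # Messages for each session are persisted in Redis rather than a dict.
    return RedisChatMessageHistory(session_id, url=REDIS_URL)


conversational_rag_chain = RunnableWithMessageHistory(
    rag_chain,
    get_session_history,
    input_messages_key="input",
    history_messages_key="chat_history",
    output_messages_key="answer",
)
```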
conversational_rag_chain.invoke( {"input": "What is Task Decomposition?"}, config={ "configurable": {"session_id": "abc123"} }, # constructs a key "abc123" in `store`.)["answer"]
'Task decomposition involves breaking down a complex task into smaller and simpler steps to make it more manageable and easier to accomplish. This process can be done using techniques like Chain of Thought (CoT) or Tree of Thoughts to guide the model in breaking down tasks effectively. Task decomposition can be facilitated by providing simple prompts to a language model, task-specific instructions, or human inputs.'
conversational_rag_chain.invoke( {"input": "What are common ways of doing it?"}, config={"configurable": {"session_id": "abc123"}},)["answer"]
'Task decomposition can be achieved through various methods, including using techniques like Chain of Thought (CoT) or Tree of Thoughts to guide the model in breaking down tasks effectively. Common ways of task decomposition include providing simple prompts to a language model, task-specific instructions, or human inputs to break down complex tasks into smaller and more manageable steps. Additionally, task decomposition can involve utilizing resources like internet access for information gathering, long-term memory management, and GPT-3.5 powered agents for delegation of simple tasks.'
The conversation history can be inspected in the `store` dict:
from langchain_core.messages import AIMessagefor message in store["abc123"].messages: if isinstance(message, AIMessage): prefix = "AI" else: prefix = "User" print(f"{prefix}: {message.content}\n")
**API Reference:**[AIMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html)
User: What is Task Decomposition?AI: Task decomposition involves breaking down a complex task into smaller and simpler steps to make it more manageable and easier to accomplish. This process can be done using techniques like Chain of Thought (CoT) or Tree of Thoughts to guide the model in breaking down tasks effectively. Task decomposition can be facilitated by providing simple prompts to a language model, task-specific instructions, or human inputs.User: What are common ways of doing it?AI: Task decomposition can be achieved through various methods, including using techniques like Chain of Thought (CoT) or Tree of Thoughts to guide the model in breaking down tasks effectively. Common ways of task decomposition include providing simple prompts to a language model, task-specific instructions, or human inputs to break down complex tasks into smaller and more manageable steps. Additionally, task decomposition can involve utilizing resources like internet access for information gathering, long-term memory management, and GPT-3.5 powered agents for delegation of simple tasks.
### Tying it together[](#tying-it-together "Direct link to Tying it together")
![](/v0.2/assets/images/conversational_retrieval_chain-5c7a96abe29e582bc575a0a0d63f86b0.png)
For convenience, we tie together all of the necessary steps in a single code cell:
import bs4from langchain.chains import create_history_aware_retriever, create_retrieval_chainfrom langchain.chains.combine_documents import create_stuff_documents_chainfrom langchain_chroma import Chromafrom langchain_community.chat_message_histories import ChatMessageHistoryfrom langchain_community.document_loaders import WebBaseLoaderfrom langchain_core.chat_history import BaseChatMessageHistoryfrom langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholderfrom langchain_core.runnables.history import RunnableWithMessageHistoryfrom langchain_openai import ChatOpenAI, OpenAIEmbeddingsfrom langchain_text_splitters import RecursiveCharacterTextSplitterllm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)### Construct retriever ###loader = WebBaseLoader( web_paths=("https://lilianweng.github.io/posts/2023-06-23-agent/",), bs_kwargs=dict( parse_only=bs4.SoupStrainer( class_=("post-content", "post-title", "post-header") ) ),)docs = loader.load()text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)splits = text_splitter.split_documents(docs)vectorstore = Chroma.from_documents(documents=splits, embedding=OpenAIEmbeddings())retriever = vectorstore.as_retriever()### Contextualize question ###contextualize_q_system_prompt = ( "Given a chat history and the latest user question " "which might reference context in the chat history, " "formulate a standalone question which can be understood " "without the chat history. Do NOT answer the question, " "just reformulate it if needed and otherwise return it as is.")contextualize_q_prompt = ChatPromptTemplate.from_messages( [ ("system", contextualize_q_system_prompt), MessagesPlaceholder("chat_history"), ("human", "{input}"), ])history_aware_retriever = create_history_aware_retriever( llm, retriever, contextualize_q_prompt)### Answer question ###system_prompt = ( "You are an assistant for question-answering tasks. " "Use the following pieces of retrieved context to answer " "the question. If you don't know the answer, say that you " "don't know. Use three sentences maximum and keep the " "answer concise." "\n\n" "{context}")qa_prompt = ChatPromptTemplate.from_messages( [ ("system", system_prompt), MessagesPlaceholder("chat_history"), ("human", "{input}"), ])question_answer_chain = create_stuff_documents_chain(llm, qa_prompt)rag_chain = create_retrieval_chain(history_aware_retriever, question_answer_chain)### Statefully manage chat history ###store = {}def get_session_history(session_id: str) -> BaseChatMessageHistory: if session_id not in store: store[session_id] = ChatMessageHistory() return store[session_id]conversational_rag_chain = RunnableWithMessageHistory( rag_chain, get_session_history, input_messages_key="input", history_messages_key="chat_history", output_messages_key="answer",)
**API Reference:**[create\_history\_aware\_retriever](https://api.python.langchain.com/en/latest/chains/langchain.chains.history_aware_retriever.create_history_aware_retriever.html) | [create\_retrieval\_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval.create_retrieval_chain.html) | [create\_stuff\_documents\_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.stuff.create_stuff_documents_chain.html) | [ChatMessageHistory](https://api.python.langchain.com/en/latest/chat_history/langchain_core.chat_history.ChatMessageHistory.html) | [WebBaseLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.web_base.WebBaseLoader.html) | [BaseChatMessageHistory](https://api.python.langchain.com/en/latest/chat_history/langchain_core.chat_history.BaseChatMessageHistory.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) | [MessagesPlaceholder](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.MessagesPlaceholder.html) | [RunnableWithMessageHistory](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.history.RunnableWithMessageHistory.html) | [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html) | [OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html) | [RecursiveCharacterTextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.RecursiveCharacterTextSplitter.html)
conversational_rag_chain.invoke( {"input": "What is Task Decomposition?"}, config={ "configurable": {"session_id": "abc123"} }, # constructs a key "abc123" in `store`.)["answer"]
'Task decomposition involves breaking down a complex task into smaller and simpler steps to make it more manageable. Techniques like Chain of Thought (CoT) and Tree of Thoughts help in decomposing hard tasks into multiple manageable tasks by instructing models to think step by step and explore multiple reasoning possibilities at each step. Task decomposition can be achieved through various methods such as using prompting techniques, task-specific instructions, or human inputs.'
conversational_rag_chain.invoke( {"input": "What are common ways of doing it?"}, config={"configurable": {"session_id": "abc123"}},)["answer"]
'Task decomposition can be done in common ways such as using prompting techniques like Chain of Thought (CoT) or Tree of Thoughts, which instruct models to think step by step and explore multiple reasoning possibilities at each step. Another way is to provide task-specific instructions, such as asking to "Write a story outline" for writing a novel, to guide the decomposition process. Additionally, task decomposition can also involve human inputs to break down complex tasks into smaller and simpler steps.'
Agents[](#agents "Direct link to Agents")
------------------------------------------
Agents leverage the reasoning capabilities of LLMs to make decisions during execution. Using agents allows you to offload some discretion over the retrieval process. Although their behavior is less predictable than that of chains, they offer some advantages in this context:
* Agents generate the input to the retriever directly, without necessarily needing us to explicitly build in contextualization, as we did above;
* Agents can execute multiple retrieval steps in service of a query, or refrain from executing a retrieval step altogether (e.g., in response to a generic greeting from a user).
### Retrieval tool[](#retrieval-tool "Direct link to Retrieval tool")
Agents can access "tools" and manage their execution. In this case, we will convert our retriever into a LangChain tool to be wielded by the agent:
from langchain.tools.retriever import create_retriever_tooltool = create_retriever_tool( retriever, "blog_post_retriever", "Searches and returns excerpts from the Autonomous Agents blog post.",)tools = [tool]
**API Reference:**[create\_retriever\_tool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.create_retriever_tool.html)
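As a quick check (a sketch), the tool can be invoked directly to see the raw string that will be handed back to the agent:

```python
print(tool.invoke("task decomposition"))
```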
### Agent constructor[](#agent-constructor "Direct link to Agent constructor")
Now that we have defined the tools and the LLM, we can create the agent. We will be using [LangGraph](/v0.2/docs/concepts/#langgraph) to construct the agent. Currently we are using a high-level interface to construct the agent, but the nice thing about LangGraph is that this high-level interface is backed by a low-level, highly controllable API in case you want to modify the agent logic.
from langgraph.prebuilt import create_react_agentagent_executor = create_react_agent(llm, tools)
We can now try it out. Note that so far it is not stateful (we still need to add in memory):
from langchain_core.messages import HumanMessagequery = "What is Task Decomposition?"for s in agent_executor.stream( {"messages": [HumanMessage(content=query)]},): print(s) print("----")
**API Reference:**[HumanMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.human.HumanMessage.html)
Error in LangChainTracer.on_tool_end callback: TracerException("Found chain run at ID 5cd28d13-88dd-4eac-a465-3770ac27eff6, but expected {'tool'} run.")``````output{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_TbhPPPN05GKi36HLeaN4QM90', 'function': {'arguments': '{"query":"Task Decomposition"}', 'name': 'blog_post_retriever'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 19, 'prompt_tokens': 68, 'total_tokens': 87}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-2e60d910-879a-4a2a-b1e9-6a6c5c7d7ebc-0', tool_calls=[{'name': 'blog_post_retriever', 'args': {'query': 'Task Decomposition'}, 'id': 'call_TbhPPPN05GKi36HLeaN4QM90'}])]}}----{'tools': {'messages': [ToolMessage(content='Fig. 1. Overview of a LLM-powered autonomous agent system.\nComponent One: Planning#\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\nTask Decomposition#\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the model’s thinking process.\n\nFig. 1. Overview of a LLM-powered autonomous agent system.\nComponent One: Planning#\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\nTask Decomposition#\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the model’s thinking process.\n\nTree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\nTask decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\\n1.", "What are the subgoals for achieving XYZ?", (2) by using task-specific instructions; e.g. "Write a story outline." for writing a novel, or (3) with human inputs.\n\nTree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\nTask decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\\n1.", "What are the subgoals for achieving XYZ?", (2) by using task-specific instructions; e.g. "Write a story outline." 
for writing a novel, or (3) with human inputs.', name='blog_post_retriever', tool_call_id='call_TbhPPPN05GKi36HLeaN4QM90')]}}----{'agent': {'messages': [AIMessage(content='Task decomposition is a technique used to break down complex tasks into smaller and simpler steps. This approach helps in transforming big tasks into multiple manageable tasks, making it easier for autonomous agents to handle and interpret the thinking process. One common method for task decomposition is the Chain of Thought (CoT) technique, where models are instructed to "think step by step" to decompose hard tasks. Another extension of CoT is the Tree of Thoughts, which explores multiple reasoning possibilities at each step by creating a tree structure of multiple thoughts per step. Task decomposition can be facilitated through various methods such as using simple prompts, task-specific instructions, or human inputs.', response_metadata={'token_usage': {'completion_tokens': 130, 'prompt_tokens': 636, 'total_tokens': 766}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-3ef17638-65df-4030-a7fe-795e6da91c69-0')]}}----
LangGraph comes with built-in persistence, so we don't need to use ChatMessageHistory! Rather, we can pass in a checkpointer to our LangGraph agent directly.
Distinct conversations are managed by specifying a key for a conversation thread in the config dict, as shown below.
from langgraph.checkpoint.sqlite import SqliteSavermemory = SqliteSaver.from_conn_string(":memory:")agent_executor = create_react_agent(llm, tools, checkpointer=memory)
This is all we need to construct a conversational RAG agent.
Let's observe its behavior. Note that if we input a query that does not require a retrieval step, the agent does not execute one:
config = {"configurable": {"thread_id": "abc123"}}for s in agent_executor.stream( {"messages": [HumanMessage(content="Hi! I'm bob")]}, config=config): print(s) print("----")
{'agent': {'messages': [AIMessage(content='Hello Bob! How can I assist you today?', response_metadata={'token_usage': {'completion_tokens': 11, 'prompt_tokens': 67, 'total_tokens': 78}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-1cd17562-18aa-4839-b41b-403b17a0fc20-0')]}}----
Further, if we input a query that does require a retrieval step, the agent generates the input to the tool:
query = "What is Task Decomposition?"for s in agent_executor.stream( {"messages": [HumanMessage(content=query)]}, config=config): print(s) print("----")
Error in LangChainTracer.on_tool_end callback: TracerException("Found chain run at ID c54381c0-c5d9-495a-91a0-aca4ae755663, but expected {'tool'} run.")``````output{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_rg7zKTE5e0ICxVSslJ1u9LMg', 'function': {'arguments': '{"query":"Task Decomposition"}', 'name': 'blog_post_retriever'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 19, 'prompt_tokens': 91, 'total_tokens': 110}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-122bf097-7ff1-49aa-b430-e362b51354ad-0', tool_calls=[{'name': 'blog_post_retriever', 'args': {'query': 'Task Decomposition'}, 'id': 'call_rg7zKTE5e0ICxVSslJ1u9LMg'}])]}}----{'tools': {'messages': [ToolMessage(content='Fig. 1. Overview of a LLM-powered autonomous agent system.\nComponent One: Planning#\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\nTask Decomposition#\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the model’s thinking process.\n\nFig. 1. Overview of a LLM-powered autonomous agent system.\nComponent One: Planning#\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\nTask Decomposition#\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the model’s thinking process.\n\nTree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\nTask decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\\n1.", "What are the subgoals for achieving XYZ?", (2) by using task-specific instructions; e.g. "Write a story outline." for writing a novel, or (3) with human inputs.\n\nTree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\nTask decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\\n1.", "What are the subgoals for achieving XYZ?", (2) by using task-specific instructions; e.g. "Write a story outline." 
for writing a novel, or (3) with human inputs.', name='blog_post_retriever', tool_call_id='call_rg7zKTE5e0ICxVSslJ1u9LMg')]}}----{'agent': {'messages': [AIMessage(content='Task decomposition is a technique used to break down complex tasks into smaller and simpler steps. This approach helps in managing and solving intricate problems by dividing them into more manageable components. By decomposing tasks, agents or models can better understand the steps involved and plan their actions accordingly. Techniques like Chain of Thought (CoT) and Tree of Thoughts are examples of methods that enhance model performance on complex tasks by breaking them down into smaller steps.', response_metadata={'token_usage': {'completion_tokens': 87, 'prompt_tokens': 659, 'total_tokens': 746}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-b9166386-83e5-4b82-9a4b-590e5fa76671-0')]}}----
Above, instead of inserting our query verbatim into the tool, the agent stripped unnecessary words like "what" and "is".
This same principle allows the agent to use the context of the conversation when necessary:
query = "What according to the blog post are common ways of doing it? redo the search"for s in agent_executor.stream( {"messages": [HumanMessage(content=query)]}, config=config): print(s) print("----")
{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_6kbxTU5CDWLmF9mrvR7bWSkI', 'function': {'arguments': '{"query":"Common ways of task decomposition"}', 'name': 'blog_post_retriever'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 21, 'prompt_tokens': 769, 'total_tokens': 790}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-2d2c8327-35cd-484a-b8fd-52436657c2d8-0', tool_calls=[{'name': 'blog_post_retriever', 'args': {'query': 'Common ways of task decomposition'}, 'id': 'call_6kbxTU5CDWLmF9mrvR7bWSkI'}])]}}----``````outputError in LangChainTracer.on_tool_end callback: TracerException("Found chain run at ID 29553415-e0f4-41a9-8921-ba489e377f68, but expected {'tool'} run.")``````output{'tools': {'messages': [ToolMessage(content='Fig. 1. Overview of a LLM-powered autonomous agent system.\nComponent One: Planning#\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\nTask Decomposition#\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the model’s thinking process.\n\nFig. 1. Overview of a LLM-powered autonomous agent system.\nComponent One: Planning#\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\nTask Decomposition#\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the model’s thinking process.\n\nTree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\nTask decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\\n1.", "What are the subgoals for achieving XYZ?", (2) by using task-specific instructions; e.g. "Write a story outline." for writing a novel, or (3) with human inputs.\n\nTree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\nTask decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\\n1.", "What are the subgoals for achieving XYZ?", (2) by using task-specific instructions; e.g. "Write a story outline." 
for writing a novel, or (3) with human inputs.', name='blog_post_retriever', tool_call_id='call_6kbxTU5CDWLmF9mrvR7bWSkI')]}}----{'agent': {'messages': [AIMessage(content='Common ways of task decomposition include:\n1. Using LLM with simple prompting like "Steps for XYZ" or "What are the subgoals for achieving XYZ?"\n2. Using task-specific instructions, for example, "Write a story outline" for writing a novel.\n3. Involving human inputs in the task decomposition process.', response_metadata={'token_usage': {'completion_tokens': 67, 'prompt_tokens': 1339, 'total_tokens': 1406}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-9ad14cde-ca75-4238-a868-f865e0fc50dd-0')]}}----
Note that the agent was able to infer that "it" in our query refers to "task decomposition" and, as a result, generated a reasonable search query: in this case, "common ways of task decomposition".
### Tying it together[](#tying-it-together-1 "Direct link to Tying it together")
For convenience, we tie together all of the necessary steps in a single code cell:
import bs4from langchain.tools.retriever import create_retriever_toolfrom langchain_chroma import Chromafrom langchain_community.document_loaders import WebBaseLoaderfrom langchain_openai import ChatOpenAI, OpenAIEmbeddingsfrom langchain_text_splitters import RecursiveCharacterTextSplitterfrom langgraph.checkpoint.sqlite import SqliteSavermemory = SqliteSaver.from_conn_string(":memory:")llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)### Construct retriever ###loader = WebBaseLoader( web_paths=("https://lilianweng.github.io/posts/2023-06-23-agent/",), bs_kwargs=dict( parse_only=bs4.SoupStrainer( class_=("post-content", "post-title", "post-header") ) ),)docs = loader.load()text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)splits = text_splitter.split_documents(docs)vectorstore = Chroma.from_documents(documents=splits, embedding=OpenAIEmbeddings())retriever = vectorstore.as_retriever()### Build retriever tool ###tool = create_retriever_tool( retriever, "blog_post_retriever", "Searches and returns excerpts from the Autonomous Agents blog post.",)tools = [tool]agent_executor = create_react_agent(llm, tools, checkpointer=memory)
**API Reference:**[create\_retriever\_tool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.create_retriever_tool.html) | [WebBaseLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.web_base.WebBaseLoader.html) | [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html) | [OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html) | [RecursiveCharacterTextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.RecursiveCharacterTextSplitter.html)
Next steps[](#next-steps "Direct link to Next steps")
------------------------------------------------------
We've covered the steps to build a basic conversational Q&A application:
* We used chains to build a predictable application that generates search queries for each user input;
* We used agents to build an application that "decides" when and how to generate search queries.
To explore different types of retrievers and retrieval strategies, visit the [retrievers](/v0.2/docs/how_to/#retrievers) section of the how-to guides.
For a detailed walkthrough of LangChain's conversation memory abstractions, visit the [How to add message history (memory)](/v0.2/docs/how_to/message_history/) LCEL page.
To learn more about agents, head to the [Agents Modules](/v0.2/docs/tutorials/agents/).
https://python.langchain.com/v0.2/docs/how_to/qa_sources/
How to get your RAG application to return sources
=================================================
Often in Q&A applications it's important to show users the sources that were used to generate the answer. The simplest way to do this is for the chain to return the Documents that were retrieved in each generation.
We'll work off of the Q&A app we built over the [LLM Powered Autonomous Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) blog post by Lilian Weng in the [RAG tutorial](/v0.2/docs/tutorials/rag/).
We will cover two approaches:
1. Using the built-in [create\_retrieval\_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval.create_retrieval_chain.html), which returns sources by default;
2. Using a simple [LCEL](/v0.2/docs/concepts/#langchain-expression-language-lcel) implementation, to show the operating principle.
Setup[](#setup "Direct link to Setup")
---------------------------------------
### Dependencies[](#dependencies "Direct link to Dependencies")
We'll use OpenAI embeddings and a Chroma vector store in this walkthrough, but everything shown here works with any [Embeddings](/v0.2/docs/concepts/#embedding-models), [VectorStore](/v0.2/docs/concepts/#vectorstores) or [Retriever](/v0.2/docs/concepts/#retrievers).
We'll use the following packages:
%pip install --upgrade --quiet langchain langchain-community langchainhub langchain-openai langchain-chroma bs4
We need to set the environment variable `OPENAI_API_KEY`, which can be done directly or loaded from a `.env` file like so:
import getpassimport osos.environ["OPENAI_API_KEY"] = getpass.getpass()# import dotenv# dotenv.load_dotenv()
### LangSmith[](#langsmith "Direct link to LangSmith")
Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls. As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent. The best way to do this is with [LangSmith](https://smith.langchain.com).
Note that LangSmith is not needed, but it is helpful. If you do want to use LangSmith, after you sign up at the link above, make sure to set your environment variables to start logging traces:
os.environ["LANGCHAIN_TRACING_V2"] = "true"os.environ["LANGCHAIN_API_KEY"] = getpass.getpass()
Using `create_retrieval_chain`[](#using-create_retrieval_chain "Direct link to using-create_retrieval_chain")
--------------------------------------------------------------------------------------------------------------
Let's first select an LLM:
* OpenAI
* Anthropic
* Azure
* Google
* Cohere
* FireworksAI
* Groq
* MistralAI
* TogetherAI
pip install -qU langchain-openai
import getpassimport osos.environ["OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAIllm = ChatOpenAI(model="gpt-3.5-turbo-0125")
pip install -qU langchain-anthropic
import getpassimport osos.environ["ANTHROPIC_API_KEY"] = getpass.getpass()from langchain_anthropic import ChatAnthropicllm = ChatAnthropic(model="claude-3-sonnet-20240229")
pip install -qU langchain-openai
import getpassimport osos.environ["AZURE_OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import AzureChatOpenAIllm = AzureChatOpenAI( azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"], openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],)
pip install -qU langchain-google-vertexai
import getpassimport osos.environ["GOOGLE_API_KEY"] = getpass.getpass()from langchain_google_vertexai import ChatVertexAIllm = ChatVertexAI(model="gemini-pro")
pip install -qU langchain-cohere
import getpassimport osos.environ["COHERE_API_KEY"] = getpass.getpass()from langchain_cohere import ChatCoherellm = ChatCohere(model="command-r")
pip install -qU langchain-fireworks
import getpassimport osos.environ["FIREWORKS_API_KEY"] = getpass.getpass()from langchain_fireworks import ChatFireworksllm = ChatFireworks(model="accounts/fireworks/models/mixtral-8x7b-instruct")
pip install -qU langchain-groq
import getpassimport osos.environ["GROQ_API_KEY"] = getpass.getpass()from langchain_groq import ChatGroqllm = ChatGroq(model="llama3-8b-8192")
pip install -qU langchain-mistralai
import getpassimport osos.environ["MISTRAL_API_KEY"] = getpass.getpass()from langchain_mistralai import ChatMistralAIllm = ChatMistralAI(model="mistral-large-latest")
pip install -qU langchain-openai
import getpassimport osos.environ["TOGETHER_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAIllm = ChatOpenAI( base_url="https://api.together.xyz/v1", api_key=os.environ["TOGETHER_API_KEY"], model="mistralai/Mixtral-8x7B-Instruct-v0.1",)
Here is the Q&A app with sources we built over the [LLM Powered Autonomous Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) blog post by Lilian Weng in the [RAG tutorial](/v0.2/docs/tutorials/rag/):
import bs4from langchain.chains import create_retrieval_chainfrom langchain.chains.combine_documents import create_stuff_documents_chainfrom langchain_chroma import Chromafrom langchain_community.document_loaders import WebBaseLoaderfrom langchain_core.prompts import ChatPromptTemplatefrom langchain_openai import OpenAIEmbeddingsfrom langchain_text_splitters import RecursiveCharacterTextSplitter# 1. Load, chunk and index the contents of the blog to create a retriever.loader = WebBaseLoader( web_paths=("https://lilianweng.github.io/posts/2023-06-23-agent/",), bs_kwargs=dict( parse_only=bs4.SoupStrainer( class_=("post-content", "post-title", "post-header") ) ),)docs = loader.load()text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)splits = text_splitter.split_documents(docs)vectorstore = Chroma.from_documents(documents=splits, embedding=OpenAIEmbeddings())retriever = vectorstore.as_retriever()# 2. Incorporate the retriever into a question-answering chain.system_prompt = ( "You are an assistant for question-answering tasks. " "Use the following pieces of retrieved context to answer " "the question. If you don't know the answer, say that you " "don't know. Use three sentences maximum and keep the " "answer concise." "\n\n" "{context}")prompt = ChatPromptTemplate.from_messages( [ ("system", system_prompt), ("human", "{input}"), ])question_answer_chain = create_stuff_documents_chain(llm, prompt)rag_chain = create_retrieval_chain(retriever, question_answer_chain)
**API Reference:**[create\_retrieval\_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval.create_retrieval_chain.html) | [create\_stuff\_documents\_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.stuff.create_stuff_documents_chain.html) | [WebBaseLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.web_base.WebBaseLoader.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) | [OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html) | [RecursiveCharacterTextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.RecursiveCharacterTextSplitter.html)
result = rag_chain.invoke({"input": "What is Task Decomposition?"})
Note that `result` is a dict with keys `"input"`, `"context"`, and `"answer"`:
result
{'input': 'What is Task Decomposition?', 'context': [Document(page_content='Fig. 1. Overview of a LLM-powered autonomous agent system.\nComponent One: Planning#\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\nTask Decomposition#\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the model’s thinking process.', metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/'}), Document(page_content='Tree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\nTask decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\\n1.", "What are the subgoals for achieving XYZ?", (2) by using task-specific instructions; e.g. "Write a story outline." for writing a novel, or (3) with human inputs.', metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/'}), Document(page_content='Resources:\n1. Internet access for searches and information gathering.\n2. Long Term memory management.\n3. GPT-3.5 powered Agents for delegation of simple tasks.\n4. File output.\n\nPerformance Evaluation:\n1. Continuously review and analyze your actions to ensure you are performing to the best of your abilities.\n2. Constructively self-criticize your big-picture behavior constantly.\n3. Reflect on past decisions and strategies to refine your approach.\n4. Every command has a cost, so be smart and efficient. Aim to complete tasks in the least number of steps.', metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/'}), Document(page_content="(3) Task execution: Expert models execute on the specific tasks and log results.\nInstruction:\n\nWith the input and the inference results, the AI assistant needs to describe the process and results. The previous stages can be formed as - User Input: {{ User Input }}, Task Planning: {{ Tasks }}, Model Selection: {{ Model Assignment }}, Task Execution: {{ Predictions }}. You must first answer the user's request in a straightforward manner. Then describe the task process and show your analysis and model inference results to the user in the first person. If inference results contain a file path, must tell the user the complete file path.", metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/'})], 'answer': 'Task decomposition involves breaking down a complex task into smaller and simpler steps. This process helps agents or models handle challenging tasks by dividing them into more manageable subtasks. Techniques like Chain of Thought and Tree of Thoughts are used to decompose tasks into multiple steps for better problem-solving.'}
Here, `"context"` contains the sources that the LLM used in generating the response in `"answer"`.
Custom LCEL implementation[](#custom-lcel-implementation "Direct link to Custom LCEL implementation")
------------------------------------------------------------------------------------------------------
Below we construct a chain similar to those built by `create_retrieval_chain`. It works by building up a dict:
1. Starting with a dict with the input query, add the retrieved docs in the `"context"` key;
2. Feed both the query and context into a RAG chain and add the result to the dict.
from langchain_core.output_parsers import StrOutputParserfrom langchain_core.runnables import RunnablePassthroughdef format_docs(docs): return "\n\n".join(doc.page_content for doc in docs)rag_chain_from_docs = ( RunnablePassthrough.assign(context=(lambda x: format_docs(x["context"]))) | prompt | llm | StrOutputParser())retrieve_docs = (lambda x: x["input"]) | retrieverchain = RunnablePassthrough.assign(context=retrieve_docs).assign( answer=rag_chain_from_docs)chain.invoke({"input": "What is Task Decomposition"})
**API Reference:**[StrOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.string.StrOutputParser.html) | [RunnablePassthrough](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html)
{'input': 'What is Task Decomposition', 'context': [Document(page_content='Fig. 1. Overview of a LLM-powered autonomous agent system.\nComponent One: Planning#\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\nTask Decomposition#\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the model’s thinking process.', metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/'}), Document(page_content='Tree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\nTask decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\\n1.", "What are the subgoals for achieving XYZ?", (2) by using task-specific instructions; e.g. "Write a story outline." for writing a novel, or (3) with human inputs.', metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/'}), Document(page_content='The AI assistant can parse user input to several tasks: [{"task": task, "id", task_id, "dep": dependency_task_ids, "args": {"text": text, "image": URL, "audio": URL, "video": URL}}]. The "dep" field denotes the id of the previous task which generates a new resource that the current task relies on. A special tag "-task_id" refers to the generated text image, audio and video in the dependency task with id as task_id. The task MUST be selected from the following options: {{ Available Task List }}. There is a logical relationship between tasks, please note their order. If the user input can\'t be parsed, you need to reply empty JSON. Here are several cases for your reference: {{ Demonstrations }}. The chat history is recorded as {{ Chat History }}. From this chat history, you can find the path of the user-mentioned resources for your task planning.', metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/'}), Document(page_content='Fig. 11. Illustration of how HuggingGPT works. (Image source: Shen et al. 2023)\nThe system comprises of 4 stages:\n(1) Task planning: LLM works as the brain and parses the user requests into multiple tasks. There are four attributes associated with each task: task type, ID, dependencies, and arguments. They use few-shot examples to guide LLM to do task parsing and planning.\nInstruction:', metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/'})], 'answer': 'Task decomposition involves breaking down complex tasks into smaller and simpler steps to make them more manageable for autonomous agents or models. This process can be achieved by techniques like Chain of Thought (CoT) or Tree of Thoughts, which guide the model to think step by step or explore multiple reasoning possibilities at each step. Task decomposition can be done through simple prompting with language models, task-specific instructions, or human inputs.'}
tip
Check out the [LangSmith trace](https://smith.langchain.com/public/0cb42685-e29e-4280-a503-bef2014d7ba2/r)
https://python.langchain.com/v0.2/docs/how_to/serialization/
How to save and load LangChain objects
======================================
LangChain classes implement standard methods for serialization. Serializing LangChain objects using these methods confers some advantages:
* Secrets, such as API keys, are separated from other parameters and can be loaded back to the object on de-serialization;
* De-serialization is kept compatible across package versions, so objects that were serialized with one version of LangChain can be properly de-serialized with another.
To save and load LangChain objects using this system, use the `dumpd`, `dumps`, `load`, and `loads` functions in the [load module](https://api.python.langchain.com/en/latest/core_api_reference.html#module-langchain_core.load) of `langchain-core`. These functions support JSON and JSON-serializable objects.
All LangChain objects that inherit from [Serializable](https://api.python.langchain.com/en/latest/load/langchain_core.load.serializable.Serializable.html) are JSON-serializable. Examples include [messages](https://api.python.langchain.com/en/latest/core_api_reference.html#module-langchain_core.messages), [document objects](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) (e.g., as returned from [retrievers](/v0.2/docs/concepts/#retrievers)), and most [Runnables](/v0.2/docs/concepts/#langchain-expression-language-lcel), such as chat models, retrievers, and [chains](/v0.2/docs/how_to/sequence/) implemented with the LangChain Expression Language.
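As a quick illustration (a minimal sketch, separate from the chain walkthrough below), a single message can be serialized like this:

```python
from langchain_core.load import dumps
from langchain_core.messages import HumanMessage

# Messages inherit from Serializable, so they dump to a plain JSON string.
print(dumps(HumanMessage(content="Hello, world"), pretty=True))
```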
Below we walk through an example with a simple [LLM chain](/v0.2/docs/tutorials/llm_chain/).
caution
De-serialization using `load` and `loads` can instantiate any serializable LangChain object. Only use this feature with trusted inputs!
De-serialization is a beta feature and is subject to change.
from langchain_core.load import dumpd, dumps, load, loadsfrom langchain_core.prompts import ChatPromptTemplatefrom langchain_openai import ChatOpenAIprompt = ChatPromptTemplate.from_messages( [ ("system", "Translate the following into {language}:"), ("user", "{text}"), ],)llm = ChatOpenAI(model="gpt-3.5-turbo-0125", api_key="llm-api-key")chain = prompt | llm
**API Reference:**[dumpd](https://api.python.langchain.com/en/latest/load/langchain_core.load.dump.dumpd.html) | [dumps](https://api.python.langchain.com/en/latest/load/langchain_core.load.dump.dumps.html) | [load](https://api.python.langchain.com/en/latest/load/langchain_core.load.load.load.html) | [loads](https://api.python.langchain.com/en/latest/load/langchain_core.load.load.loads.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) | [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html)
Saving objects[](#saving-objects "Direct link to Saving objects")
------------------------------------------------------------------
### To json[](#to-json "Direct link to To json")
string_representation = dumps(chain, pretty=True)print(string_representation[:500])
{ "lc": 1, "type": "constructor", "id": [ "langchain", "schema", "runnable", "RunnableSequence" ], "kwargs": { "first": { "lc": 1, "type": "constructor", "id": [ "langchain", "prompts", "chat", "ChatPromptTemplate" ], "kwargs": { "input_variables": [ "language", "text" ], "messages": [ { "lc": 1, "type": "constructor",
### To a json-serializable Python dict[](#to-a-json-serializable-python-dict "Direct link to To a json-serializable Python dict")
dict_representation = dumpd(chain)print(type(dict_representation))
<class 'dict'>
### To disk[](#to-disk "Direct link to To disk")
import jsonwith open("/tmp/chain.json", "w") as fp: json.dump(string_representation, fp)
Note that the API key is withheld from the serialized representations. Parameters that are considered secret are specified by the `.lc_secrets` attribute of the LangChain object:
chain.last.lc_secrets
{'openai_api_key': 'OPENAI_API_KEY'}
Loading objects[](#loading-objects "Direct link to Loading objects")
---------------------------------------------------------------------
Specifying `secrets_map` in `load` and `loads` will load the corresponding secrets onto the de-serialized LangChain object.
### From string[](#from-string "Direct link to From string")
chain = loads(string_representation, secrets_map={"OPENAI_API_KEY": "llm-api-key"})
### From dict[](#from-dict "Direct link to From dict")
chain = load(dict_representation, secrets_map={"OPENAI_API_KEY": "llm-api-key"})
### From disk[](#from-disk "Direct link to From disk")
with open("/tmp/chain.json", "r") as fp: chain = loads(json.load(fp), secrets_map={"OPENAI_API_KEY": "llm-api-key"})
Note that we recover the API key specified at the start of the guide:
chain.last.openai_api_key.get_secret_value()
'llm-api-key'
https://python.langchain.com/v0.2/docs/how_to/qa_per_user/
How to do per-user retrieval
============================
This guide demonstrates how to configure runtime properties of a retrieval chain. An example application is to limit the documents available to a retriever based on the user.
When building a retrieval app, you often have to build it with multiple users in mind. You may be storing data not just for one user but for many different users, and they should not be able to see each other's data. That means you need to be able to configure your retrieval chain to retrieve only certain information. This generally involves two steps.
**Step 1: Make sure the retriever you are using supports multiple users**
At the moment, there is no unified flag or filter for this in LangChain. Rather, each vectorstore and retriever may have its own, and it may be called different things (namespaces, multi-tenancy, etc.). For vectorstores, this is generally exposed as a keyword argument that is passed in during `similarity_search`. By reading the documentation or source code, figure out whether the retriever you are using supports multiple users, and, if so, how to use it.
Note: adding documentation and/or support for multiple users for retrievers that do not support it (or document it) is a GREAT way to contribute to LangChain
**Step 2: Add that parameter as a configurable field for the chain**
This will let you easily call the chain and configure any relevant flags at runtime. See [this documentation](/v0.2/docs/how_to/configure/) for more information on configuration.
Now, at runtime, you can call this chain with the configurable field.
Code Example[](#code-example "Direct link to Code Example")
------------------------------------------------------------
Let's see a concrete example of what this looks like in code. We will use Pinecone for this example.
To configure Pinecone, set the following environment variable:
* `PINECONE_API_KEY`: Your Pinecone API key
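For example, it can be set interactively at runtime, a minimal sketch following the same pattern these guides use for other API keys:

```python
import getpass
import os

# Prompt for the key only if it isn't already present in the environment.
if "PINECONE_API_KEY" not in os.environ:
    os.environ["PINECONE_API_KEY"] = getpass.getpass("Pinecone API key: ")
```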
from langchain_openai import OpenAIEmbeddingsfrom langchain_pinecone import PineconeVectorStoreembeddings = OpenAIEmbeddings()vectorstore = PineconeVectorStore(index_name="test-example", embedding=embeddings)vectorstore.add_texts(["i worked at kensho"], namespace="harrison")vectorstore.add_texts(["i worked at facebook"], namespace="ankush")
**API Reference:**[OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html) | [PineconeVectorStore](https://api.python.langchain.com/en/latest/vectorstores/langchain_pinecone.vectorstores.PineconeVectorStore.html)
['ce15571e-4e2f-44c9-98df-7e83f6f63095']
The Pinecone `namespace` kwarg can be used to separate documents:
# This will only get documents for Ankushvectorstore.as_retriever(search_kwargs={"namespace": "ankush"}).get_relevant_documents( "where did i work?")
[Document(page_content='i worked at facebook')]
# This will only get documents for Harrisonvectorstore.as_retriever( search_kwargs={"namespace": "harrison"}).get_relevant_documents("where did i work?")
[Document(page_content='i worked at kensho')]
We can now create the chain that we will use for question answering.
Let's first select an LLM.
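Any chat model can be used here; as a minimal example (mirroring the model used elsewhere in these guides), we instantiate an OpenAI model:

```python
import getpass
import os

from langchain_openai import ChatOpenAI

if "OPENAI_API_KEY" not in os.environ:
    os.environ["OPENAI_API_KEY"] = getpass.getpass()

# Any chat model works here; gpt-3.5-turbo matches the other guides in this section.
llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)
```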
This is a basic question-answering chain setup:
from langchain_core.output_parsers import StrOutputParserfrom langchain_core.prompts import ChatPromptTemplatefrom langchain_core.runnables import ( ConfigurableField, RunnablePassthrough,)template = """Answer the question based only on the following context:{context}Question: {question}"""prompt = ChatPromptTemplate.from_template(template)retriever = vectorstore.as_retriever()
**API Reference:**[StrOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.string.StrOutputParser.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) | [ConfigurableField](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.utils.ConfigurableField.html) | [RunnablePassthrough](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html)
Here we mark the retriever as having a configurable field. All vectorstore retrievers have `search_kwargs` as a field. This is just a dictionary with vectorstore-specific fields.
This will let us pass in a value for `search_kwargs` when invoking the chain.
configurable_retriever = retriever.configurable_fields( search_kwargs=ConfigurableField( id="search_kwargs", name="Search Kwargs", description="The search kwargs to use", ))
We can now create the chain using our configurable retriever:
chain = ( {"context": configurable_retriever, "question": RunnablePassthrough()} | prompt | llm | StrOutputParser())
We can now invoke the chain with configurable options. `search_kwargs` is the id of the configurable field, and the value is the search kwargs to use for Pinecone:
chain.invoke( "where did the user work?", config={"configurable": {"search_kwargs": {"namespace": "harrison"}}},)
'The user worked at Kensho.'
chain.invoke( "where did the user work?", config={"configurable": {"search_kwargs": {"namespace": "ankush"}}},)
'The user worked at Facebook.'
For more vectorstore implementations for multi-user, please refer to specific pages, such as [Milvus](/v0.2/docs/integrations/vectorstores/milvus/).
https://python.langchain.com/v0.2/docs/how_to/qa_streaming/
How to stream results from your RAG application
===============================================
This guide explains how to stream results from a RAG application. It covers streaming tokens from the final output as well as intermediate steps of a chain (e.g., from query re-writing).
We'll work off of the Q&A app with sources we built over the [LLM Powered Autonomous Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) blog post by Lilian Weng in the [RAG tutorial](/v0.2/docs/tutorials/rag/).
Setup[](#setup "Direct link to Setup")
---------------------------------------
### Dependencies[](#dependencies "Direct link to Dependencies")
We'll use OpenAI embeddings and a Chroma vector store in this walkthrough, but everything shown here works with any [Embeddings](/v0.2/docs/concepts/#embedding-models), [VectorStore](/v0.2/docs/concepts/#vectorstores) or [Retriever](/v0.2/docs/concepts/#retrievers).
We'll use the following packages:
%pip install --upgrade --quiet langchain langchain-community langchainhub langchain-openai langchain-chroma bs4
We need to set the environment variable `OPENAI_API_KEY`, which can be done directly or loaded from a `.env` file like so:
import getpassimport osos.environ["OPENAI_API_KEY"] = getpass.getpass()# import dotenv# dotenv.load_dotenv()
### LangSmith[](#langsmith "Direct link to LangSmith")
Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls. As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent. The best way to do this is with [LangSmith](https://smith.langchain.com).
Note that LangSmith is not needed, but it is helpful. If you do want to use LangSmith, after you sign up at the link above, make sure to set your environment variables to start logging traces:
os.environ["LANGCHAIN_TRACING_V2"] = "true"os.environ["LANGCHAIN_API_KEY"] = getpass.getpass()
RAG chain[](#rag-chain "Direct link to RAG chain")
---------------------------------------------------
Let's first select an LLM:
* OpenAI
* Anthropic
* Azure
* Google
* Cohere
* FireworksAI
* Groq
* MistralAI
* TogetherAI
pip install -qU langchain-openai
import getpassimport osos.environ["OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAIllm = ChatOpenAI(model="gpt-3.5-turbo-0125")
pip install -qU langchain-anthropic
import getpassimport osos.environ["ANTHROPIC_API_KEY"] = getpass.getpass()from langchain_anthropic import ChatAnthropicllm = ChatAnthropic(model="claude-3-sonnet-20240229")
pip install -qU langchain-openai
import getpassimport osos.environ["AZURE_OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import AzureChatOpenAIllm = AzureChatOpenAI( azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"], openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],)
pip install -qU langchain-google-vertexai
import getpassimport osos.environ["GOOGLE_API_KEY"] = getpass.getpass()from langchain_google_vertexai import ChatVertexAIllm = ChatVertexAI(model="gemini-pro")
pip install -qU langchain-cohere
import getpassimport osos.environ["COHERE_API_KEY"] = getpass.getpass()from langchain_cohere import ChatCoherellm = ChatCohere(model="command-r")
pip install -qU langchain-fireworks
import getpassimport osos.environ["FIREWORKS_API_KEY"] = getpass.getpass()from langchain_fireworks import ChatFireworksllm = ChatFireworks(model="accounts/fireworks/models/mixtral-8x7b-instruct")
pip install -qU langchain-groq
import getpassimport osos.environ["GROQ_API_KEY"] = getpass.getpass()from langchain_groq import ChatGroqllm = ChatGroq(model="llama3-8b-8192")
pip install -qU langchain-mistralai
import getpassimport osos.environ["MISTRAL_API_KEY"] = getpass.getpass()from langchain_mistralai import ChatMistralAIllm = ChatMistralAI(model="mistral-large-latest")
pip install -qU langchain-openai
import getpassimport osos.environ["TOGETHER_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAIllm = ChatOpenAI( base_url="https://api.together.xyz/v1", api_key=os.environ["TOGETHER_API_KEY"], model="mistralai/Mixtral-8x7B-Instruct-v0.1",)
Here is the Q&A app with sources we built over the [LLM Powered Autonomous Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) blog post by Lilian Weng in the [RAG tutorial](/v0.2/docs/tutorials/rag/):
import bs4from langchain.chains import create_retrieval_chainfrom langchain.chains.combine_documents import create_stuff_documents_chainfrom langchain_chroma import Chromafrom langchain_community.document_loaders import WebBaseLoaderfrom langchain_core.prompts import ChatPromptTemplatefrom langchain_openai import OpenAIEmbeddingsfrom langchain_text_splitters import RecursiveCharacterTextSplitter# 1. Load, chunk and index the contents of the blog to create a retriever.loader = WebBaseLoader( web_paths=("https://lilianweng.github.io/posts/2023-06-23-agent/",), bs_kwargs=dict( parse_only=bs4.SoupStrainer( class_=("post-content", "post-title", "post-header") ) ),)docs = loader.load()text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)splits = text_splitter.split_documents(docs)vectorstore = Chroma.from_documents(documents=splits, embedding=OpenAIEmbeddings())retriever = vectorstore.as_retriever()# 2. Incorporate the retriever into a question-answering chain.system_prompt = ( "You are an assistant for question-answering tasks. " "Use the following pieces of retrieved context to answer " "the question. If you don't know the answer, say that you " "don't know. Use three sentences maximum and keep the " "answer concise." "\n\n" "{context}")prompt = ChatPromptTemplate.from_messages( [ ("system", system_prompt), ("human", "{input}"), ])question_answer_chain = create_stuff_documents_chain(llm, prompt)rag_chain = create_retrieval_chain(retriever, question_answer_chain)
**API Reference:**[create\_retrieval\_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval.create_retrieval_chain.html) | [create\_stuff\_documents\_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.stuff.create_stuff_documents_chain.html) | [WebBaseLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.web_base.WebBaseLoader.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) | [OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html) | [RecursiveCharacterTextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.RecursiveCharacterTextSplitter.html)
Streaming final outputs[](#streaming-final-outputs "Direct link to Streaming final outputs")
---------------------------------------------------------------------------------------------
The chain constructed by `create_retrieval_chain` returns a dict with keys `"input"`, `"context"`, and `"answer"`. The `.stream` method will by default stream each key in a sequence.
Note that here only the `"answer"` key is streamed token-by-token, as the other components, such as retrieval, do not support token-level streaming.
for chunk in rag_chain.stream({"input": "What is Task Decomposition?"}): print(chunk)
{'input': 'What is Task Decomposition?'}{'context': [Document(page_content='Fig. 1. Overview of a LLM-powered autonomous agent system.\nComponent One: Planning#\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\nTask Decomposition#\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the model’s thinking process.', metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/'}), Document(page_content='Tree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\nTask decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\\n1.", "What are the subgoals for achieving XYZ?", (2) by using task-specific instructions; e.g. "Write a story outline." for writing a novel, or (3) with human inputs.', metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/'}), Document(page_content='Resources:\n1. Internet access for searches and information gathering.\n2. Long Term memory management.\n3. GPT-3.5 powered Agents for delegation of simple tasks.\n4. File output.\n\nPerformance Evaluation:\n1. Continuously review and analyze your actions to ensure you are performing to the best of your abilities.\n2. Constructively self-criticize your big-picture behavior constantly.\n3. Reflect on past decisions and strategies to refine your approach.\n4. Every command has a cost, so be smart and efficient. Aim to complete tasks in the least number of steps.', metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/'}), Document(page_content="(3) Task execution: Expert models execute on the specific tasks and log results.\nInstruction:\n\nWith the input and the inference results, the AI assistant needs to describe the process and results. The previous stages can be formed as - User Input: {{ User Input }}, Task Planning: {{ Tasks }}, Model Selection: {{ Model Assignment }}, Task Execution: {{ Predictions }}. You must first answer the user's request in a straightforward manner. Then describe the task process and show your analysis and model inference results to the user in the first person. 
If inference results contain a file path, must tell the user the complete file path.", metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/'})]}{'answer': ''}{'answer': 'Task'}{'answer': ' decomposition'}{'answer': ' involves'}{'answer': ' breaking'}{'answer': ' down'}{'answer': ' complex'}{'answer': ' tasks'}{'answer': ' into'}{'answer': ' smaller'}{'answer': ' and'}{'answer': ' simpler'}{'answer': ' steps'}{'answer': ' to'}{'answer': ' make'}{'answer': ' them'}{'answer': ' more'}{'answer': ' manageable'}{'answer': '.'}{'answer': ' This'}{'answer': ' process'}{'answer': ' can'}{'answer': ' be'}{'answer': ' facilitated'}{'answer': ' by'}{'answer': ' techniques'}{'answer': ' like'}{'answer': ' Chain'}{'answer': ' of'}{'answer': ' Thought'}{'answer': ' ('}{'answer': 'Co'}{'answer': 'T'}{'answer': ')'}{'answer': ' and'}{'answer': ' Tree'}{'answer': ' of'}{'answer': ' Thoughts'}{'answer': ','}{'answer': ' which'}{'answer': ' help'}{'answer': ' agents'}{'answer': ' plan'}{'answer': ' and'}{'answer': ' execute'}{'answer': ' tasks'}{'answer': ' effectively'}{'answer': ' by'}{'answer': ' dividing'}{'answer': ' them'}{'answer': ' into'}{'answer': ' sub'}{'answer': 'goals'}{'answer': ' or'}{'answer': ' multiple'}{'answer': ' reasoning'}{'answer': ' possibilities'}{'answer': '.'}{'answer': ' Task'}{'answer': ' decomposition'}{'answer': ' can'}{'answer': ' be'}{'answer': ' initiated'}{'answer': ' through'}{'answer': ' simple'}{'answer': ' prompts'}{'answer': ','}{'answer': ' task'}{'answer': '-specific'}{'answer': ' instructions'}{'answer': ','}{'answer': ' or'}{'answer': ' human'}{'answer': ' inputs'}{'answer': ' to'}{'answer': ' guide'}{'answer': ' the'}{'answer': ' agent'}{'answer': ' in'}{'answer': ' achieving'}{'answer': ' its'}{'answer': ' goals'}{'answer': ' efficiently'}{'answer': '.'}{'answer': ''}
We are free to process chunks as they are streamed out. If we just want to stream the answer tokens, for example, we can select chunks with the corresponding key:
for chunk in rag_chain.stream({"input": "What is Task Decomposition?"}): if answer_chunk := chunk.get("answer"): print(f"{answer_chunk}|", end="")
Task| decomposition| is| a| technique| used| to| break| down| complex| tasks| into| smaller| and| more| manageable| steps|.| This| process| helps| agents| or| models| handle| intricate| tasks| by| dividing| them| into| simpler| sub|tasks|.| By| decom|posing| tasks|,| the| model| can| effectively| plan| and| execute| each| step| towards| achieving| the| overall| goal|.|
More simply, we can use the [.pick](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.pick) method to select only the desired key:
chain = rag_chain.pick("answer")

for chunk in chain.stream({"input": "What is Task Decomposition?"}):
    print(f"{chunk}|", end="")
|Task| decomposition| involves| breaking| down| complex| tasks| into| smaller| and| simpler| steps| to| make| them| more| manageable| for| an| agent| or| model| to| handle|.| This| process| helps| in| planning| and| executing| tasks| efficiently| by| dividing| them| into| a| series| of| sub|goals| or| actions|.| Task| decomposition| can| be| achieved| through| techniques| like| Chain| of| Thought| (|Co|T|)| or| Tree| of| Thoughts|,| which| enhance| model| performance| on| intricate| tasks| by| guiding| them| through| step|-by|-step| thinking| processes|.||
Streaming intermediate steps[](#streaming-intermediate-steps "Direct link to Streaming intermediate steps")
------------------------------------------------------------------------------------------------------------
Suppose we want to stream not only the final outputs of the chain, but also some intermediate steps. As an example let's take our [Conversational RAG](/v0.2/docs/tutorials/qa_chat_history/) chain. Here we reformulate the user question before passing it to the retriever. This reformulated question is not returned as part of the final output. We could modify our chain to return the new question, but for demonstration purposes we'll leave it as is.
from langchain.chains import create_history_aware_retriever
from langchain_core.prompts import MessagesPlaceholder

### Contextualize question ###
contextualize_q_system_prompt = (
    "Given a chat history and the latest user question "
    "which might reference context in the chat history, "
    "formulate a standalone question which can be understood "
    "without the chat history. Do NOT answer the question, "
    "just reformulate it if needed and otherwise return it as is."
)
contextualize_q_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", contextualize_q_system_prompt),
        MessagesPlaceholder("chat_history"),
        ("human", "{input}"),
    ]
)
contextualize_q_llm = llm.with_config(tags=["contextualize_q_llm"])
history_aware_retriever = create_history_aware_retriever(
    contextualize_q_llm, retriever, contextualize_q_prompt
)

### Answer question ###
system_prompt = (
    "You are an assistant for question-answering tasks. "
    "Use the following pieces of retrieved context to answer "
    "the question. If you don't know the answer, say that you "
    "don't know. Use three sentences maximum and keep the "
    "answer concise."
    "\n\n"
    "{context}"
)
qa_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system_prompt),
        MessagesPlaceholder("chat_history"),
        ("human", "{input}"),
    ]
)
question_answer_chain = create_stuff_documents_chain(llm, qa_prompt)
rag_chain = create_retrieval_chain(history_aware_retriever, question_answer_chain)
**API Reference:**[create\_history\_aware\_retriever](https://api.python.langchain.com/en/latest/chains/langchain.chains.history_aware_retriever.create_history_aware_retriever.html) | [MessagesPlaceholder](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.MessagesPlaceholder.html)
Note that above we use `.with_config` to assign a tag to the LLM that is used for the question re-phrasing step. This is not necessary but will make it more convenient to stream output from that specific step.
To demonstrate, we will pass in an artificial message history:
Human: What is task decomposition?AI: Task decomposition involves breaking up a complex task into smaller and simpler steps.
We then ask a follow-up question: "What are some common ways of doing it?" Before the retrieval step, our `history_aware_retriever` will rephrase this question using the conversation's context so that the retrieval remains meaningful.
To stream intermediate output, we recommend use of the async `.astream_events` method. This method will stream output from all "events" in the chain, and can be quite verbose. We can filter using tags, event types, and other criteria, as we do here.
Below we show a typical `.astream_events` loop, where we pass in the chain input and emit desired results. See the [API reference](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.astream_events) and [streaming guide](/v0.2/docs/how_to/streaming/) for more detail.
first_question = "What is task decomposition?"
first_answer = (
    "Task decomposition involves breaking up "
    "a complex task into smaller and simpler "
    "steps."
)
follow_up_question = "What are some common ways of doing it?"

chat_history = [
    ("human", first_question),
    ("ai", first_answer),
]

async for event in rag_chain.astream_events(
    {
        "input": follow_up_question,
        "chat_history": chat_history,
    },
    version="v1",
):
    if (
        event["event"] == "on_chat_model_stream"
        and "contextualize_q_llm" in event["tags"]
    ):
        ai_message_chunk = event["data"]["chunk"]
        print(f"{ai_message_chunk.content}|", end="")
|What| are| some| typical| methods| used| for| task| decomposition|?||
Here we recover, token-by-token, the query that is passed into the retriever given our question "What are some common ways of doing it?"
If we wanted to get our retrieved docs, we could filter on name "Retriever":
async for event in rag_chain.astream_events(
    {
        "input": follow_up_question,
        "chat_history": chat_history,
    },
    version="v1",
):
    if event["name"] == "Retriever":
        print(event)
        print()
{'event': 'on_retriever_start', 'name': 'Retriever', 'run_id': '6834097c-07fe-42f5-a566-a4780af4d1d0', 'tags': ['seq:step:4', 'Chroma', 'OpenAIEmbeddings'], 'metadata': {}, 'data': {'input': {'query': 'What are some typical methods used for task decomposition?'}}}{'event': 'on_retriever_end', 'name': 'Retriever', 'run_id': '6834097c-07fe-42f5-a566-a4780af4d1d0', 'tags': ['seq:step:4', 'Chroma', 'OpenAIEmbeddings'], 'metadata': {}, 'data': {'input': {'query': 'What are some typical methods used for task decomposition?'}, 'output': {'documents': [Document(page_content='Tree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\nTask decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\\n1.", "What are the subgoals for achieving XYZ?", (2) by using task-specific instructions; e.g. "Write a story outline." for writing a novel, or (3) with human inputs.', metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/'}), Document(page_content='Fig. 1. Overview of a LLM-powered autonomous agent system.\nComponent One: Planning#\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\nTask Decomposition#\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the model’s thinking process.', metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/'}), Document(page_content='Resources:\n1. Internet access for searches and information gathering.\n2. Long Term memory management.\n3. GPT-3.5 powered Agents for delegation of simple tasks.\n4. File output.\n\nPerformance Evaluation:\n1. Continuously review and analyze your actions to ensure you are performing to the best of your abilities.\n2. Constructively self-criticize your big-picture behavior constantly.\n3. Reflect on past decisions and strategies to refine your approach.\n4. Every command has a cost, so be smart and efficient. Aim to complete tasks in the least number of steps.', metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/'}), Document(page_content='Fig. 9. Comparison of MIPS algorithms, measured in recall@10. (Image source: Google Blog, 2020)\nCheck more MIPS algorithms and performance comparison in ann-benchmarks.com.\nComponent Three: Tool Use#\nTool use is a remarkable and distinguishing characteristic of human beings. We create, modify and utilize external objects to do things that go beyond our physical and cognitive limits. Equipping LLMs with external tools can significantly extend the model capabilities.', metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/'})]}}}
For more on how to stream intermediate steps check out the [streaming guide](/v0.2/docs/how_to/streaming/).
| null
https://python.langchain.com/v0.2/docs/how_to/response_metadata/ |
Response metadata
=================
Many model providers include some metadata in their chat generation responses. This metadata can be accessed via the `AIMessage.response_metadata: Dict` attribute. Depending on the model provider and model configuration, this can contain information like [token counts](/v0.2/docs/how_to/chat_token_usage_tracking/), [logprobs](/v0.2/docs/how_to/logprobs/), and more.
Here's what the response metadata looks like for a few different providers:
OpenAI[](#openai "Direct link to OpenAI")
------------------------------------------
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4-turbo")
msg = llm.invoke([("human", "What's the oldest known example of cuneiform")])
msg.response_metadata
**API Reference:**[ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html)
{'token_usage': {'completion_tokens': 164, 'prompt_tokens': 17, 'total_tokens': 181}, 'model_name': 'gpt-4-turbo', 'system_fingerprint': 'fp_76f018034d', 'finish_reason': 'stop', 'logprobs': None}
Anthropic[](#anthropic "Direct link to Anthropic")
---------------------------------------------------
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-3-sonnet-20240229")
msg = llm.invoke([("human", "What's the oldest known example of cuneiform")])
msg.response_metadata
**API Reference:**[ChatAnthropic](https://api.python.langchain.com/en/latest/chat_models/langchain_anthropic.chat_models.ChatAnthropic.html)
{'id': 'msg_01CzQyD7BX8nkhDNfT1QqvEp', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 17, 'output_tokens': 296}}
Google VertexAI[](#google-vertexai "Direct link to Google VertexAI")
---------------------------------------------------------------------
from langchain_google_vertexai import ChatVertexAI

llm = ChatVertexAI(model="gemini-pro")
msg = llm.invoke([("human", "What's the oldest known example of cuneiform")])
msg.response_metadata
{'is_blocked': False, 'safety_ratings': [{'category': 'HARM_CATEGORY_HATE_SPEECH', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_HARASSMENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}], 'citation_metadata': None, 'usage_metadata': {'prompt_token_count': 10, 'candidates_token_count': 30, 'total_token_count': 40}}
Bedrock (Anthropic)[](#bedrock-anthropic "Direct link to Bedrock (Anthropic)")
-------------------------------------------------------------------------------
from langchain_aws import ChatBedrock

llm = ChatBedrock(model_id="anthropic.claude-v2")
msg = llm.invoke([("human", "What's the oldest known example of cuneiform")])
msg.response_metadata
{'model_id': 'anthropic.claude-v2', 'usage': {'prompt_tokens': 19, 'completion_tokens': 371, 'total_tokens': 390}}
MistralAI[](#mistralai "Direct link to MistralAI")
---------------------------------------------------
from langchain_mistralai import ChatMistralAI

llm = ChatMistralAI()
msg = llm.invoke([("human", "What's the oldest known example of cuneiform")])
msg.response_metadata
**API Reference:**[ChatMistralAI](https://api.python.langchain.com/en/latest/chat_models/langchain_mistralai.chat_models.ChatMistralAI.html)
{'token_usage': {'prompt_tokens': 19, 'total_tokens': 141, 'completion_tokens': 122}, 'model': 'mistral-small', 'finish_reason': 'stop'}
Groq[](#groq "Direct link to Groq")
------------------------------------
from langchain_groq import ChatGroq

llm = ChatGroq()
msg = llm.invoke([("human", "What's the oldest known example of cuneiform")])
msg.response_metadata
**API Reference:**[ChatGroq](https://api.python.langchain.com/en/latest/chat_models/langchain_groq.chat_models.ChatGroq.html)
{'token_usage': {'completion_time': 0.243, 'completion_tokens': 132, 'prompt_time': 0.022, 'prompt_tokens': 22, 'queue_time': None, 'total_time': 0.265, 'total_tokens': 154}, 'model_name': 'mixtral-8x7b-32768', 'system_fingerprint': 'fp_7b44c65f25', 'finish_reason': 'stop', 'logprobs': None}
TogetherAI[](#togetherai "Direct link to TogetherAI")
------------------------------------------------------
import os

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    base_url="https://api.together.xyz/v1",
    api_key=os.environ["TOGETHER_API_KEY"],
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",
)
msg = llm.invoke([("human", "What's the oldest known example of cuneiform")])
msg.response_metadata
**API Reference:**[ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html)
{'token_usage': {'completion_tokens': 208, 'prompt_tokens': 20, 'total_tokens': 228}, 'model_name': 'mistralai/Mixtral-8x7B-Instruct-v0.1', 'system_fingerprint': None, 'finish_reason': 'eos', 'logprobs': None}
FireworksAI[](#fireworksai "Direct link to FireworksAI")
---------------------------------------------------------
from langchain_fireworks import ChatFireworks

llm = ChatFireworks(model="accounts/fireworks/models/mixtral-8x7b-instruct")
msg = llm.invoke([("human", "What's the oldest known example of cuneiform")])
msg.response_metadata
**API Reference:**[ChatFireworks](https://api.python.langchain.com/en/latest/chat_models/langchain_fireworks.chat_models.ChatFireworks.html)
{'token_usage': {'prompt_tokens': 19, 'total_tokens': 219, 'completion_tokens': 200}, 'model_name': 'accounts/fireworks/models/mixtral-8x7b-instruct', 'system_fingerprint': '', 'finish_reason': 'length', 'logprobs': None}
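As the examples above show, the exact keys differ by provider (e.g. `token_usage` vs. `usage`). If you want to log token counts uniformly across providers, a small helper can normalize the common cases. The sketch below is not an official API: `extract_token_usage` is a hypothetical helper, and it only checks the two shapes shown above.

```python
from typing import Optional

from langchain_core.messages import AIMessage


def extract_token_usage(msg: AIMessage) -> Optional[dict]:
    """Best-effort extraction of token counts from response_metadata.

    Handles the OpenAI-style "token_usage" and Anthropic/Bedrock-style "usage"
    shapes shown above. Returns None for providers that use other shapes
    (e.g. VertexAI's "usage_metadata").
    """
    metadata = msg.response_metadata or {}
    usage = metadata.get("token_usage") or metadata.get("usage")
    if not usage:
        return None
    return {
        # Prefer OpenAI-style keys, fall back to Anthropic-style ones.
        "input_tokens": usage.get("prompt_tokens", usage.get("input_tokens")),
        "output_tokens": usage.get("completion_tokens", usage.get("output_tokens")),
        "total_tokens": usage.get("total_tokens"),
    }


# Example, assuming `msg` is any of the AIMessages produced above:
# print(extract_token_usage(msg))
```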
| null
https://python.langchain.com/v0.2/docs/how_to/recursive_json_splitter/ |
How to split JSON data
======================
This json splitter splits json data while allowing control over chunk sizes. It traverses json data depth first and builds smaller json chunks. It attempts to keep nested json objects whole but will split them if needed to keep chunks between a min\_chunk\_size and the max\_chunk\_size.
If a value is not a nested json but rather a very large string, that string will not be split. If you need a hard cap on chunk size, consider composing this splitter with a recursive text splitter on those chunks (a sketch is included at the end of this guide). There is also an optional pre-processing step for splitting lists: they are first converted to a json dict and then split as such.
1. How the text is split: json value.
2. How the chunk size is measured: by number of characters.
%pip install -qU langchain-text-splitters
First we load some json data:
import json

import requests

# This is a large nested json object and will be loaded as a python dict
json_data = requests.get("https://api.smith.langchain.com/openapi.json").json()
Basic usage[](#basic-usage "Direct link to Basic usage")
---------------------------------------------------------
Specify `max_chunk_size` to constrain chunk sizes:
from langchain_text_splitters import RecursiveJsonSplitter

splitter = RecursiveJsonSplitter(max_chunk_size=300)
**API Reference:**[RecursiveJsonSplitter](https://api.python.langchain.com/en/latest/json/langchain_text_splitters.json.RecursiveJsonSplitter.html)
To obtain json chunks, use the `.split_json` method:
# Recursively split json data - If you need to access/manipulate the smaller json chunks
json_chunks = splitter.split_json(json_data=json_data)

for chunk in json_chunks[:3]:
    print(chunk)
{'openapi': '3.1.0', 'info': {'title': 'LangSmith', 'version': '0.1.0'}, 'servers': [{'url': 'https://api.smith.langchain.com', 'description': 'LangSmith API endpoint.'}]}{'paths': {'/api/v1/sessions/{session_id}': {'get': {'tags': ['tracer-sessions'], 'summary': 'Read Tracer Session', 'description': 'Get a specific session.', 'operationId': 'read_tracer_session_api_v1_sessions__session_id__get'}}}}{'paths': {'/api/v1/sessions/{session_id}': {'get': {'security': [{'API Key': []}, {'Tenant ID': []}, {'Bearer Auth': []}]}}}}
To obtain LangChain [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) objects, use the `.create_documents` method:
# The splitter can also output documents
docs = splitter.create_documents(texts=[json_data])

for doc in docs[:3]:
    print(doc)
page_content='{"openapi": "3.1.0", "info": {"title": "LangSmith", "version": "0.1.0"}, "servers": [{"url": "https://api.smith.langchain.com", "description": "LangSmith API endpoint."}]}'page_content='{"paths": {"/api/v1/sessions/{session_id}": {"get": {"tags": ["tracer-sessions"], "summary": "Read Tracer Session", "description": "Get a specific session.", "operationId": "read_tracer_session_api_v1_sessions__session_id__get"}}}}'page_content='{"paths": {"/api/v1/sessions/{session_id}": {"get": {"security": [{"API Key": []}, {"Tenant ID": []}, {"Bearer Auth": []}]}}}}'
Or use `.split_text` to obtain string content directly:
texts = splitter.split_text(json_data=json_data)print(texts[0])print(texts[1])
{"openapi": "3.1.0", "info": {"title": "LangSmith", "version": "0.1.0"}, "servers": [{"url": "https://api.smith.langchain.com", "description": "LangSmith API endpoint."}]}{"paths": {"/api/v1/sessions/{session_id}": {"get": {"tags": ["tracer-sessions"], "summary": "Read Tracer Session", "description": "Get a specific session.", "operationId": "read_tracer_session_api_v1_sessions__session_id__get"}}}}
How to manage chunk sizes from list content[](#how-to-manage-chunk-sizes-from-list-content "Direct link to How to manage chunk sizes from list content")
---------------------------------------------------------------------------------------------------------------------------------------------------------
Note that one of the chunks in this example is larger than the specified `max_chunk_size` of 300. Reviewing that larger chunk, we see that it contains a list object:
print([len(text) for text in texts][:10])print()print(texts[3])
[171, 231, 126, 469, 210, 213, 237, 271, 191, 232]{"paths": {"/api/v1/sessions/{session_id}": {"get": {"parameters": [{"name": "session_id", "in": "path", "required": true, "schema": {"type": "string", "format": "uuid", "title": "Session Id"}}, {"name": "include_stats", "in": "query", "required": false, "schema": {"type": "boolean", "default": false, "title": "Include Stats"}}, {"name": "accept", "in": "header", "required": false, "schema": {"anyOf": [{"type": "string"}, {"type": "null"}], "title": "Accept"}}]}}}}
The json splitter by default does not split lists.
Specify `convert_lists=True` to preprocess the json, converting list content to dicts with `index:item` as `key:val` pairs:
texts = splitter.split_text(json_data=json_data, convert_lists=True)
Let's look at the chunk sizes again. Now they are all under the specified maximum:
print([len(text) for text in texts][:10])
[176, 236, 141, 203, 212, 221, 210, 213, 242, 291]
The list has been converted to a dict, but retains all the needed contextual information even if split into many chunks:
print(texts[1])
{"paths": {"/api/v1/sessions/{session_id}": {"get": {"tags": {"0": "tracer-sessions"}, "summary": "Read Tracer Session", "description": "Get a specific session.", "operationId": "read_tracer_session_api_v1_sessions__session_id__get"}}}}
# We can also look at the documents
docs[1]
Document(page_content='{"paths": {"/api/v1/sessions/{session_id}": {"get": {"tags": ["tracer-sessions"], "summary": "Read Tracer Session", "description": "Get a specific session.", "operationId": "read_tracer_session_api_v1_sessions__session_id__get"}}}}')
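As noted at the top of this guide, very large string values are not split by the json splitter. If you need a hard cap on chunk size, one option is to run a `RecursiveCharacterTextSplitter` over the json chunks. The following is a minimal sketch of that composition, using illustrative parameter values:

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

# First produce json-aware chunks, then enforce a hard character cap.
json_splitter = RecursiveJsonSplitter(max_chunk_size=300)
text_splitter = RecursiveCharacterTextSplitter(chunk_size=300, chunk_overlap=0)

json_docs = json_splitter.create_documents(texts=[json_data])
capped_docs = text_splitter.split_documents(json_docs)

# With the default separators (ending in ""), no chunk should exceed 300 characters.
print(max(len(doc.page_content) for doc in capped_docs))
```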
| null
https://python.langchain.com/v0.2/docs/how_to/sequence/ |
How to chain runnables
======================
Prerequisites
This guide assumes familiarity with the following concepts:
* [LangChain Expression Language (LCEL)](/v0.2/docs/concepts/#langchain-expression-language)
* [Prompt templates](/v0.2/docs/concepts/#prompt-templates)
* [Chat models](/v0.2/docs/concepts/#chat-models)
* [Output parser](/v0.2/docs/concepts/#output-parsers)
One point about [LangChain Expression Language](/v0.2/docs/concepts/#langchain-expression-language) is that any two runnables can be "chained" together into sequences. The output of the previous runnable's `.invoke()` call is passed as input to the next runnable. This can be done using the pipe operator (`|`), or the more explicit `.pipe()` method, which does the same thing.
The resulting [`RunnableSequence`](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableSequence.html) is itself a runnable, which means it can be invoked, streamed, or further chained just like any other runnable. Advantages of chaining runnables in this way are efficient streaming (the sequence will stream output as soon as it is available), and debugging and tracing with tools like [LangSmith](/v0.2/docs/how_to/debugging/).
The pipe operator: `|`[](#the-pipe-operator- "Direct link to the-pipe-operator-")
----------------------------------------------------------------------------------
To show off how this works, let's go through an example. We'll walk through a common pattern in LangChain: using a [prompt template](/v0.2/docs/how_to/#prompt-templates) to format input into a [chat model](/v0.2/docs/how_to/#chat-models), and finally converting the chat message output into a string with an [output parser](/v0.2/docs/how_to/#output-parsers).
* OpenAI
* Anthropic
* Azure
* Google
* Cohere
* FireworksAI
* Groq
* MistralAI
* TogetherAI
pip install -qU langchain-openai
import getpassimport osos.environ["OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAImodel = ChatOpenAI(model="gpt-3.5-turbo-0125")
pip install -qU langchain-anthropic
import getpassimport osos.environ["ANTHROPIC_API_KEY"] = getpass.getpass()from langchain_anthropic import ChatAnthropicmodel = ChatAnthropic(model="claude-3-sonnet-20240229")
pip install -qU langchain-openai
import getpassimport osos.environ["AZURE_OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import AzureChatOpenAImodel = AzureChatOpenAI( azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"], openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],)
pip install -qU langchain-google-vertexai
import getpassimport osos.environ["GOOGLE_API_KEY"] = getpass.getpass()from langchain_google_vertexai import ChatVertexAImodel = ChatVertexAI(model="gemini-pro")
pip install -qU langchain-cohere
import getpassimport osos.environ["COHERE_API_KEY"] = getpass.getpass()from langchain_cohere import ChatCoheremodel = ChatCohere(model="command-r")
pip install -qU langchain-fireworks
import getpassimport osos.environ["FIREWORKS_API_KEY"] = getpass.getpass()from langchain_fireworks import ChatFireworksmodel = ChatFireworks(model="accounts/fireworks/models/mixtral-8x7b-instruct")
pip install -qU langchain-groq
import getpassimport osos.environ["GROQ_API_KEY"] = getpass.getpass()from langchain_groq import ChatGroqmodel = ChatGroq(model="llama3-8b-8192")
pip install -qU langchain-mistralai
import getpassimport osos.environ["MISTRAL_API_KEY"] = getpass.getpass()from langchain_mistralai import ChatMistralAImodel = ChatMistralAI(model="mistral-large-latest")
pip install -qU langchain-openai
import getpassimport osos.environ["TOGETHER_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAImodel = ChatOpenAI( base_url="https://api.together.xyz/v1", api_key=os.environ["TOGETHER_API_KEY"], model="mistralai/Mixtral-8x7B-Instruct-v0.1",)
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")

chain = prompt | model | StrOutputParser()
**API Reference:**[StrOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.string.StrOutputParser.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html)
Prompts and models are both runnable, and the output type from the prompt call is the same as the input type of the chat model, so we can chain them together. We can then invoke the resulting sequence like any other runnable:
chain.invoke({"topic": "bears"})
"Here's a bear joke for you:\n\nWhy did the bear dissolve in water?\nBecause it was a polar bear!"
### Coercion[](#coercion "Direct link to Coercion")
We can even combine this chain with more runnables to create another chain. This may involve some input/output formatting using other types of runnables, depending on the required inputs and outputs of the chain components.
For example, let's say we wanted to compose the joke generating chain with another chain that evaluates whether or not the generated joke was funny.
We would need to be careful with how we format the input into the next chain. In the below example, the dict in the chain is automatically parsed and converted into a [`RunnableParallel`](/v0.2/docs/how_to/parallel/), which runs all of its values in parallel and returns a dict with the results.
This happens to be the same format the next prompt template expects. Here it is in action:
from langchain_core.output_parsers import StrOutputParser

analysis_prompt = ChatPromptTemplate.from_template("is this a funny joke? {joke}")

composed_chain = {"joke": chain} | analysis_prompt | model | StrOutputParser()

composed_chain.invoke({"topic": "bears"})
**API Reference:**[StrOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.string.StrOutputParser.html)
'Haha, that\'s a clever play on words! Using "polar" to imply the bear dissolved or became polar/polarized when put in water. Not the most hilarious joke ever, but it has a cute, groan-worthy pun that makes it mildly amusing. I appreciate a good pun or wordplay joke.'
Functions will also be coerced into runnables, so you can add custom logic to your chains too. The below chain results in the same logical flow as before:
composed_chain_with_lambda = (
    chain
    | (lambda input: {"joke": input})
    | analysis_prompt
    | model
    | StrOutputParser()
)

composed_chain_with_lambda.invoke({"topic": "beets"})
"Haha, that's a cute and punny joke! I like how it plays on the idea of beets blushing or turning red like someone blushing. Food puns can be quite amusing. While not a total knee-slapper, it's a light-hearted, groan-worthy dad joke that would make me chuckle and shake my head. Simple vegetable humor!"
However, keep in mind that using functions like this may interfere with operations like streaming. See [this section](/v0.2/docs/how_to/functions/) for more information.
The `.pipe()` method[](#the-pipe-method "Direct link to the-pipe-method")
--------------------------------------------------------------------------
We could also compose the same sequence using the `.pipe()` method. Here's what that looks like:
from langchain_core.runnables import RunnableParallel

composed_chain_with_pipe = (
    RunnableParallel({"joke": chain})
    .pipe(analysis_prompt)
    .pipe(model)
    .pipe(StrOutputParser())
)

composed_chain_with_pipe.invoke({"topic": "battlestar galactica"})
**API Reference:**[RunnableParallel](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableParallel.html)
"I cannot reproduce any copyrighted material verbatim, but I can try to analyze the humor in the joke you provided without quoting it directly.\n\nThe joke plays on the idea that the Cylon raiders, who are the antagonists in the Battlestar Galactica universe, failed to locate the human survivors after attacking their home planets (the Twelve Colonies) due to using an outdated and poorly performing operating system (Windows Vista) for their targeting systems.\n\nThe humor stems from the juxtaposition of a futuristic science fiction setting with a relatable real-world frustration – the use of buggy, slow, or unreliable software or technology. It pokes fun at the perceived inadequacies of Windows Vista, which was widely criticized for its performance issues and other problems when it was released.\n\nBy attributing the Cylons' failure to locate the humans to their use of Vista, the joke creates an amusing and unexpected connection between a fictional advanced race of robots and a familiar technological annoyance experienced by many people in the real world.\n\nOverall, the joke relies on incongruity and relatability to generate humor, but without reproducing any copyrighted material directly."
Or the abbreviated:
composed_chain_with_pipe = RunnableParallel({"joke": chain}).pipe(
    analysis_prompt, model, StrOutputParser()
)
Related[](#related "Direct link to Related")
---------------------------------------------
* [Streaming](/v0.2/docs/how_to/streaming/): Check out the streaming guide to understand the streaming behavior of a chain
| null
https://python.langchain.com/v0.2/docs/how_to/recursive_text_splitter/ |
How to recursively split text by characters
===========================================
This text splitter is the recommended one for generic text. It is parameterized by a list of characters. It tries to split on them in order until the chunks are small enough. The default list is `["\n\n", "\n", " ", ""]`. This has the effect of trying to keep all paragraphs (and then sentences, and then words) together as long as possible, as those would generically seem to be the strongest semantically related pieces of text.
1. How the text is split: by list of characters.
2. How the chunk size is measured: by number of characters.
Below we show example usage.
To obtain the string content directly, use `.split_text`.
To create LangChain [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) objects (e.g., for use in downstream tasks), use `.create_documents`.
%pip install -qU langchain-text-splitters
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Load example document
with open("state_of_the_union.txt") as f:
    state_of_the_union = f.read()

text_splitter = RecursiveCharacterTextSplitter(
    # Set a really small chunk size, just to show.
    chunk_size=100,
    chunk_overlap=20,
    length_function=len,
    is_separator_regex=False,
)
texts = text_splitter.create_documents([state_of_the_union])
print(texts[0])
print(texts[1])
**API Reference:**[RecursiveCharacterTextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.RecursiveCharacterTextSplitter.html)
page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and'page_content='of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.'
text_splitter.split_text(state_of_the_union)[:2]
['Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and', 'of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.']
Let's go through the parameters set above for `RecursiveCharacterTextSplitter`:
* `chunk_size`: The maximum size of a chunk, where size is determined by the `length_function`.
* `chunk_overlap`: Target overlap between chunks. Overlapping chunks helps to mitigate loss of information when context is divided between chunks.
* `length_function`: Function determining the chunk size (a token-based example is sketched after this list).
* `is_separator_regex`: Whether the separator list (defaulting to `["\n\n", "\n", " ", ""]`) should be interpreted as regex.
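As a concrete example of swapping in a different `length_function`, below is a minimal sketch that measures chunk size in tokens rather than characters, assuming the `tiktoken` package is installed. (LangChain also provides a `from_tiktoken_encoder` classmethod on text splitters for this purpose.)

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")


def tiktoken_len(text: str) -> int:
    # Measure length in tokens instead of characters.
    return len(enc.encode(text))


token_splitter = RecursiveCharacterTextSplitter(
    chunk_size=100,  # now interpreted as 100 tokens
    chunk_overlap=20,
    length_function=tiktoken_len,
)
token_texts = token_splitter.create_documents([state_of_the_union])
```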
Splitting text from languages without word boundaries[](#splitting-text-from-languages-without-word-boundaries "Direct link to Splitting text from languages without word boundaries")
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Some writing systems do not have [word boundaries](https://en.wikipedia.org/wiki/Category:Writing_systems_without_word_boundaries), for example Chinese, Japanese, and Thai. Splitting text with the default separator list of `["\n\n", "\n", " ", ""]` can cause words to be split between chunks. To keep words together, you can override the list of separators to include additional punctuation:
* Add ASCII full-stop "`.`", [Unicode fullwidth](https://en.wikipedia.org/wiki/Halfwidth_and_Fullwidth_Forms_\(Unicode_block\)) full stop "`.`" (used in Chinese text), and [ideographic full stop](https://en.wikipedia.org/wiki/CJK_Symbols_and_Punctuation) "`。`" (used in Japanese and Chinese)
* Add [Zero-width space](https://en.wikipedia.org/wiki/Zero-width_space) used in Thai, Myanmar, Khmer, and Japanese.
* Add ASCII comma "`,`", Unicode fullwidth comma "`,`", and Unicode ideographic comma "`、`"
text_splitter = RecursiveCharacterTextSplitter(
    separators=[
        "\n\n",
        "\n",
        " ",
        ".",
        ",",
        "\u200b",  # Zero-width space
        "\uff0c",  # Fullwidth comma
        "\u3001",  # Ideographic comma
        "\uff0e",  # Fullwidth full stop
        "\u3002",  # Ideographic full stop
        "",
    ],
    # Existing args
)
| null
https://python.langchain.com/v0.2/docs/how_to/self_query/ |
How to do "self-querying" retrieval
===================================
info
Head to [Integrations](/v0.2/docs/integrations/retrievers/self_query/) for documentation on vector stores with built-in support for self-querying.
A self-querying retriever is one that, as the name suggests, has the ability to query itself. Specifically, given any natural language query, the retriever uses a query-constructing LLM chain to write a structured query and then applies that structured query to its underlying VectorStore. This allows the retriever to not only use the user-input query for semantic similarity comparison with the contents of stored documents but to also extract filters from the user query on the metadata of stored documents and to execute those filters.
![](/v0.2/assets/images/self_querying-26ac0fc8692e85bc3cd9b8640509404f.jpg)
Get started[](#get-started "Direct link to Get started")
---------------------------------------------------------
For demonstration purposes we'll use a `Chroma` vector store. We've created a small demo set of documents that contain summaries of movies.
**Note:** The self-query retriever requires you to have `lark` package installed.
%pip install --upgrade --quiet lark langchain-chroma
from langchain_chroma import Chromafrom langchain_core.documents import Documentfrom langchain_openai import OpenAIEmbeddingsdocs = [ Document( page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"}, ), Document( page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2}, ), Document( page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6}, ), Document( page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3}, ), Document( page_content="Toys come alive and have a blast doing so", metadata={"year": 1995, "genre": "animated"}, ), Document( page_content="Three men walk into the Zone, three men walk out of the Zone", metadata={ "year": 1979, "director": "Andrei Tarkovsky", "genre": "thriller", "rating": 9.9, }, ),]vectorstore = Chroma.from_documents(docs, OpenAIEmbeddings())
**API Reference:**[Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) | [OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html)
### Creating our self-querying retriever[](#creating-our-self-querying-retriever "Direct link to Creating our self-querying retriever")
Now we can instantiate our retriever. To do this we'll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.
from langchain.chains.query_constructor.base import AttributeInfofrom langchain.retrievers.self_query.base import SelfQueryRetrieverfrom langchain_openai import ChatOpenAImetadata_field_info = [ AttributeInfo( name="genre", description="The genre of the movie. One of ['science fiction', 'comedy', 'drama', 'thriller', 'romance', 'action', 'animated']", type="string", ), AttributeInfo( name="year", description="The year the movie was released", type="integer", ), AttributeInfo( name="director", description="The name of the movie director", type="string", ), AttributeInfo( name="rating", description="A 1-10 rating for the movie", type="float" ),]document_content_description = "Brief summary of a movie"llm = ChatOpenAI(temperature=0)retriever = SelfQueryRetriever.from_llm( llm, vectorstore, document_content_description, metadata_field_info,)
**API Reference:**[AttributeInfo](https://api.python.langchain.com/en/latest/chains/langchain.chains.query_constructor.schema.AttributeInfo.html) | [SelfQueryRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.self_query.base.SelfQueryRetriever.html) | [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html)
### Testing it out[](#testing-it-out "Direct link to Testing it out")
And now we can actually try using our retriever!
# This example only specifies a filter
retriever.invoke("I want to watch a movie rated higher than 8.5")
[Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'director': 'Andrei Tarkovsky', 'genre': 'thriller', 'rating': 9.9, 'year': 1979}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'director': 'Satoshi Kon', 'rating': 8.6, 'year': 2006})]
# This example specifies a query and a filter
retriever.invoke("Has Greta Gerwig directed any movies about women")
[Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'director': 'Greta Gerwig', 'rating': 8.3, 'year': 2019})]
# This example specifies a composite filter
retriever.invoke("What's a highly rated (above 8.5) science fiction film?")
[Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'director': 'Satoshi Kon', 'rating': 8.6, 'year': 2006}), Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'director': 'Andrei Tarkovsky', 'genre': 'thriller', 'rating': 9.9, 'year': 1979})]
# This example specifies a query and composite filter
retriever.invoke(
    "What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated"
)
[Document(page_content='Toys come alive and have a blast doing so', metadata={'genre': 'animated', 'year': 1995})]
### Filter k[](#filter-k "Direct link to Filter k")
We can also use the self query retriever to specify `k`: the number of documents to fetch.
We can do this by passing `enable_limit=True` to the constructor.
retriever = SelfQueryRetriever.from_llm(
    llm,
    vectorstore,
    document_content_description,
    metadata_field_info,
    enable_limit=True,
)

# This example only specifies a relevant query
retriever.invoke("What are two movies about dinosaurs")
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'genre': 'science fiction', 'rating': 7.7, 'year': 1993}), Document(page_content='Toys come alive and have a blast doing so', metadata={'genre': 'animated', 'year': 1995})]
Constructing from scratch with LCEL[](#constructing-from-scratch-with-lcel "Direct link to Constructing from scratch with LCEL")
---------------------------------------------------------------------------------------------------------------------------------
To see what's going on under the hood, and to have more custom control, we can reconstruct our retriever from scratch.
First, we need to create a query-construction chain. This chain will take a user query and generate a `StructuredQuery` object that captures the filters specified by the user. We provide some helper functions for creating a prompt and output parser. These have a number of tunable params that we'll ignore here for simplicity.
from langchain.chains.query_constructor.base import (
    StructuredQueryOutputParser,
    get_query_constructor_prompt,
)

prompt = get_query_constructor_prompt(
    document_content_description,
    metadata_field_info,
)
output_parser = StructuredQueryOutputParser.from_components()
query_constructor = prompt | llm | output_parser
**API Reference:**[StructuredQueryOutputParser](https://api.python.langchain.com/en/latest/chains/langchain.chains.query_constructor.base.StructuredQueryOutputParser.html) | [get\_query\_constructor\_prompt](https://api.python.langchain.com/en/latest/chains/langchain.chains.query_constructor.base.get_query_constructor_prompt.html)
Let's look at our prompt:
print(prompt.format(query="dummy question"))
Your goal is to structure the user's query to match the request schema provided below.<< Structured Request Schema >>When responding use a markdown code snippet with a JSON object formatted in the following schema:```json{ "query": string \ text string to compare to document contents "filter": string \ logical condition statement for filtering documents}
The query string should contain only text that is expected to match the contents of documents. Any conditions in the filter should not be mentioned in the query as well.
A logical condition statement is composed of one or more comparison and logical operation statements.
A comparison statement takes the form: `comp(attr, val)`:
* `comp` (eq | ne | gt | gte | lt | lte | contain | like | in | nin): comparator
* `attr` (string): name of attribute to apply the comparison to
* `val` (string): is the comparison value
A logical operation statement takes the form `op(statement1, statement2, ...)`:
* `op` (and | or | not): logical operator
* `statement1`, `statement2`, ... (comparison statements or logical operation statements): one or more statements to apply the operation to
Make sure that you only use the comparators and logical operators listed above and no others. Make sure that filters only refer to attributes that exist in the data source. Make sure that filters only use the attributed names with its function names if there are functions applied on them. Make sure that filters only use format `YYYY-MM-DD` when handling date data typed values. Make sure that filters take into account the descriptions of attributes and only make comparisons that are feasible given the type of data being stored. Make sure that filters are only used as needed. If there are no filters that should be applied return "NO\_FILTER" for the filter value.
<< Example 1. >> Data Source:
{ "content": "Lyrics of a song", "attributes": { "artist": { "type": "string", "description": "Name of the song artist" }, "length": { "type": "integer", "description": "Length of the song in seconds" }, "genre": { "type": "string", "description": "The song genre, one of "pop", "rock" or "rap"" } }}
User Query: What are songs by Taylor Swift or Katy Perry about teenage romance under 3 minutes long in the dance pop genre
Structured Request:
{ "query": "teenager love", "filter": "and(or(eq(\"artist\", \"Taylor Swift\"), eq(\"artist\", \"Katy Perry\")), lt(\"length\", 180), eq(\"genre\", \"pop\"))"}
<< Example 2. >> Data Source:
{ "content": "Lyrics of a song", "attributes": { "artist": { "type": "string", "description": "Name of the song artist" }, "length": { "type": "integer", "description": "Length of the song in seconds" }, "genre": { "type": "string", "description": "The song genre, one of "pop", "rock" or "rap"" } }}
User Query: What are songs that were not published on Spotify
Structured Request:
{ "query": "", "filter": "NO_FILTER"}
<< Example 3. >> Data Source:
{ "content": "Brief summary of a movie", "attributes": { "genre": { "description": "The genre of the movie. One of ['science fiction', 'comedy', 'drama', 'thriller', 'romance', 'action', 'animated']", "type": "string" }, "year": { "description": "The year the movie was released", "type": "integer" }, "director": { "description": "The name of the movie director", "type": "string" }, "rating": { "description": "A 1-10 rating for the movie", "type": "float" }}}
User Query: dummy question
Structured Request:
And here is what our full chain produces:

```python
query_constructor.invoke(
    {
        "query": "What are some sci-fi movies from the 90's directed by Luc Besson about taxi drivers"
    }
)
```
StructuredQuery(query='taxi driver', filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='science fiction'), Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GTE: 'gte'>, attribute='year', value=1990), Comparison(comparator=<Comparator.LT: 'lt'>, attribute='year', value=2000)]), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Luc Besson')]), limit=None)
The query constructor is the key element of the self-query retriever. To make a great retrieval system you'll need to make sure your query constructor works well. Often this requires adjusting the prompt, the examples in the prompt, the attribute descriptions, etc. For an example that walks through refining a query constructor on some hotel inventory data, [check out this cookbook](https://github.com/langchain-ai/langchain/blob/master/cookbook/self_query_hotel_search.ipynb).
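A quick way to sanity-check the query constructor while iterating on the prompt or attribute descriptions is to run it over a handful of representative questions and inspect the structured queries it produces. A minimal sketch (the test questions below are just examples):

```python
test_queries = [
    "Movies about dreams rated above 8",
    "Anything directed by Greta Gerwig?",
    "A 90's animated film about toys",
]

for q in test_queries:
    # Print the StructuredQuery generated for each question.
    structured = query_constructor.invoke({"query": q})
    print(q)
    print(structured)
    print()
```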
The next key element is the structured query translator. This is the object responsible for translating the generic `StructuredQuery` object into a metadata filter in the syntax of the vector store you're using. LangChain comes with a number of built-in translators. To see them all head to the [Integrations section](/v0.2/docs/integrations/retrievers/self_query/).
from langchain.retrievers.self_query.chroma import ChromaTranslator

retriever = SelfQueryRetriever(
    query_constructor=query_constructor,
    vectorstore=vectorstore,
    structured_query_translator=ChromaTranslator(),
)
**API Reference:**[ChromaTranslator](https://api.python.langchain.com/en/latest/query_constructors/langchain_community.query_constructors.chroma.ChromaTranslator.html)
retriever.invoke( "What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated")
[Document(page_content='Toys come alive and have a blast doing so', metadata={'genre': 'animated', 'year': 1995})]
| null
https://python.langchain.com/v0.2/docs/how_to/semantic-chunker/ |
How to split text based on semantic similarity
==============================================
Taken from Greg Kamradt's wonderful notebook: [5\_Levels\_Of\_Text\_Splitting](https://github.com/FullStackRetrieval-com/RetrievalTutorials/blob/main/tutorials/LevelsOfTextSplitting/5_Levels_Of_Text_Splitting.ipynb)
All credit to him.
This guide covers how to split chunks based on their semantic similarity. If embeddings are sufficiently far apart, chunks are split.
At a high level, this splits the text into sentences, then groups them into windows of 3 sentences, and finally merges groups that are similar in the embedding space.
Install Dependencies[](#install-dependencies "Direct link to Install Dependencies")
------------------------------------------------------------------------------------
!pip install --quiet langchain_experimental langchain_openai
Load Example Data[](#load-example-data "Direct link to Load Example Data")
---------------------------------------------------------------------------
# This is a long document we can split up.
with open("state_of_the_union.txt") as f:
    state_of_the_union = f.read()
Create Text Splitter[](#create-text-splitter "Direct link to Create Text Splitter")
------------------------------------------------------------------------------------
To instantiate a [SemanticChunker](https://api.python.langchain.com/en/latest/text_splitter/langchain_experimental.text_splitter.SemanticChunker.html), we must specify an embedding model. Below we will use [OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.openai.OpenAIEmbeddings.html).
from langchain_experimental.text_splitter import SemanticChunker
from langchain_openai.embeddings import OpenAIEmbeddings

text_splitter = SemanticChunker(OpenAIEmbeddings())
**API Reference:**[SemanticChunker](https://api.python.langchain.com/en/latest/text_splitter/langchain_experimental.text_splitter.SemanticChunker.html) | [OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html)
Split Text[](#split-text "Direct link to Split Text")
------------------------------------------------------
We split text in the usual way, e.g., by invoking `.create_documents` to create LangChain [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) objects:
docs = text_splitter.create_documents([state_of_the_union])print(docs[0].page_content)
Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. Last year COVID-19 kept us apart. This year we are finally together again. Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. With a duty to one another to the American people to the Constitution. And with an unwavering resolve that freedom will always triumph over tyranny. Six days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. He thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. He met the Ukrainian people. From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. In this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight. Let each of us here tonight in this Chamber send an unmistakable signal to Ukraine and to the world. Please rise if you are able and show that, Yes, we the United States of America stand with the Ukrainian people. Throughout our history we’ve learned this lesson when dictators do not pay a price for their aggression they cause more chaos. They keep moving.
Breakpoints[](#breakpoints "Direct link to Breakpoints")
---------------------------------------------------------
This chunker works by determining when to "break" apart sentences. This is done by looking for differences in embeddings between any two sentences. When that difference is past some threshold, then they are split.
There are a few ways to determine what that threshold is, which are controlled by the `breakpoint_threshold_type` kwarg.
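In addition to the type, there is a `breakpoint_threshold_amount` kwarg that sets the numeric cutoff itself (e.g., which percentile, or how many standard deviations). A hedged sketch, assuming your installed version of `langchain_experimental` exposes this parameter:

```python
from langchain_experimental.text_splitter import SemanticChunker
from langchain_openai.embeddings import OpenAIEmbeddings

# Assumption: breakpoint_threshold_amount is available in your langchain_experimental version.
text_splitter = SemanticChunker(
    OpenAIEmbeddings(),
    breakpoint_threshold_type="percentile",
    breakpoint_threshold_amount=90.0,  # e.g. split on differences above the 90th percentile
)
```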
### Percentile[](#percentile "Direct link to Percentile")
The default way to split is based on percentile. In this method, all differences between sentences are calculated, and then any difference greater than the X percentile is split.
```python
text_splitter = SemanticChunker(
    OpenAIEmbeddings(), breakpoint_threshold_type="percentile"
)

docs = text_splitter.create_documents([state_of_the_union])
print(docs[0].page_content)
```
Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. Last year COVID-19 kept us apart. This year we are finally together again. Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. With a duty to one another to the American people to the Constitution. And with an unwavering resolve that freedom will always triumph over tyranny. Six days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. He thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. He met the Ukrainian people. From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. In this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight. Let each of us here tonight in this Chamber send an unmistakable signal to Ukraine and to the world. Please rise if you are able and show that, Yes, we the United States of America stand with the Ukrainian people. Throughout our history we’ve learned this lesson when dictators do not pay a price for their aggression they cause more chaos. They keep moving.
print(len(docs))
26
### Standard Deviation[](#standard-deviation "Direct link to Standard Deviation")
In this method, any difference greater than X standard deviations is split.
```python
text_splitter = SemanticChunker(
    OpenAIEmbeddings(), breakpoint_threshold_type="standard_deviation"
)

docs = text_splitter.create_documents([state_of_the_union])
print(docs[0].page_content)
```
Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. Last year COVID-19 kept us apart. This year we are finally together again. Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. With a duty to one another to the American people to the Constitution. And with an unwavering resolve that freedom will always triumph over tyranny. Six days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. He thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. He met the Ukrainian people. From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. In this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight. Let each of us here tonight in this Chamber send an unmistakable signal to Ukraine and to the world. Please rise if you are able and show that, Yes, we the United States of America stand with the Ukrainian people. Throughout our history we’ve learned this lesson when dictators do not pay a price for their aggression they cause more chaos. They keep moving. And the costs and the threats to America and the world keep rising. That’s why the NATO Alliance was created to secure peace and stability in Europe after World War 2. The United States is a member along with 29 other nations. It matters. American diplomacy matters. American resolve matters. Putin’s latest attack on Ukraine was premeditated and unprovoked. He rejected repeated efforts at diplomacy. He thought the West and NATO wouldn’t respond. And he thought he could divide us at home. Putin was wrong. We were ready. Here is what we did. We prepared extensively and carefully. We spent months building a coalition of other freedom-loving nations from Europe and the Americas to Asia and Africa to confront Putin. I spent countless hours unifying our European allies. We shared with the world in advance what we knew Putin was planning and precisely how he would try to falsely justify his aggression. We countered Russia’s lies with truth. And now that he has acted the free world is holding him accountable. Along with twenty-seven members of the European Union including France, Germany, Italy, as well as countries like the United Kingdom, Canada, Japan, Korea, Australia, New Zealand, and many others, even Switzerland. We are inflicting pain on Russia and supporting the people of Ukraine. Putin is now isolated from the world more than ever. Together with our allies –we are right now enforcing powerful economic sanctions. We are cutting off Russia’s largest banks from the international financial system. Preventing Russia’s central bank from defending the Russian Ruble making Putin’s $630 Billion “war fund” worthless. We are choking off Russia’s access to technology that will sap its economic strength and weaken its military for years to come. Tonight I say to the Russian oligarchs and corrupt leaders who have bilked billions of dollars off this violent regime no more. The U.S. 
Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs. We are joining with our European allies to find and seize your yachts your luxury apartments your private jets. We are coming for your ill-begotten gains. And tonight I am announcing that we will join our allies in closing off American air space to all Russian flights – further isolating Russia – and adding an additional squeeze –on their economy. The Ruble has lost 30% of its value. The Russian stock market has lost 40% of its value and trading remains suspended. Russia’s economy is reeling and Putin alone is to blame. Together with our allies we are providing support to the Ukrainians in their fight for freedom. Military assistance. Economic assistance. Humanitarian assistance. We are giving more than $1 Billion in direct assistance to Ukraine. And we will continue to aid the Ukrainian people as they defend their country and to help ease their suffering. Let me be clear, our forces are not engaged and will not engage in conflict with Russian forces in Ukraine. Our forces are not going to Europe to fight in Ukraine, but to defend our NATO Allies – in the event that Putin decides to keep moving west. For that purpose we’ve mobilized American ground forces, air squadrons, and ship deployments to protect NATO countries including Poland, Romania, Latvia, Lithuania, and Estonia. As I have made crystal clear the United States and our Allies will defend every inch of territory of NATO countries with the full force of our collective power. And we remain clear-eyed. The Ukrainians are fighting back with pure courage. But the next few days weeks, months, will be hard on them. Putin has unleashed violence and chaos. But while he may make gains on the battlefield – he will pay a continuing high price over the long run. And a proud Ukrainian people, who have known 30 years of independence, have repeatedly shown that they will not tolerate anyone who tries to take their country backwards. To all Americans, I will be honest with you, as I’ve always promised. A Russian dictator, invading a foreign country, has costs around the world. And I’m taking robust action to make sure the pain of our sanctions is targeted at Russia’s economy. And I will use every tool at our disposal to protect American businesses and consumers. Tonight, I can announce that the United States has worked with 30 other countries to release 60 Million barrels of oil from reserves around the world. America will lead that effort, releasing 30 Million barrels from our own Strategic Petroleum Reserve. And we stand ready to do more if necessary, unified with our allies. These steps will help blunt gas prices here at home. And I know the news about what’s happening can seem alarming.
print(len(docs))
4
### Interquartile[](#interquartile "Direct link to Interquartile")
In this method, the interquartile distance is used to split chunks.
```python
text_splitter = SemanticChunker(
    OpenAIEmbeddings(), breakpoint_threshold_type="interquartile"
)

docs = text_splitter.create_documents([state_of_the_union])
print(docs[0].page_content)
```
Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. Last year COVID-19 kept us apart. This year we are finally together again. Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. With a duty to one another to the American people to the Constitution. And with an unwavering resolve that freedom will always triumph over tyranny. Six days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. He thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. He met the Ukrainian people. From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. In this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight. Let each of us here tonight in this Chamber send an unmistakable signal to Ukraine and to the world. Please rise if you are able and show that, Yes, we the United States of America stand with the Ukrainian people. Throughout our history we’ve learned this lesson when dictators do not pay a price for their aggression they cause more chaos. They keep moving.
print(len(docs))
25
### Gradient[](#gradient "Direct link to Gradient")
In this method, the gradient of the distances is used together with the percentile method to split chunks. This is useful when chunks are highly correlated with each other or specific to a domain, e.g. legal or medical text. The idea is to apply anomaly detection to the gradient array so that the distribution becomes wider, making it easier to identify boundaries in highly semantic data.
```python
text_splitter = SemanticChunker(
    OpenAIEmbeddings(), breakpoint_threshold_type="gradient"
)

docs = text_splitter.create_documents([state_of_the_union])
print(docs[0].page_content)
```
Madam Speaker, Madam Vice President, our First Lady and Second Gentleman.
print(len(docs))
26
https://python.langchain.com/v0.2/docs/how_to/split_by_token/
How to split text by tokens
===========================
Language models have a token limit, which you should not exceed. When you split your text into chunks, it is therefore a good idea to count the number of tokens. There are many tokenizers; when counting tokens in your text, use the same tokenizer that the language model uses.
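For example, if you are targeting an OpenAI model, a minimal sketch of counting tokens with the `tiktoken` package (installed in the next section) might look like this; the sample text is illustrative:

```python
import tiktoken

text = "LangChain splits long documents into smaller chunks."
encoding = tiktoken.encoding_for_model("gpt-4")  # use the tokenizer of the model you plan to call
print(len(encoding.encode(text)))  # number of tokens this model would see
```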
tiktoken[](#tiktoken "Direct link to tiktoken")
------------------------------------------------
note
[tiktoken](https://github.com/openai/tiktoken) is a fast `BPE` tokenizer created by `OpenAI`.
We can use `tiktoken` to estimate the number of tokens used. It will likely give the most accurate count for OpenAI models.
1. How the text is split: by character passed in.
2. How the chunk size is measured: by `tiktoken` tokenizer.
[CharacterTextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.CharacterTextSplitter.html), [RecursiveCharacterTextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.RecursiveCharacterTextSplitter.html), and [TokenTextSplitter](https://api.python.langchain.com/en/latest/base/langchain_text_splitters.base.TokenTextSplitter.html) can be used with `tiktoken` directly.
%pip install --upgrade --quiet langchain-text-splitters tiktoken
```python
from langchain_text_splitters import CharacterTextSplitter

# This is a long document we can split up.
with open("state_of_the_union.txt") as f:
    state_of_the_union = f.read()
```
**API Reference:**[CharacterTextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.CharacterTextSplitter.html)
To split with a [CharacterTextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.CharacterTextSplitter.html) and then merge chunks with `tiktoken`, use its `.from_tiktoken_encoder()` method. Note that splits from this method can be larger than the chunk size measured by the `tiktoken` tokenizer.
The `.from_tiktoken_encoder()` method takes either `encoding_name` as an argument (e.g. `cl100k_base`), or the `model_name` (e.g. `gpt-4`). All additional arguments like `chunk_size`, `chunk_overlap`, and `separators` are used to instantiate `CharacterTextSplitter`:
```python
text_splitter = CharacterTextSplitter.from_tiktoken_encoder(
    encoding_name="cl100k_base", chunk_size=100, chunk_overlap=0
)
texts = text_splitter.split_text(state_of_the_union)

print(texts[0])
```
Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. Last year COVID-19 kept us apart. This year we are finally together again. Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. With a duty to one another to the American people to the Constitution.
To implement a hard constraint on the chunk size, we can use `RecursiveCharacterTextSplitter.from_tiktoken_encoder`, where each split will be recursively split if it has a larger size:
```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

text_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
    model_name="gpt-4",
    chunk_size=100,
    chunk_overlap=0,
)
```
**API Reference:**[RecursiveCharacterTextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.RecursiveCharacterTextSplitter.html)
We can also load a `TokenTextSplitter` splitter, which works with `tiktoken` directly and will ensure each split is smaller than chunk size.
```python
from langchain_text_splitters import TokenTextSplitter

text_splitter = TokenTextSplitter(chunk_size=10, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])
```
**API Reference:**[TokenTextSplitter](https://api.python.langchain.com/en/latest/base/langchain_text_splitters.base.TokenTextSplitter.html)
Madam Speaker, Madam Vice President, our
Some written languages (e.g. Chinese and Japanese) have characters which encode to 2 or more tokens. Using the `TokenTextSplitter` directly can split the tokens for a character between two chunks causing malformed Unicode characters. Use `RecursiveCharacterTextSplitter.from_tiktoken_encoder` or `CharacterTextSplitter.from_tiktoken_encoder` to ensure chunks contain valid Unicode strings.
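For instance, a hedged sketch of splitting Japanese text with the tiktoken-aware recursive splitter (the sample string and chunk size are illustrative assumptions):

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Illustrative sample text; any CJK document would do.
japanese_text = "こんにちは、世界。これはテストの文章です。" * 20

splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
    encoding_name="cl100k_base", chunk_size=50, chunk_overlap=0
)
chunks = splitter.split_text(japanese_text)
print(len(chunks))
print(chunks[0])  # each chunk is a valid Unicode string
```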
spaCy[](#spacy "Direct link to spaCy")
---------------------------------------
note
[spaCy](https://spacy.io/) is an open-source software library for advanced natural language processing, written in the programming languages Python and Cython.
LangChain implements splitters based on the [spaCy tokenizer](https://spacy.io/api/tokenizer).
1. How the text is split: by `spaCy` tokenizer.
2. How the chunk size is measured: by number of characters.
%pip install --upgrade --quiet spacy
```python
# This is a long document we can split up.
with open("state_of_the_union.txt") as f:
    state_of_the_union = f.read()

from langchain_text_splitters import SpacyTextSplitter

text_splitter = SpacyTextSplitter(chunk_size=1000)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])
```
**API Reference:**[SpacyTextSplitter](https://api.python.langchain.com/en/latest/spacy/langchain_text_splitters.spacy.SpacyTextSplitter.html)
Madam Speaker, Madam Vice President, our First Lady and Second Gentleman.Members of Congress and the Cabinet.Justices of the Supreme Court.My fellow Americans. Last year COVID-19 kept us apart.This year we are finally together again. Tonight, we meet as Democrats Republicans and Independents.But most importantly as Americans. With a duty to one another to the American people to the Constitution. And with an unwavering resolve that freedom will always triumph over tyranny. Six days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways.But he badly miscalculated. He thought he could roll into Ukraine and the world would roll over.Instead he met a wall of strength he never imagined. He met the Ukrainian people. From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.
SentenceTransformers[](#sentencetransformers "Direct link to SentenceTransformers")
------------------------------------------------------------------------------------
The [SentenceTransformersTokenTextSplitter](https://api.python.langchain.com/en/latest/sentence_transformers/langchain_text_splitters.sentence_transformers.SentenceTransformersTokenTextSplitter.html) is a specialized text splitter for use with the sentence-transformer models. The default behaviour is to split the text into chunks that fit the token window of the sentence transformer model that you would like to use.
To split text and constrain token counts according to the sentence-transformers tokenizer, instantiate a `SentenceTransformersTokenTextSplitter`. You can optionally specify:
* `chunk_overlap`: integer count of token overlap;
* `model_name`: sentence-transformer model name, defaulting to `"sentence-transformers/all-mpnet-base-v2"`;
* `tokens_per_chunk`: desired token count per chunk.
```python
from langchain_text_splitters import SentenceTransformersTokenTextSplitter

splitter = SentenceTransformersTokenTextSplitter(chunk_overlap=0)
text = "Lorem "

count_start_and_stop_tokens = 2
text_token_count = splitter.count_tokens(text=text) - count_start_and_stop_tokens
print(text_token_count)
```
**API Reference:**[SentenceTransformersTokenTextSplitter](https://api.python.langchain.com/en/latest/sentence_transformers/langchain_text_splitters.sentence_transformers.SentenceTransformersTokenTextSplitter.html)
2
```python
token_multiplier = splitter.maximum_tokens_per_chunk // text_token_count + 1

# `text_to_split` does not fit in a single chunk
text_to_split = text * token_multiplier

print(f"tokens in text to split: {splitter.count_tokens(text=text_to_split)}")
```
tokens in text to split: 514
```python
text_chunks = splitter.split_text(text=text_to_split)
print(text_chunks[1])
```
lorem
NLTK[](#nltk "Direct link to NLTK")
------------------------------------
note
[The Natural Language Toolkit](https://en.wikipedia.org/wiki/Natural_Language_Toolkit), or more commonly [NLTK](https://www.nltk.org/), is a suite of libraries and programs for symbolic and statistical natural language processing (NLP) for English written in the Python programming language.
Rather than just splitting on `"\n\n"`, we can use `NLTK` to split based on [NLTK tokenizers](https://www.nltk.org/api/nltk.tokenize.html).
1. How the text is split: by `NLTK` tokenizer.
2. How the chunk size is measured: by number of characters.
# pip install nltk
```python
# This is a long document we can split up.
with open("state_of_the_union.txt") as f:
    state_of_the_union = f.read()

from langchain_text_splitters import NLTKTextSplitter

text_splitter = NLTKTextSplitter(chunk_size=1000)
```
**API Reference:**[NLTKTextSplitter](https://api.python.langchain.com/en/latest/nltk/langchain_text_splitters.nltk.NLTKTextSplitter.html)
```python
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])
```
Madam Speaker, Madam Vice President, our First Lady and Second Gentleman.Members of Congress and the Cabinet.Justices of the Supreme Court.My fellow Americans.Last year COVID-19 kept us apart.This year we are finally together again.Tonight, we meet as Democrats Republicans and Independents.But most importantly as Americans.With a duty to one another to the American people to the Constitution.And with an unwavering resolve that freedom will always triumph over tyranny.Six days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways.But he badly miscalculated.He thought he could roll into Ukraine and the world would roll over.Instead he met a wall of strength he never imagined.He met the Ukrainian people.From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.Groups of citizens blocking tanks with their bodies.
KoNLPY[](#konlpy "Direct link to KoNLPY")
------------------------------------------
note
[KoNLPy: Korean NLP in Python](https://konlpy.org/en/latest/) is a Python package for natural language processing (NLP) of the Korean language.
Token splitting involves the segmentation of text into smaller, more manageable units called tokens. These tokens are often words, phrases, symbols, or other meaningful elements crucial for further processing and analysis. In languages like English, token splitting typically involves separating words by spaces and punctuation marks. The effectiveness of token splitting largely depends on the tokenizer's understanding of the language structure, ensuring the generation of meaningful tokens. Since tokenizers designed for the English language are not equipped to understand the unique semantic structures of other languages, such as Korean, they cannot be effectively used for Korean language processing.
### Token splitting for Korean with KoNLPy's Kkma Analyzer[](#token-splitting-for-korean-with-konlpys-kkma-analyzer "Direct link to Token splitting for Korean with KoNLPy's Kkma Analyzer")
In the case of Korean text, KoNLPy includes a morphological analyzer called `Kkma` (Korean Knowledge Morpheme Analyzer). `Kkma` provides detailed morphological analysis of Korean text. It breaks down sentences into words and words into their respective morphemes, identifying parts of speech for each token. It can segment a block of text into individual sentences, which is particularly useful for processing long texts.
### Usage Considerations[](#usage-considerations "Direct link to Usage Considerations")
While `Kkma` is renowned for its detailed analysis, it is important to note that this precision may impact processing speed. Thus, `Kkma` is best suited for applications where analytical depth is prioritized over rapid text processing.
# pip install konlpy
```python
# This is a long Korean document that we want to split up into its component sentences.
with open("./your_korean_doc.txt") as f:
    korean_document = f.read()

from langchain_text_splitters import KonlpyTextSplitter

text_splitter = KonlpyTextSplitter()
```
**API Reference:**[KonlpyTextSplitter](https://api.python.langchain.com/en/latest/konlpy/langchain_text_splitters.konlpy.KonlpyTextSplitter.html)
```python
texts = text_splitter.split_text(korean_document)
# The sentences are split with "\n\n" characters.
print(texts[0])
```
춘향전 옛날에 남원에 이 도령이라는 벼슬아치 아들이 있었다.그의 외모는 빛나는 달처럼 잘생겼고, 그의 학식과 기예는 남보다 뛰어났다.한편, 이 마을에는 춘향이라는 절세 가인이 살고 있었다.춘 향의 아름다움은 꽃과 같아 마을 사람들 로부터 많은 사랑을 받았다.어느 봄날, 도령은 친구들과 놀러 나갔다가 춘 향을 만 나 첫 눈에 반하고 말았다.두 사람은 서로 사랑하게 되었고, 이내 비밀스러운 사랑의 맹세를 나누었다.하지만 좋은 날들은 오래가지 않았다.도령의 아버지가 다른 곳으로 전근을 가게 되어 도령도 떠나 야만 했다.이별의 아픔 속에서도, 두 사람은 재회를 기약하며 서로를 믿고 기다리기로 했다.그러나 새로 부임한 관아의 사또가 춘 향의 아름다움에 욕심을 내 어 그녀에게 강요를 시작했다.춘 향 은 도령에 대한 자신의 사랑을 지키기 위해, 사또의 요구를 단호히 거절했다.이에 분노한 사또는 춘 향을 감옥에 가두고 혹독한 형벌을 내렸다.이야기는 이 도령이 고위 관직에 오른 후, 춘 향을 구해 내는 것으로 끝난다.두 사람은 오랜 시련 끝에 다시 만나게 되고, 그들의 사랑은 온 세상에 전해 지며 후세에까지 이어진다.- 춘향전 (The Tale of Chunhyang)
Hugging Face tokenizer[](#hugging-face-tokenizer "Direct link to Hugging Face tokenizer")
------------------------------------------------------------------------------------------
[Hugging Face](https://huggingface.co/docs/tokenizers/index) has many tokenizers.
We use a Hugging Face tokenizer, [GPT2TokenizerFast](https://huggingface.co/Ransaka/gpt2-tokenizer-fast), to count the text length in tokens.
1. How the text is split: by character passed in.
2. How the chunk size is measured: by number of tokens calculated by the `Hugging Face` tokenizer.
```python
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

# This is a long document we can split up.
with open("state_of_the_union.txt") as f:
    state_of_the_union = f.read()

from langchain_text_splitters import CharacterTextSplitter
```
**API Reference:**[CharacterTextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.CharacterTextSplitter.html)
```python
text_splitter = CharacterTextSplitter.from_huggingface_tokenizer(
    tokenizer, chunk_size=100, chunk_overlap=0
)
texts = text_splitter.split_text(state_of_the_union)

print(texts[0])
```
Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. Last year COVID-19 kept us apart. This year we are finally together again. Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. With a duty to one another to the American people to the Constitution.
https://python.langchain.com/v0.2/docs/how_to/sql_csv/
How to do question answering over CSVs
======================================
LLMs are great for building question-answering systems over various types of data sources. In this section we'll go over how to build Q&A systems over data stored in a CSV file(s). Like working with SQL databases, the key to working with CSV files is to give an LLM access to tools for querying and interacting with the data. The two main ways to do this are to either:
* **RECOMMENDED**: Load the CSV(s) into a SQL database, and use the approaches outlined in the [SQL tutorial](/v0.2/docs/tutorials/sql_qa/).
* Give the LLM access to a Python environment where it can use libraries like Pandas to interact with the data.
We will cover both approaches in this guide.
⚠️ Security note ⚠️[](#️-security-note-️ "Direct link to ⚠️ Security note ⚠️")
-------------------------------------------------------------------------------
Both approaches mentioned above carry significant risks. Using SQL requires executing model-generated SQL queries. Using a library like Pandas requires letting the model execute Python code. Since it is easier to tightly scope SQL connection permissions and sanitize SQL queries than it is to sandbox Python environments, **we HIGHLY recommend interacting with CSV data via SQL.** For more on general security best practices, [see here](/v0.2/docs/security/).
Setup[](#setup "Direct link to Setup")
---------------------------------------
Dependencies for this guide:
%pip install -qU langchain langchain-openai langchain-community langchain-experimental pandas
Set required environment variables:
```python
# Using LangSmith is recommended but not required. Uncomment below lines to use.
# import os
# os.environ["LANGCHAIN_TRACING_V2"] = "true"
# os.environ["LANGCHAIN_API_KEY"] = getpass.getpass()
```
Download the [Titanic dataset](https://www.kaggle.com/datasets/yasserh/titanic-dataset) if you don't already have it:
!wget https://web.stanford.edu/class/archive/cs/cs109/cs109.1166/stuff/titanic.csv -O titanic.csv
```python
import pandas as pd

df = pd.read_csv("titanic.csv")
print(df.shape)
print(df.columns.tolist())
```
```
(887, 8)
['Survived', 'Pclass', 'Name', 'Sex', 'Age', 'Siblings/Spouses Aboard', 'Parents/Children Aboard', 'Fare']
```
SQL[](#sql "Direct link to SQL")
---------------------------------
Using SQL to interact with CSV data is the recommended approach because it is easier to limit permissions and sanitize queries than with arbitrary Python.
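One hedged illustration of that scoping: open the SQLite file read-only so model-generated SQL can query but not modify it. The URI-style connection string is a standard SQLAlchemy/SQLite pattern, and this sketch assumes the `titanic.db` file has already been created as shown in the next cell:

```python
from sqlalchemy import create_engine

from langchain_community.utilities import SQLDatabase

# Read-only connection: model-generated SQL can SELECT but cannot write to the file.
ro_engine = create_engine("sqlite:///file:titanic.db?mode=ro&uri=true")
ro_db = SQLDatabase(engine=ro_engine, include_tables=["titanic"])
```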
Most SQL databases make it easy to load a CSV file in as a table ([DuckDB](https://duckdb.org/docs/data/csv/overview.html), [SQLite](https://www.sqlite.org/csv.html), etc.). Once you've done this you can use all of the chain and agent-creating techniques outlined in the [SQL tutorial](/v0.2/docs/tutorials/sql_qa/). Here's a quick example of how we might do this with SQLite:
```python
from langchain_community.utilities import SQLDatabase
from sqlalchemy import create_engine

engine = create_engine("sqlite:///titanic.db")
df.to_sql("titanic", engine, index=False)
```
**API Reference:**[SQLDatabase](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.sql_database.SQLDatabase.html)
887
```python
db = SQLDatabase(engine=engine)
print(db.dialect)
print(db.get_usable_table_names())
print(db.run("SELECT * FROM titanic WHERE Age < 2;"))
```
sqlite['titanic'][(1, 2, 'Master. Alden Gates Caldwell', 'male', 0.83, 0, 2, 29.0), (0, 3, 'Master. Eino Viljami Panula', 'male', 1.0, 4, 1, 39.6875), (1, 3, 'Miss. Eleanor Ileen Johnson', 'female', 1.0, 1, 1, 11.1333), (1, 2, 'Master. Richard F Becker', 'male', 1.0, 2, 1, 39.0), (1, 1, 'Master. Hudson Trevor Allison', 'male', 0.92, 1, 2, 151.55), (1, 3, 'Miss. Maria Nakid', 'female', 1.0, 0, 2, 15.7417), (0, 3, 'Master. Sidney Leonard Goodwin', 'male', 1.0, 5, 2, 46.9), (1, 3, 'Miss. Helene Barbara Baclini', 'female', 0.75, 2, 1, 19.2583), (1, 3, 'Miss. Eugenie Baclini', 'female', 0.75, 2, 1, 19.2583), (1, 2, 'Master. Viljo Hamalainen', 'male', 0.67, 1, 1, 14.5), (1, 3, 'Master. Bertram Vere Dean', 'male', 1.0, 1, 2, 20.575), (1, 3, 'Master. Assad Alexander Thomas', 'male', 0.42, 0, 1, 8.5167), (1, 2, 'Master. Andre Mallet', 'male', 1.0, 0, 2, 37.0042), (1, 2, 'Master. George Sibley Richards', 'male', 0.83, 1, 1, 18.75)]
And create a [SQL agent](/v0.2/docs/tutorials/sql_qa/) to interact with it:
* OpenAI
* Anthropic
* Azure
* Google
* Cohere
* FireworksAI
* Groq
* MistralAI
* TogetherAI
pip install -qU langchain-openai
import getpassimport osos.environ["OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAIllm = ChatOpenAI(model="gpt-3.5-turbo-0125")
pip install -qU langchain-anthropic
import getpassimport osos.environ["ANTHROPIC_API_KEY"] = getpass.getpass()from langchain_anthropic import ChatAnthropicllm = ChatAnthropic(model="claude-3-sonnet-20240229")
pip install -qU langchain-openai
import getpassimport osos.environ["AZURE_OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import AzureChatOpenAIllm = AzureChatOpenAI( azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"], openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],)
pip install -qU langchain-google-vertexai
import getpassimport osos.environ["GOOGLE_API_KEY"] = getpass.getpass()from langchain_google_vertexai import ChatVertexAIllm = ChatVertexAI(model="gemini-pro")
pip install -qU langchain-cohere
import getpassimport osos.environ["COHERE_API_KEY"] = getpass.getpass()from langchain_cohere import ChatCoherellm = ChatCohere(model="command-r")
pip install -qU langchain-fireworks
import getpassimport osos.environ["FIREWORKS_API_KEY"] = getpass.getpass()from langchain_fireworks import ChatFireworksllm = ChatFireworks(model="accounts/fireworks/models/mixtral-8x7b-instruct")
pip install -qU langchain-groq
import getpassimport osos.environ["GROQ_API_KEY"] = getpass.getpass()from langchain_groq import ChatGroqllm = ChatGroq(model="llama3-8b-8192")
pip install -qU langchain-mistralai
import getpassimport osos.environ["MISTRAL_API_KEY"] = getpass.getpass()from langchain_mistralai import ChatMistralAIllm = ChatMistralAI(model="mistral-large-latest")
pip install -qU langchain-openai
import getpassimport osos.environ["TOGETHER_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAIllm = ChatOpenAI( base_url="https://api.together.xyz/v1", api_key=os.environ["TOGETHER_API_KEY"], model="mistralai/Mixtral-8x7B-Instruct-v0.1",)
```python
from langchain_community.agent_toolkits import create_sql_agent

agent_executor = create_sql_agent(llm, db=db, agent_type="openai-tools", verbose=True)
```
**API Reference:**[create\_sql\_agent](https://api.python.langchain.com/en/latest/agent_toolkits/langchain_community.agent_toolkits.sql.base.create_sql_agent.html)
agent_executor.invoke({"input": "what's the average age of survivors"})
```
> Entering new SQL Agent Executor chain...
Invoking: `sql_db_list_tables` with `{}`
titanic
Invoking: `sql_db_schema` with `{'table_names': 'titanic'}`

CREATE TABLE titanic (
	"Survived" BIGINT,
	"Pclass" BIGINT,
	"Name" TEXT,
	"Sex" TEXT,
	"Age" FLOAT,
	"Siblings/Spouses Aboard" BIGINT,
	"Parents/Children Aboard" BIGINT,
	"Fare" FLOAT
)

/*
3 rows from titanic table:
Survived	Pclass	Name	Sex	Age	Siblings/Spouses Aboard	Parents/Children Aboard	Fare
0	3	Mr. Owen Harris Braund	male	22.0	1	0	7.25
1	1	Mrs. John Bradley (Florence Briggs Thayer) Cumings	female	38.0	1	0	71.2833
1	3	Miss. Laina Heikkinen	female	26.0	0	0	7.925
*/
Invoking: `sql_db_query` with `{'query': 'SELECT AVG(Age) AS Average_Age FROM titanic WHERE Survived = 1'}`
[(28.408391812865496,)]
The average age of survivors in the Titanic dataset is approximately 28.41 years.

> Finished chain.
```
{'input': "what's the average age of survivors", 'output': 'The average age of survivors in the Titanic dataset is approximately 28.41 years.'}
This approach easily generalizes to multiple CSVs, since we can just load each of them into our database as its own table. See the [Multiple CSVs](/v0.2/docs/how_to/sql_csv/#multiple-csvs) section below.
Pandas[](#pandas "Direct link to Pandas")
------------------------------------------
Instead of SQL we can also use data analysis libraries like pandas and the code generating abilities of LLMs to interact with CSV data. Again, **this approach is not fit for production use cases unless you have extensive safeguards in place**. For this reason, our code-execution utilities and constructors live in the `langchain-experimental` package.
### Chain[](#chain "Direct link to Chain")
Most LLMs have been trained on enough pandas Python code that they can generate it just by being asked to:
```python
ai_msg = llm.invoke(
    "I have a pandas DataFrame 'df' with columns 'Age' and 'Fare'. Write code to compute the correlation between the two columns. Return Markdown for a Python code snippet and nothing else."
)
print(ai_msg.content)
```
```python
correlation = df['Age'].corr(df['Fare'])
correlation
```

We can combine this ability with a Python-executing tool to create a simple data analysis chain. We'll first want to load our CSV table as a dataframe, and give the tool access to this dataframe:

```python
import pandas as pd
from langchain_core.prompts import ChatPromptTemplate
from langchain_experimental.tools import PythonAstREPLTool

df = pd.read_csv("titanic.csv")
tool = PythonAstREPLTool(locals={"df": df})
tool.invoke("df['Fare'].mean()")
```
**API Reference:**[ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) | [PythonAstREPLTool](https://api.python.langchain.com/en/latest/tools/langchain_experimental.tools.python.tool.PythonAstREPLTool.html)
32.30542018038331
To help enforce proper use of our Python tool, we'll use [tool calling](/v0.2/docs/how_to/tool_calling/):
```python
llm_with_tools = llm.bind_tools([tool], tool_choice=tool.name)
response = llm_with_tools.invoke(
    "I have a dataframe 'df' and want to know the correlation between the 'Age' and 'Fare' columns"
)
response
```
AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_SBrK246yUbdnJemXFC8Iod05', 'function': {'arguments': '{"query":"df.corr()[\'Age\'][\'Fare\']"}', 'name': 'python_repl_ast'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 13, 'prompt_tokens': 125, 'total_tokens': 138}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_3b956da36b', 'finish_reason': 'stop', 'logprobs': None}, id='run-1fd332ba-fa72-4351-8182-d464e7368311-0', tool_calls=[{'name': 'python_repl_ast', 'args': {'query': "df.corr()['Age']['Fare']"}, 'id': 'call_SBrK246yUbdnJemXFC8Iod05'}])
response.tool_calls
[{'name': 'python_repl_ast', 'args': {'query': "df.corr()['Age']['Fare']"}, 'id': 'call_SBrK246yUbdnJemXFC8Iod05'}]
We'll add a tools output parser to extract the function call as a dict:
```python
from langchain_core.output_parsers.openai_tools import JsonOutputKeyToolsParser

parser = JsonOutputKeyToolsParser(key_name=tool.name, first_tool_only=True)
(llm_with_tools | parser).invoke(
    "I have a dataframe 'df' and want to know the correlation between the 'Age' and 'Fare' columns"
)
```
**API Reference:**[JsonOutputKeyToolsParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.openai_tools.JsonOutputKeyToolsParser.html)
{'query': "df[['Age', 'Fare']].corr()"}
And combine with a prompt so that we can just specify a question without needing to specify the dataframe info every invocation:
system = f"""You have access to a pandas dataframe `df`. \Here is the output of `df.head().to_markdown()`:
{df.head().to\_markdown()}
Given a user question, write the Python code to answer it. \Return ONLY the valid Python code and nothing else. \Don't assume you have access to any libraries other than built-in Python ones and pandas."""prompt = ChatPromptTemplate.from_messages([("system", system), ("human", "{question}")])code_chain = prompt | llm_with_tools | parsercode_chain.invoke({"question": "What's the correlation between age and fare"})
{'query': "df[['Age', 'Fare']].corr()"}
And lastly we'll add our Python tool so that the generated code is actually executed:
```python
chain = prompt | llm_with_tools | parser | tool
chain.invoke({"question": "What's the correlation between age and fare"})
```
0.11232863699941621
And just like that we have a simple data analysis chain. We can take a peek at the intermediate steps by looking at the LangSmith trace: [https://smith.langchain.com/public/b1309290-7212-49b7-bde2-75b39a32b49a/r](https://smith.langchain.com/public/b1309290-7212-49b7-bde2-75b39a32b49a/r)
We could add an additional LLM call at the end to generate a conversational response, so that we're not just responding with the tool output. For this we'll want to add a chat history `MessagesPlaceholder` to our prompt:
```python
from operator import itemgetter

from langchain_core.messages import ToolMessage
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import MessagesPlaceholder
from langchain_core.runnables import RunnablePassthrough

system = f"""You have access to a pandas dataframe `df`. \
Here is the output of `df.head().to_markdown()`:

{df.head().to_markdown()}

Given a user question, write the Python code to answer it. \
Don't assume you have access to any libraries other than built-in Python ones and pandas.
Respond directly to the question once you have enough information to answer it."""

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system),
        ("human", "{question}"),
        # This MessagesPlaceholder allows us to optionally append an arbitrary number of messages
        # at the end of the prompt using the 'chat_history' arg.
        MessagesPlaceholder("chat_history", optional=True),
    ]
)


def _get_chat_history(x: dict) -> list:
    """Parse the chain output up to this point into a list of chat history messages to insert in the prompt."""
    ai_msg = x["ai_msg"]
    tool_call_id = x["ai_msg"].additional_kwargs["tool_calls"][0]["id"]
    tool_msg = ToolMessage(tool_call_id=tool_call_id, content=str(x["tool_output"]))
    return [ai_msg, tool_msg]


chain = (
    RunnablePassthrough.assign(ai_msg=prompt | llm_with_tools)
    .assign(tool_output=itemgetter("ai_msg") | parser | tool)
    .assign(chat_history=_get_chat_history)
    .assign(response=prompt | llm | StrOutputParser())
    .pick(["tool_output", "response"])
)
```

**API Reference:**[ToolMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.tool.ToolMessage.html) | [StrOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.string.StrOutputParser.html) | [MessagesPlaceholder](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.MessagesPlaceholder.html) | [RunnablePassthrough](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html)
chain.invoke({"question": "What's the correlation between age and fare"})
{'tool_output': 0.11232863699941616, 'response': 'The correlation between age and fare is approximately 0.1123.'}
Here's the LangSmith trace for this run: [https://smith.langchain.com/public/14e38d70-45b1-4b81-8477-9fd2b7c07ea6/r](https://smith.langchain.com/public/14e38d70-45b1-4b81-8477-9fd2b7c07ea6/r)
### Agent[](#agent "Direct link to Agent")
For complex questions it can be helpful for an LLM to be able to iteratively execute code while maintaining the inputs and outputs of its previous executions. This is where Agents come into play. They allow an LLM to decide how many times a tool needs to be invoked and keep track of the executions it's made so far. The [create\_pandas\_dataframe\_agent](https://api.python.langchain.com/en/latest/agents/langchain_experimental.agents.agent_toolkits.pandas.base.create_pandas_dataframe_agent.html) is a built-in agent that makes it easy to work with dataframes:
```python
from langchain_experimental.agents import create_pandas_dataframe_agent

agent = create_pandas_dataframe_agent(llm, df, agent_type="openai-tools", verbose=True)
agent.invoke(
    {
        "input": "What's the correlation between age and fare? is that greater than the correlation between fare and survival?"
    }
)
```
**API Reference:**[create\_pandas\_dataframe\_agent](https://api.python.langchain.com/en/latest/agents/langchain_experimental.agents.agent_toolkits.pandas.base.create_pandas_dataframe_agent.html)
```
> Entering new AgentExecutor chain...
Invoking: `python_repl_ast` with `{'query': "df[['Age', 'Fare']].corr().iloc[0,1]"}`
0.11232863699941621
Invoking: `python_repl_ast` with `{'query': "df[['Fare', 'Survived']].corr().iloc[0,1]"}`
0.2561785496289603
The correlation between Age and Fare is approximately 0.112, and the correlation between Fare and Survival is approximately 0.256.

Therefore, the correlation between Fare and Survival (0.256) is greater than the correlation between Age and Fare (0.112).

> Finished chain.
```
{'input': "What's the correlation between age and fare? is that greater than the correlation between fare and survival?", 'output': 'The correlation between Age and Fare is approximately 0.112, and the correlation between Fare and Survival is approximately 0.256.\n\nTherefore, the correlation between Fare and Survival (0.256) is greater than the correlation between Age and Fare (0.112).'}
Here's the LangSmith trace for this run: [https://smith.langchain.com/public/6a86aee2-4f22-474a-9264-bd4c7283e665/r](https://smith.langchain.com/public/6a86aee2-4f22-474a-9264-bd4c7283e665/r)
### Multiple CSVs[](#multiple-csvs "Direct link to Multiple CSVs")
To handle multiple CSVs (or dataframes) we just need to pass multiple dataframes to our Python tool. Our `create_pandas_dataframe_agent` constructor can do this out of the box: we can pass in a list of dataframes instead of just one. If we're constructing a chain ourselves, we can do something like:
````python
df_1 = df[["Age", "Fare"]]
df_2 = df[["Fare", "Survived"]]

tool = PythonAstREPLTool(locals={"df_1": df_1, "df_2": df_2})
llm_with_tool = llm.bind_tools(tools=[tool], tool_choice=tool.name)

df_template = """```python
{df_name}.head().to_markdown()
>>> {df_head}
```"""
df_context = "\n\n".join(
    df_template.format(df_head=_df.head().to_markdown(), df_name=df_name)
    for _df, df_name in [(df_1, "df_1"), (df_2, "df_2")]
)

system = f"""You have access to a number of pandas dataframes. \
Here is a sample of rows from each dataframe and the python code that was used to generate the sample:

{df_context}

Given a user question about the dataframes, write the Python code to answer it. \
Don't assume you have access to any libraries other than built-in Python ones and pandas. \
Make sure to refer only to the variables mentioned above."""

prompt = ChatPromptTemplate.from_messages([("system", system), ("human", "{question}")])

chain = prompt | llm_with_tool | parser | tool
chain.invoke(
    {
        "question": "return the difference in the correlation between age and fare and the correlation between fare and survival"
    }
)
````
0.14384991262954416
Here's the LangSmith trace for this run: [https://smith.langchain.com/public/cc2a7d7f-7c5a-4e77-a10c-7b5420fcd07f/r](https://smith.langchain.com/public/cc2a7d7f-7c5a-4e77-a10c-7b5420fcd07f/r)
### Sandboxed code execution[](#sandboxed-code-execution "Direct link to Sandboxed code execution")
There are a number of tools like [E2B](/v0.2/docs/integrations/tools/e2b_data_analysis/) and [Bearly](/v0.2/docs/integrations/tools/bearly/) that provide sandboxed environments for Python code execution, to allow for safer code-executing chains and agents.
Next steps[](#next-steps "Direct link to Next steps")
------------------------------------------------------
For more advanced data analysis applications we recommend checking out:
* [SQL tutorial](/v0.2/docs/tutorials/sql_qa/): Many of the challenges of working with SQL db's and CSV's are generic to any structured data type, so it's useful to read the SQL techniques even if you're using Pandas for CSV data analysis.
* [Tool use](/v0.2/docs/how_to/tool_calling/): Guides on general best practices when working with chains and agents that invoke tools
* [Agents](/v0.2/docs/tutorials/agents/): Understand the fundamentals of building LLM agents.
* Integrations: Sandboxed envs like [E2B](/v0.2/docs/integrations/tools/e2b_data_analysis/) and [Bearly](/v0.2/docs/integrations/tools/bearly/), utilities like [SQLDatabase](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.sql_database.SQLDatabase.html#langchain_community.utilities.sql_database.SQLDatabase), related agents like [Spark DataFrame agent](/v0.2/docs/integrations/toolkits/spark/).
https://python.langchain.com/v0.2/docs/how_to/streaming_llm/
How to stream responses from an LLM
===================================
All `LLM`s implement the [Runnable interface](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable), which comes with **default** implementations of standard runnable methods (i.e. `ainvoke`, `batch`, `abatch`, `stream`, `astream`, `astream_events`).
The **default** streaming implementations provide an `Iterator` (or `AsyncIterator` for asynchronous streaming) that yields a single value: the final output from the underlying model provider.
The ability to stream the output token-by-token depends on whether the provider has implemented proper streaming support.
See which [integrations support token-by-token streaming here](/v0.2/docs/integrations/llms/).
note
The **default** implementation does **not** provide support for token-by-token streaming, but it ensures that the model can be swapped in for any other model as it supports the same standard interface.
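To illustrate, here is a small sketch using a fake LLM for demonstration (assuming `FakeListLLM` is exported from `langchain_core.language_models` in your version). With the default implementation, the entire response arrives as a single chunk:

```python
from langchain_core.language_models import FakeListLLM

fake_llm = FakeListLLM(responses=["Sparkling water, oh so clear"])
for chunk in fake_llm.stream("Write me a 1 verse song about sparkling water."):
    print(repr(chunk))  # one chunk containing the whole output
```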
Sync stream[](#sync-stream "Direct link to Sync stream")
---------------------------------------------------------
Below we use a `|` to help visualize the delimiter between tokens.
```python
from langchain_openai import OpenAI

llm = OpenAI(model="gpt-3.5-turbo-instruct", temperature=0, max_tokens=512)
for chunk in llm.stream("Write me a 1 verse song about sparkling water."):
    print(chunk, end="|", flush=True)
```
**API Reference:**[OpenAI](https://api.python.langchain.com/en/latest/llms/langchain_openai.llms.base.OpenAI.html)
|Spark|ling| water|,| oh| so clear||Bubbles dancing|,| without| fear||Refreshing| taste|,| a| pure| delight||Spark|ling| water|,| my| thirst|'s| delight||
Async streaming[](#async-streaming "Direct link to Async streaming")
---------------------------------------------------------------------
Let's see how to stream in an async setting using `astream`.
```python
from langchain_openai import OpenAI

llm = OpenAI(model="gpt-3.5-turbo-instruct", temperature=0, max_tokens=512)
async for chunk in llm.astream("Write me a 1 verse song about sparkling water."):
    print(chunk, end="|", flush=True)
```
**API Reference:**[OpenAI](https://api.python.langchain.com/en/latest/llms/langchain_openai.llms.base.OpenAI.html)
|Spark|ling| water|,| oh| so clear||Bubbles dancing|,| without| fear||Refreshing| taste|,| a| pure| delight||Spark|ling| water|,| my| thirst|'s| delight||
Async event streaming[](#async-event-streaming "Direct link to Async event streaming")
---------------------------------------------------------------------------------------
LLMs also support the standard [astream events](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.astream_events) method.
tip
`astream_events` is most useful when implementing streaming in a larger LLM application that contains multiple steps (e.g., an application that involves an `agent`).
```python
from langchain_openai import OpenAI

llm = OpenAI(model="gpt-3.5-turbo-instruct", temperature=0, max_tokens=512)

idx = 0

async for event in llm.astream_events(
    "Write me a 1 verse song about goldfish on the moon", version="v1"
):
    idx += 1
    if idx >= 5:  # Truncate the output
        print("...Truncated")
        break
    print(event)
```
**API Reference:**[OpenAI](https://api.python.langchain.com/en/latest/llms/langchain_openai.llms.base.OpenAI.html)
https://python.langchain.com/v0.2/docs/how_to/tool_calling/
How to use a model to call tools
================================
Prerequisites
This guide assumes familiarity with the following concepts:
* [Chat models](/v0.2/docs/concepts/#chat-models)
* [LangChain Tools](/v0.2/docs/concepts/#tools)
Tool calling vs function calling
We use the term tool calling interchangeably with function calling. Although function calling is sometimes meant to refer to invocations of a single function, we treat all models as though they can return multiple tool or function calls in each message.
Supported models
You can find a [list of all models that support tool calling](/v0.2/docs/integrations/chat/).
Tool calling allows a chat model to respond to a given prompt by "calling a tool". While the name implies that the model is performing some action, this is actually not the case! The model generates the arguments to a tool, and actually running the tool (or not) is up to the user. For example, if you want to [extract output matching some schema](/v0.2/docs/how_to/structured_output/) from unstructured text, you could give the model an "extraction" tool that takes parameters matching the desired schema, then treat the generated output as your final result.
However, tool calling goes beyond [structured output](/v0.2/docs/how_to/structured_output/) since you can pass responses from called tools back to the model to create longer interactions. For instance, given a search engine tool, an LLM might handle a query by first issuing a call to the search engine with arguments. The system calling the LLM can receive the tool call, execute it, and return the output to the LLM to inform its response. LangChain includes a suite of [built-in tools](/v0.2/docs/integrations/tools/) and supports several methods for defining your own [custom tools](/v0.2/docs/how_to/custom_tools/).
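A hedged sketch of that round trip is below. The `get_weather` tool and the question are illustrative assumptions, and `llm` is assumed to be any tool-calling-capable chat model initialized as in the snippets later in this guide: the model proposes a tool call, the application executes it, and the result is passed back as a `ToolMessage` so the model can produce a final answer.

```python
from langchain_core.messages import HumanMessage, ToolMessage
from langchain_core.tools import tool


@tool
def get_weather(city: str) -> str:
    """Return a canned weather report for a city (illustrative stand-in for a real API)."""
    return f"It is sunny in {city}."


llm_with_weather = llm.bind_tools([get_weather])  # `llm` is any tool-calling chat model (see below)
messages = [HumanMessage("What's the weather in Paris?")]

ai_msg = llm_with_weather.invoke(messages)  # the model proposes a tool call; nothing has run yet
messages.append(ai_msg)

for tool_call in ai_msg.tool_calls:
    output = get_weather.invoke(tool_call["args"])  # we execute the tool ourselves
    messages.append(ToolMessage(content=output, tool_call_id=tool_call["id"]))

print(llm_with_weather.invoke(messages).content)  # the model answers using the tool's output
```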
Tool calling is not universal, but many popular LLM providers, including [Anthropic](https://www.anthropic.com/), [Cohere](https://cohere.com/), [Google](https://cloud.google.com/vertex-ai), [Mistral](https://mistral.ai/), [OpenAI](https://openai.com/), and others, support variants of a tool calling feature.
LangChain implements standard interfaces for defining tools, passing them to LLMs, and representing tool calls. This guide and the other How-to pages in the Tool section will show you how to use tools with LangChain.
Passing tools to chat models[](#passing-tools-to-chat-models "Direct link to Passing tools to chat models")
------------------------------------------------------------------------------------------------------------
Chat models that support tool calling features implement a `.bind_tools` method, which receives a list of LangChain [tool objects](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.BaseTool.html#langchain_core.tools.BaseTool) and binds them to the chat model in its expected format. Subsequent invocations of the chat model will include tool schemas in its calls to the LLM.
For example, we can define the schema for custom tools using the `@tool` decorator on Python functions:
from langchain_core.tools import tool


@tool
def add(a: int, b: int) -> int:
    """Adds a and b."""
    return a + b


@tool
def multiply(a: int, b: int) -> int:
    """Multiplies a and b."""
    return a * b


tools = [add, multiply]
**API Reference:**[tool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.tool.html)
Or below, we define the schema using [Pydantic](https://docs.pydantic.dev):
from langchain_core.pydantic_v1 import BaseModel, Field


# Note that the docstrings here are crucial, as they will be passed along
# to the model along with the class name.
class Add(BaseModel):
    """Add two integers together."""

    a: int = Field(..., description="First integer")
    b: int = Field(..., description="Second integer")


class Multiply(BaseModel):
    """Multiply two integers together."""

    a: int = Field(..., description="First integer")
    b: int = Field(..., description="Second integer")


tools = [Add, Multiply]
We can bind them to chat models as follows:
* OpenAI
* Anthropic
* Azure
* Google
* Cohere
* FireworksAI
* Groq
* MistralAI
* TogetherAI
pip install -qU langchain-openai
import getpassimport osos.environ["OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAIllm = ChatOpenAI(model="gpt-3.5-turbo-0125")
pip install -qU langchain-anthropic
import getpassimport osos.environ["ANTHROPIC_API_KEY"] = getpass.getpass()from langchain_anthropic import ChatAnthropicllm = ChatAnthropic(model="claude-3-sonnet-20240229")
pip install -qU langchain-openai
import getpassimport osos.environ["AZURE_OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import AzureChatOpenAIllm = AzureChatOpenAI( azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"], openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],)
pip install -qU langchain-google-vertexai
import getpassimport osos.environ["GOOGLE_API_KEY"] = getpass.getpass()from langchain_google_vertexai import ChatVertexAIllm = ChatVertexAI(model="gemini-pro")
pip install -qU langchain-cohere
import getpassimport osos.environ["COHERE_API_KEY"] = getpass.getpass()from langchain_cohere import ChatCoherellm = ChatCohere(model="command-r")
pip install -qU langchain-fireworks
import getpassimport osos.environ["FIREWORKS_API_KEY"] = getpass.getpass()from langchain_fireworks import ChatFireworksllm = ChatFireworks(model="accounts/fireworks/models/firefunction-v1", temperature=0)
pip install -qU langchain-groq
import getpassimport osos.environ["GROQ_API_KEY"] = getpass.getpass()from langchain_groq import ChatGroqllm = ChatGroq(model="llama3-8b-8192")
pip install -qU langchain-mistralai
import getpassimport osos.environ["MISTRAL_API_KEY"] = getpass.getpass()from langchain_mistralai import ChatMistralAIllm = ChatMistralAI(model="mistral-large-latest")
pip install -qU langchain-openai
import getpassimport osos.environ["TOGETHER_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAIllm = ChatOpenAI( base_url="https://api.together.xyz/v1", api_key=os.environ["TOGETHER_API_KEY"], model="mistralai/Mixtral-8x7B-Instruct-v0.1",)
We'll use the `.bind_tools()` method to handle converting `Add` and `Multiply` to the proper format for the model, and then bind them (i.e., pass them in each time the model is invoked).
llm_with_tools = llm.bind_tools(tools)
You can also force the model to make a tool call even when the prompt doesn't obviously require one. See the docs for [`bind_tools`](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.BaseChatOpenAI.html#langchain_openai.chat_models.base.BaseChatOpenAI.bind_tools) to learn about all the ways to customize how your LLM selects tools.
Tool calls[](#tool-calls "Direct link to Tool calls")
------------------------------------------------------
If tool calls are included in an LLM response, they are attached to the corresponding [message](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html#langchain_core.messages.ai.AIMessage) or [message chunk](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessageChunk.html#langchain_core.messages.ai.AIMessageChunk) as a list of [tool call](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.tool.ToolCall.html#langchain_core.messages.tool.ToolCall) objects in the `.tool_calls` attribute.
Note that chat models can call multiple tools at once.
A `ToolCall` is a typed dict that includes a tool name, dict of argument values, and (optionally) an identifier. Messages with no tool calls default to an empty list for this attribute.
query = "What is 3 * 12? Also, what is 11 + 49?"llm_with_tools.invoke(query).tool_calls
[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_KquHA7mSbgtAkpkmRPaFnJKa'}, {'name': 'Add', 'args': {'a': 11, 'b': 49}, 'id': 'call_Fl0hQi4IBTzlpaJYlM5kPQhE'}]
The `.tool_calls` attribute should contain valid tool calls. Note that on occasion, model providers may output malformed tool calls (e.g., arguments that are not valid JSON). When parsing fails in these cases, instances of [InvalidToolCall](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.tool.InvalidToolCall.html#langchain_core.messages.tool.InvalidToolCall) are populated in the `.invalid_tool_calls` attribute. An `InvalidToolCall` can have a name, string arguments, identifier, and error message.
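As a small illustrative sketch (not from the original page), you can check this attribute before acting on anything, alongside the valid `.tool_calls`:

```python
# Illustrative sketch: inspect both valid and malformed tool calls before acting on them.
ai_msg = llm_with_tools.invoke(query)

for bad_call in ai_msg.invalid_tool_calls:
    # Each InvalidToolCall is a dict that may include "name", "args", "id", and "error".
    print(f"Skipping malformed call to {bad_call.get('name')}: {bad_call.get('error')}")

for call in ai_msg.tool_calls:
    print(f"Valid call: {call['name']}({call['args']})")
```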
If desired, [output parsers](/v0.2/docs/how_to/#output-parsers) can further process the output. For example, we can convert back to the original Pydantic class:
from langchain_core.output_parsers.openai_tools import PydanticToolsParserchain = llm_with_tools | PydanticToolsParser(tools=[Multiply, Add])chain.invoke(query)
**API Reference:**[PydanticToolsParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.openai_tools.PydanticToolsParser.html)
[Multiply(a=3, b=12), Add(a=11, b=49)]
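To close the loop described earlier, the tool calls need to be executed and their results passed back to the model. The dedicated how-to linked under "Next steps" covers this properly; the snippet below is only a rough sketch that maps the `Add`/`Multiply` tool names back to the `add` and `multiply` functions defined above (the `tool_map` helper is illustrative, not part of the original page):

```python
from langchain_core.messages import HumanMessage, ToolMessage

# Rough sketch of the tool-execution loop (see the dedicated how-to for details).
messages = [HumanMessage(query)]
ai_msg = llm_with_tools.invoke(messages)
messages.append(ai_msg)

# Map the tool names the model used back to the callable @tool functions defined above.
tool_map = {"Add": add, "Multiply": multiply}
for tool_call in ai_msg.tool_calls:
    selected_tool = tool_map[tool_call["name"]]
    result = selected_tool.invoke(tool_call["args"])
    messages.append(ToolMessage(content=str(result), tool_call_id=tool_call["id"]))

# The model can now answer using the tool results.
final_response = llm_with_tools.invoke(messages)
print(final_response.content)
```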
Next steps[](#next-steps "Direct link to Next steps")
------------------------------------------------------
Now you've learned how to bind tool schemas to a chat model and to call those tools. Next, you can learn more about how to use tools:
* Few-shot prompting [with tools](/v0.2/docs/how_to/tools_few_shot/)
* Stream [tool calls](/v0.2/docs/how_to/tool_streaming/)
* Bind [model-specific tools](/v0.2/docs/how_to/tools_model_specific/)
* Pass [runtime values to tools](/v0.2/docs/how_to/tool_runtime/)
* Pass [tool results back to model](/v0.2/docs/how_to/tool_results_pass_to_model/)
You can also check out some more specific uses of tool calling:
* Building [tool-using chains and agents](/v0.2/docs/how_to/#tools)
* Getting [structured outputs](/v0.2/docs/how_to/structured_output/) from models
https://python.langchain.com/v0.2/docs/how_to/time_weighted_vectorstore/
How to use a time-weighted vector store retriever
=================================================
This retriever uses a combination of semantic similarity and a time decay.
The algorithm for scoring them is:
semantic_similarity + (1.0 - decay_rate) ^ hours_passed
Notably, `hours_passed` refers to the hours passed since the object in the retriever **was last accessed**, not since it was created. This means that frequently accessed objects remain "fresh".
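To make the formula concrete, here is a purely illustrative sketch (not part of the retriever's API) of how the combined score behaves:

```python
# Illustrative only: how the combined score behaves as decay_rate and hours_passed vary.
def combined_score(semantic_similarity: float, decay_rate: float, hours_passed: float) -> float:
    return semantic_similarity + (1.0 - decay_rate) ** hours_passed


# Low decay rate: a document last accessed a day ago keeps most of its recency bonus.
print(combined_score(0.8, 0.01, 24))   # ~1.59
# High decay rate: the recency term is effectively 0 after a day.
print(combined_score(0.8, 0.999, 24))  # ~0.8
```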
from datetime import datetime, timedeltaimport faissfrom langchain.retrievers import TimeWeightedVectorStoreRetrieverfrom langchain_community.docstore import InMemoryDocstorefrom langchain_community.vectorstores import FAISSfrom langchain_core.documents import Documentfrom langchain_openai import OpenAIEmbeddings
**API Reference:**[TimeWeightedVectorStoreRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.time_weighted_retriever.TimeWeightedVectorStoreRetriever.html) | [InMemoryDocstore](https://api.python.langchain.com/en/latest/docstore/langchain_community.docstore.in_memory.InMemoryDocstore.html) | [FAISS](https://api.python.langchain.com/en/latest/vectorstores/langchain_community.vectorstores.faiss.FAISS.html) | [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) | [OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html)
Low decay rate[](#low-decay-rate "Direct link to Low decay rate")
------------------------------------------------------------------
A low `decay rate` (here, to be extreme, we set it close to 0) means memories will be "remembered" for longer. A `decay rate` of 0 means memories are never forgotten, making this retriever equivalent to a plain vector lookup.
# Define your embedding modelembeddings_model = OpenAIEmbeddings()# Initialize the vectorstore as emptyembedding_size = 1536index = faiss.IndexFlatL2(embedding_size)vectorstore = FAISS(embeddings_model, index, InMemoryDocstore({}), {})retriever = TimeWeightedVectorStoreRetriever( vectorstore=vectorstore, decay_rate=0.0000000000000000000000001, k=1)
yesterday = datetime.now() - timedelta(days=1)retriever.add_documents( [Document(page_content="hello world", metadata={"last_accessed_at": yesterday})])retriever.add_documents([Document(page_content="hello foo")])
['c3dcf671-3c0a-4273-9334-c4a913076bfa']
# "Hello World" is returned first because it is most salient, and the decay rate is close to 0., meaning it's still recent enoughretriever.get_relevant_documents("hello world")
[Document(page_content='hello world', metadata={'last_accessed_at': datetime.datetime(2023, 12, 27, 15, 30, 18, 457125), 'created_at': datetime.datetime(2023, 12, 27, 15, 30, 8, 442662), 'buffer_idx': 0})]
High decay rate[](#high-decay-rate "Direct link to High decay rate")
---------------------------------------------------------------------
With a high `decay rate` (e.g., several 9's), the `recency score` quickly goes to 0! If you set this all the way to 1, `recency` is 0 for all objects, once again making this equivalent to a vector lookup.
# Define your embedding modelembeddings_model = OpenAIEmbeddings()# Initialize the vectorstore as emptyembedding_size = 1536index = faiss.IndexFlatL2(embedding_size)vectorstore = FAISS(embeddings_model, index, InMemoryDocstore({}), {})retriever = TimeWeightedVectorStoreRetriever( vectorstore=vectorstore, decay_rate=0.999, k=1)
yesterday = datetime.now() - timedelta(days=1)retriever.add_documents( [Document(page_content="hello world", metadata={"last_accessed_at": yesterday})])retriever.add_documents([Document(page_content="hello foo")])
['eb1c4c86-01a8-40e3-8393-9a927295a950']
# "Hello Foo" is returned first because "hello world" is mostly forgottenretriever.get_relevant_documents("hello world")
[Document(page_content='hello foo', metadata={'last_accessed_at': datetime.datetime(2023, 12, 27, 15, 30, 50, 57185), 'created_at': datetime.datetime(2023, 12, 27, 15, 30, 44, 720490), 'buffer_idx': 1})]
Virtual time[](#virtual-time "Direct link to Virtual time")
------------------------------------------------------------
Using some utils in LangChain, you can mock out the time component.
import datetimefrom langchain_core.utils import mock_now
**API Reference:**[mock\_now](https://api.python.langchain.com/en/latest/utils/langchain_core.utils.utils.mock_now.html)
# Notice the last access time is that date timewith mock_now(datetime.datetime(2024, 2, 3, 10, 11)): print(retriever.get_relevant_documents("hello world"))
[Document(page_content='hello world', metadata={'last_accessed_at': MockDateTime(2024, 2, 3, 10, 11), 'created_at': datetime.datetime(2023, 12, 27, 15, 30, 44, 532941), 'buffer_idx': 0})]
https://python.langchain.com/v0.2/docs/how_to/sql_prompting/
How to better prompt when doing SQL question-answering
======================================================
In this guide we'll go over prompting strategies to improve SQL query generation using [create\_sql\_query\_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.sql_database.query.create_sql_query_chain.html). We'll largely focus on methods for getting relevant database-specific information in your prompt.
We will cover:
* How the dialect of the LangChain [SQLDatabase](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.sql_database.SQLDatabase.html) impacts the prompt of the chain;
* How to format schema information into the prompt using `SQLDatabase.get_context`;
* How to build and select few-shot examples to assist the model.
Setup[](#setup "Direct link to Setup")
---------------------------------------
First, get required packages and set environment variables:
%pip install --upgrade --quiet langchain langchain-community langchain-experimental langchain-openai
# Uncomment the below to use LangSmith. Not required.# import os# os.environ["LANGCHAIN_API_KEY"] = getpass.getpass()# os.environ["LANGCHAIN_TRACING_V2"] = "true"
The below example will use a SQLite connection with Chinook database. Follow [these installation steps](https://database.guide/2-sample-databases-sqlite/) to create `Chinook.db` in the same directory as this notebook:
* Save [this file](https://raw.githubusercontent.com/lerocha/chinook-database/master/ChinookDatabase/DataSources/Chinook_Sqlite.sql) as `Chinook_Sqlite.sql`
* Run `sqlite3 Chinook.db`
* Run `.read Chinook_Sqlite.sql`
* Test `SELECT * FROM Artist LIMIT 10;`
Now, `Chinook.db` is in our directory and we can interface with it using the SQLAlchemy-driven `SQLDatabase` class:
from langchain_community.utilities import SQLDatabasedb = SQLDatabase.from_uri("sqlite:///Chinook.db", sample_rows_in_table_info=3)print(db.dialect)print(db.get_usable_table_names())print(db.run("SELECT * FROM Artist LIMIT 10;"))
**API Reference:**[SQLDatabase](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.sql_database.SQLDatabase.html)
sqlite['Album', 'Artist', 'Customer', 'Employee', 'Genre', 'Invoice', 'InvoiceLine', 'MediaType', 'Playlist', 'PlaylistTrack', 'Track'][(1, 'AC/DC'), (2, 'Accept'), (3, 'Aerosmith'), (4, 'Alanis Morissette'), (5, 'Alice In Chains'), (6, 'Antônio Carlos Jobim'), (7, 'Apocalyptica'), (8, 'Audioslave'), (9, 'BackBeat'), (10, 'Billy Cobham')]
Dialect-specific prompting[](#dialect-specific-prompting "Direct link to Dialect-specific prompting")
------------------------------------------------------------------------------------------------------
One of the simplest things we can do is make our prompt specific to the SQL dialect we're using. When using the built-in [create\_sql\_query\_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.sql_database.query.create_sql_query_chain.html) and [SQLDatabase](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.sql_database.SQLDatabase.html), this is handled for you for any of the following dialects:
from langchain.chains.sql_database.prompt import SQL_PROMPTSlist(SQL_PROMPTS)
['crate', 'duckdb', 'googlesql', 'mssql', 'mysql', 'mariadb', 'oracle', 'postgresql', 'sqlite', 'clickhouse', 'prestodb']
For example, using our current DB we can see that we'll get a SQLite-specific prompt.
* OpenAI
* Anthropic
* Azure
* Google
* Cohere
* FireworksAI
* Groq
* MistralAI
* TogetherAI
pip install -qU langchain-openai
import getpassimport osos.environ["OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAIllm = ChatOpenAI(model="gpt-3.5-turbo-0125")
pip install -qU langchain-anthropic
import getpassimport osos.environ["ANTHROPIC_API_KEY"] = getpass.getpass()from langchain_anthropic import ChatAnthropicllm = ChatAnthropic(model="claude-3-sonnet-20240229")
pip install -qU langchain-openai
import getpassimport osos.environ["AZURE_OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import AzureChatOpenAIllm = AzureChatOpenAI( azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"], openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],)
pip install -qU langchain-google-vertexai
import getpassimport osos.environ["GOOGLE_API_KEY"] = getpass.getpass()from langchain_google_vertexai import ChatVertexAIllm = ChatVertexAI(model="gemini-pro")
pip install -qU langchain-cohere
import getpassimport osos.environ["COHERE_API_KEY"] = getpass.getpass()from langchain_cohere import ChatCoherellm = ChatCohere(model="command-r")
pip install -qU langchain-fireworks
import getpassimport osos.environ["FIREWORKS_API_KEY"] = getpass.getpass()from langchain_fireworks import ChatFireworksllm = ChatFireworks(model="accounts/fireworks/models/mixtral-8x7b-instruct")
pip install -qU langchain-groq
import getpassimport osos.environ["GROQ_API_KEY"] = getpass.getpass()from langchain_groq import ChatGroqllm = ChatGroq(model="llama3-8b-8192")
pip install -qU langchain-mistralai
import getpassimport osos.environ["MISTRAL_API_KEY"] = getpass.getpass()from langchain_mistralai import ChatMistralAIllm = ChatMistralAI(model="mistral-large-latest")
pip install -qU langchain-openai
import getpassimport osos.environ["TOGETHER_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAIllm = ChatOpenAI( base_url="https://api.together.xyz/v1", api_key=os.environ["TOGETHER_API_KEY"], model="mistralai/Mixtral-8x7B-Instruct-v0.1",)
from langchain.chains import create_sql_query_chainchain = create_sql_query_chain(llm, db)chain.get_prompts()[0].pretty_print()
**API Reference:**[create\_sql\_query\_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.sql_database.query.create_sql_query_chain.html)
You are a SQLite expert. Given an input question, first create a syntactically correct SQLite query to run, then look at the results of the query and return the answer to the input question.Unless the user specifies in the question a specific number of examples to obtain, query for at most 5 results using the LIMIT clause as per SQLite. You can order the results to return the most informative data in the database.Never query for all columns from a table. You must query only the columns that are needed to answer the question. Wrap each column name in double quotes (") to denote them as delimited identifiers.Pay attention to use only the column names you can see in the tables below. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table.Pay attention to use date('now') function to get the current date, if the question involves "today".Use the following format:Question: Question hereSQLQuery: SQL Query to runSQLResult: Result of the SQLQueryAnswer: Final answer hereOnly use the following tables:[33;1m[1;3m{table_info}[0mQuestion: [33;1m[1;3m{input}[0m
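If you want to inspect the default template for a different dialect directly, you can index the `SQL_PROMPTS` mapping imported above by dialect name. A small sketch, assuming dict-style access as suggested by `list(SQL_PROMPTS)`:

```python
# Hedged sketch: look at another dialect's default prompt without building a chain for it.
from langchain.chains.sql_database.prompt import SQL_PROMPTS

SQL_PROMPTS["postgresql"].pretty_print()
```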
Table definitions and example rows[](#table-definitions-and-example-rows "Direct link to Table definitions and example rows")
------------------------------------------------------------------------------------------------------------------------------
In most SQL chains, we'll need to feed the model at least part of the database schema. Without this it won't be able to write valid queries. Our database comes with some convenience methods to give us the relevant context. Specifically, we can get the table names, their schemas, and a sample of rows from each table.
Here we will use `SQLDatabase.get_context`, which provides available tables and their schemas:
context = db.get_context()print(list(context))print(context["table_info"])
['table_info', 'table_names']CREATE TABLE "Album" ( "AlbumId" INTEGER NOT NULL, "Title" NVARCHAR(160) NOT NULL, "ArtistId" INTEGER NOT NULL, PRIMARY KEY ("AlbumId"), FOREIGN KEY("ArtistId") REFERENCES "Artist" ("ArtistId"))/*3 rows from Album table:AlbumId Title ArtistId1 For Those About To Rock We Salute You 12 Balls to the Wall 23 Restless and Wild 2*/CREATE TABLE "Artist" ( "ArtistId" INTEGER NOT NULL, "Name" NVARCHAR(120), PRIMARY KEY ("ArtistId"))/*3 rows from Artist table:ArtistId Name1 AC/DC2 Accept3 Aerosmith*/CREATE TABLE "Customer" ( "CustomerId" INTEGER NOT NULL, "FirstName" NVARCHAR(40) NOT NULL, "LastName" NVARCHAR(20) NOT NULL, "Company" NVARCHAR(80), "Address" NVARCHAR(70), "City" NVARCHAR(40), "State" NVARCHAR(40), "Country" NVARCHAR(40), "PostalCode" NVARCHAR(10), "Phone" NVARCHAR(24), "Fax" NVARCHAR(24), "Email" NVARCHAR(60) NOT NULL, "SupportRepId" INTEGER, PRIMARY KEY ("CustomerId"), FOREIGN KEY("SupportRepId") REFERENCES "Employee" ("EmployeeId"))/*3 rows from Customer table:CustomerId FirstName LastName Company Address City State Country PostalCode Phone Fax Email SupportRepId1 Luís Gonçalves Embraer - Empresa Brasileira de Aeronáutica S.A. Av. Brigadeiro Faria Lima, 2170 São José dos Campos SP Brazil 12227-000 +55 (12) 3923-5555 +55 (12) 3923-5566 [email protected] 32 Leonie Köhler None Theodor-Heuss-Straße 34 Stuttgart None Germany 70174 +49 0711 2842222 None [email protected] 53 François Tremblay None 1498 rue Bélanger Montréal QC Canada H2G 1A7 +1 (514) 721-4711 None [email protected] 3*/CREATE TABLE "Employee" ( "EmployeeId" INTEGER NOT NULL, "LastName" NVARCHAR(20) NOT NULL, "FirstName" NVARCHAR(20) NOT NULL, "Title" NVARCHAR(30), "ReportsTo" INTEGER, "BirthDate" DATETIME, "HireDate" DATETIME, "Address" NVARCHAR(70), "City" NVARCHAR(40), "State" NVARCHAR(40), "Country" NVARCHAR(40), "PostalCode" NVARCHAR(10), "Phone" NVARCHAR(24), "Fax" NVARCHAR(24), "Email" NVARCHAR(60), PRIMARY KEY ("EmployeeId"), FOREIGN KEY("ReportsTo") REFERENCES "Employee" ("EmployeeId"))/*3 rows from Employee table:EmployeeId LastName FirstName Title ReportsTo BirthDate HireDate Address City State Country PostalCode Phone Fax Email1 Adams Andrew General Manager None 1962-02-18 00:00:00 2002-08-14 00:00:00 11120 Jasper Ave NW Edmonton AB Canada T5K 2N1 +1 (780) 428-9482 +1 (780) 428-3457 [email protected] Edwards Nancy Sales Manager 1 1958-12-08 00:00:00 2002-05-01 00:00:00 825 8 Ave SW Calgary AB Canada T2P 2T3 +1 (403) 262-3443 +1 (403) 262-3322 [email protected] Peacock Jane Sales Support Agent 2 1973-08-29 00:00:00 2002-04-01 00:00:00 1111 6 Ave SW Calgary AB Canada T2P 5M5 +1 (403) 262-3443 +1 (403) 262-6712 [email protected]*/CREATE TABLE "Genre" ( "GenreId" INTEGER NOT NULL, "Name" NVARCHAR(120), PRIMARY KEY ("GenreId"))/*3 rows from Genre table:GenreId Name1 Rock2 Jazz3 Metal*/CREATE TABLE "Invoice" ( "InvoiceId" INTEGER NOT NULL, "CustomerId" INTEGER NOT NULL, "InvoiceDate" DATETIME NOT NULL, "BillingAddress" NVARCHAR(70), "BillingCity" NVARCHAR(40), "BillingState" NVARCHAR(40), "BillingCountry" NVARCHAR(40), "BillingPostalCode" NVARCHAR(10), "Total" NUMERIC(10, 2) NOT NULL, PRIMARY KEY ("InvoiceId"), FOREIGN KEY("CustomerId") REFERENCES "Customer" ("CustomerId"))/*3 rows from Invoice table:InvoiceId CustomerId InvoiceDate BillingAddress BillingCity BillingState BillingCountry BillingPostalCode Total1 2 2021-01-01 00:00:00 Theodor-Heuss-Straße 34 Stuttgart None Germany 70174 1.982 4 2021-01-02 00:00:00 Ullevålsveien 14 Oslo None Norway 0171 3.963 8 2021-01-03 00:00:00 Grétrystraat 
63 Brussels None Belgium 1000 5.94*/CREATE TABLE "InvoiceLine" ( "InvoiceLineId" INTEGER NOT NULL, "InvoiceId" INTEGER NOT NULL, "TrackId" INTEGER NOT NULL, "UnitPrice" NUMERIC(10, 2) NOT NULL, "Quantity" INTEGER NOT NULL, PRIMARY KEY ("InvoiceLineId"), FOREIGN KEY("TrackId") REFERENCES "Track" ("TrackId"), FOREIGN KEY("InvoiceId") REFERENCES "Invoice" ("InvoiceId"))/*3 rows from InvoiceLine table:InvoiceLineId InvoiceId TrackId UnitPrice Quantity1 1 2 0.99 12 1 4 0.99 13 2 6 0.99 1*/CREATE TABLE "MediaType" ( "MediaTypeId" INTEGER NOT NULL, "Name" NVARCHAR(120), PRIMARY KEY ("MediaTypeId"))/*3 rows from MediaType table:MediaTypeId Name1 MPEG audio file2 Protected AAC audio file3 Protected MPEG-4 video file*/CREATE TABLE "Playlist" ( "PlaylistId" INTEGER NOT NULL, "Name" NVARCHAR(120), PRIMARY KEY ("PlaylistId"))/*3 rows from Playlist table:PlaylistId Name1 Music2 Movies3 TV Shows*/CREATE TABLE "PlaylistTrack" ( "PlaylistId" INTEGER NOT NULL, "TrackId" INTEGER NOT NULL, PRIMARY KEY ("PlaylistId", "TrackId"), FOREIGN KEY("TrackId") REFERENCES "Track" ("TrackId"), FOREIGN KEY("PlaylistId") REFERENCES "Playlist" ("PlaylistId"))/*3 rows from PlaylistTrack table:PlaylistId TrackId1 34021 33891 3390*/CREATE TABLE "Track" ( "TrackId" INTEGER NOT NULL, "Name" NVARCHAR(200) NOT NULL, "AlbumId" INTEGER, "MediaTypeId" INTEGER NOT NULL, "GenreId" INTEGER, "Composer" NVARCHAR(220), "Milliseconds" INTEGER NOT NULL, "Bytes" INTEGER, "UnitPrice" NUMERIC(10, 2) NOT NULL, PRIMARY KEY ("TrackId"), FOREIGN KEY("MediaTypeId") REFERENCES "MediaType" ("MediaTypeId"), FOREIGN KEY("GenreId") REFERENCES "Genre" ("GenreId"), FOREIGN KEY("AlbumId") REFERENCES "Album" ("AlbumId"))/*3 rows from Track table:TrackId Name AlbumId MediaTypeId GenreId Composer Milliseconds Bytes UnitPrice1 For Those About To Rock (We Salute You) 1 1 1 Angus Young, Malcolm Young, Brian Johnson 343719 11170334 0.992 Balls to the Wall 2 2 1 U. Dirkschneider, W. Hoffmann, H. Frank, P. Baltes, S. Kaufmann, G. Hoffmann 342562 5510424 0.993 Fast As a Shark 3 2 1 F. Baltes, S. Kaufman, U. Dirkscneider & W. Hoffman 230619 3990994 0.99*/
When we don't have too many tables, or tables that are too wide, we can simply insert the entirety of this information into our prompt:
prompt_with_context = chain.get_prompts()[0].partial(table_info=context["table_info"])print(prompt_with_context.pretty_repr()[:1500])
You are a SQLite expert. Given an input question, first create a syntactically correct SQLite query to run, then look at the results of the query and return the answer to the input question.Unless the user specifies in the question a specific number of examples to obtain, query for at most 5 results using the LIMIT clause as per SQLite. You can order the results to return the most informative data in the database.Never query for all columns from a table. You must query only the columns that are needed to answer the question. Wrap each column name in double quotes (") to denote them as delimited identifiers.Pay attention to use only the column names you can see in the tables below. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table.Pay attention to use date('now') function to get the current date, if the question involves "today".Use the following format:Question: Question hereSQLQuery: SQL Query to runSQLResult: Result of the SQLQueryAnswer: Final answer hereOnly use the following tables:CREATE TABLE "Album" ( "AlbumId" INTEGER NOT NULL, "Title" NVARCHAR(160) NOT NULL, "ArtistId" INTEGER NOT NULL, PRIMARY KEY ("AlbumId"), FOREIGN KEY("ArtistId") REFERENCES "Artist" ("ArtistId"))/*3 rows from Album table:AlbumId Title ArtistId1 For Those About To Rock We Salute You 12 Balls to the Wall 23 Restless and Wild 2*/CREATE TABLE "Artist" ( "ArtistId" INTEGER NOT NULL, "Name" NVARCHAR(120)
When we do have database schemas that are too large to fit into our model's context window, we'll need to come up with ways of inserting only the relevant table definitions into the prompt based on the user input. For more on this head to the [Many tables, wide tables, high-cardinality feature](/v0.2/docs/how_to/sql_large_db/) guide.
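One lightweight option, sketched below under the assumption that you already know which tables matter (the `relevant_tables` list is hypothetical), is to render only a subset of schemas with `SQLDatabase.get_table_info`:

```python
# Hedged sketch: render schemas for a hand-picked subset of tables only.
relevant_tables = ["Artist", "Album"]  # hypothetical subset for an artist-related question
partial_info = db.get_table_info(table_names=relevant_tables)
prompt_with_context = chain.get_prompts()[0].partial(table_info=partial_info)
```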
Few-shot examples[](#few-shot-examples "Direct link to Few-shot examples")
---------------------------------------------------------------------------
Including examples of natural language questions being converted to valid SQL queries against our database in the prompt will often improve model performance, especially for complex queries.
Let's say we have the following examples:
examples = [ {"input": "List all artists.", "query": "SELECT * FROM Artist;"}, { "input": "Find all albums for the artist 'AC/DC'.", "query": "SELECT * FROM Album WHERE ArtistId = (SELECT ArtistId FROM Artist WHERE Name = 'AC/DC');", }, { "input": "List all tracks in the 'Rock' genre.", "query": "SELECT * FROM Track WHERE GenreId = (SELECT GenreId FROM Genre WHERE Name = 'Rock');", }, { "input": "Find the total duration of all tracks.", "query": "SELECT SUM(Milliseconds) FROM Track;", }, { "input": "List all customers from Canada.", "query": "SELECT * FROM Customer WHERE Country = 'Canada';", }, { "input": "How many tracks are there in the album with ID 5?", "query": "SELECT COUNT(*) FROM Track WHERE AlbumId = 5;", }, { "input": "Find the total number of invoices.", "query": "SELECT COUNT(*) FROM Invoice;", }, { "input": "List all tracks that are longer than 5 minutes.", "query": "SELECT * FROM Track WHERE Milliseconds > 300000;", }, { "input": "Who are the top 5 customers by total purchase?", "query": "SELECT CustomerId, SUM(Total) AS TotalPurchase FROM Invoice GROUP BY CustomerId ORDER BY TotalPurchase DESC LIMIT 5;", }, { "input": "Which albums are from the year 2000?", "query": "SELECT * FROM Album WHERE strftime('%Y', ReleaseDate) = '2000';", }, { "input": "How many employees are there", "query": 'SELECT COUNT(*) FROM "Employee"', },]
We can create a few-shot prompt with them like so:
from langchain_core.prompts import FewShotPromptTemplate, PromptTemplateexample_prompt = PromptTemplate.from_template("User input: {input}\nSQL query: {query}")prompt = FewShotPromptTemplate( examples=examples[:5], example_prompt=example_prompt, prefix="You are a SQLite expert. Given an input question, create a syntactically correct SQLite query to run. Unless otherwise specificed, do not return more than {top_k} rows.\n\nHere is the relevant table info: {table_info}\n\nBelow are a number of examples of questions and their corresponding SQL queries.", suffix="User input: {input}\nSQL query: ", input_variables=["input", "top_k", "table_info"],)
**API Reference:**[FewShotPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.few_shot.FewShotPromptTemplate.html) | [PromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.prompt.PromptTemplate.html)
print(prompt.format(input="How many artists are there?", top_k=3, table_info="foo"))
You are a SQLite expert. Given an input question, create a syntactically correct SQLite query to run. Unless otherwise specificed, do not return more than 3 rows.Here is the relevant table info: fooBelow are a number of examples of questions and their corresponding SQL queries.User input: List all artists.SQL query: SELECT * FROM Artist;User input: Find all albums for the artist 'AC/DC'.SQL query: SELECT * FROM Album WHERE ArtistId = (SELECT ArtistId FROM Artist WHERE Name = 'AC/DC');User input: List all tracks in the 'Rock' genre.SQL query: SELECT * FROM Track WHERE GenreId = (SELECT GenreId FROM Genre WHERE Name = 'Rock');User input: Find the total duration of all tracks.SQL query: SELECT SUM(Milliseconds) FROM Track;User input: List all customers from Canada.SQL query: SELECT * FROM Customer WHERE Country = 'Canada';User input: How many artists are there?SQL query:
Dynamic few-shot examples[](#dynamic-few-shot-examples "Direct link to Dynamic few-shot examples")
---------------------------------------------------------------------------------------------------
If we have enough examples, we may want to only include the most relevant ones in the prompt, either because they don't fit in the model's context window or because the long tail of examples distracts the model. And specifically, given any input we want to include the examples most relevant to that input.
We can do just this using an ExampleSelector. In this case we'll use a [SemanticSimilarityExampleSelector](https://api.python.langchain.com/en/latest/example_selectors/langchain_core.example_selectors.semantic_similarity.SemanticSimilarityExampleSelector.html), which will store the examples in the vector database of our choosing. At runtime it will perform a similarity search between the input and our examples, and return the most semantically similar ones.
We default to OpenAI embeddings here, but you can swap them out for the model provider of your choice.
from langchain_community.vectorstores import FAISSfrom langchain_core.example_selectors import SemanticSimilarityExampleSelectorfrom langchain_openai import OpenAIEmbeddingsexample_selector = SemanticSimilarityExampleSelector.from_examples( examples, OpenAIEmbeddings(), FAISS, k=5, input_keys=["input"],)
**API Reference:**[FAISS](https://api.python.langchain.com/en/latest/vectorstores/langchain_community.vectorstores.faiss.FAISS.html) | [SemanticSimilarityExampleSelector](https://api.python.langchain.com/en/latest/example_selectors/langchain_core.example_selectors.semantic_similarity.SemanticSimilarityExampleSelector.html) | [OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html)
example_selector.select_examples({"input": "how many artists are there?"})
[{'input': 'List all artists.', 'query': 'SELECT * FROM Artist;'}, {'input': 'How many employees are there', 'query': 'SELECT COUNT(*) FROM "Employee"'}, {'input': 'How many tracks are there in the album with ID 5?', 'query': 'SELECT COUNT(*) FROM Track WHERE AlbumId = 5;'}, {'input': 'Which albums are from the year 2000?', 'query': "SELECT * FROM Album WHERE strftime('%Y', ReleaseDate) = '2000';"}, {'input': "List all tracks in the 'Rock' genre.", 'query': "SELECT * FROM Track WHERE GenreId = (SELECT GenreId FROM Genre WHERE Name = 'Rock');"}]
To use it, we can pass the ExampleSelector directly in to our FewShotPromptTemplate:
prompt = FewShotPromptTemplate( example_selector=example_selector, example_prompt=example_prompt, prefix="You are a SQLite expert. Given an input question, create a syntactically correct SQLite query to run. Unless otherwise specificed, do not return more than {top_k} rows.\n\nHere is the relevant table info: {table_info}\n\nBelow are a number of examples of questions and their corresponding SQL queries.", suffix="User input: {input}\nSQL query: ", input_variables=["input", "top_k", "table_info"],)
print(prompt.format(input="how many artists are there?", top_k=3, table_info="foo"))
You are a SQLite expert. Given an input question, create a syntactically correct SQLite query to run. Unless otherwise specificed, do not return more than 3 rows.Here is the relevant table info: fooBelow are a number of examples of questions and their corresponding SQL queries.User input: List all artists.SQL query: SELECT * FROM Artist;User input: How many employees are thereSQL query: SELECT COUNT(*) FROM "Employee"User input: How many tracks are there in the album with ID 5?SQL query: SELECT COUNT(*) FROM Track WHERE AlbumId = 5;User input: Which albums are from the year 2000?SQL query: SELECT * FROM Album WHERE strftime('%Y', ReleaseDate) = '2000';User input: List all tracks in the 'Rock' genre.SQL query: SELECT * FROM Track WHERE GenreId = (SELECT GenreId FROM Genre WHERE Name = 'Rock');User input: how many artists are there?SQL query:
Trying it out, we see that the model identifies the relevant table:
chain = create_sql_query_chain(llm, db, prompt)chain.invoke({"question": "how many artists are there?"})
'SELECT COUNT(*) FROM Artist;'
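As an optional sanity check (not shown on the original page), you can execute the generated query against the database:

```python
# Optional check: run the generated SQL to confirm it executes against Chinook.
db.run("SELECT COUNT(*) FROM Artist;")
# e.g. '[(275,)]' -- the exact value depends on your copy of the database
```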
https://python.langchain.com/v0.2/docs/how_to/sql_large_db/
How to deal with large databases when doing SQL question-answering
==================================================================
In order to write valid queries against a database, we need to feed the model the table names, table schemas, and feature values for it to query over. When there are many tables, columns, and/or high-cardinality columns, it becomes impossible for us to dump the full information about our database in every prompt. Instead, we must find ways to dynamically insert into the prompt only the most relevant information.
In this guide we demonstrate methods for identifying such relevant information, and feeding this into a query-generation step. We will cover:
1. Identifying a relevant subset of tables;
2. Identifying a relevant subset of column values.
Setup[](#setup "Direct link to Setup")
---------------------------------------
First, get required packages and set environment variables:
%pip install --upgrade --quiet langchain langchain-community langchain-openai
# Uncomment the below to use LangSmith. Not required.# import os# os.environ["LANGCHAIN_API_KEY"] = getpass.getpass()# os.environ["LANGCHAIN_TRACING_V2"] = "true"
The below example will use a SQLite connection with Chinook database. Follow [these installation steps](https://database.guide/2-sample-databases-sqlite/) to create `Chinook.db` in the same directory as this notebook:
* Save [this file](https://raw.githubusercontent.com/lerocha/chinook-database/master/ChinookDatabase/DataSources/Chinook_Sqlite.sql) as `Chinook_Sqlite.sql`
* Run `sqlite3 Chinook.db`
* Run `.read Chinook_Sqlite.sql`
* Test `SELECT * FROM Artist LIMIT 10;`
Now, `Chinook.db` is in our directory and we can interface with it using the SQLAlchemy-driven [SQLDatabase](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.sql_database.SQLDatabase.html) class:
from langchain_community.utilities import SQLDatabasedb = SQLDatabase.from_uri("sqlite:///Chinook.db")print(db.dialect)print(db.get_usable_table_names())print(db.run("SELECT * FROM Artist LIMIT 10;"))
**API Reference:**[SQLDatabase](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.sql_database.SQLDatabase.html)
sqlite['Album', 'Artist', 'Customer', 'Employee', 'Genre', 'Invoice', 'InvoiceLine', 'MediaType', 'Playlist', 'PlaylistTrack', 'Track'][(1, 'AC/DC'), (2, 'Accept'), (3, 'Aerosmith'), (4, 'Alanis Morissette'), (5, 'Alice In Chains'), (6, 'Antônio Carlos Jobim'), (7, 'Apocalyptica'), (8, 'Audioslave'), (9, 'BackBeat'), (10, 'Billy Cobham')]
Many tables[](#many-tables "Direct link to Many tables")
---------------------------------------------------------
One of the main pieces of information we need to include in our prompt is the schemas of the relevant tables. When we have very many tables, we can't fit all of the schemas in a single prompt. What we can do in such cases is first extract the names of the tables related to the user input, and then include only their schemas.
One easy and reliable way to do this is using [tool-calling](/v0.2/docs/how_to/tool_calling/). Below, we show how we can use this feature to obtain output conforming to a desired format (in this case, a list of table names). We use the chat model's `.bind_tools` method to bind a tool in Pydantic format, and feed this into an output parser to reconstruct the object from the model's response.
* OpenAI
* Anthropic
* Azure
* Google
* Cohere
* FireworksAI
* Groq
* MistralAI
* TogetherAI
pip install -qU langchain-openai
import getpassimport osos.environ["OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAIllm = ChatOpenAI(model="gpt-3.5-turbo-0125")
pip install -qU langchain-anthropic
import getpassimport osos.environ["ANTHROPIC_API_KEY"] = getpass.getpass()from langchain_anthropic import ChatAnthropicllm = ChatAnthropic(model="claude-3-sonnet-20240229")
pip install -qU langchain-openai
import getpassimport osos.environ["AZURE_OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import AzureChatOpenAIllm = AzureChatOpenAI( azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"], openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],)
pip install -qU langchain-google-vertexai
import getpassimport osos.environ["GOOGLE_API_KEY"] = getpass.getpass()from langchain_google_vertexai import ChatVertexAIllm = ChatVertexAI(model="gemini-pro")
pip install -qU langchain-cohere
import getpassimport osos.environ["COHERE_API_KEY"] = getpass.getpass()from langchain_cohere import ChatCoherellm = ChatCohere(model="command-r")
pip install -qU langchain-fireworks
import getpassimport osos.environ["FIREWORKS_API_KEY"] = getpass.getpass()from langchain_fireworks import ChatFireworksllm = ChatFireworks(model="accounts/fireworks/models/mixtral-8x7b-instruct")
pip install -qU langchain-groq
import getpassimport osos.environ["GROQ_API_KEY"] = getpass.getpass()from langchain_groq import ChatGroqllm = ChatGroq(model="llama3-8b-8192")
pip install -qU langchain-mistralai
import getpassimport osos.environ["MISTRAL_API_KEY"] = getpass.getpass()from langchain_mistralai import ChatMistralAIllm = ChatMistralAI(model="mistral-large-latest")
pip install -qU langchain-openai
import getpassimport osos.environ["TOGETHER_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAIllm = ChatOpenAI( base_url="https://api.together.xyz/v1", api_key=os.environ["TOGETHER_API_KEY"], model="mistralai/Mixtral-8x7B-Instruct-v0.1",)
from langchain_core.output_parsers.openai_tools import PydanticToolsParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field


class Table(BaseModel):
    """Table in SQL database."""

    name: str = Field(description="Name of table in SQL database.")


table_names = "\n".join(db.get_usable_table_names())
system = f"""Return the names of ALL the SQL tables that MIGHT be relevant to the user question. \
The tables are:

{table_names}

Remember to include ALL POTENTIALLY RELEVANT tables, even if you're not sure that they're needed."""
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system),
        ("human", "{input}"),
    ]
)
llm_with_tools = llm.bind_tools([Table])
output_parser = PydanticToolsParser(tools=[Table])
table_chain = prompt | llm_with_tools | output_parser
table_chain.invoke({"input": "What are all the genres of Alanis Morisette songs"})
**API Reference:**[PydanticToolsParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.openai_tools.PydanticToolsParser.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html)
[Table(name='Genre')]
This works pretty well! Except, as we'll see below, we actually need a few other tables as well. This would be pretty difficult for the model to know based just on the user question. In this case, we might think to simplify our model's job by grouping the tables together. We'll just ask the model to choose between categories "Music" and "Business", and then take care of selecting all the relevant tables from there:
system = """Return the names of any SQL tables that are relevant to the user question.The tables are:MusicBusiness"""prompt = ChatPromptTemplate.from_messages( [ ("system", system), ("human", "{input}"), ])category_chain = prompt | llm_with_tools | output_parsercategory_chain.invoke({"input": "What are all the genres of Alanis Morisette songs"})
[Table(name='Music'), Table(name='Business')]
from typing import List


def get_tables(categories: List[Table]) -> List[str]:
    tables = []
    for category in categories:
        if category.name == "Music":
            tables.extend(
                [
                    "Album",
                    "Artist",
                    "Genre",
                    "MediaType",
                    "Playlist",
                    "PlaylistTrack",
                    "Track",
                ]
            )
        elif category.name == "Business":
            tables.extend(["Customer", "Employee", "Invoice", "InvoiceLine"])
    return tables


table_chain = category_chain | get_tables
table_chain.invoke({"input": "What are all the genres of Alanis Morisette songs"})
['Album', 'Artist', 'Genre', 'MediaType', 'Playlist', 'PlaylistTrack', 'Track', 'Customer', 'Employee', 'Invoice', 'InvoiceLine']
Now that we've got a chain that can output the relevant tables for any query, we can combine this with our [create\_sql\_query\_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.sql_database.query.create_sql_query_chain.html), which can accept a list of `table_names_to_use` to determine which table schemas are included in the prompt:
from operator import itemgetterfrom langchain.chains import create_sql_query_chainfrom langchain_core.runnables import RunnablePassthroughquery_chain = create_sql_query_chain(llm, db)# Convert "question" key to the "input" key expected by current table_chain.table_chain = {"input": itemgetter("question")} | table_chain# Set table_names_to_use using table_chain.full_chain = RunnablePassthrough.assign(table_names_to_use=table_chain) | query_chain
**API Reference:**[create\_sql\_query\_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.sql_database.query.create_sql_query_chain.html) | [RunnablePassthrough](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html)
query = full_chain.invoke( {"question": "What are all the genres of Alanis Morisette songs"})print(query)
SELECT DISTINCT "g"."Name"FROM "Genre" gJOIN "Track" t ON "g"."GenreId" = "t"."GenreId"JOIN "Album" a ON "t"."AlbumId" = "a"."AlbumId"JOIN "Artist" ar ON "a"."ArtistId" = "ar"."ArtistId"WHERE "ar"."Name" = 'Alanis Morissette'LIMIT 5;
db.run(query)
"[('Rock',)]"
We can see the LangSmith trace for this run [here](https://smith.langchain.com/public/4fbad408-3554-4f33-ab47-1e510a1b52a3/r).
We've seen how to dynamically include a subset of table schemas in a prompt within a chain. Another possible approach to this problem is to let an Agent decide for itself when to look up tables by giving it a Tool to do so. You can see an example of this in the [SQL: Agents](/v0.2/docs/tutorials/agents/) guide.
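A minimal sketch of such a tool, assuming the `db` object from this guide (the `get_table_schemas` name and behavior are illustrative, not from the original page):

```python
from langchain_core.tools import tool


@tool
def get_table_schemas(table_names: str) -> str:
    """Return schemas and sample rows for a comma-separated list of table names."""
    names = [name.strip() for name in table_names.split(",")]
    return db.get_table_info(table_names=names)
```

An agent equipped with a tool like this can decide for itself when it needs to look at a table's schema before writing SQL.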
High-cardinality columns[](#high-cardinality-columns "Direct link to High-cardinality columns")
------------------------------------------------------------------------------------------------
In order to filter columns that contain proper nouns such as addresses, song names or artists, we first need to double-check the spelling in order to filter the data correctly.
One naive strategy is to create a vector store with all the distinct proper nouns that exist in the database. We can then query that vector store with each user input and inject the most relevant proper nouns into the prompt.
First we need the unique values for each entity we want, for which we define a function that parses the result into a list of elements:
import ast
import re


def query_as_list(db, query):
    res = db.run(query)
    res = [el for sub in ast.literal_eval(res) for el in sub if el]
    res = [re.sub(r"\b\d+\b", "", string).strip() for string in res]
    return res


proper_nouns = query_as_list(db, "SELECT Name FROM Artist")
proper_nouns += query_as_list(db, "SELECT Title FROM Album")
proper_nouns += query_as_list(db, "SELECT Name FROM Genre")
len(proper_nouns)
proper_nouns[:5]
['AC/DC', 'Accept', 'Aerosmith', 'Alanis Morissette', 'Alice In Chains']
Now we can embed and store all of our values in a vector database:
from langchain_community.vectorstores import FAISSfrom langchain_openai import OpenAIEmbeddingsvector_db = FAISS.from_texts(proper_nouns, OpenAIEmbeddings())retriever = vector_db.as_retriever(search_kwargs={"k": 15})
**API Reference:**[FAISS](https://api.python.langchain.com/en/latest/vectorstores/langchain_community.vectorstores.faiss.FAISS.html) | [OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html)
And put together a query construction chain that first retrieves values from the database and inserts them into the prompt:
from operator import itemgetterfrom langchain_core.prompts import ChatPromptTemplatefrom langchain_core.runnables import RunnablePassthroughsystem = """You are a SQLite expert. Given an input question, create a syntacticallycorrect SQLite query to run. Unless otherwise specificed, do not return more than{top_k} rows.Only return the SQL query with no markup or explanation.Here is the relevant table info: {table_info}Here is a non-exhaustive list of possible feature values. If filtering on a featurevalue make sure to check its spelling against this list first:{proper_nouns}"""prompt = ChatPromptTemplate.from_messages([("system", system), ("human", "{input}")])query_chain = create_sql_query_chain(llm, db, prompt=prompt)retriever_chain = ( itemgetter("question") | retriever | (lambda docs: "\n".join(doc.page_content for doc in docs)))chain = RunnablePassthrough.assign(proper_nouns=retriever_chain) | query_chain
**API Reference:**[ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) | [RunnablePassthrough](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html)
To try out our chain, let's see what happens when we try filtering on "elenis moriset", a misspelling of Alanis Morissette, without and with retrieval:
# Without retrievalquery = query_chain.invoke( {"question": "What are all the genres of elenis moriset songs", "proper_nouns": ""})print(query)db.run(query)
SELECT DISTINCT g.Name FROM Track tJOIN Album a ON t.AlbumId = a.AlbumIdJOIN Artist ar ON a.ArtistId = ar.ArtistIdJOIN Genre g ON t.GenreId = g.GenreIdWHERE ar.Name = 'Elenis Moriset';
''
# With retrievalquery = chain.invoke({"question": "What are all the genres of elenis moriset songs"})print(query)db.run(query)
SELECT DISTINCT g.NameFROM Genre gJOIN Track t ON g.GenreId = t.GenreIdJOIN Album a ON t.AlbumId = a.AlbumIdJOIN Artist ar ON a.ArtistId = ar.ArtistIdWHERE ar.Name = 'Alanis Morissette';
"[('Rock',)]"
We can see that with retrieval we're able to correct the spelling from "Elenis Moriset" to "Alanis Morissette" and get back a valid result.
Another possible approach to this problem is to let an Agent decide for itself when to look up proper nouns. You can see an example of this in the [SQL: Agents](/v0.2/docs/tutorials/agents/) guide.
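As a hedged sketch of that idea, the proper-noun retriever built above could be wrapped as a tool for an agent to call on demand (the tool name and description below are illustrative, not from the original guide):

```python
from langchain.tools.retriever import create_retriever_tool

# Expose the proper-noun vector store as a tool the agent can call when unsure of a spelling.
search_proper_nouns = create_retriever_tool(
    retriever,
    name="search_proper_nouns",
    description=(
        "Look up the closest matching artist, album, or genre names. "
        "Use this to check spelling before filtering on a proper noun."
    ),
)
```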
https://python.langchain.com/v0.2/docs/how_to/sql_query_checking/
How to do query validation as part of SQL question-answering
============================================================
Perhaps the most error-prone part of any SQL chain or agent is writing valid and safe SQL queries. In this guide we'll go over some strategies for validating our queries and handling invalid queries.
We will cover:
1. Appending a "query validator" step to the query generation;
2. Prompt engineering to reduce the incidence of errors.
Setup[](#setup "Direct link to Setup")
---------------------------------------
First, get required packages and set environment variables:
%pip install --upgrade --quiet langchain langchain-community langchain-openai
# Uncomment the below to use LangSmith. Not required.# import os# os.environ["LANGCHAIN_API_KEY"] = getpass.getpass()# os.environ["LANGCHAIN_TRACING_V2"] = "true"
The below example will use a SQLite connection with Chinook database. Follow [these installation steps](https://database.guide/2-sample-databases-sqlite/) to create `Chinook.db` in the same directory as this notebook:
* Save [this file](https://raw.githubusercontent.com/lerocha/chinook-database/master/ChinookDatabase/DataSources/Chinook_Sqlite.sql) as `Chinook_Sqlite.sql`
* Run `sqlite3 Chinook.db`
* Run `.read Chinook_Sqlite.sql`
* Test `SELECT * FROM Artist LIMIT 10;`
Now, `Chinook.db` is in our directory and we can interface with it using the SQLAlchemy-driven `SQLDatabase` class:
from langchain_community.utilities import SQLDatabasedb = SQLDatabase.from_uri("sqlite:///Chinook.db")print(db.dialect)print(db.get_usable_table_names())print(db.run("SELECT * FROM Artist LIMIT 10;"))
**API Reference:**[SQLDatabase](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.sql_database.SQLDatabase.html)
sqlite['Album', 'Artist', 'Customer', 'Employee', 'Genre', 'Invoice', 'InvoiceLine', 'MediaType', 'Playlist', 'PlaylistTrack', 'Track'][(1, 'AC/DC'), (2, 'Accept'), (3, 'Aerosmith'), (4, 'Alanis Morissette'), (5, 'Alice In Chains'), (6, 'Antônio Carlos Jobim'), (7, 'Apocalyptica'), (8, 'Audioslave'), (9, 'BackBeat'), (10, 'Billy Cobham')]
Query checker[](#query-checker "Direct link to Query checker")
---------------------------------------------------------------
Perhaps the simplest strategy is to ask the model itself to check the original query for common mistakes. Suppose we have the following SQL query chain:
* OpenAI
* Anthropic
* Azure
* Google
* Cohere
* FireworksAI
* Groq
* MistralAI
* TogetherAI
pip install -qU langchain-openai
import getpassimport osos.environ["OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAIllm = ChatOpenAI(model="gpt-3.5-turbo-0125")
pip install -qU langchain-anthropic
import getpassimport osos.environ["ANTHROPIC_API_KEY"] = getpass.getpass()from langchain_anthropic import ChatAnthropicllm = ChatAnthropic(model="claude-3-sonnet-20240229")
pip install -qU langchain-openai
import getpassimport osos.environ["AZURE_OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import AzureChatOpenAIllm = AzureChatOpenAI( azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"], openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],)
pip install -qU langchain-google-vertexai
import getpassimport osos.environ["GOOGLE_API_KEY"] = getpass.getpass()from langchain_google_vertexai import ChatVertexAIllm = ChatVertexAI(model="gemini-pro")
pip install -qU langchain-cohere
import getpassimport osos.environ["COHERE_API_KEY"] = getpass.getpass()from langchain_cohere import ChatCoherellm = ChatCohere(model="command-r")
pip install -qU langchain-fireworks
import getpassimport osos.environ["FIREWORKS_API_KEY"] = getpass.getpass()from langchain_fireworks import ChatFireworksllm = ChatFireworks(model="accounts/fireworks/models/mixtral-8x7b-instruct")
pip install -qU langchain-groq
import getpassimport osos.environ["GROQ_API_KEY"] = getpass.getpass()from langchain_groq import ChatGroqllm = ChatGroq(model="llama3-8b-8192")
pip install -qU langchain-mistralai
import getpassimport osos.environ["MISTRAL_API_KEY"] = getpass.getpass()from langchain_mistralai import ChatMistralAIllm = ChatMistralAI(model="mistral-large-latest")
pip install -qU langchain-openai
import getpassimport osos.environ["TOGETHER_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAIllm = ChatOpenAI( base_url="https://api.together.xyz/v1", api_key=os.environ["TOGETHER_API_KEY"], model="mistralai/Mixtral-8x7B-Instruct-v0.1",)
from langchain.chains import create_sql_query_chainchain = create_sql_query_chain(llm, db)
**API Reference:**[create\_sql\_query\_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.sql_database.query.create_sql_query_chain.html)
And we want to validate its outputs. We can do so by extending the chain with a second prompt and model call:
from langchain_core.output_parsers import StrOutputParserfrom langchain_core.prompts import ChatPromptTemplatesystem = """Double check the user's {dialect} query for common mistakes, including:- Using NOT IN with NULL values- Using UNION when UNION ALL should have been used- Using BETWEEN for exclusive ranges- Data type mismatch in predicates- Properly quoting identifiers- Using the correct number of arguments for functions- Casting to the correct data type- Using the proper columns for joinsIf there are any of the above mistakes, rewrite the query.If there are no mistakes, just reproduce the original query with no further commentary.Output the final SQL query only."""prompt = ChatPromptTemplate.from_messages( [("system", system), ("human", "{query}")]).partial(dialect=db.dialect)validation_chain = prompt | llm | StrOutputParser()full_chain = {"query": chain} | validation_chain
**API Reference:**[StrOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.string.StrOutputParser.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html)
query = full_chain.invoke( { "question": "What's the average Invoice from an American customer whose Fax is missing since 2003 but before 2010" })print(query)
SELECT AVG(i.Total) AS AverageInvoiceFROM Invoice iJOIN Customer c ON i.CustomerId = c.CustomerIdWHERE c.Country = 'USA'AND c.Fax IS NULLAND i.InvoiceDate >= '2003-01-01' AND i.InvoiceDate < '2010-01-01'
Note how we can see both steps of the chain in the [Langsmith trace](https://smith.langchain.com/public/8a743295-a57c-4e4c-8625-bc7e36af9d74/r).
db.run(query)
'[(6.632999999999998,)]'
The obvious downside of this approach is that we need to make two model calls instead of one to generate our query. To get around this we can try to perform the query generation and query check in a single model invocation:
system = """You are a {dialect} expert. Given an input question, create a syntactically correct {dialect} query to run.Unless the user specifies in the question a specific number of examples to obtain, query for at most {top_k} results using the LIMIT clause as per {dialect}. You can order the results to return the most informative data in the database.Never query for all columns from a table. You must query only the columns that are needed to answer the question. Wrap each column name in double quotes (") to denote them as delimited identifiers.Pay attention to use only the column names you can see in the tables below. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table.Pay attention to use date('now') function to get the current date, if the question involves "today".Only use the following tables:{table_info}Write an initial draft of the query. Then double check the {dialect} query for common mistakes, including:- Using NOT IN with NULL values- Using UNION when UNION ALL should have been used- Using BETWEEN for exclusive ranges- Data type mismatch in predicates- Properly quoting identifiers- Using the correct number of arguments for functions- Casting to the correct data type- Using the proper columns for joinsUse format:First draft: <<FIRST_DRAFT_QUERY>>Final answer: <<FINAL_ANSWER_QUERY>>"""prompt = ChatPromptTemplate.from_messages( [("system", system), ("human", "{input}")]).partial(dialect=db.dialect)def parse_final_answer(output: str) -> str: return output.split("Final answer: ")[1]chain = create_sql_query_chain(llm, db, prompt=prompt) | parse_final_answerprompt.pretty_print()
================================ System Message ================================You are a {dialect} expert. Given an input question, create a syntactically correct {dialect} query to run.Unless the user specifies in the question a specific number of examples to obtain, query for at most {top_k} results using the LIMIT clause as per {dialect}. You can order the results to return the most informative data in the database.Never query for all columns from a table. You must query only the columns that are needed to answer the question. Wrap each column name in double quotes (") to denote them as delimited identifiers.Pay attention to use only the column names you can see in the tables below. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table.Pay attention to use date('now') function to get the current date, if the question involves "today".Only use the following tables:{table_info}Write an initial draft of the query. Then double check the {dialect} query for common mistakes, including:- Using NOT IN with NULL values- Using UNION when UNION ALL should have been used- Using BETWEEN for exclusive ranges- Data type mismatch in predicates- Properly quoting identifiers- Using the correct number of arguments for functions- Casting to the correct data type- Using the proper columns for joinsUse format:First draft: <<FIRST_DRAFT_QUERY>>Final answer: <<FINAL_ANSWER_QUERY>>================================ Human Message ================================={input}
query = chain.invoke( { "question": "What's the average Invoice from an American customer whose Fax is missing since 2003 but before 2010" })print(query)
SELECT AVG(i."Total") AS "AverageInvoice"FROM "Invoice" iJOIN "Customer" c ON i."CustomerId" = c."CustomerId"WHERE c."Country" = 'USA'AND c."Fax" IS NULLAND i."InvoiceDate" BETWEEN '2003-01-01' AND '2010-01-01';
db.run(query)
'[(6.632999999999998,)]'
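Note that `parse_final_answer` above assumes the model always emits the "Final answer: " marker; if it ever omits it, `split(...)[1]` raises an `IndexError`. A slightly more defensive variant (a sketch, not part of the original guide) could fall back to the raw output:

```python
def parse_final_answer(output: str) -> str:
    """Return the text after the 'Final answer:' marker, or the raw output if the marker is absent."""
    marker = "Final answer: "
    return output.split(marker, 1)[1] if marker in output else output
```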
Human-in-the-loop[](#human-in-the-loop "Direct link to Human-in-the-loop")
---------------------------------------------------------------------------
In some cases our data is sensitive enough that we never want to execute a SQL query without a human approving it first. Head to the [Tool use: Human-in-the-loop](/v0.2/docs/how_to/tools_human/) page to learn how to add a human-in-the-loop to any tool, chain or agent.
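As a minimal sketch of the idea (a hypothetical helper, not the integration covered on that page), you could gate query execution behind an explicit confirmation prompt:

```python
def execute_with_approval(query: str) -> str:
    """Ask a human to approve the generated SQL before it touches the database."""
    print(f"Proposed SQL:\n{query}")
    if input("Run this query? [y/N] ").strip().lower() != "y":
        raise RuntimeError("Query execution was not approved")
    return db.run(query)

# Example usage (hypothetical question):
# execute_with_approval(chain.invoke({"question": "How many artists are there?"}))
```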
Error handling[](#error-handling "Direct link to Error handling")
------------------------------------------------------------------
At some point, the model will make a mistake and craft an invalid SQL query. Or an issue will arise with our database. Or the model API will go down. We'll want to add some error handling behavior to our chains and agents so that we fail gracefully in these situations, and perhaps even automatically recover. To learn about error handling with tools, head to the [Tool use: Error handling](/v0.2/docs/how_to/tools_error/) page.
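As a small sketch of what that can look like for this chain (assuming SQLAlchemy's `SQLAlchemyError` is what a bad query surfaces from `db.run`; the linked guide covers more robust patterns), you could catch execution errors and return them as text so a downstream step can react:

```python
from sqlalchemy.exc import SQLAlchemyError


def run_query_safely(query: str) -> str:
    """Execute a generated query, returning the error message instead of raising."""
    try:
        return db.run(query)
    except SQLAlchemyError as exc:
        return f"Query failed: {exc}"
```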
https://python.langchain.com/v0.2/docs/how_to/streaming/ | * [](/v0.2/)
* [How-to guides](/v0.2/docs/how_to/)
* How to stream runnables
On this page
How to stream runnables
=======================
Prerequisites
This guide assumes familiarity with the following concepts:
* [Chat models](/v0.2/docs/concepts/#chat-models)
* [LangChain Expression Language](/v0.2/docs/concepts/#langchain-expression-language)
* [Output parsers](/v0.2/docs/concepts/#output-parsers)
Streaming is critical in making applications based on LLMs feel responsive to end-users.
Important LangChain primitives like [chat models](/v0.2/docs/concepts/#chat-models), [output parsers](/v0.2/docs/concepts/#output-parsers), [prompts](/v0.2/docs/concepts/#prompt-templates), [retrievers](/v0.2/docs/concepts/#retrievers), and [agents](/v0.2/docs/concepts/#agents) implement the LangChain [Runnable Interface](/v0.2/docs/concepts/#interface).
This interface provides two general approaches to stream content:
1. sync `stream` and async `astream`: a **default implementation** of streaming that streams the **final output** from the chain.
2. async `astream_events` and async `astream_log`: these provide a way to stream both **intermediate steps** and **final output** from the chain.
Let's take a look at both approaches, and try to understand how to use them.
info
For a higher-level overview of streaming techniques in LangChain, see [this section of the conceptual guide](/v0.2/docs/concepts/#streaming).
Using Stream[](#using-stream "Direct link to Using Stream")
------------------------------------------------------------
All `Runnable` objects implement a sync method called `stream` and an async variant called `astream`.
These methods are designed to stream the final output in chunks, yielding each chunk as soon as it is available.
Streaming is only possible if all steps in the program know how to process an **input stream**; i.e., process an input chunk one at a time, and yield a corresponding output chunk.
The complexity of this processing can vary, from straightforward tasks like emitting tokens produced by an LLM, to more challenging ones like streaming parts of JSON results before the entire JSON is complete.
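As a toy, framework-free illustration of that idea, a stream-friendly step simply consumes input chunks one at a time and yields a transformed chunk for each:

```python
def upper_case_stream(input_stream):
    """A stream-friendly step: transform each chunk as it arrives instead of waiting for the full input."""
    for chunk in input_stream:
        yield chunk.upper()


for piece in upper_case_stream(iter(["hello ", "world"])):
    print(piece, end="")  # prints "HELLO WORLD", one chunk at a time
```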
The best place to start exploring streaming is with the single most important component in LLM apps -- the LLMs themselves!
### LLMs and Chat Models[](#llms-and-chat-models "Direct link to LLMs and Chat Models")
Large language models and their chat variants are the primary bottleneck in LLM based apps.
Large language models can take **several seconds** to generate a complete response to a query. This is far slower than the **~200-300 ms** threshold at which an application feels responsive to an end user.
The key strategy to make the application feel more responsive is to show intermediate progress; viz., to stream the output from the model **token by token**.
We will show examples of streaming using a chat model. Choose one from the options below:
* OpenAI
* Anthropic
* Azure
* Google
* Cohere
* FireworksAI
* Groq
* MistralAI
* TogetherAI
pip install -qU langchain-openai
import getpassimport osos.environ["OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAImodel = ChatOpenAI(model="gpt-3.5-turbo-0125")
pip install -qU langchain-anthropic
import getpassimport osos.environ["ANTHROPIC_API_KEY"] = getpass.getpass()from langchain_anthropic import ChatAnthropicmodel = ChatAnthropic(model="claude-3-sonnet-20240229")
pip install -qU langchain-openai
import getpassimport osos.environ["AZURE_OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import AzureChatOpenAImodel = AzureChatOpenAI( azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"], openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],)
pip install -qU langchain-google-vertexai
import getpassimport osos.environ["GOOGLE_API_KEY"] = getpass.getpass()from langchain_google_vertexai import ChatVertexAImodel = ChatVertexAI(model="gemini-pro")
pip install -qU langchain-cohere
import getpassimport osos.environ["COHERE_API_KEY"] = getpass.getpass()from langchain_cohere import ChatCoheremodel = ChatCohere(model="command-r")
pip install -qU langchain-fireworks
import getpassimport osos.environ["FIREWORKS_API_KEY"] = getpass.getpass()from langchain_fireworks import ChatFireworksmodel = ChatFireworks(model="accounts/fireworks/models/mixtral-8x7b-instruct")
pip install -qU langchain-groq
import getpassimport osos.environ["GROQ_API_KEY"] = getpass.getpass()from langchain_groq import ChatGroqmodel = ChatGroq(model="llama3-8b-8192")
pip install -qU langchain-mistralai
import getpassimport osos.environ["MISTRAL_API_KEY"] = getpass.getpass()from langchain_mistralai import ChatMistralAImodel = ChatMistralAI(model="mistral-large-latest")
pip install -qU langchain-openai
import getpassimport osos.environ["TOGETHER_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAImodel = ChatOpenAI( base_url="https://api.together.xyz/v1", api_key=os.environ["TOGETHER_API_KEY"], model="mistralai/Mixtral-8x7B-Instruct-v0.1",)
Let's start with the sync `stream` API:
chunks = []for chunk in model.stream("what color is the sky?"): chunks.append(chunk) print(chunk.content, end="|", flush=True)
The| sky| appears| blue| during| the| day|.|
Alternatively, if you're working in an async environment, you may consider using the async `astream` API:
chunks = []async for chunk in model.astream("what color is the sky?"): chunks.append(chunk) print(chunk.content, end="|", flush=True)
The| sky| appears| blue| during| the| day|.|
Let's inspect one of the chunks
chunks[0]
AIMessageChunk(content='The', id='run-b36bea64-5511-4d7a-b6a3-a07b3db0c8e7')
We got back something called an `AIMessageChunk`. This chunk represents a part of an `AIMessage`.
Message chunks are additive by design -- one can simply add them up to get the state of the response so far!
chunks[0] + chunks[1] + chunks[2] + chunks[3] + chunks[4]
AIMessageChunk(content='The sky appears blue during', id='run-b36bea64-5511-4d7a-b6a3-a07b3db0c8e7')
### Chains[](#chains "Direct link to Chains")
Virtually all LLM applications involve more steps than just a call to a language model.
Let's build a simple chain using `LangChain Expression Language` (`LCEL`) that combines a prompt, model and a parser and verify that streaming works.
We will use [`StrOutputParser`](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.string.StrOutputParser.html) to parse the output from the model. This is a simple parser that extracts the `content` field from an `AIMessageChunk`, giving us the `token` returned by the model.
tip
LCEL is a _declarative_ way to specify a "program" by chaining together different LangChain primitives. Chains created using LCEL benefit from an automatic implementation of `stream` and `astream`, allowing streaming of the final output. In fact, chains created with LCEL implement the entire standard Runnable interface.
from langchain_core.output_parsers import StrOutputParserfrom langchain_core.prompts import ChatPromptTemplateprompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")parser = StrOutputParser()chain = prompt | model | parserasync for chunk in chain.astream({"topic": "parrot"}): print(chunk, end="|", flush=True)
**API Reference:**[StrOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.string.StrOutputParser.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html)
Here|'s| a| joke| about| a| par|rot|:|A man| goes| to| a| pet| shop| to| buy| a| par|rot|.| The| shop| owner| shows| him| two| stunning| pa|rr|ots| with| beautiful| pl|um|age|.|"|There|'s| a| talking| par|rot| an|d a| non|-|talking| par|rot|,"| the| owner| says|.| "|The| talking| par|rot| costs| $|100|,| an|d the| non|-|talking| par|rot| is| $|20|."|The| man| says|,| "|I|'ll| take| the| non|-|talking| par|rot| at| $|20|."|He| pays| an|d leaves| with| the| par|rot|.| As| he|'s| walking| down| the| street|,| the| par|rot| looks| up| at| him| an|d says|,| "|You| know|,| you| really| are| a| stupi|d man|!"|The| man| is| stun|ne|d an|d looks| at| the| par|rot| in| dis|bel|ief|.| The| par|rot| continues|,| "|Yes|,| you| got| r|ippe|d off| big| time|!| I| can| talk| just| as| well| as| that| other| par|rot|,| an|d you| only| pai|d $|20| |for| me|!"|
Note that we're getting streaming output even though we're using `parser` at the end of the chain above. The `parser` operates on each streaming chunk individually. Many of the [LCEL primitives](/v0.2/docs/how_to/#langchain-expression-language-lcel) also support this kind of transform-style passthrough streaming, which can be very convenient when constructing apps.
Custom functions can be [designed to return generators](/v0.2/docs/how_to/functions/#streaming), which are able to operate on streams.
Certain runnables, like [prompt templates](/v0.2/docs/how_to/#prompt-templates) and [chat models](/v0.2/docs/how_to/#chat-models), cannot process individual chunks and instead aggregate all previous steps. Such runnables can interrupt the streaming process.
note
The LangChain Expression Language allows you to separate the construction of a chain from the mode in which it is used (e.g., sync/async, batch/streaming etc.). If this is not relevant to what you're building, you can also rely on a standard **imperative** programming approach by calling `invoke`, `batch` or `stream` on each component individually, assigning the results to variables and then using them downstream as you see fit.
### Working with Input Streams[](#working-with-input-streams "Direct link to Working with Input Streams")
What if you wanted to stream JSON from the output as it was being generated?
If you were to rely on `json.loads` to parse the partial json, the parsing would fail as the partial json wouldn't be valid json.
You'd likely be at a loss about what to do and conclude that streaming JSON isn't possible.
Well, turns out there is a way to do it -- the parser needs to operate on the **input stream**, and attempt to "auto-complete" the partial json into a valid state.
Let's see such a parser in action to understand what this means.
from langchain_core.output_parsers import JsonOutputParserchain = ( model | JsonOutputParser()) # Due to a bug in older versions of Langchain, JsonOutputParser did not stream results from some modelsasync for text in chain.astream( "output a list of the countries france, spain and japan and their populations in JSON format. " 'Use a dict with an outer key of "countries" which contains a list of countries. ' "Each country should have the key `name` and `population`"): print(text, flush=True)
**API Reference:**[JsonOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.json.JsonOutputParser.html)
{}{'countries': []}{'countries': [{}]}{'countries': [{'name': ''}]}{'countries': [{'name': 'France'}]}{'countries': [{'name': 'France', 'population': 67}]}{'countries': [{'name': 'France', 'population': 67413}]}{'countries': [{'name': 'France', 'population': 67413000}]}{'countries': [{'name': 'France', 'population': 67413000}, {}]}{'countries': [{'name': 'France', 'population': 67413000}, {'name': ''}]}{'countries': [{'name': 'France', 'population': 67413000}, {'name': 'Spain'}]}{'countries': [{'name': 'France', 'population': 67413000}, {'name': 'Spain', 'population': 47}]}{'countries': [{'name': 'France', 'population': 67413000}, {'name': 'Spain', 'population': 47351}]}{'countries': [{'name': 'France', 'population': 67413000}, {'name': 'Spain', 'population': 47351567}]}{'countries': [{'name': 'France', 'population': 67413000}, {'name': 'Spain', 'population': 47351567}, {}]}{'countries': [{'name': 'France', 'population': 67413000}, {'name': 'Spain', 'population': 47351567}, {'name': ''}]}{'countries': [{'name': 'France', 'population': 67413000}, {'name': 'Spain', 'population': 47351567}, {'name': 'Japan'}]}{'countries': [{'name': 'France', 'population': 67413000}, {'name': 'Spain', 'population': 47351567}, {'name': 'Japan', 'population': 125}]}{'countries': [{'name': 'France', 'population': 67413000}, {'name': 'Spain', 'population': 47351567}, {'name': 'Japan', 'population': 125584}]}{'countries': [{'name': 'France', 'population': 67413000}, {'name': 'Spain', 'population': 47351567}, {'name': 'Japan', 'population': 125584000}]}
Now, let's **break** streaming. We'll use the previous example and append an extraction function at the end that extracts the country names from the finalized JSON.
danger
Any steps in the chain that operate on **finalized inputs** rather than on **input streams** can break streaming functionality via `stream` or `astream`.
tip
Later, we will discuss the `astream_events` API which streams results from intermediate steps. This API will stream results from intermediate steps even if the chain contains steps that only operate on **finalized inputs**.
from langchain_core.output_parsers import ( JsonOutputParser,)# A function that operates on finalized inputs# rather than on an input_streamdef _extract_country_names(inputs): """A function that does not operates on input streams and breaks streaming.""" if not isinstance(inputs, dict): return "" if "countries" not in inputs: return "" countries = inputs["countries"] if not isinstance(countries, list): return "" country_names = [ country.get("name") for country in countries if isinstance(country, dict) ] return country_nameschain = model | JsonOutputParser() | _extract_country_namesasync for text in chain.astream( "output a list of the countries france, spain and japan and their populations in JSON format. " 'Use a dict with an outer key of "countries" which contains a list of countries. ' "Each country should have the key `name` and `population`"): print(text, end="|", flush=True)
**API Reference:**[JsonOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.json.JsonOutputParser.html)
['France', 'Spain', 'Japan']|
#### Generator Functions[](#generator-functions "Direct link to Generator Functions")
Le'ts fix the streaming using a generator function that can operate on the **input stream**.
tip
A generator function (a function that uses `yield`) allows writing code that operates on **input streams**
from langchain_core.output_parsers import JsonOutputParserasync def _extract_country_names_streaming(input_stream): """A function that operates on input streams.""" country_names_so_far = set() async for input in input_stream: if not isinstance(input, dict): continue if "countries" not in input: continue countries = input["countries"] if not isinstance(countries, list): continue for country in countries: name = country.get("name") if not name: continue if name not in country_names_so_far: yield name country_names_so_far.add(name)chain = model | JsonOutputParser() | _extract_country_names_streamingasync for text in chain.astream( "output a list of the countries france, spain and japan and their populations in JSON format. " 'Use a dict with an outer key of "countries" which contains a list of countries. ' "Each country should have the key `name` and `population`",): print(text, end="|", flush=True)
**API Reference:**[JsonOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.json.JsonOutputParser.html)
France|Spain|Japan|
note
Because the code above is relying on JSON auto-completion, you may see partial names of countries (e.g., `Sp` and `Spain`), which is not what one would want for an extraction result!
We're focusing on streaming concepts, not necessarily the results of the chains.
### Non-streaming components[](#non-streaming-components "Direct link to Non-streaming components")
Some built-in components like Retrievers do not offer any `streaming`. What happens if we try to `stream` them? 🤨
from langchain_community.vectorstores import FAISSfrom langchain_core.output_parsers import StrOutputParserfrom langchain_core.prompts import ChatPromptTemplatefrom langchain_core.runnables import RunnablePassthroughfrom langchain_openai import OpenAIEmbeddingstemplate = """Answer the question based only on the following context:{context}Question: {question}"""prompt = ChatPromptTemplate.from_template(template)vectorstore = FAISS.from_texts( ["harrison worked at kensho", "harrison likes spicy food"], embedding=OpenAIEmbeddings(),)retriever = vectorstore.as_retriever()chunks = [chunk for chunk in retriever.stream("where did harrison work?")]chunks
**API Reference:**[FAISS](https://api.python.langchain.com/en/latest/vectorstores/langchain_community.vectorstores.faiss.FAISS.html) | [StrOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.string.StrOutputParser.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) | [RunnablePassthrough](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html) | [OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html)
[[Document(page_content='harrison worked at kensho'), Document(page_content='harrison likes spicy food')]]
Stream just yielded the final result from that component.
This is OK 🥹! Not all components have to implement streaming -- in some cases streaming is either unnecessary, difficult or just doesn't make sense.
tip
An LCEL chain constructed using non-streaming components will still be able to stream in many cases, with streaming of partial output starting after the last non-streaming step in the chain.
retrieval_chain = ( { "context": retriever.with_config(run_name="Docs"), "question": RunnablePassthrough(), } | prompt | model | StrOutputParser())
for chunk in retrieval_chain.stream( "Where did harrison work? " "Write 3 made up sentences about this place."): print(chunk, end="|", flush=True)
Base|d on| the| given| context|,| Harrison| worke|d at| K|ens|ho|.|Here| are| |3| |made| up| sentences| about| this| place|:|1|.| K|ens|ho| was| a| cutting|-|edge| technology| company| known| for| its| innovative| solutions| in| artificial| intelligence| an|d data| analytics|.|2|.| The| modern| office| space| at| K|ens|ho| feature|d open| floor| plans|,| collaborative| work|sp|aces|,| an|d a| vib|rant| atmosphere| that| fos|tere|d creativity| an|d team|work|.|3|.| With| its| prime| location| in| the| heart| of| the| city|,| K|ens|ho| attracte|d top| talent| from| aroun|d the| worl|d,| creating| a| diverse| an|d dynamic| work| environment|.|
Now that we've seen how `stream` and `astream` work, let's venture into the world of streaming events. 🏞️
Using Stream Events[](#using-stream-events "Direct link to Using Stream Events")
---------------------------------------------------------------------------------
Event Streaming is a **beta** API. This API may change a bit based on feedback.
note
This guide demonstrates the `V2` API and requires langchain-core >= 0.2. For the `V1` API compatible with older versions of LangChain, see [here](https://python.langchain.com/v0.1/docs/expression_language/streaming/#using-stream-events).
import langchain_corelangchain_core.__version__
For the `astream_events` API to work properly:
* Use `async` throughout the code to the extent possible (e.g., async tools etc)
* Propagate callbacks if defining custom functions / runnables
* Whenever using runnables without LCEL, make sure to call `.astream()` on LLMs rather than `.ainvoke` to force the LLM to stream tokens.
* Let us know if anything doesn't work as expected! :)
### Event Reference[](#event-reference "Direct link to Event Reference")
Below is a reference table that shows some events that might be emitted by the various Runnable objects.
note
When streaming is implemented properly, the inputs to a runnable will not be known until after the input stream has been entirely consumed. This means that `inputs` will often be included only for `end` events rather than for `start` events.
| event | name | chunk | input | output |
| --- | --- | --- | --- | --- |
| on\_chat\_model\_start | \[model name\] |  | {"messages": \[\[SystemMessage, HumanMessage\]\]} |  |
| on\_chat\_model\_stream | \[model name\] | AIMessageChunk(content="hello") |  |  |
| on\_chat\_model\_end | \[model name\] |  | {"messages": \[\[SystemMessage, HumanMessage\]\]} | AIMessageChunk(content="hello world") |
| on\_llm\_start | \[model name\] |  | {'input': 'hello'} |  |
| on\_llm\_stream | \[model name\] | 'Hello' |  |  |
| on\_llm\_end | \[model name\] |  | 'Hello human!' |  |
| on\_chain\_start | format\_docs |  |  |  |
| on\_chain\_stream | format\_docs | "hello world!, goodbye world!" |  |  |
| on\_chain\_end | format\_docs |  | \[Document(...)\] | "hello world!, goodbye world!" |
| on\_tool\_start | some\_tool |  | {"x": 1, "y": "2"} |  |
| on\_tool\_end | some\_tool |  |  | {"x": 1, "y": "2"} |
| on\_retriever\_start | \[retriever name\] |  | {"query": "hello"} |  |
| on\_retriever\_end | \[retriever name\] |  | {"query": "hello"} | \[Document(...), ..\] |
| on\_prompt\_start | \[template\_name\] |  | {"question": "hello"} |  |
| on\_prompt\_end | \[template\_name\] |  | {"question": "hello"} | ChatPromptValue(messages: \[SystemMessage, ...\]) |
### Chat Model[](#chat-model "Direct link to Chat Model")
Let's start off by looking at the events produced by a chat model.
events = []async for event in model.astream_events("hello", version="v2"): events.append(event)
/home/eugene/src/langchain/libs/core/langchain_core/_api/beta_decorator.py:87: LangChainBetaWarning: This API is in beta and may change in the future. warn_beta(
note
Hey what's that funny version="v2" parameter in the API?! 😾
This is a **beta API**, and we're almost certainly going to make some changes to it (in fact, we already have!)
This version parameter will allow us to minimize such breaking changes to your code.
In short, we are annoying you now, so we don't have to annoy you later.
`v2` is only available for langchain-core>=0.2.0.
Let's take a look at a few of the start events and a few of the end events.
events[:3]
[{'event': 'on_chat_model_start', 'data': {'input': 'hello'}, 'name': 'ChatAnthropic', 'tags': [], 'run_id': 'a81e4c0f-fc36-4d33-93bc-1ac25b9bb2c3', 'metadata': {}}, {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='Hello', id='run-a81e4c0f-fc36-4d33-93bc-1ac25b9bb2c3')}, 'run_id': 'a81e4c0f-fc36-4d33-93bc-1ac25b9bb2c3', 'name': 'ChatAnthropic', 'tags': [], 'metadata': {}}, {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='!', id='run-a81e4c0f-fc36-4d33-93bc-1ac25b9bb2c3')}, 'run_id': 'a81e4c0f-fc36-4d33-93bc-1ac25b9bb2c3', 'name': 'ChatAnthropic', 'tags': [], 'metadata': {}}]
events[-2:]
[{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='?', id='run-a81e4c0f-fc36-4d33-93bc-1ac25b9bb2c3')}, 'run_id': 'a81e4c0f-fc36-4d33-93bc-1ac25b9bb2c3', 'name': 'ChatAnthropic', 'tags': [], 'metadata': {}}, {'event': 'on_chat_model_end', 'data': {'output': AIMessageChunk(content='Hello! How can I assist you today?', id='run-a81e4c0f-fc36-4d33-93bc-1ac25b9bb2c3')}, 'run_id': 'a81e4c0f-fc36-4d33-93bc-1ac25b9bb2c3', 'name': 'ChatAnthropic', 'tags': [], 'metadata': {}}]
### Chain[](#chain "Direct link to Chain")
Let's revisit the example chain that parsed streaming JSON to explore the streaming events API.
chain = ( model | JsonOutputParser()) # Due to a bug in older versions of Langchain, JsonOutputParser did not stream results from some modelsevents = [ event async for event in chain.astream_events( "output a list of the countries france, spain and japan and their populations in JSON format. " 'Use a dict with an outer key of "countries" which contains a list of countries. ' "Each country should have the key `name` and `population`", version="v2", )]
If you examine the first few events, you'll notice that there are **3** different start events rather than **2**.
The three start events correspond to:
1. The chain (model + parser)
2. The model
3. The parser
events[:3]
[{'event': 'on_chain_start', 'data': {'input': 'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key `name` and `population`'}, 'name': 'RunnableSequence', 'tags': [], 'run_id': '4765006b-16e2-4b1d-a523-edd9fd64cb92', 'metadata': {}}, {'event': 'on_chat_model_start', 'data': {'input': {'messages': [[HumanMessage(content='output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key `name` and `population`')]]}}, 'name': 'ChatAnthropic', 'tags': ['seq:step:1'], 'run_id': '0320c234-7b52-4a14-ae4e-5f100949e589', 'metadata': {}}, {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='{', id='run-0320c234-7b52-4a14-ae4e-5f100949e589')}, 'run_id': '0320c234-7b52-4a14-ae4e-5f100949e589', 'name': 'ChatAnthropic', 'tags': ['seq:step:1'], 'metadata': {}}]
What do you think you'd see if you looked at the last 3 events? What about the middle?
Let's use this API to output the stream events from the model and the parser. We're ignoring start events, end events, and events from the chain.
num_events = 0async for event in chain.astream_events( "output a list of the countries france, spain and japan and their populations in JSON format. " 'Use a dict with an outer key of "countries" which contains a list of countries. ' "Each country should have the key `name` and `population`", version="v2",): kind = event["event"] if kind == "on_chat_model_stream": print( f"Chat model chunk: {repr(event['data']['chunk'].content)}", flush=True, ) if kind == "on_parser_stream": print(f"Parser chunk: {event['data']['chunk']}", flush=True) num_events += 1 if num_events > 30: # Truncate the output print("...") break
Chat model chunk: '{'Parser chunk: {}Chat model chunk: '\n 'Chat model chunk: '"'Chat model chunk: 'countries'Chat model chunk: '":'Chat model chunk: ' ['Parser chunk: {'countries': []}Chat model chunk: '\n 'Chat model chunk: '{'Parser chunk: {'countries': [{}]}Chat model chunk: '\n 'Chat model chunk: '"'Chat model chunk: 'name'Chat model chunk: '":'Chat model chunk: ' "'Parser chunk: {'countries': [{'name': ''}]}Chat model chunk: 'France'Parser chunk: {'countries': [{'name': 'France'}]}Chat model chunk: '",'Chat model chunk: '\n 'Chat model chunk: '"'Chat model chunk: 'population'...
Because both the model and the parser support streaming, we see streaming events from both components in real time! Kind of cool isn't it? 🦜
### Filtering Events[](#filtering-events "Direct link to Filtering Events")
Because this API produces so many events, it is useful to be able to filter on events.
You can filter by either component `name`, component `tags` or component `type`.
#### By Name[](#by-name "Direct link to By Name")
chain = model.with_config({"run_name": "model"}) | JsonOutputParser().with_config( {"run_name": "my_parser"})max_events = 0async for event in chain.astream_events( "output a list of the countries france, spain and japan and their populations in JSON format. " 'Use a dict with an outer key of "countries" which contains a list of countries. ' "Each country should have the key `name` and `population`", version="v2", include_names=["my_parser"],): print(event) max_events += 1 if max_events > 10: # Truncate output print("...") break
{'event': 'on_parser_start', 'data': {'input': 'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key `name` and `population`'}, 'name': 'my_parser', 'tags': ['seq:step:2'], 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'metadata': {}}{'event': 'on_parser_stream', 'data': {'chunk': {}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}{'event': 'on_parser_stream', 'data': {'chunk': {'countries': []}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}{'event': 'on_parser_stream', 'data': {'chunk': {'countries': [{}]}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}{'event': 'on_parser_stream', 'data': {'chunk': {'countries': [{'name': ''}]}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}{'event': 'on_parser_stream', 'data': {'chunk': {'countries': [{'name': 'France'}]}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}{'event': 'on_parser_stream', 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67}]}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}{'event': 'on_parser_stream', 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67413}]}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}{'event': 'on_parser_stream', 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67413000}]}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}{'event': 'on_parser_stream', 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67413000}, {}]}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}{'event': 'on_parser_stream', 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67413000}, {'name': ''}]}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}...
#### By Type[](#by-type "Direct link to By Type")
chain = model.with_config({"run_name": "model"}) | JsonOutputParser().with_config( {"run_name": "my_parser"})max_events = 0async for event in chain.astream_events( 'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key `name` and `population`', version="v2", include_types=["chat_model"],): print(event) max_events += 1 if max_events > 10: # Truncate output print("...") break
{'event': 'on_chat_model_start', 'data': {'input': 'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key `name` and `population`'}, 'name': 'model', 'tags': ['seq:step:1'], 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'metadata': {}}{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='{', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='\n ', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='"', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='countries', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='":', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' [', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='\n ', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='{', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='\n ', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='"', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}...
#### By Tags[](#by-tags "Direct link to By Tags")
caution
Tags are inherited by child components of a given runnable.
If you're using tags to filter, make sure that this is what you want.
chain = (model | JsonOutputParser()).with_config({"tags": ["my_chain"]})max_events = 0async for event in chain.astream_events( 'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key `name` and `population`', version="v2", include_tags=["my_chain"],): print(event) max_events += 1 if max_events > 10: # Truncate output print("...") break
{'event': 'on_chain_start', 'data': {'input': 'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key `name` and `population`'}, 'name': 'RunnableSequence', 'tags': ['my_chain'], 'run_id': 'fd68dd64-7a4d-4bdb-a0c2-ee592db0d024', 'metadata': {}}{'event': 'on_chat_model_start', 'data': {'input': {'messages': [[HumanMessage(content='output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key `name` and `population`')]]}}, 'name': 'ChatAnthropic', 'tags': ['seq:step:1', 'my_chain'], 'run_id': 'efd3c8af-4be5-4f6c-9327-e3f9865dd1cd', 'metadata': {}}{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='{', id='run-efd3c8af-4be5-4f6c-9327-e3f9865dd1cd')}, 'run_id': 'efd3c8af-4be5-4f6c-9327-e3f9865dd1cd', 'name': 'ChatAnthropic', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}}{'event': 'on_parser_start', 'data': {}, 'name': 'JsonOutputParser', 'tags': ['seq:step:2', 'my_chain'], 'run_id': 'afde30b9-beac-4b36-b4c7-dbbe423ddcdb', 'metadata': {}}{'event': 'on_parser_stream', 'data': {'chunk': {}}, 'run_id': 'afde30b9-beac-4b36-b4c7-dbbe423ddcdb', 'name': 'JsonOutputParser', 'tags': ['seq:step:2', 'my_chain'], 'metadata': {}}{'event': 'on_chain_stream', 'data': {'chunk': {}}, 'run_id': 'fd68dd64-7a4d-4bdb-a0c2-ee592db0d024', 'name': 'RunnableSequence', 'tags': ['my_chain'], 'metadata': {}}{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='\n ', id='run-efd3c8af-4be5-4f6c-9327-e3f9865dd1cd')}, 'run_id': 'efd3c8af-4be5-4f6c-9327-e3f9865dd1cd', 'name': 'ChatAnthropic', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}}{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='"', id='run-efd3c8af-4be5-4f6c-9327-e3f9865dd1cd')}, 'run_id': 'efd3c8af-4be5-4f6c-9327-e3f9865dd1cd', 'name': 'ChatAnthropic', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}}{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='countries', id='run-efd3c8af-4be5-4f6c-9327-e3f9865dd1cd')}, 'run_id': 'efd3c8af-4be5-4f6c-9327-e3f9865dd1cd', 'name': 'ChatAnthropic', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}}{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='":', id='run-efd3c8af-4be5-4f6c-9327-e3f9865dd1cd')}, 'run_id': 'efd3c8af-4be5-4f6c-9327-e3f9865dd1cd', 'name': 'ChatAnthropic', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}}{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' [', id='run-efd3c8af-4be5-4f6c-9327-e3f9865dd1cd')}, 'run_id': 'efd3c8af-4be5-4f6c-9327-e3f9865dd1cd', 'name': 'ChatAnthropic', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}}...
### Non-streaming components[](#non-streaming-components-1 "Direct link to Non-streaming components")
Remember how some components don't stream well because they don't operate on **input streams**?
While such components can break streaming of the final output when using `astream`, `astream_events` will still yield streaming events from intermediate steps that support streaming!
# Function that does not support streaming.# It operates on the finalized inputs rather than# operating on the input stream.def _extract_country_names(inputs): """A function that does not operate on input streams and breaks streaming.""" if not isinstance(inputs, dict): return "" if "countries" not in inputs: return "" countries = inputs["countries"] if not isinstance(countries, list): return "" country_names = [ country.get("name") for country in countries if isinstance(country, dict) ] return country_nameschain = ( model | JsonOutputParser() | _extract_country_names) # This parser only works with OpenAI right now
As expected, the `astream` API doesn't work correctly because `_extract_country_names` doesn't operate on streams.
async for chunk in chain.astream( "output a list of the countries france, spain and japan and their populations in JSON format. " 'Use a dict with an outer key of "countries" which contains a list of countries. ' "Each country should have the key `name` and `population`",): print(chunk, flush=True)
['France', 'Spain', 'Japan']
Now, let's confirm that with astream\_events we're still seeing streaming output from the model and the parser.
num_events = 0async for event in chain.astream_events( "output a list of the countries france, spain and japan and their populations in JSON format. " 'Use a dict with an outer key of "countries" which contains a list of countries. ' "Each country should have the key `name` and `population`", version="v2",): kind = event["event"] if kind == "on_chat_model_stream": print( f"Chat model chunk: {repr(event['data']['chunk'].content)}", flush=True, ) if kind == "on_parser_stream": print(f"Parser chunk: {event['data']['chunk']}", flush=True) num_events += 1 if num_events > 30: # Truncate the output print("...") break
Chat model chunk: '{'Parser chunk: {}Chat model chunk: '\n 'Chat model chunk: '"'Chat model chunk: 'countries'Chat model chunk: '":'Chat model chunk: ' ['Parser chunk: {'countries': []}Chat model chunk: '\n 'Chat model chunk: '{'Parser chunk: {'countries': [{}]}Chat model chunk: '\n 'Chat model chunk: '"'Chat model chunk: 'name'Chat model chunk: '":'Chat model chunk: ' "'Parser chunk: {'countries': [{'name': ''}]}Chat model chunk: 'France'Parser chunk: {'countries': [{'name': 'France'}]}Chat model chunk: '",'Chat model chunk: '\n 'Chat model chunk: '"'Chat model chunk: 'population'Chat model chunk: '":'Chat model chunk: ' 'Chat model chunk: '67'Parser chunk: {'countries': [{'name': 'France', 'population': 67}]}...
### Propagating Callbacks[](#propagating-callbacks "Direct link to Propagating Callbacks")
caution
If you're invoking runnables inside your tools, you need to propagate callbacks to the runnable; otherwise, no stream events will be generated.
note
When using `RunnableLambdas` or `@chain` decorator, callbacks are propagated automatically behind the scenes.
from langchain_core.runnables import RunnableLambdafrom langchain_core.tools import tooldef reverse_word(word: str): return word[::-1]reverse_word = RunnableLambda(reverse_word)@tooldef bad_tool(word: str): """Custom tool that doesn't propagate callbacks.""" return reverse_word.invoke(word)async for event in bad_tool.astream_events("hello", version="v2"): print(event)
**API Reference:**[RunnableLambda](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableLambda.html) | [tool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.tool.html)
{'event': 'on_tool_start', 'data': {'input': 'hello'}, 'name': 'bad_tool', 'tags': [], 'run_id': 'ea900472-a8f7-425d-b627-facdef936ee8', 'metadata': {}}{'event': 'on_chain_start', 'data': {'input': 'hello'}, 'name': 'reverse_word', 'tags': [], 'run_id': '77b01284-0515-48f4-8d7c-eb27c1882f86', 'metadata': {}}{'event': 'on_chain_end', 'data': {'output': 'olleh', 'input': 'hello'}, 'run_id': '77b01284-0515-48f4-8d7c-eb27c1882f86', 'name': 'reverse_word', 'tags': [], 'metadata': {}}{'event': 'on_tool_end', 'data': {'output': 'olleh'}, 'run_id': 'ea900472-a8f7-425d-b627-facdef936ee8', 'name': 'bad_tool', 'tags': [], 'metadata': {}}
Here's a re-implementation that does propagate callbacks correctly. You'll notice that now we're getting events from the `reverse_word` runnable as well.
@tooldef correct_tool(word: str, callbacks): """A tool that correctly propagates callbacks.""" return reverse_word.invoke(word, {"callbacks": callbacks})async for event in correct_tool.astream_events("hello", version="v2"): print(event)
{'event': 'on_tool_start', 'data': {'input': 'hello'}, 'name': 'correct_tool', 'tags': [], 'run_id': 'd5ea83b9-9278-49cc-9f1d-aa302d671040', 'metadata': {}}{'event': 'on_chain_start', 'data': {'input': 'hello'}, 'name': 'reverse_word', 'tags': [], 'run_id': '44dafbf4-2f87-412b-ae0e-9f71713810df', 'metadata': {}}{'event': 'on_chain_end', 'data': {'output': 'olleh', 'input': 'hello'}, 'run_id': '44dafbf4-2f87-412b-ae0e-9f71713810df', 'name': 'reverse_word', 'tags': [], 'metadata': {}}{'event': 'on_tool_end', 'data': {'output': 'olleh'}, 'run_id': 'd5ea83b9-9278-49cc-9f1d-aa302d671040', 'name': 'correct_tool', 'tags': [], 'metadata': {}}
If you're invoking runnables from within Runnable Lambdas or `@chains`, then callbacks will be passed automatically on your behalf.
from langchain_core.runnables import RunnableLambdaasync def reverse_and_double(word: str): return await reverse_word.ainvoke(word) * 2reverse_and_double = RunnableLambda(reverse_and_double)await reverse_and_double.ainvoke("1234")async for event in reverse_and_double.astream_events("1234", version="v2"): print(event)
**API Reference:**[RunnableLambda](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableLambda.html)
{'event': 'on_chain_start', 'data': {'input': '1234'}, 'name': 'reverse_and_double', 'tags': [], 'run_id': '03b0e6a1-3e60-42fc-8373-1e7829198d80', 'metadata': {}}{'event': 'on_chain_start', 'data': {'input': '1234'}, 'name': 'reverse_word', 'tags': [], 'run_id': '5cf26fc8-840b-4642-98ed-623dda28707a', 'metadata': {}}{'event': 'on_chain_end', 'data': {'output': '4321', 'input': '1234'}, 'run_id': '5cf26fc8-840b-4642-98ed-623dda28707a', 'name': 'reverse_word', 'tags': [], 'metadata': {}}{'event': 'on_chain_stream', 'data': {'chunk': '43214321'}, 'run_id': '03b0e6a1-3e60-42fc-8373-1e7829198d80', 'name': 'reverse_and_double', 'tags': [], 'metadata': {}}{'event': 'on_chain_end', 'data': {'output': '43214321'}, 'run_id': '03b0e6a1-3e60-42fc-8373-1e7829198d80', 'name': 'reverse_and_double', 'tags': [], 'metadata': {}}
And with the `@chain` decorator:
from langchain_core.runnables import chain@chainasync def reverse_and_double(word: str): return await reverse_word.ainvoke(word) * 2await reverse_and_double.ainvoke("1234")async for event in reverse_and_double.astream_events("1234", version="v2"): print(event)
**API Reference:**[chain](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.chain.html)
{'event': 'on_chain_start', 'data': {'input': '1234'}, 'name': 'reverse_and_double', 'tags': [], 'run_id': '1bfcaedc-f4aa-4d8e-beee-9bba6ef17008', 'metadata': {}}{'event': 'on_chain_start', 'data': {'input': '1234'}, 'name': 'reverse_word', 'tags': [], 'run_id': '64fc99f0-5d7d-442b-b4f5-4537129f67d1', 'metadata': {}}{'event': 'on_chain_end', 'data': {'output': '4321', 'input': '1234'}, 'run_id': '64fc99f0-5d7d-442b-b4f5-4537129f67d1', 'name': 'reverse_word', 'tags': [], 'metadata': {}}{'event': 'on_chain_stream', 'data': {'chunk': '43214321'}, 'run_id': '1bfcaedc-f4aa-4d8e-beee-9bba6ef17008', 'name': 'reverse_and_double', 'tags': [], 'metadata': {}}{'event': 'on_chain_end', 'data': {'output': '43214321'}, 'run_id': '1bfcaedc-f4aa-4d8e-beee-9bba6ef17008', 'name': 'reverse_and_double', 'tags': [], 'metadata': {}}
Next steps[](#next-steps "Direct link to Next steps")
------------------------------------------------------
Now you've learned some ways to stream both final outputs and internal steps with LangChain.
To learn more, check out the other how-to guides in this section, or the [conceptual guide on Langchain Expression Language](/v0.2/docs/concepts/#langchain-expression-language/).
https://python.langchain.com/v0.2/docs/how_to/tool_calling_parallel/ | * [](/v0.2/)
* [How-to guides](/v0.2/docs/how_to/)
* How to disable parallel tool calling
On this page
How to disable parallel tool calling
====================================
### Disabling parallel tool calling (OpenAI only)[](#disabling-parallel-tool-calling-openai-only "Direct link to Disabling parallel tool calling (OpenAI only)")
OpenAI models perform tool calling in parallel by default. That means that if we ask a question like "What is the weather in Tokyo, New York, and Chicago?" and we have a tool for getting the weather, the model will call the tool 3 times in parallel. We can force it to call only a single tool at a time by setting the `parallel_tool_calls` parameter to `False`.
First let's set up our tools and model:
from langchain_core.tools import tool@tooldef add(a: int, b: int) -> int: """Adds a and b.""" return a + b@tooldef multiply(a: int, b: int) -> int: """Multiplies a and b.""" return a * btools = [add, multiply]
**API Reference:**[tool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.tool.html)
import osfrom getpass import getpassfrom langchain_openai import ChatOpenAIos.environ["OPENAI_API_KEY"] = getpass()llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)
**API Reference:**[ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html)
Now let's show a quick example of how disabling parallel tool calls works:
llm_with_tools = llm.bind_tools(tools, parallel_tool_calls=False)llm_with_tools.invoke("Please call the first tool two times").tool_calls
[{'name': 'add', 'args': {'a': 2, 'b': 2}, 'id': 'call_Hh4JOTCDM85Sm9Pr84VKrWu5'}]
As we can see, even though we explicitly told the model to call a tool twice, disabling parallel tool calls constrained it to a single call.
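For contrast, leaving `parallel_tool_calls` at its default allows the model to emit several tool calls in a single response. A minimal sketch, assuming the `llm` and `tools` defined above (the variable name and question are illustrative, and the exact calls are model-dependent):

```python
# Default behavior: parallel tool calls are allowed, so a multi-part question
# typically produces multiple entries in .tool_calls.
llm_with_parallel_tools = llm.bind_tools(tools)

ai_msg = llm_with_parallel_tools.invoke("What is 2 + 2? Also, what is 3 * 4?")
for tool_call in ai_msg.tool_calls:
    print(tool_call["name"], tool_call["args"])
# Expected: one "add" call and one "multiply" call in the same response.
```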
https://python.langchain.com/v0.2/docs/how_to/tool_choice/ | * [](/v0.2/)
* [How-to guides](/v0.2/docs/how_to/)
* How to force tool calling behavior
How to force tool calling behavior
==================================
To force our LLM to select a specific tool, we can use the `tool_choice` parameter to ensure certain behavior. First, let's define our tools and model:
from langchain_core.tools import tool@tooldef add(a: int, b: int) -> int: """Adds a and b.""" return a + b@tooldef multiply(a: int, b: int) -> int: """Multiplies a and b.""" return a * btools = [add, multiply]
**API Reference:**[tool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.tool.html)
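The examples below also assume an `llm` chat model to bind these tools to. A typical setup, mirroring the other tool-calling guides in this section, might look like this:

```python
import os
from getpass import getpass

from langchain_openai import ChatOpenAI

os.environ["OPENAI_API_KEY"] = getpass()
llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)
```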
For example, we can force our model to call the multiply tool by using the following code:
llm_forced_to_multiply = llm.bind_tools(tools, tool_choice="Multiply")llm_forced_to_multiply.invoke("what is 2 + 4")
AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_9cViskmLvPnHjXk9tbVla5HA', 'function': {'arguments': '{"a":2,"b":4}', 'name': 'Multiply'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 9, 'prompt_tokens': 103, 'total_tokens': 112}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-095b827e-2bdd-43bb-8897-c843f4504883-0', tool_calls=[{'name': 'Multiply', 'args': {'a': 2, 'b': 4}, 'id': 'call_9cViskmLvPnHjXk9tbVla5HA'}], usage_metadata={'input_tokens': 103, 'output_tokens': 9, 'total_tokens': 112})
Even if we pass it something that doesn't require multiplication, it will still call the tool!
We can also force our model to select at least one of our tools by passing the "any" (or "required", which is OpenAI-specific) keyword to the `tool_choice` parameter.
llm_forced_to_use_tool = llm.bind_tools(tools, tool_choice="any")llm_forced_to_use_tool.invoke("What day is today?")
AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_mCSiJntCwHJUBfaHZVUB2D8W', 'function': {'arguments': '{"a":1,"b":2}', 'name': 'Add'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 15, 'prompt_tokens': 94, 'total_tokens': 109}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-28f75260-9900-4bed-8cd3-f1579abb65e5-0', tool_calls=[{'name': 'Add', 'args': {'a': 1, 'b': 2}, 'id': 'call_mCSiJntCwHJUBfaHZVUB2D8W'}], usage_metadata={'input_tokens': 94, 'output_tokens': 15, 'total_tokens': 109})
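In both cases the model only *requests* a tool call; nothing has been executed yet. A minimal sketch of running whichever tool the model picked, assuming the `add` and `multiply` tools and the `llm_forced_to_use_tool` chain above (the lookup-by-name pattern is illustrative):

```python
ai_msg = llm_forced_to_use_tool.invoke("What day is today?")

# Look up the requested tool by name and invoke it with the generated arguments.
tool_call = ai_msg.tool_calls[0]
selected_tool = {"add": add, "multiply": multiply}[tool_call["name"].lower()]
print(selected_tool.invoke(tool_call["args"]))  # e.g. 3 for the Add call above
```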
https://python.langchain.com/v0.2/docs/how_to/tool_results_pass_to_model/ | * [](/v0.2/)
* [How-to guides](/v0.2/docs/how_to/)
* How to pass tool outputs to the model
How to pass tool outputs to the model
=====================================
If we're using the model-generated tool invocations to actually call tools and want to pass the tool results back to the model, we can do so using `ToolMessage`s. First, let's define our tools and our model.
from langchain_core.tools import tool@tooldef add(a: int, b: int) -> int: """Adds a and b.""" return a + b@tooldef multiply(a: int, b: int) -> int: """Multiplies a and b.""" return a * btools = [add, multiply]
**API Reference:**[tool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.tool.html)
import osfrom getpass import getpassfrom langchain_openai import ChatOpenAIos.environ["OPENAI_API_KEY"] = getpass()llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)llm_with_tools = llm.bind_tools(tools)
**API Reference:**[ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html)
Now we can use `ToolMessage` to pass back the output of the tool calls to the model.
from langchain_core.messages import HumanMessage, ToolMessagequery = "What is 3 * 12? Also, what is 11 + 49?"messages = [HumanMessage(query)]ai_msg = llm_with_tools.invoke(messages)messages.append(ai_msg)for tool_call in ai_msg.tool_calls: selected_tool = {"add": add, "multiply": multiply}[tool_call["name"].lower()] tool_output = selected_tool.invoke(tool_call["args"]) messages.append(ToolMessage(tool_output, tool_call_id=tool_call["id"]))messages
**API Reference:**[HumanMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.human.HumanMessage.html) | [ToolMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.tool.ToolMessage.html)
[HumanMessage(content='What is 3 * 12? Also, what is 11 + 49?'), AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_svc2GLSxNFALbaCAbSjMI9J8', 'function': {'arguments': '{"a": 3, "b": 12}', 'name': 'Multiply'}, 'type': 'function'}, {'id': 'call_r8jxte3zW6h3MEGV3zH2qzFh', 'function': {'arguments': '{"a": 11, "b": 49}', 'name': 'Add'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 50, 'prompt_tokens': 105, 'total_tokens': 155}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': 'fp_d9767fc5b9', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-a79ad1dd-95f1-4a46-b688-4c83f327a7b3-0', tool_calls=[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_svc2GLSxNFALbaCAbSjMI9J8'}, {'name': 'Add', 'args': {'a': 11, 'b': 49}, 'id': 'call_r8jxte3zW6h3MEGV3zH2qzFh'}]), ToolMessage(content='36', tool_call_id='call_svc2GLSxNFALbaCAbSjMI9J8'), ToolMessage(content='60', tool_call_id='call_r8jxte3zW6h3MEGV3zH2qzFh')]
llm_with_tools.invoke(messages)
AIMessage(content='3 * 12 is 36 and 11 + 49 is 60.', response_metadata={'token_usage': {'completion_tokens': 18, 'prompt_tokens': 171, 'total_tokens': 189}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': 'fp_d9767fc5b9', 'finish_reason': 'stop', 'logprobs': None}, id='run-20b52149-e00d-48ea-97cf-f8de7a255f8c-0')
Note that we pass back the same `id` in the `ToolMessage` as the one we received from the model, which helps the model match tool responses with tool calls.
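If you run this round trip often, it can be convenient to wrap it in a small helper. A minimal sketch, assuming the `llm_with_tools`, `add`, and `multiply` objects defined above (the helper name is illustrative):

```python
from langchain_core.messages import HumanMessage, ToolMessage


def answer_with_tools(query: str) -> str:
    """Run one tool-calling round trip and return the model's final answer."""
    messages = [HumanMessage(query)]
    ai_msg = llm_with_tools.invoke(messages)
    messages.append(ai_msg)
    for tool_call in ai_msg.tool_calls:
        selected_tool = {"add": add, "multiply": multiply}[tool_call["name"].lower()]
        tool_output = selected_tool.invoke(tool_call["args"])
        # Reuse the id from the tool call so the model can match result to request.
        messages.append(ToolMessage(str(tool_output), tool_call_id=tool_call["id"]))
    return llm_with_tools.invoke(messages).content


print(answer_with_tools("What is 3 * 12? Also, what is 11 + 49?"))
```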
https://python.langchain.com/v0.2/docs/how_to/tool_runtime/ | * [](/v0.2/)
* [How-to guides](/v0.2/docs/how_to/)
* How to pass run time values to a tool
How to pass run time values to a tool
=====================================
Prerequisites
This guide assumes familiarity with the following concepts:
* [Chat models](/v0.2/docs/concepts/#chat-models)
* [LangChain Tools](/v0.2/docs/concepts/#tools)
* [How to create tools](/v0.2/docs/how_to/custom_tools/)
* [How to use a model to call tools](https://python.langchain.com/v0.2/docs/how_to/tool_calling)
Supported models
This how-to guide uses models with native tool calling capability. You can find a [list of all models that support tool calling](/v0.2/docs/integrations/chat/).
Using with LangGraph
If you're using LangGraph, please refer to [this how-to guide](https://langchain-ai.github.io/langgraph/how-tos/pass-run-time-values-to-tools/) which shows how to create an agent that keeps track of a given user's favorite pets.
You may need to bind values to a tool that are only known at runtime. For example, the tool logic may require using the ID of the user who made the request.
Most of the time, such values should not be controlled by the LLM. In fact, allowing the LLM to control the user ID may lead to a security risk.
Instead, the LLM should only control the parameters of the tool that are meant to be controlled by the LLM, while other parameters (such as user ID) should be fixed by the application logic.
This how-to guide shows a simple design pattern that creates the tools dynamically at run time and binds the appropriate values to them.
We can bind them to chat models as follows:
* OpenAI
* Anthropic
* Azure
* Google
* Cohere
* FireworksAI
* Groq
* MistralAI
* TogetherAI
pip install -qU langchain-openai
import getpassimport osos.environ["OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAIllm = ChatOpenAI(model="gpt-3.5-turbo-0125")
pip install -qU langchain-anthropic
import getpassimport osos.environ["ANTHROPIC_API_KEY"] = getpass.getpass()from langchain_anthropic import ChatAnthropicllm = ChatAnthropic(model="claude-3-sonnet-20240229")
pip install -qU langchain-openai
import getpassimport osos.environ["AZURE_OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import AzureChatOpenAIllm = AzureChatOpenAI( azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"], openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],)
pip install -qU langchain-google-vertexai
import getpassimport osos.environ["GOOGLE_API_KEY"] = getpass.getpass()from langchain_google_vertexai import ChatVertexAIllm = ChatVertexAI(model="gemini-pro")
pip install -qU langchain-cohere
import getpassimport osos.environ["COHERE_API_KEY"] = getpass.getpass()from langchain_cohere import ChatCoherellm = ChatCohere(model="command-r")
pip install -qU langchain-fireworks
import getpassimport osos.environ["FIREWORKS_API_KEY"] = getpass.getpass()from langchain_fireworks import ChatFireworksllm = ChatFireworks(model="accounts/fireworks/models/firefunction-v1", temperature=0)
pip install -qU langchain-groq
import getpassimport osos.environ["GROQ_API_KEY"] = getpass.getpass()from langchain_groq import ChatGroqllm = ChatGroq(model="llama3-8b-8192")
pip install -qU langchain-mistralai
import getpassimport osos.environ["MISTRAL_API_KEY"] = getpass.getpass()from langchain_mistralai import ChatMistralAIllm = ChatMistralAI(model="mistral-large-latest")
pip install -qU langchain-openai
import getpassimport osos.environ["TOGETHER_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAIllm = ChatOpenAI( base_url="https://api.together.xyz/v1", api_key=os.environ["TOGETHER_API_KEY"], model="mistralai/Mixtral-8x7B-Instruct-v0.1",)
Passing request time information
================================
The idea is to create the tools dynamically at request time and bind the appropriate information to them. For example, this information may be the user ID as resolved from the request itself.
from typing import Listfrom langchain_core.output_parsers import JsonOutputParserfrom langchain_core.tools import BaseTool, tool
**API Reference:**[JsonOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.json.JsonOutputParser.html) | [BaseTool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.BaseTool.html) | [tool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.tool.html)
user_to_pets = {}def generate_tools_for_user(user_id: str) -> List[BaseTool]: """Generate a set of tools that have a user id associated with them.""" @tool def update_favorite_pets(pets: List[str]) -> None: """Add the list of favorite pets.""" user_to_pets[user_id] = pets @tool def delete_favorite_pets() -> None: """Delete the list of favorite pets.""" if user_id in user_to_pets: del user_to_pets[user_id] @tool def list_favorite_pets() -> None: """List favorite pets if any.""" return user_to_pets.get(user_id, []) return [update_favorite_pets, delete_favorite_pets, list_favorite_pets]
Verify that the tools work correctly:
update_pets, delete_pets, list_pets = generate_tools_for_user("eugene")update_pets.invoke({"pets": ["cat", "dog"]})print(user_to_pets)print(list_pets.invoke({}))
{'eugene': ['cat', 'dog']}['cat', 'dog']
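We might also exercise the other two generated tools to confirm they are scoped to the same user. A quick sketch, continuing from the objects above:

```python
delete_pets.invoke({})       # removes the "eugene" entry from user_to_pets
print(user_to_pets)          # {}
print(list_pets.invoke({}))  # []
```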
from langchain_core.prompts import ChatPromptTemplatedef handle_run_time_request(user_id: str, query: str): """Handle run time request.""" tools = generate_tools_for_user(user_id) llm_with_tools = llm.bind_tools(tools) prompt = ChatPromptTemplate.from_messages( [("system", "You are a helpful assistant."), ("human", "{query}")], ) chain = prompt | llm_with_tools return chain.invoke({"query": query})
**API Reference:**[ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html)
This code will allow the LLM to invoke the tools, but the LLM is **unaware** that a **user ID** even exists!
ai_message = handle_run_time_request( "eugene", "my favorite animals are cats and parrots.")ai_message.tool_calls
[{'name': 'update_favorite_pets', 'args': {'pets': ['cats', 'parrots']}, 'id': 'call_jJvjPXsNbFO5MMgW0q84iqCN'}]
info
Chat models only output requests to invoke tools, they don't actually invoke the underlying tools.
To see how to invoke the tools, please refer to [how to use a model to call tools](https://python.langchain.com/v0.2/docs/how_to/tool_calling).
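To close the loop in an application, the same user-scoped tools can be used to execute the calls the model requested. The sketch below is not part of the original example; it assumes the `llm`, `generate_tools_for_user`, and `user_to_pets` objects defined above, and the helper name is illustrative:

```python
def handle_and_execute(user_id: str, query: str) -> None:
    """Resolve the user's tools, let the model pick tool calls, then run them."""
    tools = generate_tools_for_user(user_id)
    tools_by_name = {t.name: t for t in tools}
    ai_msg = llm.bind_tools(tools).invoke(query)
    for tool_call in ai_msg.tool_calls:
        # The user id is baked into each tool; the model never sees it.
        tools_by_name[tool_call["name"]].invoke(tool_call["args"])


handle_and_execute("eugene", "my favorite animals are cats and parrots")
print(user_to_pets)  # {'eugene': ['cats', 'parrots']} (model-dependent)
```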
https://python.langchain.com/v0.2/docs/how_to/tool_streaming/ | * [](/v0.2/)
* [How-to guides](/v0.2/docs/how_to/)
* How to stream tool calls
How to stream tool calls
========================
When tools are called in a streaming context, [message chunks](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessageChunk.html#langchain_core.messages.ai.AIMessageChunk) will be populated with [tool call chunk](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.tool.ToolCallChunk.html#langchain_core.messages.tool.ToolCallChunk) objects in a list via the `.tool_call_chunks` attribute. A `ToolCallChunk` includes optional string fields for the tool `name`, `args`, and `id`, and includes an optional integer field `index` that can be used to join chunks together. Fields are optional because portions of a tool call may be streamed across different chunks (e.g., a chunk that includes a substring of the arguments may have null values for the tool name and id).
Because message chunks inherit from their parent message class, an [AIMessageChunk](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessageChunk.html#langchain_core.messages.ai.AIMessageChunk) with tool call chunks will also include `.tool_calls` and `.invalid_tool_calls` fields. These fields are parsed best-effort from the message's tool call chunks.
Note that not all providers currently support streaming for tool calls. Before we start, let's define our tools and our model.
from langchain_core.tools import tool@tooldef add(a: int, b: int) -> int: """Adds a and b.""" return a + b@tooldef multiply(a: int, b: int) -> int: """Multiplies a and b.""" return a * btools = [add, multiply]
**API Reference:**[tool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.tool.html)
import osfrom getpass import getpassfrom langchain_openai import ChatOpenAIos.environ["OPENAI_API_KEY"] = getpass()llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)llm_with_tools = llm.bind_tools(tools)
**API Reference:**[ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html)
Now let's define our query and stream our output:
query = "What is 3 * 12? Also, what is 11 + 49?"async for chunk in llm_with_tools.astream(query): print(chunk.tool_call_chunks)
[][{'name': 'Multiply', 'args': '', 'id': 'call_3aQwTP9CYlFxwOvQZPHDu6wL', 'index': 0}][{'name': None, 'args': '{"a"', 'id': None, 'index': 0}][{'name': None, 'args': ': 3, ', 'id': None, 'index': 0}][{'name': None, 'args': '"b": 1', 'id': None, 'index': 0}][{'name': None, 'args': '2}', 'id': None, 'index': 0}][{'name': 'Add', 'args': '', 'id': 'call_SQUoSsJz2p9Kx2x73GOgN1ja', 'index': 1}][{'name': None, 'args': '{"a"', 'id': None, 'index': 1}][{'name': None, 'args': ': 11,', 'id': None, 'index': 1}][{'name': None, 'args': ' "b": ', 'id': None, 'index': 1}][{'name': None, 'args': '49}', 'id': None, 'index': 1}][]
Note that adding message chunks will merge their corresponding tool call chunks. This is the principle by which LangChain's various [tool output parsers](/v0.2/docs/how_to/output_parser_structured/) support streaming.
For example, below we accumulate tool call chunks:
first = Trueasync for chunk in llm_with_tools.astream(query): if first: gathered = chunk first = False else: gathered = gathered + chunk print(gathered.tool_call_chunks)
[][{'name': 'Multiply', 'args': '', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}][{'name': 'Multiply', 'args': '{"a"', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}][{'name': 'Multiply', 'args': '{"a": 3, ', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}][{'name': 'Multiply', 'args': '{"a": 3, "b": 1', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}][{'name': 'Multiply', 'args': '{"a": 3, "b": 12}', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}][{'name': 'Multiply', 'args': '{"a": 3, "b": 12}', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}, {'name': 'Add', 'args': '', 'id': 'call_b4iMiB3chGNGqbt5SjqqD2Wh', 'index': 1}][{'name': 'Multiply', 'args': '{"a": 3, "b": 12}', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}, {'name': 'Add', 'args': '{"a"', 'id': 'call_b4iMiB3chGNGqbt5SjqqD2Wh', 'index': 1}][{'name': 'Multiply', 'args': '{"a": 3, "b": 12}', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}, {'name': 'Add', 'args': '{"a": 11,', 'id': 'call_b4iMiB3chGNGqbt5SjqqD2Wh', 'index': 1}][{'name': 'Multiply', 'args': '{"a": 3, "b": 12}', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}, {'name': 'Add', 'args': '{"a": 11, "b": ', 'id': 'call_b4iMiB3chGNGqbt5SjqqD2Wh', 'index': 1}][{'name': 'Multiply', 'args': '{"a": 3, "b": 12}', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}, {'name': 'Add', 'args': '{"a": 11, "b": 49}', 'id': 'call_b4iMiB3chGNGqbt5SjqqD2Wh', 'index': 1}][{'name': 'Multiply', 'args': '{"a": 3, "b": 12}', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}, {'name': 'Add', 'args': '{"a": 11, "b": 49}', 'id': 'call_b4iMiB3chGNGqbt5SjqqD2Wh', 'index': 1}]
print(type(gathered.tool_call_chunks[0]["args"]))
<class 'str'>
And below we accumulate tool calls to demonstrate partial parsing:
first = Trueasync for chunk in llm_with_tools.astream(query): if first: gathered = chunk first = False else: gathered = gathered + chunk print(gathered.tool_calls)
[][][{'name': 'Multiply', 'args': {}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}][{'name': 'Multiply', 'args': {'a': 3}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}][{'name': 'Multiply', 'args': {'a': 3, 'b': 1}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}][{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}][{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}][{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}, {'name': 'Add', 'args': {}, 'id': 'call_54Hx3DGjZitFlEjgMe1DYonh'}][{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}, {'name': 'Add', 'args': {'a': 11}, 'id': 'call_54Hx3DGjZitFlEjgMe1DYonh'}][{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}, {'name': 'Add', 'args': {'a': 11}, 'id': 'call_54Hx3DGjZitFlEjgMe1DYonh'}][{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}, {'name': 'Add', 'args': {'a': 11, 'b': 49}, 'id': 'call_54Hx3DGjZitFlEjgMe1DYonh'}][{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}, {'name': 'Add', 'args': {'a': 11, 'b': 49}, 'id': 'call_54Hx3DGjZitFlEjgMe1DYonh'}]
print(type(gathered.tool_calls[0]["args"]))
<class 'dict'>
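Because `gathered.tool_calls` is fully parsed once the stream is exhausted, we can execute the accumulated calls just as in the non-streaming case. A minimal sketch, assuming the `add` and `multiply` tools and the `gathered` message from above:

```python
tools_by_name = {"add": add, "multiply": multiply}
for tool_call in gathered.tool_calls:
    # Each entry now has a complete name and a parsed args dict.
    result = tools_by_name[tool_call["name"].lower()].invoke(tool_call["args"])
    print(tool_call["name"], tool_call["args"], "->", result)
# Multiply {'a': 3, 'b': 12} -> 36
# Add {'a': 11, 'b': 49} -> 60
```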
https://python.langchain.com/v0.2/docs/how_to/tools_as_openai_functions/ | * [](/v0.2/)
* [How-to guides](/v0.2/docs/how_to/)
* How to convert tools to OpenAI Functions
How to convert tools to OpenAI Functions
========================================
This notebook goes over how to use LangChain tools as OpenAI functions.
%pip install -qU langchain-community langchain-openai
from langchain_community.tools import MoveFileToolfrom langchain_core.messages import HumanMessagefrom langchain_core.utils.function_calling import convert_to_openai_functionfrom langchain_openai import ChatOpenAI
**API Reference:**[MoveFileTool](https://api.python.langchain.com/en/latest/tools/langchain_community.tools.file_management.move.MoveFileTool.html) | [HumanMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.human.HumanMessage.html) | [convert\_to\_openai\_function](https://api.python.langchain.com/en/latest/utils/langchain_core.utils.function_calling.convert_to_openai_function.html) | [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html)
model = ChatOpenAI(model="gpt-3.5-turbo")
tools = [MoveFileTool()]functions = [convert_to_openai_function(t) for t in tools]
functions[0]
{'name': 'move_file', 'description': 'Move or rename a file from one location to another', 'parameters': {'type': 'object', 'properties': {'source_path': {'description': 'Path of the file to move', 'type': 'string'}, 'destination_path': {'description': 'New path for the moved file', 'type': 'string'}}, 'required': ['source_path', 'destination_path']}}
message = model.invoke( [HumanMessage(content="move file foo to bar")], functions=functions)
message
AIMessage(content='', additional_kwargs={'function_call': {'arguments': '{\n "source_path": "foo",\n "destination_path": "bar"\n}', 'name': 'move_file'}})
message.additional_kwargs["function_call"]
{'name': 'move_file', 'arguments': '{\n "source_path": "foo",\n "destination_path": "bar"\n}'}
With OpenAI chat models we can also automatically bind and convert function-like objects with `bind_functions`:
model_with_functions = model.bind_functions(tools)model_with_functions.invoke([HumanMessage(content="move file foo to bar")])
AIMessage(content='', additional_kwargs={'function_call': {'arguments': '{\n "source_path": "foo",\n "destination_path": "bar"\n}', 'name': 'move_file'}})
Or we can use the updated OpenAI API, which uses `tools` and `tool_choice` instead of `functions` and `function_call`, via `ChatOpenAI.bind_tools`:
model_with_tools = model.bind_tools(tools)model_with_tools.invoke([HumanMessage(content="move file foo to bar")])
AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_btkY3xV71cEVAOHnNa5qwo44', 'function': {'arguments': '{\n "source_path": "foo",\n "destination_path": "bar"\n}', 'name': 'move_file'}, 'type': 'function'}]})
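If we want to act on these responses, note that the `arguments` payload in the function-calling format is a JSON string that still needs to be parsed, whereas the `bind_tools` response should expose already-parsed arguments via `.tool_calls`. A small sketch using the `message` and `model_with_tools` objects above:

```python
import json

from langchain_core.messages import HumanMessage

# Function-calling format: arguments arrive as a JSON string.
function_call = message.additional_kwargs["function_call"]
args = json.loads(function_call["arguments"])
print(function_call["name"], args)
# move_file {'source_path': 'foo', 'destination_path': 'bar'}

# Tool-calling format: arguments are already parsed into a dict.
tool_msg = model_with_tools.invoke([HumanMessage(content="move file foo to bar")])
print(tool_msg.tool_calls[0]["args"])
```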
https://python.langchain.com/v0.2/docs/how_to/tools_few_shot/ | * [](/v0.2/)
* [How-to guides](/v0.2/docs/how_to/)
* How to use few-shot prompting with tool calling
How to use few-shot prompting with tool calling
===============================================
For more complex tool use, it's very useful to add few-shot examples to the prompt. We can do this by adding `AIMessage`s with `ToolCall`s and corresponding `ToolMessage`s to our prompt.
First let's define our tools and model.
from langchain_core.tools import tool@tooldef add(a: int, b: int) -> int: """Adds a and b.""" return a + b@tooldef multiply(a: int, b: int) -> int: """Multiplies a and b.""" return a * btools = [add, multiply]
**API Reference:**[tool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.tool.html)
import osfrom getpass import getpassfrom langchain_openai import ChatOpenAIos.environ["OPENAI_API_KEY"] = getpass()llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)llm_with_tools = llm.bind_tools(tools)
**API Reference:**[ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html)
Let's run our model. Notice that even with some special instructions, it can get tripped up by the order of operations.
llm_with_tools.invoke( "Whats 119 times 8 minus 20. Don't do any math yourself, only use tools for math. Respect order of operations").tool_calls
[{'name': 'Multiply', 'args': {'a': 119, 'b': 8}, 'id': 'call_T88XN6ECucTgbXXkyDeC2CQj'}, {'name': 'Add', 'args': {'a': 952, 'b': -20}, 'id': 'call_licdlmGsRqzup8rhqJSb1yZ4'}]
The model shouldn't be trying to add anything yet, since it technically can't know the results of 119 \* 8 yet.
By adding a prompt with some examples we can correct this behavior:
from langchain_core.messages import AIMessage, HumanMessage, ToolMessagefrom langchain_core.prompts import ChatPromptTemplatefrom langchain_core.runnables import RunnablePassthroughexamples = [ HumanMessage( "What's the product of 317253 and 128472 plus four", name="example_user" ), AIMessage( "", name="example_assistant", tool_calls=[ {"name": "Multiply", "args": {"x": 317253, "y": 128472}, "id": "1"} ], ), ToolMessage("16505054784", tool_call_id="1"), AIMessage( "", name="example_assistant", tool_calls=[{"name": "Add", "args": {"x": 16505054784, "y": 4}, "id": "2"}], ), ToolMessage("16505054788", tool_call_id="2"), AIMessage( "The product of 317253 and 128472 plus four is 16505054788", name="example_assistant", ),]system = """You are bad at math but are an expert at using a calculator. Use past tool usage as an example of how to correctly use the tools."""few_shot_prompt = ChatPromptTemplate.from_messages( [ ("system", system), *examples, ("human", "{query}"), ])chain = {"query": RunnablePassthrough()} | few_shot_prompt | llm_with_toolschain.invoke("Whats 119 times 8 minus 20").tool_calls
**API Reference:**[AIMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html) | [HumanMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.human.HumanMessage.html) | [ToolMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.tool.ToolMessage.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) | [RunnablePassthrough](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html)
[{'name': 'Multiply', 'args': {'a': 119, 'b': 8}, 'id': 'call_9MvuwQqg7dlJupJcoTWiEsDo'}]
And we get the correct output this time.
Here's what the [LangSmith trace](https://smith.langchain.com/public/f70550a1-585f-4c9d-a643-13148ab1616f/r) looks like.
https://python.langchain.com/v0.2/docs/how_to/graph_constructing/ | * [](/v0.2/)
* [How-to guides](/v0.2/docs/how_to/)
* How to construct knowledge graphs
On this page
How to construct knowledge graphs
=================================
In this guide we'll go over the basic ways of constructing a knowledge graph based on unstructured text. The constructed graph can then be used as a knowledge base in a RAG application.
⚠️ Security note ⚠️[](#️-security-note-️ "Direct link to ⚠️ Security note ⚠️")
-------------------------------------------------------------------------------
Constructing knowledge graphs requires write access to the database. There are inherent risks in doing this. Make sure that you verify and validate data before importing it. For more on general security best practices, [see here](/v0.2/docs/security/).
Architecture[](#architecture "Direct link to Architecture")
------------------------------------------------------------
At a high level, the steps for constructing a knowledge graph from text are:
1. **Extracting structured information from text**: A model is used to extract structured graph information from text.
2. **Storing into a graph database**: Storing the extracted structured graph information in a graph database enables downstream RAG applications.
Setup[](#setup "Direct link to Setup")
---------------------------------------
First, install the required packages and set environment variables. In this example, we will be using the Neo4j graph database.
%pip install --upgrade --quiet langchain langchain-community langchain-openai langchain-experimental neo4j
Note: you may need to restart the kernel to use updated packages.
We default to OpenAI models in this guide.
import getpassimport osos.environ["OPENAI_API_KEY"] = getpass.getpass()# Uncomment the below to use LangSmith. Not required.# os.environ["LANGCHAIN_API_KEY"] = getpass.getpass()# os.environ["LANGCHAIN_TRACING_V2"] = "true"
········
Next, we need to define Neo4j credentials and connection. Follow [these installation steps](https://neo4j.com/docs/operations-manual/current/installation/) to set up a Neo4j database.
import osfrom langchain_community.graphs import Neo4jGraphos.environ["NEO4J_URI"] = "bolt://localhost:7687"os.environ["NEO4J_USERNAME"] = "neo4j"os.environ["NEO4J_PASSWORD"] = "password"graph = Neo4jGraph()
**API Reference:**[Neo4jGraph](https://api.python.langchain.com/en/latest/graphs/langchain_community.graphs.neo4j_graph.Neo4jGraph.html)
LLM Graph Transformer[](#llm-graph-transformer "Direct link to LLM Graph Transformer")
---------------------------------------------------------------------------------------
Extracting graph data from text enables the transformation of unstructured information into structured formats, facilitating deeper insights and more efficient navigation through complex relationships and patterns. The `LLMGraphTransformer` converts text documents into structured graph documents by leveraging an LLM to parse and categorize entities and their relationships. The choice of LLM significantly influences the output by determining the accuracy and nuance of the extracted graph data.
import osfrom langchain_experimental.graph_transformers import LLMGraphTransformerfrom langchain_openai import ChatOpenAIllm = ChatOpenAI(temperature=0, model_name="gpt-4-turbo")llm_transformer = LLMGraphTransformer(llm=llm)
**API Reference:**[LLMGraphTransformer](https://api.python.langchain.com/en/latest/graph_transformers/langchain_experimental.graph_transformers.llm.LLMGraphTransformer.html) | [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html)
Now we can pass in example text and examine the results.
from langchain_core.documents import Documenttext = """Marie Curie, born in 1867, was a Polish and naturalised-French physicist and chemist who conducted pioneering research on radioactivity.She was the first woman to win a Nobel Prize, the first person to win a Nobel Prize twice, and the only person to win a Nobel Prize in two scientific fields.Her husband, Pierre Curie, was a co-winner of her first Nobel Prize, making them the first-ever married couple to win the Nobel Prize and launching the Curie family legacy of five Nobel Prizes.She was, in 1906, the first woman to become a professor at the University of Paris."""documents = [Document(page_content=text)]graph_documents = llm_transformer.convert_to_graph_documents(documents)print(f"Nodes:{graph_documents[0].nodes}")print(f"Relationships:{graph_documents[0].relationships}")
**API Reference:**[Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html)
Nodes:[Node(id='Marie Curie', type='Person'), Node(id='Pierre Curie', type='Person'), Node(id='University Of Paris', type='Organization')]Relationships:[Relationship(source=Node(id='Marie Curie', type='Person'), target=Node(id='Pierre Curie', type='Person'), type='MARRIED'), Relationship(source=Node(id='Marie Curie', type='Person'), target=Node(id='University Of Paris', type='Organization'), type='PROFESSOR')]
Examine the following image to better grasp the structure of the generated knowledge graph.
![graph_construction1.png](/v0.2/assets/images/graph_construction1-2b4d31978d58696d5a6a52ad92ae088f.png)
Note that the graph construction process is non-deterministic since we are using an LLM. Therefore, you might get slightly different results on each execution.
Additionally, you have the flexibility to define specific types of nodes and relationships for extraction according to your requirements.
llm_transformer_filtered = LLMGraphTransformer( llm=llm, allowed_nodes=["Person", "Country", "Organization"], allowed_relationships=["NATIONALITY", "LOCATED_IN", "WORKED_AT", "SPOUSE"],)graph_documents_filtered = llm_transformer_filtered.convert_to_graph_documents( documents)print(f"Nodes:{graph_documents_filtered[0].nodes}")print(f"Relationships:{graph_documents_filtered[0].relationships}")
Nodes:[Node(id='Marie Curie', type='Person'), Node(id='Pierre Curie', type='Person'), Node(id='University Of Paris', type='Organization')]Relationships:[Relationship(source=Node(id='Marie Curie', type='Person'), target=Node(id='Pierre Curie', type='Person'), type='SPOUSE'), Relationship(source=Node(id='Marie Curie', type='Person'), target=Node(id='University Of Paris', type='Organization'), type='WORKED_AT')]
For a better understanding of the generated graph, we can again visualize it.
![graph_construction2.png](/v0.2/assets/images/graph_construction2-8b43506ae0fb3a006eaa4ba83fea8af5.png)
The `node_properties` parameter enables the extraction of node properties, allowing the creation of a more detailed graph. When set to `True`, the LLM autonomously identifies and extracts relevant node properties. Conversely, if `node_properties` is defined as a list of strings, the LLM selectively retrieves only the specified properties from the text.
llm_transformer_props = LLMGraphTransformer( llm=llm, allowed_nodes=["Person", "Country", "Organization"], allowed_relationships=["NATIONALITY", "LOCATED_IN", "WORKED_AT", "SPOUSE"], node_properties=["born_year"],)graph_documents_props = llm_transformer_props.convert_to_graph_documents(documents)print(f"Nodes:{graph_documents_props[0].nodes}")print(f"Relationships:{graph_documents_props[0].relationships}")
Nodes:[Node(id='Marie Curie', type='Person', properties={'born_year': '1867'}), Node(id='Pierre Curie', type='Person'), Node(id='University Of Paris', type='Organization')]Relationships:[Relationship(source=Node(id='Marie Curie', type='Person'), target=Node(id='Pierre Curie', type='Person'), type='SPOUSE'), Relationship(source=Node(id='Marie Curie', type='Person'), target=Node(id='University Of Paris', type='Organization'), type='WORKED_AT')]
Storing to graph database[](#storing-to-graph-database "Direct link to Storing to graph database")
---------------------------------------------------------------------------------------------------
The generated graph documents can be stored in a graph database using the `add_graph_documents` method.
graph.add_graph_documents(graph_documents_props)
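Once stored, the graph can be inspected through the same `Neo4jGraph` connection with a Cypher query. A quick sketch; the exact nodes and relationships returned depend on the non-deterministic extraction above:

```python
# List each extracted Person node together with its outgoing relationships.
result = graph.query(
    """
    MATCH (p:Person)-[r]->(target)
    RETURN p.id AS person, type(r) AS relationship, target.id AS target
    """
)
print(result)
# e.g. [{'person': 'Marie Curie', 'relationship': 'WORKED_AT',
#        'target': 'University Of Paris'}, ...]
```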
https://python.langchain.com/v0.2/docs/how_to/tools_error/ | * [](/v0.2/)
* [How-to guides](/v0.2/docs/how_to/)
* How to handle tool errors
On this page
How to handle tool errors
=========================
Using a model to invoke a tool has some obvious potential failure modes. Firstly, the model needs to return an output that can be parsed at all. Secondly, the model needs to return tool arguments that are valid.
We can build error handling into our chains to mitigate these failure modes.
Setup[](#setup "Direct link to Setup")
---------------------------------------
We'll need to install the following packages:
%pip install --upgrade --quiet langchain-core langchain-openai
If you'd like to trace your runs in [LangSmith](https://docs.smith.langchain.com/) uncomment and set the following environment variables:
import getpassimport os# os.environ["LANGCHAIN_TRACING_V2"] = "true"# os.environ["LANGCHAIN_API_KEY"] = getpass.getpass()
Chain[](#chain "Direct link to Chain")
---------------------------------------
Suppose we have the following (dummy) tool and tool-calling chain. We'll make our tool intentionally convoluted to try and trip up the model.
* OpenAI
* Anthropic
* Azure
* Google
* Cohere
* FireworksAI
* Groq
* MistralAI
* TogetherAI
pip install -qU langchain-openai
import getpassimport osos.environ["OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAIllm = ChatOpenAI(model="gpt-3.5-turbo-0125")
pip install -qU langchain-anthropic
import getpassimport osos.environ["ANTHROPIC_API_KEY"] = getpass.getpass()from langchain_anthropic import ChatAnthropicllm = ChatAnthropic(model="claude-3-sonnet-20240229")
pip install -qU langchain-openai
import getpassimport osos.environ["AZURE_OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import AzureChatOpenAIllm = AzureChatOpenAI( azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"], openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],)
pip install -qU langchain-google-vertexai
import getpassimport osos.environ["GOOGLE_API_KEY"] = getpass.getpass()from langchain_google_vertexai import ChatVertexAIllm = ChatVertexAI(model="gemini-pro")
pip install -qU langchain-cohere
import getpassimport osos.environ["COHERE_API_KEY"] = getpass.getpass()from langchain_cohere import ChatCoherellm = ChatCohere(model="command-r")
pip install -qU langchain-fireworks
import getpassimport osos.environ["FIREWORKS_API_KEY"] = getpass.getpass()from langchain_fireworks import ChatFireworksllm = ChatFireworks(model="accounts/fireworks/models/mixtral-8x7b-instruct")
pip install -qU langchain-groq
import getpassimport osos.environ["GROQ_API_KEY"] = getpass.getpass()from langchain_groq import ChatGroqllm = ChatGroq(model="llama3-8b-8192")
pip install -qU langchain-mistralai
import getpassimport osos.environ["MISTRAL_API_KEY"] = getpass.getpass()from langchain_mistralai import ChatMistralAIllm = ChatMistralAI(model="mistral-large-latest")
pip install -qU langchain-openai
import getpassimport osos.environ["TOGETHER_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAIllm = ChatOpenAI( base_url="https://api.together.xyz/v1", api_key=os.environ["TOGETHER_API_KEY"], model="mistralai/Mixtral-8x7B-Instruct-v0.1",)
# Define toolfrom langchain_core.tools import tool@tooldef complex_tool(int_arg: int, float_arg: float, dict_arg: dict) -> int: """Do something complex with a complex tool.""" return int_arg * float_arg
**API Reference:**[tool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.tool.html)
llm_with_tools = llm.bind_tools( [complex_tool],)
# Define chainchain = llm_with_tools | (lambda msg: msg.tool_calls[0]["args"]) | complex_tool
We can see that when we try to invoke this chain with even a fairly explicit input, the model fails to correctly call the tool (it forgets the `dict_arg` argument).
chain.invoke( "use complex tool. the args are 5, 2.1, empty dictionary. don't forget dict_arg")
---------------------------------------------------------------------------``````outputValidationError Traceback (most recent call last)``````outputCell In[12], line 1----> 1 chain.invoke( 2 "use complex tool. the args are 5, 2.1, empty dictionary. don't forget dict_arg" 3 )``````outputFile ~/langchain/libs/core/langchain_core/runnables/base.py:2499, in RunnableSequence.invoke(self, input, config) 2497 try: 2498 for i, step in enumerate(self.steps):-> 2499 input = step.invoke( 2500 input, 2501 # mark each step as a child run 2502 patch_config( 2503 config, callbacks=run_manager.get_child(f"seq:step:{i+1}") 2504 ), 2505 ) 2506 # finish the root run 2507 except BaseException as e:``````outputFile ~/langchain/libs/core/langchain_core/tools.py:241, in BaseTool.invoke(self, input, config, **kwargs) 234 def invoke( 235 self, 236 input: Union[str, Dict], 237 config: Optional[RunnableConfig] = None, 238 **kwargs: Any, 239 ) -> Any: 240 config = ensure_config(config)--> 241 return self.run( 242 input, 243 callbacks=config.get("callbacks"), 244 tags=config.get("tags"), 245 metadata=config.get("metadata"), 246 run_name=config.get("run_name"), 247 run_id=config.pop("run_id", None), 248 **kwargs, 249 )``````outputFile ~/langchain/libs/core/langchain_core/tools.py:387, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, run_id, **kwargs) 385 except ValidationError as e: 386 if not self.handle_validation_error:--> 387 raise e 388 elif isinstance(self.handle_validation_error, bool): 389 observation = "Tool input validation error"``````outputFile ~/langchain/libs/core/langchain_core/tools.py:378, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, run_id, **kwargs) 364 run_manager = callback_manager.on_tool_start( 365 {"name": self.name, "description": self.description}, 366 tool_input if isinstance(tool_input, str) else str(tool_input), (...) 375 **kwargs, 376 ) 377 try:--> 378 parsed_input = self._parse_input(tool_input) 379 tool_args, tool_kwargs = self._to_args_and_kwargs(parsed_input) 380 observation = ( 381 self._run(*tool_args, run_manager=run_manager, **tool_kwargs) 382 if new_arg_supported 383 else self._run(*tool_args, **tool_kwargs) 384 )``````outputFile ~/langchain/libs/core/langchain_core/tools.py:283, in BaseTool._parse_input(self, tool_input) 281 else: 282 if input_args is not None:--> 283 result = input_args.parse_obj(tool_input) 284 return { 285 k: getattr(result, k) 286 for k, v in result.dict().items() 287 if k in tool_input 288 } 289 return tool_input``````outputFile ~/langchain/.venv/lib/python3.9/site-packages/pydantic/v1/main.py:526, in BaseModel.parse_obj(cls, obj) 524 exc = TypeError(f'{cls.__name__} expected dict not {obj.__class__.__name__}') 525 raise ValidationError([ErrorWrapper(exc, loc=ROOT_KEY)], cls) from e--> 526 return cls(**obj)``````outputFile ~/langchain/.venv/lib/python3.9/site-packages/pydantic/v1/main.py:341, in BaseModel.__init__(__pydantic_self__, **data) 339 values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data) 340 if validation_error:--> 341 raise validation_error 342 try: 343 object_setattr(__pydantic_self__, '__dict__', values)``````outputValidationError: 1 validation error for complex_toolSchemadict_arg field required (type=value_error.missing)
Try/except tool call[](#tryexcept-tool-call "Direct link to Try/except tool call")
-----------------------------------------------------------------------------------
The simplest way to more gracefully handle errors is to try/except the tool-calling step and return a helpful message on errors:
from typing import Anyfrom langchain_core.runnables import Runnable, RunnableConfigdef try_except_tool(tool_args: dict, config: RunnableConfig) -> Runnable: try: return complex_tool.invoke(tool_args, config=config) except Exception as e: return f"Calling tool with arguments:\n\n{tool_args}\n\nraised the following error:\n\n{type(e)}: {e}"chain = llm_with_tools | (lambda msg: msg.tool_calls[0]["args"]) | try_except_tool
**API Reference:**[Runnable](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html) | [RunnableConfig](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.config.RunnableConfig.html)
print( chain.invoke( "use complex tool. the args are 5, 2.1, empty dictionary. don't forget dict_arg" ))
Calling tool with arguments:{'int_arg': 5, 'float_arg': 2.1}raised the following error:<class 'pydantic.v1.error_wrappers.ValidationError'>: 1 validation error for complex_toolSchemadict_arg field required (type=value_error.missing)
Fallbacks[](#fallbacks "Direct link to Fallbacks")
---------------------------------------------------
We can also try to fall back to a better model in the event of a tool invocation error. In this case we'll fall back to an identical chain that uses `gpt-4-1106-preview` instead of `gpt-3.5-turbo`.
chain = llm_with_tools | (lambda msg: msg.tool_calls[0]["args"]) | complex_toolbetter_model = ChatOpenAI(model="gpt-4-1106-preview", temperature=0).bind_tools( [complex_tool], tool_choice="complex_tool")better_chain = better_model | (lambda msg: msg.tool_calls[0]["args"]) | complex_toolchain_with_fallback = chain.with_fallbacks([better_chain])chain_with_fallback.invoke( "use complex tool. the args are 5, 2.1, empty dictionary. don't forget dict_arg")
10.5
Looking at the [LangSmith trace](https://smith.langchain.com/public/00e91fc2-e1a4-4b0f-a82e-e6b3119d196c/r) for this chain run, we can see that the first chain call fails as expected and it's the fallback that succeeds.
Retry with exception[](#retry-with-exception "Direct link to Retry with exception")
------------------------------------------------------------------------------------
To take things one step further, we can try to automatically re-run the chain with the exception passed in, so that the model may be able to correct its behavior:
import jsonfrom typing import Anyfrom langchain_core.messages import AIMessage, HumanMessage, ToolCall, ToolMessagefrom langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholderfrom langchain_core.runnables import RunnablePassthroughclass CustomToolException(Exception): """Custom LangChain tool exception.""" def __init__(self, tool_call: ToolCall, exception: Exception) -> None: super().__init__() self.tool_call = tool_call self.exception = exceptiondef tool_custom_exception(msg: AIMessage, config: RunnableConfig) -> Runnable: try: return complex_tool.invoke(msg.tool_calls[0]["args"], config=config) except Exception as e: raise CustomToolException(msg.tool_calls[0], e)def exception_to_messages(inputs: dict) -> dict: exception = inputs.pop("exception") # Add historical messages to the original input, so the model knows that it made a mistake with the last tool call. messages = [ AIMessage(content="", tool_calls=[exception.tool_call]), ToolMessage( tool_call_id=exception.tool_call["id"], content=str(exception.exception) ), HumanMessage( content="The last tool call raised an exception. Try calling the tool again with corrected arguments. Do not repeat mistakes." ), ] inputs["last_output"] = messages return inputs# We add a last_output MessagesPlaceholder to our prompt which if not passed in doesn't# affect the prompt at all, but gives us the option to insert an arbitrary list of Messages# into the prompt if needed. We'll use this on retries to insert the error message.prompt = ChatPromptTemplate.from_messages( [("human", "{input}"), MessagesPlaceholder("last_output", optional=True)])chain = prompt | llm_with_tools | tool_custom_exception# If the initial chain call fails, we rerun it withe the exception passed in as a message.self_correcting_chain = chain.with_fallbacks( [exception_to_messages | chain], exception_key="exception")
**API Reference:**[AIMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html) | [HumanMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.human.HumanMessage.html) | [ToolCall](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.tool.ToolCall.html) | [ToolMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.tool.ToolMessage.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) | [MessagesPlaceholder](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.MessagesPlaceholder.html) | [RunnablePassthrough](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html)
self_correcting_chain.invoke( { "input": "use complex tool. the args are 5, 2.1, empty dictionary. don't forget dict_arg" })
10.5
And our chain succeeds! Looking at the [LangSmith trace](https://smith.langchain.com/public/c11e804c-e14f-4059-bd09-64766f999c14/r), we can see that indeed our initial chain still fails, and it's only on retrying that the chain succeeds.
https://python.langchain.com/v0.2/docs/how_to/agent_executor/ | * [](/v0.2/)
* [How-to guides](/v0.2/docs/how_to/)
* Build an Agent with AgentExecutor (Legacy)
On this page
Build an Agent with AgentExecutor (Legacy)
==========================================
info
This section will cover building with the legacy LangChain AgentExecutor. These are fine for getting started, but past a certain point, you will likely want flexibility and control that they do not offer. For working with more advanced agents, we'd recommend checking out [LangGraph Agents](/v0.2/docs/concepts/#langgraph) or the [migration guide](/v0.2/docs/how_to/migrate_agent/)
By themselves, language models can't take actions - they just output text. A big use case for LangChain is creating **agents**. Agents are systems that use an LLM as a reasoning engine to determine which actions to take and what the inputs to those actions should be. The results of those actions can then be fed back into the agent and it determines whether more actions are needed, or whether it is okay to finish.
In this tutorial, we will build an agent that can interact with multiple different tools: one being a local database, the other being a search engine. You will be able to ask this agent questions, watch it call tools, and have conversations with it.
Concepts[](#concepts "Direct link to Concepts")
------------------------------------------------
Concepts we will cover are:
* Using [language models](/v0.2/docs/concepts/#chat-models), in particular their tool calling ability
* Creating a [Retriever](/v0.2/docs/concepts/#retrievers) to expose specific information to our agent
* Using a Search [Tool](/v0.2/docs/concepts/#tools) to look up things online
* [`Chat History`](/v0.2/docs/concepts/#chat-history), which allows a chatbot to "remember" past interactions and take them into account when responding to follow-up questions.
* Debugging and tracing your application using [LangSmith](/v0.2/docs/concepts/#langsmith)
Setup[](#setup "Direct link to Setup")
---------------------------------------
### Jupyter Notebook[](#jupyter-notebook "Direct link to Jupyter Notebook")
This guide (and most of the other guides in the documentation) uses [Jupyter notebooks](https://jupyter.org/) and assumes the reader is using one as well. Jupyter notebooks are perfect for learning how to work with LLM systems because things can often go wrong (unexpected output, the API being down, etc.), and going through guides in an interactive environment is a great way to better understand them.
This and other tutorials are perhaps most conveniently run in a Jupyter notebook. See [here](https://jupyter.org/install) for instructions on how to install.
### Installation[](#installation "Direct link to Installation")
To install LangChain run:
* Pip
* Conda
pip install langchain
conda install langchain -c conda-forge
For more details, see our [Installation guide](/v0.2/docs/how_to/installation/).
### LangSmith[](#langsmith "Direct link to LangSmith")
Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls. As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent. The best way to do this is with [LangSmith](https://smith.langchain.com).
After you sign up at the link above, make sure to set your environment variables to start logging traces:
export LANGCHAIN_TRACING_V2="true"export LANGCHAIN_API_KEY="..."
Or, if in a notebook, you can set them with:
import getpassimport osos.environ["LANGCHAIN_TRACING_V2"] = "true"os.environ["LANGCHAIN_API_KEY"] = getpass.getpass()
Define tools[](#define-tools "Direct link to Define tools")
------------------------------------------------------------
We first need to create the tools we want to use. We will use two tools: [Tavily](/v0.2/docs/integrations/tools/tavily_search/) (to search online) and then a retriever over a local index we will create
### [Tavily](/v0.2/docs/integrations/tools/tavily_search/)[](#tavily "Direct link to tavily")
We have a built-in tool in LangChain to easily use the Tavily search engine as a tool. Note that this requires an API key - they have a free tier, but if you don't have one or don't want to create one, you can always ignore this step.
Once you create your API key, you will need to export that as:
export TAVILY_API_KEY="..."
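Or, if you're working in a notebook, you can set it interactively instead (a small sketch, following the same pattern as the LangSmith variables above):

```python
import getpass
import os

os.environ["TAVILY_API_KEY"] = getpass.getpass()
```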
from langchain_community.tools.tavily_search import TavilySearchResults
**API Reference:**[TavilySearchResults](https://api.python.langchain.com/en/latest/tools/langchain_community.tools.tavily_search.tool.TavilySearchResults.html)
search = TavilySearchResults(max_results=2)
search.invoke("what is the weather in SF")
[{'url': 'https://www.weatherapi.com/', 'content': "{'location': {'name': 'San Francisco', 'region': 'California', 'country': 'United States of America', 'lat': 37.78, 'lon': -122.42, 'tz_id': 'America/Los_Angeles', 'localtime_epoch': 1714000492, 'localtime': '2024-04-24 16:14'}, 'current': {'last_updated_epoch': 1713999600, 'last_updated': '2024-04-24 16:00', 'temp_c': 15.6, 'temp_f': 60.1, 'is_day': 1, 'condition': {'text': 'Overcast', 'icon': '//cdn.weatherapi.com/weather/64x64/day/122.png', 'code': 1009}, 'wind_mph': 10.5, 'wind_kph': 16.9, 'wind_degree': 330, 'wind_dir': 'NNW', 'pressure_mb': 1018.0, 'pressure_in': 30.06, 'precip_mm': 0.0, 'precip_in': 0.0, 'humidity': 72, 'cloud': 100, 'feelslike_c': 15.6, 'feelslike_f': 60.1, 'vis_km': 16.0, 'vis_miles': 9.0, 'uv': 5.0, 'gust_mph': 14.8, 'gust_kph': 23.8}}"}, {'url': 'https://www.weathertab.com/en/c/e/04/united-states/california/san-francisco/', 'content': 'San Francisco Weather Forecast for Apr 2024 - Risk of Rain Graph. Rain Risk Graph: Monthly Overview. Bar heights indicate rain risk percentages. Yellow bars mark low-risk days, while black and grey bars signal higher risks. Grey-yellow bars act as buffers, advising to keep at least one day clear from the riskier grey and black days, guiding ...'}]
### Retriever[](#retriever "Direct link to Retriever")
We will also create a retriever over some data of our own. For a deeper explanation of each step here, see [this tutorial](/v0.2/docs/tutorials/rag/).
from langchain_community.document_loaders import WebBaseLoaderfrom langchain_community.vectorstores import FAISSfrom langchain_openai import OpenAIEmbeddingsfrom langchain_text_splitters import RecursiveCharacterTextSplitterloader = WebBaseLoader("https://docs.smith.langchain.com/overview")docs = loader.load()documents = RecursiveCharacterTextSplitter( chunk_size=1000, chunk_overlap=200).split_documents(docs)vector = FAISS.from_documents(documents, OpenAIEmbeddings())retriever = vector.as_retriever()
**API Reference:**[WebBaseLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.web_base.WebBaseLoader.html) | [FAISS](https://api.python.langchain.com/en/latest/vectorstores/langchain_community.vectorstores.faiss.FAISS.html) | [OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html) | [RecursiveCharacterTextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.RecursiveCharacterTextSplitter.html)
retriever.invoke("how to upload a dataset")[0]
Document(page_content='# The data to predict and grade over evaluators=[exact_match], # The evaluators to score the results experiment_prefix="sample-experiment", # The name of the experiment metadata={ "version": "1.0.0", "revision_id": "beta" },)import { Client, Run, Example } from \'langsmith\';import { runOnDataset } from \'langchain/smith\';import { EvaluationResult } from \'langsmith/evaluation\';const client = new Client();// Define dataset: these are your test casesconst datasetName = "Sample Dataset";const dataset = await client.createDataset(datasetName, { description: "A sample dataset in LangSmith."});await client.createExamples({ inputs: [ { postfix: "to LangSmith" }, { postfix: "to Evaluations in LangSmith" }, ], outputs: [ { output: "Welcome to LangSmith" }, { output: "Welcome to Evaluations in LangSmith" }, ], datasetId: dataset.id,});// Define your evaluatorconst exactMatch = async ({ run, example }: { run: Run; example?:', metadata={'source': 'https://docs.smith.langchain.com/overview', 'title': 'Getting started with LangSmith | \uf8ffü¶úÔ∏è\uf8ffüõ†Ô∏è LangSmith', 'description': 'Introduction', 'language': 'en'})
Now that we have populated the index that we will be doing retrieval over, we can easily turn it into a tool (the format needed for an agent to properly use it).
from langchain.tools.retriever import create_retriever_tool
**API Reference:**[create\_retriever\_tool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.create_retriever_tool.html)
retriever_tool = create_retriever_tool( retriever, "langsmith_search", "Search for information about LangSmith. For any questions about LangSmith, you must use this tool!",)
### Tools[](#tools "Direct link to Tools")
Now that we have created both, we can create a list of tools that we will use downstream.
tools = [search, retriever_tool]
Using Language Models[](#using-language-models "Direct link to Using Language Models")
---------------------------------------------------------------------------------------
Next, let's learn how to use a language model to call tools. LangChain supports many different language models that you can use interchangeably - select the one you want to use below!
* OpenAI
* Anthropic
* Azure
* Google
* Cohere
* FireworksAI
* Groq
* MistralAI
* TogetherAI
pip install -qU langchain-openai
import getpassimport osos.environ["OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAImodel = ChatOpenAI(model="gpt-4")
pip install -qU langchain-anthropic
import getpassimport osos.environ["ANTHROPIC_API_KEY"] = getpass.getpass()from langchain_anthropic import ChatAnthropicmodel = ChatAnthropic(model="claude-3-sonnet-20240229")
pip install -qU langchain-openai
import getpassimport osos.environ["AZURE_OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import AzureChatOpenAImodel = AzureChatOpenAI( azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"], openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],)
pip install -qU langchain-google-vertexai
import getpassimport osos.environ["GOOGLE_API_KEY"] = getpass.getpass()from langchain_google_vertexai import ChatVertexAImodel = ChatVertexAI(model="gemini-pro")
pip install -qU langchain-cohere
import getpassimport osos.environ["COHERE_API_KEY"] = getpass.getpass()from langchain_cohere import ChatCoheremodel = ChatCohere(model="command-r")
pip install -qU langchain-fireworks
import getpassimport osos.environ["FIREWORKS_API_KEY"] = getpass.getpass()from langchain_fireworks import ChatFireworksmodel = ChatFireworks(model="accounts/fireworks/models/mixtral-8x7b-instruct")
pip install -qU langchain-groq
import getpassimport osos.environ["GROQ_API_KEY"] = getpass.getpass()from langchain_groq import ChatGroqmodel = ChatGroq(model="llama3-8b-8192")
pip install -qU langchain-mistralai
import getpassimport osos.environ["MISTRAL_API_KEY"] = getpass.getpass()from langchain_mistralai import ChatMistralAImodel = ChatMistralAI(model="mistral-large-latest")
pip install -qU langchain-openai
import getpassimport osos.environ["TOGETHER_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAImodel = ChatOpenAI( base_url="https://api.together.xyz/v1", api_key=os.environ["TOGETHER_API_KEY"], model="mistralai/Mixtral-8x7B-Instruct-v0.1",)
You can call the language model by passing in a list of messages. By default, the response is a `content` string.
from langchain_core.messages import HumanMessageresponse = model.invoke([HumanMessage(content="hi!")])response.content
**API Reference:**[HumanMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.human.HumanMessage.html)
'Hello! How can I assist you today?'
We can now see what it is like to enable this model to do tool calling. In order to enable that, we use `.bind_tools` to give the language model knowledge of these tools.
model_with_tools = model.bind_tools(tools)
We can now call the model. Let's first call it with a normal message, and see how it responds. We can look at both the `content` field as well as the `tool_calls` field.
response = model_with_tools.invoke([HumanMessage(content="Hi!")])print(f"ContentString: {response.content}")print(f"ToolCalls: {response.tool_calls}")
ContentString: Hello! How can I assist you today?ToolCalls: []
Now, let's try calling it with some input that would expect a tool to be called.
response = model_with_tools.invoke([HumanMessage(content="What's the weather in SF?")])print(f"ContentString: {response.content}")print(f"ToolCalls: {response.tool_calls}")
ContentString: ToolCalls: [{'name': 'tavily_search_results_json', 'args': {'query': 'current weather in San Francisco'}, 'id': 'call_4HteVahXkRAkWjp6dGXryKZX'}]
We can see that there's now no content, but there is a tool call! It wants us to call the Tavily Search tool.
This isn't calling that tool yet - it's just telling us to. In order to actually call it, we'll want to create our agent.
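Purely as an illustration (and not part of the agent we're about to build), we could execute the requested tool call by hand; a minimal sketch:

```python
# Run the tool call the model requested by passing its arguments to the Tavily tool.
# The AgentExecutor we create below automates exactly this loop for us.
tool_call = response.tool_calls[0]
result = search.invoke(tool_call["args"])
print(result)
```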
Create the agent[](#create-the-agent "Direct link to Create the agent")
------------------------------------------------------------------------
Now that we have defined the tools and the LLM, we can create the agent. We will be using a tool calling agent - for more information on this type of agent, as well as other options, see [this guide](/v0.2/docs/concepts/#agent_types/).
We can first choose the prompt we want to use to guide the agent.
If you want to see the contents of this prompt and have access to LangSmith, you can go to:
[https://smith.langchain.com/hub/hwchase17/openai-functions-agent](https://smith.langchain.com/hub/hwchase17/openai-functions-agent)
from langchain import hub# Get the prompt to use - you can modify this!prompt = hub.pull("hwchase17/openai-functions-agent")prompt.messages
[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], template='You are a helpful assistant')), MessagesPlaceholder(variable_name='chat_history', optional=True), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['input'], template='{input}')), MessagesPlaceholder(variable_name='agent_scratchpad')]
Now, we can initialize the agent with the LLM, the prompt, and the tools. The agent is responsible for taking in input and deciding what actions to take. Crucially, the agent does not execute those actions - that is done by the AgentExecutor (next step). For more information about how to think about these components, see our [conceptual guide](/v0.2/docs/concepts/#agents).
Note that we are passing in the `model`, not `model_with_tools`. That is because `create_tool_calling_agent` will call `.bind_tools` for us under the hood.
from langchain.agents import create_tool_calling_agentagent = create_tool_calling_agent(model, tools, prompt)
**API Reference:**[create\_tool\_calling\_agent](https://api.python.langchain.com/en/latest/agents/langchain.agents.tool_calling_agent.base.create_tool_calling_agent.html)
Finally, we combine the agent (the brains) with the tools inside the AgentExecutor (which will repeatedly call the agent and execute tools).
from langchain.agents import AgentExecutoragent_executor = AgentExecutor(agent=agent, tools=tools)
**API Reference:**[AgentExecutor](https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.AgentExecutor.html)
Run the agent[](#run-the-agent "Direct link to Run the agent")
---------------------------------------------------------------
We can now run the agent on a few queries! Note that for now, these are all **stateless** queries (it won't remember previous interactions).
First up, let's see how it responds when there's no need to call a tool:
agent_executor.invoke({"input": "hi!"})
{'input': 'hi!', 'output': 'Hello! How can I assist you today?'}
In order to see exactly what is happening under the hood (and to make sure it's not calling a tool) we can take a look at the [LangSmith trace](https://smith.langchain.com/public/8441812b-94ce-4832-93ec-e1114214553a/r)
Let's now try it out on an example where it should be invoking the retriever.
agent_executor.invoke({"input": "how can langsmith help with testing?"})
{'input': 'how can langsmith help with testing?', 'output': 'LangSmith is a platform that aids in building production-grade Language Learning Model (LLM) applications. It can assist with testing in several ways:\n\n1. **Monitoring and Evaluation**: LangSmith allows close monitoring and evaluation of your application. This helps you to ensure the quality of your application and deploy it with confidence.\n\n2. **Tracing**: LangSmith has tracing capabilities that can be beneficial for debugging and understanding the behavior of your application.\n\n3. **Evaluation Capabilities**: LangSmith has built-in tools for evaluating the performance of your LLM. \n\n4. **Prompt Hub**: This is a prompt management tool built into LangSmith that can help in testing different prompts and their responses.\n\nPlease note that to use LangSmith, you would need to install it and create an API key. The platform offers Python and Typescript SDKs for utilization. It works independently and does not require the use of LangChain.'}
Let's take a look at the [LangSmith trace](https://smith.langchain.com/public/762153f6-14d4-4c98-8659-82650f860c62/r) to make sure it's actually calling that.
Now let's try one where it needs to call the search tool:
agent_executor.invoke({"input": "whats the weather in sf?"})
{'input': 'whats the weather in sf?', 'output': 'The current weather in San Francisco is partly cloudy with a temperature of 16.1°C (61.0°F). The wind is coming from the WNW at a speed of 10.5 mph. The humidity is at 67%. [source](https://www.weatherapi.com/)'}
We can check out the [LangSmith trace](https://smith.langchain.com/public/36df5b1a-9a0b-4185-bae2-964e1d53c665/r) to make sure it's calling the search tool effectively.
Adding in memory[](#adding-in-memory "Direct link to Adding in memory")
------------------------------------------------------------------------
As mentioned earlier, this agent is stateless. This means it does not remember previous interactions. To give it memory we need to pass in previous `chat_history`. Note: it needs to be called `chat_history` because of the prompt we are using. If we use a different prompt, we could change the variable name
# Here we pass in an empty list of messages for chat_history because it is the first message in the chatagent_executor.invoke({"input": "hi! my name is bob", "chat_history": []})
{'input': 'hi! my name is bob', 'chat_history': [], 'output': 'Hello Bob! How can I assist you today?'}
from langchain_core.messages import AIMessage, HumanMessage
**API Reference:**[AIMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html) | [HumanMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.human.HumanMessage.html)
agent_executor.invoke( { "chat_history": [ HumanMessage(content="hi! my name is bob"), AIMessage(content="Hello Bob! How can I assist you today?"), ], "input": "what's my name?", })
{'chat_history': [HumanMessage(content='hi! my name is bob'), AIMessage(content='Hello Bob! How can I assist you today?')], 'input': "what's my name?", 'output': 'Your name is Bob. How can I assist you further?'}
If we want to keep track of these messages automatically, we can wrap this in a RunnableWithMessageHistory. For more information on how to use this, see [this guide](/v0.2/docs/how_to/message_history/).
from langchain_community.chat_message_histories import ChatMessageHistoryfrom langchain_core.chat_history import BaseChatMessageHistoryfrom langchain_core.runnables.history import RunnableWithMessageHistorystore = {}def get_session_history(session_id: str) -> BaseChatMessageHistory: if session_id not in store: store[session_id] = ChatMessageHistory() return store[session_id]
**API Reference:**[ChatMessageHistory](https://api.python.langchain.com/en/latest/chat_history/langchain_core.chat_history.ChatMessageHistory.html) | [BaseChatMessageHistory](https://api.python.langchain.com/en/latest/chat_history/langchain_core.chat_history.BaseChatMessageHistory.html) | [RunnableWithMessageHistory](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.history.RunnableWithMessageHistory.html)
Because we have multiple inputs, we need to specify two things:
* `input_messages_key`: The input key to use to add to the conversation history.
* `history_messages_key`: The key to add the loaded messages into.
agent_with_chat_history = RunnableWithMessageHistory( agent_executor, get_session_history, input_messages_key="input", history_messages_key="chat_history",)
agent_with_chat_history.invoke( {"input": "hi! I'm bob"}, config={"configurable": {"session_id": "<foo>"}},)
{'input': "hi! I'm bob", 'chat_history': [], 'output': 'Hello Bob! How can I assist you today?'}
agent_with_chat_history.invoke( {"input": "what's my name?"}, config={"configurable": {"session_id": "<foo>"}},)
{'input': "what's my name?", 'chat_history': [HumanMessage(content="hi! I'm bob"), AIMessage(content='Hello Bob! How can I assist you today?')], 'output': 'Your name is Bob.'}
Example LangSmith trace: [https://smith.langchain.com/public/98c8d162-60ae-4493-aa9f-992d87bd0429/r](https://smith.langchain.com/public/98c8d162-60ae-4493-aa9f-992d87bd0429/r)
Conclusion[](#conclusion "Direct link to Conclusion")
------------------------------------------------------
That's a wrap! In this quick start we covered how to create a simple agent. Agents are a complex topic, and there's a lot to learn!
info
This section covered building with LangChain Agents. LangChain Agents are fine for getting started, but past a certain point you will likely want flexibility and control that they do not offer. For working with more advanced agents, we'd recommend checking out [LangGraph](/v0.2/docs/concepts/#langgraph).
If you want to continue using LangChain agents, some good advanced guides are:
* [How to use LangGraph's built-in versions of `AgentExecutor`](/v0.2/docs/how_to/migrate_agent/)
* [How to create a custom agent](https://python.langchain.com/v0.1/docs/modules/agents/how_to/custom_agent/)
* [How to stream responses from an agent](https://python.langchain.com/v0.1/docs/modules/agents/how_to/streaming/)
* [How to return structured output from an agent](https://python.langchain.com/v0.1/docs/modules/agents/how_to/agent_structured/)
https://python.langchain.com/v0.2/docs/how_to/prompts_partial/
How to partially format prompt templates
========================================
Prerequisites
This guide assumes familiarity with the following concepts:
* [Prompt templates](/v0.2/docs/concepts/#prompt-templates)
Like partially binding arguments to a function, it can make sense to "partial" a prompt template - e.g. pass in a subset of the required values, as to create a new prompt template which expects only the remaining subset of values.
LangChain supports this in two ways:
1. Partial formatting with string values.
2. Partial formatting with functions that return string values.
In the examples below, we go over the motivations for both use cases as well as how to do it in LangChain.
Partial with strings[](#partial-with-strings "Direct link to Partial with strings")
------------------------------------------------------------------------------------
One common use case for wanting to partial a prompt template is if you get access to some of the variables in a prompt before others. For example, suppose you have a prompt template that requires two variables, `foo` and `baz`. If you get the `foo` value early on in your chain, but the `baz` value later, it can be inconvenient to pass both variables all the way through the chain. Instead, you can partial the prompt template with the `foo` value, and then pass the partialed prompt template along and just use that. Below is an example of doing this:
from langchain_core.prompts import PromptTemplateprompt = PromptTemplate.from_template("{foo}{bar}")partial_prompt = prompt.partial(foo="foo")print(partial_prompt.format(bar="baz"))
**API Reference:**[PromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.prompt.PromptTemplate.html)
foobaz
You can also just initialize the prompt with the partialed variables.
prompt = PromptTemplate( template="{foo}{bar}", input_variables=["bar"], partial_variables={"foo": "foo"})print(prompt.format(bar="baz"))
foobaz
Partial with functions[](#partial-with-functions "Direct link to Partial with functions")
------------------------------------------------------------------------------------------
The other common use is to partial with a function. The use case for this is when you have a variable you know that you always want to fetch in a common way. A prime example of this is with date or time. Imagine you have a prompt which you always want to have the current date. You can't hard code it in the prompt, and passing it along with the other input variables is inconvenient. In this case, it's handy to be able to partial the prompt with a function that always returns the current date.
from datetime import datetimedef _get_datetime(): now = datetime.now() return now.strftime("%m/%d/%Y, %H:%M:%S")prompt = PromptTemplate( template="Tell me a {adjective} joke about the day {date}", input_variables=["adjective", "date"],)partial_prompt = prompt.partial(date=_get_datetime)print(partial_prompt.format(adjective="funny"))
Tell me a funny joke about the day 04/21/2024, 19:43:57
You can also just initialize the prompt with the partialed variables, which often makes more sense in this workflow.
prompt = PromptTemplate( template="Tell me a {adjective} joke about the day {date}", input_variables=["adjective"], partial_variables={"date": _get_datetime},)print(prompt.format(adjective="funny"))
Tell me a funny joke about the day 04/21/2024, 19:43:57
Next steps[](#next-steps "Direct link to Next steps")
------------------------------------------------------
You've now learned how to partially apply variables to your prompt templates.
Next, check out the other how-to guides on prompt templates in this section, like [adding few-shot examples to your prompt templates](/v0.2/docs/how_to/few_shot_examples_chat/).
https://python.langchain.com/v0.2/docs/how_to/query_multiple_queries/
How to handle multiple queries when doing query analysis
========================================================
Sometimes, a query analysis technique may allow for multiple queries to be generated. In these cases, we need to remember to run all queries and then to combine the results. We will show a simple example (using mock data) of how to do that.
Setup[](#setup "Direct link to Setup")
---------------------------------------
#### Install dependencies[](#install-dependencies "Direct link to Install dependencies")
# %pip install -qU langchain langchain-community langchain-openai langchain-chroma
#### Set environment variables[](#set-environment-variables "Direct link to Set environment variables")
We'll use OpenAI in this example:
import getpassimport osos.environ["OPENAI_API_KEY"] = getpass.getpass()# Optional, uncomment to trace runs with LangSmith. Sign up here: https://smith.langchain.com.# os.environ["LANGCHAIN_TRACING_V2"] = "true"# os.environ["LANGCHAIN_API_KEY"] = getpass.getpass()
### Create Index[](#create-index "Direct link to Create Index")
We will create a vectorstore over fake information.
from langchain_chroma import Chromafrom langchain_openai import OpenAIEmbeddingsfrom langchain_text_splitters import RecursiveCharacterTextSplittertexts = ["Harrison worked at Kensho", "Ankush worked at Facebook"]embeddings = OpenAIEmbeddings(model="text-embedding-3-small")vectorstore = Chroma.from_texts( texts, embeddings,)retriever = vectorstore.as_retriever(search_kwargs={"k": 1})
**API Reference:**[OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html) | [RecursiveCharacterTextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.RecursiveCharacterTextSplitter.html)
Query analysis[](#query-analysis "Direct link to Query analysis")
------------------------------------------------------------------
We will use function calling to structure the output. We will let it return multiple queries.
from typing import List, Optionalfrom langchain_core.pydantic_v1 import BaseModel, Fieldclass Search(BaseModel): """Search over a database of job records.""" queries: List[str] = Field( ..., description="Distinct queries to search for", )
from langchain_core.output_parsers.openai_tools import PydanticToolsParserfrom langchain_core.prompts import ChatPromptTemplatefrom langchain_core.runnables import RunnablePassthroughfrom langchain_openai import ChatOpenAIoutput_parser = PydanticToolsParser(tools=[Search])system = """You have the ability to issue search queries to get information to help answer user information.If you need to look up two distinct pieces of information, you are allowed to do that!"""prompt = ChatPromptTemplate.from_messages( [ ("system", system), ("human", "{question}"), ])llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)structured_llm = llm.with_structured_output(Search)query_analyzer = {"question": RunnablePassthrough()} | prompt | structured_llm
**API Reference:**[PydanticToolsParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.openai_tools.PydanticToolsParser.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) | [RunnablePassthrough](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html) | [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html)
/Users/harrisonchase/workplace/langchain/libs/core/langchain_core/_api/beta_decorator.py:86: LangChainBetaWarning: The function `with_structured_output` is in beta. It is actively being worked on, so the API may change. warn_beta(
We can see that this allows for creating multiple queries
query_analyzer.invoke("where did Harrison Work")
Search(queries=['Harrison work location'])
query_analyzer.invoke("where did Harrison and ankush Work")
Search(queries=['Harrison work place', 'Ankush work place'])
Retrieval with query analysis[](#retrieval-with-query-analysis "Direct link to Retrieval with query analysis")
---------------------------------------------------------------------------------------------------------------
So how would we include this in a chain? One thing that will make this a lot easier is if we call our retriever asynchronously - this will let us loop over the queries and not get blocked on the response time.
from langchain_core.runnables import chain
**API Reference:**[chain](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.chain.html)
@chainasync def custom_chain(question): response = await query_analyzer.ainvoke(question) docs = [] for query in response.queries: new_docs = await retriever.ainvoke(query) docs.extend(new_docs) # You probably want to think about reranking or deduplicating documents here # But that is a separate topic return docs
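As the comment above hints, you may want to deduplicate the combined results before returning them. One possible helper (a hypothetical sketch, not part of the chain above) deduplicates by page content:

```python
from typing import List

from langchain_core.documents import Document


def dedupe_docs(docs: List[Document]) -> List[Document]:
    """Drop documents whose page_content has already been seen."""
    seen = set()
    unique = []
    for doc in docs:
        if doc.page_content not in seen:
            seen.add(doc.page_content)
            unique.append(doc)
    return unique
```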
await custom_chain.ainvoke("where did Harrison Work")
[Document(page_content='Harrison worked at Kensho')]
await custom_chain.ainvoke("where did Harrison and ankush Work")
[Document(page_content='Harrison worked at Kensho'), Document(page_content='Ankush worked at Facebook')]
https://python.langchain.com/v0.2/docs/how_to/tools_model_specific/
How to bind model-specific tools
================================
Providers adopt different conventions for formatting tool schemas. For instance, OpenAI uses a format like this:
* `type`: The type of the tool. At the time of writing, this is always `"function"`.
* `function`: An object containing tool parameters.
* `function.name`: The name of the schema to output.
* `function.description`: A high level description of the schema to output.
* `function.parameters`: The nested details of the schema you want to extract, formatted as a [JSON schema](https://json-schema.org/) dict.
We can bind this model-specific format directly to the model as well if preferred. Here's an example:
from langchain_openai import ChatOpenAImodel = ChatOpenAI()model_with_tools = model.bind( tools=[ { "type": "function", "function": { "name": "multiply", "description": "Multiply two integers together.", "parameters": { "type": "object", "properties": { "a": {"type": "number", "description": "First integer"}, "b": {"type": "number", "description": "Second integer"}, }, "required": ["a", "b"], }, }, } ])model_with_tools.invoke("Whats 119 times 8?")
**API Reference:**[ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html)
AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_mn4ELw1NbuE0DFYhIeK0GrPe', 'function': {'arguments': '{"a":119,"b":8}', 'name': 'multiply'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 17, 'prompt_tokens': 62, 'total_tokens': 79}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-353e8a9a-7125-4f94-8c68-4f3da4c21120-0', tool_calls=[{'name': 'multiply', 'args': {'a': 119, 'b': 8}, 'id': 'call_mn4ELw1NbuE0DFYhIeK0GrPe'}])
This is functionally equivalent to the `bind_tools()` method.
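For comparison, a roughly equivalent setup with `bind_tools()` might look like the following sketch, where `multiply` is a plain Python function we decorate as a tool (our own illustrative definition, not part of the page above):

```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI


@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers together."""
    return a * b


model = ChatOpenAI()
model_with_tools = model.bind_tools([multiply])
model_with_tools.invoke("Whats 119 times 8?")
```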
https://python.langchain.com/v0.2/docs/how_to/tools_human/
How to add a human-in-the-loop for tools
========================================
There are certain tools that we don't trust a model to execute on its own. One thing we can do in such situations is require human approval before the tool is invoked.
info
This how-to guide shows a simple way to add human-in-the-loop for code running in a jupyter notebook or in a terminal.
To build a production application, you will need to do more work to keep track of application state appropriately.
We recommend using `langgraph` for powering such a capability. For more details, please see this [guide](https://langchain-ai.github.io/langgraph/how-tos/human-in-the-loop/).
Setup[](#setup "Direct link to Setup")
---------------------------------------
We'll need to install the following packages:
%pip install --upgrade --quiet langchain
And set these environment variables:
import getpassimport os# If you'd like to use LangSmith, uncomment the below:# os.environ["LANGCHAIN_TRACING_V2"] = "true"# os.environ["LANGCHAIN_API_KEY"] = getpass.getpass()
Chain[](#chain "Direct link to Chain")
---------------------------------------
Let's create a few simple (dummy) tools and a tool-calling chain:
* OpenAI
* Anthropic
* Azure
* Google
* Cohere
* FireworksAI
* Groq
* MistralAI
* TogetherAI
pip install -qU langchain-openai
import getpassimport osos.environ["OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAIllm = ChatOpenAI(model="gpt-3.5-turbo-0125")
pip install -qU langchain-anthropic
import getpassimport osos.environ["ANTHROPIC_API_KEY"] = getpass.getpass()from langchain_anthropic import ChatAnthropicllm = ChatAnthropic(model="claude-3-sonnet-20240229")
pip install -qU langchain-openai
import getpassimport osos.environ["AZURE_OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import AzureChatOpenAIllm = AzureChatOpenAI( azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"], openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],)
pip install -qU langchain-google-vertexai
import getpassimport osos.environ["GOOGLE_API_KEY"] = getpass.getpass()from langchain_google_vertexai import ChatVertexAIllm = ChatVertexAI(model="gemini-pro")
pip install -qU langchain-cohere
import getpassimport osos.environ["COHERE_API_KEY"] = getpass.getpass()from langchain_cohere import ChatCoherellm = ChatCohere(model="command-r")
pip install -qU langchain-fireworks
import getpassimport osos.environ["FIREWORKS_API_KEY"] = getpass.getpass()from langchain_fireworks import ChatFireworksllm = ChatFireworks(model="accounts/fireworks/models/mixtral-8x7b-instruct")
pip install -qU langchain-groq
import getpassimport osos.environ["GROQ_API_KEY"] = getpass.getpass()from langchain_groq import ChatGroqllm = ChatGroq(model="llama3-8b-8192")
pip install -qU langchain-mistralai
import getpassimport osos.environ["MISTRAL_API_KEY"] = getpass.getpass()from langchain_mistralai import ChatMistralAIllm = ChatMistralAI(model="mistral-large-latest")
pip install -qU langchain-openai
import getpassimport osos.environ["TOGETHER_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAIllm = ChatOpenAI( base_url="https://api.together.xyz/v1", api_key=os.environ["TOGETHER_API_KEY"], model="mistralai/Mixtral-8x7B-Instruct-v0.1",)
from typing import Dict, Listfrom langchain_core.messages import AIMessagefrom langchain_core.runnables import Runnable, RunnablePassthroughfrom langchain_core.tools import tool@tooldef count_emails(last_n_days: int) -> int: """Dummy function to count the emails received in the last n days.""" return last_n_days * 2@tooldef send_email(message: str, recipient: str) -> str: """Dummy function to send an email to a recipient.""" return f"Successfully sent email to {recipient}."tools = [count_emails, send_email]llm_with_tools = llm.bind_tools(tools)def call_tools(msg: AIMessage) -> List[Dict]: """Simple sequential tool calling helper.""" tool_map = {tool.name: tool for tool in tools} tool_calls = msg.tool_calls.copy() for tool_call in tool_calls: tool_call["output"] = tool_map[tool_call["name"]].invoke(tool_call["args"]) return tool_callschain = llm_with_tools | call_toolschain.invoke("how many emails did i get in the last 5 days?")
**API Reference:**[AIMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html) | [Runnable](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html) | [RunnablePassthrough](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html) | [tool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.tool.html)
[{'name': 'count_emails', 'args': {'last_n_days': 5}, 'id': 'toolu_01QYZdJ4yPiqsdeENWHqioFW', 'output': 10}]
Adding human approval[](#adding-human-approval "Direct link to Adding human approval")
---------------------------------------------------------------------------------------
Let's add a step in the chain that will ask a person to approve or reject the tool call request.
On rejection, the step will raise an exception which will stop execution of the rest of the chain.
import jsonclass NotApproved(Exception): """Custom exception."""def human_approval(msg: AIMessage) -> AIMessage: """Responsible for passing through its input or raising an exception. Args: msg: output from the chat model Returns: msg: original output from the msg """ tool_strs = "\n\n".join( json.dumps(tool_call, indent=2) for tool_call in msg.tool_calls ) input_msg = ( f"Do you approve of the following tool invocations\n\n{tool_strs}\n\n" "Anything except 'Y'/'Yes' (case-insensitive) will be treated as a no.\n >>>" ) resp = input(input_msg) if resp.lower() not in ("yes", "y"): raise NotApproved(f"Tool invocations not approved:\n\n{tool_strs}") return msg
chain = llm_with_tools | human_approval | call_toolschain.invoke("how many emails did i get in the last 5 days?")
Do you approve of the following tool invocations{ "name": "count_emails", "args": { "last_n_days": 5 }, "id": "toolu_01WbD8XeMoQaRFtsZezfsHor"}Anything except 'Y'/'Yes' (case-insensitive) will be treated as a no. >>> yes
[{'name': 'count_emails', 'args': {'last_n_days': 5}, 'id': 'toolu_01WbD8XeMoQaRFtsZezfsHor', 'output': 10}]
try: chain.invoke("Send [email protected] an email saying 'What's up homie'")except NotApproved as e: print() print(e)
Do you approve of the following tool invocations{ "name": "send_email", "args": { "recipient": "[email protected]", "message": "What's up homie" }, "id": "toolu_014XccHFzBiVcc9GV1harV9U"}Anything except 'Y'/'Yes' (case-insensitive) will be treated as a no. >>> no``````outputTool invocations not approved:{ "name": "send_email", "args": { "recipient": "[email protected]", "message": "What's up homie" }, "id": "toolu_014XccHFzBiVcc9GV1harV9U"}
https://python.langchain.com/v0.2/docs/how_to/trim_messages/
How to trim messages
====================
Prerequisites
This guide assumes familiarity with the following concepts:
* [Messages](/v0.2/docs/concepts/#messages)
* [Chat models](/v0.2/docs/concepts/#chat-models)
* [Chaining](/v0.2/docs/how_to/sequence/)
* [Chat history](/v0.2/docs/concepts/#chat-history)
The methods in this guide also require `langchain-core>=0.2.9`.
All models have finite context windows, meaning there's a limit to how many tokens they can take as input. If you have very long messages or a chain/agent that accumulates a long message history, you'll need to manage the length of the messages you're passing in to the model.
The `trim_messages` util provides some basic strategies for trimming a list of messages to be of a certain token length.
Getting the last `max_tokens` tokens[](#getting-the-last-max_tokens-tokens "Direct link to getting-the-last-max_tokens-tokens")
--------------------------------------------------------------------------------------------------------------------------------
To get the last `max_tokens` in the list of Messages we can set `strategy="last"`. Notice that for our `token_counter` we can pass in a function (more on that below) or a language model (since language models have a message token counting method). It makes sense to pass in a model when you're trimming your messages to fit into the context window of that specific model:
# pip install -U langchain-openaifrom langchain_core.messages import ( AIMessage, HumanMessage, SystemMessage, trim_messages,)from langchain_openai import ChatOpenAImessages = [ SystemMessage("you're a good assistant, you always respond with a joke."), HumanMessage("i wonder why it's called langchain"), AIMessage( 'Well, I guess they thought "WordRope" and "SentenceString" just didn\'t have the same ring to it!' ), HumanMessage("and who is harrison chasing anyways"), AIMessage( "Hmmm let me think.\n\nWhy, he's probably chasing after the last cup of coffee in the office!" ), HumanMessage("what do you call a speechless parrot"),]trim_messages( messages, max_tokens=45, strategy="last", token_counter=ChatOpenAI(model="gpt-4o"),)
**API Reference:**[AIMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html) | [HumanMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.human.HumanMessage.html) | [SystemMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.system.SystemMessage.html) | [trim\_messages](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.utils.trim_messages.html) | [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html)
[AIMessage(content="Hmmm let me think.\n\nWhy, he's probably chasing after the last cup of coffee in the office!"), HumanMessage(content='what do you call a speechless parrot')]
If we want to always keep the initial system message we can specify `include_system=True`:
trim_messages( messages, max_tokens=45, strategy="last", token_counter=ChatOpenAI(model="gpt-4o"), include_system=True,)
[SystemMessage(content="you're a good assistant, you always respond with a joke."), HumanMessage(content='what do you call a speechless parrot')]
If we want to allow splitting up the contents of a message we can specify `allow_partial=True`:
trim_messages( messages, max_tokens=56, strategy="last", token_counter=ChatOpenAI(model="gpt-4o"), include_system=True, allow_partial=True,)
[SystemMessage(content="you're a good assistant, you always respond with a joke."), AIMessage(content="\nWhy, he's probably chasing after the last cup of coffee in the office!"), HumanMessage(content='what do you call a speechless parrot')]
If we need to make sure that our first message (excluding the system message) is always of a specific type, we can specify `start_on`:
trim_messages( messages, max_tokens=60, strategy="last", token_counter=ChatOpenAI(model="gpt-4o"), include_system=True, start_on="human",)
[SystemMessage(content="you're a good assistant, you always respond with a joke."), HumanMessage(content='what do you call a speechless parrot')]
Getting the first `max_tokens` tokens[](#getting-the-first-max_tokens-tokens "Direct link to getting-the-first-max_tokens-tokens")
-----------------------------------------------------------------------------------------------------------------------------------
We can perform the flipped operation of getting the _first_ `max_tokens` by specifying `strategy="first"`:
trim_messages( messages, max_tokens=45, strategy="first", token_counter=ChatOpenAI(model="gpt-4o"),)
[SystemMessage(content="you're a good assistant, you always respond with a joke."), HumanMessage(content="i wonder why it's called langchain")]
Writing a custom token counter[](#writing-a-custom-token-counter "Direct link to Writing a custom token counter")
------------------------------------------------------------------------------------------------------------------
We can write a custom token counter function that takes in a list of messages and returns an int.
from typing import List# pip install tiktokenimport tiktokenfrom langchain_core.messages import BaseMessage, ToolMessagedef str_token_counter(text: str) -> int: enc = tiktoken.get_encoding("o200k_base") return len(enc.encode(text))def tiktoken_counter(messages: List[BaseMessage]) -> int: """Approximately reproduce https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb For simplicity only supports str Message.contents. """ num_tokens = 3 # every reply is primed with <|start|>assistant<|message|> tokens_per_message = 3 tokens_per_name = 1 for msg in messages: if isinstance(msg, HumanMessage): role = "user" elif isinstance(msg, AIMessage): role = "assistant" elif isinstance(msg, ToolMessage): role = "tool" elif isinstance(msg, SystemMessage): role = "system" else: raise ValueError(f"Unsupported messages type {msg.__class__}") num_tokens += ( tokens_per_message + str_token_counter(role) + str_token_counter(msg.content) ) if msg.name: num_tokens += tokens_per_name + str_token_counter(msg.name) return num_tokenstrim_messages( messages, max_tokens=45, strategy="last", token_counter=tiktoken_counter,)
**API Reference:**[BaseMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.base.BaseMessage.html) | [ToolMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.tool.ToolMessage.html)
[AIMessage(content="Hmmm let me think.\n\nWhy, he's probably chasing after the last cup of coffee in the office!"), HumanMessage(content='what do you call a speechless parrot')]
Chaining[](#chaining "Direct link to Chaining")
------------------------------------------------
`trim_messages` can be used imperatively (like above) or declaratively, making it easy to compose with other components in a chain.
llm = ChatOpenAI(model="gpt-4o")# Notice we don't pass in messages. This creates# a RunnableLambda that takes messages as inputtrimmer = trim_messages( max_tokens=45, strategy="last", token_counter=llm, include_system=True,)chain = trimmer | llmchain.invoke(messages)
AIMessage(content='A: A "Polly-gone"!', response_metadata={'token_usage': {'completion_tokens': 9, 'prompt_tokens': 32, 'total_tokens': 41}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_66b29dffce', 'finish_reason': 'stop', 'logprobs': None}, id='run-83e96ddf-bcaa-4f63-824c-98b0f8a0d474-0', usage_metadata={'input_tokens': 32, 'output_tokens': 9, 'total_tokens': 41})
Looking at the LangSmith trace we can see that before the messages are passed to the model they are first trimmed: [https://smith.langchain.com/public/65af12c4-c24d-4824-90f0-6547566e59bb/r](https://smith.langchain.com/public/65af12c4-c24d-4824-90f0-6547566e59bb/r)
Looking at just the trimmer, we can see that it's a Runnable object that can be invoked like all Runnables:
trimmer.invoke(messages)
[SystemMessage(content="you're a good assistant, you always respond with a joke."), HumanMessage(content='what do you call a speechless parrot')]
Using with ChatMessageHistory[](#using-with-chatmessagehistory "Direct link to Using with ChatMessageHistory")
---------------------------------------------------------------------------------------------------------------
Trimming messages is especially useful when [working with chat histories](/v0.2/docs/how_to/message_history/), which can get arbitrarily long:
from langchain_core.chat_history import InMemoryChatMessageHistoryfrom langchain_core.runnables.history import RunnableWithMessageHistorychat_history = InMemoryChatMessageHistory(messages=messages[:-1])def dummy_get_session_history(session_id): if session_id != "1": return InMemoryChatMessageHistory() return chat_historyllm = ChatOpenAI(model="gpt-4o")trimmer = trim_messages( max_tokens=45, strategy="last", token_counter=llm, include_system=True,)chain = trimmer | llmchain_with_history = RunnableWithMessageHistory(chain, dummy_get_session_history)chain_with_history.invoke( [HumanMessage("what do you call a speechless parrot")], config={"configurable": {"session_id": "1"}},)
**API Reference:**[InMemoryChatMessageHistory](https://api.python.langchain.com/en/latest/chat_history/langchain_core.chat_history.InMemoryChatMessageHistory.html) | [RunnableWithMessageHistory](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.history.RunnableWithMessageHistory.html)
AIMessage(content='A "polly-no-wanna-cracker"!', response_metadata={'token_usage': {'completion_tokens': 10, 'prompt_tokens': 32, 'total_tokens': 42}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_5bf7397cd3', 'finish_reason': 'stop', 'logprobs': None}, id='run-054dd309-3497-4e7b-b22a-c1859f11d32e-0', usage_metadata={'input_tokens': 32, 'output_tokens': 10, 'total_tokens': 42})
Looking at the LangSmith trace we can see that we retrieve all of our messages but before the messages are passed to the model they are trimmed to be just the system message and last human message: [https://smith.langchain.com/public/17dd700b-9994-44ca-930c-116e00997315/r](https://smith.langchain.com/public/17dd700b-9994-44ca-930c-116e00997315/r)
API reference[](#api-reference "Direct link to API reference")
---------------------------------------------------------------
For a complete description of all arguments head to the API reference: [https://api.python.langchain.com/en/latest/messages/langchain\_core.messages.utils.trim\_messages.html](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.utils.trim_messages.html)
https://python.langchain.com/v0.2/docs/how_to/tools_builtin/
How to use built-in tools and toolkits
======================================
Prerequisites
This guide assumes familiarity with the following concepts:
* [LangChain Tools](/v0.2/docs/concepts/#tools)
* [LangChain Toolkits](/v0.2/docs/concepts/#tools)
Tools[](#tools "Direct link to Tools")
---------------------------------------
LangChain has a large collection of 3rd party tools. Please visit [Tool Integrations](/v0.2/docs/integrations/tools/) for a list of the available tools.
info
When using 3rd party tools, make sure that you understand how the tool works and what permissions it has. Read over its documentation and check if anything is required from you from a security point of view. Please see our [security](https://python.langchain.com/v0.2/docs/security/) guidelines for more information.
Let's try out the [Wikipedia integration](/v0.2/docs/integrations/tools/wikipedia/).
!pip install -qU wikipedia
from langchain_community.tools import WikipediaQueryRunfrom langchain_community.utilities import WikipediaAPIWrapperapi_wrapper = WikipediaAPIWrapper(top_k_results=1, doc_content_chars_max=100)tool = WikipediaQueryRun(api_wrapper=api_wrapper)print(tool.invoke({"query": "langchain"}))
**API Reference:**[WikipediaQueryRun](https://api.python.langchain.com/en/latest/tools/langchain_community.tools.wikipedia.tool.WikipediaQueryRun.html) | [WikipediaAPIWrapper](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.wikipedia.WikipediaAPIWrapper.html)
Page: LangChainSummary: LangChain is a framework designed to simplify the creation of applications
The tool has the following defaults associated with it:
print(f"Name: {tool.name}")print(f"Description: {tool.description}")print(f"args schema: {tool.args}")print(f"returns directly?: {tool.return_direct}")
Name: wiki-toolDescription: look up things in wikipediaargs schema: {'query': {'title': 'Query', 'description': 'query to look up in Wikipedia, should be 3 or less words', 'type': 'string'}}returns directly?: True
Customizing Default Tools[](#customizing-default-tools "Direct link to Customizing Default Tools")
---------------------------------------------------------------------------------------------------
We can also modify the built in name, description, and JSON schema of the arguments.
When defining the JSON schema of the arguments, it is important that the inputs remain the same as the function, so you shouldn't change that. But you can define custom descriptions for each input easily.
from langchain_community.tools import WikipediaQueryRunfrom langchain_community.utilities import WikipediaAPIWrapperfrom langchain_core.pydantic_v1 import BaseModel, Fieldclass WikiInputs(BaseModel): """Inputs to the wikipedia tool.""" query: str = Field( description="query to look up in Wikipedia, should be 3 or less words" )tool = WikipediaQueryRun( name="wiki-tool", description="look up things in wikipedia", args_schema=WikiInputs, api_wrapper=api_wrapper, return_direct=True,)print(tool.run("langchain"))
**API Reference:**[WikipediaQueryRun](https://api.python.langchain.com/en/latest/tools/langchain_community.tools.wikipedia.tool.WikipediaQueryRun.html) | [WikipediaAPIWrapper](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.wikipedia.WikipediaAPIWrapper.html)
Page: LangChainSummary: LangChain is a framework designed to simplify the creation of applications
print(f"Name: {tool.name}")print(f"Description: {tool.description}")print(f"args schema: {tool.args}")print(f"returns directly?: {tool.return_direct}")
Name: wiki-toolDescription: look up things in wikipediaargs schema: {'query': {'title': 'Query', 'description': 'query to look up in Wikipedia, should be 3 or less words', 'type': 'string'}}returns directly?: True
How to use built-in toolkits[](#how-to-use-built-in-toolkits "Direct link to How to use built-in toolkits")
------------------------------------------------------------------------------------------------------------
Toolkits are collections of tools that are designed to be used together for specific tasks. They have convenient loading methods.
For a complete list of available ready-made toolkits, visit [Integrations](/v0.2/docs/integrations/toolkits/).
All Toolkits expose a `get_tools` method which returns a list of tools.
You're usually meant to use them this way:
    # Initialize a toolkit
    toolkit = ExampleToolkit(...)

    # Get list of tools
    tools = toolkit.get_tools()
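To make this concrete, here is a minimal sketch using the SQL database toolkit; it assumes a local SQLite file named `example.db` (a hypothetical path) and an OpenAI chat model, but any toolkit follows the same `get_tools` pattern:

    from langchain_community.agent_toolkits import SQLDatabaseToolkit
    from langchain_community.utilities import SQLDatabase
    from langchain_openai import ChatOpenAI

    # Connect to a local SQLite database (hypothetical file path)
    db = SQLDatabase.from_uri("sqlite:///example.db")

    # The toolkit bundles several SQL-related tools that share the same db and llm
    toolkit = SQLDatabaseToolkit(db=db, llm=ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0))

    for tool in toolkit.get_tools():
        print(tool.name)

The returned tools can then be passed to an agent or bound to a chat model just like any individually constructed tool.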
https://python.langchain.com/v0.2/docs/how_to/vectorstores/
How to create and query vector stores
=====================================
info
Head to [Integrations](/v0.2/docs/integrations/vectorstores/) for documentation on built-in integrations with 3rd-party vector stores.
One of the most common ways to store and search over unstructured data is to embed it and store the resulting embedding vectors, and then at query time to embed the unstructured query and retrieve the embedding vectors that are 'most similar' to the embedded query. A vector store takes care of storing embedded data and performing vector search for you.
Get started[](#get-started "Direct link to Get started")
---------------------------------------------------------
This guide showcases basic functionality related to vector stores. A key part of working with vector stores is creating the vectors to put in them, which are usually created via embeddings. Therefore, it is recommended that you familiarize yourself with the [text embedding model interfaces](/v0.2/docs/how_to/embed_text/) before diving into this.
Before using the vectorstore at all, we need to load some data and initialize an embedding model.
We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.
    import os
    import getpass

    os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
    from langchain_community.document_loaders import TextLoader
    from langchain_openai import OpenAIEmbeddings
    from langchain_text_splitters import CharacterTextSplitter

    # Load the document, split it into chunks, embed each chunk and load it into the vector store.
    raw_documents = TextLoader('state_of_the_union.txt').load()
    text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
    documents = text_splitter.split_documents(raw_documents)
**API Reference:**[TextLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.text.TextLoader.html) | [OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html) | [CharacterTextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.CharacterTextSplitter.html)
There are many great vector store options; here are a few that are free, open source, and run entirely on your local machine. Review all integrations for many great hosted offerings.
* Chroma
* FAISS
* Lance
This walkthrough uses the `chroma` vector database, which runs on your local machine as a library.
pip install langchain-chroma
    from langchain_chroma import Chroma

    db = Chroma.from_documents(documents, OpenAIEmbeddings())
This walkthrough uses the `FAISS` vector database, which makes use of the Facebook AI Similarity Search (FAISS) library.
pip install faiss-cpu
    from langchain_community.vectorstores import FAISS

    db = FAISS.from_documents(documents, OpenAIEmbeddings())
**API Reference:**[FAISS](https://api.python.langchain.com/en/latest/vectorstores/langchain_community.vectorstores.faiss.FAISS.html)
This notebook shows how to use functionality related to the LanceDB vector database based on the Lance data format.
pip install lancedb
    from langchain_community.vectorstores import LanceDB

    import lancedb

    # An embeddings instance is needed to build the example table; reusing OpenAIEmbeddings here
    embeddings = OpenAIEmbeddings()

    db = lancedb.connect("/tmp/lancedb")
    table = db.create_table(
        "my_table",
        data=[
            {
                "vector": embeddings.embed_query("Hello World"),
                "text": "Hello World",
                "id": "1",
            }
        ],
        mode="overwrite",
    )
    db = LanceDB.from_documents(documents, OpenAIEmbeddings())
**API Reference:**[LanceDB](https://api.python.langchain.com/en/latest/vectorstores/langchain_community.vectorstores.lancedb.LanceDB.html)
Similarity search[](#similarity-search "Direct link to Similarity search")
---------------------------------------------------------------------------
All vectorstores expose a `similarity_search` method. This takes a query string, creates an embedding of it, and then finds the documents whose embeddings are most similar to the query embedding.
query = "What did the president say about Ketanji Brown Jackson"docs = db.similarity_search(query)print(docs[0].page_content)
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
### Similarity search by vector[](#similarity-search-by-vector "Direct link to Similarity search by vector")
It is also possible to do a search for documents similar to a given embedding vector using `similarity_search_by_vector` which accepts an embedding vector as a parameter instead of a string.
    embedding_vector = OpenAIEmbeddings().embed_query(query)
    docs = db.similarity_search_by_vector(embedding_vector)
    print(docs[0].page_content)
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
Async Operations[](#async-operations "Direct link to Async Operations")
------------------------------------------------------------------------
Vector stores are usually run as a separate service that requires some IO operations, and therefore they might be called asynchronously. That gives performance benefits as you don't waste time waiting for responses from external services. That might also be important if you work with an asynchronous framework, such as [FastAPI](https://fastapi.tiangolo.com/).
LangChain supports async operation on vector stores. All the methods might be called using their async counterparts, with the prefix `a`, meaning `async`.
    docs = await db.asimilarity_search(query)
    docs
[Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': 'state_of_the_union.txt'}), Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n\nWe can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. \n\nWe’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n\nWe’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \n\nWe’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', metadata={'source': 'state_of_the_union.txt'}), Document(page_content='And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. \n\nAs I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n\nWhile it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. \n\nAnd soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. \n\nSo tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. \n\nFirst, beat the opioid epidemic.', metadata={'source': 'state_of_the_union.txt'}), Document(page_content='Tonight, I’m announcing a crackdown on these companies overcharging American businesses and consumers. \n\nAnd as Wall Street firms take over more nursing homes, quality in those homes has gone down and costs have gone up. \n\nThat ends on my watch. \n\nMedicare is going to set higher standards for nursing homes and make sure your loved ones get the care they deserve and expect. \n\nWe’ll also cut costs and keep the economy going strong by giving workers a fair shot, provide more training and apprenticeships, hire them based on their skills not degrees. \n\nLet’s pass the Paycheck Fairness Act and paid leave. 
\n\nRaise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty. \n\nLet’s increase Pell Grants and increase our historic support of HBCUs, and invest in what Jill—our First Lady who teaches full-time—calls America’s best-kept secret: community colleges.', metadata={'source': 'state_of_the_union.txt'})]
https://python.langchain.com/v0.2/docs/how_to/passthrough/
How to pass through arguments from one step to the next
=======================================================
Prerequisites
This guide assumes familiarity with the following concepts:
* [LangChain Expression Language (LCEL)](/v0.2/docs/concepts/#langchain-expression-language)
* [Chaining runnables](/v0.2/docs/how_to/sequence/)
* [Calling runnables in parallel](/v0.2/docs/how_to/parallel/)
* [Custom functions](/v0.2/docs/how_to/functions/)
When composing chains with several steps, sometimes you will want to pass data from previous steps unchanged for use as input to a later step. The [`RunnablePassthrough`](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html) class allows you to do just this, and is typically used in conjunction with a [RunnableParallel](/v0.2/docs/how_to/parallel/) to pass data through to a later step in your constructed chains.
See the example below:
    %pip install -qU langchain langchain-openai

    import os
    from getpass import getpass

    os.environ["OPENAI_API_KEY"] = getpass()
    from langchain_core.runnables import RunnableParallel, RunnablePassthrough

    runnable = RunnableParallel(
        passed=RunnablePassthrough(),
        modified=lambda x: x["num"] + 1,
    )
    runnable.invoke({"num": 1})
**API Reference:**[RunnableParallel](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableParallel.html) | [RunnablePassthrough](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html)
{'passed': {'num': 1}, 'modified': 2}
As seen above, the `passed` key was called with `RunnablePassthrough()`, so it simply passed on `{'num': 1}`.
We also set a second key in the map, `modified`. This uses a lambda that adds 1 to `num`, so the `modified` key ends up with the value `2`.
Retrieval Example[](#retrieval-example "Direct link to Retrieval Example")
---------------------------------------------------------------------------
In the example below, we see a more real-world use case where we use `RunnablePassthrough` along with `RunnableParallel` in a chain to properly format inputs to a prompt:
    from langchain_community.vectorstores import FAISS
    from langchain_core.output_parsers import StrOutputParser
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_core.runnables import RunnablePassthrough
    from langchain_openai import ChatOpenAI, OpenAIEmbeddings

    vectorstore = FAISS.from_texts(
        ["harrison worked at kensho"], embedding=OpenAIEmbeddings()
    )
    retriever = vectorstore.as_retriever()

    template = """Answer the question based only on the following context:

    {context}

    Question: {question}
    """
    prompt = ChatPromptTemplate.from_template(template)
    model = ChatOpenAI()

    retrieval_chain = (
        {"context": retriever, "question": RunnablePassthrough()}
        | prompt
        | model
        | StrOutputParser()
    )

    retrieval_chain.invoke("where did harrison work?")
**API Reference:**[FAISS](https://api.python.langchain.com/en/latest/vectorstores/langchain_community.vectorstores.faiss.FAISS.html) | [StrOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.string.StrOutputParser.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) | [RunnablePassthrough](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html) | [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html) | [OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html)
'Harrison worked at Kensho.'
Here the input to the prompt is expected to be a map with keys "context" and "question". The user input is just the question, so we need to fetch the context using our retriever and pass the user input through under the "question" key. The `RunnablePassthrough` allows us to pass on the user's question to the prompt and model.
Next steps[](#next-steps "Direct link to Next steps")
------------------------------------------------------
Now you've learned how to pass data through your chains to help format the data flowing through your chains.
To learn more, see the other how-to guides on runnables in this section.
https://python.langchain.com/v0.2/docs/versions/overview/
LangChain over time
===================
What’s new in LangChain?[](#whats-new-in-langchain "Direct link to What’s new in LangChain?")
----------------------------------------------------------------------------------------------
The following features have been added during the development of 0.1.x:
* Better streaming support via the [Event Streaming API](https://python.langchain.com/docs/expression_language/streaming/#using-stream-events).
* [Standardized tool calling support](https://blog.langchain.dev/tool-calling-with-langchain/)
* A standardized interface for [structuring output](https://github.com/langchain-ai/langchain/discussions/18154)
* [@chain decorator](https://python.langchain.com/docs/expression_language/how_to/decorator/) to more easily create **RunnableLambdas**
* [Inspect your runnables](https://python.langchain.com/docs/expression_language/how_to/inspect/)
* In Python, better async support for many core abstractions (thank you [@cbornet](https://github.com/cbornet)!!)
* Include response metadata in `AIMessage` to make it easy to access raw output from the underlying models
* Tooling to visualize [your runnables](https://python.langchain.com/docs/expression_language/how_to/inspect/) or [your langgraph app](https://github.com/langchain-ai/langgraph/blob/main/examples/visualization.ipynb)
* Interoperability of chat message histories across most providers
* [Over 20+ partner packages in python](https://python.langchain.com/docs/integrations/platforms/) for popular integrations
What’s coming to LangChain?[](#whats-coming-to-langchain "Direct link to What’s coming to LangChain?")
-------------------------------------------------------------------------------------------------------
* We’ve been working hard on [langgraph](https://langchain-ai.github.io/langgraph/). We will be building more capabilities on top of it and focusing on making it the go-to framework for agent architectures.
* Vectorstores V2! We’ll be revisiting our vectorstores abstractions to help improve usability and reliability.
* Better documentation and versioned docs!
* We’re planning a breaking release (0.3.0) sometime between July-September to [upgrade to full support of Pydantic 2](https://github.com/langchain-ai/langchain/discussions/19339), and will drop support for Pydantic 1 (including objects originating from the `v1` namespace of Pydantic 2).
What changed?[](#what-changed "Direct link to What changed?")
--------------------------------------------------------------
Due to the rapidly evolving field, LangChain has also evolved rapidly.
This document serves to outline at a high level what has changed and why.
### TLDR[](#tldr "Direct link to TLDR")
**As of 0.2.0:**
* This release completes the work that we started with release 0.1.0 by removing the dependency of `langchain` on `langchain-community`.
* The `langchain` package no longer requires `langchain-community`. Instead, `langchain-community` now depends on `langchain-core` and `langchain`.
* User code that still relies on deprecated imports from `langchain` will continue to work as long as `langchain_community` is installed. These imports will start raising errors in release 0.4.x.
**As of 0.1.0:**
* `langchain` was split into the following component packages: `langchain-core`, `langchain`, `langchain-community`, `langchain-[partner]` to improve the usability of langchain code in production settings. You can read more about it on our [blog](https://blog.langchain.dev/langchain-v0-1-0/).
### Ecosystem organization[](#ecosystem-organization "Direct link to Ecosystem organization")
By the release of 0.1.0, LangChain had grown to a large ecosystem with many integrations and a large community.
To improve the usability of LangChain in production, we split the single `langchain` package into multiple packages. This allowed us to create a good foundation architecture for the LangChain ecosystem and improve the usability of `langchain` in production.
Here is the high-level breakdown of the ecosystem:
* **langchain-core**: contains core abstractions involving LangChain Runnables, tooling for observability, and base implementations of important abstractions (e.g., Chat Models).
* **langchain:** contains generic code that is built using interfaces defined in `langchain-core`. This package is for code that generalizes well across different implementations of specific interfaces. For example, `create_tool_calling_agent` works across chat models that support [tool calling capabilities](https://blog.langchain.dev/tool-calling-with-langchain/).
* **langchain-community**: community maintained 3rd party integrations. Contains integrations based on interfaces defined in **langchain-core**. Maintained by the LangChain community.
* **Partner Packages (e.g., langchain-\[partner\])**: Partner packages are packages dedicated to especially popular integrations (e.g., `langchain-openai`, `langchain-anthropic` etc.). The dedicated packages generally benefit from better reliability and support.
* `langgraph`: Build robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.
* `langserve`: Deploy LangChain chains as REST APIs.
In the 0.1.0 release, `langchain-community` was retained as a required dependency of `langchain`.
This allowed imports of vectorstores, chat models, and other integrations to continue working through `langchain` rather than forcing users to update all of their imports to `langchain-community`.
For the 0.2.0 release, we’re removing the dependency of `langchain` on `langchain-community`. This is something we’ve been planning to do since the 0.1 release because we believe this is the right package architecture.
Old imports will continue to work as long as `langchain-community` is installed. These imports will be removed in the 0.4.0 release.
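As an illustrative sketch of what this migration looks like (using `ChatOpenAI` purely as an example class), the old and new import paths are:

    # Deprecated path: resolves through `langchain` only while `langchain-community`
    # is installed, and will be removed in the 0.4.0 release
    from langchain.chat_models import ChatOpenAI

    # Preferred: import from the community package directly...
    from langchain_community.chat_models import ChatOpenAI

    # ...or, better yet, from the dedicated partner package
    from langchain_openai import ChatOpenAI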
To understand why we think breaking the dependency of `langchain` on `langchain-community` is best we should understand what each package is meant to do.
`langchain` is meant to contain high-level chains and agent architectures. The logic in these should be specified at the level of abstractions like `ChatModel` and `Retriever`, and should not be specific to any one integration. This has two main benefits:
1. `langchain` is fairly lightweight. Here is the full list of required dependencies (after the split)
python = ">=3.8.1,<4.0"langchain-core = "^0.2.0"langchain-text-splitters = ">=0.0.1,<0.1"langsmith = "^0.1.17"pydantic = ">=1,<3"SQLAlchemy = ">=1.4,<3"requests = "^2"PyYAML = ">=5.3"numpy = "^1"aiohttp = "^3.8.3"tenacity = "^8.1.0"jsonpatch = "^1.33"
2. `langchain` chains/agents are largely integration-agnostic, which makes it easy to experiment with different integrations and future-proofs your code should there be issues with one specific integration.
There is also a third less tangible benefit which is that being integration-agnostic forces us to find only those very generic abstractions and architectures which generalize well across integrations. Given how general the abilities of the foundational tech are, and how quickly the space is moving, having generic architectures is a good way of future-proofing your applications.
`langchain-community` is intended to have all integration-specific components that are not yet being maintained in separate `langchain-{partner}` packages. Today this is still the majority of integrations and a lot of code. This code is primarily contributed by the community, while `langchain` is largely written by core maintainers. All of these integrations use optional dependencies and conditional imports, which prevents dependency bloat and conflicts but means compatible dependency versions are not made explicit. Given the volume of integrations in `langchain-community` and the speed at which integrations change, it’s very hard to follow semver versioning, and we currently don’t.
All of which is to say that there are no large benefits to `langchain` depending on `langchain-community` and some obvious downsides: the functionality in `langchain` should be integration-agnostic anyway, `langchain-community` can't be properly versioned, and depending on `langchain-community` increases the [vulnerability surface](https://github.com/langchain-ai/langchain/discussions/19083) of `langchain`.
For more context about the reason for the organization please see our blog: [https://blog.langchain.dev/langchain-v0-1-0/](https://blog.langchain.dev/langchain-v0-1-0/)
https://python.langchain.com/v0.2/docs/how_to/prompts_composition/
How to compose prompts together
===============================
Prerequisites
This guide assumes familiarity with the following concepts:
* [Prompt templates](/v0.2/docs/concepts/#prompt-templates)
LangChain provides a user friendly interface for composing different parts of prompts together. You can do this with either string prompts or chat prompts. Constructing prompts this way allows for easy reuse of components.
String prompt composition[](#string-prompt-composition "Direct link to String prompt composition")
---------------------------------------------------------------------------------------------------
When working with string prompts, each template is joined together. You can work with either prompts directly or strings (the first element in the list needs to be a prompt).
    from langchain_core.prompts import PromptTemplate

    prompt = (
        PromptTemplate.from_template("Tell me a joke about {topic}")
        + ", make it funny"
        + "\n\nand in {language}"
    )
    prompt
**API Reference:**[PromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.prompt.PromptTemplate.html)
PromptTemplate(input_variables=['language', 'topic'], template='Tell me a joke about {topic}, make it funny\n\nand in {language}')
prompt.format(topic="sports", language="spanish")
'Tell me a joke about sports, make it funny\n\nand in spanish'
Chat prompt composition[](#chat-prompt-composition "Direct link to Chat prompt composition")
---------------------------------------------------------------------------------------------
A chat prompt is made up of a list of messages. Similarly to the above example, we can concatenate chat prompt templates. Each new element is a new message in the final prompt.
First, let's initialize a [`ChatPromptTemplate`](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) with a [`SystemMessage`](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.system.SystemMessage.html).
    from langchain_core.messages import AIMessage, HumanMessage, SystemMessage

    prompt = SystemMessage(content="You are a nice pirate")
**API Reference:**[AIMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html) | [HumanMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.human.HumanMessage.html) | [SystemMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.system.SystemMessage.html)
You can then easily create a pipeline combining it with other messages _or_ message templates. Use a `Message` when there are no variables to be formatted, and a `MessageTemplate` when there are variables to be formatted. You can also use just a string (note: this will automatically get inferred as a [`HumanMessagePromptTemplate`](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.HumanMessagePromptTemplate.html)).
    new_prompt = (
        prompt + HumanMessage(content="hi") + AIMessage(content="what?") + "{input}"
    )
Under the hood, this creates an instance of the ChatPromptTemplate class, so you can use it just as you did before!
new_prompt.format_messages(input="i said hi")
[SystemMessage(content='You are a nice pirate'), HumanMessage(content='hi'), AIMessage(content='what?'), HumanMessage(content='i said hi')]
Using PipelinePrompt[](#using-pipelineprompt "Direct link to Using PipelinePrompt")
------------------------------------------------------------------------------------
LangChain includes a class called [`PipelinePromptTemplate`](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.pipeline.PipelinePromptTemplate.html), which can be useful when you want to reuse parts of prompts. A PipelinePrompt consists of two main parts:
* Final prompt: The final prompt that is returned
* Pipeline prompts: A list of tuples, consisting of a string name and a prompt template. Each prompt template will be formatted and then passed to future prompt templates as a variable with the same name.
    from langchain_core.prompts import PipelinePromptTemplate, PromptTemplate

    full_template = """{introduction}

    {example}

    {start}"""
    full_prompt = PromptTemplate.from_template(full_template)

    introduction_template = """You are impersonating {person}."""
    introduction_prompt = PromptTemplate.from_template(introduction_template)

    example_template = """Here's an example of an interaction:

    Q: {example_q}
    A: {example_a}"""
    example_prompt = PromptTemplate.from_template(example_template)

    start_template = """Now, do this for real!

    Q: {input}
    A:"""
    start_prompt = PromptTemplate.from_template(start_template)

    input_prompts = [
        ("introduction", introduction_prompt),
        ("example", example_prompt),
        ("start", start_prompt),
    ]
    pipeline_prompt = PipelinePromptTemplate(
        final_prompt=full_prompt, pipeline_prompts=input_prompts
    )
    pipeline_prompt.input_variables
**API Reference:**[PipelinePromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.pipeline.PipelinePromptTemplate.html) | [PromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.prompt.PromptTemplate.html)
['person', 'example_a', 'example_q', 'input']
    print(
        pipeline_prompt.format(
            person="Elon Musk",
            example_q="What's your favorite car?",
            example_a="Tesla",
            input="What's your favorite social media site?",
        )
    )
    You are impersonating Elon Musk.

    Here's an example of an interaction:

    Q: What's your favorite car?
    A: Tesla

    Now, do this for real!

    Q: What's your favorite social media site?
    A:
Next steps[](#next-steps "Direct link to Next steps")
------------------------------------------------------
You've now learned how to compose prompts together.
Next, check out the other how-to guides on prompt templates in this section, like [adding few-shot examples to your prompt templates](/v0.2/docs/how_to/few_shot_examples_chat/).
https://python.langchain.com/v0.2/docs/how_to/assign/
How to add values to a chain's state
====================================
Prerequisites
This guide assumes familiarity with the following concepts:
* [LangChain Expression Language (LCEL)](/v0.2/docs/concepts/#langchain-expression-language)
* [Chaining runnables](/v0.2/docs/how_to/sequence/)
* [Calling runnables in parallel](/v0.2/docs/how_to/parallel/)
* [Custom functions](/v0.2/docs/how_to/functions/)
* [Passing data through](/v0.2/docs/how_to/passthrough/)
An alternate way of [passing data through](/v0.2/docs/how_to/passthrough/) steps of a chain is to leave the current values of the chain state unchanged while assigning a new value under a given key. The [`RunnablePassthrough.assign()`](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html#langchain_core.runnables.passthrough.RunnablePassthrough.assign) static method takes an input value and adds the extra arguments passed to the assign function.
This is useful in the common [LangChain Expression Language](/v0.2/docs/concepts/#langchain-expression-language) pattern of additively creating a dictionary to use as input to a later step.
Here's an example:
    %pip install --upgrade --quiet langchain langchain-openai

    import os
    from getpass import getpass

    os.environ["OPENAI_API_KEY"] = getpass()
    from langchain_core.runnables import RunnableParallel, RunnablePassthrough

    runnable = RunnableParallel(
        extra=RunnablePassthrough.assign(mult=lambda x: x["num"] * 3),
        modified=lambda x: x["num"] + 1,
    )
    runnable.invoke({"num": 1})
**API Reference:**[RunnableParallel](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableParallel.html) | [RunnablePassthrough](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html)
{'extra': {'num': 1, 'mult': 3}, 'modified': 2}
Let's break down what's happening here.
* The input to the chain is `{"num": 1}`. This is passed into a `RunnableParallel`, which invokes the runnables it is passed in parallel with that input.
* The value under the `extra` key is invoked. `RunnablePassthrough.assign()` keeps the original keys in the input dict (`{"num": 1}`), and assigns a new key called `mult`. Its value is computed by `lambda x: x["num"] * 3`, which is `3`. Thus, the result is `{"num": 1, "mult": 3}`.
* `{"num": 1, "mult": 3}` is returned to the `RunnableParallel` call, and is set as the value to the key `extra`.
* At the same time, the `modified` key is called. The result is `2`, since the lambda extracts a key called `"num"` from its input and adds one.
Thus, the result is `{'extra': {'num': 1, 'mult': 3}, 'modified': 2}`.
Streaming[](#streaming "Direct link to Streaming")
---------------------------------------------------
One convenient feature of this method is that it allows values to pass through as soon as they are available. To show this off, we'll use `RunnablePassthrough.assign()` to immediately return source docs in a retrieval chain:
    from langchain_community.vectorstores import FAISS
    from langchain_core.output_parsers import StrOutputParser
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_core.runnables import RunnablePassthrough
    from langchain_openai import ChatOpenAI, OpenAIEmbeddings

    vectorstore = FAISS.from_texts(
        ["harrison worked at kensho"], embedding=OpenAIEmbeddings()
    )
    retriever = vectorstore.as_retriever()

    template = """Answer the question based only on the following context:

    {context}

    Question: {question}
    """
    prompt = ChatPromptTemplate.from_template(template)
    model = ChatOpenAI()

    generation_chain = prompt | model | StrOutputParser()

    retrieval_chain = {
        "context": retriever,
        "question": RunnablePassthrough(),
    } | RunnablePassthrough.assign(output=generation_chain)

    stream = retrieval_chain.stream("where did harrison work?")

    for chunk in stream:
        print(chunk)
**API Reference:**[FAISS](https://api.python.langchain.com/en/latest/vectorstores/langchain_community.vectorstores.faiss.FAISS.html) | [StrOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.string.StrOutputParser.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) | [RunnablePassthrough](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html) | [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html) | [OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html)
    {'question': 'where did harrison work?'}
    {'context': [Document(page_content='harrison worked at kensho')]}
    {'output': ''}
    {'output': 'H'}
    {'output': 'arrison'}
    {'output': ' worked'}
    {'output': ' at'}
    {'output': ' Kens'}
    {'output': 'ho'}
    {'output': '.'}
    {'output': ''}
We can see that the first chunk contains the original `"question"` since that is immediately available. The second chunk contains `"context"` since the retriever finishes second. Finally, the output from the `generation_chain` streams in chunks as soon as it is available.
Next steps[](#next-steps "Direct link to Next steps")
------------------------------------------------------
Now you've learned how to pass data through your chains to help format the data flowing through your chains.
To learn more, see the other how-to guides on runnables in this section.
https://python.langchain.com/v0.2/docs/concepts/
Conceptual guide
================
This section contains introductions to key parts of LangChain.
Architecture[](#architecture "Direct link to Architecture")
------------------------------------------------------------
LangChain as a framework consists of a number of packages.
### `langchain-core`[](#langchain-core "Direct link to langchain-core")
This package contains base abstractions of different components and ways to compose them together. The interfaces for core components like LLMs, vector stores, retrievers and more are defined here. No third party integrations are defined here. The dependencies are kept purposefully very lightweight.
### Partner packages[](#partner-packages "Direct link to Partner packages")
While the long tail of integrations are in `langchain-community`, we split popular integrations into their own packages (e.g. `langchain-openai`, `langchain-anthropic`, etc). This was done in order to improve support for these important integrations.
### `langchain`[](#langchain "Direct link to langchain")
The main `langchain` package contains chains, agents, and retrieval strategies that make up an application's cognitive architecture. These are NOT third party integrations. All chains, agents, and retrieval strategies here are NOT specific to any one integration, but rather generic across all integrations.
### `langchain-community`[](#langchain-community "Direct link to langchain-community")
This package contains third party integrations that are maintained by the LangChain community. Key partner packages are separated out (see below). This contains all integrations for various components (LLMs, vector stores, retrievers). All dependencies in this package are optional to keep the package as lightweight as possible.
### [`langgraph`](https://langchain-ai.github.io/langgraph)[](#langgraph "Direct link to langgraph")
`langgraph` is an extension of `langchain` aimed at building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.
LangGraph exposes high level interfaces for creating common types of agents, as well as a low-level API for composing custom flows.
### [`langserve`](/v0.2/docs/langserve/)[](#langserve "Direct link to langserve")
A package to deploy LangChain chains as REST APIs. Makes it easy to get a production ready API up and running.
### [LangSmith](https://docs.smith.langchain.com)[](#langsmith "Direct link to langsmith")
A developer platform that lets you debug, test, evaluate, and monitor LLM applications.
![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](/v0.2/svg/langchain_stack.svg "LangChain Framework Overview")
LangChain Expression Language (LCEL)[](#langchain-expression-language-lcel "Direct link to LangChain Expression Language (LCEL)")
----------------------------------------------------------------------------------------------------------------------------------
LangChain Expression Language, or LCEL, is a declarative way to chain LangChain components. LCEL was designed from day 1 to **support putting prototypes in production, with no code changes**, from the simplest “prompt + LLM” chain to the most complex chains (we’ve seen folks successfully run LCEL chains with 100s of steps in production). To highlight a few of the reasons you might want to use LCEL:
**First-class streaming support** When you build your chains with LCEL you get the best possible time-to-first-token (time elapsed until the first chunk of output comes out). For some chains this means eg. we stream tokens straight from an LLM to a streaming output parser, and you get back parsed, incremental chunks of output at the same rate as the LLM provider outputs the raw tokens.
**Async support** Any chain built with LCEL can be called both with the synchronous API (eg. in your Jupyter notebook while prototyping) as well as with the asynchronous API (eg. in a [LangServe](/v0.2/docs/langserve/) server). This enables using the same code for prototypes and in production, with great performance, and the ability to handle many concurrent requests in the same server.
**Optimized parallel execution** Whenever your LCEL chains have steps that can be executed in parallel (eg if you fetch documents from multiple retrievers) we automatically do it, both in the sync and the async interfaces, for the smallest possible latency.
**Retries and fallbacks** Configure retries and fallbacks for any part of your LCEL chain. This is a great way to make your chains more reliable at scale. We’re currently working on adding streaming support for retries/fallbacks, so you can get the added reliability without any latency cost.
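As a rough sketch of what this looks like (the model names here are only examples), retries and fallbacks can be attached to any runnable with `with_retry` and `with_fallbacks`:

    from langchain_anthropic import ChatAnthropic
    from langchain_openai import ChatOpenAI

    # Retry the primary model a couple of times on failure...
    primary = ChatOpenAI(model="gpt-3.5-turbo-0125").with_retry(stop_after_attempt=2)

    # ...then fall back to a different provider if it still fails
    model = primary.with_fallbacks([ChatAnthropic(model="claude-3-haiku-20240307")])

    model.invoke("Hello!")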
**Access intermediate results** For more complex chains it’s often very useful to access the results of intermediate steps even before the final output is produced. This can be used to let end-users know something is happening, or even just to debug your chain. You can stream intermediate results, and it’s available on every [LangServe](/v0.2/docs/langserve/) server.
**Input and output schemas** Input and output schemas give every LCEL chain Pydantic and JSONSchema schemas inferred from the structure of your chain. This can be used for validation of inputs and outputs, and is an integral part of LangServe.
[**Seamless LangSmith tracing**](https://docs.smith.langchain.com) As your chains get more and more complex, it becomes increasingly important to understand what exactly is happening at every step. With LCEL, **all** steps are automatically logged to [LangSmith](https://docs.smith.langchain.com/) for maximum observability and debuggability.
[**Seamless LangServe deployment**](/v0.2/docs/langserve/) Any chain created with LCEL can be easily deployed using [LangServe](/v0.2/docs/langserve/).
### Runnable interface[](#runnable-interface "Direct link to Runnable interface")
To make it as easy as possible to create custom chains, we've implemented a ["Runnable"](https://api.python.langchain.com/en/stable/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable) protocol. Many LangChain components implement the `Runnable` protocol, including chat models, LLMs, output parsers, retrievers, prompt templates, and more. There are also several useful primitives for working with runnables, which you can read about below.
This is a standard interface, which makes it easy to define custom chains as well as invoke them in a standard way. The standard interface includes:
* `stream`: stream back chunks of the response
* `invoke`: call the chain on an input
* `batch`: call the chain on a list of inputs
These also have corresponding async methods that should be used with [asyncio](https://docs.python.org/3/library/asyncio.html) `await` syntax for concurrency (a short sketch follows the list below):
* `astream`: stream back chunks of the response async
* `ainvoke`: call the chain on an input async
* `abatch`: call the chain on a list of inputs async
* `astream_log`: stream back intermediate steps as they happen, in addition to the final response
* `astream_events`: **beta** stream events as they happen in the chain (introduced in `langchain-core` 0.1.14)
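Here is a minimal sketch of the sync and async interfaces side by side, assuming an OpenAI chat model is configured:

    import asyncio

    from langchain_core.output_parsers import StrOutputParser
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_openai import ChatOpenAI

    chain = (
        ChatPromptTemplate.from_template("Tell me a joke about {topic}")
        | ChatOpenAI(model="gpt-3.5-turbo-0125")
        | StrOutputParser()
    )

    # Synchronous interface
    chain.invoke({"topic": "bears"})
    chain.batch([{"topic": "bears"}, {"topic": "cats"}])

    # Asynchronous counterparts, prefixed with `a`
    async def main():
        await chain.ainvoke({"topic": "bears"})
        async for chunk in chain.astream({"topic": "cats"}):
            print(chunk, end="", flush=True)

    asyncio.run(main())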
The **input type** and **output type** vary by component:
| Component | Input Type | Output Type |
| --- | --- | --- |
| Prompt | Dictionary | PromptValue |
| ChatModel | Single string, list of chat messages or a PromptValue | ChatMessage |
| LLM | Single string, list of chat messages or a PromptValue | String |
| OutputParser | The output of an LLM or ChatModel | Depends on the parser |
| Retriever | Single string | List of Documents |
| Tool | Single string or dictionary, depending on the tool | Depends on the tool |
All runnables expose input and output **schemas** to inspect the inputs and outputs (see the example after this list):
* `input_schema`: an input Pydantic model auto-generated from the structure of the Runnable
* `output_schema`: an output Pydantic model auto-generated from the structure of the Runnable
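For example, a minimal sketch of inspecting these schemas on a simple chain (assuming an OpenAI chat model) might look like:

    from langchain_core.prompts import ChatPromptTemplate
    from langchain_openai import ChatOpenAI

    chain = ChatPromptTemplate.from_template("Tell me a joke about {topic}") | ChatOpenAI()

    # Pydantic models auto-generated from the chain's structure
    print(chain.input_schema.schema())   # expects a "topic" field
    print(chain.output_schema.schema())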
Components[](#components "Direct link to Components")
------------------------------------------------------
LangChain provides standard, extendable interfaces and external integrations for various components useful for building with LLMs. Some components LangChain implements, some components we rely on third-party integrations for, and others are a mix.
### Chat models[](#chat-models "Direct link to Chat models")
Language models that use a sequence of messages as inputs and return chat messages as outputs (as opposed to using plain text). These are traditionally newer models (older models are generally `LLMs`, see below). Chat models support the assignment of distinct roles to conversation messages, helping to distinguish messages from the AI, users, and instructions such as system messages.
Although the underlying models are messages in, message out, the LangChain wrappers also allow these models to take a string as input. This means you can easily use chat models in place of LLMs.
When a string is passed in as input, it is converted to a `HumanMessage` and then passed to the underlying model.
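As a small sketch (assuming an OpenAI chat model), the following two calls are equivalent:

    from langchain_core.messages import HumanMessage
    from langchain_openai import ChatOpenAI

    model = ChatOpenAI(model="gpt-3.5-turbo-0125")

    # A plain string is wrapped in a HumanMessage under the hood
    model.invoke("Hello!")
    model.invoke([HumanMessage(content="Hello!")])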
LangChain does not host any Chat Models, rather we rely on third party integrations.
We have some standardized parameters when constructing ChatModels:
* `model`: the name of the model
* `temperature`: the sampling temperature
* `timeout`: request timeout
* `max_tokens`: max tokens to generate
* `stop`: default stop sequences
* `max_retries`: max number of times to retry requests
* `api_key`: API key for the model provider
* `base_url`: endpoint to send requests to
Some important things to note:
* standard params only apply to model providers that expose parameters with the intended functionality. For example, some providers do not expose a configuration for maximum output tokens, so max\_tokens can't be supported on these.
* standard params are currently only enforced on integrations that have their own integration packages (e.g. `langchain-openai`, `langchain-anthropic`, etc.), they're not enforced on models in `langchain-community`.
ChatModels also accept other parameters that are specific to that integration. To find all the parameters supported by a ChatModel head to the API reference for that model.
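For instance, here is a rough sketch of constructing a chat model with several of the standard parameters (the specific values are only illustrative):

    from langchain_openai import ChatOpenAI

    llm = ChatOpenAI(
        model="gpt-3.5-turbo-0125",  # name of the model
        temperature=0,               # sampling temperature
        max_tokens=256,              # max tokens to generate
        timeout=30,                  # request timeout in seconds
        max_retries=2,               # max number of request retries
        stop=["\nObservation:"],     # default stop sequences (illustrative)
    )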
info
**Tool Calling** Some chat models have been fine-tuned for tool calling and provide a dedicated API for tool calling. Generally, such models are better at tool calling than non-fine-tuned models, and are recommended for use cases that require tool calling. Please see the [tool calling section](/v0.2/docs/concepts/#functiontool-calling) for more information.
For specifics on how to use chat models, see the [relevant how-to guides here](/v0.2/docs/how_to/#chat-models).
#### Multimodality[](#multimodality "Direct link to Multimodality")
Some chat models are multimodal, accepting images, audio and even video as inputs. These are still less common, meaning model providers haven't standardized on the "best" way to define the API. Multimodal **outputs** are even less common. As such, we've kept our multimodal abstractions fairly light weight and plan to further solidify the multimodal APIs and interaction patterns as the field matures.
In LangChain, most chat models that support multimodal inputs also accept those values in OpenAI's content blocks format. So far this is restricted to image inputs. For models like Gemini which support video and other bytes input, the APIs also support the native, model-specific representations.
For specifics on how to use multimodal models, see the [relevant how-to guides here](/v0.2/docs/how_to/#multimodal).
For a full list of LangChain model providers with multimodal models, [check out this table](/v0.2/docs/integrations/chat/#advanced-features).
### LLMs[](#llms "Direct link to LLMs")
caution
Pure text-in/text-out LLMs tend to be older or lower-level. Many popular models are best used as [chat completion models](/v0.2/docs/concepts/#chat-models), even for non-chat use cases.
You are probably looking for [the section above instead](/v0.2/docs/concepts/#chat-models).
Language models that take a string as input and return a string. These are traditionally older models (newer models generally are [Chat Models](/v0.2/docs/concepts/#chat-models), see above).
Although the underlying models are string in, string out, the LangChain wrappers also allow these models to take messages as input. This gives them the same interface as [Chat Models](/v0.2/docs/concepts/#chat-models). When messages are passed in as input, they will be formatted into a string under the hood before being passed to the underlying model.
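A minimal sketch of the string-in, string-out interface (assuming the OpenAI completions integration) looks like this:

    from langchain_openai import OpenAI

    llm = OpenAI(model="gpt-3.5-turbo-instruct")

    # String in, string out
    llm.invoke("Tell me a joke about bears")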
LangChain does not host any LLMs, rather we rely on third party integrations.
For specifics on how to use LLMs, see the [relevant how-to guides here](/v0.2/docs/how_to/#llms).
### Messages[](#messages "Direct link to Messages")
Some language models take a list of messages as input and return a message. There are a few different types of messages. All messages have a `role`, `content`, and `response_metadata` property.
The `role` describes WHO is saying the message. LangChain has different message classes for different roles.
The `content` property describes the content of the message. This can be a few different things:
* A string (most models deal with this type of content)
* A list of dictionaries (this is used for multimodal input, where the dictionary contains information about that input type and that input location); see the sketch after this list
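Here is a small sketch of both content shapes; the image URL is a placeholder, and the content-block format shown is the OpenAI style mentioned in the multimodality section above:

    from langchain_core.messages import HumanMessage

    # String content
    HumanMessage(content="Describe the weather in this image")

    # List-of-dictionaries content (OpenAI-style content blocks) for multimodal input
    HumanMessage(
        content=[
            {"type": "text", "text": "Describe the weather in this image"},
            {"type": "image_url", "image_url": {"url": "https://example.com/weather.png"}},
        ]
    )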
#### HumanMessage[](#humanmessage "Direct link to HumanMessage")
This represents a message from the user.
#### AIMessage[](#aimessage "Direct link to AIMessage")
This represents a message from the model. In addition to the `content` property, these messages also have:
**`response_metadata`**
The `response_metadata` property contains additional metadata about the response. The data here is often specific to each model provider. This is where information like log-probs and token usage may be stored.
**`tool_calls`**
These represent a decision from a language model to call a tool. They are included as part of an `AIMessage` output. They can be accessed from there with the `.tool_calls` property.
This property returns a list of dictionaries (see the example after this list). Each dictionary has the following keys:
* `name`: The name of the tool that should be called.
* `args`: The arguments to that tool.
* `id`: The id of that tool call.
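A short sketch of reading these fields (assuming a tool-calling OpenAI model and a toy tool) might look like:

    from langchain_core.tools import tool
    from langchain_openai import ChatOpenAI


    @tool
    def multiply(a: int, b: int) -> int:
        """Multiply two integers."""
        return a * b


    llm_with_tools = ChatOpenAI(model="gpt-3.5-turbo-0125").bind_tools([multiply])
    ai_msg = llm_with_tools.invoke("What is 3 times 12?")

    for call in ai_msg.tool_calls:
        print(call["name"], call["args"], call["id"])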
#### SystemMessage[](#systemmessage "Direct link to SystemMessage")
This represents a system message, which tells the model how to behave. Not every model provider supports this.
#### FunctionMessage[](#functionmessage "Direct link to FunctionMessage")
This represents the result of a function call. In addition to `role` and `content`, this message has a `name` parameter which conveys the name of the function that was called to produce this result.
#### ToolMessage[](#toolmessage "Direct link to ToolMessage")
This represents the result of a tool call. This is distinct from a FunctionMessage in order to match OpenAI's `function` and `tool` message types. In addition to `role` and `content`, this message has a `tool_call_id` parameter which conveys the id of the call to the tool that was called to produce this result.
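A minimal sketch of constructing one (the id value is illustrative and would normally come from the matching tool call on the preceding `AIMessage`):

    from langchain_core.messages import ToolMessage

    ToolMessage(content="36", tool_call_id="call_abc123")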
### Prompt templates[](#prompt-templates "Direct link to Prompt templates")
Prompt templates help to translate user input and parameters into instructions for a language model. This can be used to guide a model's response, helping it understand the context and generate relevant and coherent language-based output.
Prompt Templates take as input a dictionary, where each key represents a variable in the prompt template to fill in.
Prompt Templates output a PromptValue. This PromptValue can be passed to an LLM or a ChatModel, and can also be cast to a string or a list of messages. The reason this PromptValue exists is to make it easy to switch between strings and messages.
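For example, a quick sketch of producing a PromptValue and converting it both ways:

    from langchain_core.prompts import ChatPromptTemplate

    prompt_value = ChatPromptTemplate.from_template(
        "Tell me a joke about {topic}"
    ).invoke({"topic": "cats"})

    prompt_value.to_string()    # a single formatted string
    prompt_value.to_messages()  # a list of chat messages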
There are a few different types of prompt templates:
#### String PromptTemplates[](#string-prompttemplates "Direct link to String PromptTemplates")
These prompt templates are used to format a single string, and generally are used for simpler inputs. For example, a common way to construct and use a PromptTemplate is as follows:
    from langchain_core.prompts import PromptTemplate

    prompt_template = PromptTemplate.from_template("Tell me a joke about {topic}")
    prompt_template.invoke({"topic": "cats"})
**API Reference:**[PromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.prompt.PromptTemplate.html)
#### ChatPromptTemplates[](#chatprompttemplates "Direct link to ChatPromptTemplates")
These prompt templates are used to format a list of messages. These "templates" consist of a list of templates themselves. For example, a common way to construct and use a ChatPromptTemplate is as follows:
    from langchain_core.prompts import ChatPromptTemplate

    prompt_template = ChatPromptTemplate.from_messages([
        ("system", "You are a helpful assistant"),
        ("user", "Tell me a joke about {topic}")
    ])

    prompt_template.invoke({"topic": "cats"})
**API Reference:**[ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html)
In the above example, this ChatPromptTemplate will construct two messages when called. The first is a system message, that has no variables to format. The second is a HumanMessage, and will be formatted by the `topic` variable the user passes in.
#### MessagesPlaceholder[](#messagesplaceholder "Direct link to MessagesPlaceholder")
This prompt template is responsible for adding a list of messages in a particular place. In the above ChatPromptTemplate, we saw how we could format two messages, each one a string. But what if we wanted the user to pass in a list of messages that we would slot into a particular spot? This is how you use MessagesPlaceholder.
    from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
    from langchain_core.messages import HumanMessage

    prompt_template = ChatPromptTemplate.from_messages([
        ("system", "You are a helpful assistant"),
        MessagesPlaceholder("msgs")
    ])

    prompt_template.invoke({"msgs": [HumanMessage(content="hi!")]})
**API Reference:**[ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) | [MessagesPlaceholder](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.MessagesPlaceholder.html) | [HumanMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.human.HumanMessage.html)
This will produce a list of two messages, the first one being a system message, and the second one being the HumanMessage we passed in. If we had passed in 5 messages, then it would have produced 6 messages in total (the system message plus the 5 passed in). This is useful for letting a list of messages be slotted into a particular spot.
An alternative way to accomplish the same thing without using the `MessagesPlaceholder` class explicitly is:
    prompt_template = ChatPromptTemplate.from_messages([
        ("system", "You are a helpful assistant"),
        ("placeholder", "{msgs}")  # <-- This is the changed part
    ])
For specifics on how to use prompt templates, see the [relevant how-to guides here](/v0.2/docs/how_to/#prompt-templates).
### Example selectors[](#example-selectors "Direct link to Example selectors")
One common prompting technique for achieving better performance is to include examples as part of the prompt. This gives the language model concrete examples of how it should behave. Sometimes these examples are hardcoded into the prompt, but for more advanced situations it may be nice to dynamically select them. Example Selectors are classes responsible for selecting and then formatting examples into prompts.
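As a minimal sketch (the toy examples and `k=1` are illustrative, and this assumes the Chroma and OpenAI integrations are installed), a semantic similarity example selector can be plugged into a few-shot prompt like this:

```python
from langchain_chroma import Chroma
from langchain_core.example_selectors import SemanticSimilarityExampleSelector
from langchain_core.prompts import FewShotPromptTemplate, PromptTemplate
from langchain_openai import OpenAIEmbeddings

# Toy examples of the behavior we want the model to imitate.
examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
]

# Select the single most similar example to the incoming input.
example_selector = SemanticSimilarityExampleSelector.from_examples(
    examples, OpenAIEmbeddings(model="text-embedding-3-small"), Chroma, k=1
)

prompt = FewShotPromptTemplate(
    example_selector=example_selector,
    example_prompt=PromptTemplate.from_template("Input: {input}\nOutput: {output}"),
    suffix="Input: {adjective}\nOutput:",
    input_variables=["adjective"],
)
print(prompt.invoke({"adjective": "large"}).to_string())
```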
For specifics on how to use example selectors, see the [relevant how-to guides here](/v0.2/docs/how_to/#example-selectors).
### Output parsers[](#output-parsers "Direct link to Output parsers")
note
The information here refers to parsers that take a text output from a model try to parse it into a more structured representation. More and more models are supporting function (or tool) calling, which handles this automatically. It is recommended to use function/tool calling rather than output parsing. See documentation for that [here](/v0.2/docs/concepts/#function-tool-calling).
Output parsers are responsible for taking the output of a model and transforming it into a more suitable format for downstream tasks. They are useful when you are using LLMs to generate structured data, or to normalize output from chat models and LLMs.
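For instance, a chain can end with a `JsonOutputParser` so the raw model text is parsed into a Python dict (a minimal sketch; the prompt wording and model choice are illustrative):

```python
from langchain_core.output_parsers import JsonOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template(
    'Answer the question as a JSON object with a single "answer" key.\n{question}'
)
chain = prompt | ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0) | JsonOutputParser()

# Returns a dict such as {"answer": "..."} rather than a raw message.
chain.invoke({"question": "What is the capital of France?"})
```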
LangChain has lots of different types of output parsers. This is a list of output parsers LangChain supports. The table below has various pieces of information:
* **Name**: The name of the output parser
* **Supports Streaming**: Whether the output parser supports streaming.
* **Has Format Instructions**: Whether the output parser has format instructions. This is generally available except when (a) the desired schema is not specified in the prompt but rather in other parameters (like OpenAI function calling), or (b) when the OutputParser wraps another OutputParser.
* **Calls LLM**: Whether this output parser itself calls an LLM. This is usually only done by output parsers that attempt to correct misformatted output.
* **Input Type**: Expected input type. Most output parsers work on both strings and messages, but some (like OpenAI Functions) need a message with specific kwargs.
* **Output Type**: The output type of the object returned by the parser.
* **Description**: Our commentary on this output parser and when to use it.

| Name | Supports Streaming | Has Format Instructions | Calls LLM | Input Type | Output Type | Description |
| --- | --- | --- | --- | --- | --- | --- |
| [JSON](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.json.JsonOutputParser.html#langchain_core.output_parsers.json.JsonOutputParser) | ✅ | ✅ | | `str` \| `Message` | JSON object | Returns a JSON object as specified. You can specify a Pydantic model and it will return JSON for that model. Probably the most reliable output parser for getting structured data that does NOT use function calling. |
| [XML](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.xml.XMLOutputParser.html#langchain_core.output_parsers.xml.XMLOutputParser) | ✅ | ✅ | | `str` \| `Message` | `dict` | Returns a dictionary of tags. Use when XML output is needed. Use with models that are good at writing XML (like Anthropic's). |
| [CSV](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.list.CommaSeparatedListOutputParser.html#langchain_core.output_parsers.list.CommaSeparatedListOutputParser) | ✅ | ✅ | | `str` \| `Message` | `List[str]` | Returns a list of comma separated values. |
| [OutputFixing](https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.fix.OutputFixingParser.html#langchain.output_parsers.fix.OutputFixingParser) | | | ✅ | `str` \| `Message` | | Wraps another output parser. If that output parser errors, then this will pass the error message and the bad output to an LLM and ask it to fix the output. |
| [RetryWithError](https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.retry.RetryWithErrorOutputParser.html#langchain.output_parsers.retry.RetryWithErrorOutputParser) | | | ✅ | `str` \| `Message` | | Wraps another output parser. If that output parser errors, then this will pass the original inputs, the bad output, and the error message to an LLM and ask it to fix it. Compared to OutputFixingParser, this one also sends the original instructions. |
| [Pydantic](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.pydantic.PydanticOutputParser.html#langchain_core.output_parsers.pydantic.PydanticOutputParser) | | ✅ | | `str` \| `Message` | `pydantic.BaseModel` | Takes a user defined Pydantic model and returns data in that format. |
| [YAML](https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.yaml.YamlOutputParser.html#langchain.output_parsers.yaml.YamlOutputParser) | | ✅ | | `str` \| `Message` | `pydantic.BaseModel` | Takes a user defined Pydantic model and returns data in that format. Uses YAML to encode it. |
| [PandasDataFrame](https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.pandas_dataframe.PandasDataFrameOutputParser.html#langchain.output_parsers.pandas_dataframe.PandasDataFrameOutputParser) | | ✅ | | `str` \| `Message` | `dict` | Useful for doing operations with pandas DataFrames. |
| [Enum](https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.enum.EnumOutputParser.html#langchain.output_parsers.enum.EnumOutputParser) | | ✅ | | `str` \| `Message` | `Enum` | Parses response into one of the provided enum values. |
| [Datetime](https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.datetime.DatetimeOutputParser.html#langchain.output_parsers.datetime.DatetimeOutputParser) | | ✅ | | `str` \| `Message` | `datetime.datetime` | Parses response into a datetime string. |
| [Structured](https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.structured.StructuredOutputParser.html#langchain.output_parsers.structured.StructuredOutputParser) | | ✅ | | `str` \| `Message` | `Dict[str, str]` | An output parser that returns structured information. It is less powerful than other output parsers since it only allows for fields to be strings. This can be useful when you are working with smaller LLMs. |
For specifics on how to use output parsers, see the [relevant how-to guides here](/v0.2/docs/how_to/#output-parsers).
### Chat history[](#chat-history "Direct link to Chat history")
Most LLM applications have a conversational interface. An essential component of a conversation is being able to refer to information introduced earlier in the conversation. At bare minimum, a conversational system should be able to access some window of past messages directly.
The concept of `ChatHistory` refers to a class in LangChain which can be used to wrap an arbitrary chain. This `ChatHistory` will keep track of inputs and outputs of the underlying chain, and append them as messages to a message database. Future interactions will then load those messages and pass them into the chain as part of the input.
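One common way to do this is with `RunnableWithMessageHistory`, which wraps a chain and loads and saves messages per session. The sketch below keeps histories in a plain in-memory dict; the prompt, keys, and session id are all illustrative:

```python
from langchain_community.chat_message_histories import ChatMessageHistory
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    MessagesPlaceholder("history"),
    ("human", "{question}"),
])
chain = prompt | ChatOpenAI(model="gpt-3.5-turbo-0125")

store = {}  # maps session_id -> message history

def get_session_history(session_id: str):
    # Lazily create a history object for each new session.
    if session_id not in store:
        store[session_id] = ChatMessageHistory()
    return store[session_id]

chain_with_history = RunnableWithMessageHistory(
    chain,
    get_session_history,
    input_messages_key="question",
    history_messages_key="history",
)
chain_with_history.invoke(
    {"question": "Hi, I'm Bob."},
    config={"configurable": {"session_id": "demo-session"}},
)
```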
### Documents[](#documents "Direct link to Documents")
A Document object in LangChain contains information about some data. It has two attributes (see the example after this list):
* `page_content: str`: The content of this document. Currently is only a string.
* `metadata: dict`: Arbitrary metadata associated with this document. Can track the document id, file name, etc.
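For example (the content and metadata below are made up), a `Document` can be constructed directly:

```python
from langchain_core.documents import Document

doc = Document(
    page_content="LangChain is a framework for building LLM applications.",
    metadata={"source": "example.txt"},  # illustrative metadata
)
```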
### Document loaders[](#document-loaders "Direct link to Document loaders")
These classes load Document objects. LangChain has hundreds of integrations with various data sources to load data from: Slack, Notion, Google Drive, etc.
Each DocumentLoader has its own specific parameters, but they can all be invoked in the same way with the `.load` method. An example use case is as follows:
    from langchain_community.document_loaders.csv_loader import CSVLoader

    loader = CSVLoader(
        ...  # <-- Integration specific parameters here
    )
    data = loader.load()
**API Reference:**[CSVLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.csv_loader.CSVLoader.html)
For specifics on how to use document loaders, see the [relevant how-to guides here](/v0.2/docs/how_to/#document-loaders).
### Text splitters[](#text-splitters "Direct link to Text splitters")
Once you've loaded documents, you'll often want to transform them to better suit your application. The simplest example is you may want to split a long document into smaller chunks that can fit into your model's context window. LangChain has a number of built-in document transformers that make it easy to split, combine, filter, and otherwise manipulate documents.
When you want to deal with long pieces of text, it is necessary to split up that text into chunks. As simple as this sounds, there is a lot of potential complexity here. Ideally, you want to keep the semantically related pieces of text together. What "semantically related" means could depend on the type of text. LangChain provides several ways to do this.
At a high level, text splitters work as follows:
1. Split the text up into small, semantically meaningful chunks (often sentences).
2. Start combining these small chunks into a larger chunk until you reach a certain size (as measured by some function).
3. Once you reach that size, make that chunk its own piece of text and then start creating a new chunk of text with some overlap (to keep context between chunks).
That means there are two different axes along which you can customize your text splitter (see the example after this list):
1. How the text is split
2. How the chunk size is measured
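As a minimal sketch (the chunk size, overlap, and sample text are arbitrary), a character-based splitter can be configured along both axes:

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

some_long_text = "LangChain provides many different kinds of text splitters. " * 20

text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=100,    # how chunk size is measured (characters here)
    chunk_overlap=20,  # overlap between chunks to preserve context
)
chunks = text_splitter.split_text(some_long_text)
```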
For specifics on how to use text splitters, see the [relevant how-to guides here](/v0.2/docs/how_to/#text-splitters).
### Embedding models[](#embedding-models "Direct link to Embedding models")
Embedding models create a vector representation of a piece of text. You can think of a vector as an array of numbers that captures the semantic meaning of the text. By representing the text in this way, you can perform mathematical operations that allow you to do things like search for other pieces of text that are most similar in meaning. These natural language search capabilities underpin many types of [context retrieval](/v0.2/docs/concepts/#retrieval), where we provide an LLM with the relevant data it needs to effectively respond to a query.
![](/v0.2/assets/images/embeddings-9c2616450a3b4f497a2d95a696b5f1a7.png)
The `Embeddings` class is a class designed for interfacing with text embedding models. There are many different embedding model providers (OpenAI, Cohere, Hugging Face, etc) and local models, and this class is designed to provide a standard interface for all of them.
The base Embeddings class in LangChain provides two methods: one for embedding documents and one for embedding a query. The former takes as input multiple texts, while the latter takes a single text. The reason for having these as two separate methods is that some embedding providers have different embedding methods for documents (to be searched over) vs queries (the search query itself).
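A minimal sketch of the two methods (the texts and model choice are illustrative; each call returns one list of floats per text):

```python
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(model="text-embedding-3-small")

# Embed several documents to be indexed and searched over.
doc_vectors = embeddings.embed_documents(
    ["LangChain is a framework for LLM apps", "Vector stores search over embeddings"]
)

# Embed a single search query.
query_vector = embeddings.embed_query("What is LangChain?")
```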
For specifics on how to use embedding models, see the [relevant how-to guides here](/v0.2/docs/how_to/#embedding-models).
### Vector stores[](#vector-stores "Direct link to Vector stores")
One of the most common ways to store and search over unstructured data is to embed it and store the resulting embedding vectors, and then at query time to embed the unstructured query and retrieve the embedding vectors that are 'most similar' to the embedded query. A vector store takes care of storing embedded data and performing vector search for you.
Most vector stores can also store metadata about embedded vectors and support filtering on that metadata before similarity search, allowing you more control over returned documents.
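As a minimal sketch using Chroma (the texts and query are illustrative; most vector stores expose a similar `from_texts` / `similarity_search` pattern):

```python
from langchain_chroma import Chroma
from langchain_openai import OpenAIEmbeddings

vectorstore = Chroma.from_texts(
    ["LangChain helps you build LLM applications", "Vector stores index embeddings"],
    OpenAIEmbeddings(model="text-embedding-3-small"),
)

# Returns the k most similar documents to the embedded query.
docs = vectorstore.similarity_search("What does LangChain do?", k=1)
```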
Vector stores can be converted to the retriever interface by doing:
    vectorstore = MyVectorStore()
    retriever = vectorstore.as_retriever()
For specifics on how to use vector stores, see the [relevant how-to guides here](/v0.2/docs/how_to/#vector-stores).
### Retrievers[](#retrievers "Direct link to Retrievers")
A retriever is an interface that returns documents given an unstructured query. It is more general than a vector store. A retriever does not need to be able to store documents, only to return (or retrieve) them. Retrievers can be created from vector stores, but are also broad enough to include [Wikipedia search](/v0.2/docs/integrations/retrievers/wikipedia/) and [Amazon Kendra](/v0.2/docs/integrations/retrievers/amazon_kendra_retriever/).
Retrievers accept a string query as input and return a list of Documents as output.
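For example (using the Wikipedia retriever, which requires the `wikipedia` package; the query is illustrative):

```python
from langchain_community.retrievers import WikipediaRetriever

retriever = WikipediaRetriever()
docs = retriever.invoke("large language model")  # a list of Document objects
print(docs[0].metadata, docs[0].page_content[:100])
```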
For specifics on how to use retrievers, see the [relevant how-to guides here](/v0.2/docs/how_to/#retrievers).
### Tools[](#tools "Direct link to Tools")
Tools are interfaces that an agent, a chain, or a chat model / LLM can use to interact with the world.
A tool consists of the following components:
1. The name of the tool
2. A description of what the tool does
3. JSON schema of what the inputs to the tool are
4. The function to call
5. Whether the result of a tool should be returned directly to the user (only relevant for agents)
The name, description and JSON schema are provided as context to the LLM, allowing the LLM to determine how to use the tool appropriately.
Given a list of available tools and a prompt, an LLM can request that one or more tools be invoked with appropriate arguments.
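A minimal sketch of defining a tool with the `@tool` decorator (the tool itself is a made-up example); the decorator infers the name, description, and JSON schema from the function signature and docstring:

```python
from langchain_core.tools import tool

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

print(multiply.name)         # "multiply"
print(multiply.description)  # "Multiply two integers."
print(multiply.args)         # JSON-schema-like description of the inputs
```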
Generally, when designing tools to be used by a chat model or LLM, it is important to keep in mind the following:
* Chat models that have been fine-tuned for tool calling will be better at tool calling than non-fine-tuned models.
* Non fine-tuned models may not be able to use tools at all, especially if the tools are complex or require multiple tool calls.
* Models will perform better if the tools have well-chosen names, descriptions, and JSON schemas.
* Simpler tools are generally easier for models to use than more complex tools.
For specifics on how to use tools, see the [relevant how-to guides here](/v0.2/docs/how_to/#tools).
### Toolkits[](#toolkits "Direct link to Toolkits")
Toolkits are collections of tools that are designed to be used together for specific tasks. They have convenient loading methods.
All Toolkits expose a `get_tools` method which returns a list of tools. You can therefore do:
    # Initialize a toolkit
    toolkit = ExampleToolkit(...)

    # Get list of tools
    tools = toolkit.get_tools()
### Agents[](#agents "Direct link to Agents")
By themselves, language models can't take actions - they just output text. A big use case for LangChain is creating **agents**. Agents are systems that use an LLM as a reasoning engine to determine which actions to take and what the inputs to those actions should be. The results of those actions can then be fed back into the agent, which determines whether more actions are needed or whether it is okay to finish.
[LangGraph](https://github.com/langchain-ai/langgraph) is an extension of LangChain specifically aimed at creating highly controllable and customizable agents. Please check out that documentation for a more in depth overview of agent concepts.
There is a legacy agent concept in LangChain that we are moving towards deprecating: `AgentExecutor`. AgentExecutor was essentially a runtime for agents. It was a great place to get started, however, it was not flexible enough as you started to have more customized agents. In order to solve that we built LangGraph to be this flexible, highly-controllable runtime.
If you are still using AgentExecutor, do not fear: we still have a guide on [how to use AgentExecutor](/v0.2/docs/how_to/agent_executor/). It is recommended, however, that you start to transition to LangGraph. In order to assist in this we have put together a [transition guide on how to do so](/v0.2/docs/how_to/migrate_agent/).
### Callbacks[](#callbacks "Direct link to Callbacks")
LangChain provides a callbacks system that allows you to hook into the various stages of your LLM application. This is useful for logging, monitoring, streaming, and other tasks.
You can subscribe to these events by using the `callbacks` argument available throughout the API. This argument is a list of handler objects, which are expected to implement one or more of the methods described below in more detail.
#### Callback Events[](#callback-events "Direct link to Callback Events")
| Event | Event Trigger | Associated Method |
| --- | --- | --- |
| Chat model start | When a chat model starts | `on_chat_model_start` |
| LLM start | When an LLM starts | `on_llm_start` |
| LLM new token | When an LLM OR chat model emits a new token | `on_llm_new_token` |
| LLM ends | When an LLM OR chat model ends | `on_llm_end` |
| LLM errors | When an LLM OR chat model errors | `on_llm_error` |
| Chain start | When a chain starts running | `on_chain_start` |
| Chain end | When a chain ends | `on_chain_end` |
| Chain error | When a chain errors | `on_chain_error` |
| Tool start | When a tool starts running | `on_tool_start` |
| Tool end | When a tool ends | `on_tool_end` |
| Tool error | When a tool errors | `on_tool_error` |
| Agent action | When an agent takes an action | `on_agent_action` |
| Agent finish | When an agent ends | `on_agent_finish` |
| Retriever start | When a retriever starts | `on_retriever_start` |
| Retriever end | When a retriever ends | `on_retriever_end` |
| Retriever error | When a retriever errors | `on_retriever_error` |
| Text | When arbitrary text is run | `on_text` |
| Retry | When a retry event is run | `on_retry` |
#### Callback handlers[](#callback-handlers "Direct link to Callback handlers")
Callback handlers can either be `sync` or `async`:
* Sync callback handlers implement the [BaseCallbackHandler](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.base.BaseCallbackHandler.html) interface.
* Async callback handlers implement the [AsyncCallbackHandler](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.base.AsyncCallbackHandler.html) interface.
During run-time LangChain configures an appropriate callback manager (e.g., [CallbackManager](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.manager.CallbackManager.html) or [AsyncCallbackManager](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.manager.AsyncCallbackManager.html)), which will be responsible for calling the appropriate method on each "registered" callback handler when the event is triggered.
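A minimal sketch of a sync handler (the handler name is made up, and the commented-out invocation assumes some existing `chain`):

```python
from langchain_core.callbacks import BaseCallbackHandler

class PrintTokenHandler(BaseCallbackHandler):
    """Prints each token as it is generated by an LLM or chat model."""

    def on_llm_new_token(self, token: str, **kwargs) -> None:
        print(token, end="", flush=True)

# Typically passed as a request time callback (see below), e.g.:
# chain.invoke({"topic": "parrots"}, config={"callbacks": [PrintTokenHandler()]})
```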
#### Passing callbacks[](#passing-callbacks "Direct link to Passing callbacks")
The `callbacks` property is available on most objects throughout the API (Models, Tools, Agents, etc.) in two different places:
* **Request time callbacks**: Passed at the time of the request in addition to the input data. Available on all standard `Runnable` objects. These callbacks are INHERITED by all children of the object they are defined on. For example, `chain.invoke({"number": 25}, {"callbacks": [handler]})`.
* **Constructor callbacks**: `chain = TheNameOfSomeChain(callbacks=[handler])`. These callbacks are passed as arguments to the constructor of the object. The callbacks are scoped only to the object they are defined on, and are **not** inherited by any children of the object.
danger
Constructor callbacks are scoped only to the object they are defined on. They are **not** inherited by children of the object.
If you're creating a custom chain or runnable, you need to remember to propagate request time callbacks to any child objects.
Async in Python<=3.10
Any `RunnableLambda`, a `RunnableGenerator`, or `Tool` that invokes other runnables and is running async in python<=3.10, will have to propagate callbacks to child objects manually. This is because LangChain cannot automatically propagate callbacks to child objects in this case.
This is a common reason why you may fail to see events being emitted from custom runnables or tools.
For specifics on how to use callbacks, see the [relevant how-to guides here](/v0.2/docs/how_to/#callbacks).
Techniques[](#techniques "Direct link to Techniques")
------------------------------------------------------
### Streaming[](#streaming "Direct link to Streaming")
Individual LLM calls often run for much longer than traditional resource requests. This compounds when you build more complex chains or agents that require multiple reasoning steps.
Fortunately, LLMs generate output iteratively, which means it's possible to show sensible intermediate results before the final response is ready. Consuming output as soon as it becomes available has therefore become a vital part of the UX around building apps with LLMs to help alleviate latency issues, and LangChain aims to have first-class support for streaming.
Below, we'll discuss some concepts and considerations around streaming in LangChain.
#### `.stream()` and `.astream()`[](#stream-and-astream "Direct link to stream-and-astream")
Most modules in LangChain include the `.stream()` method (and the equivalent `.astream()` method for [async](https://docs.python.org/3/library/asyncio.html) environments) as an ergonomic streaming interface. `.stream()` returns an iterator, which you can consume with a simple `for` loop. Here's an example with a chat model:
    from langchain_anthropic import ChatAnthropic

    model = ChatAnthropic(model="claude-3-sonnet-20240229")

    for chunk in model.stream("what color is the sky?"):
        print(chunk.content, end="|", flush=True)
**API Reference:**[ChatAnthropic](https://api.python.langchain.com/en/latest/chat_models/langchain_anthropic.chat_models.ChatAnthropic.html)
For models (or other components) that don't support streaming natively, this iterator would just yield a single chunk, but you could still use the same general pattern when calling them. Using `.stream()` will also automatically call the model in streaming mode without the need to provide additional config.
The type of each outputted chunk depends on the type of component - for example, chat models yield [`AIMessageChunks`](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessageChunk.html). Because this method is part of [LangChain Expression Language](/v0.2/docs/concepts/#langchain-expression-language-lcel), you can handle formatting differences from different outputs using an [output parser](/v0.2/docs/concepts/#output-parsers) to transform each yielded chunk.
You can check out [this guide](/v0.2/docs/how_to/streaming/#using-stream) for more detail on how to use `.stream()`.
#### `.astream_events()`[](#astream_events "Direct link to astream_events")
While the `.stream()` method is intuitive, it can only return the final generated value of your chain. This is fine for single LLM calls, but as you build more complex chains of several LLM calls together, you may want to use the intermediate values of the chain alongside the final output - for example, returning sources alongside the final generation when building a chat over documents app.
There are ways to do this [using callbacks](/v0.2/docs/concepts/#callbacks-1), or by constructing your chain in such a way that it passes intermediate values to the end with something like chained [`.assign()`](/v0.2/docs/how_to/passthrough/) calls, but LangChain also includes an `.astream_events()` method that combines the flexibility of callbacks with the ergonomics of `.stream()`. When called, it returns an iterator which yields [various types of events](/v0.2/docs/how_to/streaming/#event-reference) that you can filter and process according to the needs of your project.
Here's one small example that prints just events containing streamed chat model output:
    from langchain_core.output_parsers import StrOutputParser
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_anthropic import ChatAnthropic

    model = ChatAnthropic(model="claude-3-sonnet-20240229")
    prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")
    parser = StrOutputParser()
    chain = prompt | model | parser

    async for event in chain.astream_events({"topic": "parrot"}, version="v2"):
        kind = event["event"]
        if kind == "on_chat_model_stream":
            print(event, end="|", flush=True)
**API Reference:**[StrOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.string.StrOutputParser.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) | [ChatAnthropic](https://api.python.langchain.com/en/latest/chat_models/langchain_anthropic.chat_models.ChatAnthropic.html)
You can roughly think of it as an iterator over callback events (though the format differs) - and you can use it on almost all LangChain components!
See [this guide](/v0.2/docs/how_to/streaming/#using-stream-events) for more detailed information on how to use `.astream_events()`, including a table listing available events.
#### Callbacks[](#callbacks-1 "Direct link to Callbacks")
The lowest level way to stream outputs from LLMs in LangChain is via the [callbacks](/v0.2/docs/concepts/#callbacks) system. You can pass a callback handler that handles the [`on_llm_new_token`](https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streaming_aiter.AsyncIteratorCallbackHandler.html#langchain.callbacks.streaming_aiter.AsyncIteratorCallbackHandler.on_llm_new_token) event into LangChain components. When that component is invoked, any [LLM](/v0.2/docs/concepts/#llms) or [chat model](/v0.2/docs/concepts/#chat-models) contained in the component calls the callback with the generated token. Within the callback, you could pipe the tokens into some other destination, e.g. a HTTP response. You can also handle the [`on_llm_end`](https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streaming_aiter.AsyncIteratorCallbackHandler.html#langchain.callbacks.streaming_aiter.AsyncIteratorCallbackHandler.on_llm_end) event to perform any necessary cleanup.
You can see [this how-to section](/v0.2/docs/how_to/#callbacks) for more specifics on using callbacks.
Callbacks were the first technique for streaming introduced in LangChain. While powerful and generalizable, they can be unwieldy for developers. For example:
* You need to explicitly initialize and manage some aggregator or other stream to collect results.
* The execution order isn't explicitly guaranteed, and you could theoretically have a callback run after the `.invoke()` method finishes.
* Providers would often make you pass an additional parameter to stream outputs instead of returning them all at once.
* You would often ignore the result of the actual model call in favor of callback results.
#### Tokens[](#tokens "Direct link to Tokens")
Most model providers measure input and output in units called **tokens**. Tokens are the basic units that language models read and generate when processing or producing text. The exact definition of a token can vary depending on the specific way the model was trained - for instance, in English, a token could be a single word like "apple", or a part of a word like "app".
When you send a model a prompt, the words and characters in the prompt are encoded into tokens using a **tokenizer**. The model then streams back generated output tokens, which the tokenizer decodes into human-readable text. The below example shows how OpenAI models tokenize `LangChain is cool!`:
![](/v0.2/assets/images/tokenization-10f566ab6774724e63dd99646f69655c.png)
You can see that it gets split into 5 different tokens, and that the boundaries between tokens are not exactly the same as word boundaries.
The reason language models use tokens rather than something more immediately intuitive like "characters" has to do with how they process and understand text. At a high-level, language models iteratively predict their next generated output based on the initial input and their previous generations. Training the model on tokens allows language models to handle linguistic units (like words or subwords) that carry meaning, rather than individual characters, which makes it easier for the model to learn and understand the structure of the language, including grammar and context. Furthermore, using tokens can also improve efficiency, since the model processes fewer units of text compared to character-level processing.
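If you want to count tokens for a given model, many LangChain language models expose a `get_num_tokens` helper (a rough sketch; the exact count depends on the model's tokenizer):

```python
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-3.5-turbo-0125")
print(model.get_num_tokens("LangChain is cool!"))
```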
### Structured output[](#structured-output "Direct link to Structured output")
LLMs are capable of generating arbitrary text. This enables the model to respond appropriately to a wide range of inputs, but for some use-cases, it can be useful to constrain the LLM's output to a specific format or structure. This is referred to as **structured output**.
For example, if the output is to be stored in a relational database, it is much easier if the model generates output that adheres to a defined schema or format. [Extracting specific information](/v0.2/docs/tutorials/extraction/) from unstructured text is another case where this is particularly useful. Most commonly, the output format will be JSON, though other formats such as [YAML](/v0.2/docs/how_to/output_parser_yaml/) can be useful too. Below, we'll discuss a few ways to get structured output from models in LangChain.
#### `.with_structured_output()`[](#with_structured_output "Direct link to with_structured_output")
For convenience, some LangChain chat models support a `.with_structured_output()` method. This method only requires a schema as input, and returns a dict or Pydantic object. Generally, this method is only present on models that support one of the more advanced methods described below, and will use one of them under the hood. It takes care of importing a suitable output parser and formatting the schema in the right format for the model.
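A minimal sketch (the `Joke` schema is a made-up example):

```python
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import ChatOpenAI

class Joke(BaseModel):
    """A joke to tell the user."""

    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline of the joke")

structured_llm = ChatOpenAI(model="gpt-4o", temperature=0).with_structured_output(Joke)
structured_llm.invoke("Tell me a joke about cats")  # returns a Joke instance
```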
For more information, check out this [how-to guide](/v0.2/docs/how_to/structured_output/#the-with_structured_output-method).
#### Raw prompting[](#raw-prompting "Direct link to Raw prompting")
The most intuitive way to get a model to structure output is to ask nicely. In addition to your query, you can give instructions describing what kind of output you'd like, then parse the output using an [output parser](/v0.2/docs/concepts/#output-parsers) to convert the raw model message or string output into something more easily manipulated.
The biggest benefit to raw prompting is its flexibility:
* Raw prompting does not require any special model features, only sufficient reasoning capability to understand the passed schema.
* You can prompt for any format you'd like, not just JSON. This can be useful if the model you are using is more heavily trained on a certain type of data, such as XML or YAML.
However, there are some drawbacks too:
* LLMs are non-deterministic, and prompting a LLM to consistently output data in the exactly correct format for smooth parsing can be surprisingly difficult and model-specific.
* Individual models have quirks depending on the data they were trained on, and optimizing prompts can be quite difficult. Some may be better at interpreting [JSON schema](https://json-schema.org/), others may be best with TypeScript definitions, and still others may prefer XML.
While we'll next go over some ways that you can take advantage of features offered by model providers to increase reliability, prompting techniques remain important for tuning your results no matter what method you choose.
#### JSON mode[](#json-mode "Direct link to JSON mode")
Some models, such as [Mistral](/v0.2/docs/integrations/chat/mistralai/), [OpenAI](/v0.2/docs/integrations/chat/openai/), [Together AI](/v0.2/docs/integrations/chat/together/) and [Ollama](/v0.2/docs/integrations/chat/ollama/), support a feature called **JSON mode**, usually enabled via config.
When enabled, JSON mode will constrain the model's output to always be some sort of valid JSON. Often models require some custom prompting, but it's usually much less burdensome, along the lines of `"you must always return JSON"`, and the [output is easier to parse](/v0.2/docs/how_to/output_parser_json/).
It's also generally simpler and more commonly available than tool calling.
Here's an example:
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_openai import ChatOpenAI
    from langchain.output_parsers.json import SimpleJsonOutputParser

    model = ChatOpenAI(
        model="gpt-4o",
        model_kwargs={"response_format": {"type": "json_object"}},
    )
    prompt = ChatPromptTemplate.from_template(
        "Answer the user's question to the best of your ability."
        'You must always output a JSON object with an "answer" key and a "followup_question" key.'
        "{question}"
    )
    chain = prompt | model | SimpleJsonOutputParser()
    chain.invoke({"question": "What is the powerhouse of the cell?"})
**API Reference:**[ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) | [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html) | [SimpleJsonOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.json.SimpleJsonOutputParser.html)
{'answer': 'The powerhouse of the cell is the mitochondrion. It is responsible for producing energy in the form of ATP through cellular respiration.', 'followup_question': 'Would you like to know more about how mitochondria produce energy?'}
For a full list of model providers that support JSON mode, see [this table](/v0.2/docs/integrations/chat/#advanced-features).
#### Function/tool calling[](#functiontool-calling "Direct link to Function/tool calling")
info
We use the term tool calling interchangeably with function calling. Although function calling is sometimes meant to refer to invocations of a single function, we treat all models as though they can return multiple tool or function calls in each message.
Tool calling allows a model to respond to a given prompt by generating output that matches a user-defined schema. While the name implies that the model is performing some action, this is actually not the case! The model is coming up with the arguments to a tool, and actually running the tool (or not) is up to the user - for example, if you want to [extract output matching some schema](/v0.2/docs/tutorials/extraction/) from unstructured text, you could give the model an "extraction" tool that takes parameters matching the desired schema, then treat the generated output as your final result.
For models that support it, tool calling can be very convenient. It removes the guesswork around how best to prompt schemas in favor of a built-in model feature. It can also more naturally support agentic flows, since you can just pass multiple tool schemas instead of fiddling with enums or unions.
Many LLM providers, including [Anthropic](https://www.anthropic.com/), [Cohere](https://cohere.com/), [Google](https://cloud.google.com/vertex-ai), [Mistral](https://mistral.ai/), [OpenAI](https://openai.com/), and others, support variants of a tool calling feature. These features typically allow requests to the LLM to include available tools and their schemas, and for responses to include calls to these tools. For instance, given a search engine tool, an LLM might handle a query by first issuing a call to the search engine. The system calling the LLM can receive the tool call, execute it, and return the output to the LLM to inform its response. LangChain includes a suite of [built-in tools](/v0.2/docs/integrations/tools/) and supports several methods for defining your own [custom tools](/v0.2/docs/how_to/custom_tools/).
LangChain provides a standardized interface for tool calling that is consistent across different models.
The standard interface consists of (see the sketch after this list):
* `ChatModel.bind_tools()`: a method for specifying which tools are available for a model to call. This method accepts [LangChain tools](/v0.2/docs/concepts/#tools) here.
* `AIMessage.tool_calls`: an attribute on the `AIMessage` returned from the model for accessing the tool calls requested by the model.
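A minimal sketch of this interface (the `add` tool and question are illustrative):

```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

llm_with_tools = ChatOpenAI(model="gpt-3.5-turbo-0125").bind_tools([add])
ai_msg = llm_with_tools.invoke("What is 11 + 49?")

# A list of dicts like [{"name": "add", "args": {"a": 11, "b": 49}, "id": "..."}]
print(ai_msg.tool_calls)
```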
The following how-to guides are good practical resources for using function/tool calling:
* [How to return structured data from an LLM](/v0.2/docs/how_to/structured_output/)
* [How to use a model to call tools](/v0.2/docs/how_to/tool_calling/)
For a full list of model providers that support tool calling, [see this table](/v0.2/docs/integrations/chat/#advanced-features).
### Retrieval[](#retrieval "Direct link to Retrieval")
LLMs are trained on a large but fixed dataset, limiting their ability to reason over private or recent information. Fine-tuning an LLM with specific facts is one way to mitigate this, but is often [poorly suited for factual recall](https://www.anyscale.com/blog/fine-tuning-is-for-form-not-facts) and [can be costly](https://www.glean.com/blog/how-to-build-an-ai-assistant-for-the-enterprise). Retrieval is the process of providing relevant information to an LLM to improve its response for a given input. Retrieval augmented generation (RAG) is the process of grounding the LLM generation (output) using the retrieved information.
tip
* See our RAG from Scratch [code](https://github.com/langchain-ai/rag-from-scratch) and [video series](https://youtube.com/playlist?list=PLfaIDFEXuae2LXbO1_PKyVJiQ23ZztA0x&feature=shared).
* For a high-level guide on retrieval, see this [tutorial on RAG](/v0.2/docs/tutorials/rag/).
RAG is only as good as the retrieved documents’ relevance and quality. Fortunately, an emerging set of techniques can be employed to design and improve RAG systems. We've focused on taxonomizing and summarizing many of these techniques (see below figure) and will share some high-level strategic guidance in the following sections. You can and should experiment with using different pieces together. You might also find [this LangSmith guide](https://docs.smith.langchain.com/how_to_guides/evaluation/evaluate_llm_application) useful for showing how to evaluate different iterations of your app.
![](/v0.2/assets/images/rag_landscape-627f1d0fd46b92bc2db0af8f99ec3724.png)
#### Query Translation[](#query-translation "Direct link to Query Translation")
First, consider the user input(s) to your RAG system. Ideally, a RAG system can handle a wide range of inputs, from poorly worded questions to complex multi-part queries. **Using an LLM to review and optionally modify the input is the central idea behind query translation.** This serves as a general buffer, optimizing raw user inputs for your retrieval system. For example, this can be as simple as extracting keywords or as complex as generating multiple sub-questions for a complex query.
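For example, the multi-query approach can be sketched with `MultiQueryRetriever` (the tiny corpus and question are illustrative):

```python
from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain_chroma import Chroma
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

vectorstore = Chroma.from_texts(
    ["Task decomposition can be done by prompting an LLM to think step by step."],
    OpenAIEmbeddings(model="text-embedding-3-small"),
)

# The LLM rewrites the user question from several perspectives and the
# retriever returns the unique documents found across all rewrites.
multi_query_retriever = MultiQueryRetriever.from_llm(
    retriever=vectorstore.as_retriever(),
    llm=ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0),
)
docs = multi_query_retriever.invoke("How can complex tasks be broken down?")
```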
| Name | When to use | Description |
| --- | --- | --- |
| [Multi-query](/v0.2/docs/how_to/MultiQueryRetriever/) | When you need to cover multiple perspectives of a question. | Rewrite the user question from multiple perspectives, retrieve documents for each rewritten question, return the unique documents for all queries. |
| [Decomposition](https://github.com/langchain-ai/rag-from-scratch/blob/main/rag_from_scratch_5_to_9.ipynb) | When a question can be broken down into smaller subproblems. | Decompose a question into a set of subproblems / questions, which can either be solved sequentially (use the answer from first + retrieval to answer the second) or in parallel (consolidate each answer into final answer). |
| [Step-back](https://github.com/langchain-ai/rag-from-scratch/blob/main/rag_from_scratch_5_to_9.ipynb) | When a higher-level conceptual understanding is required. | First prompt the LLM to ask a generic step-back question about higher-level concepts or principles, and retrieve relevant facts about them. Use this grounding to help answer the user question. |
| [HyDE](https://github.com/langchain-ai/rag-from-scratch/blob/main/rag_from_scratch_5_to_9.ipynb) | If you have challenges retrieving relevant documents using the raw user inputs. | Use an LLM to convert questions into hypothetical documents that answer the question. Use the embedded hypothetical documents to retrieve real documents with the premise that doc-doc similarity search can produce more relevant matches. |
tip
See our RAG from Scratch videos for a few different specific approaches:
* [Multi-query](https://youtu.be/JChPi0CRnDY?feature=shared)
* [Decomposition](https://youtu.be/h0OPWlEOank?feature=shared)
* [Step-back](https://youtu.be/xn1jEjRyJ2U?feature=shared)
* [HyDE](https://youtu.be/SaDzIVkYqyY?feature=shared)
#### Routing[](#routing "Direct link to Routing")
Second, consider the data sources available to your RAG system. You may want to query across more than one database, or across structured and unstructured data sources. **Using an LLM to review the input and route it to the appropriate data source is a simple and effective approach for querying across sources.**
| Name | When to use | Description |
| --- | --- | --- |
| [Logical routing](/v0.2/docs/how_to/routing/) | When you can prompt an LLM with rules to decide where to route the input. | Logical routing can use an LLM to reason about the query and choose which datastore is most appropriate. |
| [Semantic routing](/v0.2/docs/how_to/routing/#routing-by-semantic-similarity) | When semantic similarity is an effective way to determine where to route the input. | Semantic routing embeds both the query and, typically, a set of prompts. It then chooses the appropriate prompt based upon similarity. |
tip
See our RAG from Scratch video on [routing](https://youtu.be/pfpIndq7Fi8?feature=shared).
#### Query Construction[](#query-construction "Direct link to Query Construction")
Third, consider whether any of your data sources require specific query formats. Many structured databases use SQL. Vector stores often have specific syntax for applying keyword filters to document metadata. **Using an LLM to convert a natural language query into a query syntax is a popular and powerful approach.** In particular, [text-to-SQL](/v0.2/docs/tutorials/sql_qa/), [text-to-Cypher](/v0.2/docs/tutorials/graph/), and [query analysis for metadata filters](/v0.2/docs/tutorials/query_analysis/#query-analysis) are useful ways to interact with structured, graph, and vector databases respectively.
| Name | When to Use | Description |
| --- | --- | --- |
| [Text to SQL](/v0.2/docs/tutorials/sql_qa/) | If users are asking questions that require information housed in a relational database, accessible via SQL. | This uses an LLM to transform user input into a SQL query. |
| [Text-to-Cypher](/v0.2/docs/tutorials/graph/) | If users are asking questions that require information housed in a graph database, accessible via Cypher. | This uses an LLM to transform user input into a Cypher query. |
| [Self Query](/v0.2/docs/how_to/self_query/) | If users are asking questions that are better answered by fetching documents based on metadata rather than similarity with the text. | This uses an LLM to transform user input into two things: (1) a string to look up semantically, (2) a metadata filter to go along with it. This is useful because oftentimes questions are about the METADATA of documents (not the content itself). |
tip
See our [blog post overview](https://blog.langchain.dev/query-construction/) and RAG from Scratch video on [query construction](https://youtu.be/kl6NwWYxvbM?feature=shared), the process of text-to-DSL where DSL is a domain specific language required to interact with a given database. This converts user questions into structured queries.
#### Indexing[](#indexing "Direct link to Indexing")
Fourth, consider the design of your document index. A simple and powerful idea is to **decouple the documents that you index for retrieval from the documents that you pass to the LLM for generation.** Indexing frequently uses embedding models with vector stores, which [compress the semantic information in documents to fixed-size vectors](/v0.2/docs/concepts/#embedding-models).
Many RAG approaches focus on splitting documents into chunks and retrieving some number based on similarity to an input question for the LLM. But chunk size and chunk number can be difficult to set and affect results if they do not provide full context for the LLM to answer a question. Furthermore, LLMs are increasingly capable of processing millions of tokens.
Two approaches can address this tension: (1) The [Multi Vector](/v0.2/docs/how_to/multi_vector/) retriever uses an LLM to translate documents into any form that is well-suited for indexing (often a summary), but returns full documents to the LLM for generation. (2) The [ParentDocument](/v0.2/docs/how_to/parent_document_retriever/) retriever embeds document chunks, but also returns full documents. The idea is to get the best of both worlds: use concise representations (summaries or chunks) for retrieval, but use the full documents for answer generation.
| Name | Index Type | Uses an LLM | When to Use | Description |
| --- | --- | --- | --- | --- |
| [Vector store](/v0.2/docs/how_to/vectorstore_retriever/) | Vector store | No | If you are just getting started and looking for something quick and easy. | This is the simplest method and the one that is easiest to get started with. It involves creating embeddings for each piece of text. |
| [ParentDocument](/v0.2/docs/how_to/parent_document_retriever/) | Vector store + Document Store | No | If your pages have lots of smaller pieces of distinct information that are best indexed by themselves, but best retrieved all together. | This involves indexing multiple chunks for each document. Then you find the chunks that are most similar in embedding space, but you retrieve the whole parent document and return that (rather than individual chunks). |
| [Multi Vector](/v0.2/docs/how_to/multi_vector/) | Vector store + Document Store | Sometimes during indexing | If you are able to extract information from documents that you think is more relevant to index than the text itself. | This involves creating multiple vectors for each document. Each vector could be created in a myriad of ways - examples include summaries of the text and hypothetical questions. |
| [Time-Weighted Vector store](/v0.2/docs/how_to/time_weighted_vectorstore/) | Vector store | No | If you have timestamps associated with your documents, and you want to retrieve the most recent ones. | This fetches documents based on a combination of semantic similarity (as in normal vector retrieval) and recency (looking at timestamps of indexed documents). |
tip
* See our RAG from Scratch video on [indexing fundamentals](https://youtu.be/bjb_EMsTDKI?feature=shared)
* See our RAG from Scratch video on [multi vector retriever](https://youtu.be/gTCU9I6QqCE?feature=shared)
Fifth, consider ways to improve the quality of your similarity search itself. Embedding models compress text into fixed-length (vector) representations that capture the semantic content of the document. This compression is useful for search / retrieval, but puts a heavy burden on that single vector representation to capture the semantic nuance / detail of the document. In some cases, irrelevant or redundant content can dilute the semantic usefulness of the embedding.
[ColBERT](https://docs.google.com/presentation/d/1IRhAdGjIevrrotdplHNcc4aXgIYyKamUKTWtB3m3aMU/edit?usp=sharing) is an interesting approach to address this with higher granularity embeddings: (1) produce a contextually influenced embedding for each token in the document and query, (2) score similarity between each query token and all document tokens, (3) take the max, (4) do this for all query tokens, and (5) take the sum of the max scores (in step 3) for all query tokens to get a query-document similarity score; this token-wise scoring can yield strong results.
![](/v0.2/assets/images/colbert-0bf5bd7485724d0005a2f5bdadbdaedb.png)
There are some additional tricks to improve the quality of your retrieval. Embeddings excel at capturing semantic information, but may struggle with keyword-based queries. Many [vector stores](/v0.2/docs/integrations/retrievers/pinecone_hybrid_search/) offer built-in [hybrid-search](https://docs.pinecone.io/guides/data/understanding-hybrid-search) to combine keyword and semantic similarity, which marries the benefits of both approaches. Furthermore, many vector stores have [maximal marginal relevance](https://python.langchain.com/v0.1/docs/modules/model_io/prompts/example_selectors/mmr/), which attempts to diversify the results of a search to avoid returning similar and redundant documents.
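For instance, many vector store retrievers can be switched to MMR search with a single argument (a sketch; this assumes an existing `vectorstore` that supports MMR, and the parameter values are illustrative):

```python
retriever = vectorstore.as_retriever(
    search_type="mmr",
    search_kwargs={"k": 5, "fetch_k": 20},  # fetch 20 candidates, return 5 diverse results
)
```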
| Name | When to use | Description |
| --- | --- | --- |
| [ColBERT](/v0.2/docs/integrations/providers/ragatouille/#using-colbert-as-a-reranker) | When higher granularity embeddings are needed. | ColBERT uses contextually influenced embeddings for each token in the document and query to get a granular query-document similarity score. |
| [Hybrid search](/v0.2/docs/integrations/retrievers/pinecone_hybrid_search/) | When combining keyword-based and semantic similarity. | Hybrid search combines keyword and semantic similarity, marrying the benefits of both approaches. |
| [Maximal Marginal Relevance (MMR)](/v0.2/docs/integrations/vectorstores/pinecone/#maximal-marginal-relevance-searches) | When needing to diversify search results. | MMR attempts to diversify the results of a search to avoid returning similar and redundant documents. |
tip
See our RAG from Scratch video on [ColBERT](https://youtu.be/cN6S0Ehm7_8?feature=shared%3E).
#### Post-processing[](#post-processing "Direct link to Post-processing")
Sixth, consider ways to filter or rank retrieved documents. This is very useful if you are [combining documents returned from multiple sources](/v0.2/docs/integrations/retrievers/cohere-reranker/#doing-reranking-with-coherererank), since it can down-rank less relevant documents and / or [compress similar documents](/v0.2/docs/how_to/contextual_compression/#more-built-in-compressors-filters).
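As a sketch of one such post-processing step (this assumes an existing `base_retriever`, and the query is illustrative), contextual compression wraps a retriever and uses an LLM to extract only the relevant parts of each retrieved document:

```python
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import LLMChainExtractor
from langchain_openai import ChatOpenAI

compressor = LLMChainExtractor.from_llm(ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0))
compression_retriever = ContextualCompressionRetriever(
    base_compressor=compressor,
    base_retriever=base_retriever,  # any existing retriever, e.g. from a vector store
)
docs = compression_retriever.invoke("What did the report say about revenue?")
```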
| Name | Index Type | Uses an LLM | When to Use | Description |
| --- | --- | --- | --- | --- |
| [Contextual Compression](/v0.2/docs/how_to/contextual_compression/) | Any | Sometimes | If you are finding that your retrieved documents contain too much irrelevant information and are distracting the LLM. | This puts a post-processing step on top of another retriever and extracts only the most relevant information from retrieved documents. This can be done with embeddings or an LLM. |
| [Ensemble](/v0.2/docs/how_to/ensemble_retriever/) | Any | No | If you have multiple retrieval methods and want to try combining them. | This fetches documents from multiple retrievers and then combines them. |
| [Re-ranking](/v0.2/docs/integrations/retrievers/cohere-reranker/) | Any | Yes | If you want to rank retrieved documents based upon relevance, especially if you want to combine results from multiple retrieval methods. | Given a query and a list of documents, Rerank indexes the documents from most to least semantically relevant to the query. |
tip
See our RAG from Scratch video on [RAG-Fusion](https://youtu.be/77qELPbNgxA?feature=shared), an approach for post-processing across multiple queries: Rewrite the user question from multiple perspectives, retrieve documents for each rewritten question, and combine the ranks of multiple search result lists to produce a single, unified ranking with [Reciprocal Rank Fusion (RRF)](https://towardsdatascience.com/forget-rag-the-future-is-rag-fusion-1147298d8ad1).
#### Generation[](#generation "Direct link to Generation")
**Finally, consider ways to build self-correction into your RAG system.** RAG systems can suffer from low quality retrieval (e.g., if a user question is out of the domain for the index) and / or hallucinations in generation. A naive retrieve-generate pipeline has no ability to detect or self-correct from these kinds of errors. The concept of ["flow engineering"](https://x.com/karpathy/status/1748043513156272416) has been introduced [in the context of code generation](https://arxiv.org/abs/2401.08500): iteratively build an answer to a code question with unit tests to check and self-correct errors. Several works have applied this to RAG, such as Self-RAG and Corrective-RAG. In both cases, checks for document relevance, hallucinations, and / or answer quality are performed in the RAG answer generation flow.
We've found that graphs are a great way to reliably express logical flows and have implemented ideas from several of these papers [using LangGraph](https://github.com/langchain-ai/langgraph/tree/main/examples/rag), as shown in the figure below (red - routing, blue - fallback, green - self-correction):
* **Routing:** Adaptive RAG ([paper](https://arxiv.org/abs/2403.14403)). Route questions to different retrieval approaches, as discussed above
* **Fallback:** Corrective RAG ([paper](https://arxiv.org/pdf/2401.15884.pdf)). Fallback to web search if docs are not relevant to query
* **Self-correction:** Self-RAG ([paper](https://arxiv.org/abs/2310.11511)). Fix answers w/ hallucinations or don’t address question
![](/v0.2/assets/images/langgraph_rag-f039b41ef268bf46783706e58726fd9c.png)
| Name | When to use | Description |
| --- | --- | --- |
| Self-RAG | When needing to fix answers with hallucinations or irrelevant content. | Self-RAG performs checks for document relevance, hallucinations, and answer quality during the RAG answer generation flow, iteratively building an answer and self-correcting errors. |
| Corrective-RAG | When needing a fallback mechanism for low relevance docs. | Corrective-RAG includes a fallback (e.g., to web search) if the retrieved documents are not relevant to the query, ensuring higher quality and more relevant retrieval. |
tip
See several videos and cookbooks showcasing RAG with LangGraph:
* [LangGraph Corrective RAG](https://www.youtube.com/watch?v=E2shqsYwxck)
* [LangGraph combining Adaptive, Self-RAG, and Corrective RAG](https://www.youtube.com/watch?v=-ROS6gfYIts)
* [Cookbooks for RAG using LangGraph](https://github.com/langchain-ai/langgraph/tree/main/examples/rag)
See our LangGraph RAG recipes with partners:
* [Meta](https://github.com/meta-llama/llama-recipes/tree/main/recipes/use_cases/agents/langchain)
* [Mistral](https://github.com/mistralai/cookbook/tree/main/third_party/langchain)
### Text splitting[](#text-splitting "Direct link to Text splitting")
LangChain offers many different types of `text splitters`. These all live in the `langchain-text-splitters` package.
Table columns:
* **Name**: Name of the text splitter
* **Classes**: Classes that implement this text splitter
* **Splits On**: How this text splitter splits text
* **Adds Metadata**: Whether or not this text splitter adds metadata about where each chunk came from.
* **Description**: Description of the splitter, including recommendation on when to use it.
| Name | Classes | Splits On | Adds Metadata | Description |
| --- | --- | --- | --- | --- |
| Recursive | [RecursiveCharacterTextSplitter](/v0.2/docs/how_to/recursive_text_splitter/), [RecursiveJsonSplitter](/v0.2/docs/how_to/recursive_json_splitter/) | A list of user defined characters | | Recursively splits text. This splitting is trying to keep related pieces of text next to each other. This is the `recommended way` to start splitting text. |
| HTML | [HTMLHeaderTextSplitter](/v0.2/docs/how_to/HTML_header_metadata_splitter/), [HTMLSectionSplitter](/v0.2/docs/how_to/HTML_section_aware_splitter/) | HTML specific characters | ✅ | Splits text based on HTML-specific characters. Notably, this adds in relevant information about where that chunk came from (based on the HTML) |
| Markdown | [MarkdownHeaderTextSplitter](/v0.2/docs/how_to/markdown_header_metadata_splitter/) | Markdown specific characters | ✅ | Splits text based on Markdown-specific characters. Notably, this adds in relevant information about where that chunk came from (based on the Markdown) |
| Code | [many languages](/v0.2/docs/how_to/code_splitter/) | Code (Python, JS) specific characters | | Splits text based on characters specific to coding languages. 15 different languages are available to choose from. |
| Token | [many classes](/v0.2/docs/how_to/split_by_token/) | Tokens | | Splits text on tokens. There exist a few different ways to measure tokens. |
| Character | [CharacterTextSplitter](/v0.2/docs/how_to/character_text_splitter/) | A user defined character | | Splits text based on a user defined character. One of the simpler methods. |
| Semantic Chunker (Experimental) | [SemanticChunker](/v0.2/docs/how_to/semantic-chunker/) | Sentences | | First splits on sentences. Then combines ones next to each other if they are semantically similar enough. Taken from [Greg Kamradt](https://github.com/FullStackRetrieval-com/RetrievalTutorials/blob/main/tutorials/LevelsOfTextSplitting/5_Levels_Of_Text_Splitting.ipynb) |
| Integration: AI21 Semantic | [AI21SemanticTextSplitter](/v0.2/docs/integrations/document_transformers/ai21_semantic_text_splitter/) | | ✅ | Identifies distinct topics that form coherent pieces of text and splits along those. |
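For example, a minimal use of the recommended recursive splitter looks like this (the text and chunk sizes are illustrative):

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

some_long_document_text = (
    "LangChain offers many different types of text splitters. " * 50
)

splitter = RecursiveCharacterTextSplitter(
    chunk_size=200,    # maximum characters per chunk
    chunk_overlap=20,  # overlap between adjacent chunks to preserve context
)
chunks = splitter.split_text(some_long_document_text)
```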
### Evaluation[](#evaluation "Direct link to Evaluation")
Evaluation is the process of assessing the performance and effectiveness of your LLM-powered applications. It involves testing the model's responses against a set of predefined criteria or benchmarks to ensure it meets the desired quality standards and fulfills the intended purpose. This process is vital for building reliable applications.
![](/v0.2/assets/images/langsmith_evaluate-7d48643f3e4c50d77234e13feb95144d.png)
[LangSmith](https://docs.smith.langchain.com/) helps with this process in a few ways:
* It makes it easier to create and curate datasets via its tracing and annotation features
* It provides an evaluation framework that helps you define metrics and run your app against your dataset
* It allows you to track results over time and automatically run your evaluators on a schedule or as part of CI/CD
To learn more, check out [this LangSmith guide](https://docs.smith.langchain.com/concepts/evaluation).
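As a rough sketch of what that looks like with the LangSmith Python SDK (the dataset, evaluator, and target app below are illustrative stand-ins, and this assumes the SDK's `evaluate` helper):

```python
from langsmith import Client
from langsmith.evaluation import evaluate

client = Client()

# Curate a tiny dataset (in practice, built from traces and annotations).
dataset = client.create_dataset("qa-smoke-test")
client.create_examples(
    inputs=[{"question": "where did Harrison work?"}],
    outputs=[{"answer": "Kensho"}],
    dataset_id=dataset.id,
)


def contains_reference(run, example) -> dict:
    # A trivial custom evaluator: score 1 if the reference answer appears in the output.
    predicted = run.outputs.get("answer", "")
    reference = example.outputs["answer"]
    return {"key": "contains_reference", "score": int(reference.lower() in predicted.lower())}


def my_app(inputs: dict) -> dict:
    # Stand-in for the chain or app under test.
    return {"answer": "Harrison worked at Kensho"}


results = evaluate(my_app, data=dataset.name, evaluators=[contains_reference])
```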
https://python.langchain.com/v0.2/docs/how_to/query_multiple_retrievers/
How to handle multiple retrievers when doing query analysis
===========================================================
Sometimes, a query analysis technique may allow for selection of which retriever to use. To use this, you will need to add some logic to select which retriever to use. We will show a simple example (using mock data) of how to do that.
Setup[](#setup "Direct link to Setup")
---------------------------------------
#### Install dependencies[](#install-dependencies "Direct link to Install dependencies")
# %pip install -qU langchain langchain-community langchain-openai langchain-chroma
#### Set environment variables[](#set-environment-variables "Direct link to Set environment variables")
We'll use OpenAI in this example:
import getpassimport osos.environ["OPENAI_API_KEY"] = getpass.getpass()# Optional, uncomment to trace runs with LangSmith. Sign up here: https://smith.langchain.com.# os.environ["LANGCHAIN_TRACING_V2"] = "true"# os.environ["LANGCHAIN_API_KEY"] = getpass.getpass()
### Create Index[](#create-index "Direct link to Create Index")
We will create a vectorstore over fake information.
from langchain_chroma import Chromafrom langchain_openai import OpenAIEmbeddingsfrom langchain_text_splitters import RecursiveCharacterTextSplittertexts = ["Harrison worked at Kensho"]embeddings = OpenAIEmbeddings(model="text-embedding-3-small")vectorstore = Chroma.from_texts(texts, embeddings, collection_name="harrison")retriever_harrison = vectorstore.as_retriever(search_kwargs={"k": 1})texts = ["Ankush worked at Facebook"]embeddings = OpenAIEmbeddings(model="text-embedding-3-small")vectorstore = Chroma.from_texts(texts, embeddings, collection_name="ankush")retriever_ankush = vectorstore.as_retriever(search_kwargs={"k": 1})
**API Reference:**[OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html) | [RecursiveCharacterTextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.RecursiveCharacterTextSplitter.html)
Query analysis[](#query-analysis "Direct link to Query analysis")
------------------------------------------------------------------
We will use function calling to structure the output. We will let it return multiple queries.
from typing import List, Optionalfrom langchain_core.pydantic_v1 import BaseModel, Fieldclass Search(BaseModel): """Search for information about a person.""" query: str = Field( ..., description="Query to look up", ) person: str = Field( ..., description="Person to look things up for. Should be `HARRISON` or `ANKUSH`.", )
from langchain_core.output_parsers.openai_tools import PydanticToolsParserfrom langchain_core.prompts import ChatPromptTemplatefrom langchain_core.runnables import RunnablePassthroughfrom langchain_openai import ChatOpenAIoutput_parser = PydanticToolsParser(tools=[Search])system = """You have the ability to issue search queries to get information to help answer user information."""prompt = ChatPromptTemplate.from_messages( [ ("system", system), ("human", "{question}"), ])llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)structured_llm = llm.with_structured_output(Search)query_analyzer = {"question": RunnablePassthrough()} | prompt | structured_llm
**API Reference:**[PydanticToolsParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.openai_tools.PydanticToolsParser.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) | [RunnablePassthrough](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html) | [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html)
We can see that this allows for routing between retrievers
query_analyzer.invoke("where did Harrison Work")
Search(query='workplace', person='HARRISON')
query_analyzer.invoke("where did ankush Work")
Search(query='workplace', person='ANKUSH')
Retrieval with query analysis[](#retrieval-with-query-analysis "Direct link to Retrieval with query analysis")
---------------------------------------------------------------------------------------------------------------
So how would we include this in a chain? We just need some simple logic to select the retriever and pass in the search query:
from langchain_core.runnables import chain
**API Reference:**[chain](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.chain.html)
retrievers = { "HARRISON": retriever_harrison, "ANKUSH": retriever_ankush,}
@chaindef custom_chain(question): response = query_analyzer.invoke(question) retriever = retrievers[response.person] return retriever.invoke(response.query)
custom_chain.invoke("where did Harrison Work")
[Document(page_content='Harrison worked at Kensho')]
custom_chain.invoke("where did ankush Work")
[Document(page_content='Ankush worked at Facebook')]
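If you want to guard against the model returning a person outside the known set, one option is a small fallback in the routing logic. This is an illustrative variation on the chain above, not part of the original example:

```python
@chain
def custom_chain_with_fallback(question):
    response = query_analyzer.invoke(question)
    retriever = retrievers.get(response.person)
    if retriever is None:
        # Unknown person: search every index and merge the results.
        return [doc for r in retrievers.values() for doc in r.invoke(response.query)]
    return retriever.invoke(response.query)
```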
https://python.langchain.com/v0.2/docs/langserve/
🦜️🏓 LangServe
===============
[![Release Notes](https://img.shields.io/github/release/langchain-ai/langserve)](https://github.com/langchain-ai/langserve/releases) [![Downloads](https://static.pepy.tech/badge/langserve/month)](https://pepy.tech/project/langserve) [![Open Issues](https://img.shields.io/github/issues-raw/langchain-ai/langserve)](https://github.com/langchain-ai/langserve/issues) [![](https://dcbadge.vercel.app/api/server/6adMQxSpJS?compact=true&style=flat)](https://discord.com/channels/1038097195422978059/1170024642245832774)
🚩 We will be releasing a hosted version of LangServe for one-click deployments of LangChain applications. [Sign up here](https://forms.gle/KC13Nzn76UeLaghK7) to get on the waitlist.
Overview[](#overview "Direct link to Overview")
------------------------------------------------
[LangServe](https://github.com/langchain-ai/langserve) helps developers deploy `LangChain` [runnables and chains](https://python.langchain.com/docs/expression_language/) as a REST API.
This library is integrated with [FastAPI](https://fastapi.tiangolo.com/) and uses [pydantic](https://docs.pydantic.dev/latest/) for data validation.
In addition, it provides a client that can be used to call into runnables deployed on a server. A JavaScript client is available in [LangChain.js](https://js.langchain.com/docs/ecosystem/langserve).
Features[](#features "Direct link to Features")
------------------------------------------------
* Input and Output schemas automatically inferred from your LangChain object, and enforced on every API call, with rich error messages
* API docs page with JSONSchema and Swagger (insert example link)
* Efficient `/invoke`, `/batch` and `/stream` endpoints with support for many concurrent requests on a single server
* `/stream_log` endpoint for streaming all (or some) intermediate steps from your chain/agent
* **new** as of 0.0.40, supports `/stream_events` to make it easier to stream without needing to parse the output of `/stream_log`.
* Playground page at `/playground/` with streaming output and intermediate steps
* Built-in (optional) tracing to [LangSmith](https://www.langchain.com/langsmith), just add your API key (see [Instructions](https://docs.smith.langchain.com/))
* All built with battle-tested open-source Python libraries like FastAPI, Pydantic, uvloop and asyncio.
* Use the client SDK to call a LangServe server as if it was a Runnable running locally (or call the HTTP API directly)
* [LangServe Hub](https://github.com/langchain-ai/langchain/blob/master/templates/README.md)
Limitations[](#limitations "Direct link to Limitations")
---------------------------------------------------------
* Client callbacks are not yet supported for events that originate on the server
* OpenAPI docs will not be generated when using Pydantic V2. Fast API does not support [mixing pydantic v1 and v2 namespaces](https://github.com/tiangolo/fastapi/issues/10360). See section below for more details.
Hosted LangServe[](#hosted-langserve "Direct link to Hosted LangServe")
------------------------------------------------------------------------
We will be releasing a hosted version of LangServe for one-click deployments of LangChain applications. [Sign up here](https://forms.gle/KC13Nzn76UeLaghK7) to get on the waitlist.
Security[](#security "Direct link to Security")
------------------------------------------------
* Vulnerability in Versions 0.0.13 - 0.0.15 -- playground endpoint allows accessing arbitrary files on server. [Resolved in 0.0.16](https://github.com/langchain-ai/langserve/pull/98).
Installation[](#installation "Direct link to Installation")
------------------------------------------------------------
For both client and server:
pip install "langserve[all]"
or `pip install "langserve[client]"` for client code, and `pip install "langserve[server]"` for server code.
LangChain CLI 🛠️[](#langchain-cli-️ "Direct link to LangChain CLI 🛠️")
-------------------------------------------------------------------------
Use the `LangChain` CLI to bootstrap a `LangServe` project quickly.
To use the langchain CLI make sure that you have a recent version of `langchain-cli` installed. You can install it with `pip install -U langchain-cli`.
Setup[](#setup "Direct link to Setup")
---------------------------------------
**Note**: We use `poetry` for dependency management. Please follow poetry [doc](https://python-poetry.org/docs/) to learn more about it.
### 1\. Create new app using langchain cli command[](#1-create-new-app-using-langchain-cli-command "Direct link to 1. Create new app using langchain cli command")
langchain app new my-app
### 2\. Define the runnable in add\_routes. Go to server.py and edit[](#2-define-the-runnable-in-add_routes-go-to-serverpy-and-edit "Direct link to 2. Define the runnable in add_routes. Go to server.py and edit")
add_routes(app, NotImplemented)
### 3\. Use `poetry` to add 3rd party packages (e.g., langchain-openai, langchain-anthropic, langchain-mistral etc).[](#3-use-poetry-to-add-3rd-party-packages-eg-langchain-openai-langchain-anthropic-langchain-mistral-etc "Direct link to 3-use-poetry-to-add-3rd-party-packages-eg-langchain-openai-langchain-anthropic-langchain-mistral-etc")
poetry add [package-name] // e.g `poetry add langchain-openai`
### 4\. Set up relevant env variables. For example,[](#4-set-up-relevant-env-variables-for-example "Direct link to 4. Set up relevant env variables. For example,")
export OPENAI_API_KEY="sk-..."
### 5\. Serve your app[](#5-serve-your-app "Direct link to 5. Serve your app")
poetry run langchain serve --port=8100
Examples[](#examples "Direct link to Examples")
------------------------------------------------
Get your LangServe instance started quickly with [LangChain Templates](https://github.com/langchain-ai/langchain/blob/master/templates/README.md).
For more examples, see the templates [index](https://github.com/langchain-ai/langchain/blob/master/templates/docs/INDEX.md) or the [examples](https://github.com/langchain-ai/langserve/tree/main/examples) directory.
| Description | Links |
| --- | --- |
| **LLMs** Minimal example that serves OpenAI and Anthropic chat models. Uses async, supports batching and streaming. | [server](https://github.com/langchain-ai/langserve/tree/main/examples/llm/server.py), [client](https://github.com/langchain-ai/langserve/blob/main/examples/llm/client.ipynb) |
| **Retriever** Simple server that exposes a retriever as a runnable. | [server](https://github.com/langchain-ai/langserve/tree/main/examples/retrieval/server.py), [client](https://github.com/langchain-ai/langserve/tree/main/examples/retrieval/client.ipynb) |
| **Conversational Retriever** A [Conversational Retriever](https://python.langchain.com/docs/expression_language/cookbook/retrieval#conversational-retrieval-chain) exposed via LangServe | [server](https://github.com/langchain-ai/langserve/tree/main/examples/conversational_retrieval_chain/server.py), [client](https://github.com/langchain-ai/langserve/tree/main/examples/conversational_retrieval_chain/client.ipynb) |
| **Agent** without **conversation history** based on [OpenAI tools](https://python.langchain.com/docs/modules/agents/agent_types/openai_functions_agent) | [server](https://github.com/langchain-ai/langserve/tree/main/examples/agent/server.py), [client](https://github.com/langchain-ai/langserve/tree/main/examples/agent/client.ipynb) |
| **Agent** with **conversation history** based on [OpenAI tools](https://python.langchain.com/docs/modules/agents/agent_types/openai_functions_agent) | [server](https://github.com/langchain-ai/langserve/blob/main/examples/agent_with_history/server.py), [client](https://github.com/langchain-ai/langserve/blob/main/examples/agent_with_history/client.ipynb) |
| [RunnableWithMessageHistory](https://python.langchain.com/docs/expression_language/how_to/message_history) to implement chat persisted on backend, keyed off a `session_id` supplied by client. | [server](https://github.com/langchain-ai/langserve/tree/main/examples/chat_with_persistence/server.py), [client](https://github.com/langchain-ai/langserve/tree/main/examples/chat_with_persistence/client.ipynb) |
| [RunnableWithMessageHistory](https://python.langchain.com/docs/expression_language/how_to/message_history) to implement chat persisted on backend, keyed off a `conversation_id` supplied by client, and `user_id` (see Auth for implementing `user_id` properly). | [server](https://github.com/langchain-ai/langserve/tree/main/examples/chat_with_persistence_and_user/server.py), [client](https://github.com/langchain-ai/langserve/tree/main/examples/chat_with_persistence_and_user/client.ipynb) |
| [Configurable Runnable](https://python.langchain.com/docs/expression_language/how_to/configure) to create a retriever that supports run time configuration of the index name. | [server](https://github.com/langchain-ai/langserve/tree/main/examples/configurable_retrieval/server.py), [client](https://github.com/langchain-ai/langserve/tree/main/examples/configurable_retrieval/client.ipynb) |
| [Configurable Runnable](https://python.langchain.com/docs/expression_language/how_to/configure) that shows configurable fields and configurable alternatives. | [server](https://github.com/langchain-ai/langserve/tree/main/examples/configurable_chain/server.py), [client](https://github.com/langchain-ai/langserve/tree/main/examples/configurable_chain/client.ipynb) |
| **APIHandler** Shows how to use `APIHandler` instead of `add_routes`. This provides more flexibility for developers to define endpoints. Works well with all FastAPI patterns, but takes a bit more effort. | [server](https://github.com/langchain-ai/langserve/tree/main/examples/api_handler_examples/server.py) |
| **LCEL Example** Example that uses LCEL to manipulate a dictionary input. | [server](https://github.com/langchain-ai/langserve/tree/main/examples/passthrough_dict/server.py), [client](https://github.com/langchain-ai/langserve/tree/main/examples/passthrough_dict/client.ipynb) |
| **Auth** with `add_routes`: Simple authentication that can be applied across all endpoints associated with app. (Not useful on its own for implementing per user logic.) | [server](https://github.com/langchain-ai/langserve/tree/main/examples/auth/global_deps/server.py) |
| **Auth** with `add_routes`: Simple authentication mechanism based on path dependencies. (Not useful on its own for implementing per user logic.) | [server](https://github.com/langchain-ai/langserve/tree/main/examples/auth/path_dependencies/server.py) |
| **Auth** with `add_routes`: Implement per user logic and auth for endpoints that use per request config modifier. (**Note**: At the moment, does not integrate with OpenAPI docs.) | [server](https://github.com/langchain-ai/langserve/tree/main/examples/auth/per_req_config_modifier/server.py), [client](https://github.com/langchain-ai/langserve/tree/main/examples/auth/per_req_config_modifier/client.ipynb) |
| **Auth** with `APIHandler`: Implement per user logic and auth that shows how to search only within user owned documents. | [server](https://github.com/langchain-ai/langserve/tree/main/examples/auth/api_handler/server.py), [client](https://github.com/langchain-ai/langserve/tree/main/examples/auth/api_handler/client.ipynb) |
| **Widgets** Different widgets that can be used with playground (file upload and chat) | [server](https://github.com/langchain-ai/langserve/tree/main/examples/widgets/chat/tuples/server.py) |
| **Widgets** File upload widget used for LangServe playground. | [server](https://github.com/langchain-ai/langserve/tree/main/examples/file_processing/server.py), [client](https://github.com/langchain-ai/langserve/tree/main/examples/file_processing/client.ipynb) |
Sample Application[](#sample-application "Direct link to Sample Application")
------------------------------------------------------------------------------
### Server[](#server "Direct link to Server")
Here's a server that deploys an OpenAI chat model, an Anthropic chat model, and a chain that uses the Anthropic model to tell a joke about a topic.
#!/usr/bin/env pythonfrom fastapi import FastAPIfrom langchain.prompts import ChatPromptTemplatefrom langchain.chat_models import ChatAnthropic, ChatOpenAIfrom langserve import add_routesapp = FastAPI( title="LangChain Server", version="1.0", description="A simple api server using Langchain's Runnable interfaces",)add_routes( app, ChatOpenAI(model="gpt-3.5-turbo-0125"), path="/openai",)add_routes( app, ChatAnthropic(model="claude-3-haiku-20240307"), path="/anthropic",)model = ChatAnthropic(model="claude-3-haiku-20240307")prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")add_routes( app, prompt | model, path="/joke",)if __name__ == "__main__": import uvicorn uvicorn.run(app, host="localhost", port=8000)
**API Reference:**[ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) | [ChatAnthropic](https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.anthropic.ChatAnthropic.html) | [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.openai.ChatOpenAI.html)
If you intend to call your endpoint from the browser, you will also need to set CORS headers. You can use FastAPI's built-in middleware for that:
from fastapi.middleware.cors import CORSMiddleware# Set all CORS enabled originsapp.add_middleware( CORSMiddleware, allow_origins=["*"], allow_credentials=True, allow_methods=["*"], allow_headers=["*"], expose_headers=["*"],)
### Docs[](#docs "Direct link to Docs")
If you've deployed the server above, you can view the generated OpenAPI docs using:
> ⚠️ If using pydantic v2, docs will not be generated for _invoke_, _batch_, _stream_, _stream\_log_. See [Pydantic](#pydantic) section below for more details.
curl localhost:8000/docs
make sure to **add** the `/docs` suffix.
> ⚠️ Index page `/` is not defined by **design**, so `curl localhost:8000` or visiting the URL will return a 404. If you want content at `/` define an endpoint `@app.get("/")`.
### Client[](#client "Direct link to Client")
Python SDK
from langchain.schema import SystemMessage, HumanMessagefrom langchain.prompts import ChatPromptTemplatefrom langchain.schema.runnable import RunnableMapfrom langserve import RemoteRunnableopenai = RemoteRunnable("http://localhost:8000/openai/")anthropic = RemoteRunnable("http://localhost:8000/anthropic/")joke_chain = RemoteRunnable("http://localhost:8000/joke/")joke_chain.invoke({"topic": "parrots"})# or asyncawait joke_chain.ainvoke({"topic": "parrots"})prompt = [ SystemMessage(content='Act like either a cat or a parrot.'), HumanMessage(content='Hello!')]# Supports astreamasync for msg in anthropic.astream(prompt): print(msg, end="", flush=True)prompt = ChatPromptTemplate.from_messages( [("system", "Tell me a long story about {topic}")])# Can define custom chainschain = prompt | RunnableMap({ "openai": openai, "anthropic": anthropic,})chain.batch([{"topic": "parrots"}, {"topic": "cats"}])
**API Reference:**[SystemMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.system.SystemMessage.html) | [HumanMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.human.HumanMessage.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) | [RunnableMap](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableMap.html)
In TypeScript (requires LangChain.js version 0.0.166 or later):
import { RemoteRunnable } from "@langchain/core/runnables/remote";const chain = new RemoteRunnable({ url: `http://localhost:8000/joke/`,});const result = await chain.invoke({ topic: "cats",});
Python using `requests`:
import requestsresponse = requests.post( "http://localhost:8000/joke/invoke", json={'input': {'topic': 'cats'}})response.json()
You can also use `curl`:
curl --location --request POST 'http://localhost:8000/joke/invoke' \ --header 'Content-Type: application/json' \ --data-raw '{ "input": { "topic": "cats" } }'
Endpoints[](#endpoints "Direct link to Endpoints")
---------------------------------------------------
The following code:
...add_routes( app, runnable, path="/my_runnable",)
adds these endpoints to the server:
* `POST /my_runnable/invoke` - invoke the runnable on a single input
* `POST /my_runnable/batch` - invoke the runnable on a batch of inputs
* `POST /my_runnable/stream` - invoke on a single input and stream the output
* `POST /my_runnable/stream_log` - invoke on a single input and stream the output, including output of intermediate steps as it's generated
* `POST /my_runnable/astream_events` - invoke on a single input and stream events as they are generated, including from intermediate steps.
* `GET /my_runnable/input_schema` - json schema for input to the runnable
* `GET /my_runnable/output_schema` - json schema for output of the runnable
* `GET /my_runnable/config_schema` - json schema for config of the runnable
These endpoints match the [LangChain Expression Language interface](https://python.langchain.com/docs/expression_language/interface) -- please reference this documentation for more details.
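For example, assuming a runnable mounted at the hypothetical `/my_runnable` path, a few of these endpoints can be exercised with `requests` (payload shapes mirror the invoke example shown earlier):

```python
import requests

base = "http://localhost:8000/my_runnable"

# Invoke on a single input.
requests.post(f"{base}/invoke", json={"input": "hello"}).json()

# Invoke on a batch of inputs.
requests.post(f"{base}/batch", json={"inputs": ["hello", "goodbye"]}).json()

# Inspect the JSON schema the server expects for inputs.
requests.get(f"{base}/input_schema").json()
```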
Playground[](#playground "Direct link to Playground")
------------------------------------------------------
You can find a playground page for your runnable at `/my_runnable/playground/`. This exposes a simple UI to [configure](https://python.langchain.com/docs/expression_language/how_to/configure) and invoke your runnable with streaming output and intermediate steps.
![](https://github.com/langchain-ai/langserve/assets/3205522/5ca56e29-f1bb-40f4-84b5-15916384a276)
### Widgets[](#widgets "Direct link to Widgets")
The playground supports [widgets](#playground-widgets) and can be used to test your runnable with different inputs. See the [widgets](#widgets) section below for more details.
### Sharing[](#sharing "Direct link to Sharing")
In addition, for configurable runnables, the playground will allow you to configure the runnable and share a link with the configuration:
![](https://github.com/langchain-ai/langserve/assets/3205522/86ce9c59-f8e4-4d08-9fa3-62030e0f521d)
Chat playground[](#chat-playground "Direct link to Chat playground")
---------------------------------------------------------------------
LangServe also supports a chat-focused playground that you can opt into and use under `/my_runnable/playground/`. Unlike the general playground, only certain types of runnables are supported - the runnable's input schema must be a `dict` with either:
* a single key, and that key's value must be a list of chat messages.
* two keys, one whose value is a list of messages, and the other representing the most recent message.
We recommend you use the first format.
The runnable must also return either an `AIMessage` or a string.
To enable it, you must set `playground_type="chat",` when adding your route. Here's an example:
# Declare a chainprompt = ChatPromptTemplate.from_messages( [ ("system", "You are a helpful, professional assistant named Cob."), MessagesPlaceholder(variable_name="messages"), ])chain = prompt | ChatAnthropic(model="claude-2")class InputChat(BaseModel): """Input for the chat endpoint.""" messages: List[Union[HumanMessage, AIMessage, SystemMessage]] = Field( ..., description="The chat messages representing the current conversation.", )add_routes( app, chain.with_types(input_type=InputChat), enable_feedback_endpoint=True, enable_public_trace_link_endpoint=True, playground_type="chat",)
If you are using LangSmith, you can also set `enable_feedback_endpoint=True` on your route to enable thumbs-up/thumbs-down buttons after each message, and `enable_public_trace_link_endpoint=True` to add a button that creates public traces for runs. Note that you will also need to set the following environment variables:
export LANGCHAIN_TRACING_V2="true"export LANGCHAIN_PROJECT="YOUR_PROJECT_NAME"export LANGCHAIN_API_KEY="YOUR_API_KEY"
Here's an example with the above two options turned on:
![](./.github/img/chat_playground.png)
Note: If you enable public trace links, the internals of your chain will be exposed. We recommend only using this setting for demos or testing.
Legacy Chains[](#legacy-chains "Direct link to Legacy Chains")
---------------------------------------------------------------
LangServe works with both Runnables (constructed via [LangChain Expression Language](https://python.langchain.com/docs/expression_language/)) and legacy chains (inheriting from `Chain`). However, some of the input schemas for legacy chains may be incomplete/incorrect, leading to errors. This can be fixed by updating the `input_schema` property of those chains in LangChain. If you encounter any errors, please open an issue on THIS repo, and we will work to address it.
Deployment[](#deployment "Direct link to Deployment")
------------------------------------------------------
### Deploy to AWS[](#deploy-to-aws "Direct link to Deploy to AWS")
You can deploy to AWS using the [AWS Copilot CLI](https://aws.github.io/copilot-cli/)
copilot init --app [application-name] --name [service-name] --type 'Load Balanced Web Service' --dockerfile './Dockerfile' --deploy
Click [here](https://aws.amazon.com/containers/copilot/) to learn more.
### Deploy to Azure[](#deploy-to-azure "Direct link to Deploy to Azure")
You can deploy to Azure using Azure Container Apps (Serverless):
az containerapp up --name [container-app-name] --source . --resource-group [resource-group-name] --environment [environment-name] --ingress external --target-port 8001 --env-vars=OPENAI_API_KEY=your_key
You can find more info [here](https://learn.microsoft.com/en-us/azure/container-apps/containerapp-up)
### Deploy to GCP[](#deploy-to-gcp "Direct link to Deploy to GCP")
You can deploy to GCP Cloud Run using the following command:
gcloud run deploy [your-service-name] --source . --port 8001 --allow-unauthenticated --region us-central1 --set-env-vars=OPENAI_API_KEY=your_key
### Community Contributed[](#community-contributed "Direct link to Community Contributed")
#### Deploy to Railway[](#deploy-to-railway "Direct link to Deploy to Railway")
[Example Railway Repo](https://github.com/PaulLockett/LangServe-Railway/tree/main)
[![Deploy on Railway](https://railway.app/button.svg)](https://railway.app/template/pW9tXP?referralCode=c-aq4K)
Pydantic[](#pydantic "Direct link to Pydantic")
------------------------------------------------
LangServe provides support for Pydantic 2 with some limitations.
1. OpenAPI docs will not be generated for invoke/batch/stream/stream\_log when using Pydantic V2. FastAPI does not support [mixing pydantic v1 and v2 namespaces](https://github.com/tiangolo/fastapi/issues/10360).
2. LangChain uses the v1 namespace in Pydantic v2. Please read the [following guidelines to ensure compatibility with LangChain](https://github.com/langchain-ai/langchain/discussions/9337)
Except for these limitations, we expect the API endpoints, the playground and any other features to work as expected.
Advanced[](#advanced "Direct link to Advanced")
------------------------------------------------
### Handling Authentication[](#handling-authentication "Direct link to Handling Authentication")
If you need to add authentication to your server, please read Fast API's documentation about [dependencies](https://fastapi.tiangolo.com/tutorial/dependencies/) and [security](https://fastapi.tiangolo.com/tutorial/security/).
The below examples show how to wire up authentication logic to LangServe endpoints using FastAPI primitives.
You are responsible for providing the actual authentication logic, the users table etc.
If you're not sure what you're doing, you could try using an existing solution [Auth0](https://auth0.com/).
#### Using add\_routes[](#using-add_routes "Direct link to Using add_routes")
If you're using `add_routes`, see examples [here](https://github.com/langchain-ai/langserve/tree/main/examples/auth).
| Description | Links |
| --- | --- |
| **Auth** with `add_routes`: Simple authentication that can be applied across all endpoints associated with app. (Not useful on its own for implementing per user logic.) | [server](https://github.com/langchain-ai/langserve/tree/main/examples/auth/global_deps/server.py) |
| **Auth** with `add_routes`: Simple authentication mechanism based on path dependencies. (Not useful on its own for implementing per user logic.) | [server](https://github.com/langchain-ai/langserve/tree/main/examples/auth/path_dependencies/server.py) |
| **Auth** with `add_routes`: Implement per user logic and auth for endpoints that use per request config modifier. (**Note**: At the moment, does not integrate with OpenAPI docs.) | [server](https://github.com/langchain-ai/langserve/tree/main/examples/auth/per_req_config_modifier/server.py), [client](https://github.com/langchain-ai/langserve/tree/main/examples/auth/per_req_config_modifier/client.ipynb) |
Alternatively, you can use FastAPI's [middleware](https://fastapi.tiangolo.com/tutorial/middleware/).
Using global dependencies and path dependencies has the advantage that auth will be properly supported in the OpenAPI docs page, but these are not sufficient for implementing per user logic (e.g., making an application that can search only within user owned documents).
If you need to implement per user logic, you can use the `per_req_config_modifier` or `APIHandler` (below) to implement this logic.
**Per User**
If you need authorization or logic that is user dependent, specify `per_req_config_modifier` when using `add_routes`. Use a callable that receives the raw `Request` object and can extract relevant information from it for authentication and authorization purposes.
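Here is a minimal sketch of that pattern; the header-based lookup and `per_user_runnable` are illustrative placeholders for your real authentication logic and configurable runnable:

```python
from typing import Any, Dict

from fastapi import FastAPI, HTTPException, Request
from langserve import add_routes

app = FastAPI()


def per_req_config_modifier(config: Dict[str, Any], request: Request) -> Dict[str, Any]:
    """Pull the user identity off the request and stash it in the runnable config."""
    user_id = request.headers.get("x-user-id")  # replace with real, verified credentials
    if user_id is None:
        raise HTTPException(status_code=401, detail="Missing user credentials")
    config["configurable"] = {**config.get("configurable", {}), "user_id": user_id}
    return config


add_routes(
    app,
    per_user_runnable,  # a configurable runnable that reads "user_id" from its config
    per_req_config_modifier=per_req_config_modifier,
    path="/my_runnable",
)
```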
#### Using APIHandler[](#using-apihandler "Direct link to Using APIHandler")
If you feel comfortable with FastAPI and python, you can use LangServe's [APIHandler](https://github.com/langchain-ai/langserve/blob/main/examples/api_handler_examples/server.py).
| Description | Links |
| --- | --- |
| **Auth** with `APIHandler`: Implement per user logic and auth that shows how to search only within user owned documents. | [server](https://github.com/langchain-ai/langserve/tree/main/examples/auth/api_handler/server.py), [client](https://github.com/langchain-ai/langserve/tree/main/examples/auth/api_handler/client.ipynb) |
| **APIHandler** Shows how to use `APIHandler` instead of `add_routes`. This provides more flexibility for developers to define endpoints. Works well with all FastAPI patterns, but takes a bit more effort. | [server](https://github.com/langchain-ai/langserve/tree/main/examples/api_handler_examples/server.py), [client](https://github.com/langchain-ai/langserve/tree/main/examples/api_handler_examples/client.ipynb) |
It's a bit more work, but gives you complete control over the endpoint definitions, so you can do whatever custom logic you need for auth.
### Files[](#files "Direct link to Files")
LLM applications often deal with files. There are different architectures that can be used to implement file processing; at a high level:
1. The file may be uploaded to the server via a dedicated endpoint and processed using a separate endpoint
2. The file may be uploaded by either value (bytes of file) or reference (e.g., s3 url to file content)
3. The processing endpoint may be blocking or non-blocking
4. If significant processing is required, the processing may be offloaded to a dedicated process pool
You should determine what is the appropriate architecture for your application.
Currently, to upload files by value to a runnable, use base64 encoding for the file (`multipart/form-data` is not supported yet).
Here's an [example](https://github.com/langchain-ai/langserve/tree/main/examples/file_processing) that shows how to use base64 encoding to send a file to a remote runnable.
Remember, you can always upload files by reference (e.g., s3 url) or upload them as multipart/form-data to a dedicated endpoint.
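For illustration, a client calling a runnable like the `FileProcessingRequest` example further below might send a base64-encoded file like this (the `/pdf/` path and file name are hypothetical):

```python
import base64

from langserve import RemoteRunnable

# Hypothetical endpoint exposing a file-processing runnable.
remote_runnable = RemoteRunnable("http://localhost:8000/pdf/")

with open("my_file.pdf", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("utf-8")

remote_runnable.invoke({"file": encoded, "num_chars": 100})
```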
### Custom Input and Output Types[](#custom-input-and-output-types "Direct link to Custom Input and Output Types")
Input and Output types are defined on all runnables.
You can access them via the `input_schema` and `output_schema` properties.
`LangServe` uses these types for validation and documentation.
If you want to override the default inferred types, you can use the `with_types` method.
Here's a toy example to illustrate the idea:
from typing import Anyfrom fastapi import FastAPIfrom langchain.schema.runnable import RunnableLambdaapp = FastAPI()def func(x: Any) -> int: """Mistyped function that should accept an int but accepts anything.""" return x + 1runnable = RunnableLambda(func).with_types( input_type=int,)add_routes(app, runnable)
**API Reference:**[RunnableLambda](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableLambda.html)
### Custom User Types[](#custom-user-types "Direct link to Custom User Types")
Inherit from `CustomUserType` if you want the data to de-serialize into a pydantic model rather than the equivalent dict representation.
At the moment, this type only works _server_ side and is used to specify desired _decoding_ behavior. If inheriting from this type the server will keep the decoded type as a pydantic model instead of converting it into a dict.
from fastapi import FastAPIfrom langchain.schema.runnable import RunnableLambdafrom langserve import add_routesfrom langserve.schema import CustomUserTypeapp = FastAPI()class Foo(CustomUserType): bar: intdef func(foo: Foo) -> int: """Sample function that expects a Foo type which is a pydantic model""" assert isinstance(foo, Foo) return foo.bar# Note that the input and output type are automatically inferred!# You do not need to specify them.# runnable = RunnableLambda(func).with_types( # <-- Not needed in this case# input_type=Foo,# output_type=int,#add_routes(app, RunnableLambda(func), path="/foo")
**API Reference:**[RunnableLambda](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableLambda.html)
### Playground Widgets[](#playground-widgets "Direct link to Playground Widgets")
The playground allows you to define custom widgets for your runnable from the backend.
Here are a few examples:
| Description | Links |
| --- | --- |
| **Widgets** Different widgets that can be used with playground (file upload and chat) | [server](https://github.com/langchain-ai/langserve/tree/main/examples/widgets/chat/tuples/server.py), [client](https://github.com/langchain-ai/langserve/tree/main/examples/widgets/client.ipynb) |
| **Widgets** File upload widget used for LangServe playground. | [server](https://github.com/langchain-ai/langserve/tree/main/examples/file_processing/server.py), [client](https://github.com/langchain-ai/langserve/tree/main/examples/file_processing/client.ipynb) |
#### Schema[](#schema "Direct link to Schema")
* A widget is specified at the field level and shipped as part of the JSON schema of the input type
* A widget must contain a key called `type` with the value being one of a well known list of widgets
* Other widget keys will be associated with values that describe paths in a JSON object
type JsonPath = number | string | (number | string)[];type NameSpacedPath = { title: string; path: JsonPath }; // Using title to mimick json schema, but can use namespacetype OneOfPath = { oneOf: JsonPath[] };type Widget = { type: string; // Some well known type (e.g., base64file, chat etc.) [key: string]: JsonPath | NameSpacedPath | OneOfPath;};
### Available Widgets[](#available-widgets "Direct link to Available Widgets")
There are only two widgets that the user can specify manually right now:
1. File Upload Widget
2. Chat History Widget
See below for more information about these widgets.
All other widgets on the playground UI are created and managed automatically by the UI based on the config schema of the Runnable. When you create Configurable Runnables, the playground should create appropriate widgets for you to control the behavior.
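For example, exposing a configurable field on a model is enough for the playground to render a control for it. This is a small illustrative sketch (field id and names are arbitrary):

```python
from fastapi import FastAPI
from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI
from langserve import add_routes

app = FastAPI()

# Expose temperature as a configurable field; the playground renders a control for it.
model = ChatOpenAI(temperature=0).configurable_fields(
    temperature=ConfigurableField(
        id="llm_temperature",
        name="LLM Temperature",
        description="Sampling temperature used by the model",
    )
)

add_routes(app, model, path="/configurable_temperature")
```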
#### File Upload Widget[](#file-upload-widget "Direct link to File Upload Widget")
Allows creation of a file upload input in the UI playground for files that are uploaded as base64 encoded strings. Here's the full [example](https://github.com/langchain-ai/langserve/tree/main/examples/file_processing).
Snippet:
try: from pydantic.v1 import Fieldexcept ImportError: from pydantic import Fieldfrom langserve import CustomUserType# ATTENTION: Inherit from CustomUserType instead of BaseModel otherwise# the server will decode it into a dict instead of a pydantic model.class FileProcessingRequest(CustomUserType): """Request including a base64 encoded file.""" # The extra field is used to specify a widget for the playground UI. file: str = Field(..., extra={"widget": {"type": "base64file"}}) num_chars: int = 100
Example widget:
![](https://github.com/langchain-ai/langserve/assets/3205522/52199e46-9464-4c2e-8be8-222250e08c3f)
### Chat Widget[](#chat-widget "Direct link to Chat Widget")
Look at the [widget example](https://github.com/langchain-ai/langserve/tree/main/examples/widgets/chat/tuples/server.py).
To define a chat widget, make sure that you pass "type": "chat".
* "input" is JSONPath to the field in the _Request_ that has the new input message.
* "output" is JSONPath to the field in the _Response_ that has new output message(s).
* Don't specify these fields if the entire input or output should be used as they are (e.g., if the output is a list of chat messages).
Here's a snippet:
class ChatHistory(CustomUserType): chat_history: List[Tuple[str, str]] = Field( ..., examples=[[("human input", "ai response")]], extra={"widget": {"type": "chat", "input": "question", "output": "answer"}}, ) question: strdef _format_to_messages(input: ChatHistory) -> List[BaseMessage]: """Format the input to a list of messages.""" history = input.chat_history user_input = input.question messages = [] for human, ai in history: messages.append(HumanMessage(content=human)) messages.append(AIMessage(content=ai)) messages.append(HumanMessage(content=user_input)) return messagesmodel = ChatOpenAI()chat_model = RunnableParallel({"answer": (RunnableLambda(_format_to_messages) | model)})add_routes( app, chat_model.with_types(input_type=ChatHistory), config_keys=["configurable"], path="/chat",)
Example widget:
![](https://github.com/langchain-ai/langserve/assets/3205522/a71ff37b-a6a9-4857-a376-cf27c41d3ca4)
You can also specify a list of messages as your parameter directly, as shown in this snippet:
prompt = ChatPromptTemplate.from_messages( [ ("system", "You are a helpful assisstant named Cob."), MessagesPlaceholder(variable_name="messages"), ])chain = prompt | ChatAnthropic(model="claude-2")class MessageListInput(BaseModel): """Input for the chat endpoint.""" messages: List[Union[HumanMessage, AIMessage]] = Field( ..., description="The chat messages representing the current conversation.", extra={"widget": {"type": "chat", "input": "messages"}}, )add_routes( app, chain.with_types(input_type=MessageListInput), path="/chat",)
See [this sample file](https://github.com/langchain-ai/langserve/tree/main/examples/widgets/chat/message_list/server.py) for an example.
### Enabling / Disabling Endpoints (LangServe >=0.0.33)[](#enabling--disabling-endpoints-langserve-0033 "Direct link to Enabling / Disabling Endpoints (LangServe >=0.0.33)")
You can enable / disable which endpoints are exposed when adding routes for a given chain.
Use `enabled_endpoints` if you want to make sure to never get a new endpoint when upgrading langserve to a newer version.
Enable: The code below will only enable `invoke`, `batch` and the corresponding `config_hash` endpoint variants.
add_routes(app, chain, enabled_endpoints=["invoke", "batch", "config_hashes"], path="/mychain")
Disable: The code below will disable the playground for the chain
add_routes(app, chain, disabled_endpoints=["playground"], path="/mychain")
https://python.langchain.com/v0.2/docs/how_to/chat_token_usage_tracking/
How to track token usage in ChatModels
======================================
Prerequisites
This guide assumes familiarity with the following concepts:
* [Chat models](/v0.2/docs/concepts/#chat-models)
Tracking token usage to calculate cost is an important part of putting your app in production. This guide goes over how to obtain this information from your LangChain model calls.
This guide requires `langchain-openai >= 0.1.8`.
%pip install --upgrade --quiet langchain langchain-openai
Using LangSmith[](#using-langsmith "Direct link to Using LangSmith")
---------------------------------------------------------------------
You can use [LangSmith](https://www.langchain.com/langsmith) to help track token usage in your LLM application. See the [LangSmith quick start guide](https://docs.smith.langchain.com/).
Using AIMessage.usage\_metadata[](#using-aimessageusage_metadata "Direct link to Using AIMessage.usage_metadata")
------------------------------------------------------------------------------------------------------------------
A number of model providers return token usage information as part of the chat generation response. When available, this information will be included on the `AIMessage` objects produced by the corresponding model.
LangChain `AIMessage` objects include a [usage\_metadata](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html#langchain_core.messages.ai.AIMessage.usage_metadata) attribute. When populated, this attribute will be a [UsageMetadata](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.UsageMetadata.html) dictionary with standard keys (e.g., `"input_tokens"` and `"output_tokens"`).
Examples:
**OpenAI**:
# # !pip install -qU langchain-openaifrom langchain_openai import ChatOpenAIllm = ChatOpenAI(model="gpt-3.5-turbo-0125")openai_response = llm.invoke("hello")openai_response.usage_metadata
**API Reference:**[ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html)
{'input_tokens': 8, 'output_tokens': 9, 'total_tokens': 17}
**Anthropic**:
# !pip install -qU langchain-anthropicfrom langchain_anthropic import ChatAnthropicllm = ChatAnthropic(model="claude-3-haiku-20240307")anthropic_response = llm.invoke("hello")anthropic_response.usage_metadata
**API Reference:**[ChatAnthropic](https://api.python.langchain.com/en/latest/chat_models/langchain_anthropic.chat_models.ChatAnthropic.html)
{'input_tokens': 8, 'output_tokens': 12, 'total_tokens': 20}
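Because both providers populate the same standard keys, usage can be aggregated uniformly across models, e.g.:

```python
# Sum token usage across the two responses above (17 + 20 = 37 total tokens).
responses = [openai_response, anthropic_response]
total_tokens = sum(r.usage_metadata["total_tokens"] for r in responses)
print(total_tokens)
```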
### Using AIMessage.response\_metadata[](#using-aimessageresponse_metadata "Direct link to Using AIMessage.response_metadata")
Metadata from the model response is also included in the AIMessage [response\_metadata](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html#langchain_core.messages.ai.AIMessage.response_metadata) attribute. These data are typically not standardized. Note that different providers adopt different conventions for representing token counts:
print(f'OpenAI: {openai_response.response_metadata["token_usage"]}\n')print(f'Anthropic: {anthropic_response.response_metadata["usage"]}')
OpenAI: {'completion_tokens': 9, 'prompt_tokens': 8, 'total_tokens': 17}Anthropic: {'input_tokens': 8, 'output_tokens': 12}
### Streaming[](#streaming "Direct link to Streaming")
Some providers support token count metadata in a streaming context.
#### OpenAI[](#openai "Direct link to OpenAI")
For example, OpenAI will return a message [chunk](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessageChunk.html) at the end of a stream with token usage information. This behavior is supported by `langchain-openai >= 0.1.8` and can be enabled by setting `stream_options={"include_usage": True}`.
note
By default, the last message chunk in a stream will include a `"finish_reason"` in the message's `response_metadata` attribute. If we include token usage in streaming mode, an additional chunk containing usage metadata will be added to the end of the stream, such that `"finish_reason"` appears on the second to last message chunk.
llm = ChatOpenAI(model="gpt-3.5-turbo-0125")aggregate = Nonefor chunk in llm.stream("hello", stream_options={"include_usage": True}): print(chunk) aggregate = chunk if aggregate is None else aggregate + chunk
content='' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'content='Hello' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'content='!' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'content=' How' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'content=' can' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'content=' I' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'content=' assist' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'content=' you' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'content=' today' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'content='?' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'content='' response_metadata={'finish_reason': 'stop'} id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'content='' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf' usage_metadata={'input_tokens': 8, 'output_tokens': 9, 'total_tokens': 17}
Note that the usage metadata will be included in the sum of the individual message chunks:
print(aggregate.content)print(aggregate.usage_metadata)
Hello! How can I assist you today?{'input_tokens': 8, 'output_tokens': 9, 'total_tokens': 17}
To disable streaming token counts for OpenAI, set `"include_usage"` to False in `stream_options`, or omit it from the parameters:
aggregate = Nonefor chunk in llm.stream("hello"): print(chunk)
content='' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'content='Hello' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'content='!' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'content=' How' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'content=' can' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'content=' I' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'content=' assist' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'content=' you' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'content=' today' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'content='?' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'content='' response_metadata={'finish_reason': 'stop'} id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'
You can also enable streaming token usage by setting `model_kwargs` when instantiating the chat model. This can be useful when incorporating chat models into LangChain [chains](/v0.2/docs/concepts/#langchain-expression-language-lcel): usage metadata can be monitored when [streaming intermediate steps](/v0.2/docs/how_to/streaming/#using-stream-events) or using tracing software such as [LangSmith](https://docs.smith.langchain.com/).
See the below example, where we return output structured to a desired schema, but can still observe token usage streamed from intermediate steps.
from langchain_core.pydantic_v1 import BaseModel, Fieldclass Joke(BaseModel): """Joke to tell user.""" setup: str = Field(description="question to set up a joke") punchline: str = Field(description="answer to resolve the joke")llm = ChatOpenAI( model="gpt-3.5-turbo-0125", model_kwargs={"stream_options": {"include_usage": True}},)# Under the hood, .with_structured_output binds tools to the# chat model and appends a parser.structured_llm = llm.with_structured_output(Joke)async for event in structured_llm.astream_events("Tell me a joke", version="v2"): if event["event"] == "on_chat_model_end": print(f'Token usage: {event["data"]["output"].usage_metadata}\n') elif event["event"] == "on_chain_end": print(event["data"]["output"]) else: pass
Token usage: {'input_tokens': 79, 'output_tokens': 23, 'total_tokens': 102}setup='Why was the math book sad?' punchline='Because it had too many problems.'
Token usage is also visible in the corresponding [LangSmith trace](https://smith.langchain.com/public/fe6513d5-7212-4045-82e0-fefa28bc7656/r) in the payload from the chat model.
Using callbacks[](#using-callbacks "Direct link to Using callbacks")
---------------------------------------------------------------------
There are also some API-specific callback context managers that allow you to track token usage across multiple calls. It is currently only implemented for the OpenAI API and Bedrock Anthropic API.
### OpenAI[](#openai-1 "Direct link to OpenAI")
Let's first look at an extremely simple example of tracking token usage for a single Chat model call.
# !pip install -qU langchain-community wikipediafrom langchain_community.callbacks.manager import get_openai_callbackllm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)with get_openai_callback() as cb: result = llm.invoke("Tell me a joke") print(cb)
**API Reference:**[get\_openai\_callback](https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.manager.get_openai_callback.html)
Tokens Used: 27
	Prompt Tokens: 11
	Completion Tokens: 16
Successful Requests: 1
Total Cost (USD): $2.95e-05
Anything inside the context manager will get tracked. Here's an example of using it to track multiple calls in sequence.
with get_openai_callback() as cb: result = llm.invoke("Tell me a joke") result2 = llm.invoke("Tell me a joke") print(cb.total_tokens)
55
note
Cost information is currently not available in streaming mode. This is because model names are not propagated through chunks in streaming mode, and the model name is used to look up the correct pricing. Token counts, however, are available:
with get_openai_callback() as cb: for chunk in llm.stream("Tell me a joke", stream_options={"include_usage": True}): pass print(cb.total_tokens)
28
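Since cost is not computed automatically in streaming mode, you can estimate it yourself from the tracked token counts if you know your model's pricing. Below is a minimal sketch using placeholder per-token rates; the prices are illustrative only, so check your provider's current pricing before relying on the result.

```python
# Illustrative prices per 1K tokens -- substitute your model's actual rates.
PROMPT_PRICE_PER_1K = 0.0005
COMPLETION_PRICE_PER_1K = 0.0015

with get_openai_callback() as cb:
    for chunk in llm.stream("Tell me a joke", stream_options={"include_usage": True}):
        pass

# The callback still records token counts even though cost is unavailable.
estimated_cost = (
    cb.prompt_tokens / 1000 * PROMPT_PRICE_PER_1K
    + cb.completion_tokens / 1000 * COMPLETION_PRICE_PER_1K
)
print(f"Estimated cost (USD): ${estimated_cost:.6f}")
```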
If a chain or agent with multiple steps in it is used, it will track all those steps.
from langchain.agents import AgentExecutor, create_tool_calling_agent, load_toolsfrom langchain_core.prompts import ChatPromptTemplateprompt = ChatPromptTemplate.from_messages( [ ("system", "You're a helpful assistant"), ("human", "{input}"), ("placeholder", "{agent_scratchpad}"), ])tools = load_tools(["wikipedia"])agent = create_tool_calling_agent(llm, tools, prompt)agent_executor = AgentExecutor( agent=agent, tools=tools, verbose=True, stream_runnable=False)
**API Reference:**[AgentExecutor](https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.AgentExecutor.html) | [create\_tool\_calling\_agent](https://api.python.langchain.com/en/latest/agents/langchain.agents.tool_calling_agent.base.create_tool_calling_agent.html) | [load\_tools](https://api.python.langchain.com/en/latest/agent_toolkits/langchain_community.agent_toolkits.load_tools.load_tools.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html)
note
We have to set `stream_runnable=False` for cost information, as described above. By default, the `AgentExecutor` will stream the underlying agent so that you can get the most granular results when streaming events via `AgentExecutor.stream_events`.
with get_openai_callback() as cb: response = agent_executor.invoke( { "input": "What's a hummingbird's scientific name and what's the fastest bird species?" } ) print(f"Total Tokens: {cb.total_tokens}") print(f"Prompt Tokens: {cb.prompt_tokens}") print(f"Completion Tokens: {cb.completion_tokens}") print(f"Total Cost (USD): ${cb.total_cost}")
[1m> Entering new AgentExecutor chain...[0m[32;1m[1;3mInvoking: `wikipedia` with `{'query': 'hummingbird scientific name'}`[0m[36;1m[1;3mPage: HummingbirdSummary: Hummingbirds are birds native to the Americas and comprise the biological family Trochilidae. With approximately 366 species and 113 genera, they occur from Alaska to Tierra del Fuego, but most species are found in Central and South America. As of 2024, 21 hummingbird species are listed as endangered or critically endangered, with numerous species declining in population.Hummingbirds have varied specialized characteristics to enable rapid, maneuverable flight: exceptional metabolic capacity, adaptations to high altitude, sensitive visual and communication abilities, and long-distance migration in some species. Among all birds, male hummingbirds have the widest diversity of plumage color, particularly in blues, greens, and purples. Hummingbirds are the smallest mature birds, measuring 7.5–13 cm (3–5 in) in length. The smallest is the 5 cm (2.0 in) bee hummingbird, which weighs less than 2.0 g (0.07 oz), and the largest is the 23 cm (9 in) giant hummingbird, weighing 18–24 grams (0.63–0.85 oz). Noted for long beaks, hummingbirds are specialized for feeding on flower nectar, but all species also consume small insects.They are known as hummingbirds because of the humming sound created by their beating wings, which flap at high frequencies audible to other birds and humans. They hover at rapid wing-flapping rates, which vary from around 12 beats per second in the largest species to 80 per second in small hummingbirds.Hummingbirds have the highest mass-specific metabolic rate of any homeothermic animal. To conserve energy when food is scarce and at night when not foraging, they can enter torpor, a state similar to hibernation, and slow their metabolic rate to 1⁄15 of its normal rate. While most hummingbirds do not migrate, the rufous hummingbird has one of the longest migrations among birds, traveling twice per year between Alaska and Mexico, a distance of about 3,900 miles (6,300 km).Hummingbirds split from their sister group, the swifts and treeswifts, around 42 million years ago. The oldest known fossil hummingbird is Eurotrochilus, from the Rupelian Stage of Early Oligocene Europe.Page: Rufous hummingbirdSummary: The rufous hummingbird (Selasphorus rufus) is a small hummingbird, about 8 cm (3.1 in) long with a long, straight and slender bill. These birds are known for their extraordinary flight skills, flying 2,000 mi (3,200 km) during their migratory transits. It is one of nine species in the genus Selasphorus.Page: Anna's hummingbirdSummary: Anna's hummingbird (Calypte anna) is a North American species of hummingbird. It was named after Anna Masséna, Duchess of Rivoli.It is native to western coastal regions of North America. In the early 20th century, Anna's hummingbirds bred only in northern Baja California and Southern California. The transplanting of exotic ornamental plants in residential areas throughout the Pacific coast and inland deserts provided expanded nectar and nesting sites, allowing the species to expand its breeding range. Year-round residence of Anna's hummingbirds in the Pacific Northwest is an example of ecological release dependent on acclimation to colder winter temperatures, introduced plants, and human provision of nectar feeders during winter.These birds feed on nectar from flowers using a long extendable tongue. 
They also consume small insects and other arthropods caught in flight or gleaned from vegetation.[0m[32;1m[1;3mInvoking: `wikipedia` with `{'query': 'fastest bird species'}`[0m[36;1m[1;3mPage: List of birds by flight speedSummary: This is a list of the fastest flying birds in the world. A bird's velocity is necessarily variable; a hunting bird will reach much greater speeds while diving to catch prey than when flying horizontally. The bird that can achieve the greatest airspeed is the peregrine falcon (Falco peregrinus), able to exceed 320 km/h (200 mph) in its dives. A close relative of the common swift, the white-throated needletail (Hirundapus caudacutus), is commonly reported as the fastest bird in level flight with a reported top speed of 169 km/h (105 mph). This record remains unconfirmed as the measurement methods have never been published or verified. The record for the fastest confirmed level flight by a bird is 111.5 km/h (69.3 mph) held by the common swift.Page: Fastest animalsSummary: This is a list of the fastest animals in the world, by types of animal.Page: FalconSummary: Falcons () are birds of prey in the genus Falco, which includes about 40 species. Falcons are widely distributed on all continents of the world except Antarctica, though closely related raptors did occur there in the Eocene.Adult falcons have thin, tapered wings, which enable them to fly at high speed and change direction rapidly. Fledgling falcons, in their first year of flying, have longer flight feathers, which make their configuration more like that of a general-purpose bird such as a broad wing. This makes flying easier while learning the exceptional skills required to be effective hunters as adults.The falcons are the largest genus in the Falconinae subfamily of Falconidae, which itself also includes another subfamily comprising caracaras and a few other species. All these birds kill with their beaks, using a tomial "tooth" on the side of their beaks—unlike the hawks, eagles, and other birds of prey in the Accipitridae, which use their feet.The largest falcon is the gyrfalcon at up to 65 cm in length. The smallest falcon species is the pygmy falcon, which measures just 20 cm. As with hawks and owls, falcons exhibit sexual dimorphism, with the females typically larger than the males, thus allowing a wider range of prey species.Some small falcons with long, narrow wings are called "hobbies" and some which hover while hunting are called "kestrels".As is the case with many birds of prey, falcons have exceptional powers of vision; the visual acuity of one species has been measured at 2.6 times that of a normal human. Peregrine falcons have been recorded diving at speeds of 320 km/h (200 mph), making them the fastest-moving creatures on Earth; the fastest recorded dive attained a vertical speed of 390 km/h (240 mph).[0m[32;1m[1;3mThe scientific name for a hummingbird is Trochilidae. The fastest bird species is the peregrine falcon (Falco peregrinus), which can exceed speeds of 320 km/h (200 mph) in its dives.[0m[1m> Finished chain.[0mTotal Tokens: 1787Prompt Tokens: 1687Completion Tokens: 100Total Cost (USD): $0.0009935
### Bedrock Anthropic[](#bedrock-anthropic "Direct link to Bedrock Anthropic")
The `get_bedrock_anthropic_callback` works very similarly:
# !pip install langchain-awsfrom langchain_aws import ChatBedrockfrom langchain_community.callbacks.manager import get_bedrock_anthropic_callbackllm = ChatBedrock(model_id="anthropic.claude-v2")with get_bedrock_anthropic_callback() as cb: result = llm.invoke("Tell me a joke") result2 = llm.invoke("Tell me a joke") print(cb)
**API Reference:**[get\_bedrock\_anthropic\_callback](https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.manager.get_bedrock_anthropic_callback.html)
Tokens Used: 96
	Prompt Tokens: 26
	Completion Tokens: 70
Successful Requests: 2
Total Cost (USD): $0.001888
Next steps[](#next-steps "Direct link to Next steps")
------------------------------------------------------
You've now seen a few examples of how to track token usage for supported providers.
Next, check out the other how-to guides chat models in this section, like [how to get a model to return structured output](/v0.2/docs/how_to/structured_output/) or [how to add caching to your chat models](/v0.2/docs/how_to/chat_model_caching/).
| null
https://python.langchain.com/v0.2/docs/how_to/query_constructing_filters/ | * [](/v0.2/)
* [How-to guides](/v0.2/docs/how_to/)
* How to construct filters for query analysis
How to construct filters for query analysis
===========================================
We may want to do query analysis to extract filters to pass into retrievers. One way to do this is to have the LLM represent the filters as a Pydantic model. The question then becomes how to convert that Pydantic model into a filter that can be passed into a retriever.
This can be done manually, but LangChain also provides some "Translators" that are able to translate from a common syntax into filters specific to each retriever. Here, we will cover how to use those translators.
from typing import Optionalfrom langchain.chains.query_constructor.ir import ( Comparator, Comparison, Operation, Operator, StructuredQuery,)from langchain.retrievers.self_query.chroma import ChromaTranslatorfrom langchain.retrievers.self_query.elasticsearch import ElasticsearchTranslatorfrom langchain_core.pydantic_v1 import BaseModel
**API Reference:**[Comparator](https://api.python.langchain.com/en/latest/structured_query/langchain_core.structured_query.Comparator.html) | [Comparison](https://api.python.langchain.com/en/latest/structured_query/langchain_core.structured_query.Comparison.html) | [Operation](https://api.python.langchain.com/en/latest/structured_query/langchain_core.structured_query.Operation.html) | [Operator](https://api.python.langchain.com/en/latest/structured_query/langchain_core.structured_query.Operator.html) | [StructuredQuery](https://api.python.langchain.com/en/latest/structured_query/langchain_core.structured_query.StructuredQuery.html) | [ChromaTranslator](https://api.python.langchain.com/en/latest/query_constructors/langchain_community.query_constructors.chroma.ChromaTranslator.html) | [ElasticsearchTranslator](https://api.python.langchain.com/en/latest/query_constructors/langchain_community.query_constructors.elasticsearch.ElasticsearchTranslator.html)
In this example, `start_year` and `author` are both attributes to filter on.
class Search(BaseModel): query: str start_year: Optional[int] author: Optional[str]
search_query = Search(query="RAG", start_year=2022, author="LangChain")
def construct_comparisons(query: Search): comparisons = [] if query.start_year is not None: comparisons.append( Comparison( comparator=Comparator.GT, attribute="start_year", value=query.start_year, ) ) if query.author is not None: comparisons.append( Comparison( comparator=Comparator.EQ, attribute="author", value=query.author, ) ) return comparisons
comparisons = construct_comparisons(search_query)
_filter = Operation(operator=Operator.AND, arguments=comparisons)
ElasticsearchTranslator().visit_operation(_filter)
{'bool': {'must': [{'range': {'metadata.start_year': {'gt': 2022}}}, {'term': {'metadata.author.keyword': 'LangChain'}}]}}
ChromaTranslator().visit_operation(_filter)
{'$and': [{'start_year': {'$gt': 2022}}, {'author': {'$eq': 'LangChain'}}]}
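Once translated, the filter can be passed to the matching vector store. Below is a minimal sketch for Chroma, assuming you have a `vectorstore` built over documents whose metadata contains `start_year` and `author` fields (the variable name is illustrative):

```python
chroma_filter = ChromaTranslator().visit_operation(_filter)

# Restrict similarity search to documents whose metadata matches the filter.
retriever = vectorstore.as_retriever(search_kwargs={"filter": chroma_filter})
docs = retriever.invoke(search_query.query)
```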
| null
https://python.langchain.com/v0.2/docs/versions/release_policy/ | * [](/v0.2/)
* Versions
* Release Policy
On this page
LangChain releases
==================
The LangChain ecosystem is composed of different component packages (e.g., `langchain-core`, `langchain`, `langchain-community`, `langgraph`, `langserve`, partner packages, etc.).
Versioning[](#versioning "Direct link to Versioning")
------------------------------------------------------
### `langchain` and `langchain-core`[](#langchain-and-langchain-core "Direct link to langchain-and-langchain-core")
`langchain` and `langchain-core` follow [semantic versioning](https://semver.org/) in the format of 0.**Y**.**Z**. The packages are under rapid development, and so are currently versioning the packages with a major version of 0.
Minor version increases will occur for:
* Breaking changes for any public interfaces not marked as `beta`.
Patch version increases will occur for:
* Bug fixes
* New features
* Any changes to private interfaces
* Any changes to `beta` features
When upgrading between minor versions, users should review the list of breaking changes and deprecations.
From time to time, we will version packages as **release candidates**. These are versions that are intended to be released as stable versions, but we want to get feedback from the community before doing so. Release candidates will be versioned as 0.**Y**.**Z**rc**N**. For example, 0.2.0rc1. If no issues are found, the release candidate will be released as a stable version with the same version number. If issues are found, we will release a new release candidate with an incremented `N` value (e.g., 0.2.0rc2).
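If you need to reason about these version strings programmatically (for example, in CI checks), the `packaging` library implements the PEP 440 ordering that pip uses; a small illustration:

```python
from packaging.version import Version

# A release candidate sorts *before* the final release it precedes,
# and successive release candidates sort in order of their N suffix.
assert Version("0.2.0rc1") < Version("0.2.0")
assert Version("0.2.0rc1") < Version("0.2.0rc2")
```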
### Other packages in the langchain ecosystem[](#other-packages-in-the-langchain-ecosystem "Direct link to Other packages in the langchain ecosystem")
Other packages in the ecosystem (including user packages) can follow a different versioning scheme, but are generally expected to pin to specific minor versions of `langchain` and `langchain-core`.
Release cadence[](#release-cadence "Direct link to Release cadence")
---------------------------------------------------------------------
We expect to space out **minor** releases (e.g., from 0.2.0 to 0.3.0) of `langchain` and `langchain-core` by at least 2-3 months, as such releases may contain breaking changes.
Patch versions are released frequently as they contain bug fixes and new features.
API stability[](#api-stability "Direct link to API stability")
---------------------------------------------------------------
The development of LLM applications is a rapidly evolving field, and we are constantly learning from our users and the community. As such, we expect that the APIs in `langchain` and `langchain-core` will continue to evolve to better serve the needs of our users.
Even though both `langchain` and `langchain-core` are currently in a pre-1.0 state, we are committed to maintaining API stability in these packages.
* Breaking changes to the public API will result in a minor version bump (the second digit)
* Any bug fixes or new features will result in a patch version bump (the third digit)
We will generally try to avoid making unnecessary changes, and will provide a deprecation policy for features that are being removed.
### Stability of other packages[](#stability-of-other-packages "Direct link to Stability of other packages")
The stability of other packages in the LangChain ecosystem may vary:
* `langchain-community` is a community maintained package that contains 3rd party integrations. While we do our best to review and test changes in `langchain-community`, `langchain-community` is expected to experience more breaking changes than `langchain` and `langchain-core` as it contains many community contributions.
* Partner packages may follow different stability and versioning policies, and users should refer to the documentation of those packages for more information; however, in general these packages are expected to be stable.
### What is a "API stability"?[](#what-is-a-api-stability "Direct link to What is a \"API stability\"?")
API stability means:
* All the public APIs (everything in this documentation) will not be moved or renamed without providing backwards-compatible aliases.
* If new features are added to these APIs – which is quite possible – they will not break or change the meaning of existing methods. In other words, "stable" does not (necessarily) mean "complete."
* If, for some reason, an API declared stable must be removed or replaced, it will be declared deprecated but will remain in the API for at least two minor releases. Warnings will be issued when the deprecated method is called.
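If you want deprecated usage to surface loudly, for example in a test suite, Python's standard `warnings` machinery can turn deprecation warnings into errors. A minimal sketch, where `run_code_under_test` is a stand-in for whatever exercises your chains, and the exact warning class LangChain emits may vary by version (it typically derives from `DeprecationWarning`):

```python
import warnings

# Escalate DeprecationWarning (which LangChain's deprecation warnings
# typically subclass) into errors so deprecated API usage fails fast.
with warnings.catch_warnings():
    warnings.simplefilter("error", DeprecationWarning)
    run_code_under_test()  # hypothetical function exercising your chains
```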
### **APIs marked as internal**[](#apis-marked-as-internal "Direct link to apis-marked-as-internal")
Certain APIs are explicitly marked as “internal” in a couple of ways:
* Some documentation refers to internals and mentions them as such. If the documentation says that something is internal, it may change.
* Functions, methods, and other objects prefixed by a leading underscore (**`_`**). This is the standard Python convention of indicating that something is private; if any method starts with a single **`_`**, it’s an internal API.
* **Exception:** Certain methods are prefixed with `_` , but do not contain an implementation. These methods are _meant_ to be overridden by sub-classes that provide the implementation. Such methods are generally part of the **Public API** of LangChain.
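To illustrate that exception with a made-up class (not a real LangChain API): a base class may expose a stable public method that delegates to an underscore-prefixed hook intended for subclasses to override.

```python
class BaseTranslator:
    def translate(self, text: str) -> str:
        # Stable public entry point.
        return self._translate(text)

    def _translate(self, text: str) -> str:
        # Despite the leading underscore, this hook has no implementation and
        # is meant to be overridden by subclasses, so it is effectively part
        # of the public extension surface.
        raise NotImplementedError
```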
Deprecation policy[](#deprecation-policy "Direct link to Deprecation policy")
------------------------------------------------------------------------------
We will generally avoid deprecating features until a better alternative is available.
When a feature is deprecated, it will continue to work in the current and next minor version of `langchain` and `langchain-core`. After that, the feature will be removed.
Since we're expecting to space out minor releases by at least 2-3 months, this means that a feature can be removed within 2-6 months of being deprecated.
In some situations, we may allow the feature to remain in the code base for longer periods of time, if it's not causing issues in the packages, to reduce the burden on users.
| null
https://python.langchain.com/v0.2/docs/how_to/configure/ | * [](/v0.2/)
* [How-to guides](/v0.2/docs/how_to/)
* How to configure runtime chain internals
On this page
How to configure runtime chain internals
========================================
Prerequisites
This guide assumes familiarity with the following concepts:
* [LangChain Expression Language (LCEL)](/v0.2/docs/concepts/#langchain-expression-language)
* [Chaining runnables](/v0.2/docs/how_to/sequence/)
* [Binding runtime arguments](/v0.2/docs/how_to/binding/)
Sometimes you may want to experiment with, or even expose to the end user, multiple different ways of doing things within your chains. This can include tweaking parameters such as temperature or even swapping out one model for another. In order to make this experience as easy as possible, we have defined two methods.
* A `configurable_fields` method. This lets you configure particular fields of a runnable.
* This is related to the [`.bind`](/v0.2/docs/how_to/binding/) method on runnables, but allows you to specify parameters for a given step in a chain at runtime rather than specifying them beforehand.
* A `configurable_alternatives` method. With this method, you can list out alternatives for any particular runnable that can be set during runtime, and swap them for those specified alternatives.
Configurable Fields[](#configurable-fields "Direct link to Configurable Fields")
---------------------------------------------------------------------------------
Let's walk through an example that configures chat model fields like temperature at runtime:
%pip install --upgrade --quiet langchain langchain-openaiimport osfrom getpass import getpassos.environ["OPENAI_API_KEY"] = getpass()
[33mWARNING: You are using pip version 22.0.4; however, version 24.0 is available.You should consider upgrading via the '/Users/jacoblee/.pyenv/versions/3.10.5/bin/python -m pip install --upgrade pip' command.[0m[33m[0mNote: you may need to restart the kernel to use updated packages.
from langchain_core.prompts import PromptTemplatefrom langchain_core.runnables import ConfigurableFieldfrom langchain_openai import ChatOpenAImodel = ChatOpenAI(temperature=0).configurable_fields( temperature=ConfigurableField( id="llm_temperature", name="LLM Temperature", description="The temperature of the LLM", ))model.invoke("pick a random number")
**API Reference:**[PromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.prompt.PromptTemplate.html) | [ConfigurableField](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.utils.ConfigurableField.html) | [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html)
AIMessage(content='17', response_metadata={'token_usage': {'completion_tokens': 1, 'prompt_tokens': 11, 'total_tokens': 12}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-ba26a0da-0a69-4533-ab7f-21178a73d303-0')
Above, we defined `temperature` as a [`ConfigurableField`](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.utils.ConfigurableField.html#langchain_core.runnables.utils.ConfigurableField) that we can set at runtime. To do so, we use the [`with_config`](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.with_config) method like this:
model.with_config(configurable={"llm_temperature": 0.9}).invoke("pick a random number")
AIMessage(content='12', response_metadata={'token_usage': {'completion_tokens': 1, 'prompt_tokens': 11, 'total_tokens': 12}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-ba8422ad-be77-4cb1-ac45-ad0aae74e3d9-0')
Note that the passed `llm_temperature` entry in the dict has the same key as the `id` of the `ConfigurableField`.
We can also do this to affect just one step that's part of a chain:
prompt = PromptTemplate.from_template("Pick a random number above {x}")chain = prompt | modelchain.invoke({"x": 0})
AIMessage(content='27', response_metadata={'token_usage': {'completion_tokens': 1, 'prompt_tokens': 14, 'total_tokens': 15}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-ecd4cadd-1b72-4f92-b9a0-15e08091f537-0')
chain.with_config(configurable={"llm_temperature": 0.9}).invoke({"x": 0})
AIMessage(content='35', response_metadata={'token_usage': {'completion_tokens': 1, 'prompt_tokens': 14, 'total_tokens': 15}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-a916602b-3460-46d3-a4a8-7c926ec747c0-0')
### With HubRunnables[](#with-hubrunnables "Direct link to With HubRunnables")
This is useful for allowing you to switch out prompts at runtime:
from langchain.runnables.hub import HubRunnableprompt = HubRunnable("rlm/rag-prompt").configurable_fields( owner_repo_commit=ConfigurableField( id="hub_commit", name="Hub Commit", description="The Hub commit to pull from", ))prompt.invoke({"question": "foo", "context": "bar"})
**API Reference:**[HubRunnable](https://api.python.langchain.com/en/latest/runnables/langchain.runnables.hub.HubRunnable.html)
ChatPromptValue(messages=[HumanMessage(content="You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.\nQuestion: foo \nContext: bar \nAnswer:")])
prompt.with_config(configurable={"hub_commit": "rlm/rag-prompt-llama"}).invoke( {"question": "foo", "context": "bar"})
ChatPromptValue(messages=[HumanMessage(content="[INST]<<SYS>> You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.<</SYS>> \nQuestion: foo \nContext: bar \nAnswer: [/INST]")])
Configurable Alternatives[](#configurable-alternatives "Direct link to Configurable Alternatives")
---------------------------------------------------------------------------------------------------
The `configurable_alternatives()` method allows us to swap out steps in a chain with an alternative. Below, we swap out one chat model for another:
%pip install --upgrade --quiet langchain-anthropicimport osfrom getpass import getpassos.environ["ANTHROPIC_API_KEY"] = getpass()
[33mWARNING: You are using pip version 22.0.4; however, version 24.0 is available.You should consider upgrading via the '/Users/jacoblee/.pyenv/versions/3.10.5/bin/python -m pip install --upgrade pip' command.[0m[33m[0mNote: you may need to restart the kernel to use updated packages.
from langchain_anthropic import ChatAnthropicfrom langchain_core.prompts import PromptTemplatefrom langchain_core.runnables import ConfigurableFieldfrom langchain_openai import ChatOpenAIllm = ChatAnthropic( model="claude-3-haiku-20240307", temperature=0).configurable_alternatives( # This gives this field an id # When configuring the end runnable, we can then use this id to configure this field ConfigurableField(id="llm"), # This sets a default_key. # If we specify this key, the default LLM (ChatAnthropic initialized above) will be used default_key="anthropic", # This adds a new option, with name `openai` that is equal to `ChatOpenAI()` openai=ChatOpenAI(), # This adds a new option, with name `gpt4` that is equal to `ChatOpenAI(model="gpt-4")` gpt4=ChatOpenAI(model="gpt-4"), # You can add more configuration options here)prompt = PromptTemplate.from_template("Tell me a joke about {topic}")chain = prompt | llm# By default it will call Anthropicchain.invoke({"topic": "bears"})
**API Reference:**[ChatAnthropic](https://api.python.langchain.com/en/latest/chat_models/langchain_anthropic.chat_models.ChatAnthropic.html) | [PromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.prompt.PromptTemplate.html) | [ConfigurableField](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.utils.ConfigurableField.html) | [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html)
AIMessage(content="Here's a bear joke for you:\n\nWhy don't bears wear socks? \nBecause they have bear feet!\n\nHow's that? I tried to come up with a simple, silly pun-based joke about bears. Puns and wordplay are a common way to create humorous bear jokes. Let me know if you'd like to hear another one!", response_metadata={'id': 'msg_018edUHh5fUbWdiimhrC3dZD', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 13, 'output_tokens': 80}}, id='run-775bc58c-28d7-4e6b-a268-48fa6661f02f-0')
# We can use `.with_config(configurable={"llm": "openai"})` to specify an llm to usechain.with_config(configurable={"llm": "openai"}).invoke({"topic": "bears"})
AIMessage(content="Why don't bears like fast food?\n\nBecause they can't catch it!", response_metadata={'token_usage': {'completion_tokens': 15, 'prompt_tokens': 13, 'total_tokens': 28}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-7bdaa992-19c9-4f0d-9a0c-1f326bc992d4-0')
# If we use the `default_key` then it uses the defaultchain.with_config(configurable={"llm": "anthropic"}).invoke({"topic": "bears"})
AIMessage(content="Here's a bear joke for you:\n\nWhy don't bears wear socks? \nBecause they have bear feet!\n\nHow's that? I tried to come up with a simple, silly pun-based joke about bears. Puns and wordplay are a common way to create humorous bear jokes. Let me know if you'd like to hear another one!", response_metadata={'id': 'msg_01BZvbmnEPGBtcxRWETCHkct', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 13, 'output_tokens': 80}}, id='run-59b6ee44-a1cd-41b8-a026-28ee67cdd718-0')
### With Prompts[](#with-prompts "Direct link to With Prompts")
We can do a similar thing, but alternate between prompts:
llm = ChatAnthropic(model="claude-3-haiku-20240307", temperature=0)prompt = PromptTemplate.from_template( "Tell me a joke about {topic}").configurable_alternatives( # This gives this field an id # When configuring the end runnable, we can then use this id to configure this field ConfigurableField(id="prompt"), # This sets a default_key. # If we specify this key, the default LLM (ChatAnthropic initialized above) will be used default_key="joke", # This adds a new option, with name `poem` poem=PromptTemplate.from_template("Write a short poem about {topic}"), # You can add more configuration options here)chain = prompt | llm# By default it will write a jokechain.invoke({"topic": "bears"})
AIMessage(content="Here's a bear joke for you:\n\nWhy don't bears wear socks? \nBecause they have bear feet!", response_metadata={'id': 'msg_01DtM1cssjNFZYgeS3gMZ49H', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 13, 'output_tokens': 28}}, id='run-8199af7d-ea31-443d-b064-483693f2e0a1-0')
# We can configure it write a poemchain.with_config(configurable={"prompt": "poem"}).invoke({"topic": "bears"})
AIMessage(content="Here is a short poem about bears:\n\nMajestic bears, strong and true,\nRoaming the forests, wild and free.\nPowerful paws, fur soft and brown,\nCommanding respect, nature's crown.\n\nForaging for berries, fishing streams,\nProtecting their young, fierce and keen.\nMighty bears, a sight to behold,\nGuardians of the wilderness, untold.\n\nIn the wild they reign supreme,\nEmbodying nature's grand theme.\nBears, a symbol of strength and grace,\nCaptivating all who see their face.", response_metadata={'id': 'msg_01Wck3qPxrjURtutvtodaJFn', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 13, 'output_tokens': 134}}, id='run-69414a1e-51d7-4bec-a307-b34b7d61025e-0')
### With Prompts and LLMs[](#with-prompts-and-llms "Direct link to With Prompts and LLMs")
We can also have multiple things configurable! Here's an example doing that with both prompts and LLMs.
llm = ChatAnthropic( model="claude-3-haiku-20240307", temperature=0).configurable_alternatives( # This gives this field an id # When configuring the end runnable, we can then use this id to configure this field ConfigurableField(id="llm"), # This sets a default_key. # If we specify this key, the default LLM (ChatAnthropic initialized above) will be used default_key="anthropic", # This adds a new option, with name `openai` that is equal to `ChatOpenAI()` openai=ChatOpenAI(), # This adds a new option, with name `gpt4` that is equal to `ChatOpenAI(model="gpt-4")` gpt4=ChatOpenAI(model="gpt-4"), # You can add more configuration options here)prompt = PromptTemplate.from_template( "Tell me a joke about {topic}").configurable_alternatives( # This gives this field an id # When configuring the end runnable, we can then use this id to configure this field ConfigurableField(id="prompt"), # This sets a default_key. # If we specify this key, the default LLM (ChatAnthropic initialized above) will be used default_key="joke", # This adds a new option, with name `poem` poem=PromptTemplate.from_template("Write a short poem about {topic}"), # You can add more configuration options here)chain = prompt | llm# We can configure it write a poem with OpenAIchain.with_config(configurable={"prompt": "poem", "llm": "openai"}).invoke( {"topic": "bears"})
AIMessage(content="In the forest deep and wide,\nBears roam with grace and pride.\nWith fur as dark as night,\nThey rule the land with all their might.\n\nIn winter's chill, they hibernate,\nIn spring they emerge, hungry and great.\nWith claws sharp and eyes so keen,\nThey hunt for food, fierce and lean.\n\nBut beneath their tough exterior,\nLies a gentle heart, warm and superior.\nThey love their cubs with all their might,\nProtecting them through day and night.\n\nSo let us admire these majestic creatures,\nIn awe of their strength and features.\nFor in the wild, they reign supreme,\nThe mighty bears, a timeless dream.", response_metadata={'token_usage': {'completion_tokens': 133, 'prompt_tokens': 13, 'total_tokens': 146}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-5eec0b96-d580-49fd-ac4e-e32a0803b49b-0')
# We can always just configure only one if we wantchain.with_config(configurable={"llm": "openai"}).invoke({"topic": "bears"})
AIMessage(content="Why don't bears wear shoes?\n\nBecause they have bear feet!", response_metadata={'token_usage': {'completion_tokens': 13, 'prompt_tokens': 13, 'total_tokens': 26}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-c1b14c9c-4988-49b8-9363-15bfd479973a-0')
### Saving configurations[](#saving-configurations "Direct link to Saving configurations")
We can also easily save configured chains as their own objects:
openai_joke = chain.with_config(configurable={"llm": "openai"})openai_joke.invoke({"topic": "bears"})
AIMessage(content="Why did the bear break up with his girlfriend? \nBecause he couldn't bear the relationship anymore!", response_metadata={'token_usage': {'completion_tokens': 20, 'prompt_tokens': 13, 'total_tokens': 33}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-391ebd55-9137-458b-9a11-97acaff6a892-0')
Next steps[](#next-steps "Direct link to Next steps")
------------------------------------------------------
You now know how to configure a chain's internal steps at runtime.
To learn more, see the other how-to guides on runnables in this section, including:
* Using [.bind()](/v0.2/docs/how_to/binding/) as a simpler way to set a runnable's runtime parameters
| null
https://python.langchain.com/v0.2/docs/versions/v0_2/ | * [](/v0.2/)
* Versions
* v0.2
On this page
LangChain v0.2
==============
LangChain v0.2 was released in May 2024. This release includes a number of [breaking changes and deprecations](/v0.2/docs/versions/v0_2/deprecations/). This document contains a guide on upgrading to 0.2.x.
Reference
* [Breaking Changes & Deprecations](/v0.2/docs/versions/v0_2/deprecations/)
* [Migrating to Astream Events v2](/v0.2/docs/versions/v0_2/migrating_astream_events/)
Migration
=========
This documentation will help you upgrade your code to LangChain `0.2.x`. To prepare for migration, we first recommend you take the following steps:
1. Install the 0.2.x versions of langchain-core and langchain, and upgrade to recent versions of any other packages that you may be using (e.g. langgraph, langchain-community, langchain-openai, etc.).
2. Verify that your code runs properly with the new packages (e.g., unit tests pass).
3. Install a recent version of `langchain-cli` , and use the tool to replace old imports used by your code with the new imports. (See instructions below.)
4. Manually resolve any remaining deprecation warnings.
5. Re-run unit tests.
6. If you are using `astream_events`, please review how to [migrate to astream events v2](/v0.2/docs/versions/v0_2/migrating_astream_events/).
Upgrade to new imports[](#upgrade-to-new-imports "Direct link to Upgrade to new imports")
------------------------------------------------------------------------------------------
We created a tool to help migrate your code. This tool is still in **beta** and may not cover all cases, but we hope that it will help you migrate your code more quickly.
The migration script has the following limitations:
1. It’s limited to helping users move from old imports to new imports. It does not help address other deprecations.
2. It can’t handle imports that involve `as` .
3. New imports are always placed in global scope, even if the old import that was replaced was located inside some local scope (e.g., a function body).
4. It will likely miss some deprecated imports.
Here is an example of the import changes that the migration script can help apply automatically:
| From Package | To Package | Deprecated Import | New Import |
| --- | --- | --- | --- |
| langchain | langchain-community | `from langchain.vectorstores import InMemoryVectorStore` | `from langchain_community.vectorstores import InMemoryVectorStore` |
| langchain-community | langchain-openai | `from langchain_community.chat_models import ChatOpenAI` | `from langchain_openai import ChatOpenAI` |
| langchain-community | langchain-core | `from langchain_community.document_loaders import Blob` | `from langchain_core.document_loaders import Blob` |
| langchain | langchain-core | `from langchain.schema.document import Document` | `from langchain_core.documents import Document` |
| langchain | langchain-text-splitters | `from langchain.text_splitter import RecursiveCharacterTextSplitter` | `from langchain_text_splitters import RecursiveCharacterTextSplitter` |
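Imports that use an `as` alias (limitation 2 above) are not rewritten automatically, so they need a manual edit; for example (illustrative):

```python
# Before -- skipped by the migration script because of the alias:
# from langchain.chat_models import ChatOpenAI as OpenAIChat

# After -- updated by hand:
from langchain_openai import ChatOpenAI as OpenAIChat
```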
Installation[](#installation "Direct link to Installation")
------------------------------------------------------------
pip install langchain-cli
langchain-cli --version # <-- Make sure the version is at least 0.0.22
Usage[](#usage "Direct link to Usage")
---------------------------------------
Given that the migration script is not perfect, you should make sure you have a backup of your code first (e.g., using version control like `git`).
You will need to run the migration script **twice** as it only applies one import replacement per run.
For example, say your code still uses `from langchain.chat_models import ChatOpenAI`:
After the first run, you’ll get: `from langchain_community.chat_models import ChatOpenAI`.

After the second run, you’ll get: `from langchain_openai import ChatOpenAI`.
# Run a first time
# Will replace from langchain.chat_models import ChatOpenAI
langchain-cli migrate --diff [path to code] # Preview
langchain-cli migrate [path to code] # Apply

# Run a second time to apply more import replacements
langchain-cli migrate --diff [path to code] # Preview
langchain-cli migrate [path to code] # Apply
### Other options[](#other-options "Direct link to Other options")
# See help menu
langchain-cli migrate --help

# Preview changes without applying
langchain-cli migrate --diff [path to code]

# Run on code including ipython notebooks
# Apply all import updates except for updates from langchain to langchain-core
langchain-cli migrate --disable langchain_to_core --include-ipynb [path to code]
| null
https://python.langchain.com/v0.2/docs/how_to/chatbots_tools/ | * [](/v0.2/)
* [How-to guides](/v0.2/docs/how_to/)
* How to add tools to chatbots
On this page
How to add tools to chatbots
============================
Prerequisites
This guide assumes familiarity with the following concepts:
* [Chatbots](/v0.2/docs/concepts/#messages)
* [Agents](/v0.2/docs/tutorials/agents/)
* [Chat history](/v0.2/docs/concepts/#chat-history)
This section will cover how to create conversational agents: chatbots that can interact with other systems and APIs using tools.
Setup[](#setup "Direct link to Setup")
---------------------------------------
For this guide, we'll be using a [tool calling agent](/v0.2/docs/how_to/agent_executor/) with a single tool for searching the web. The default will be powered by [Tavily](/v0.2/docs/integrations/tools/tavily_search/), but you can switch it out for any similar tool. The rest of this section will assume you're using Tavily.
You'll need to [sign up for an account](https://tavily.com/) on the Tavily website, and install the following packages:
%pip install --upgrade --quiet langchain-community langchain-openai tavily-python# Set env var OPENAI_API_KEY or load from a .env file:import dotenvdotenv.load_dotenv()
You will also need your OpenAI key set as `OPENAI_API_KEY` and your Tavily API key set as `TAVILY_API_KEY`.
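If you prefer to set these interactively rather than through a `.env` file, one minimal approach:

```python
import getpass
import os

# Prompt only for keys that are not already present in the environment.
for key in ("OPENAI_API_KEY", "TAVILY_API_KEY"):
    if key not in os.environ:
        os.environ[key] = getpass.getpass(f"{key}: ")
```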
Creating an agent[](#creating-an-agent "Direct link to Creating an agent")
---------------------------------------------------------------------------
Our end goal is to create an agent that can respond conversationally to user questions while looking up information as needed.
First, let's initialize Tavily and an OpenAI chat model capable of tool calling:
from langchain_community.tools.tavily_search import TavilySearchResultsfrom langchain_openai import ChatOpenAItools = [TavilySearchResults(max_results=1)]# Choose the LLM that will drive the agent# Only certain models support thischat = ChatOpenAI(model="gpt-3.5-turbo-1106", temperature=0)
**API Reference:**[TavilySearchResults](https://api.python.langchain.com/en/latest/tools/langchain_community.tools.tavily_search.tool.TavilySearchResults.html) | [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html)
To make our agent conversational, we must also choose a prompt with a placeholder for our chat history. Here's an example:
from langchain_core.prompts import ChatPromptTemplate# Adapted from https://smith.langchain.com/hub/jacob/tool-calling-agentprompt = ChatPromptTemplate.from_messages( [ ( "system", "You are a helpful assistant. You may not need to use tools for every query - the user may just want to chat!", ), ("placeholder", "{messages}"), ("placeholder", "{agent_scratchpad}"), ])
**API Reference:**[ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html)
Great! Now let's assemble our agent:
from langchain.agents import AgentExecutor, create_tool_calling_agentagent = create_tool_calling_agent(chat, tools, prompt)agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
**API Reference:**[AgentExecutor](https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.AgentExecutor.html) | [create\_tool\_calling\_agent](https://api.python.langchain.com/en/latest/agents/langchain.agents.tool_calling_agent.base.create_tool_calling_agent.html)
Running the agent[](#running-the-agent "Direct link to Running the agent")
---------------------------------------------------------------------------
Now that we've set up our agent, let's try interacting with it! It can handle both trivial queries that require no lookup:
from langchain_core.messages import HumanMessageagent_executor.invoke({"messages": [HumanMessage(content="I'm Nemo!")]})
**API Reference:**[HumanMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.human.HumanMessage.html)
[1m> Entering new AgentExecutor chain...[0m[32;1m[1;3mHello Nemo! It's great to meet you. How can I assist you today?[0m[1m> Finished chain.[0m
{'messages': [HumanMessage(content="I'm Nemo!")], 'output': "Hello Nemo! It's great to meet you. How can I assist you today?"}
Or, it can use the passed search tool to get up-to-date information if needed:
agent_executor.invoke( { "messages": [ HumanMessage( content="What is the current conservation status of the Great Barrier Reef?" ) ], })
[1m> Entering new AgentExecutor chain...[0m[32;1m[1;3mInvoking: `tavily_search_results_json` with `{'query': 'current conservation status of the Great Barrier Reef'}`[0m[36;1m[1;3m[{'url': 'https://www.abc.net.au/news/2022-08-04/great-barrier-reef-report-says-coral-recovering-after-bleaching/101296186', 'content': 'Great Barrier Reef hit with widespread and severe bleaching event\n\'Devastating\': Over 90pc of reefs on Great Barrier Reef suffered bleaching over summer, report reveals\nTop Stories\nJailed Russian opposition leader Alexei Navalny is dead, says prison service\nTaylor Swift puts an Aussie twist on a classic as she packs the MCG for the biggest show of her career — as it happened\nMelbourne comes alive with Swifties, as even those without tickets turn up to soak in the atmosphere\nAustralian Border Force investigates after arrival of more than 20 men by boat north of Broome\nOpenAI launches video model that can instantly create short clips from text prompts\nAntoinette Lattouf loses bid to force ABC to produce emails calling for her dismissal\nCategory one cyclone makes landfall in Gulf of Carpentaria off NT-Queensland border\nWhy the RBA may be forced to cut before the Fed\nBrisbane records \'wettest day since 2022\', as woman dies in floodwaters near Mount Isa\n$45m Sydney beachside home once owned by late radio star is demolished less than a year after sale\nAnnabel Sutherland\'s historic double century puts Australia within reach of Test victory over South Africa\nAlmighty defensive effort delivers Indigenous victory in NRL All Stars clash\nLisa Wilkinson feared she would have to sell home to pay legal costs of Bruce Lehrmann\'s defamation case, court documents reveal\nSupermarkets as you know them are disappearing from our cities\nNRL issues Broncos\' Reynolds, Carrigan with breach notices after public scrap\nPopular Now\nJailed Russian opposition leader Alexei Navalny is dead, says prison service\nTaylor Swift puts an Aussie twist on a classic as she packs the MCG for the biggest show of her career — as it happened\n$45m Sydney beachside home once owned by late radio star is demolished less than a year after sale\nAustralian Border Force investigates after arrival of more than 20 men by boat north of Broome\nDealer sentenced for injecting children as young as 12 with methylamphetamine\nMelbourne comes alive with Swifties, as even those without tickets turn up to soak in the atmosphere\nTop Stories\nJailed Russian opposition leader Alexei Navalny is dead, says prison service\nTaylor Swift puts an Aussie twist on a classic as she packs the MCG for the biggest show of her career — as it happened\nMelbourne comes alive with Swifties, as even those without tickets turn up to soak in the atmosphere\nAustralian Border Force investigates after arrival of more than 20 men by boat north of Broome\nOpenAI launches video model that can instantly create short clips from text prompts\nJust In\nJailed Russian opposition leader Alexei Navalny is dead, says prison service\nMelbourne comes alive with Swifties, as even those without tickets turn up to soak in the atmosphere\nTraveller alert after one-year-old in Adelaide reported with measles\nAntoinette Lattouf loses bid to force ABC to produce emails calling for her dismissal\nFooter\nWe acknowledge Aboriginal and Torres Strait Islander peoples as the First Australians and Traditional Custodians of the lands where we live, learn, and work.\n Increased coral cover could come at a cost\nThe rapid growth in coral cover appears to have come 
at the expense of the diversity of coral on the reef, with most of the increases accounted for by fast-growing branching coral called Acropora.\n Documents obtained by the ABC under Freedom of Information laws revealed the Morrison government had forced AIMS to rush the report\'s release and orchestrated a "leak" of the material to select media outlets ahead of the reef being considered for inclusion on the World Heritage In Danger list.\n The reef\'s status and potential inclusion on the In Danger list were due to be discussed at the 45th session of the World Heritage Committee in Russia in June this year, but the meeting was indefinitely postponed due to the war in Ukraine.\n More from ABC\nEditorial Policies\nGreat Barrier Reef coral cover at record levels after mass-bleaching events, report shows\nGreat Barrier Reef coral cover at record levels after mass-bleaching events, report shows\nRecord coral cover is being seen across much of the Great Barrier Reef as it recovers from past storms and mass-bleaching events.'}][0m[32;1m[1;3mThe Great Barrier Reef is currently showing signs of recovery, with record coral cover being seen across much of the reef. This recovery comes after past storms and mass-bleaching events. However, the rapid growth in coral cover appears to have come at the expense of the diversity of coral on the reef, with most of the increases accounted for by fast-growing branching coral called Acropora. There were discussions about the reef's potential inclusion on the World Heritage In Danger list, but the meeting to consider this was indefinitely postponed due to the war in Ukraine.You can read more about it in this article: [Great Barrier Reef hit with widespread and severe bleaching event](https://www.abc.net.au/news/2022-08-04/great-barrier-reef-report-says-coral-recovering-after-bleaching/101296186)[0m[1m> Finished chain.[0m
{'messages': [HumanMessage(content='What is the current conservation status of the Great Barrier Reef?')], 'output': "The Great Barrier Reef is currently showing signs of recovery, with record coral cover being seen across much of the reef. This recovery comes after past storms and mass-bleaching events. However, the rapid growth in coral cover appears to have come at the expense of the diversity of coral on the reef, with most of the increases accounted for by fast-growing branching coral called Acropora. There were discussions about the reef's potential inclusion on the World Heritage In Danger list, but the meeting to consider this was indefinitely postponed due to the war in Ukraine.\n\nYou can read more about it in this article: [Great Barrier Reef hit with widespread and severe bleaching event](https://www.abc.net.au/news/2022-08-04/great-barrier-reef-report-says-coral-recovering-after-bleaching/101296186)"}
Conversational responses[](#conversational-responses "Direct link to Conversational responses")
------------------------------------------------------------------------------------------------
Because our prompt contains a placeholder for chat history messages, our agent can also take previous interactions into account and respond conversationally like a standard chatbot:
from langchain_core.messages import AIMessage, HumanMessageagent_executor.invoke( { "messages": [ HumanMessage(content="I'm Nemo!"), AIMessage(content="Hello Nemo! How can I assist you today?"), HumanMessage(content="What is my name?"), ], })
**API Reference:**[AIMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html) | [HumanMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.human.HumanMessage.html)
[1m> Entering new AgentExecutor chain...[0m[32;1m[1;3mYour name is Nemo![0m[1m> Finished chain.[0m
{'messages': [HumanMessage(content="I'm Nemo!"), AIMessage(content='Hello Nemo! How can I assist you today?'), HumanMessage(content='What is my name?')], 'output': 'Your name is Nemo!'}
If preferred, you can also wrap the agent executor in a [`RunnableWithMessageHistory`](/v0.2/docs/how_to/message_history/) class to internally manage history messages. Let's redeclare it this way:
agent = create_tool_calling_agent(chat, tools, prompt)agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
Then, because our agent executor has multiple outputs, we also have to set the `output_messages_key` property when initializing the wrapper:
from langchain_community.chat_message_histories import ChatMessageHistoryfrom langchain_core.runnables.history import RunnableWithMessageHistorydemo_ephemeral_chat_history_for_chain = ChatMessageHistory()conversational_agent_executor = RunnableWithMessageHistory( agent_executor, lambda session_id: demo_ephemeral_chat_history_for_chain, input_messages_key="messages", output_messages_key="output",)conversational_agent_executor.invoke( {"messages": [HumanMessage("I'm Nemo!")]}, {"configurable": {"session_id": "unused"}},)
**API Reference:**[ChatMessageHistory](https://api.python.langchain.com/en/latest/chat_history/langchain_core.chat_history.ChatMessageHistory.html) | [RunnableWithMessageHistory](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.history.RunnableWithMessageHistory.html)
> Entering new AgentExecutor chain... Hi Nemo! It's great to meet you. How can I assist you today? > Finished chain.
{'messages': [HumanMessage(content="I'm Nemo!")], 'output': "Hi Nemo! It's great to meet you. How can I assist you today?"}
And then if we rerun our wrapped agent executor:
conversational_agent_executor.invoke( {"messages": [HumanMessage("What is my name?")]}, {"configurable": {"session_id": "unused"}},)
> Entering new AgentExecutor chain... Your name is Nemo! How can I assist you today, Nemo? > Finished chain.
{'messages': [HumanMessage(content="I'm Nemo!"), AIMessage(content="Hi Nemo! It's great to meet you. How can I assist you today?"), HumanMessage(content='What is my name?')], 'output': 'Your name is Nemo! How can I assist you today, Nemo?'}
This [LangSmith trace](https://smith.langchain.com/public/1a9f712a-7918-4661-b3ff-d979bcc2af42/r) shows what's going on under the hood.
Further reading[](#further-reading "Direct link to Further reading")
---------------------------------------------------------------------
Other types of agents can also support conversational responses - for more, check out the [agents section](/v0.2/docs/tutorials/agents/).
For more on tool usage, you can also check out [this use case section](/v0.2/docs/how_to/#tools).
[Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/how_to/chatbots_tools.ipynb)
* * *
#### Was this page helpful?
#### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E).
[
Previous
How to track token usage in ChatModels
](/v0.2/docs/how_to/chat_token_usage_tracking/)[
Next
How to split code
](/v0.2/docs/how_to/code_splitter/)
* [Setup](#setup)
* [Creating an agent](#creating-an-agent)
* [Running the agent](#running-the-agent)
* [Conversational responses](#conversational-responses)
* [Further reading](#further-reading)
https://python.langchain.com/v0.2/docs/how_to/code_splitter/
* [](/v0.2/)
* [How-to guides](/v0.2/docs/how_to/)
* How to split code
On this page
How to split code
=================
[RecursiveCharacterTextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.RecursiveCharacterTextSplitter.html) includes pre-built lists of separators that are useful for splitting text in a specific programming language.
Supported languages are stored in the `langchain_text_splitters.Language` enum. They include:
"cpp","go","java","kotlin","js","ts","php","proto","python","rst","ruby","rust","scala","swift","markdown","latex","html","sol","csharp","cobol","c","lua","perl","haskell"
To view the list of separators for a given language, pass a value from this enum into `RecursiveCharacterTextSplitter.get_separators_for_language`.
To instantiate a splitter that is tailored for a specific language, pass a value from the enum into `RecursiveCharacterTextSplitter.from_language`.
Below we demonstrate examples for the various languages.
%pip install -qU langchain-text-splitters
from langchain_text_splitters import ( Language, RecursiveCharacterTextSplitter,)
**API Reference:**[Language](https://api.python.langchain.com/en/latest/base/langchain_text_splitters.base.Language.html) | [RecursiveCharacterTextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.RecursiveCharacterTextSplitter.html)
To view the full list of supported languages:
[e.value for e in Language]
['cpp', 'go', 'java', 'kotlin', 'js', 'ts', 'php', 'proto', 'python', 'rst', 'ruby', 'rust', 'scala', 'swift', 'markdown', 'latex', 'html', 'sol', 'csharp', 'cobol', 'c', 'lua', 'perl', 'haskell']
You can also see the separators used for a given language:
RecursiveCharacterTextSplitter.get_separators_for_language(Language.PYTHON)
['\nclass ', '\ndef ', '\n\tdef ', '\n\n', '\n', ' ', '']
Python[](#python "Direct link to Python")
------------------------------------------
Here's an example using the PythonTextSplitter:
PYTHON_CODE = """def hello_world(): print("Hello, World!")# Call the functionhello_world()"""python_splitter = RecursiveCharacterTextSplitter.from_language( language=Language.PYTHON, chunk_size=50, chunk_overlap=0)python_docs = python_splitter.create_documents([PYTHON_CODE])python_docs
[Document(page_content='def hello_world():\n print("Hello, World!")'), Document(page_content='# Call the function\nhello_world()')]
JS[](#js "Direct link to JS")
------------------------------
Here's an example using the JS text splitter:
JS_CODE = """function helloWorld() { console.log("Hello, World!");}// Call the functionhelloWorld();"""js_splitter = RecursiveCharacterTextSplitter.from_language( language=Language.JS, chunk_size=60, chunk_overlap=0)js_docs = js_splitter.create_documents([JS_CODE])js_docs
[Document(page_content='function helloWorld() {\n console.log("Hello, World!");\n}'), Document(page_content='// Call the function\nhelloWorld();')]
TS[](#ts "Direct link to TS")
------------------------------
Here's an example using the TS text splitter:
TS_CODE = """function helloWorld(): void { console.log("Hello, World!");}// Call the functionhelloWorld();"""ts_splitter = RecursiveCharacterTextSplitter.from_language( language=Language.TS, chunk_size=60, chunk_overlap=0)ts_docs = ts_splitter.create_documents([TS_CODE])ts_docs
[Document(page_content='function helloWorld(): void {'), Document(page_content='console.log("Hello, World!");\n}'), Document(page_content='// Call the function\nhelloWorld();')]
Markdown[](#markdown "Direct link to Markdown")
------------------------------------------------
Here's an example using the Markdown text splitter:
markdown_text = """# 🦜️🔗 LangChain⚡ Building applications with LLMs through composability ⚡## Quick Install```bash# Hopefully this code block isn't splitpip install langchain```As an open-source project in a rapidly developing field, we are extremely open to contributions. """
md_splitter = RecursiveCharacterTextSplitter.from_language( language=Language.MARKDOWN, chunk_size=60, chunk_overlap=0)md_docs = md_splitter.create_documents([markdown_text])md_docs
[Document(page_content='# 🦜️🔗 LangChain'), Document(page_content='⚡ Building applications with LLMs through composability ⚡'), Document(page_content='## Quick Install\n\n```bash'), Document(page_content="# Hopefully this code block isn't split"), Document(page_content='pip install langchain'), Document(page_content='```'), Document(page_content='As an open-source project in a rapidly developing field, we'), Document(page_content='are extremely open to contributions.')]
Latex[](#latex "Direct link to Latex")
---------------------------------------
Here's an example using the LaTeX text splitter:
latex_text = """\documentclass{article}\begin{document}\maketitle\section{Introduction}Large language models (LLMs) are a type of machine learning model that can be trained on vast amounts of text data to generate human-like language. In recent years, LLMs have made significant advances in a variety of natural language processing tasks, including language translation, text generation, and sentiment analysis.\subsection{History of LLMs}The earliest LLMs were developed in the 1980s and 1990s, but they were limited by the amount of data that could be processed and the computational power available at the time. In the past decade, however, advances in hardware and software have made it possible to train LLMs on massive datasets, leading to significant improvements in performance.\subsection{Applications of LLMs}LLMs have many applications in industry, including chatbots, content creation, and virtual assistants. They can also be used in academia for research in linguistics, psychology, and computational linguistics.\end{document}"""
latex_splitter = RecursiveCharacterTextSplitter.from_language( language=Language.LATEX, chunk_size=60, chunk_overlap=0)latex_docs = latex_splitter.create_documents([latex_text])latex_docs
[Document(page_content='\\documentclass{article}\n\n\x08egin{document}\n\n\\maketitle'), Document(page_content='\\section{Introduction}'), Document(page_content='Large language models (LLMs) are a type of machine learning'), Document(page_content='model that can be trained on vast amounts of text data to'), Document(page_content='generate human-like language. In recent years, LLMs have'), Document(page_content='made significant advances in a variety of natural language'), Document(page_content='processing tasks, including language translation, text'), Document(page_content='generation, and sentiment analysis.'), Document(page_content='\\subsection{History of LLMs}'), Document(page_content='The earliest LLMs were developed in the 1980s and 1990s,'), Document(page_content='but they were limited by the amount of data that could be'), Document(page_content='processed and the computational power available at the'), Document(page_content='time. In the past decade, however, advances in hardware and'), Document(page_content='software have made it possible to train LLMs on massive'), Document(page_content='datasets, leading to significant improvements in'), Document(page_content='performance.'), Document(page_content='\\subsection{Applications of LLMs}'), Document(page_content='LLMs have many applications in industry, including'), Document(page_content='chatbots, content creation, and virtual assistants. They'), Document(page_content='can also be used in academia for research in linguistics,'), Document(page_content='psychology, and computational linguistics.'), Document(page_content='\\end{document}')]
HTML[](#html "Direct link to HTML")
------------------------------------
Here's an example using an HTML text splitter:
html_text = """<!DOCTYPE html><html> <head> <title>🦜️🔗 LangChain</title> <style> body { font-family: Arial, sans-serif; } h1 { color: darkblue; } </style> </head> <body> <div> <h1>🦜️🔗 LangChain</h1> <p>⚡ Building applications with LLMs through composability ⚡</p> </div> <div> As an open-source project in a rapidly developing field, we are extremely open to contributions. </div> </body></html>"""
html_splitter = RecursiveCharacterTextSplitter.from_language( language=Language.HTML, chunk_size=60, chunk_overlap=0)html_docs = html_splitter.create_documents([html_text])html_docs
[Document(page_content='<!DOCTYPE html>\n<html>'), Document(page_content='<head>\n <title>🦜️🔗 LangChain</title>'), Document(page_content='<style>\n body {\n font-family: Aria'), Document(page_content='l, sans-serif;\n }\n h1 {'), Document(page_content='color: darkblue;\n }\n </style>\n </head'), Document(page_content='>'), Document(page_content='<body>'), Document(page_content='<div>\n <h1>🦜️🔗 LangChain</h1>'), Document(page_content='<p>⚡ Building applications with LLMs through composability ⚡'), Document(page_content='</p>\n </div>'), Document(page_content='<div>\n As an open-source project in a rapidly dev'), Document(page_content='eloping field, we are extremely open to contributions.'), Document(page_content='</div>\n </body>\n</html>')]
Solidity[](#solidity "Direct link to Solidity")
------------------------------------------------
Here's an example using the Solidity text splitter:
SOL_CODE = """pragma solidity ^0.8.20;contract HelloWorld { function add(uint a, uint b) pure public returns(uint) { return a + b; }}"""sol_splitter = RecursiveCharacterTextSplitter.from_language( language=Language.SOL, chunk_size=128, chunk_overlap=0)sol_docs = sol_splitter.create_documents([SOL_CODE])sol_docs
[Document(page_content='pragma solidity ^0.8.20;'), Document(page_content='contract HelloWorld {\n function add(uint a, uint b) pure public returns(uint) {\n return a + b;\n }\n}')]
C#[](#c "Direct link to C#")
-----------------------------
Here's an example using the C# text splitter:
C_CODE = """using System;class Program{ static void Main() { int age = 30; // Change the age value as needed // Categorize the age without any console output if (age < 18) { // Age is under 18 } else if (age >= 18 && age < 65) { // Age is an adult } else { // Age is a senior citizen } }}"""c_splitter = RecursiveCharacterTextSplitter.from_language( language=Language.CSHARP, chunk_size=128, chunk_overlap=0)c_docs = c_splitter.create_documents([C_CODE])c_docs
[Document(page_content='using System;'), Document(page_content='class Program\n{\n static void Main()\n {\n int age = 30; // Change the age value as needed'), Document(page_content='// Categorize the age without any console output\n if (age < 18)\n {\n // Age is under 18'), Document(page_content='}\n else if (age >= 18 && age < 65)\n {\n // Age is an adult\n }\n else\n {'), Document(page_content='// Age is a senior citizen\n }\n }\n}')]
Haskell[](#haskell "Direct link to Haskell")
---------------------------------------------
Here's an example using the Haskell text splitter:
HASKELL_CODE = """main :: IO ()main = do putStrLn "Hello, World!"-- Some sample functionsadd :: Int -> Int -> Intadd x y = x + y"""haskell_splitter = RecursiveCharacterTextSplitter.from_language( language=Language.HASKELL, chunk_size=50, chunk_overlap=0)haskell_docs = haskell_splitter.create_documents([HASKELL_CODE])haskell_docs
[Document(page_content='main :: IO ()'), Document(page_content='main = do\n putStrLn "Hello, World!"\n-- Some'), Document(page_content='sample functions\nadd :: Int -> Int -> Int\nadd x y'), Document(page_content='= x + y')]
PHP[](#php "Direct link to PHP")
---------------------------------
Here's an example using the PHP text splitter:
PHP_CODE = """<?phpnamespace foo;class Hello { public function __construct() { }}function hello() { echo "Hello World!";}interface Human { public function breath();}trait Foo { }enum Color{ case Red; case Blue;}"""php_splitter = RecursiveCharacterTextSplitter.from_language( language=Language.PHP, chunk_size=50, chunk_overlap=0)php_docs = php_splitter.create_documents([PHP_CODE])php_docs
[Document(page_content='<?php\nnamespace foo;'), Document(page_content='class Hello {'), Document(page_content='public function __construct() { }\n}'), Document(page_content='function hello() {\n echo "Hello World!";\n}'), Document(page_content='interface Human {\n public function breath();\n}'), Document(page_content='trait Foo { }\nenum Color\n{\n case Red;'), Document(page_content='case Blue;\n}')]
[Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/how_to/code_splitter.ipynb)
* * *
#### Was this page helpful?
#### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E).
[
Previous
How to add tools to chatbots
](/v0.2/docs/how_to/chatbots_tools/)[
Next
How to do retrieval with contextual compression
](/v0.2/docs/how_to/contextual_compression/)
* [Python](#python)
* [JS](#js)
* [TS](#ts)
* [Markdown](#markdown)
* [Latex](#latex)
* [HTML](#html)
* [Solidity](#solidity)
* [C#](#c)
* [Haskell](#haskell)
* [PHP](#php)
https://python.langchain.com/v0.2/docs/versions/v0_2/migrating_astream_events/
* [](/v0.2/)
* Versions
* [v0.2](/v0.2/docs/versions/v0_2/)
* astream\_events v2
On this page
Migrating to Astream Events v2
==============================
danger
This migration guide is a work in progress and is not complete. Please wait to migrate astream\_events.
We've added a `v2` of the astream\_events API with the release of `0.2.0`. You can see this [PR](https://github.com/langchain-ai/langchain/pull/21638) for more details.
The `v2` version is a re-write of the `v1` version, and should be more efficient, with more consistent output for the events. The `v1` version of the API will be deprecated in favor of the `v2` version and will be removed in `0.4.0`.
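As a hedged, minimal sketch of opting in (the prompt and model below are illustrative choices, not part of this guide), you pass `version="v2"` when calling `astream_events`:

```python
# Minimal sketch: stream v2 events from a simple chain and print the simplified
# on_chat_model_end payload described below. Assumes langchain-openai is installed
# and OPENAI_API_KEY is set.
import asyncio

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages([("human", "{question}")])
chain = prompt | ChatOpenAI(model="gpt-3.5-turbo-0125")


async def main() -> None:
    async for event in chain.astream_events({"question": "hello!"}, version="v2"):
        if event["event"] == "on_chat_model_end":
            # In v2 this is always the simple AIMessage representation,
            # even when the chat model runs inside a chain.
            print(event["data"]["output"])


asyncio.run(main())
```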
Below is a list of changes between the `v1` and `v2` versions of the API.
### output for `on_chat_model_end`[](#output-for-on_chat_model_end "Direct link to output-for-on_chat_model_end")
In `v1`, the outputs associated with `on_chat_model_end` changed depending on whether the chat model was run as a root level runnable or as part of a chain.
As a root level runnable the output was:
"data": {"output": AIMessageChunk(content="hello world!", id='some id')}
As part of a chain the output was:
"data": { "output": { "generations": [ [ { "generation_info": None, "message": AIMessageChunk( content="hello world!", id=AnyStr() ), "text": "hello world!", "type": "ChatGenerationChunk", } ] ], "llm_output": None, } },
As of `v2`, the output will always be the simpler representation:
"data": {"output": AIMessageChunk(content="hello world!", id='some id')}
note
Non-chat models (i.e., regular LLMs) will continue to be associated with the more verbose format for now.
### output for `on_retriever_end`[](#output-for-on_retriever_end "Direct link to output-for-on_retriever_end")
`on_retriever_end` output will always return a list of `Documents`.
Before:
{ "data": { "output": [ Document(...), Document(...), ... ] }}
### Removed `on_retriever_stream`[](#removed-on_retriever_stream "Direct link to removed-on_retriever_stream")
The `on_retriever_stream` event was an artifact of the implementation and has been removed.
Full information associated with the event is already available in the `on_retriever_end` event.
Please use `on_retriever_end` instead.
### Removed `on_tool_stream`[](#removed-on_tool_stream "Direct link to removed-on_tool_stream")
The `on_tool_stream` event was an artifact of the implementation and has been removed.
Full information associated with the event is already available in the `on_tool_end` event.
Please use `on_tool_end` instead.
### Propagating Names[](#propagating-names "Direct link to Propagating Names")
Names of runnables have been updated to be more consistent.
model = GenericFakeChatModel(messages=infinite_cycle).configurable_fields( messages=ConfigurableField( id="messages", name="Messages", description="Messages return by the LLM", ))
In `v1`, the event name was `RunnableConfigurableFields`.
In `v2`, the event name is `GenericFakeChatModel`.
If you're filtering by event names, check if you need to update your filters.
### RunnableRetry[](#runnableretry "Direct link to RunnableRetry")
Usage of [RunnableRetry](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.retry.RunnableRetry.html) within an LCEL chain being streamed generated an incorrect `on_chain_end` event in `v1` corresponding to the failed runnable invocation that was being retried. This event has been removed in `v2`.
No action is required for this change.
[Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/versions/v0_2/migrating_astream_events.mdx)
* * *
#### Was this page helpful?
#### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E).
[
Previous
LangChain v0.2
](/v0.2/docs/versions/v0_2/)[
Next
Changes
](/v0.2/docs/versions/v0_2/deprecations/)
* [output for `on_chat_model_end`](#output-for-on_chat_model_end)
* [output for `on_retriever_end`](#output-for-on_retriever_end)
* [Removed `on_retriever_stream`](#removed-on_retriever_stream)
* [Removed `on_tool_stream`](#removed-on_tool_stream)
* [Propagating Names](#propagating-names)
* [RunnableRetry](#runnableretry)
https://python.langchain.com/v0.2/docs/how_to/query_high_cardinality/
* [](/v0.2/)
* [How-to guides](/v0.2/docs/how_to/)
* How to deal with high-cardinality categoricals when doing query analysis
On this page
How to deal with high-cardinality categoricals when doing query analysis
=========================================================================
You may want to do query analysis to create a filter on a categorical column. One of the difficulties here is that you usually need to specify the EXACT categorical value. The issue is you need to make sure the LLM generates that categorical value exactly. This can be done relatively easily with prompting when there are only a few valid values. When there are a high number of valid values, it becomes more difficult, as those values may not fit in the LLM context, or (if they do) there may be too many for the LLM to properly attend to.
In this notebook we take a look at how to approach this.
Setup[](#setup "Direct link to Setup")
---------------------------------------
#### Install dependencies[](#install-dependencies "Direct link to Install dependencies")
# %pip install -qU langchain langchain-community langchain-openai faker langchain-chroma
#### Set environment variables[](#set-environment-variables "Direct link to Set environment variables")
We'll use OpenAI in this example:
import getpassimport osos.environ["OPENAI_API_KEY"] = getpass.getpass()# Optional, uncomment to trace runs with LangSmith. Sign up here: https://smith.langchain.com.# os.environ["LANGCHAIN_TRACING_V2"] = "true"# os.environ["LANGCHAIN_API_KEY"] = getpass.getpass()
#### Set up data[](#set-up-data "Direct link to Set up data")
We will generate a bunch of fake names
from faker import Fakerfake = Faker()names = [fake.name() for _ in range(10000)]
Let's look at some of the names
names[0]
'Hayley Gonzalez'
names[567]
'Jesse Knight'
Query Analysis[](#query-analysis "Direct link to Query Analysis")
------------------------------------------------------------------
We can now set up a baseline query analysis
from langchain_core.pydantic_v1 import BaseModel, Field
class Search(BaseModel): query: str author: str
from langchain_core.prompts import ChatPromptTemplatefrom langchain_core.runnables import RunnablePassthroughfrom langchain_openai import ChatOpenAIsystem = """Generate a relevant search query for a library system"""prompt = ChatPromptTemplate.from_messages( [ ("system", system), ("human", "{question}"), ])llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)structured_llm = llm.with_structured_output(Search)query_analyzer = {"question": RunnablePassthrough()} | prompt | structured_llm
**API Reference:**[ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) | [RunnablePassthrough](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html) | [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html)
/Users/harrisonchase/workplace/langchain/libs/core/langchain_core/_api/beta_decorator.py:86: LangChainBetaWarning: The function `with_structured_output` is in beta. It is actively being worked on, so the API may change. warn_beta(
We can see that if we spell the name exactly correctly, it knows how to handle it
query_analyzer.invoke("what are books about aliens by Jesse Knight")
Search(query='books about aliens', author='Jesse Knight')
The issue is that the values you want to filter on may NOT be spelled exactly correctly
query_analyzer.invoke("what are books about aliens by jess knight")
Search(query='books about aliens', author='Jess Knight')
### Add in all values[](#add-in-all-values "Direct link to Add in all values")
One way around this is to add ALL possible values to the prompt. That will generally guide the query in the right direction
system = """Generate a relevant search query for a library system.`author` attribute MUST be one of:{authors}Do NOT hallucinate author name!"""base_prompt = ChatPromptTemplate.from_messages( [ ("system", system), ("human", "{question}"), ])prompt = base_prompt.partial(authors=", ".join(names))
query_analyzer_all = {"question": RunnablePassthrough()} | prompt | structured_llm
However... if the list of categoricals is long enough, it may error!
try: res = query_analyzer_all.invoke("what are books about aliens by jess knight")except Exception as e: print(e)
Error code: 400 - {'error': {'message': "This model's maximum context length is 16385 tokens. However, your messages resulted in 33885 tokens (33855 in the messages, 30 in the functions). Please reduce the length of the messages or functions.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
We can try to use a longer context window... but with so much information in there, it is not guaranteed to pick it up reliably.
llm_long = ChatOpenAI(model="gpt-4-turbo-preview", temperature=0)structured_llm_long = llm_long.with_structured_output(Search)query_analyzer_all = {"question": RunnablePassthrough()} | prompt | structured_llm_long
query_analyzer_all.invoke("what are books about aliens by jess knight")
Search(query='aliens', author='Kevin Knight')
### Find all relevant values[](#find-and-all-relevant-values "Direct link to Find all relevant values")
Instead, what we can do is create an index over the relevant values and then query that for the N most relevant values:
from langchain_chroma import Chromafrom langchain_openai import OpenAIEmbeddingsembeddings = OpenAIEmbeddings(model="text-embedding-3-small")vectorstore = Chroma.from_texts(names, embeddings, collection_name="author_names")
**API Reference:**[OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html)
def select_names(question): _docs = vectorstore.similarity_search(question, k=10) _names = [d.page_content for d in _docs] return ", ".join(_names)
create_prompt = { "question": RunnablePassthrough(), "authors": select_names,} | base_prompt
query_analyzer_select = create_prompt | structured_llm
create_prompt.invoke("what are books by jess knight")
ChatPromptValue(messages=[SystemMessage(content='Generate a relevant search query for a library system.\n\n`author` attribute MUST be one of:\n\nJesse Knight, Kelly Knight, Scott Knight, Richard Knight, Andrew Knight, Katherine Knight, Erica Knight, Ashley Knight, Becky Knight, Kevin Knight\n\nDo NOT hallucinate author name!'), HumanMessage(content='what are books by jess knight')])
query_analyzer_select.invoke("what are books about aliens by jess knight")
Search(query='books about aliens', author='Jesse Knight')
### Replace after selection[](#replace-after-selection "Direct link to Replace after selection")
Another method is to let the LLM fill in whatever value, but then convert that value to a valid value. This can actually be done with the Pydantic class itself!
from langchain_core.pydantic_v1 import validatorclass Search(BaseModel): query: str author: str @validator("author") def double(cls, v: str) -> str: return vectorstore.similarity_search(v, k=1)[0].page_content
system = """Generate a relevant search query for a library system"""prompt = ChatPromptTemplate.from_messages( [ ("system", system), ("human", "{question}"), ])corrective_structure_llm = llm.with_structured_output(Search)corrective_query_analyzer = ( {"question": RunnablePassthrough()} | prompt | corrective_structure_llm)
corrective_query_analyzer.invoke("what are books about aliens by jes knight")
Search(query='books about aliens', author='Jesse Knight')
# TODO: show trigram similarity
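As a hedged sketch of what that might look like (this is not from the original guide; it uses plain Python rather than any LangChain API), you could score each valid name by character-trigram overlap and substitute the best match, for example inside the validator above in place of the vectorstore lookup:

```python
# Sketch only: pick the valid author name whose character trigrams overlap most
# (by Jaccard similarity) with the value the LLM produced.
def trigrams(text: str) -> set:
    padded = f"  {text.lower()} "
    return {padded[i : i + 3] for i in range(len(padded) - 2)}


def closest_name(value: str, valid_names: list) -> str:
    """Return the valid name most similar to `value` by trigram overlap."""

    def score(candidate: str) -> float:
        a, b = trigrams(value), trigrams(candidate)
        return len(a & b) / len(a | b)

    return max(valid_names, key=score)


# Using the fake `names` list generated earlier; likely returns 'Jesse Knight'.
closest_name("jess knight", names)
```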
[Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/how_to/query_high_cardinality.ipynb)
* * *
#### Was this page helpful?
#### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E).
[
Previous
How to configure runtime chain internals
](/v0.2/docs/how_to/configure/)[
Next
Custom Document Loader
](/v0.2/docs/how_to/document_loader_custom/)
* [Setup](#setup)
* [Query Analysis](#query-analysis)
* [Add in all values](#add-in-all-values)
* [Find all relevant values](#find-and-all-relevant-values)
* [Replace after selection](#replace-after-selection)
https://python.langchain.com/v0.2/docs/versions/v0_2/deprecations/
* [](/v0.2/)
* Versions
* [v0.2](/v0.2/docs/versions/v0_2/)
* Changes
On this page
Deprecations and Breaking Changes
=================================
This page contains a list of deprecations and removals in the `langchain` and `langchain-core` packages.
New features and improvements are not listed here. See the [overview](/v0.2/docs/versions/overview/) for a summary of what's new in this release.
Breaking changes[](#breaking-changes "Direct link to Breaking changes")
------------------------------------------------------------------------
As of release 0.2.0, `langchain` is required to be integration-agnostic. This means that code in `langchain` should not by default instantiate any specific chat models, LLMs, embedding models, vectorstores, etc.; instead, the user will be required to specify those explicitly.
The following functions and classes require an explicit LLM to be passed as an argument:
* `langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreToolkit`
* `langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreRouterToolkit`
* `langchain.chains.openai_functions.get_openapi_chain`
* `langchain.chains.router.MultiRetrievalQAChain.from_retrievers`
* `langchain.indexes.VectorStoreIndexWrapper.query`
* `langchain.indexes.VectorStoreIndexWrapper.query_with_sources`
* `langchain.indexes.VectorStoreIndexWrapper.aquery_with_sources`
* `langchain.chains.flare.FlareChain`
The following classes now require passing an explicit Embedding model as an argument:
* `langchain.indexes.VectorstoreIndexCreator` (see the sketch below)
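For example, a hedged sketch of the new requirement (the embedding model here is an arbitrary choice for illustration):

```python
# As of 0.2.0, pass the embedding model explicitly rather than relying on a default.
# Assumes langchain and langchain-openai are installed and OPENAI_API_KEY is set.
from langchain.indexes import VectorstoreIndexCreator
from langchain_openai import OpenAIEmbeddings

index_creator = VectorstoreIndexCreator(embedding=OpenAIEmbeddings())
```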
The following code has been removed:
* `langchain.natbot.NatBotChain.from_default` removed in favor of the `from_llm` class method.
Behavior was changed for the following code:
### @tool decorator[](#tool-decorator "Direct link to @tool decorator")
`@tool` decorator now assigns the function doc-string as the tool description. Previously, the `@tool` decorator used to prepend the function signature.
Before 0.2.0:
@tooldef my_tool(x: str) -> str: """Some description.""" return "something"print(my_tool.description)
Would result in: `my_tool: (x: str) -> str - Some description.`
As of 0.2.0:
It will result in: `Some description.`
Code that moved to another package[](#code-that-moved-to-another-package "Direct link to Code that moved to another package")
------------------------------------------------------------------------------------------------------------------------------
If code was moved from `langchain` into another package (e.g., `langchain-community`), importing it from `langchain` will keep working, but the import will raise a deprecation warning. The warning will provide a replacement import statement.
python -c "from langchain.document_loaders.markdown import UnstructuredMarkdownLoader"
LangChainDeprecationWarning: Importing UnstructuredMarkdownLoader from langchain.document_loaders is deprecated. Please replace deprecated imports:>> from langchain.document_loaders import UnstructuredMarkdownLoaderwith new imports of:>> from langchain_community.document_loaders import UnstructuredMarkdownLoader
We will continue supporting the imports in `langchain` until release 0.4 as long as the relevant package where the code lives is installed. (e.g., as long as `langchain_community` is installed.)
However, we advise users not to rely on these imports and instead migrate to the new imports. To help with this process, we’re releasing a migration script via the LangChain CLI. See further instructions in the migration guide.
Code targeted for removal[](#code-targeted-for-removal "Direct link to Code targeted for removal")
---------------------------------------------------------------------------------------------------
This is code that has better alternatives available and will eventually be removed, so there’s only a single way to do things (e.g., the `predict_messages` method in ChatModels has been deprecated in favor of `invoke`).
### astream events V1[](#astream-events-v1 "Direct link to astream events V1")
If you are using `astream_events`, please review how to [migrate to astream events v2](/v0.2/docs/versions/v0_2/migrating_astream_events/).
### langchain\_core[](#langchain_core "Direct link to langchain_core")
#### try\_load\_from\_hub[](#try_load_from_hub "Direct link to try_load_from_hub")
In module: `utils.loading` Deprecated: 0.1.30 Removal: 0.3.0
Alternative: Using the hwchase17/langchain-hub repo for prompts is deprecated. Please use [https://smith.langchain.com/hub](https://smith.langchain.com/hub) instead.
#### BaseLanguageModel.predict[](#baselanguagemodelpredict "Direct link to BaseLanguageModel.predict")
In module: `language_models.base` Deprecated: 0.1.7 Removal: 0.3.0
Alternative: invoke
#### BaseLanguageModel.predict\_messages[](#baselanguagemodelpredict_messages "Direct link to BaseLanguageModel.predict_messages")
In module: `language_models.base` Deprecated: 0.1.7 Removal: 0.3.0
Alternative: invoke
#### BaseLanguageModel.apredict[](#baselanguagemodelapredict "Direct link to BaseLanguageModel.apredict")
In module: `language_models.base` Deprecated: 0.1.7 Removal: 0.3.0
Alternative: ainvoke
#### BaseLanguageModel.apredict\_messages[](#baselanguagemodelapredict_messages "Direct link to BaseLanguageModel.apredict_messages")
In module: `language_models.base` Deprecated: 0.1.7 Removal: 0.3.0
Alternative: ainvoke
#### RunTypeEnum[](#runtypeenum "Direct link to RunTypeEnum")
In module: `tracers.schemas` Deprecated: 0.1.0 Removal: 0.3.0
Alternative: Use string instead.
#### TracerSessionV1Base[](#tracersessionv1base "Direct link to TracerSessionV1Base")
In module: `tracers.schemas` Deprecated: 0.1.0 Removal: 0.3.0
Alternative:
#### TracerSessionV1Create[](#tracersessionv1create "Direct link to TracerSessionV1Create")
In module: `tracers.schemas` Deprecated: 0.1.0 Removal: 0.3.0
Alternative:
#### TracerSessionV1[](#tracersessionv1 "Direct link to TracerSessionV1")
In module: `tracers.schemas` Deprecated: 0.1.0 Removal: 0.3.0
Alternative:
#### TracerSessionBase[](#tracersessionbase "Direct link to TracerSessionBase")
In module: `tracers.schemas` Deprecated: 0.1.0 Removal: 0.3.0
Alternative:
#### TracerSession[](#tracersession "Direct link to TracerSession")
In module: `tracers.schemas` Deprecated: 0.1.0 Removal: 0.3.0
Alternative:
#### BaseRun[](#baserun "Direct link to BaseRun")
In module: `tracers.schemas` Deprecated: 0.1.0 Removal: 0.3.0
Alternative: Run
#### LLMRun[](#llmrun "Direct link to LLMRun")
In module: `tracers.schemas` Deprecated: 0.1.0 Removal: 0.3.0
Alternative: Run
#### ChainRun[](#chainrun "Direct link to ChainRun")
In module: `tracers.schemas` Deprecated: 0.1.0 Removal: 0.3.0
Alternative: Run
#### ToolRun[](#toolrun "Direct link to ToolRun")
In module: `tracers.schemas` Deprecated: 0.1.0 Removal: 0.3.0
Alternative: Run
#### BaseChatModel.**call**[](#basechatmodelcall "Direct link to basechatmodelcall")
In module: `language_models.chat_models` Deprecated: 0.1.7 Removal: 0.3.0
Alternative: invoke
#### BaseChatModel.call\_as\_llm[](#basechatmodelcall_as_llm "Direct link to BaseChatModel.call_as_llm")
In module: `language_models.chat_models` Deprecated: 0.1.7 Removal: 0.3.0
Alternative: invoke
#### BaseChatModel.predict[](#basechatmodelpredict "Direct link to BaseChatModel.predict")
In module: `language_models.chat_models` Deprecated: 0.1.7 Removal: 0.3.0
Alternative: invoke
#### BaseChatModel.predict\_messages[](#basechatmodelpredict_messages "Direct link to BaseChatModel.predict_messages")
In module: `language_models.chat_models` Deprecated: 0.1.7 Removal: 0.3.0
Alternative: invoke
#### BaseChatModel.apredict[](#basechatmodelapredict "Direct link to BaseChatModel.apredict")
In module: `language_models.chat_models` Deprecated: 0.1.7 Removal: 0.3.0
Alternative: ainvoke
#### BaseChatModel.apredict\_messages[](#basechatmodelapredict_messages "Direct link to BaseChatModel.apredict_messages")
In module: `language_models.chat_models` Deprecated: 0.1.7 Removal: 0.3.0
Alternative: ainvoke
#### BaseLLM.**call**[](#basellmcall "Direct link to basellmcall")
In module: `language_models.llms` Deprecated: 0.1.7 Removal: 0.3.0
Alternative: invoke
#### BaseLLM.predict[](#basellmpredict "Direct link to BaseLLM.predict")
In module: `language_models.llms` Deprecated: 0.1.7 Removal: 0.3.0
Alternative: invoke
#### BaseLLM.predict\_messages[](#basellmpredict_messages "Direct link to BaseLLM.predict_messages")
In module: `language_models.llms` Deprecated: 0.1.7 Removal: 0.3.0
Alternative: invoke
#### BaseLLM.apredict[](#basellmapredict "Direct link to BaseLLM.apredict")
In module: `language_models.llms` Deprecated: 0.1.7 Removal: 0.3.0
Alternative: ainvoke
#### BaseLLM.apredict\_messages[](#basellmapredict_messages "Direct link to BaseLLM.apredict_messages")
In module: `language_models.llms` Deprecated: 0.1.7 Removal: 0.3.0
Alternative: ainvoke
#### BaseRetriever.get\_relevant\_documents[](#baseretrieverget_relevant_documents "Direct link to BaseRetriever.get_relevant_documents")
In module: `retrievers` Deprecated: 0.1.46 Removal: 0.3.0
Alternative: invoke
#### BaseRetriever.aget\_relevant\_documents[](#baseretrieveraget_relevant_documents "Direct link to BaseRetriever.aget_relevant_documents")
In module: `retrievers` Deprecated: 0.1.46 Removal: 0.3.0
Alternative: ainvoke
#### ChatPromptTemplate.from\_role\_strings[](#chatprompttemplatefrom_role_strings "Direct link to ChatPromptTemplate.from_role_strings")
In module: `prompts.chat` Deprecated: 0.0.1 Removal:
Alternative: from\_messages classmethod
#### ChatPromptTemplate.from\_strings[](#chatprompttemplatefrom_strings "Direct link to ChatPromptTemplate.from_strings")
In module: `prompts.chat` Deprecated: 0.0.1 Removal:
Alternative: from\_messages classmethod
#### BaseTool.**call**[](#basetoolcall "Direct link to basetoolcall")
In module: `tools` Deprecated: 0.1.47 Removal: 0.3.0
Alternative: invoke
#### convert\_pydantic\_to\_openai\_function[](#convert_pydantic_to_openai_function "Direct link to convert_pydantic_to_openai_function")
In module: `utils.function_calling` Deprecated: 0.1.16 Removal: 0.3.0
Alternative: langchain\_core.utils.function\_calling.convert\_to\_openai\_function()
#### convert\_pydantic\_to\_openai\_tool[](#convert_pydantic_to_openai_tool "Direct link to convert_pydantic_to_openai_tool")
In module: `utils.function_calling` Deprecated: 0.1.16 Removal: 0.3.0
Alternative: langchain\_core.utils.function\_calling.convert\_to\_openai\_tool()
#### convert\_python\_function\_to\_openai\_function[](#convert_python_function_to_openai_function "Direct link to convert_python_function_to_openai_function")
In module: `utils.function_calling` Deprecated: 0.1.16 Removal: 0.3.0
Alternative: langchain\_core.utils.function\_calling.convert\_to\_openai\_function()
#### format\_tool\_to\_openai\_function[](#format_tool_to_openai_function "Direct link to format_tool_to_openai_function")
In module: `utils.function_calling` Deprecated: 0.1.16 Removal: 0.3.0
Alternative: langchain\_core.utils.function\_calling.convert\_to\_openai\_function()
#### format\_tool\_to\_openai\_tool[](#format_tool_to_openai_tool "Direct link to format_tool_to_openai_tool")
In module: `utils.function_calling` Deprecated: 0.1.16 Removal: 0.3.0
Alternative: langchain\_core.utils.function\_calling.convert\_to\_openai\_tool()
### langchain[](#langchain "Direct link to langchain")
#### AgentType[](#agenttype "Direct link to AgentType")
In module: `agents.agent_types` Deprecated: 0.1.0 Removal: 0.3.0
Alternative: Use [LangGraph](/v0.2/docs/how_to/migrate_agent/) or new agent constructor methods like create\_react\_agent, create\_json\_agent, create\_structured\_chat\_agent, etc.
#### Chain.**call**[](#chaincall "Direct link to chaincall")
In module: `chains.base` Deprecated: 0.1.0 Removal: 0.3.0
Alternative: invoke
#### Chain.acall[](#chainacall "Direct link to Chain.acall")
In module: `chains.base` Deprecated: 0.1.0 Removal: 0.3.0
Alternative: ainvoke
#### Chain.run[](#chainrun-1 "Direct link to Chain.run")
In module: `chains.base` Deprecated: 0.1.0 Removal: 0.3.0
Alternative: invoke
#### Chain.arun[](#chainarun "Direct link to Chain.arun")
In module: `chains.base` Deprecated: 0.1.0 Removal: 0.3.0
Alternative: ainvoke
#### Chain.apply[](#chainapply "Direct link to Chain.apply")
In module: `chains.base` Deprecated: 0.1.0 Removal: 0.3.0
Alternative: batch
#### LLMChain[](#llmchain "Direct link to LLMChain")
In module: `chains.llm` Deprecated: 0.1.17 Removal: 0.3.0
Alternative: [RunnableSequence](/v0.2/docs/how_to/sequence/), e.g., `prompt | llm`
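A hedged before/after sketch of this particular migration (the prompt and model are placeholders, not from this page):

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI  # assumes langchain-openai is installed

prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
llm = ChatOpenAI(model="gpt-3.5-turbo-0125")

# Before (deprecated):
# from langchain.chains import LLMChain
# chain = LLMChain(llm=llm, prompt=prompt)
# chain.run(topic="bears")

# After: compose a RunnableSequence directly.
chain = prompt | llm | StrOutputParser()
chain.invoke({"topic": "bears"})
```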
#### LLMSingleActionAgent[](#llmsingleactionagent "Direct link to LLMSingleActionAgent")
In module: `agents.agent` Deprecated: 0.1.0 Removal: 0.3.0
Alternative: Use [LangGraph](/v0.2/docs/how_to/migrate_agent/) or new agent constructor methods like create\_react\_agent, create\_json\_agent, create\_structured\_chat\_agent, etc.
#### Agent[](#agent "Direct link to Agent")
In module: `agents.agent` Deprecated: 0.1.0 Removal: 0.3.0
Alternative: Use [LangGraph](/v0.2/docs/how_to/migrate_agent/) or new agent constructor methods like create\_react\_agent, create\_json\_agent, create\_structured\_chat\_agent, etc.
#### OpenAIFunctionsAgent[](#openaifunctionsagent "Direct link to OpenAIFunctionsAgent")
In module: `agents.openai_functions_agent.base` Deprecated: 0.1.0 Removal: 0.3.0
Alternative: create\_openai\_functions\_agent
#### ZeroShotAgent[](#zeroshotagent "Direct link to ZeroShotAgent")
In module: `agents.mrkl.base` Deprecated: 0.1.0 Removal: 0.3.0
Alternative: create\_react\_agent
#### MRKLChain[](#mrklchain "Direct link to MRKLChain")
In module: `agents.mrkl.base` Deprecated: 0.1.0 Removal: 0.3.0
Alternative:
#### ConversationalAgent[](#conversationalagent "Direct link to ConversationalAgent")
In module: `agents.conversational.base` Deprecated: 0.1.0 Removal: 0.3.0
Alternative: create\_react\_agent
#### ConversationalChatAgent[](#conversationalchatagent "Direct link to ConversationalChatAgent")
In module: `agents.conversational_chat.base` Deprecated: 0.1.0 Removal: 0.3.0
Alternative: create\_json\_chat\_agent
#### ChatAgent[](#chatagent "Direct link to ChatAgent")
In module: `agents.chat.base` Deprecated: 0.1.0 Removal: 0.3.0
Alternative: create\_react\_agent
#### OpenAIMultiFunctionsAgent[](#openaimultifunctionsagent "Direct link to OpenAIMultiFunctionsAgent")
In module: `agents.openai_functions_multi_agent.base` Deprecated: 0.1.0 Removal: 0.3.0
Alternative: create\_openai\_tools\_agent
#### ReActDocstoreAgent[](#reactdocstoreagent "Direct link to ReActDocstoreAgent")
In module: `agents.react.base` Deprecated: 0.1.0 Removal: 0.3.0
Alternative:
#### DocstoreExplorer[](#docstoreexplorer "Direct link to DocstoreExplorer")
In module: `agents.react.base` Deprecated: 0.1.0 Removal: 0.3.0
Alternative:
#### ReActTextWorldAgent[](#reacttextworldagent "Direct link to ReActTextWorldAgent")
In module: `agents.react.base` Deprecated: 0.1.0 Removal: 0.3.0
Alternative:
#### ReActChain[](#reactchain "Direct link to ReActChain")
In module: `agents.react.base` Deprecated: 0.1.0 Removal: 0.3.0
Alternative:
#### SelfAskWithSearchAgent[](#selfaskwithsearchagent "Direct link to SelfAskWithSearchAgent")
In module: `agents.self_ask_with_search.base` Deprecated: 0.1.0 Removal: 0.3.0
Alternative: create\_self\_ask\_with\_search
#### SelfAskWithSearchChain[](#selfaskwithsearchchain "Direct link to SelfAskWithSearchChain")
In module: `agents.self_ask_with_search.base` Deprecated: 0.1.0 Removal: 0.3.0
Alternative:
#### StructuredChatAgent[](#structuredchatagent "Direct link to StructuredChatAgent")
In module: `agents.structured_chat.base` Deprecated: 0.1.0 Removal: 0.3.0
Alternative: create\_structured\_chat\_agent
#### RetrievalQA[](#retrievalqa "Direct link to RetrievalQA")
In module: `chains.retrieval_qa.base` Deprecated: 0.1.17 Removal: 0.3.0
Alternative: [create\_retrieval\_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval.create_retrieval_chain.html#langchain-chains-retrieval-create-retrieval-chain)
#### load\_agent\_from\_config[](#load_agent_from_config "Direct link to load_agent_from_config")
In module: `agents.loading` Deprecated: 0.1.0 Removal: 0.3.0
Alternative:
#### load\_agent[](#load_agent "Direct link to load_agent")
In module: `agents.loading` Deprecated: 0.1.0 Removal: 0.3.0
Alternative:
#### initialize\_agent[](#initialize_agent "Direct link to initialize_agent")
In module: `agents.initialize` Deprecated: 0.1.0 Removal: 0.3.0
Alternative: Use [LangGraph](/v0.2/docs/how_to/migrate_agent/) or new agent constructor methods like create\_react\_agent, create\_json\_agent, create\_structured\_chat\_agent, etc.
#### XMLAgent[](#xmlagent "Direct link to XMLAgent")
In module: `agents.xml.base` Deprecated: 0.1.0 Removal: 0.3.0
Alternative: create\_xml\_agent
#### CohereRerank[](#coherererank "Direct link to CohereRerank")
In module: `retrievers.document_compressors.cohere_rerank` Deprecated: 0.0.30 Removal: 0.3.0
Alternative: langchain\_cohere.CohereRerank
#### ConversationalRetrievalChain[](#conversationalretrievalchain "Direct link to ConversationalRetrievalChain")
In module: `chains.conversational_retrieval.base` Deprecated: 0.1.17 Removal: 0.3.0
Alternative: [create\_history\_aware\_retriever](https://api.python.langchain.com/en/latest/chains/langchain.chains.history_aware_retriever.create_history_aware_retriever.html) together with [create\_retrieval\_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval.create_retrieval_chain.html#langchain-chains-retrieval-create-retrieval-chain) (see example in docstring)
#### create\_extraction\_chain\_pydantic[](#create_extraction_chain_pydantic "Direct link to create_extraction_chain_pydantic")
In module: `chains.openai_tools.extraction` Deprecated: 0.1.14 Removal: 0.3.0
Alternative: [with\_structured\_output](/v0.2/docs/how_to/structured_output/#the-with_structured_output-method) method on chat models that support tool calling.
#### create\_openai\_fn\_runnable[](#create_openai_fn_runnable "Direct link to create_openai_fn_runnable")
In module: `chains.structured_output.base` Deprecated: 0.1.14 Removal: 0.3.0
Alternative: [with\_structured\_output](/v0.2/docs/how_to/structured_output/#the-with_structured_output-method) method on chat models that support tool calling.
#### create\_structured\_output\_runnable[](#create_structured_output_runnable "Direct link to create_structured_output_runnable")
In module: `chains.structured_output.base` Deprecated: 0.1.17 Removal: 0.3.0
Alternative: [with\_structured\_output](/v0.2/docs/how_to/structured_output/#the-with_structured_output-method) method on chat models that support tool calling.
#### create\_openai\_fn\_chain[](#create_openai_fn_chain "Direct link to create_openai_fn_chain")
In module: `chains.openai_functions.base` Deprecated: 0.1.1 Removal: 0.3.0
Alternative: create\_openai\_fn\_runnable
#### create\_structured\_output\_chain[](#create_structured_output_chain "Direct link to create_structured_output_chain")
In module: `chains.openai_functions.base` Deprecated: 0.1.1 Removal: 0.3.0
Alternative: ChatOpenAI.with\_structured\_output
#### create\_extraction\_chain[](#create_extraction_chain "Direct link to create_extraction_chain")
In module: `chains.openai_functions.extraction` Deprecated: 0.1.14 Removal: 0.3.0
Alternative: [with\_structured\_output](/v0.2/docs/how_to/structured_output/#the-with_structured_output-method) method on chat models that support tool calling.
#### create\_extraction\_chain\_pydantic[](#create_extraction_chain_pydantic-1 "Direct link to create_extraction_chain_pydantic")
In module: `chains.openai_functions.extraction` Deprecated: 0.1.14 Removal: 0.3.0
Alternative: [with\_structured\_output](/v0.2/docs/how_to/structured_output/#the-with_structured_output-method) method on chat models that support tool calling.
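As a hedged sketch of the `with_structured_output` alternative referenced throughout this section (the schema and model are illustrative choices):

```python
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import ChatOpenAI  # assumes langchain-openai is installed


class Person(BaseModel):
    """Information about a person."""

    name: str = Field(..., description="The person's name")
    age: int = Field(..., description="The person's age")


llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)
structured_llm = llm.with_structured_output(Person)
structured_llm.invoke("Alice is 30 years old")
```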
[Edit this page](https://github.com/langchain-ai/langchain/edit/master/docs/docs/versions/v0_2/deprecations.mdx)
* * *
#### Was this page helpful?
#### You can also leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchain/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E).
[
Previous
astream\_events v2
](/v0.2/docs/versions/v0_2/migrating_astream_events/)[
Next
Security
](/v0.2/docs/security/)
* [Breaking changes](#breaking-changes)
* [@tool decorator](#tool-decorator)
* [Code that moved to another package](#code-that-moved-to-another-package)
* [Code targeted for removal](#code-targeted-for-removal)
* [astream events V1](#astream-events-v1)
* [langchain\_core](#langchain_core)
* [langchain](#langchain)
https://python.langchain.com/v0.2/docs/how_to/document_loader_custom/
* [](/v0.2/)
* [How-to guides](/v0.2/docs/how_to/)
* Custom Document Loader
On this page
How to create a custom Document Loader
======================================
Overview[](#overview "Direct link to Overview")
------------------------------------------------
Applications based on LLMs frequently entail extracting data from databases or files, like PDFs, and converting it into a format that LLMs can utilize. In LangChain, this usually involves creating Document objects, which encapsulate the extracted text (`page_content`) along with metadata—a dictionary containing details about the document, such as the author's name or the date of publication.
`Document` objects are often formatted into prompts that are fed into an LLM, allowing the LLM to use the information in the `Document` to generate a desired response (e.g., summarizing the document). `Documents` can be either used immediately or indexed into a vectorstore for future retrieval and use.
The main abstractions for Document Loading are:
Component
Description
Document
Contains `text` and `metadata`
BaseLoader
Use to convert raw data into `Documents`
Blob
A representation of binary data that's located either in a file or in memory
BaseBlobParser
Logic to parse a `Blob` to yield `Document` objects
This guide will demonstrate how to write custom document loading and file parsing logic; specifically, we'll see how to:
1. Create a standard document Loader by sub-classing from `BaseLoader`.
2. Create a parser using `BaseBlobParser` and use it in conjunction with `Blob` and `BlobLoaders`. This is useful primarily when working with files.
Standard Document Loader[](#standard-document-loader "Direct link to Standard Document Loader")
------------------------------------------------------------------------------------------------
A document loader can be implemented by sub-classing from a `BaseLoader` which provides a standard interface for loading documents.
### Interface[](#interface "Direct link to Interface")
Method Name
Explanation
lazy\_load
Used to load documents one by one **lazily**. Use for production code.
alazy\_load
Async variant of `lazy_load`
load
Used to load all the documents into memory **eagerly**. Use for prototyping or interactive work.
aload
Used to load all the documents into memory **eagerly**. Use for prototyping or interactive work. **Added in 2024-04 to LangChain.**
* The `load` method is a convenience method meant solely for prototyping work -- it just invokes `list(self.lazy_load())`.
* The `alazy_load` has a default implementation that will delegate to `lazy_load`. If you're using async, we recommend overriding the default implementation and providing a native async implementation.
important
When implementing a document loader do **NOT** provide parameters via the `lazy_load` or `alazy_load` methods.
All configuration is expected to be passed through the initializer (`__init__`). This was a design choice made by LangChain to make sure that once a document loader has been instantiated it has all the information needed to load documents.
### Implementation[](#implementation "Direct link to Implementation")
Let's create an example of a standard document loader that loads a file and creates a document from each line in the file.
from typing import AsyncIterator, Iteratorfrom langchain_core.document_loaders import BaseLoaderfrom langchain_core.documents import Documentclass CustomDocumentLoader(BaseLoader): """An example document loader that reads a file line by line.""" def __init__(self, file_path: str) -> None: """Initialize the loader with a file path. Args: file_path: The path to the file to load. """ self.file_path = file_path def lazy_load(self) -> Iterator[Document]: # <-- Does not take any arguments """A lazy loader that reads a file line by line. When you're implementing lazy load methods, you should use a generator to yield documents one by one. """ with open(self.file_path, encoding="utf-8") as f: line_number = 0 for line in f: yield Document( page_content=line, metadata={"line_number": line_number, "source": self.file_path}, ) line_number += 1 # alazy_load is OPTIONAL. # If you leave out the implementation, a default implementation which delegates to lazy_load will be used! async def alazy_load( self, ) -> AsyncIterator[Document]: # <-- Does not take any arguments """An async lazy loader that reads a file line by line.""" # Requires aiofiles # Install with `pip install aiofiles` # https://github.com/Tinche/aiofiles import aiofiles async with aiofiles.open(self.file_path, encoding="utf-8") as f: line_number = 0 async for line in f: yield Document( page_content=line, metadata={"line_number": line_number, "source": self.file_path}, ) line_number += 1
**API Reference:**[BaseLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_core.document_loaders.base.BaseLoader.html) | [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html)
### Test 🧪[](#test- "Direct link to Test 🧪")
To test out the document loader, we need a file with some quality content.
with open("./meow.txt", "w", encoding="utf-8") as f: quality_content = "meow meow🐱 \n meow meow🐱 \n meow😻😻" f.write(quality_content)loader = CustomDocumentLoader("./meow.txt")
## Test out the lazy load interfacefor doc in loader.lazy_load(): print() print(type(doc)) print(doc)
<class 'langchain_core.documents.base.Document'>page_content='meow meow🐱 \n' metadata={'line_number': 0, 'source': './meow.txt'}<class 'langchain_core.documents.base.Document'>page_content=' meow meow🐱 \n' metadata={'line_number': 1, 'source': './meow.txt'}<class 'langchain_core.documents.base.Document'>page_content=' meow😻😻' metadata={'line_number': 2, 'source': './meow.txt'}
## Test out the async implementationasync for doc in loader.alazy_load(): print() print(type(doc)) print(doc)
<class 'langchain_core.documents.base.Document'>page_content='meow meow🐱 \n' metadata={'line_number': 0, 'source': './meow.txt'}<class 'langchain_core.documents.base.Document'>page_content=' meow meow🐱 \n' metadata={'line_number': 1, 'source': './meow.txt'}<class 'langchain_core.documents.base.Document'>page_content=' meow😻😻' metadata={'line_number': 2, 'source': './meow.txt'}
tip
`load()` can be helpful in an interactive environment such as a Jupyter notebook.
Avoid using it for production code since eager loading assumes that all the content can fit into memory, which is not always the case, especially for enterprise data.
loader.load()
[Document(page_content='meow meow🐱 \n', metadata={'line_number': 0, 'source': './meow.txt'}), Document(page_content=' meow meow🐱 \n', metadata={'line_number': 1, 'source': './meow.txt'}), Document(page_content=' meow😻😻', metadata={'line_number': 2, 'source': './meow.txt'})]
Working with Files[](#working-with-files "Direct link to Working with Files")
------------------------------------------------------------------------------
Many document loaders involve parsing files. The difference between such loaders usually stems from how the file is parsed rather than how the file is loaded. For example, you can use `open` to read the binary content of either a PDF or a markdown file, but you need different parsing logic to convert that binary data into text.
As a result, it can be helpful to decouple the parsing logic from the loading logic, which makes it easier to re-use a given parser regardless of how the data was loaded.
### BaseBlobParser[](#baseblobparser "Direct link to BaseBlobParser")
A `BaseBlobParser` is an interface that accepts a `blob` and outputs a list of `Document` objects. A `blob` is a representation of data that lives either in memory or in a file. LangChain python has a `Blob` primitive which is inspired by the [Blob WebAPI spec](https://developer.mozilla.org/en-US/docs/Web/API/Blob).
    from langchain_core.document_loaders import BaseBlobParser, Blob


    class MyParser(BaseBlobParser):
        """A simple parser that creates a document from each line."""

        def lazy_parse(self, blob: Blob) -> Iterator[Document]:
            """Parse a blob into a document line by line."""
            line_number = 0
            with blob.as_bytes_io() as f:
                for line in f:
                    line_number += 1
                    yield Document(
                        page_content=line,
                        metadata={"line_number": line_number, "source": blob.source},
                    )
**API Reference:**[BaseBlobParser](https://api.python.langchain.com/en/latest/document_loaders/langchain_core.document_loaders.base.BaseBlobParser.html) | [Blob](https://api.python.langchain.com/en/latest/document_loaders/langchain_core.document_loaders.blob_loaders.Blob.html)
blob = Blob.from_path("./meow.txt")parser = MyParser()
list(parser.lazy_parse(blob))
[Document(page_content='meow meow🐱 \n', metadata={'line_number': 1, 'source': './meow.txt'}), Document(page_content=' meow meow🐱 \n', metadata={'line_number': 2, 'source': './meow.txt'}), Document(page_content=' meow😻😻', metadata={'line_number': 3, 'source': './meow.txt'})]
Using the **blob** API also allows one to load content directly from memory without having to read it from a file!
blob = Blob(data=b"some data from memory\nmeow")list(parser.lazy_parse(blob))
[Document(page_content='some data from memory\n', metadata={'line_number': 1, 'source': None}), Document(page_content='meow', metadata={'line_number': 2, 'source': None})]
### Blob[](#blob "Direct link to Blob")
Let's take a quick look at some of the Blob API.
blob = Blob.from_path("./meow.txt", metadata={"foo": "bar"})
blob.encoding
'utf-8'
blob.as_bytes()
b'meow meow\xf0\x9f\x90\xb1 \n meow meow\xf0\x9f\x90\xb1 \n meow\xf0\x9f\x98\xbb\xf0\x9f\x98\xbb'
blob.as_string()
'meow meow🐱 \n meow meow🐱 \n meow😻😻'
blob.as_bytes_io()
<contextlib._GeneratorContextManager at 0x743f34324450>
blob.metadata
{'foo': 'bar'}
blob.source
'./meow.txt'
### Blob Loaders[](#blob-loaders "Direct link to Blob Loaders")
While a parser encapsulates the logic needed to parse binary data into documents, _blob loaders_ encapsulate the logic that's necessary to load blobs from a given storage location.
At the moment, `LangChain` only supports `FileSystemBlobLoader`.
You can use the `FileSystemBlobLoader` to load blobs and then use the parser to parse them.
from langchain_community.document_loaders.blob_loaders import FileSystemBlobLoaderblob_loader = FileSystemBlobLoader(path=".", glob="*.mdx", show_progress=True)
**API Reference:**[FileSystemBlobLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.blob_loaders.file_system.FileSystemBlobLoader.html)
parser = MyParser()for blob in blob_loader.yield_blobs(): for doc in parser.lazy_parse(blob): print(doc) break
0%| | 0/8 [00:00<?, ?it/s]
page_content='# Microsoft Office\n' metadata={'line_number': 1, 'source': 'office_file.mdx'}page_content='# Markdown\n' metadata={'line_number': 1, 'source': 'markdown.mdx'}page_content='# JSON\n' metadata={'line_number': 1, 'source': 'json.mdx'}page_content='---\n' metadata={'line_number': 1, 'source': 'pdf.mdx'}page_content='---\n' metadata={'line_number': 1, 'source': 'index.mdx'}page_content='# File Directory\n' metadata={'line_number': 1, 'source': 'file_directory.mdx'}page_content='# CSV\n' metadata={'line_number': 1, 'source': 'csv.mdx'}page_content='# HTML\n' metadata={'line_number': 1, 'source': 'html.mdx'}
### Generic Loader[](#generic-loader "Direct link to Generic Loader")
LangChain has a `GenericLoader` abstraction which composes a `BlobLoader` with a `BaseBlobParser`.
`GenericLoader` is meant to provide standardized classmethods that make it easy to use existing `BlobLoader` implementations. At the moment, only the `FileSystemBlobLoader` is supported.
from langchain_community.document_loaders.generic import GenericLoaderloader = GenericLoader.from_filesystem( path=".", glob="*.mdx", show_progress=True, parser=MyParser())for idx, doc in enumerate(loader.lazy_load()): if idx < 5: print(doc)print("... output truncated for demo purposes")
**API Reference:**[GenericLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.generic.GenericLoader.html)
0%| | 0/8 [00:00<?, ?it/s]
page_content='# Microsoft Office\n' metadata={'line_number': 1, 'source': 'office_file.mdx'}page_content='\n' metadata={'line_number': 2, 'source': 'office_file.mdx'}page_content='>[The Microsoft Office](https://www.office.com/) suite of productivity software includes Microsoft Word, Microsoft Excel, Microsoft PowerPoint, Microsoft Outlook, and Microsoft OneNote. It is available for Microsoft Windows and macOS operating systems. It is also available on Android and iOS.\n' metadata={'line_number': 3, 'source': 'office_file.mdx'}page_content='\n' metadata={'line_number': 4, 'source': 'office_file.mdx'}page_content='This covers how to load commonly used file formats including `DOCX`, `XLSX` and `PPTX` documents into a document format that we can use downstream.\n' metadata={'line_number': 5, 'source': 'office_file.mdx'}... output truncated for demo purposes
#### Custom Generic Loader[](#custom-generic-loader "Direct link to Custom Generic Loader")
If you prefer to encapsulate the loading and parsing logic in a single class, you can subclass `GenericLoader`.

Subclassing lets you associate a default parser with the class while reusing the existing loading machinery.
    from typing import Any


    class MyCustomLoader(GenericLoader):
        @staticmethod
        def get_parser(**kwargs: Any) -> BaseBlobParser:
            """Override this method to associate a default parser with the class."""
            return MyParser()
loader = MyCustomLoader.from_filesystem(path=".", glob="*.mdx", show_progress=True)for idx, doc in enumerate(loader.lazy_load()): if idx < 5: print(doc)print("... output truncated for demo purposes")
0%| | 0/8 [00:00<?, ?it/s]
page_content='# Microsoft Office\n' metadata={'line_number': 1, 'source': 'office_file.mdx'}page_content='\n' metadata={'line_number': 2, 'source': 'office_file.mdx'}page_content='>[The Microsoft Office](https://www.office.com/) suite of productivity software includes Microsoft Word, Microsoft Excel, Microsoft PowerPoint, Microsoft Outlook, and Microsoft OneNote. It is available for Microsoft Windows and macOS operating systems. It is also available on Android and iOS.\n' metadata={'line_number': 3, 'source': 'office_file.mdx'}page_content='\n' metadata={'line_number': 4, 'source': 'office_file.mdx'}page_content='This covers how to load commonly used file formats including `DOCX`, `XLSX` and `PPTX` documents into a document format that we can use downstream.\n' metadata={'line_number': 5, 'source': 'office_file.mdx'}... output truncated for demo purposes
https://python.langchain.com/v0.2/docs/how_to/contextual_compression/
How to do retrieval with contextual compression
===============================================
One challenge with retrieval is that usually you don't know the specific queries your document storage system will face when you ingest data into the system. This means that the information most relevant to a query may be buried in a document with a lot of irrelevant text. Passing that full document through your application can lead to more expensive LLM calls and poorer responses.
Contextual compression is meant to fix this. The idea is simple: instead of immediately returning retrieved documents as-is, you can compress them using the context of the given query, so that only the relevant information is returned. “Compressing” here refers to both compressing the contents of an individual document and filtering out documents wholesale.
To use the Contextual Compression Retriever, you'll need:
* a base retriever
* a Document Compressor
The Contextual Compression Retriever passes queries to the base retriever, takes the initial documents and passes them through the Document Compressor. The Document Compressor takes a list of documents and shortens it by reducing the contents of documents or dropping documents altogether.
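To make the interface concrete before diving in, here is a minimal, hypothetical sketch of a Document Compressor that simply drops documents sharing no keyword with the query. It assumes the `BaseDocumentCompressor` base class and its `compress_documents` method (the exact import path may vary across LangChain versions); the rest of this guide uses the built-in compressors instead.

```python
from typing import Optional, Sequence

from langchain_core.callbacks import Callbacks
from langchain_core.documents import Document
from langchain_core.documents.compressor import BaseDocumentCompressor


class KeywordFilterCompressor(BaseDocumentCompressor):
    """Hypothetical compressor: keep only documents that mention a query word."""

    def compress_documents(
        self,
        documents: Sequence[Document],
        query: str,
        callbacks: Optional[Callbacks] = None,
    ) -> Sequence[Document]:
        # Keep a document only if it shares at least one word with the query.
        keywords = {word.lower() for word in query.split()}
        return [
            doc
            for doc in documents
            if keywords & set(doc.page_content.lower().split())
        ]
```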
Get started[](#get-started "Direct link to Get started")
---------------------------------------------------------
# Helper function for printing docsdef pretty_print_docs(docs): print( f"\n{'-' * 100}\n".join( [f"Document {i+1}:\n\n" + d.page_content for i, d in enumerate(docs)] ) )
Using a vanilla vector store retriever[](#using-a-vanilla-vector-store-retriever "Direct link to Using a vanilla vector store retriever")
------------------------------------------------------------------------------------------------------------------------------------------
Let's start by initializing a simple vector store retriever and storing the 2023 State of the Union speech (in chunks). We can see that given an example question our retriever returns one or two relevant docs and a few irrelevant docs. And even the relevant docs have a lot of irrelevant information in them.
from langchain_community.document_loaders import TextLoaderfrom langchain_community.vectorstores import FAISSfrom langchain_openai import OpenAIEmbeddingsfrom langchain_text_splitters import CharacterTextSplitterdocuments = TextLoader("state_of_the_union.txt").load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)texts = text_splitter.split_documents(documents)retriever = FAISS.from_documents(texts, OpenAIEmbeddings()).as_retriever()docs = retriever.invoke("What did the president say about Ketanji Brown Jackson")pretty_print_docs(docs)
**API Reference:**[TextLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.text.TextLoader.html) | [FAISS](https://api.python.langchain.com/en/latest/vectorstores/langchain_community.vectorstores.faiss.FAISS.html) | [OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html) | [CharacterTextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.CharacterTextSplitter.html)
Document 1:Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.----------------------------------------------------------------------------------------------------Document 2:A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. We’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.----------------------------------------------------------------------------------------------------Document 3:And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. And soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. So tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. First, beat the opioid epidemic.----------------------------------------------------------------------------------------------------Document 4:Tonight, I’m announcing a crackdown on these companies overcharging American businesses and consumers. And as Wall Street firms take over more nursing homes, quality in those homes has gone down and costs have gone up. That ends on my watch. Medicare is going to set higher standards for nursing homes and make sure your loved ones get the care they deserve and expect. We’ll also cut costs and keep the economy going strong by giving workers a fair shot, provide more training and apprenticeships, hire them based on their skills not degrees. Let’s pass the Paycheck Fairness Act and paid leave. 
Raise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty. Let’s increase Pell Grants and increase our historic support of HBCUs, and invest in what Jill—our First Lady who teaches full-time—calls America’s best-kept secret: community colleges.
Adding contextual compression with an `LLMChainExtractor`[](#adding-contextual-compression-with-an-llmchainextractor "Direct link to adding-contextual-compression-with-an-llmchainextractor")
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Now let's wrap our base retriever with a `ContextualCompressionRetriever`. We'll add an `LLMChainExtractor`, which will iterate over the initially returned documents and extract from each only the content that is relevant to the query.
from langchain.retrievers import ContextualCompressionRetrieverfrom langchain.retrievers.document_compressors import LLMChainExtractorfrom langchain_openai import OpenAIllm = OpenAI(temperature=0)compressor = LLMChainExtractor.from_llm(llm)compression_retriever = ContextualCompressionRetriever( base_compressor=compressor, base_retriever=retriever)compressed_docs = compression_retriever.invoke( "What did the president say about Ketanji Jackson Brown")pretty_print_docs(compressed_docs)
**API Reference:**[ContextualCompressionRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.contextual_compression.ContextualCompressionRetriever.html) | [LLMChainExtractor](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.document_compressors.chain_extract.LLMChainExtractor.html) | [OpenAI](https://api.python.langchain.com/en/latest/llms/langchain_openai.llms.base.OpenAI.html)
Document 1:I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson.
More built-in compressors: filters[](#more-built-in-compressors-filters "Direct link to More built-in compressors: filters")
-----------------------------------------------------------------------------------------------------------------------------
### `LLMChainFilter`[](#llmchainfilter "Direct link to llmchainfilter")
The `LLMChainFilter` is a slightly simpler but more robust compressor that uses an LLM chain to decide which of the initially retrieved documents to filter out and which ones to return, without manipulating the document contents.
from langchain.retrievers.document_compressors import LLMChainFilter_filter = LLMChainFilter.from_llm(llm)compression_retriever = ContextualCompressionRetriever( base_compressor=_filter, base_retriever=retriever)compressed_docs = compression_retriever.invoke( "What did the president say about Ketanji Jackson Brown")pretty_print_docs(compressed_docs)
**API Reference:**[LLMChainFilter](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.document_compressors.chain_filter.LLMChainFilter.html)
Document 1:Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
### `EmbeddingsFilter`[](#embeddingsfilter "Direct link to embeddingsfilter")
Making an extra LLM call over each retrieved document is expensive and slow. The `EmbeddingsFilter` provides a cheaper and faster option by embedding the documents and query and only returning those documents which have sufficiently similar embeddings to the query.
from langchain.retrievers.document_compressors import EmbeddingsFilterfrom langchain_openai import OpenAIEmbeddingsembeddings = OpenAIEmbeddings()embeddings_filter = EmbeddingsFilter(embeddings=embeddings, similarity_threshold=0.76)compression_retriever = ContextualCompressionRetriever( base_compressor=embeddings_filter, base_retriever=retriever)compressed_docs = compression_retriever.invoke( "What did the president say about Ketanji Jackson Brown")pretty_print_docs(compressed_docs)
**API Reference:**[EmbeddingsFilter](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.document_compressors.embeddings_filter.EmbeddingsFilter.html) | [OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html)
Document 1:Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.----------------------------------------------------------------------------------------------------Document 2:A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. We’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.
Stringing compressors and document transformers together[](#stringing-compressors-and-document-transformers-together "Direct link to Stringing compressors and document transformers together")
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Using the `DocumentCompressorPipeline` we can also easily combine multiple compressors in sequence. Along with compressors we can add `BaseDocumentTransformer`s to our pipeline, which don't perform any contextual compression but simply perform some transformation on a set of documents. For example `TextSplitter`s can be used as document transformers to split documents into smaller pieces, and the `EmbeddingsRedundantFilter` can be used to filter out redundant documents based on embedding similarity between documents.
Below we create a compressor pipeline by first splitting our docs into smaller chunks, then removing redundant documents, and then filtering based on relevance to the query.
from langchain.retrievers.document_compressors import DocumentCompressorPipelinefrom langchain_community.document_transformers import EmbeddingsRedundantFilterfrom langchain_text_splitters import CharacterTextSplittersplitter = CharacterTextSplitter(chunk_size=300, chunk_overlap=0, separator=". ")redundant_filter = EmbeddingsRedundantFilter(embeddings=embeddings)relevant_filter = EmbeddingsFilter(embeddings=embeddings, similarity_threshold=0.76)pipeline_compressor = DocumentCompressorPipeline( transformers=[splitter, redundant_filter, relevant_filter])
**API Reference:**[DocumentCompressorPipeline](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.document_compressors.base.DocumentCompressorPipeline.html) | [EmbeddingsRedundantFilter](https://api.python.langchain.com/en/latest/document_transformers/langchain_community.document_transformers.embeddings_redundant_filter.EmbeddingsRedundantFilter.html) | [CharacterTextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.CharacterTextSplitter.html)
compression_retriever = ContextualCompressionRetriever( base_compressor=pipeline_compressor, base_retriever=retriever)compressed_docs = compression_retriever.invoke( "What did the president say about Ketanji Jackson Brown")pretty_print_docs(compressed_docs)
Document 1:One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson----------------------------------------------------------------------------------------------------Document 2:As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year----------------------------------------------------------------------------------------------------Document 3:A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder----------------------------------------------------------------------------------------------------Document 4:Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. We can do both
https://python.langchain.com/v0.2/docs/security/
Security
========
LangChain has a large ecosystem of integrations with various external resources like local and remote file systems, APIs and databases. These integrations allow developers to create versatile applications that combine the power of LLMs with the ability to access, interact with and manipulate external resources.
Best practices[](#best-practices "Direct link to Best practices")
------------------------------------------------------------------
When building such applications developers should remember to follow good security practices:
* [**Limit Permissions**](https://en.wikipedia.org/wiki/Principle_of_least_privilege): Scope permissions specifically to the application's need. Granting broad or excessive permissions can introduce significant security vulnerabilities. To avoid such vulnerabilities, consider using read-only credentials, disallowing access to sensitive resources, using sandboxing techniques (such as running inside a container), etc. as appropriate for your application.
* **Anticipate Potential Misuse**: Just as humans can err, so can Large Language Models (LLMs). Always assume that any system access or credentials may be used in any way allowed by the permissions they are assigned. For example, if a pair of database credentials allows deleting data, it’s safest to assume that any LLM able to use those credentials may in fact delete data.
* [**Defense in Depth**](https://en.wikipedia.org/wiki/Defense_in_depth_\(computing\)): No security technique is perfect. Fine-tuning and good chain design can reduce, but not eliminate, the odds that a Large Language Model (LLM) may make a mistake. It’s best to combine multiple layered security approaches rather than relying on any single layer of defense to ensure security. For example: use both read-only permissions and sandboxing to ensure that LLMs are only able to access data that is explicitly meant for them to use.
Risks of not doing so include, but are not limited to:
* Data corruption or loss.
* Unauthorized access to confidential information.
* Compromised performance or availability of critical resources.
Example scenarios with mitigation strategies:
* A user may ask an agent with access to the file system to delete files that should not be deleted or read the content of files that contain sensitive information. To mitigate, limit the agent to only use a specific directory and only allow it to read or write files that are safe to read or write. Consider further sandboxing the agent by running it in a container (see the sketch after this list).
* A user may ask an agent with write access to an external API to write malicious data to the API, or delete data from that API. To mitigate, give the agent read-only API keys, or limit it to only use endpoints that are already resistant to such misuse.
* A user may ask an agent with access to a database to drop a table or mutate the schema. To mitigate, scope the credentials to only the tables that the agent needs to access and consider issuing READ-ONLY credentials.
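As a concrete illustration of the file-system scenario, the sketch below scopes an agent's tools to read-only operations inside a single throwaway directory. It assumes the `FileManagementToolkit` from `langchain_community` with `root_dir` and `selected_tools` parameters; parameter names may differ across versions.

```python
from tempfile import TemporaryDirectory

from langchain_community.agent_toolkits import FileManagementToolkit

# A throwaway working directory keeps the agent away from the real file system.
working_directory = TemporaryDirectory()

# Only expose read-oriented tools, scoped to that directory.
toolkit = FileManagementToolkit(
    root_dir=working_directory.name,
    selected_tools=["read_file", "list_directory"],
)
tools = toolkit.get_tools()
```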
If you're building applications that access external resources like file systems, APIs or databases, consider speaking with your company's security team to determine how to best design and secure your applications.
Reporting a vulnerability[](#reporting-a-vulnerability "Direct link to Reporting a vulnerability")
---------------------------------------------------------------------------------------------------
Please report security vulnerabilities by email to [[email protected]](mailto:[email protected]). This will ensure the issue is promptly triaged and acted upon as needed.
https://python.langchain.com/v0.2/docs/how_to/HTML_header_metadata_splitter/
How to split by HTML header
===========================
Description and motivation[](#description-and-motivation "Direct link to Description and motivation")
------------------------------------------------------------------------------------------------------
[HTMLHeaderTextSplitter](https://api.python.langchain.com/en/latest/html/langchain_text_splitters.html.HTMLHeaderTextSplitter.html) is a "structure-aware" chunker that splits text at the HTML element level and adds metadata for each header "relevant" to any given chunk. It can return chunks element by element or combine elements with the same metadata, with the objectives of (a) keeping related text grouped (more or less) semantically and (b) preserving context-rich information encoded in document structures. It can be used with other text splitters as part of a chunking pipeline.
It is analogous to the [MarkdownHeaderTextSplitter](/v0.2/docs/how_to/markdown_header_metadata_splitter/) for markdown files.
To specify what headers to split on, specify `headers_to_split_on` when instantiating `HTMLHeaderTextSplitter` as shown below.
Usage examples[](#usage-examples "Direct link to Usage examples")
------------------------------------------------------------------
### 1) How to split HTML strings:[](#1-how-to-split-html-strings "Direct link to 1) How to split HTML strings:")
%pip install -qU langchain-text-splitters
from langchain_text_splitters import HTMLHeaderTextSplitterhtml_string = """<!DOCTYPE html><html><body> <div> <h1>Foo</h1> <p>Some intro text about Foo.</p> <div> <h2>Bar main section</h2> <p>Some intro text about Bar.</p> <h3>Bar subsection 1</h3> <p>Some text about the first subtopic of Bar.</p> <h3>Bar subsection 2</h3> <p>Some text about the second subtopic of Bar.</p> </div> <div> <h2>Baz</h2> <p>Some text about Baz</p> </div> <br> <p>Some concluding text about Foo</p> </div></body></html>"""headers_to_split_on = [ ("h1", "Header 1"), ("h2", "Header 2"), ("h3", "Header 3"),]html_splitter = HTMLHeaderTextSplitter(headers_to_split_on)html_header_splits = html_splitter.split_text(html_string)html_header_splits
**API Reference:**[HTMLHeaderTextSplitter](https://api.python.langchain.com/en/latest/html/langchain_text_splitters.html.HTMLHeaderTextSplitter.html)
[Document(page_content='Foo'), Document(page_content='Some intro text about Foo. \nBar main section Bar subsection 1 Bar subsection 2', metadata={'Header 1': 'Foo'}), Document(page_content='Some intro text about Bar.', metadata={'Header 1': 'Foo', 'Header 2': 'Bar main section'}), Document(page_content='Some text about the first subtopic of Bar.', metadata={'Header 1': 'Foo', 'Header 2': 'Bar main section', 'Header 3': 'Bar subsection 1'}), Document(page_content='Some text about the second subtopic of Bar.', metadata={'Header 1': 'Foo', 'Header 2': 'Bar main section', 'Header 3': 'Bar subsection 2'}), Document(page_content='Baz', metadata={'Header 1': 'Foo'}), Document(page_content='Some text about Baz', metadata={'Header 1': 'Foo', 'Header 2': 'Baz'}), Document(page_content='Some concluding text about Foo', metadata={'Header 1': 'Foo'})]
To return each element together with their associated headers, specify `return_each_element=True` when instantiating `HTMLHeaderTextSplitter`:
html_splitter = HTMLHeaderTextSplitter( headers_to_split_on, return_each_element=True,)html_header_splits_elements = html_splitter.split_text(html_string)
Comparing with the above, where elements are aggregated by their headers:
for element in html_header_splits[:2]: print(element)
page_content='Foo'page_content='Some intro text about Foo. \nBar main section Bar subsection 1 Bar subsection 2' metadata={'Header 1': 'Foo'}
Now each element is returned as a distinct `Document`:
for element in html_header_splits_elements[:3]: print(element)
page_content='Foo'page_content='Some intro text about Foo.' metadata={'Header 1': 'Foo'}page_content='Bar main section Bar subsection 1 Bar subsection 2' metadata={'Header 1': 'Foo'}
### 2) How to split from a URL or HTML file:[](#2-how-to-split-from-a-url-or-html-file "Direct link to 2) How to split from a URL or HTML file:")
To read directly from a URL, pass the URL string into the `split_text_from_url` method.
Similarly, a local HTML file can be passed to the `split_text_from_file` method.
url = "https://plato.stanford.edu/entries/goedel/"headers_to_split_on = [ ("h1", "Header 1"), ("h2", "Header 2"), ("h3", "Header 3"), ("h4", "Header 4"),]html_splitter = HTMLHeaderTextSplitter(headers_to_split_on)# for local file use html_splitter.split_text_from_file(<path_to_file>)html_header_splits = html_splitter.split_text_from_url(url)
### 3) How to constrain chunk sizes:[](#2-how-to-constrain-chunk-sizes "Direct link to 3) How to constrain chunk sizes:")
`HTMLHeaderTextSplitter`, which splits based on HTML headers, can be composed with another splitter which constrains splits based on character lengths, such as `RecursiveCharacterTextSplitter`.
This can be done using the `.split_documents` method of the second splitter:
from langchain_text_splitters import RecursiveCharacterTextSplitterchunk_size = 500chunk_overlap = 30text_splitter = RecursiveCharacterTextSplitter( chunk_size=chunk_size, chunk_overlap=chunk_overlap)# Splitsplits = text_splitter.split_documents(html_header_splits)splits[80:85]
**API Reference:**[RecursiveCharacterTextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.RecursiveCharacterTextSplitter.html)
[Document(page_content='We see that Gödel first tried to reduce the consistency problem for analysis to that of arithmetic. This seemed to require a truth definition for arithmetic, which in turn led to paradoxes, such as the Liar paradox (“This sentence is false”) and Berry’s paradox (“The least number not defined by an expression consisting of just fourteen English words”). Gödel then noticed that such paradoxes would not necessarily arise if truth were replaced by provability. But this means that arithmetic truth', metadata={'Header 1': 'Kurt Gödel', 'Header 2': '2. Gödel’s Mathematical Work', 'Header 3': '2.2 The Incompleteness Theorems', 'Header 4': '2.2.1 The First Incompleteness Theorem'}), Document(page_content='means that arithmetic truth and arithmetic provability are not co-extensive — whence the First Incompleteness Theorem.', metadata={'Header 1': 'Kurt Gödel', 'Header 2': '2. Gödel’s Mathematical Work', 'Header 3': '2.2 The Incompleteness Theorems', 'Header 4': '2.2.1 The First Incompleteness Theorem'}), Document(page_content='This account of Gödel’s discovery was told to Hao Wang very much after the fact; but in Gödel’s contemporary correspondence with Bernays and Zermelo, essentially the same description of his path to the theorems is given. (See Gödel 2003a and Gödel 2003b respectively.) From those accounts we see that the undefinability of truth in arithmetic, a result credited to Tarski, was likely obtained in some form by Gödel by 1931. But he neither publicized nor published the result; the biases logicians', metadata={'Header 1': 'Kurt Gödel', 'Header 2': '2. Gödel’s Mathematical Work', 'Header 3': '2.2 The Incompleteness Theorems', 'Header 4': '2.2.1 The First Incompleteness Theorem'}), Document(page_content='result; the biases logicians had expressed at the time concerning the notion of truth, biases which came vehemently to the fore when Tarski announced his results on the undefinability of truth in formal systems 1935, may have served as a deterrent to Gödel’s publication of that theorem.', metadata={'Header 1': 'Kurt Gödel', 'Header 2': '2. Gödel’s Mathematical Work', 'Header 3': '2.2 The Incompleteness Theorems', 'Header 4': '2.2.1 The First Incompleteness Theorem'}), Document(page_content='We now describe the proof of the two theorems, formulating Gödel’s results in Peano arithmetic. Gödel himself used a system related to that defined in Principia Mathematica, but containing Peano arithmetic. In our presentation of the First and Second Incompleteness Theorems we refer to Peano arithmetic as P, following Gödel’s notation.', metadata={'Header 1': 'Kurt Gödel', 'Header 2': '2. Gödel’s Mathematical Work', 'Header 3': '2.2 The Incompleteness Theorems', 'Header 4': '2.2.2 The proof of the First Incompleteness Theorem'})]
Limitations[](#limitations "Direct link to Limitations")
---------------------------------------------------------
There can be quite a bit of structural variation from one HTML document to another, and while `HTMLHeaderTextSplitter` will attempt to attach all "relevant" headers to any given chunk, it can sometimes miss certain headers. For example, the algorithm assumes an informational hierarchy in which headers are always at nodes "above" associated text, i.e. prior siblings, ancestors, and combinations thereof. In the following news article (as of the writing of this document), the document is structured such that the text of the top-level headline, while tagged "h1", is in a _distinct_ subtree from the text elements that we'd expect it to be _"above"_—so we can observe that the "h1" element and its associated text do not show up in the chunk metadata (but, where applicable, we do see "h2" and its associated text):
url = "https://www.cnn.com/2023/09/25/weather/el-nino-winter-us-climate/index.html"headers_to_split_on = [ ("h1", "Header 1"), ("h2", "Header 2"),]html_splitter = HTMLHeaderTextSplitter(headers_to_split_on)html_header_splits = html_splitter.split_text_from_url(url)print(html_header_splits[1].page_content[:500])
No two El Niño winters are the same, but many have temperature and precipitation trends in common. Average conditions during an El Niño winter across the continental US. One of the major reasons is the position of the jet stream, which often shifts south during an El Niño winter. This shift typically brings wetter and cooler weather to the South while the North becomes drier and warmer, according to NOAA. Because the jet stream is essentially a river of air that storms flow through, they c
https://python.langchain.com/v0.2/docs/how_to/HTML_section_aware_splitter/
How to split by HTML sections
=============================
Description and motivation[](#description-and-motivation "Direct link to Description and motivation")
------------------------------------------------------------------------------------------------------
Similar in concept to the [HTMLHeaderTextSplitter](/v0.2/docs/how_to/HTML_header_metadata_splitter/), the `HTMLSectionSplitter` is a "structure-aware" chunker that splits text at the element level and adds metadata for each header "relevant" to any given chunk.
It can return chunks element by element or combine elements with the same metadata, with the objectives of (a) keeping related text grouped (more or less) semantically and (b) preserving context-rich information encoded in document structures.
Use `xslt_path` to provide an absolute path to an XSLT file that transforms the HTML so that sections can be detected based on the provided tags. The default is the `converting_to_header.xslt` file in the `data_connection/document_transformers` directory. It converts the HTML into a format/layout in which sections are easier to detect; for example, `span` elements can be converted to header tags based on their font size so that they are detected as sections.
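For instance, passing a custom stylesheet could look like the sketch below, assuming `xslt_path` is accepted as a keyword argument as described above; the path is a placeholder and the file must exist on disk.

```python
from langchain_text_splitters import HTMLSectionSplitter

headers_to_split_on = [("h1", "Header 1"), ("h2", "Header 2")]

# `/path/to/custom_rules.xslt` is a placeholder for your own stylesheet.
html_splitter = HTMLSectionSplitter(
    headers_to_split_on,
    xslt_path="/path/to/custom_rules.xslt",
)
```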
Usage examples[](#usage-examples "Direct link to Usage examples")
------------------------------------------------------------------
### 1) How to split HTML strings:[](#1-how-to-split-html-strings "Direct link to 1) How to split HTML strings:")
from langchain_text_splitters import HTMLSectionSplitterhtml_string = """ <!DOCTYPE html> <html> <body> <div> <h1>Foo</h1> <p>Some intro text about Foo.</p> <div> <h2>Bar main section</h2> <p>Some intro text about Bar.</p> <h3>Bar subsection 1</h3> <p>Some text about the first subtopic of Bar.</p> <h3>Bar subsection 2</h3> <p>Some text about the second subtopic of Bar.</p> </div> <div> <h2>Baz</h2> <p>Some text about Baz</p> </div> <br> <p>Some concluding text about Foo</p> </div> </body> </html>"""headers_to_split_on = [("h1", "Header 1"), ("h2", "Header 2")]html_splitter = HTMLSectionSplitter(headers_to_split_on)html_header_splits = html_splitter.split_text(html_string)html_header_splits
**API Reference:**[HTMLSectionSplitter](https://api.python.langchain.com/en/latest/html/langchain_text_splitters.html.HTMLSectionSplitter.html)
[Document(page_content='Foo \n Some intro text about Foo.', metadata={'Header 1': 'Foo'}), Document(page_content='Bar main section \n Some intro text about Bar. \n Bar subsection 1 \n Some text about the first subtopic of Bar. \n Bar subsection 2 \n Some text about the second subtopic of Bar.', metadata={'Header 2': 'Bar main section'}), Document(page_content='Baz \n Some text about Baz \n \n \n Some concluding text about Foo', metadata={'Header 2': 'Baz'})]
### 2) How to constrain chunk sizes:[](#2-how-to-constrain-chunk-sizes "Direct link to 2) How to constrain chunk sizes:")
`HTMLSectionSplitter` can be used with other text splitters as part of a chunking pipeline. Internally, it uses the `RecursiveCharacterTextSplitter` when the section size is larger than the chunk size. It also considers the font size of the text to determine whether text constitutes a section, based on a font size threshold.
from langchain_text_splitters import RecursiveCharacterTextSplitterhtml_string = """ <!DOCTYPE html> <html> <body> <div> <h1>Foo</h1> <p>Some intro text about Foo.</p> <div> <h2>Bar main section</h2> <p>Some intro text about Bar.</p> <h3>Bar subsection 1</h3> <p>Some text about the first subtopic of Bar.</p> <h3>Bar subsection 2</h3> <p>Some text about the second subtopic of Bar.</p> </div> <div> <h2>Baz</h2> <p>Some text about Baz</p> </div> <br> <p>Some concluding text about Foo</p> </div> </body> </html>"""headers_to_split_on = [ ("h1", "Header 1"), ("h2", "Header 2"), ("h3", "Header 3"), ("h4", "Header 4"),]html_splitter = HTMLSectionSplitter(headers_to_split_on)html_header_splits = html_splitter.split_text(html_string)chunk_size = 500chunk_overlap = 30text_splitter = RecursiveCharacterTextSplitter( chunk_size=chunk_size, chunk_overlap=chunk_overlap)# Splitsplits = text_splitter.split_documents(html_header_splits)splits
**API Reference:**[RecursiveCharacterTextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.RecursiveCharacterTextSplitter.html)
[Document(page_content='Foo \n Some intro text about Foo.', metadata={'Header 1': 'Foo'}), Document(page_content='Bar main section \n Some intro text about Bar.', metadata={'Header 2': 'Bar main section'}), Document(page_content='Bar subsection 1 \n Some text about the first subtopic of Bar.', metadata={'Header 3': 'Bar subsection 1'}), Document(page_content='Bar subsection 2 \n Some text about the second subtopic of Bar.', metadata={'Header 3': 'Bar subsection 2'}), Document(page_content='Baz \n Some text about Baz \n \n \n Some concluding text about Foo', metadata={'Header 2': 'Baz'})]
https://python.langchain.com/v0.2/docs/how_to/custom_callbacks/
How to create custom callback handlers
======================================
Prerequisites
This guide assumes familiarity with the following concepts:
* [Callbacks](/v0.2/docs/concepts/#callbacks)
LangChain has some built-in callback handlers, but you will often want to create your own handlers with custom logic.
To create a custom callback handler, we need to determine the [event(s)](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.base.BaseCallbackHandler.html#langchain-core-callbacks-base-basecallbackhandler) we want our callback handler to handle as well as what we want our callback handler to do when the event is triggered. Then all we need to do is attach the callback handler to the object, for example via [the constructor](/v0.2/docs/how_to/callbacks_constructor/) or [at runtime](/v0.2/docs/how_to/callbacks_runtime/).
In the example below, we'll implement streaming with a custom handler.
In our custom callback handler `MyCustomHandler`, we implement the `on_llm_new_token` handler to print the token we have just received. We then attach our custom handler to the model object as a constructor callback.
    from langchain_anthropic import ChatAnthropic
    from langchain_core.callbacks import BaseCallbackHandler
    from langchain_core.prompts import ChatPromptTemplate


    class MyCustomHandler(BaseCallbackHandler):
        def on_llm_new_token(self, token: str, **kwargs) -> None:
            print(f"My custom handler, token: {token}")


    prompt = ChatPromptTemplate.from_messages(["Tell me a joke about {animal}"])

    # To enable streaming, we pass in `streaming=True` to the ChatModel constructor
    # Additionally, we pass in our custom handler as a list to the callbacks parameter
    model = ChatAnthropic(
        model="claude-3-sonnet-20240229", streaming=True, callbacks=[MyCustomHandler()]
    )

    chain = prompt | model

    response = chain.invoke({"animal": "bears"})
**API Reference:**[ChatAnthropic](https://api.python.langchain.com/en/latest/chat_models/langchain_anthropic.chat_models.ChatAnthropic.html) | [BaseCallbackHandler](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.base.BaseCallbackHandler.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html)
My custom handler, token: HereMy custom handler, token: 'sMy custom handler, token: aMy custom handler, token: bearMy custom handler, token: jokeMy custom handler, token: forMy custom handler, token: youMy custom handler, token: :My custom handler, token: WhyMy custom handler, token: diMy custom handler, token: d theMy custom handler, token: bearMy custom handler, token: dissolMy custom handler, token: veMy custom handler, token: inMy custom handler, token: waterMy custom handler, token: ?My custom handler, token: BecauseMy custom handler, token: itMy custom handler, token: wasMy custom handler, token: aMy custom handler, token: polarMy custom handler, token: bearMy custom handler, token: !
You can see [this reference page](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.base.BaseCallbackHandler.html#langchain-core-callbacks-base-basecallbackhandler) for a list of events you can handle. Note that the `on_chain_*` events run for most LCEL runnables.
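Constructor callbacks are scoped to the object they are attached to. As a small sketch, the same handler can instead be passed at runtime through the `config` argument of `invoke`, so it applies only to that single call; this reuses the `MyCustomHandler` and `chain` defined above.

```python
# Pass the handler only for this invocation instead of at construction time.
response = chain.invoke(
    {"animal": "cats"},
    config={"callbacks": [MyCustomHandler()]},
)
```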
Next steps[](#next-steps "Direct link to Next steps")
------------------------------------------------------
You've now learned how to create your own custom callback handlers.
Next, check out the other how-to guides in this section, such as [how to attach callbacks to a runnable](/v0.2/docs/how_to/callbacks_attach/).
https://python.langchain.com/v0.2/docs/how_to/custom_llm/
How to create a custom LLM class
================================
This notebook goes over how to create a custom LLM wrapper, in case you want to use your own LLM or a different wrapper than one that is supported in LangChain.
Wrapping your LLM with the standard `LLM` interface allows you to use your LLM in existing LangChain programs with minimal code modifications!
As a bonus, your LLM will automatically become a LangChain `Runnable` and will benefit from some optimizations out of the box, async support, the `astream_events` API, and more.
Implementation[](#implementation "Direct link to Implementation")
------------------------------------------------------------------
There are only two required things that a custom LLM needs to implement:
| Method | Description |
| --- | --- |
| `_call` | Takes in a string and some optional stop words, and returns a string. Used by `invoke`. |
| `_llm_type` | A property that returns a string, used for logging purposes only. |
Optional implementations:
| Method | Description |
| --- | --- |
| `_identifying_params` | Used to help with identifying the model and printing the LLM; should return a dictionary. This is a **@property**. |
| `_acall` | Provides an async native implementation of `_call`, used by `ainvoke`. |
| `_stream` | Method to stream the output token by token. |
| `_astream` | Provides an async native implementation of `_stream`; in newer LangChain versions, defaults to `_stream`. |
Let's implement a simple custom LLM that just returns the first n characters of the input.
    from typing import Any, Dict, Iterator, List, Mapping, Optional

    from langchain_core.callbacks.manager import CallbackManagerForLLMRun
    from langchain_core.language_models.llms import LLM
    from langchain_core.outputs import GenerationChunk


    class CustomLLM(LLM):
        """A custom LLM that echoes the first `n` characters of the input.

        When contributing an implementation to LangChain, carefully document
        the model including the initialization parameters, include
        an example of how to initialize the model and include any relevant
        links to the underlying models documentation or API.

        Example:

            .. code-block:: python

                model = CustomLLM(n=2)
                result = model.invoke("hello")
                result = model.batch(["hello", "world"])
        """

        n: int
        """The number of characters from the prompt to be echoed."""

        def _call(
            self,
            prompt: str,
            stop: Optional[List[str]] = None,
            run_manager: Optional[CallbackManagerForLLMRun] = None,
            **kwargs: Any,
        ) -> str:
            """Run the LLM on the given input.

            Override this method to implement the LLM logic.

            Args:
                prompt: The prompt to generate from.
                stop: Stop words to use when generating. Model output is cut off at the
                    first occurrence of any of the stop substrings.
                    If stop tokens are not supported consider raising NotImplementedError.
                run_manager: Callback manager for the run.
                **kwargs: Arbitrary additional keyword arguments. These are usually passed
                    to the model provider API call.

            Returns:
                The model output as a string. Actual completions SHOULD NOT include the prompt.
            """
            if stop is not None:
                raise ValueError("stop kwargs are not permitted.")
            return prompt[: self.n]

        def _stream(
            self,
            prompt: str,
            stop: Optional[List[str]] = None,
            run_manager: Optional[CallbackManagerForLLMRun] = None,
            **kwargs: Any,
        ) -> Iterator[GenerationChunk]:
            """Stream the LLM on the given prompt.

            This method should be overridden by subclasses that support streaming.

            If not implemented, the default behavior of calls to stream will be to
            fallback to the non-streaming version of the model and return
            the output as a single chunk.

            Args:
                prompt: The prompt to generate from.
                stop: Stop words to use when generating. Model output is cut off at the
                    first occurrence of any of these substrings.
                run_manager: Callback manager for the run.
                **kwargs: Arbitrary additional keyword arguments. These are usually passed
                    to the model provider API call.

            Returns:
                An iterator of GenerationChunks.
            """
            for char in prompt[: self.n]:
                chunk = GenerationChunk(text=char)
                if run_manager:
                    run_manager.on_llm_new_token(chunk.text, chunk=chunk)
                yield chunk

        @property
        def _identifying_params(self) -> Dict[str, Any]:
            """Return a dictionary of identifying parameters."""
            return {
                # The model name allows users to specify custom token counting
                # rules in LLM monitoring applications (e.g., in LangSmith users
                # can provide per token pricing for their model and monitor
                # costs for the given LLM.)
                "model_name": "CustomChatModel",
            }

        @property
        def _llm_type(self) -> str:
            """Get the type of language model used by this LLM. Used for logging purposes only."""
            return "custom"
**API Reference:**[CallbackManagerForLLMRun](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.manager.CallbackManagerForLLMRun.html) | [LLM](https://api.python.langchain.com/en/latest/language_models/langchain_core.language_models.llms.LLM.html) | [GenerationChunk](https://api.python.langchain.com/en/latest/outputs/langchain_core.outputs.generation.GenerationChunk.html)
### Let's test it 🧪[](#lets-test-it- "Direct link to Let's test it 🧪")
This LLM will implement the standard `Runnable` interface of LangChain which many of the LangChain abstractions support!
llm = CustomLLM(n=5)print(llm)
CustomLLM Params: {'model_name': 'CustomChatModel'}
llm.invoke("This is a foobar thing")
'This '
await llm.ainvoke("world")
'world'
llm.batch(["woof woof woof", "meow meow meow"])
['woof ', 'meow ']
await llm.abatch(["woof woof woof", "meow meow meow"])
['woof ', 'meow ']
async for token in llm.astream("hello"): print(token, end="|", flush=True)
h|e|l|l|o|
Let's confirm that it integrates nicely with other `LangChain` APIs.
from langchain_core.prompts import ChatPromptTemplate
**API Reference:**[ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html)
prompt = ChatPromptTemplate.from_messages( [("system", "you are a bot"), ("human", "{input}")])
llm = CustomLLM(n=7)chain = prompt | llm
idx = 0async for event in chain.astream_events({"input": "hello there!"}, version="v1"): print(event) idx += 1 if idx > 7: # Truncate break
{'event': 'on_chain_start', 'run_id': '05f24b4f-7ea3-4fb6-8417-3aa21633462f', 'name': 'RunnableSequence', 'tags': [], 'metadata': {}, 'data': {'input': {'input': 'hello there!'}}}{'event': 'on_prompt_start', 'name': 'ChatPromptTemplate', 'run_id': '7e996251-a926-4344-809e-c425a9846d21', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'input': {'input': 'hello there!'}}}{'event': 'on_prompt_end', 'name': 'ChatPromptTemplate', 'run_id': '7e996251-a926-4344-809e-c425a9846d21', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'input': {'input': 'hello there!'}, 'output': ChatPromptValue(messages=[SystemMessage(content='you are a bot'), HumanMessage(content='hello there!')])}}{'event': 'on_llm_start', 'name': 'CustomLLM', 'run_id': 'a8766beb-10f4-41de-8750-3ea7cf0ca7e2', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'input': {'prompts': ['System: you are a bot\nHuman: hello there!']}}}{'event': 'on_llm_stream', 'name': 'CustomLLM', 'run_id': 'a8766beb-10f4-41de-8750-3ea7cf0ca7e2', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': 'S'}}{'event': 'on_chain_stream', 'run_id': '05f24b4f-7ea3-4fb6-8417-3aa21633462f', 'tags': [], 'metadata': {}, 'name': 'RunnableSequence', 'data': {'chunk': 'S'}}{'event': 'on_llm_stream', 'name': 'CustomLLM', 'run_id': 'a8766beb-10f4-41de-8750-3ea7cf0ca7e2', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': 'y'}}{'event': 'on_chain_stream', 'run_id': '05f24b4f-7ea3-4fb6-8417-3aa21633462f', 'tags': [], 'metadata': {}, 'name': 'RunnableSequence', 'data': {'chunk': 'y'}}
Contributing[](#contributing "Direct link to Contributing")
------------------------------------------------------------
We appreciate all LLM integration contributions.
Here's a checklist to help make sure your contribution gets added to LangChain:
Documentation:
* The model contains doc-strings for all initialization arguments, as these will be surfaced in the [APIReference](https://api.python.langchain.com/en/stable/langchain_api_reference.html).
* The class doc-string for the model contains a link to the model API if the model is powered by a service.
Tests:
* Add unit or integration tests to the overridden methods. Verify that `invoke`, `ainvoke`, `batch`, `stream` work if you've overridden the corresponding code; a sample test module is sketched below.
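For instance, a minimal test module for the `CustomLLM` above might look like the following sketch (the module name `custom_llm` and the exact test layout are assumptions; the expected values match the outputs shown earlier in this guide):

```python
# test_custom_llm.py -- illustrative tests; `custom_llm` is an assumed module name.
import asyncio

from custom_llm import CustomLLM


def test_invoke() -> None:
    llm = CustomLLM(n=5)
    assert llm.invoke("This is a foobar thing") == "This "


def test_ainvoke() -> None:
    # ainvoke falls back to running the sync _call in a thread, so plain asyncio.run works.
    llm = CustomLLM(n=5)
    assert asyncio.run(llm.ainvoke("world")) == "world"


def test_batch() -> None:
    llm = CustomLLM(n=5)
    assert llm.batch(["woof woof woof", "meow meow meow"]) == ["woof ", "meow "]


def test_stream() -> None:
    llm = CustomLLM(n=3)
    assert list(llm.stream("hello")) == ["h", "e", "l"]
```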
Streaming (if you're implementing it):
* Make sure to invoke the `on_llm_new_token` callback
* `on_llm_new_token` is invoked BEFORE yielding the chunk
Stop Token Behavior:
* Stop token should be respected
* Stop token should be INCLUDED as part of the response
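The `CustomLLM` example above simply rejects `stop`. If you do support stop sequences, one possible sketch is a small helper that truncates at the first stop sequence while keeping the stop text in the output (the helper name and usage are illustrative, not part of the LangChain API):

```python
from typing import List, Optional


def _enforce_stop(text: str, stop: Optional[List[str]]) -> str:
    """Illustrative helper: cut the text at the first stop sequence, keeping the stop text itself."""
    if not stop:
        return text
    cut = len(text)
    matched = ""
    for token in stop:
        idx = text.find(token)
        if idx != -1 and idx < cut:
            cut, matched = idx, token
    return text[: cut + len(matched)] if matched else text


# Inside _call, the return statement could then become (hypothetical usage):
# return _enforce_stop(prompt[: self.n], stop)
```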
Secret API Keys:
* If your model connects to an API it will likely accept API keys as part of its initialization. Use Pydantic's `SecretStr` type for secrets, so they don't get accidentally printed out when folks print the model.
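For illustration, an API-backed LLM might declare its key roughly as in the sketch below; the class name and provider are made up, and only the use of `SecretStr` is the point:

```python
from typing import Any, List, Optional

from langchain_core.callbacks.manager import CallbackManagerForLLMRun
from langchain_core.language_models.llms import LLM
from langchain_core.pydantic_v1 import SecretStr


class MyHostedLLM(LLM):
    """Sketch of an API-backed LLM; the provider and class name are hypothetical."""

    api_key: SecretStr
    """API key for the hosted model, stored as a SecretStr so it is never printed in clear text."""

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        # A real implementation would call the provider here, using
        # self.api_key.get_secret_value(); this sketch just echoes the prompt.
        return prompt

    @property
    def _llm_type(self) -> str:
        return "my-hosted-llm"


llm = MyHostedLLM(api_key="not-a-real-key")
print(llm.api_key)  # **********  -- the raw value stays hidden
print(llm.api_key.get_secret_value())  # not-a-real-key -- explicit access only
```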
| null
https://python.langchain.com/v0.2/docs/how_to/MultiQueryRetriever/ |
How to use the MultiQueryRetriever
==================================
Distance-based vector database retrieval embeds (represents) queries in high-dimensional space and finds similar embedded documents based on a distance metric. But, retrieval may produce different results with subtle changes in query wording, or if the embeddings do not capture the semantics of the data well. Prompt engineering / tuning is sometimes done to manually address these problems, but can be tedious.
The [MultiQueryRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.multi_query.MultiQueryRetriever.html) automates the process of prompt tuning by using an LLM to generate multiple queries from different perspectives for a given user input query. For each query, it retrieves a set of relevant documents and takes the unique union across all queries to get a larger set of potentially relevant documents. By generating multiple perspectives on the same question, the `MultiQueryRetriever` can mitigate some of the limitations of the distance-based retrieval and get a richer set of results.
Let's build a vectorstore using the [LLM Powered Autonomous Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) blog post by Lilian Weng from the [RAG tutorial](/v0.2/docs/tutorials/rag/):
# Build a sample vectorDBfrom langchain_chroma import Chromafrom langchain_community.document_loaders import WebBaseLoaderfrom langchain_openai import OpenAIEmbeddingsfrom langchain_text_splitters import RecursiveCharacterTextSplitter# Load blog postloader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")data = loader.load()# Splittext_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)splits = text_splitter.split_documents(data)# VectorDBembedding = OpenAIEmbeddings()vectordb = Chroma.from_documents(documents=splits, embedding=embedding)
**API Reference:**[WebBaseLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.web_base.WebBaseLoader.html) | [OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html) | [RecursiveCharacterTextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.RecursiveCharacterTextSplitter.html)
#### Simple usage[](#simple-usage "Direct link to Simple usage")
Specify the LLM to use for query generation, and the retriever will do the rest.
from langchain.retrievers.multi_query import MultiQueryRetrieverfrom langchain_openai import ChatOpenAIquestion = "What are the approaches to Task Decomposition?"llm = ChatOpenAI(temperature=0)retriever_from_llm = MultiQueryRetriever.from_llm( retriever=vectordb.as_retriever(), llm=llm)
**API Reference:**[MultiQueryRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.multi_query.MultiQueryRetriever.html) | [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html)
# Set logging for the queriesimport logginglogging.basicConfig()logging.getLogger("langchain.retrievers.multi_query").setLevel(logging.INFO)
unique_docs = retriever_from_llm.invoke(question)len(unique_docs)
INFO:langchain.retrievers.multi_query:Generated queries: ['1. How can Task Decomposition be achieved through different methods?', '2. What strategies are commonly used for Task Decomposition?', '3. What are the various techniques for breaking down tasks in Task Decomposition?']
5
Note that the underlying queries generated by the retriever are logged at the `INFO` level.
#### Supplying your own prompt[](#supplying-your-own-prompt "Direct link to Supplying your own prompt")
Under the hood, `MultiQueryRetriever` generates queries using a specific [prompt](https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/multi_query.html#MultiQueryRetriever). To customize this prompt:
1. Make a [PromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.prompt.PromptTemplate.html) with an input variable for the question;
2. Implement an [output parser](/v0.2/docs/concepts/#output-parsers) like the one below to split the result into a list of queries.
The prompt and output parser together must support the generation of a list of queries.
from typing import Listfrom langchain_core.output_parsers import BaseOutputParserfrom langchain_core.prompts import PromptTemplatefrom langchain_core.pydantic_v1 import BaseModel, Field# Output parser will split the LLM result into a list of queriesclass LineListOutputParser(BaseOutputParser[List[str]]): """Output parser for a list of lines.""" def parse(self, text: str) -> List[str]: lines = text.strip().split("\n") return linesoutput_parser = LineListOutputParser()QUERY_PROMPT = PromptTemplate( input_variables=["question"], template="""You are an AI language model assistant. Your task is to generate five different versions of the given user question to retrieve relevant documents from a vector database. By generating multiple perspectives on the user question, your goal is to help the user overcome some of the limitations of the distance-based similarity search. Provide these alternative questions separated by newlines. Original question: {question}""",)llm = ChatOpenAI(temperature=0)# Chainllm_chain = QUERY_PROMPT | llm | output_parser# Other inputsquestion = "What are the approaches to Task Decomposition?"
**API Reference:**[BaseOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.base.BaseOutputParser.html) | [PromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.prompt.PromptTemplate.html)
# Runretriever = MultiQueryRetriever( retriever=vectordb.as_retriever(), llm_chain=llm_chain, parser_key="lines") # "lines" is the key (attribute name) of the parsed output# Resultsunique_docs = retriever.invoke("What does the course say about regression?")len(unique_docs)
INFO:langchain.retrievers.multi_query:Generated queries: ['1. Can you provide insights on regression from the course material?', '2. How is regression discussed in the course content?', '3. What information does the course offer about regression?', '4. In what way is regression covered in the course?', '5. What are the teachings of the course regarding regression?']
9
| null
https://python.langchain.com/v0.2/docs/how_to/custom_chat_model/ |
How to create a custom chat model class
=======================================
Prerequisites
This guide assumes familiarity with the following concepts:
* [Chat models](/v0.2/docs/concepts/#chat-models)
In this guide, we'll learn how to create a custom chat model using LangChain abstractions.
Wrapping your LLM with the standard [`BaseChatModel`](https://api.python.langchain.com/en/latest/language_models/langchain_core.language_models.chat_models.BaseChatModel.html) interface allows you to use your LLM in existing LangChain programs with minimal code modifications!
As a bonus, your LLM will automatically become a LangChain `Runnable` and will benefit from some out-of-the-box optimizations (e.g., batch via a threadpool), async support, the `astream_events` API, etc.
Inputs and outputs[](#inputs-and-outputs "Direct link to Inputs and outputs")
------------------------------------------------------------------------------
First, we need to talk about **messages**, which are the inputs and outputs of chat models.
### Messages[](#messages "Direct link to Messages")
Chat models take messages as inputs and return a message as output.
LangChain has a few [built-in message types](/v0.2/docs/concepts/#message-types):
| Message Type | Description |
| --- | --- |
| `SystemMessage` | Used for priming AI behavior, usually passed in as the first of a sequence of input messages. |
| `HumanMessage` | Represents a message from a person interacting with the chat model. |
| `AIMessage` | Represents a message from the chat model. This can be either text or a request to invoke a tool. |
| `FunctionMessage` / `ToolMessage` | Message for passing the results of tool invocation back to the model. |
| `AIMessageChunk` / `HumanMessageChunk` / ... | Chunk variant of each type of message. |
note
`ToolMessage` and `FunctionMessage` closely follow OpenAI's `function` and `tool` roles. This is a rapidly developing field; as more models add function calling capabilities, expect additions to this schema.
from langchain_core.messages import ( AIMessage, BaseMessage, FunctionMessage, HumanMessage, SystemMessage, ToolMessage,)
**API Reference:**[AIMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html) | [BaseMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.base.BaseMessage.html) | [FunctionMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.function.FunctionMessage.html) | [HumanMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.human.HumanMessage.html) | [SystemMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.system.SystemMessage.html) | [ToolMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.tool.ToolMessage.html)
### Streaming Variant[](#streaming-variant "Direct link to Streaming Variant")
All the chat messages have a streaming variant that contains `Chunk` in the name.
from langchain_core.messages import ( AIMessageChunk, FunctionMessageChunk, HumanMessageChunk, SystemMessageChunk, ToolMessageChunk,)
**API Reference:**[AIMessageChunk](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessageChunk.html) | [FunctionMessageChunk](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.function.FunctionMessageChunk.html) | [HumanMessageChunk](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.human.HumanMessageChunk.html) | [SystemMessageChunk](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.system.SystemMessageChunk.html) | [ToolMessageChunk](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.tool.ToolMessageChunk.html)
These chunks are used when streaming output from chat models, and they all define an additive property!
AIMessageChunk(content="Hello") + AIMessageChunk(content=" World!")
AIMessageChunk(content='Hello World!')
Base Chat Model[](#base-chat-model "Direct link to Base Chat Model")
---------------------------------------------------------------------
Let's implement a chat model that echoes back the first `n` characters of the last message in the prompt!
To do so, we will inherit from `BaseChatModel` and we'll need to implement the following:
| Method/Property | Description | Required/Optional |
| --- | --- | --- |
| `_generate` | Use to generate a chat result from a prompt | Required |
| `_llm_type` (property) | Used to uniquely identify the type of the model. Used for logging. | Required |
| `_identifying_params` (property) | Represent model parameterization for tracing purposes. | Optional |
| `_stream` | Use to implement streaming. | Optional |
| `_agenerate` | Use to implement a native async method. | Optional |
| `_astream` | Use to implement async version of `_stream`. | Optional |
tip
The `_astream` implementation uses `run_in_executor` to launch the sync `_stream` in a separate thread if `_stream` is implemented, otherwise it falls back to `_agenerate`.
You can use this trick if you want to reuse the `_stream` implementation, but if you're able to implement code that's natively async that's a better solution since that code will run with less overhead.
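For reference, a natively async `_astream` for the echoing model implemented below might look like this sketch; it mirrors the sync `_stream`, would live as a method on the class, and is not required:

```python
from typing import Any, AsyncIterator, List, Optional

from langchain_core.callbacks import AsyncCallbackManagerForLLMRun
from langchain_core.messages import AIMessageChunk, BaseMessage
from langchain_core.outputs import ChatGenerationChunk


async def _astream(
    self,
    messages: List[BaseMessage],
    stop: Optional[List[str]] = None,
    run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
    **kwargs: Any,
) -> AsyncIterator[ChatGenerationChunk]:
    """Natively async streaming: no thread hop through run_in_executor."""
    last_message = messages[-1]
    for token in last_message.content[: self.n]:
        chunk = ChatGenerationChunk(message=AIMessageChunk(content=token))
        if run_manager:
            await run_manager.on_llm_new_token(token, chunk=chunk)
        yield chunk
```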
### Implementation[](#implementation "Direct link to Implementation")
from typing import Any, AsyncIterator, Dict, Iterator, List, Optionalfrom langchain_core.callbacks import ( AsyncCallbackManagerForLLMRun, CallbackManagerForLLMRun,)from langchain_core.language_models import BaseChatModel, SimpleChatModelfrom langchain_core.messages import AIMessageChunk, BaseMessage, HumanMessagefrom langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResultfrom langchain_core.runnables import run_in_executorclass CustomChatModelAdvanced(BaseChatModel): """A custom chat model that echoes the first `n` characters of the input. When contributing an implementation to LangChain, carefully document the model including the initialization parameters, include an example of how to initialize the model and include any relevant links to the underlying models documentation or API. Example: .. code-block:: python model = CustomChatModel(n=2) result = model.invoke([HumanMessage(content="hello")]) result = model.batch([[HumanMessage(content="hello")], [HumanMessage(content="world")]]) """ model_name: str """The name of the model""" n: int """The number of characters from the last message of the prompt to be echoed.""" def _generate( self, messages: List[BaseMessage], stop: Optional[List[str]] = None, run_manager: Optional[CallbackManagerForLLMRun] = None, **kwargs: Any, ) -> ChatResult: """Override the _generate method to implement the chat model logic. This can be a call to an API, a call to a local model, or any other implementation that generates a response to the input prompt. Args: messages: the prompt composed of a list of messages. stop: a list of strings on which the model should stop generating. If generation stops due to a stop token, the stop token itself SHOULD BE INCLUDED as part of the output. This is not enforced across models right now, but it's a good practice to follow since it makes it much easier to parse the output of the model downstream and understand why generation stopped. run_manager: A run manager with callbacks for the LLM. """ # Replace this with actual logic to generate a response from a list # of messages. last_message = messages[-1] tokens = last_message.content[: self.n] message = AIMessage( content=tokens, additional_kwargs={}, # Used to add additional payload (e.g., function calling request) response_metadata={ # Use for response metadata "time_in_seconds": 3, }, ) ## generation = ChatGeneration(message=message) return ChatResult(generations=[generation]) def _stream( self, messages: List[BaseMessage], stop: Optional[List[str]] = None, run_manager: Optional[CallbackManagerForLLMRun] = None, **kwargs: Any, ) -> Iterator[ChatGenerationChunk]: """Stream the output of the model. This method should be implemented if the model can generate output in a streaming fashion. If the model does not support streaming, do not implement it. In that case streaming requests will be automatically handled by the _generate method. Args: messages: the prompt composed of a list of messages. stop: a list of strings on which the model should stop generating. If generation stops due to a stop token, the stop token itself SHOULD BE INCLUDED as part of the output. This is not enforced across models right now, but it's a good practice to follow since it makes it much easier to parse the output of the model downstream and understand why generation stopped. run_manager: A run manager with callbacks for the LLM. 
""" last_message = messages[-1] tokens = last_message.content[: self.n] for token in tokens: chunk = ChatGenerationChunk(message=AIMessageChunk(content=token)) if run_manager: # This is optional in newer versions of LangChain # The on_llm_new_token will be called automatically run_manager.on_llm_new_token(token, chunk=chunk) yield chunk # Let's add some other information (e.g., response metadata) chunk = ChatGenerationChunk( message=AIMessageChunk(content="", response_metadata={"time_in_sec": 3}) ) if run_manager: # This is optional in newer versions of LangChain # The on_llm_new_token will be called automatically run_manager.on_llm_new_token(token, chunk=chunk) yield chunk @property def _llm_type(self) -> str: """Get the type of language model used by this chat model.""" return "echoing-chat-model-advanced" @property def _identifying_params(self) -> Dict[str, Any]: """Return a dictionary of identifying parameters. This information is used by the LangChain callback system, which is used for tracing purposes make it possible to monitor LLMs. """ return { # The model name allows users to specify custom token counting # rules in LLM monitoring applications (e.g., in LangSmith users # can provide per token pricing for their model and monitor # costs for the given LLM.) "model_name": self.model_name, }
**API Reference:**[AsyncCallbackManagerForLLMRun](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.manager.AsyncCallbackManagerForLLMRun.html) | [CallbackManagerForLLMRun](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.manager.CallbackManagerForLLMRun.html) | [BaseChatModel](https://api.python.langchain.com/en/latest/language_models/langchain_core.language_models.chat_models.BaseChatModel.html) | [SimpleChatModel](https://api.python.langchain.com/en/latest/language_models/langchain_core.language_models.chat_models.SimpleChatModel.html) | [AIMessageChunk](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessageChunk.html) | [BaseMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.base.BaseMessage.html) | [HumanMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.human.HumanMessage.html) | [ChatGeneration](https://api.python.langchain.com/en/latest/outputs/langchain_core.outputs.chat_generation.ChatGeneration.html) | [ChatGenerationChunk](https://api.python.langchain.com/en/latest/outputs/langchain_core.outputs.chat_generation.ChatGenerationChunk.html) | [ChatResult](https://api.python.langchain.com/en/latest/outputs/langchain_core.outputs.chat_result.ChatResult.html) | [run\_in\_executor](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.config.run_in_executor.html)
### Let's test it 🧪[](#lets-test-it- "Direct link to Let's test it 🧪")
The chat model will implement the standard `Runnable` interface of LangChain which many of the LangChain abstractions support!
model = CustomChatModelAdvanced(n=3, model_name="my_custom_model")model.invoke( [ HumanMessage(content="hello!"), AIMessage(content="Hi there human!"), HumanMessage(content="Meow!"), ])
AIMessage(content='Meo', response_metadata={'time_in_seconds': 3}, id='run-ddb42bd6-4fdd-4bd2-8be5-e11b67d3ac29-0')
model.invoke("hello")
AIMessage(content='hel', response_metadata={'time_in_seconds': 3}, id='run-4d3cc912-44aa-454b-977b-ca02be06c12e-0')
model.batch(["hello", "goodbye"])
[AIMessage(content='hel', response_metadata={'time_in_seconds': 3}, id='run-9620e228-1912-4582-8aa1-176813afec49-0'), AIMessage(content='goo', response_metadata={'time_in_seconds': 3}, id='run-1ce8cdf8-6f75-448e-82f7-1bb4a121df93-0')]
for chunk in model.stream("cat"): print(chunk.content, end="|")
c|a|t||
Please see the implementation of `_astream` in the model! If you do not implement it, then no output will stream.
async for chunk in model.astream("cat"): print(chunk.content, end="|")
c|a|t||
Let's try to use the `astream_events` API, which will also help double-check that all the callbacks were implemented!
async for event in model.astream_events("cat", version="v1"): print(event)
{'event': 'on_chat_model_start', 'run_id': '125a2a16-b9cd-40de-aa08-8aa9180b07d0', 'name': 'CustomChatModelAdvanced', 'tags': [], 'metadata': {}, 'data': {'input': 'cat'}}{'event': 'on_chat_model_stream', 'run_id': '125a2a16-b9cd-40de-aa08-8aa9180b07d0', 'tags': [], 'metadata': {}, 'name': 'CustomChatModelAdvanced', 'data': {'chunk': AIMessageChunk(content='c', id='run-125a2a16-b9cd-40de-aa08-8aa9180b07d0')}}{'event': 'on_chat_model_stream', 'run_id': '125a2a16-b9cd-40de-aa08-8aa9180b07d0', 'tags': [], 'metadata': {}, 'name': 'CustomChatModelAdvanced', 'data': {'chunk': AIMessageChunk(content='a', id='run-125a2a16-b9cd-40de-aa08-8aa9180b07d0')}}{'event': 'on_chat_model_stream', 'run_id': '125a2a16-b9cd-40de-aa08-8aa9180b07d0', 'tags': [], 'metadata': {}, 'name': 'CustomChatModelAdvanced', 'data': {'chunk': AIMessageChunk(content='t', id='run-125a2a16-b9cd-40de-aa08-8aa9180b07d0')}}{'event': 'on_chat_model_stream', 'run_id': '125a2a16-b9cd-40de-aa08-8aa9180b07d0', 'tags': [], 'metadata': {}, 'name': 'CustomChatModelAdvanced', 'data': {'chunk': AIMessageChunk(content='', response_metadata={'time_in_sec': 3}, id='run-125a2a16-b9cd-40de-aa08-8aa9180b07d0')}}{'event': 'on_chat_model_end', 'name': 'CustomChatModelAdvanced', 'run_id': '125a2a16-b9cd-40de-aa08-8aa9180b07d0', 'tags': [], 'metadata': {}, 'data': {'output': AIMessageChunk(content='cat', response_metadata={'time_in_sec': 3}, id='run-125a2a16-b9cd-40de-aa08-8aa9180b07d0')}}``````output/home/eugene/src/langchain/libs/core/langchain_core/_api/beta_decorator.py:87: LangChainBetaWarning: This API is in beta and may change in the future. warn_beta(
Contributing[](#contributing "Direct link to Contributing")
------------------------------------------------------------
We appreciate all chat model integration contributions.
Here's a checklist to help make sure your contribution gets added to LangChain:
Documentation:
* The model contains doc-strings for all initialization arguments, as these will be surfaced in the [APIReference](https://api.python.langchain.com/en/stable/langchain_api_reference.html).
* The class doc-string for the model contains a link to the model API if the model is powered by a service.
Tests:
* Add unit or integration tests to the overridden methods. Verify that `invoke`, `ainvoke`, `batch`, `stream` work if you've overridden the corresponding code.
Streaming (if you're implementing it):
* Implement the `_stream` method to get streaming working
Stop Token Behavior:
* Stop token should be respected
* Stop token should be INCLUDED as part of the response
Secret API Keys:
* If your model connects to an API it will likely accept API keys as part of its initialization. Use Pydantic's `SecretStr` type for secrets, so they don't get accidentally printed out when folks print the model.
Identifying Params:
* Include a `model_name` in identifying params
Optimizations:
Consider providing native async support to reduce the overhead from the model!
* Provide a native async implementation of `_agenerate` (used by `ainvoke`); a sketch follows below
* Provide a native async implementation of `_astream` (used by `astream`)
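As a sketch, a natively async `_agenerate` for the echoing model above could look like the following; it mirrors `_generate` and would be added as a method on the class:

```python
from typing import Any, List, Optional

from langchain_core.callbacks import AsyncCallbackManagerForLLMRun
from langchain_core.messages import AIMessage, BaseMessage
from langchain_core.outputs import ChatGeneration, ChatResult


async def _agenerate(
    self,
    messages: List[BaseMessage],
    stop: Optional[List[str]] = None,
    run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
    **kwargs: Any,
) -> ChatResult:
    """Async counterpart of _generate; avoids running the sync path in a thread."""
    last_message = messages[-1]
    message = AIMessage(content=last_message.content[: self.n])
    return ChatResult(generations=[ChatGeneration(message=message)])
```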
Next steps[](#next-steps "Direct link to Next steps")
------------------------------------------------------
You've now learned how to create your own custom chat models.
Next, check out the other how-to guides chat models in this section, like [how to get a model to return structured output](/v0.2/docs/how_to/structured_output/) or [how to track chat model token usage](/v0.2/docs/how_to/chat_token_usage_tracking/).
| null
https://python.langchain.com/v0.2/docs/how_to/custom_retriever/ |
How to create a custom Retriever
================================
Overview[](#overview "Direct link to Overview")
------------------------------------------------
Many LLM applications involve retrieving information from external data sources using a `Retriever`.
A retriever is responsible for retrieving a list of relevant `Documents` to a given user `query`.
The retrieved documents are often formatted into prompts that are fed into an LLM, allowing the LLM to use the information in them to generate an appropriate response (e.g., answering a user question based on a knowledge base).
Interface[](#interface "Direct link to Interface")
---------------------------------------------------
To create your own retriever, you need to extend the `BaseRetriever` class and implement the following methods:
| Method | Description | Required/Optional |
| --- | --- | --- |
| `_get_relevant_documents` | Get documents relevant to a query. | Required |
| `_aget_relevant_documents` | Implement to provide async native support. | Optional |
The logic inside of `_get_relevant_documents` can involve arbitrary calls to a database or to the web using requests.
tip
By inheriting from `BaseRetriever`, your retriever automatically becomes a LangChain [Runnable](/v0.2/docs/concepts/#interface) and will gain the standard `Runnable` functionality out of the box!
info
You can use a `RunnableLambda` or `RunnableGenerator` to implement a retriever.
The main benefit of implementing a retriever as a `BaseRetriever` vs. a `RunnableLambda` (a custom [runnable function](/v0.2/docs/how_to/functions/)) is that a `BaseRetriever` is a well known LangChain entity so some tooling for monitoring may implement specialized behavior for retrievers. Another difference is that a `BaseRetriever` will behave slightly differently from `RunnableLambda` in some APIs; e.g., the `start` event in `astream_events` API will be `on_retriever_start` instead of `on_chain_start`.
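As an illustration, the same keyword-matching idea could be wrapped in a `RunnableLambda` instead of a `BaseRetriever` subclass; the sketch below uses made-up sample documents and, as noted above, would be traced as a chain rather than a retriever:

```python
from typing import List

from langchain_core.documents import Document
from langchain_core.runnables import RunnableLambda

sample_docs = [
    Document(page_content="Dogs are great companions."),
    Document(page_content="Cats are independent pets."),
]


def keyword_search(query: str) -> List[Document]:
    """Return every sample document whose text contains the query (case-insensitive)."""
    return [doc for doc in sample_docs if query.lower() in doc.page_content.lower()]


lambda_retriever = RunnableLambda(keyword_search)
lambda_retriever.invoke("cat")
# [Document(page_content='Cats are independent pets.')]
```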
Example[](#example "Direct link to Example")
---------------------------------------------
Let's implement a toy retriever that returns all documents whose text contains the text in the user query.
from typing import Listfrom langchain_core.callbacks import CallbackManagerForRetrieverRunfrom langchain_core.documents import Documentfrom langchain_core.retrievers import BaseRetrieverclass ToyRetriever(BaseRetriever): """A toy retriever that contains the top k documents that contain the user query. This retriever only implements the sync method _get_relevant_documents. If the retriever were to involve file access or network access, it could benefit from a native async implementation of `_aget_relevant_documents`. As usual, with Runnables, there's a default async implementation that's provided that delegates to the sync implementation running on another thread. """ documents: List[Document] """List of documents to retrieve from.""" k: int """Number of top results to return""" def _get_relevant_documents( self, query: str, *, run_manager: CallbackManagerForRetrieverRun ) -> List[Document]: """Sync implementations for retriever.""" matching_documents = [] for document in self.documents: if len(matching_documents) > self.k: return matching_documents if query.lower() in document.page_content.lower(): matching_documents.append(document) return matching_documents # Optional: Provide a more efficient native implementation by overriding # _aget_relevant_documents # async def _aget_relevant_documents( # self, query: str, *, run_manager: AsyncCallbackManagerForRetrieverRun # ) -> List[Document]: # """Asynchronously get documents relevant to a query. # Args: # query: String to find relevant documents for # run_manager: The callbacks handler to use # Returns: # List of relevant documents # """
**API Reference:**[CallbackManagerForRetrieverRun](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.manager.CallbackManagerForRetrieverRun.html) | [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) | [BaseRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain_core.retrievers.BaseRetriever.html)
Test it 🧪[](#test-it- "Direct link to Test it 🧪")
----------------------------------------------------
documents = [ Document( page_content="Dogs are great companions, known for their loyalty and friendliness.", metadata={"type": "dog", "trait": "loyalty"}, ), Document( page_content="Cats are independent pets that often enjoy their own space.", metadata={"type": "cat", "trait": "independence"}, ), Document( page_content="Goldfish are popular pets for beginners, requiring relatively simple care.", metadata={"type": "fish", "trait": "low maintenance"}, ), Document( page_content="Parrots are intelligent birds capable of mimicking human speech.", metadata={"type": "bird", "trait": "intelligence"}, ), Document( page_content="Rabbits are social animals that need plenty of space to hop around.", metadata={"type": "rabbit", "trait": "social"}, ),]retriever = ToyRetriever(documents=documents, k=3)
retriever.invoke("that")
[Document(page_content='Cats are independent pets that often enjoy their own space.', metadata={'type': 'cat', 'trait': 'independence'}), Document(page_content='Rabbits are social animals that need plenty of space to hop around.', metadata={'type': 'rabbit', 'trait': 'social'})]
It's a **runnable** so it'll benefit from the standard Runnable Interface! 🤩
await retriever.ainvoke("that")
[Document(page_content='Cats are independent pets that often enjoy their own space.', metadata={'type': 'cat', 'trait': 'independence'}), Document(page_content='Rabbits are social animals that need plenty of space to hop around.', metadata={'type': 'rabbit', 'trait': 'social'})]
retriever.batch(["dog", "cat"])
[[Document(page_content='Dogs are great companions, known for their loyalty and friendliness.', metadata={'type': 'dog', 'trait': 'loyalty'})], [Document(page_content='Cats are independent pets that often enjoy their own space.', metadata={'type': 'cat', 'trait': 'independence'})]]
async for event in retriever.astream_events("bar", version="v1"): print(event)
{'event': 'on_retriever_start', 'run_id': 'f96f268d-8383-4921-b175-ca583924d9ff', 'name': 'ToyRetriever', 'tags': [], 'metadata': {}, 'data': {'input': 'bar'}}{'event': 'on_retriever_stream', 'run_id': 'f96f268d-8383-4921-b175-ca583924d9ff', 'tags': [], 'metadata': {}, 'name': 'ToyRetriever', 'data': {'chunk': []}}{'event': 'on_retriever_end', 'name': 'ToyRetriever', 'run_id': 'f96f268d-8383-4921-b175-ca583924d9ff', 'tags': [], 'metadata': {}, 'data': {'output': []}}
Contributing[](#contributing "Direct link to Contributing")
------------------------------------------------------------
We appreciate contributions of interesting retrievers!
Here's a checklist to help make sure your contribution gets added to LangChain:
Documentation:
* The retriever contains doc-strings for all initialization arguments, as these will be surfaced in the [API Reference](https://api.python.langchain.com/en/stable/langchain_api_reference.html).
* The class doc-string for the model contains a link to any relevant APIs used for the retriever (e.g., if the retriever is retrieving from wikipedia, it'll be good to link to the wikipedia API!)
Tests:
* Add unit or integration tests to verify that `invoke` and `ainvoke` work.
Optimizations:
If the retriever is connecting to external data sources (e.g., an API or a file), it'll almost certainly benefit from an async native optimization!
* Provide a native async implementation of `_aget_relevant_documents` (used by `ainvoke`)
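A minimal native async version for the `ToyRetriever` above might look like the sketch below; it simply mirrors the sync logic and would be added as a method on the class, and the benefit is greatest when the lookup involves real I/O:

```python
from typing import List

from langchain_core.callbacks import AsyncCallbackManagerForRetrieverRun
from langchain_core.documents import Document


async def _aget_relevant_documents(
    self, query: str, *, run_manager: AsyncCallbackManagerForRetrieverRun
) -> List[Document]:
    """Async implementation; a real retriever would await an API or database call here."""
    matching_documents: List[Document] = []
    for document in self.documents:
        if len(matching_documents) > self.k:
            return matching_documents
        if query.lower() in document.page_content.lower():
            matching_documents.append(document)
    return matching_documents
```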
| null
https://python.langchain.com/v0.2/docs/how_to/add_scores_retriever/ |
How to add scores to retriever results
======================================
Retrievers will return sequences of [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) objects, which by default include no information about the process that retrieved them (e.g., a similarity score against a query). Here we demonstrate how to add retrieval scores to the `.metadata` of documents:
1. From [vectorstore retrievers](/v0.2/docs/how_to/vectorstore_retriever/);
2. From higher-order LangChain retrievers, such as [SelfQueryRetriever](/v0.2/docs/how_to/self_query/) or [MultiVectorRetriever](/v0.2/docs/how_to/multi_vector/).
For (1), we will implement a short wrapper function around the corresponding vector store. For (2), we will update a method of the corresponding class.
Create vector store[](#create-vector-store "Direct link to Create vector store")
---------------------------------------------------------------------------------
First we populate a vector store with some data. We will use a [PineconeVectorStore](https://api.python.langchain.com/en/latest/vectorstores/langchain_pinecone.vectorstores.PineconeVectorStore.html), but this guide is compatible with any LangChain vector store that implements a `.similarity_search_with_score` method.
from langchain_core.documents import Documentfrom langchain_openai import OpenAIEmbeddingsfrom langchain_pinecone import PineconeVectorStoredocs = [ Document( page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"}, ), Document( page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2}, ), Document( page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6}, ), Document( page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3}, ), Document( page_content="Toys come alive and have a blast doing so", metadata={"year": 1995, "genre": "animated"}, ), Document( page_content="Three men walk into the Zone, three men walk out of the Zone", metadata={ "year": 1979, "director": "Andrei Tarkovsky", "genre": "thriller", "rating": 9.9, }, ),]vectorstore = PineconeVectorStore.from_documents( docs, index_name="sample", embedding=OpenAIEmbeddings())
**API Reference:**[Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) | [OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html) | [PineconeVectorStore](https://api.python.langchain.com/en/latest/vectorstores/langchain_pinecone.vectorstores.PineconeVectorStore.html)
Retriever[](#retriever "Direct link to Retriever")
---------------------------------------------------
To obtain scores from a vector store retriever, we wrap the underlying vector store's `.similarity_search_with_score` method in a short function that packages scores into the associated document's metadata.
We add a `@chain` decorator to the function to create a [Runnable](/v0.2/docs/concepts/#langchain-expression-language) that can be used similarly to a typical retriever.
from typing import Listfrom langchain_core.documents import Documentfrom langchain_core.runnables import chain@chaindef retriever(query: str) -> List[Document]: docs, scores = zip(*vectorstore.similarity_search_with_score(query)) for doc, score in zip(docs, scores): doc.metadata["score"] = score return docs
**API Reference:**[Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) | [chain](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.chain.html)
result = retriever.invoke("dinosaur")result
(Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'genre': 'science fiction', 'rating': 7.7, 'year': 1993.0, 'score': 0.84429127}), Document(page_content='Toys come alive and have a blast doing so', metadata={'genre': 'animated', 'year': 1995.0, 'score': 0.792038262}), Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'director': 'Andrei Tarkovsky', 'genre': 'thriller', 'rating': 9.9, 'year': 1979.0, 'score': 0.751571238}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'director': 'Satoshi Kon', 'rating': 8.6, 'year': 2006.0, 'score': 0.747471571}))
Note that similarity scores from the retrieval step are included in the metadata of the above documents.
SelfQueryRetriever[](#selfqueryretriever "Direct link to SelfQueryRetriever")
------------------------------------------------------------------------------
`SelfQueryRetriever` will use an LLM to generate a query that is potentially structured; for example, it can construct filters for the retrieval on top of the usual semantic-similarity driven selection. See [this guide](/v0.2/docs/how_to/self_query/) for more detail.
`SelfQueryRetriever` includes a short (1-2 line) method `_get_docs_with_query` that executes the `vectorstore` search. We can subclass `SelfQueryRetriever` and override this method to propagate similarity scores.
First, following the [how-to guide](/v0.2/docs/how_to/self_query/), we will need to establish some metadata on which to filter:
from langchain.chains.query_constructor.base import AttributeInfofrom langchain.retrievers.self_query.base import SelfQueryRetrieverfrom langchain_openai import ChatOpenAImetadata_field_info = [ AttributeInfo( name="genre", description="The genre of the movie. One of ['science fiction', 'comedy', 'drama', 'thriller', 'romance', 'action', 'animated']", type="string", ), AttributeInfo( name="year", description="The year the movie was released", type="integer", ), AttributeInfo( name="director", description="The name of the movie director", type="string", ), AttributeInfo( name="rating", description="A 1-10 rating for the movie", type="float" ),]document_content_description = "Brief summary of a movie"llm = ChatOpenAI(temperature=0)
**API Reference:**[AttributeInfo](https://api.python.langchain.com/en/latest/chains/langchain.chains.query_constructor.schema.AttributeInfo.html) | [SelfQueryRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.self_query.base.SelfQueryRetriever.html) | [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html)
We then override the `_get_docs_with_query` to use the `similarity_search_with_score` method of the underlying vector store:
from typing import Any, Dictclass CustomSelfQueryRetriever(SelfQueryRetriever): def _get_docs_with_query( self, query: str, search_kwargs: Dict[str, Any] ) -> List[Document]: """Get docs, adding score information.""" docs, scores = zip( *vectorstore.similarity_search_with_score(query, **search_kwargs) ) for doc, score in zip(docs, scores): doc.metadata["score"] = score return docs
Invoking this retriever will now include similarity scores in the document metadata. Note that the underlying structured-query capabilities of `SelfQueryRetriever` are retained.
retriever = CustomSelfQueryRetriever.from_llm( llm, vectorstore, document_content_description, metadata_field_info,)result = retriever.invoke("dinosaur movie with rating less than 8")result
(Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'genre': 'science fiction', 'rating': 7.7, 'year': 1993.0, 'score': 0.84429127}),)
MultiVectorRetriever[](#multivectorretriever "Direct link to MultiVectorRetriever")
------------------------------------------------------------------------------------
`MultiVectorRetriever` allows you to associate multiple vectors with a single document. This can be useful in a number of applications. For example, we can index small chunks of a larger document and run the retrieval on the chunks, but return the larger "parent" document when invoking the retriever. [ParentDocumentRetriever](/v0.2/docs/how_to/parent_document_retriever/), a subclass of `MultiVectorRetriever`, includes convenience methods for populating a vector store to support this. Further applications are detailed in this [how-to guide](/v0.2/docs/how_to/multi_vector/).
To propagate similarity scores through this retriever, we can again subclass `MultiVectorRetriever` and override a method. This time we will override `_get_relevant_documents`.
First, we prepare some fake data. We generate fake "whole documents" and store them in a document store; here we will use a simple [InMemoryStore](https://api.python.langchain.com/en/latest/stores/langchain_core.stores.InMemoryBaseStore.html).
from langchain.storage import InMemoryStorefrom langchain_text_splitters import RecursiveCharacterTextSplitter# The storage layer for the parent documentsdocstore = InMemoryStore()fake_whole_documents = [ ("fake_id_1", Document(page_content="fake whole document 1")), ("fake_id_2", Document(page_content="fake whole document 2")),]docstore.mset(fake_whole_documents)
**API Reference:**[InMemoryStore](https://api.python.langchain.com/en/latest/stores/langchain_core.stores.InMemoryStore.html) | [RecursiveCharacterTextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.RecursiveCharacterTextSplitter.html)
Next we will add some fake "sub-documents" to our vector store. We can link these sub-documents to the parent documents by populating the `"doc_id"` key in its metadata.
docs = [ Document( page_content="A snippet from a larger document discussing cats.", metadata={"doc_id": "fake_id_1"}, ), Document( page_content="A snippet from a larger document discussing discourse.", metadata={"doc_id": "fake_id_1"}, ), Document( page_content="A snippet from a larger document discussing chocolate.", metadata={"doc_id": "fake_id_2"}, ),]vectorstore.add_documents(docs)
['62a85353-41ff-4346-bff7-be6c8ec2ed89', '5d4a0e83-4cc5-40f1-bc73-ed9cbad0ee15', '8c1d9a56-120f-45e4-ba70-a19cd19a38f4']
To propagate the scores, we subclass `MultiVectorRetriever` and override its `_get_relevant_documents` method. Here we will make two changes:
1. We will add similarity scores to the metadata of the corresponding "sub-documents" using the `similarity_search_with_score` method of the underlying vector store as above;
2. We will include a list of these sub-documents in the metadata of the retrieved parent document. This surfaces what snippets of text were identified by the retrieval, together with their corresponding similarity scores.
from collections import defaultdictfrom langchain.retrievers import MultiVectorRetrieverfrom langchain_core.callbacks import CallbackManagerForRetrieverRunclass CustomMultiVectorRetriever(MultiVectorRetriever): def _get_relevant_documents( self, query: str, *, run_manager: CallbackManagerForRetrieverRun ) -> List[Document]: """Get documents relevant to a query. Args: query: String to find relevant documents for run_manager: The callbacks handler to use Returns: List of relevant documents """ results = self.vectorstore.similarity_search_with_score( query, **self.search_kwargs ) # Map doc_ids to list of sub-documents, adding scores to metadata id_to_doc = defaultdict(list) for doc, score in results: doc_id = doc.metadata.get("doc_id") if doc_id: doc.metadata["score"] = score id_to_doc[doc_id].append(doc) # Fetch documents corresponding to doc_ids, retaining sub_docs in metadata docs = [] for _id, sub_docs in id_to_doc.items(): docstore_docs = self.docstore.mget([_id]) if docstore_docs: if doc := docstore_docs[0]: doc.metadata["sub_docs"] = sub_docs docs.append(doc) return docs
**API Reference:**[MultiVectorRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.multi_vector.MultiVectorRetriever.html) | [CallbackManagerForRetrieverRun](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.manager.CallbackManagerForRetrieverRun.html)
Invoking this retriever, we can see that it identifies the correct parent document, including the relevant snippet from the sub-document with similarity score.
retriever = CustomMultiVectorRetriever(vectorstore=vectorstore, docstore=docstore)retriever.invoke("cat")
[Document(page_content='fake whole document 1', metadata={'sub_docs': [Document(page_content='A snippet from a larger document discussing cats.', metadata={'doc_id': 'fake_id_1', 'score': 0.831276655})]})]
| null
https://python.langchain.com/v0.2/docs/how_to/debugging/ |
How to debug your LLM apps
==========================
Like building any type of software, at some point you'll need to debug when building with LLMs. A model call will fail, or model output will be misformatted, or there will be some nested model calls and it won't be clear where along the way an incorrect output was created.
There are three main methods for debugging:
* Verbose Mode: This adds print statements for "important" events in your chain.
* Debug Mode: This adds logging statements for ALL events in your chain.
* LangSmith Tracing: This logs events to [LangSmith](https://docs.smith.langchain.com/) to allow for visualization there.
| | Verbose Mode | Debug Mode | LangSmith Tracing |
| --- | --- | --- | --- |
| Free | ✅ | ✅ | ✅ |
| UI | ❌ | ❌ | ✅ |
| Persisted | ❌ | ❌ | ✅ |
| See all events | ❌ | ✅ | ✅ |
| See "important" events | ✅ | ❌ | ✅ |
| Runs Locally | ✅ | ✅ | ❌ |
Tracing[](#tracing "Direct link to Tracing")
---------------------------------------------
Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls. As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent. The best way to do this is with [LangSmith](https://smith.langchain.com).
After you sign up at the link above, make sure to set your environment variables to start logging traces:
export LANGCHAIN_TRACING_V2="true"export LANGCHAIN_API_KEY="..."
Or, if in a notebook, you can set them with:
import getpassimport osos.environ["LANGCHAIN_TRACING_V2"] = "true"os.environ["LANGCHAIN_API_KEY"] = getpass.getpass()
Let's suppose we have an agent, and want to visualize the actions it takes and tool outputs it receives. Without any debugging, here's what we see:
* OpenAI
* Anthropic
* Azure
* Google
* Cohere
* FireworksAI
* Groq
* MistralAI
* TogetherAI
pip install -qU langchain-openai
import getpassimport osos.environ["OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAIllm = ChatOpenAI(model="gpt-3.5-turbo-0125")
pip install -qU langchain-anthropic
import getpassimport osos.environ["ANTHROPIC_API_KEY"] = getpass.getpass()from langchain_anthropic import ChatAnthropicllm = ChatAnthropic(model="claude-3-sonnet-20240229")
pip install -qU langchain-openai
import getpassimport osos.environ["AZURE_OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import AzureChatOpenAIllm = AzureChatOpenAI( azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"], openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],)
pip install -qU langchain-google-vertexai
import getpassimport osos.environ["GOOGLE_API_KEY"] = getpass.getpass()from langchain_google_vertexai import ChatVertexAIllm = ChatVertexAI(model="gemini-pro")
pip install -qU langchain-cohere
import getpassimport osos.environ["COHERE_API_KEY"] = getpass.getpass()from langchain_cohere import ChatCoherellm = ChatCohere(model="command-r")
pip install -qU langchain-fireworks
import getpassimport osos.environ["FIREWORKS_API_KEY"] = getpass.getpass()from langchain_fireworks import ChatFireworksllm = ChatFireworks(model="accounts/fireworks/models/mixtral-8x7b-instruct")
pip install -qU langchain-groq
import getpassimport osos.environ["GROQ_API_KEY"] = getpass.getpass()from langchain_groq import ChatGroqllm = ChatGroq(model="llama3-8b-8192")
pip install -qU langchain-mistralai
import getpassimport osos.environ["MISTRAL_API_KEY"] = getpass.getpass()from langchain_mistralai import ChatMistralAIllm = ChatMistralAI(model="mistral-large-latest")
pip install -qU langchain-openai
import getpassimport osos.environ["TOGETHER_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAIllm = ChatOpenAI( base_url="https://api.together.xyz/v1", api_key=os.environ["TOGETHER_API_KEY"], model="mistralai/Mixtral-8x7B-Instruct-v0.1",)
from langchain.agents import AgentExecutor, create_tool_calling_agentfrom langchain_community.tools.tavily_search import TavilySearchResultsfrom langchain_core.prompts import ChatPromptTemplatetools = [TavilySearchResults(max_results=1)]prompt = ChatPromptTemplate.from_messages( [ ( "system", "You are a helpful assistant.", ), ("placeholder", "{chat_history}"), ("human", "{input}"), ("placeholder", "{agent_scratchpad}"), ])# Construct the Tools agentagent = create_tool_calling_agent(llm, tools, prompt)# Create an agent executor by passing in the agent and toolsagent_executor = AgentExecutor(agent=agent, tools=tools)agent_executor.invoke( {"input": "Who directed the 2023 film Oppenheimer and what is their age in days?"})
**API Reference:**[AgentExecutor](https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.AgentExecutor.html) | [create\_tool\_calling\_agent](https://api.python.langchain.com/en/latest/agents/langchain.agents.tool_calling_agent.base.create_tool_calling_agent.html) | [TavilySearchResults](https://api.python.langchain.com/en/latest/tools/langchain_community.tools.tavily_search.tool.TavilySearchResults.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html)
{'input': 'Who directed the 2023 film Oppenheimer and what is their age in days?', 'output': 'The 2023 film "Oppenheimer" was directed by Christopher Nolan.\n\nTo calculate Christopher Nolan\'s age in days, we first need his birthdate, which is July 30, 1970. Let\'s calculate his age in days from his birthdate to today\'s date, December 7, 2023.\n\n1. Calculate the total number of days from July 30, 1970, to December 7, 2023.\n2. Nolan was born on July 30, 1970. From July 30, 1970, to July 30, 2023, is 53 years.\n3. From July 30, 2023, to December 7, 2023, is 130 days.\n\nNow, calculate the total days:\n- 53 years = 53 x 365 = 19,345 days\n- Adding leap years from 1970 to 2023: There are 13 leap years (1972, 1976, 1980, 1984, 1988, 1992, 1996, 2000, 2004, 2008, 2012, 2016, 2020). So, add 13 days.\n- Total days from years and leap years = 19,345 + 13 = 19,358 days\n- Add the days from July 30, 2023, to December 7, 2023 = 130 days\n\nTotal age in days = 19,358 + 130 = 19,488 days\n\nChristopher Nolan is 19,488 days old as of December 7, 2023.'}
We don't get much output, but since we set up LangSmith we can easily see what happened under the hood:
[https://smith.langchain.com/public/a89ff88f-9ddc-4757-a395-3a1b365655bf/r](https://smith.langchain.com/public/a89ff88f-9ddc-4757-a395-3a1b365655bf/r)
`set_debug` and `set_verbose`[](#set_debug-and-set_verbose "Direct link to set_debug-and-set_verbose")
-------------------------------------------------------------------------------------------------------
If you're prototyping in Jupyter Notebooks or running Python scripts, it can be helpful to print out the intermediate steps of a chain run.
There are a number of ways to enable printing at varying degrees of verbosity.
Note: These still work even with LangSmith enabled, so you can have both turned on and running at the same time.
### `set_verbose(True)`[](#set_verbosetrue "Direct link to set_verbosetrue")
Setting the `verbose` flag will print out inputs and outputs in a slightly more readable format and will skip logging certain raw outputs (like the token usage stats for an LLM call) so that you can focus on application logic.
from langchain.globals import set_verboseset_verbose(True)agent_executor = AgentExecutor(agent=agent, tools=tools)agent_executor.invoke( {"input": "Who directed the 2023 film Oppenheimer and what is their age in days?"})
**API Reference:**[set\_verbose](https://api.python.langchain.com/en/latest/globals/langchain.globals.set_verbose.html)
[1m> Entering new AgentExecutor chain...[0m[32;1m[1;3mInvoking: `tavily_search_results_json` with `{'query': 'director of the 2023 film Oppenheimer'}`[0m[36;1m[1;3m[{'url': 'https://m.imdb.com/title/tt15398776/', 'content': 'Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb.'}][0m[32;1m[1;3mInvoking: `tavily_search_results_json` with `{'query': 'birth date of Christopher Nolan'}`[0m[36;1m[1;3m[{'url': 'https://m.imdb.com/name/nm0634240/bio/', 'content': 'Christopher Nolan. Writer: Tenet. Best known for his cerebral, often nonlinear, storytelling, acclaimed Academy Award winner writer/director/producer Sir Christopher Nolan CBE was born in London, England. Over the course of more than 25 years of filmmaking, Nolan has gone from low-budget independent films to working on some of the biggest blockbusters ever made and became one of the most ...'}][0m[32;1m[1;3mInvoking: `tavily_search_results_json` with `{'query': 'Christopher Nolan birth date'}`responded: The 2023 film **Oppenheimer** was directed by **Christopher Nolan**.To calculate Christopher Nolan's age in days, I need his exact birth date. Let me find that information for you.[0m[36;1m[1;3m[{'url': 'https://m.imdb.com/name/nm0634240/bio/', 'content': 'Christopher Nolan. Writer: Tenet. Best known for his cerebral, often nonlinear, storytelling, acclaimed Academy Award winner writer/director/producer Sir Christopher Nolan CBE was born in London, England. Over the course of more than 25 years of filmmaking, Nolan has gone from low-budget independent films to working on some of the biggest blockbusters ever made and became one of the most ...'}][0m[32;1m[1;3mInvoking: `tavily_search_results_json` with `{'query': 'Christopher Nolan date of birth'}`responded: It appears that I need to refine my search to get the exact birth date of Christopher Nolan. Let me try again to find that specific information.[0m[36;1m[1;3m[{'url': 'https://m.imdb.com/name/nm0634240/bio/', 'content': 'Christopher Nolan. Writer: Tenet. Best known for his cerebral, often nonlinear, storytelling, acclaimed Academy Award winner writer/director/producer Sir Christopher Nolan CBE was born in London, England. Over the course of more than 25 years of filmmaking, Nolan has gone from low-budget independent films to working on some of the biggest blockbusters ever made and became one of the most ...'}][0m[32;1m[1;3mI am currently unable to retrieve the exact birth date of Christopher Nolan from the sources available. However, it is widely known that he was born on July 30, 1970. Using this date, I can calculate his age in days as of today.Let's calculate:- Christopher Nolan's birth date: July 30, 1970.- Today's date: December 7, 2023.The number of days between these two dates can be calculated as follows:1. From July 30, 1970, to July 30, 2023, is 53 years.2. From July 30, 2023, to December 7, 2023, is 130 days.Calculating the total days for 53 years (considering leap years):- 53 years × 365 days/year = 19,345 days- Adding leap years (1972, 1976, ..., 2020, 2024 - 13 leap years): 13 daysTotal days from birth until July 30, 2023: 19,345 + 13 = 19,358 daysAdding the days from July 30, 2023, to December 7, 2023: 130 daysTotal age in days as of December 7, 2023: 19,358 + 130 = 19,488 days.Therefore, Christopher Nolan is 19,488 days old as of December 7, 2023.[0m[1m> Finished chain.[0m
{'input': 'Who directed the 2023 film Oppenheimer and what is their age in days?', 'output': "I am currently unable to retrieve the exact birth date of Christopher Nolan from the sources available. However, it is widely known that he was born on July 30, 1970. Using this date, I can calculate his age in days as of today.\n\nLet's calculate:\n\n- Christopher Nolan's birth date: July 30, 1970.\n- Today's date: December 7, 2023.\n\nThe number of days between these two dates can be calculated as follows:\n\n1. From July 30, 1970, to July 30, 2023, is 53 years.\n2. From July 30, 2023, to December 7, 2023, is 130 days.\n\nCalculating the total days for 53 years (considering leap years):\n- 53 years × 365 days/year = 19,345 days\n- Adding leap years (1972, 1976, ..., 2020, 2024 - 13 leap years): 13 days\n\nTotal days from birth until July 30, 2023: 19,345 + 13 = 19,358 days\nAdding the days from July 30, 2023, to December 7, 2023: 130 days\n\nTotal age in days as of December 7, 2023: 19,358 + 130 = 19,488 days.\n\nTherefore, Christopher Nolan is 19,488 days old as of December 7, 2023."}
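Alternatively, if you only want this extra logging for a specific component rather than globally, many LangChain components (including `AgentExecutor`) also accept a `verbose=True` constructor argument. As a minimal sketch, reusing the agent from above:

```python
from langchain.agents import AgentExecutor

# Scope verbosity to this executor instead of setting the global flag
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.invoke(
    {"input": "Who directed the 2023 film Oppenheimer and what is their age in days?"}
)
```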
### `set_debug(True)`[](#set_debugtrue "Direct link to set_debugtrue")
Setting the global `debug` flag will cause all LangChain components with callback support (chains, models, agents, tools, retrievers) to print the inputs they receive and outputs they generate. This is the most verbose setting and will fully log raw inputs and outputs.
from langchain.globals import set_debugset_debug(True)set_verbose(False)agent_executor = AgentExecutor(agent=agent, tools=tools)agent_executor.invoke( {"input": "Who directed the 2023 film Oppenheimer and what is their age in days?"})
**API Reference:**[set\_debug](https://api.python.langchain.com/en/latest/globals/langchain.globals.set_debug.html)
[32;1m[1;3m[chain/start][0m [1m[1:chain:AgentExecutor] Entering Chain run with input:[0m{ "input": "Who directed the 2023 film Oppenheimer and what is their age in days?"}[32;1m[1;3m[chain/start][0m [1m[1:chain:AgentExecutor > 2:chain:RunnableSequence] Entering Chain run with input:[0m{ "input": ""}[32;1m[1;3m[chain/start][0m [1m[1:chain:AgentExecutor > 2:chain:RunnableSequence > 3:chain:RunnableAssign<agent_scratchpad>] Entering Chain run with input:[0m{ "input": ""}[32;1m[1;3m[chain/start][0m [1m[1:chain:AgentExecutor > 2:chain:RunnableSequence > 3:chain:RunnableAssign<agent_scratchpad> > 4:chain:RunnableParallel<agent_scratchpad>] Entering Chain run with input:[0m{ "input": ""}[32;1m[1;3m[chain/start][0m [1m[1:chain:AgentExecutor > 2:chain:RunnableSequence > 3:chain:RunnableAssign<agent_scratchpad> > 4:chain:RunnableParallel<agent_scratchpad> > 5:chain:RunnableLambda] Entering Chain run with input:[0m{ "input": ""}[36;1m[1;3m[chain/end][0m [1m[1:chain:AgentExecutor > 2:chain:RunnableSequence > 3:chain:RunnableAssign<agent_scratchpad> > 4:chain:RunnableParallel<agent_scratchpad> > 5:chain:RunnableLambda] [1ms] Exiting Chain run with output:[0m{ "output": []}[36;1m[1;3m[chain/end][0m [1m[1:chain:AgentExecutor > 2:chain:RunnableSequence > 3:chain:RunnableAssign<agent_scratchpad> > 4:chain:RunnableParallel<agent_scratchpad>] [2ms] Exiting Chain run with output:[0m{ "agent_scratchpad": []}[36;1m[1;3m[chain/end][0m [1m[1:chain:AgentExecutor > 2:chain:RunnableSequence > 3:chain:RunnableAssign<agent_scratchpad>] [5ms] Exiting Chain run with output:[0m{ "input": "Who directed the 2023 film Oppenheimer and what is their age in days?", "intermediate_steps": [], "agent_scratchpad": []}[32;1m[1;3m[chain/start][0m [1m[1:chain:AgentExecutor > 2:chain:RunnableSequence > 6:prompt:ChatPromptTemplate] Entering Prompt run with input:[0m{ "input": "Who directed the 2023 film Oppenheimer and what is their age in days?", "intermediate_steps": [], "agent_scratchpad": []}[36;1m[1;3m[chain/end][0m [1m[1:chain:AgentExecutor > 2:chain:RunnableSequence > 6:prompt:ChatPromptTemplate] [1ms] Exiting Prompt run with output:[0m[outputs][32;1m[1;3m[llm/start][0m [1m[1:chain:AgentExecutor > 2:chain:RunnableSequence > 7:llm:ChatOpenAI] Entering LLM run with input:[0m{ "prompts": [ "System: You are a helpful assistant.\nHuman: Who directed the 2023 film Oppenheimer and what is their age in days?" 
]}[36;1m[1;3m[llm/end][0m [1m[1:chain:AgentExecutor > 2:chain:RunnableSequence > 7:llm:ChatOpenAI] [3.17s] Exiting LLM run with output:[0m{ "generations": [ [ { "text": "", "generation_info": { "finish_reason": "tool_calls" }, "type": "ChatGenerationChunk", "message": { "lc": 1, "type": "constructor", "id": [ "langchain", "schema", "messages", "AIMessageChunk" ], "kwargs": { "content": "", "example": false, "additional_kwargs": { "tool_calls": [ { "index": 0, "id": "call_fnfq6GjSQED4iF6lo4rxkUup", "function": { "arguments": "{\"query\": \"director of the 2023 film Oppenheimer\"}", "name": "tavily_search_results_json" }, "type": "function" }, { "index": 1, "id": "call_mwhVi6pk49f4OIo5rOWrr4TD", "function": { "arguments": "{\"query\": \"birth date of Christopher Nolan\"}", "name": "tavily_search_results_json" }, "type": "function" } ] }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"query\": \"director of the 2023 film Oppenheimer\"}", "id": "call_fnfq6GjSQED4iF6lo4rxkUup", "index": 0 }, { "name": "tavily_search_results_json", "args": "{\"query\": \"birth date of Christopher Nolan\"}", "id": "call_mwhVi6pk49f4OIo5rOWrr4TD", "index": 1 } ], "response_metadata": { "finish_reason": "tool_calls" }, "id": "run-6e160323-15f9-491d-aadf-b5d337e9e2a1", "tool_calls": [ { "name": "tavily_search_results_json", "args": { "query": "director of the 2023 film Oppenheimer" }, "id": "call_fnfq6GjSQED4iF6lo4rxkUup" }, { "name": "tavily_search_results_json", "args": { "query": "birth date of Christopher Nolan" }, "id": "call_mwhVi6pk49f4OIo5rOWrr4TD" } ], "invalid_tool_calls": [] } } } ] ], "llm_output": null, "run": null}[32;1m[1;3m[chain/start][0m [1m[1:chain:AgentExecutor > 2:chain:RunnableSequence > 8:parser:ToolsAgentOutputParser] Entering Parser run with input:[0m[inputs][36;1m[1;3m[chain/end][0m [1m[1:chain:AgentExecutor > 2:chain:RunnableSequence > 8:parser:ToolsAgentOutputParser] [1ms] Exiting Parser run with output:[0m[outputs][36;1m[1;3m[chain/end][0m [1m[1:chain:AgentExecutor > 2:chain:RunnableSequence] [3.18s] Exiting Chain run with output:[0m[outputs][32;1m[1;3m[tool/start][0m [1m[1:chain:AgentExecutor > 9:tool:tavily_search_results_json] Entering Tool run with input:[0m"{'query': 'director of the 2023 film Oppenheimer'}"``````outputError in ConsoleCallbackHandler.on_tool_end callback: AttributeError("'list' object has no attribute 'strip'")``````output[32;1m[1;3m[tool/start][0m [1m[1:chain:AgentExecutor > 10:tool:tavily_search_results_json] Entering Tool run with input:[0m"{'query': 'birth date of Christopher Nolan'}"``````outputError in ConsoleCallbackHandler.on_tool_end callback: AttributeError("'list' object has no attribute 'strip'")``````output[32;1m[1;3m[chain/start][0m [1m[1:chain:AgentExecutor > 11:chain:RunnableSequence] Entering Chain run with input:[0m{ "input": ""}[32;1m[1;3m[chain/start][0m [1m[1:chain:AgentExecutor > 11:chain:RunnableSequence > 12:chain:RunnableAssign<agent_scratchpad>] Entering Chain run with input:[0m{ "input": ""}[32;1m[1;3m[chain/start][0m [1m[1:chain:AgentExecutor > 11:chain:RunnableSequence > 12:chain:RunnableAssign<agent_scratchpad> > 13:chain:RunnableParallel<agent_scratchpad>] Entering Chain run with input:[0m{ "input": ""}[32;1m[1;3m[chain/start][0m [1m[1:chain:AgentExecutor > 11:chain:RunnableSequence > 12:chain:RunnableAssign<agent_scratchpad> > 13:chain:RunnableParallel<agent_scratchpad> > 14:chain:RunnableLambda] Entering Chain run with input:[0m{ "input": ""}[36;1m[1;3m[chain/end][0m [1m[1:chain:AgentExecutor > 
11:chain:RunnableSequence > 12:chain:RunnableAssign<agent_scratchpad> > 13:chain:RunnableParallel<agent_scratchpad> > 14:chain:RunnableLambda] [1ms] Exiting Chain run with output:[0m[outputs][36;1m[1;3m[chain/end][0m [1m[1:chain:AgentExecutor > 11:chain:RunnableSequence > 12:chain:RunnableAssign<agent_scratchpad> > 13:chain:RunnableParallel<agent_scratchpad>] [4ms] Exiting Chain run with output:[0m[outputs][36;1m[1;3m[chain/end][0m [1m[1:chain:AgentExecutor > 11:chain:RunnableSequence > 12:chain:RunnableAssign<agent_scratchpad>] [8ms] Exiting Chain run with output:[0m[outputs][32;1m[1;3m[chain/start][0m [1m[1:chain:AgentExecutor > 11:chain:RunnableSequence > 15:prompt:ChatPromptTemplate] Entering Prompt run with input:[0m[inputs][36;1m[1;3m[chain/end][0m [1m[1:chain:AgentExecutor > 11:chain:RunnableSequence > 15:prompt:ChatPromptTemplate] [1ms] Exiting Prompt run with output:[0m[outputs][32;1m[1;3m[llm/start][0m [1m[1:chain:AgentExecutor > 11:chain:RunnableSequence > 16:llm:ChatOpenAI] Entering LLM run with input:[0m{ "prompts": [ "System: You are a helpful assistant.\nHuman: Who directed the 2023 film Oppenheimer and what is their age in days?\nAI: \nTool: [{\"url\": \"https://m.imdb.com/title/tt15398776/fullcredits/\", \"content\": \"Oppenheimer (2023) cast and crew credits, including actors, actresses, directors, writers and more. Menu. ... director of photography: behind-the-scenes Jason Gary ... best boy grip ... film loader Luc Poullain ... aerial coordinator\"}]\nTool: [{\"url\": \"https://en.wikipedia.org/wiki/Christopher_Nolan\", \"content\": \"In early 2003, Nolan approached Warner Bros. with the idea of making a new Batman film, based on the character's origin story.[58] Nolan was fascinated by the notion of grounding it in a more realistic world than a comic-book fantasy.[59] He relied heavily on traditional stunts and miniature effects during filming, with minimal use of computer-generated imagery (CGI).[60] Batman Begins (2005), the biggest project Nolan had undertaken to that point,[61] was released to critical acclaim and commercial success.[62][63] Starring Christian Bale as Bruce Wayne / Batman—along with Michael Caine, Gary Oldman, Morgan Freeman and Liam Neeson—Batman Begins revived the franchise.[64][65] Batman Begins was 2005's ninth-highest-grossing film and was praised for its psychological depth and contemporary relevance;[63][66] it is cited as one of the most influential films of the 2000s.[67] Film author Ian Nathan wrote that within five years of his career, Nolan \\\"[went] from unknown to indie darling to gaining creative control over one of the biggest properties in Hollywood, and (perhaps unwittingly) fomenting the genre that would redefine the entire industry\\\".[68]\\nNolan directed, co-wrote and produced The Prestige (2006), an adaptation of the Christopher Priest novel about two rival 19th-century magicians.[69] He directed, wrote and edited the short film Larceny (1996),[19] which was filmed over a weekend in black and white with limited equipment and a small cast and crew.[12][20] Funded by Nolan and shot with the UCL Union Film society's equipment, it appeared at the Cambridge Film Festival in 1996 and is considered one of UCL's best shorts.[21] For unknown reasons, the film has since been removed from public view.[19] Nolan filmed a third short, Doodlebug (1997), about a man seemingly chasing an insect with his shoe, only to discover that it is a miniature of himself.[14][22] Nolan and Thomas first attempted to make a feature in the mid-1990s with 
Larry Mahoney, which they scrapped.[23] During this period in his career, Nolan had little to no success getting his projects off the ground, facing several rejections; he added, \\\"[T]here's a very limited pool of finance in the UK. Philosophy professor David Kyle Johnson wrote that \\\"Inception became a classic almost as soon as it was projected on silver screens\\\", praising its exploration of philosophical ideas, including leap of faith and allegory of the cave.[97] The film grossed over $836 million worldwide.[98] Nominated for eight Academy Awards—including Best Picture and Best Original Screenplay—it won Best Cinematography, Best Sound Mixing, Best Sound Editing and Best Visual Effects.[99] Nolan was nominated for a BAFTA Award and a Golden Globe Award for Best Director, among other accolades.[40]\\nAround the release of The Dark Knight Rises (2012), Nolan's third and final Batman film, Joseph Bevan of the British Film Institute wrote a profile on him: \\\"In the space of just over a decade, Christopher Nolan has shot from promising British indie director to undisputed master of a new brand of intelligent escapism. He further wrote that Nolan's body of work reflect \\\"a heterogeneity of conditions of products\\\" extending from low-budget films to lucrative blockbusters, \\\"a wide range of genres and settings\\\" and \\\"a diversity of styles that trumpet his versatility\\\".[193]\\nDavid Bordwell, a film theorist, wrote that Nolan has been able to blend his \\\"experimental impulses\\\" with the demands of mainstream entertainment, describing his oeuvre as \\\"experiments with cinematic time by means of techniques of subjective viewpoint and crosscutting\\\".[194] Nolan's use of practical, in-camera effects, miniatures and models, as well as shooting on celluloid film, has been highly influential in early 21st century cinema.[195][196] IndieWire wrote in 2019 that, Nolan \\\"kept a viable alternate model of big-budget filmmaking alive\\\", in an era where blockbuster filmmaking has become \\\"a largely computer-generated art form\\\".[196] Initially reluctant to make a sequel, he agreed after Warner Bros. repeatedly insisted.[78] Nolan wanted to expand on the noir quality of the first film by broadening the canvas and taking on \\\"the dynamic of a story of the city, a large crime story ... where you're looking at the police, the justice system, the vigilante, the poor people, the rich people, the criminals\\\".[79] Continuing to minimalise the use of CGI, Nolan employed high-resolution IMAX cameras, making it the first major motion picture to use this technology.[80][81]\"}]" ]}[36;1m[1;3m[llm/end][0m [1m[1:chain:AgentExecutor > 11:chain:RunnableSequence > 16:llm:ChatOpenAI] [20.22s] Exiting LLM run with output:[0m{ "generations": [ [ { "text": "The 2023 film \"Oppenheimer\" was directed by Christopher Nolan.\n\nTo calculate Christopher Nolan's age in days, we first need his birth date, which is July 30, 1970. Let's calculate his age in days from his birth date to today's date, December 7, 2023.\n\n1. Calculate the total number of days from July 30, 1970, to December 7, 2023.\n2. Christopher Nolan was born on July 30, 1970. From July 30, 1970, to July 30, 2023, is 53 years.\n3. 
From July 30, 2023, to December 7, 2023, is 130 days.\n\nNow, calculate the total days for 53 years:\n- Each year has 365 days, so 53 years × 365 days/year = 19,345 days.\n- Adding the leap years from 1970 to 2023: 1972, 1976, 1980, 1984, 1988, 1992, 1996, 2000, 2004, 2008, 2012, 2016, 2020, and 2024 (up to February). This gives us 14 leap years.\n- Total days from leap years: 14 days.\n\nAdding all together:\n- Total days = 19,345 days (from years) + 14 days (from leap years) + 130 days (from July 30, 2023, to December 7, 2023) = 19,489 days.\n\nTherefore, as of December 7, 2023, Christopher Nolan is 19,489 days old.", "generation_info": { "finish_reason": "stop" }, "type": "ChatGenerationChunk", "message": { "lc": 1, "type": "constructor", "id": [ "langchain", "schema", "messages", "AIMessageChunk" ], "kwargs": { "content": "The 2023 film \"Oppenheimer\" was directed by Christopher Nolan.\n\nTo calculate Christopher Nolan's age in days, we first need his birth date, which is July 30, 1970. Let's calculate his age in days from his birth date to today's date, December 7, 2023.\n\n1. Calculate the total number of days from July 30, 1970, to December 7, 2023.\n2. Christopher Nolan was born on July 30, 1970. From July 30, 1970, to July 30, 2023, is 53 years.\n3. From July 30, 2023, to December 7, 2023, is 130 days.\n\nNow, calculate the total days for 53 years:\n- Each year has 365 days, so 53 years × 365 days/year = 19,345 days.\n- Adding the leap years from 1970 to 2023: 1972, 1976, 1980, 1984, 1988, 1992, 1996, 2000, 2004, 2008, 2012, 2016, 2020, and 2024 (up to February). This gives us 14 leap years.\n- Total days from leap years: 14 days.\n\nAdding all together:\n- Total days = 19,345 days (from years) + 14 days (from leap years) + 130 days (from July 30, 2023, to December 7, 2023) = 19,489 days.\n\nTherefore, as of December 7, 2023, Christopher Nolan is 19,489 days old.", "example": false, "additional_kwargs": {}, "tool_call_chunks": [], "response_metadata": { "finish_reason": "stop" }, "id": "run-1c08a44f-db70-4836-935b-417caaf422a5", "tool_calls": [], "invalid_tool_calls": [] } } } ] ], "llm_output": null, "run": null}[32;1m[1;3m[chain/start][0m [1m[1:chain:AgentExecutor > 11:chain:RunnableSequence > 17:parser:ToolsAgentOutputParser] Entering Parser run with input:[0m[inputs][36;1m[1;3m[chain/end][0m [1m[1:chain:AgentExecutor > 11:chain:RunnableSequence > 17:parser:ToolsAgentOutputParser] [2ms] Exiting Parser run with output:[0m[outputs][36;1m[1;3m[chain/end][0m [1m[1:chain:AgentExecutor > 11:chain:RunnableSequence] [20.27s] Exiting Chain run with output:[0m[outputs][36;1m[1;3m[chain/end][0m [1m[1:chain:AgentExecutor] [26.37s] Exiting Chain run with output:[0m{ "output": "The 2023 film \"Oppenheimer\" was directed by Christopher Nolan.\n\nTo calculate Christopher Nolan's age in days, we first need his birth date, which is July 30, 1970. Let's calculate his age in days from his birth date to today's date, December 7, 2023.\n\n1. Calculate the total number of days from July 30, 1970, to December 7, 2023.\n2. Christopher Nolan was born on July 30, 1970. From July 30, 1970, to July 30, 2023, is 53 years.\n3. From July 30, 2023, to December 7, 2023, is 130 days.\n\nNow, calculate the total days for 53 years:\n- Each year has 365 days, so 53 years × 365 days/year = 19,345 days.\n- Adding the leap years from 1970 to 2023: 1972, 1976, 1980, 1984, 1988, 1992, 1996, 2000, 2004, 2008, 2012, 2016, 2020, and 2024 (up to February). 
This gives us 14 leap years.\n- Total days from leap years: 14 days.\n\nAdding all together:\n- Total days = 19,345 days (from years) + 14 days (from leap years) + 130 days (from July 30, 2023, to December 7, 2023) = 19,489 days.\n\nTherefore, as of December 7, 2023, Christopher Nolan is 19,489 days old."}
{'input': 'Who directed the 2023 film Oppenheimer and what is their age in days?', 'output': 'The 2023 film "Oppenheimer" was directed by Christopher Nolan.\n\nTo calculate Christopher Nolan\'s age in days, we first need his birth date, which is July 30, 1970. Let\'s calculate his age in days from his birth date to today\'s date, December 7, 2023.\n\n1. Calculate the total number of days from July 30, 1970, to December 7, 2023.\n2. Christopher Nolan was born on July 30, 1970. From July 30, 1970, to July 30, 2023, is 53 years.\n3. From July 30, 2023, to December 7, 2023, is 130 days.\n\nNow, calculate the total days for 53 years:\n- Each year has 365 days, so 53 years × 365 days/year = 19,345 days.\n- Adding the leap years from 1970 to 2023: 1972, 1976, 1980, 1984, 1988, 1992, 1996, 2000, 2004, 2008, 2012, 2016, 2020, and 2024 (up to February). This gives us 14 leap years.\n- Total days from leap years: 14 days.\n\nAdding all together:\n- Total days = 19,345 days (from years) + 14 days (from leap years) + 130 days (from July 30, 2023, to December 7, 2023) = 19,489 days.\n\nTherefore, as of December 7, 2023, Christopher Nolan is 19,489 days old.'}
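Both flags are global, so it's worth switching them back off once you've finished debugging:

```python
from langchain.globals import set_debug, set_verbose

set_debug(False)
set_verbose(False)
```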
[
Previous
How to create custom tools
](/v0.2/docs/how_to/custom_tools/)[
Next
How to load CSVs
](/v0.2/docs/how_to/document_loader_csv/)
* [Tracing](#tracing)
* [`set_debug` and `set_verbose`](#set_debug-and-set_verbose)
* [`set_verbose(True)`](#set_verbosetrue)
* [`set_debug(True)`](#set_debugtrue) | null |
https://python.langchain.com/v0.2/docs/how_to/caching_embeddings/ | * [](/v0.2/)
* [How-to guides](/v0.2/docs/how_to/)
* Caching
On this page
Caching
=======
Embeddings can be stored or temporarily cached to avoid needing to recompute them.
Caching embeddings can be done using a `CacheBackedEmbeddings`. The cache backed embedder is a wrapper around an embedder that caches embeddings in a key-value store. The text is hashed and the hash is used as the key in the cache.
The main supported way to initialize a `CacheBackedEmbeddings` is `from_bytes_store`. It takes the following parameters:
* underlying\_embedder: The embedder to use for embedding.
* document\_embedding\_cache: Any [`ByteStore`](/v0.2/docs/integrations/stores/) for caching document embeddings.
* batch\_size: (optional, defaults to `None`) The number of documents to embed between store updates.
* namespace: (optional, defaults to `""`) The namespace to use for document cache. This namespace is used to avoid collisions with other caches. For example, set it to the name of the embedding model used.
* query\_embedding\_cache: (optional, defaults to `None` or not caching) A [`ByteStore`](/v0.2/docs/integrations/stores/) for caching query embeddings, or `True` to use the same store as `document_embedding_cache`.
**Attention**:
* Be sure to set the `namespace` parameter to avoid collisions of the same text embedded using different embeddings models.
* `CacheBackedEmbeddings` does not cache query embeddings by default. To enable query caching, you need to specify a `query_embedding_cache` (see the sketch below).
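As a minimal sketch of these options (assuming your installed LangChain version supports the `query_embedding_cache` parameter):

```python
from langchain.embeddings import CacheBackedEmbeddings
from langchain.storage import LocalFileStore
from langchain_openai import OpenAIEmbeddings

underlying_embeddings = OpenAIEmbeddings()
store = LocalFileStore("./cache/")

cached_embedder = CacheBackedEmbeddings.from_bytes_store(
    underlying_embeddings,
    store,
    namespace=underlying_embeddings.model,  # avoid collisions between embedding models
    query_embedding_cache=True,  # reuse the same store for query embeddings
)
```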
from langchain.embeddings import CacheBackedEmbeddings
**API Reference:**[CacheBackedEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.cache.CacheBackedEmbeddings.html)
Using with a Vector Store[](#using-with-a-vector-store "Direct link to Using with a Vector Store")
---------------------------------------------------------------------------------------------------
First, let's see an example that uses the local file system for storing embeddings and uses FAISS vector store for retrieval.
%pip install --upgrade --quiet langchain-openai faiss-cpu
from langchain.storage import LocalFileStorefrom langchain_community.document_loaders import TextLoaderfrom langchain_community.vectorstores import FAISSfrom langchain_openai import OpenAIEmbeddingsfrom langchain_text_splitters import CharacterTextSplitterunderlying_embeddings = OpenAIEmbeddings()store = LocalFileStore("./cache/")cached_embedder = CacheBackedEmbeddings.from_bytes_store( underlying_embeddings, store, namespace=underlying_embeddings.model)
**API Reference:**[LocalFileStore](https://api.python.langchain.com/en/latest/storage/langchain.storage.file_system.LocalFileStore.html) | [TextLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.text.TextLoader.html) | [FAISS](https://api.python.langchain.com/en/latest/vectorstores/langchain_community.vectorstores.faiss.FAISS.html) | [OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html) | [CharacterTextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.CharacterTextSplitter.html)
The cache is empty prior to embedding:
list(store.yield_keys())
[]
Load the document, split it into chunks, embed each chunk and load it into the vector store.
raw_documents = TextLoader("state_of_the_union.txt").load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)documents = text_splitter.split_documents(raw_documents)
Create the vector store:
%%time
db = FAISS.from_documents(documents, cached_embedder)
CPU times: user 218 ms, sys: 29.7 ms, total: 248 ms
Wall time: 1.02 s
If we try to create the vector store again, it'll be much faster since it does not need to re-compute any embeddings.
%%time
db2 = FAISS.from_documents(documents, cached_embedder)
CPU times: user 15.7 ms, sys: 2.22 ms, total: 18 ms
Wall time: 17.2 ms
And here are some of the embeddings that got created:
list(store.yield_keys())[:5]
['text-embedding-ada-00217a6727d-8916-54eb-b196-ec9c9d6ca472', 'text-embedding-ada-0025fc0d904-bd80-52da-95c9-441015bfb438', 'text-embedding-ada-002e4ad20ef-dfaa-5916-9459-f90c6d8e8159', 'text-embedding-ada-002ed199159-c1cd-5597-9757-f80498e8f17b', 'text-embedding-ada-0021297d37a-2bc1-5e19-bf13-6c950f075062']
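The cached embedder behaves like any other embedder at query time, so the vector store can be queried as usual; for example:

```python
# Note: the query embedding itself is only cached if query_embedding_cache was set
docs = db.similarity_search("What did the president say about the economy?")
print(docs[0].page_content[:100])
```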
Swapping the `ByteStore`
========================
To use a different `ByteStore`, simply pass it in when creating your `CacheBackedEmbeddings`. Below, we create an equivalent cached embeddings object, except using the non-persistent `InMemoryByteStore` instead:
from langchain.embeddings import CacheBackedEmbeddingsfrom langchain.storage import InMemoryByteStorestore = InMemoryByteStore()cached_embedder = CacheBackedEmbeddings.from_bytes_store( underlying_embeddings, store, namespace=underlying_embeddings.model)
**API Reference:**[CacheBackedEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.cache.CacheBackedEmbeddings.html) | [InMemoryByteStore](https://api.python.langchain.com/en/latest/stores/langchain_core.stores.InMemoryByteStore.html)
[
Previous
How to add scores to retriever results
](/v0.2/docs/how_to/add_scores_retriever/)[
Next
How to use callbacks in async environments
](/v0.2/docs/how_to/callbacks_async/)
* [Using with a Vector Store](#using-with-a-vector-store) | null |
https://python.langchain.com/v0.2/docs/how_to/custom_tools/ | * [](/v0.2/)
* [How-to guides](/v0.2/docs/how_to/)
* How to create custom tools
On this page
How to create custom tools
==========================
When constructing an agent, you will need to provide it with a list of `Tool`s that it can use. Besides the actual function that is called, the Tool consists of several components:
| Attribute | Type | Description |
| --- | --- | --- |
| name | str | Must be unique within a set of tools provided to an LLM or agent. |
| description | str | Describes what the tool does. Used as context by the LLM or agent. |
| args\_schema | Pydantic BaseModel | Optional but recommended; can be used to provide more information (e.g., few-shot examples) or validation for expected parameters. |
| return\_direct | boolean | Only relevant for agents. When True, after invoking the given tool, the agent will stop and return the result directly to the user. |
LangChain provides 3 ways to create tools:
1. Using [@tool decorator](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.tool.html#langchain_core.tools.tool) -- the simplest way to define a custom tool.
2. Using [StructuredTool.from\_function](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.StructuredTool.html#langchain_core.tools.StructuredTool.from_function) class method -- this is similar to the `@tool` decorator, but allows more configuration and specification of both sync and async implementations.
3. By sub-classing from [BaseTool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.BaseTool.html) -- this is the most flexible method; it provides the largest degree of control, at the expense of more effort and code.
The `@tool` or the `StructuredTool.from_function` class method should be sufficient for most use cases.
tip
Models will perform better if the tools have well chosen names, descriptions and JSON schemas.
@tool decorator[](#tool-decorator "Direct link to @tool decorator")
--------------------------------------------------------------------
This `@tool` decorator is the simplest way to define a custom tool. The decorator uses the function name as the tool name by default, but this can be overridden by passing a string as the first argument. Additionally, the decorator will use the function's docstring as the tool's description - so a docstring MUST be provided.
from langchain_core.tools import tool@tooldef multiply(a: int, b: int) -> int: """Multiply two numbers.""" return a * b# Let's inspect some of the attributes associated with the tool.print(multiply.name)print(multiply.description)print(multiply.args)
**API Reference:**[tool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.tool.html)
multiply
multiply(a: int, b: int) -> int - Multiply two numbers.
{'a': {'title': 'A', 'type': 'integer'}, 'b': {'title': 'B', 'type': 'integer'}}
Or create an **async** implementation, like this:
from langchain_core.tools import tool@toolasync def amultiply(a: int, b: int) -> int: """Multiply two numbers.""" return a * b
**API Reference:**[tool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.tool.html)
You can also customize the tool name and JSON args by passing them into the tool decorator.
from langchain.pydantic_v1 import BaseModel, Fieldclass CalculatorInput(BaseModel): a: int = Field(description="first number") b: int = Field(description="second number")@tool("multiplication-tool", args_schema=CalculatorInput, return_direct=True)def multiply(a: int, b: int) -> int: """Multiply two numbers.""" return a * b# Let's inspect some of the attributes associated with the tool.print(multiply.name)print(multiply.description)print(multiply.args)print(multiply.return_direct)
multiplication-tool
multiplication-tool(a: int, b: int) -> int - Multiply two numbers.
{'a': {'title': 'A', 'description': 'first number', 'type': 'integer'}, 'b': {'title': 'B', 'description': 'second number', 'type': 'integer'}}
True
StructuredTool[](#structuredtool "Direct link to StructuredTool")
------------------------------------------------------------------
The `StructuredTool.from_function` class method provides a bit more configurability than the `@tool` decorator, without requiring much additional code.
from langchain_core.tools import StructuredTooldef multiply(a: int, b: int) -> int: """Multiply two numbers.""" return a * basync def amultiply(a: int, b: int) -> int: """Multiply two numbers.""" return a * bcalculator = StructuredTool.from_function(func=multiply, coroutine=amultiply)print(calculator.invoke({"a": 2, "b": 3}))print(await calculator.ainvoke({"a": 2, "b": 5}))
**API Reference:**[StructuredTool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.StructuredTool.html)
6
10
To configure it:
class CalculatorInput(BaseModel): a: int = Field(description="first number") b: int = Field(description="second number")def multiply(a: int, b: int) -> int: """Multiply two numbers.""" return a * bcalculator = StructuredTool.from_function( func=multiply, name="Calculator", description="multiply numbers", args_schema=CalculatorInput, return_direct=True, # coroutine= ... <- you can specify an async method if desired as well)print(calculator.invoke({"a": 2, "b": 3}))print(calculator.name)print(calculator.description)print(calculator.args)
6
Calculator
Calculator(a: int, b: int) -> int - multiply numbers
{'a': {'title': 'A', 'description': 'first number', 'type': 'integer'}, 'b': {'title': 'B', 'description': 'second number', 'type': 'integer'}}
Subclass BaseTool[](#subclass-basetool "Direct link to Subclass BaseTool")
---------------------------------------------------------------------------
You can define a custom tool by sub-classing from `BaseTool`. This provides maximal control over the tool definition, but requires writing more code.
from typing import Optional, Typefrom langchain.pydantic_v1 import BaseModelfrom langchain_core.callbacks import ( AsyncCallbackManagerForToolRun, CallbackManagerForToolRun,)from langchain_core.tools import BaseToolclass CalculatorInput(BaseModel): a: int = Field(description="first number") b: int = Field(description="second number")class CustomCalculatorTool(BaseTool): name = "Calculator" description = "useful for when you need to answer questions about math" args_schema: Type[BaseModel] = CalculatorInput return_direct: bool = True def _run( self, a: int, b: int, run_manager: Optional[CallbackManagerForToolRun] = None ) -> str: """Use the tool.""" return a * b async def _arun( self, a: int, b: int, run_manager: Optional[AsyncCallbackManagerForToolRun] = None, ) -> str: """Use the tool asynchronously.""" # If the calculation is cheap, you can just delegate to the sync implementation # as shown below. # If the sync calculation is expensive, you should delete the entire _arun method. # LangChain will automatically provide a better implementation that will # kick off the task in a thread to make sure it doesn't block other async code. return self._run(a, b, run_manager=run_manager.get_sync())
**API Reference:**[AsyncCallbackManagerForToolRun](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.manager.AsyncCallbackManagerForToolRun.html) | [CallbackManagerForToolRun](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.manager.CallbackManagerForToolRun.html) | [BaseTool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.BaseTool.html)
multiply = CustomCalculatorTool()print(multiply.name)print(multiply.description)print(multiply.args)print(multiply.return_direct)print(multiply.invoke({"a": 2, "b": 3}))print(await multiply.ainvoke({"a": 2, "b": 3}))
Calculator
useful for when you need to answer questions about math
{'a': {'title': 'A', 'description': 'first number', 'type': 'integer'}, 'b': {'title': 'B', 'description': 'second number', 'type': 'integer'}}
True
6
6
How to create async tools[](#how-to-create-async-tools "Direct link to How to create async tools")
---------------------------------------------------------------------------------------------------
LangChain Tools implement the [Runnable interface 🏃](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html).
All Runnables expose the `invoke` and `ainvoke` methods (as well as other methods like `batch`, `abatch`, `astream` etc).
So even if you only provide a sync implementation of a tool, you can still use the `ainvoke` interface, but there are some important things to know:
* By default, LangChain provides an async implementation that assumes the function is expensive to compute, so it delegates execution to another thread.
* If you're working in an async codebase, you should create async tools rather than sync tools, to avoid incurring a small overhead due to that thread.
* If you need both sync and async implementations, use `StructuredTool.from_function` or sub-class from `BaseTool`.
* If implementing both sync and async, and the sync code is fast to run, override the default LangChain async implementation and simply call the sync code.
* You CANNOT and SHOULD NOT use the sync `invoke` with an `async` tool.
from langchain_core.tools import StructuredTooldef multiply(a: int, b: int) -> int: """Multiply two numbers.""" return a * bcalculator = StructuredTool.from_function(func=multiply)print(calculator.invoke({"a": 2, "b": 3}))print( await calculator.ainvoke({"a": 2, "b": 5})) # Uses the default LangChain async implementation, which incurs a small overhead
**API Reference:**[StructuredTool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.StructuredTool.html)
6
10
from langchain_core.tools import StructuredTooldef multiply(a: int, b: int) -> int: """Multiply two numbers.""" return a * basync def amultiply(a: int, b: int) -> int: """Multiply two numbers.""" return a * bcalculator = StructuredTool.from_function(func=multiply, coroutine=amultiply)print(calculator.invoke({"a": 2, "b": 3}))print( await calculator.ainvoke({"a": 2, "b": 5})) # Uses the provided amultiply without additional overhead
**API Reference:**[StructuredTool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.StructuredTool.html)
6
10
You should not and cannot use `.invoke` when providing only an async definition.
@toolasync def multiply(a: int, b: int) -> int: """Multiply two numbers.""" return a * btry: multiply.invoke({"a": 2, "b": 3})except NotImplementedError: print("Raised not implemented error. You should not be doing this.")
Raised not implemented error. You should not be doing this.
Handling Tool Errors[](#handling-tool-errors "Direct link to Handling Tool Errors")
------------------------------------------------------------------------------------
If you're using tools with agents, you will likely need an error handling strategy, so the agent can recover from the error and continue execution.
A simple strategy is to raise a `ToolException` from inside the tool and specify an error handler using `handle_tool_error`.
When the error handler is specified, the exception will be caught and the error handler will decide which output to return from the tool.
You can set `handle_tool_error` to `True`, a string value, or a function. If it's a function, the function should take a `ToolException` as a parameter and return a value.
Please note that raising a `ToolException` by itself won't be effective; you first need to set `handle_tool_error` on the tool, because its default value is `False`.
from langchain_core.tools import ToolExceptiondef get_weather(city: str) -> int: """Get weather for the given city.""" raise ToolException(f"Error: There is no city by the name of {city}.")
**API Reference:**[ToolException](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.ToolException.html)
Here's an example of the default behavior with `handle_tool_error=True`: the exception's message is returned as the tool's output.
get_weather_tool = StructuredTool.from_function( func=get_weather, handle_tool_error=True,)get_weather_tool.invoke({"city": "foobar"})
'Error: There is no city by the name of foobar.'
We can set `handle_tool_error` to a string that will always be returned.
get_weather_tool = StructuredTool.from_function( func=get_weather, handle_tool_error="There is no such city, but it's probably above 0K there!",)get_weather_tool.invoke({"city": "foobar"})
"There is no such city, but it's probably above 0K there!"
Handling the error using a function:
def _handle_error(error: ToolException) -> str: return f"The following errors occurred during tool execution: `{error.args[0]}`"get_weather_tool = StructuredTool.from_function( func=get_weather, handle_tool_error=_handle_error,)get_weather_tool.invoke({"city": "foobar"})
'The following errors occurred during tool execution: `Error: There is no city by the name of foobar.`'
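The same option is available when sub-classing `BaseTool`, since `handle_tool_error` is a field on the tool itself. A minimal sketch (the `WeatherTool` name is illustrative):

```python
from langchain_core.tools import BaseTool, ToolException


class WeatherTool(BaseTool):
    name = "get_weather"
    description = "Get weather for the given city."
    handle_tool_error = True  # catch ToolException and return its message as the tool output

    def _run(self, city: str) -> str:
        raise ToolException(f"Error: There is no city by the name of {city}.")


print(WeatherTool().invoke({"city": "foobar"}))
# -> Error: There is no city by the name of foobar.
```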
[
Previous
Custom Retriever
](/v0.2/docs/how_to/custom_retriever/)[
Next
How to debug your LLM apps
](/v0.2/docs/how_to/debugging/)
* [@tool decorator](#tool-decorator)
* [StructuredTool](#structuredtool)
* [Subclass BaseTool](#subclass-basetool)
* [How to create async tools](#how-to-create-async-tools)
* [Handling Tool Errors](#handling-tool-errors) | null |
https://python.langchain.com/v0.2/docs/how_to/callbacks_async/ | * [](/v0.2/)
* [How-to guides](/v0.2/docs/how_to/)
* How to use callbacks in async environments
On this page
How to use callbacks in async environments
==========================================
Prerequisites
This guide assumes familiarity with the following concepts:
* [Callbacks](/v0.2/docs/concepts/#callbacks)
* [Custom callback handlers](/v0.2/docs/how_to/custom_callbacks/)
If you are planning to use the async APIs, it is recommended to use and extend [`AsyncCallbackHandler`](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.base.AsyncCallbackHandler.html) to avoid blocking the event loop.
danger
If you use a sync `CallbackHandler` while using an async method to run your LLM / Chain / Tool / Agent, it will still work. However, under the hood, it will be called with [`run_in_executor`](https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.loop.run_in_executor) which can cause issues if your `CallbackHandler` is not thread-safe.
danger
If you're on `python<=3.10`, you need to remember to propagate `config` or `callbacks` when invoking other `runnable` from within a `RunnableLambda`, `RunnableGenerator` or `@tool`. If you do not do this, the callbacks will not be propagated to the child runnables being invoked.
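As a hedged sketch of that propagation (here `child_chain` is a placeholder for whatever runnable you invoke internally):

```python
from langchain_core.runnables import RunnableConfig, RunnableLambda


async def call_child(value, config: RunnableConfig):
    # `child_chain` stands in for any runnable you call from inside your own code.
    # Forwarding `config` explicitly ensures callbacks reach the child run on Python <= 3.10.
    return await child_chain.ainvoke(value, config=config)


wrapper = RunnableLambda(call_child)
```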
import asynciofrom typing import Any, Dict, Listfrom langchain_anthropic import ChatAnthropicfrom langchain_core.callbacks import AsyncCallbackHandler, BaseCallbackHandlerfrom langchain_core.messages import HumanMessagefrom langchain_core.outputs import LLMResultclass MyCustomSyncHandler(BaseCallbackHandler): def on_llm_new_token(self, token: str, **kwargs) -> None: print(f"Sync handler being called in a `thread_pool_executor`: token: {token}")class MyCustomAsyncHandler(AsyncCallbackHandler): """Async callback handler that can be used to handle callbacks from langchain.""" async def on_llm_start( self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any ) -> None: """Run when chain starts running.""" print("zzzz....") await asyncio.sleep(0.3) class_name = serialized["name"] print("Hi! I just woke up. Your llm is starting") async def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None: """Run when chain ends running.""" print("zzzz....") await asyncio.sleep(0.3) print("Hi! I just woke up. Your llm is ending")# To enable streaming, we pass in `streaming=True` to the ChatModel constructor# Additionally, we pass in a list with our custom handlerchat = ChatAnthropic( model="claude-3-sonnet-20240229", max_tokens=25, streaming=True, callbacks=[MyCustomSyncHandler(), MyCustomAsyncHandler()],)await chat.agenerate([[HumanMessage(content="Tell me a joke")]])
**API Reference:**[ChatAnthropic](https://api.python.langchain.com/en/latest/chat_models/langchain_anthropic.chat_models.ChatAnthropic.html) | [AsyncCallbackHandler](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.base.AsyncCallbackHandler.html) | [BaseCallbackHandler](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.base.BaseCallbackHandler.html) | [HumanMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.human.HumanMessage.html) | [LLMResult](https://api.python.langchain.com/en/latest/outputs/langchain_core.outputs.llm_result.LLMResult.html)
zzzz....Hi! I just woke up. Your llm is startingSync handler being called in a `thread_pool_executor`: token: HereSync handler being called in a `thread_pool_executor`: token: 'sSync handler being called in a `thread_pool_executor`: token: aSync handler being called in a `thread_pool_executor`: token: littleSync handler being called in a `thread_pool_executor`: token: jokeSync handler being called in a `thread_pool_executor`: token: forSync handler being called in a `thread_pool_executor`: token: youSync handler being called in a `thread_pool_executor`: token: :Sync handler being called in a `thread_pool_executor`: token: WhySync handler being called in a `thread_pool_executor`: token: canSync handler being called in a `thread_pool_executor`: token: 'tSync handler being called in a `thread_pool_executor`: token: aSync handler being called in a `thread_pool_executor`: token: bicycleSync handler being called in a `thread_pool_executor`: token: stanSync handler being called in a `thread_pool_executor`: token: d upSync handler being called in a `thread_pool_executor`: token: bySync handler being called in a `thread_pool_executor`: token: itselfSync handler being called in a `thread_pool_executor`: token: ?Sync handler being called in a `thread_pool_executor`: token: BecauseSync handler being called in a `thread_pool_executor`: token: itSync handler being called in a `thread_pool_executor`: token: 'sSync handler being called in a `thread_pool_executor`: token: twoSync handler being called in a `thread_pool_executor`: token: -Sync handler being called in a `thread_pool_executor`: token: tirezzzz....Hi! I just woke up. Your llm is ending
LLMResult(generations=[[ChatGeneration(text="Here's a little joke for you:\n\nWhy can't a bicycle stand up by itself? Because it's two-tire", message=AIMessage(content="Here's a little joke for you:\n\nWhy can't a bicycle stand up by itself? Because it's two-tire", id='run-8afc89e8-02c0-4522-8480-d96977240bd4-0'))]], llm_output={}, run=[RunInfo(run_id=UUID('8afc89e8-02c0-4522-8480-d96977240bd4'))])
Next steps[](#next-steps "Direct link to Next steps")
------------------------------------------------------
You've now learned how to use callbacks in async environments.
Next, check out the other how-to guides in this section, such as [how to attach callbacks to a runnable](/v0.2/docs/how_to/callbacks_attach/).
[
Previous
Caching
](/v0.2/docs/how_to/caching_embeddings/)[
Next
How to attach callbacks to a runnable
](/v0.2/docs/how_to/callbacks_attach/)
* [Next steps](#next-steps) | null |
https://python.langchain.com/v0.2/docs/how_to/document_loader_csv/ | * [](/v0.2/)
* [How-to guides](/v0.2/docs/how_to/)
* How to load CSVs
On this page
How to load CSVs
================
A [comma-separated values (CSV)](https://en.wikipedia.org/wiki/Comma-separated_values) file is a delimited text file that uses a comma to separate values. Each line of the file is a data record. Each record consists of one or more fields, separated by commas.
LangChain implements a [CSV Loader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.csv_loader.CSVLoader.html) that will load CSV files into a sequence of [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html#langchain_core.documents.base.Document) objects. Each row of the CSV file is translated to one document.
from langchain_community.document_loaders.csv_loader import CSVLoaderfile_path = ( "../../../docs/integrations/document_loaders/example_data/mlb_teams_2012.csv")loader = CSVLoader(file_path=file_path)data = loader.load()for record in data[:2]: print(record)
**API Reference:**[CSVLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.csv_loader.CSVLoader.html)
page_content='Team: Nationals\n"Payroll (millions)": 81.34\n"Wins": 98' metadata={'source': '../../../docs/integrations/document_loaders/example_data/mlb_teams_2012.csv', 'row': 0}
page_content='Team: Reds\n"Payroll (millions)": 82.20\n"Wins": 97' metadata={'source': '../../../docs/integrations/document_loaders/example_data/mlb_teams_2012.csv', 'row': 1}
Customizing the CSV parsing and loading[](#customizing-the-csv-parsing-and-loading "Direct link to Customizing the CSV parsing and loading")
---------------------------------------------------------------------------------------------------------------------------------------------
`CSVLoader` will accept a `csv_args` kwarg that supports customization of arguments passed to Python's `csv.DictReader`. See the [csv module](https://docs.python.org/3/library/csv.html) documentation for more information on which csv args are supported.
loader = CSVLoader( file_path=file_path, csv_args={ "delimiter": ",", "quotechar": '"', "fieldnames": ["MLB Team", "Payroll in millions", "Wins"], },)data = loader.load()for record in data[:2]: print(record)
page_content='MLB Team: Team\nPayroll in millions: "Payroll (millions)"\nWins: "Wins"' metadata={'source': '../../../docs/integrations/document_loaders/example_data/mlb_teams_2012.csv', 'row': 0}
page_content='MLB Team: Nationals\nPayroll in millions: 81.34\nWins: 98' metadata={'source': '../../../docs/integrations/document_loaders/example_data/mlb_teams_2012.csv', 'row': 1}
Specify a column to identify the document source[](#specify-a-column-to-identify-the-document-source "Direct link to Specify a column to identify the document source")
------------------------------------------------------------------------------------------------------------------------------------------------------------------------
The `"source"` key on [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html#langchain_core.documents.base.Document) metadata can be set using a column of the CSV. Use the `source_column` argument to specify a source for the document created from each row. Otherwise `file_path` will be used as the source for all documents created from the CSV file.
This is useful when using documents loaded from CSV files for chains that answer questions using sources.
loader = CSVLoader(file_path=file_path, source_column="Team")data = loader.load()for record in data[:2]: print(record)
page_content='Team: Nationals\n"Payroll (millions)": 81.34\n"Wins": 98' metadata={'source': 'Nationals', 'row': 0}
page_content='Team: Reds\n"Payroll (millions)": 82.20\n"Wins": 97' metadata={'source': 'Reds', 'row': 1}
Load from a string[](#load-from-a-string "Direct link to Load from a string")
------------------------------------------------------------------------------
Python's `tempfile` can be used when working with CSV strings directly.
import tempfilefrom io import StringIOstring_data = """"Team", "Payroll (millions)", "Wins""Nationals", 81.34, 98"Reds", 82.20, 97"Yankees", 197.96, 95"Giants", 117.62, 94""".strip()with tempfile.NamedTemporaryFile(delete=False, mode="w+") as temp_file: temp_file.write(string_data) temp_file_path = temp_file.nameloader = CSVLoader(file_path=temp_file_path)loader.load()for record in data[:2]: print(record)
page_content='Team: Nationals\n"Payroll (millions)": 81.34\n"Wins": 98' metadata={'source': 'Nationals', 'row': 0}
page_content='Team: Reds\n"Payroll (millions)": 82.20\n"Wins": 97' metadata={'source': 'Reds', 'row': 1}
[
Previous
How to debug your LLM apps
](/v0.2/docs/how_to/debugging/)[
Next
How to load documents from a directory
](/v0.2/docs/how_to/document_loader_directory/)
* [Customizing the CSV parsing and loading](#customizing-the-csv-parsing-and-loading)
* [Specify a column to identify the document source](#specify-a-column-to-identify-the-document-source)
* [Load from a string](#load-from-a-string) | null |
https://python.langchain.com/v0.2/docs/how_to/document_loader_directory/ | * [](/v0.2/)
* [How-to guides](/v0.2/docs/how_to/)
* How to load documents from a directory
On this page
How to load documents from a directory
======================================
LangChain's [DirectoryLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.directory.DirectoryLoader.html) implements functionality for reading files from disk into LangChain [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html#langchain_core.documents.base.Document) objects. Here we demonstrate:
* How to load from a filesystem, including use of wildcard patterns;
* How to use multithreading for file I/O;
* How to use custom loader classes to parse specific file types (e.g., code);
* How to handle errors, such as those due to decoding.
from langchain_community.document_loaders import DirectoryLoader
**API Reference:**[DirectoryLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.directory.DirectoryLoader.html)
`DirectoryLoader` accepts a `loader_cls` kwarg, which defaults to [UnstructuredLoader](/v0.2/docs/integrations/document_loaders/unstructured_file/). [Unstructured](https://unstructured-io.github.io/unstructured/) supports parsing for a number of formats, such as PDF and HTML. Here we use it to read in a markdown (.md) file.
We can use the `glob` parameter to control which files to load. Note that here it doesn't load the `.rst` file or the `.html` files.
loader = DirectoryLoader("../", glob="**/*.md")docs = loader.load()len(docs)
20
print(docs[0].page_content[:100])
SecurityLangChain has a large ecosystem of integrations with various external resources like local
Show a progress bar[](#show-a-progress-bar "Direct link to Show a progress bar")
---------------------------------------------------------------------------------
By default a progress bar will not be shown. To show a progress bar, install the `tqdm` library (e.g. `pip install tqdm`), and set the `show_progress` parameter to `True`.
loader = DirectoryLoader("../", glob="**/*.md", show_progress=True)docs = loader.load()
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:00<00:00, 54.56it/s]
Use multithreading[](#use-multithreading "Direct link to Use multithreading")
------------------------------------------------------------------------------
By default the loading happens in one thread. To utilize several threads, set the `use_multithreading` flag to `True`.
loader = DirectoryLoader("../", glob="**/*.md", use_multithreading=True)docs = loader.load()
Change loader class[](#change-loader-class "Direct link to Change loader class")
---------------------------------------------------------------------------------
By default this uses the `UnstructuredLoader` class. To customize the loader, specify the loader class in the `loader_cls` kwarg. Below we show an example using [TextLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.text.TextLoader.html):
from langchain_community.document_loaders import TextLoaderloader = DirectoryLoader("../", glob="**/*.md", loader_cls=TextLoader)docs = loader.load()
**API Reference:**[TextLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.text.TextLoader.html)
print(docs[0].page_content[:100])
# SecurityLangChain has a large ecosystem of integrations with various external resources like loc
Notice that while the `UnstructuredLoader` parses Markdown headers, `TextLoader` does not.
If you need to load Python source code files, use the `PythonLoader`:
from langchain_community.document_loaders import PythonLoaderloader = DirectoryLoader("../../../../../", glob="**/*.py", loader_cls=PythonLoader)
**API Reference:**[PythonLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.python.PythonLoader.html)
Auto-detect file encodings with TextLoader[](#auto-detect-file-encodings-with-textloader "Direct link to Auto-detect file encodings with TextLoader")
------------------------------------------------------------------------------------------------------------------------------------------------------
`DirectoryLoader` can help manage errors due to variations in file encodings. Below we will attempt to load in a collection of files, one of which includes non-UTF8 encodings.
path = "../../../../libs/langchain/tests/unit_tests/examples/"loader = DirectoryLoader(path, glob="**/*.txt", loader_cls=TextLoader)
### A. Default Behavior[](#a-default-behavior "Direct link to A. Default Behavior")
By default we raise an error:
loader.load()
Error loading file ../../../../libs/langchain/tests/unit_tests/examples/example-non-utf8.txt
---------------------------------------------------------------------------``````outputUnicodeDecodeError Traceback (most recent call last)``````outputFile ~/repos/langchain/libs/community/langchain_community/document_loaders/text.py:43, in TextLoader.lazy_load(self) 42 with open(self.file_path, encoding=self.encoding) as f:---> 43 text = f.read() 44 except UnicodeDecodeError as e:``````outputFile ~/.pyenv/versions/3.10.4/lib/python3.10/codecs.py:322, in BufferedIncrementalDecoder.decode(self, input, final) 321 data = self.buffer + input--> 322 (result, consumed) = self._buffer_decode(data, self.errors, final) 323 # keep undecoded input until the next call``````outputUnicodeDecodeError: 'utf-8' codec can't decode byte 0xca in position 0: invalid continuation byte``````outputThe above exception was the direct cause of the following exception:``````outputRuntimeError Traceback (most recent call last)``````outputCell In[10], line 1----> 1 loader.load()``````outputFile ~/repos/langchain/libs/community/langchain_community/document_loaders/directory.py:117, in DirectoryLoader.load(self) 115 def load(self) -> List[Document]: 116 """Load documents."""--> 117 return list(self.lazy_load())``````outputFile ~/repos/langchain/libs/community/langchain_community/document_loaders/directory.py:182, in DirectoryLoader.lazy_load(self) 180 else: 181 for i in items:--> 182 yield from self._lazy_load_file(i, p, pbar) 184 if pbar: 185 pbar.close()``````outputFile ~/repos/langchain/libs/community/langchain_community/document_loaders/directory.py:220, in DirectoryLoader._lazy_load_file(self, item, path, pbar) 218 else: 219 logger.error(f"Error loading file {str(item)}")--> 220 raise e 221 finally: 222 if pbar:``````outputFile ~/repos/langchain/libs/community/langchain_community/document_loaders/directory.py:210, in DirectoryLoader._lazy_load_file(self, item, path, pbar) 208 loader = self.loader_cls(str(item), **self.loader_kwargs) 209 try:--> 210 for subdoc in loader.lazy_load(): 211 yield subdoc 212 except NotImplementedError:``````outputFile ~/repos/langchain/libs/community/langchain_community/document_loaders/text.py:56, in TextLoader.lazy_load(self) 54 continue 55 else:---> 56 raise RuntimeError(f"Error loading {self.file_path}") from e 57 except Exception as e: 58 raise RuntimeError(f"Error loading {self.file_path}") from e``````outputRuntimeError: Error loading ../../../../libs/langchain/tests/unit_tests/examples/example-non-utf8.txt
The file `example-non-utf8.txt` uses a different encoding, so the `load()` function fails with a helpful message indicating which file failed decoding.
With the default behavior of `TextLoader`, any failure to load one of the documents will fail the whole loading process, and no documents are loaded.
### B. Silent fail[](#b-silent-fail "Direct link to B. Silent fail")
We can pass the parameter `silent_errors` to the `DirectoryLoader` to skip the files which could not be loaded and continue the load process.
loader = DirectoryLoader( path, glob="**/*.txt", loader_cls=TextLoader, silent_errors=True)docs = loader.load()
Error loading file ../../../../libs/langchain/tests/unit_tests/examples/example-non-utf8.txt: Error loading ../../../../libs/langchain/tests/unit_tests/examples/example-non-utf8.txt
doc_sources = [doc.metadata["source"] for doc in docs]doc_sources
['../../../../libs/langchain/tests/unit_tests/examples/example-utf8.txt']
### C. Auto detect encodings[](#c-auto-detect-encodings "Direct link to C. Auto detect encodings")
We can also ask `TextLoader` to auto-detect the file encoding before failing, by passing the `autodetect_encoding` argument to the loader class.
text_loader_kwargs = {"autodetect_encoding": True}loader = DirectoryLoader( path, glob="**/*.txt", loader_cls=TextLoader, loader_kwargs=text_loader_kwargs)docs = loader.load()
doc_sources = [doc.metadata["source"] for doc in docs]doc_sources
['../../../../libs/langchain/tests/unit_tests/examples/example-utf8.txt', '../../../../libs/langchain/tests/unit_tests/examples/example-non-utf8.txt']
| null
https://python.langchain.com/v0.2/docs/how_to/callbacks_attach/ | * [](/v0.2/)
* [How-to guides](/v0.2/docs/how_to/)
* How to attach callbacks to a runnable
On this page
How to attach callbacks to a runnable
=====================================
Prerequisites
This guide assumes familiarity with the following concepts:
* [Callbacks](/v0.2/docs/concepts/#callbacks)
* [Custom callback handlers](/v0.2/docs/how_to/custom_callbacks/)
* [Chaining runnables](/v0.2/docs/how_to/sequence/)
* [Attach runtime arguments to a Runnable](/v0.2/docs/how_to/binding/)
If you are composing a chain of runnables and want to reuse callbacks across multiple executions, you can attach callbacks with the [`.with_config()`](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.with_config) method. This saves you the need to pass callbacks in each time you invoke the chain.
info
`with_config()` binds a configuration which will be interpreted as **runtime** configuration. So these callbacks will propagate to all child components.
Here's an example:
from typing import Any, Dict, Listfrom langchain_anthropic import ChatAnthropicfrom langchain_core.callbacks import BaseCallbackHandlerfrom langchain_core.messages import BaseMessagefrom langchain_core.outputs import LLMResultfrom langchain_core.prompts import ChatPromptTemplateclass LoggingHandler(BaseCallbackHandler): def on_chat_model_start( self, serialized: Dict[str, Any], messages: List[List[BaseMessage]], **kwargs ) -> None: print("Chat model started") def on_llm_end(self, response: LLMResult, **kwargs) -> None: print(f"Chat model ended, response: {response}") def on_chain_start( self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs ) -> None: print(f"Chain {serialized.get('name')} started") def on_chain_end(self, outputs: Dict[str, Any], **kwargs) -> None: print(f"Chain ended, outputs: {outputs}")callbacks = [LoggingHandler()]llm = ChatAnthropic(model="claude-3-sonnet-20240229")prompt = ChatPromptTemplate.from_template("What is 1 + {number}?")chain = prompt | llmchain_with_callbacks = chain.with_config(callbacks=callbacks)chain_with_callbacks.invoke({"number": "2"})
**API Reference:**[ChatAnthropic](https://api.python.langchain.com/en/latest/chat_models/langchain_anthropic.chat_models.ChatAnthropic.html) | [BaseCallbackHandler](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.base.BaseCallbackHandler.html) | [BaseMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.base.BaseMessage.html) | [LLMResult](https://api.python.langchain.com/en/latest/outputs/langchain_core.outputs.llm_result.LLMResult.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html)
Chain RunnableSequence startedChain ChatPromptTemplate startedChain ended, outputs: messages=[HumanMessage(content='What is 1 + 2?')]Chat model startedChat model ended, response: generations=[[ChatGeneration(text='1 + 2 = 3', message=AIMessage(content='1 + 2 = 3', response_metadata={'id': 'msg_01NTYMsH9YxkoWsiPYs4Lemn', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}}, id='run-d6bcfd72-9c94-466d-bac0-f39e456ad6e3-0'))]] llm_output={'id': 'msg_01NTYMsH9YxkoWsiPYs4Lemn', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}} run=NoneChain ended, outputs: content='1 + 2 = 3' response_metadata={'id': 'msg_01NTYMsH9YxkoWsiPYs4Lemn', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}} id='run-d6bcfd72-9c94-466d-bac0-f39e456ad6e3-0'
AIMessage(content='1 + 2 = 3', response_metadata={'id': 'msg_01NTYMsH9YxkoWsiPYs4Lemn', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}}, id='run-d6bcfd72-9c94-466d-bac0-f39e456ad6e3-0')
The bound callbacks will run for all nested module runs.
Next steps[](#next-steps "Direct link to Next steps")
------------------------------------------------------
You've now learned how to attach callbacks to a chain.
Next, check out the other how-to guides in this section, such as how to [pass callbacks in at runtime](/v0.2/docs/how_to/callbacks_runtime/).
| null
https://python.langchain.com/v0.2/docs/how_to/callbacks_constructor/ | * [](/v0.2/)
* [How-to guides](/v0.2/docs/how_to/)
* How to propagate callbacks constructor
On this page
How to propagate callbacks constructor
======================================
Prerequisites
This guide assumes familiarity with the following concepts:
* [Callbacks](/v0.2/docs/concepts/#callbacks)
* [Custom callback handlers](/v0.2/docs/how_to/custom_callbacks/)
Most LangChain modules allow you to pass `callbacks` directly into the constructor (i.e., initializer). In this case, the callbacks will only be called for that instance (and any nested runs).
danger
Constructor callbacks are scoped only to the object they are defined on. They are **not** inherited by children of the object. This can lead to confusing behavior, and it's generally better to pass callbacks as a run time argument.
Here's an example:
from typing import Any, Dict, Listfrom langchain_anthropic import ChatAnthropicfrom langchain_core.callbacks import BaseCallbackHandlerfrom langchain_core.messages import BaseMessagefrom langchain_core.outputs import LLMResultfrom langchain_core.prompts import ChatPromptTemplateclass LoggingHandler(BaseCallbackHandler): def on_chat_model_start( self, serialized: Dict[str, Any], messages: List[List[BaseMessage]], **kwargs ) -> None: print("Chat model started") def on_llm_end(self, response: LLMResult, **kwargs) -> None: print(f"Chat model ended, response: {response}") def on_chain_start( self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs ) -> None: print(f"Chain {serialized.get('name')} started") def on_chain_end(self, outputs: Dict[str, Any], **kwargs) -> None: print(f"Chain ended, outputs: {outputs}")callbacks = [LoggingHandler()]llm = ChatAnthropic(model="claude-3-sonnet-20240229", callbacks=callbacks)prompt = ChatPromptTemplate.from_template("What is 1 + {number}?")chain = prompt | llmchain.invoke({"number": "2"})
**API Reference:**[ChatAnthropic](https://api.python.langchain.com/en/latest/chat_models/langchain_anthropic.chat_models.ChatAnthropic.html) | [BaseCallbackHandler](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.base.BaseCallbackHandler.html) | [BaseMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.base.BaseMessage.html) | [LLMResult](https://api.python.langchain.com/en/latest/outputs/langchain_core.outputs.llm_result.LLMResult.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html)
Chat model startedChat model ended, response: generations=[[ChatGeneration(text='1 + 2 = 3', message=AIMessage(content='1 + 2 = 3', response_metadata={'id': 'msg_01CdKsRmeS9WRb8BWnHDEHm7', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}}, id='run-2d7fdf2a-7405-4e17-97c0-67e6b2a65305-0'))]] llm_output={'id': 'msg_01CdKsRmeS9WRb8BWnHDEHm7', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}} run=None
AIMessage(content='1 + 2 = 3', response_metadata={'id': 'msg_01CdKsRmeS9WRb8BWnHDEHm7', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}}, id='run-2d7fdf2a-7405-4e17-97c0-67e6b2a65305-0')
You can see that we only see events from the chat model run - no chain events from the prompt or broader chain.
Next steps[](#next-steps "Direct link to Next steps")
------------------------------------------------------
You've now learned how to pass callbacks into a constructor.
Next, check out the other how-to guides in this section, such as how to [pass callbacks at runtime](/v0.2/docs/how_to/callbacks_runtime/).
| null
https://python.langchain.com/v0.2/docs/how_to/document_loader_html/ | * [](/v0.2/)
* [How-to guides](/v0.2/docs/how_to/)
* How to load HTML
On this page
How to load HTML
================
The HyperText Markup Language or [HTML](https://en.wikipedia.org/wiki/HTML) is the standard markup language for documents designed to be displayed in a web browser.
This covers how to load `HTML` documents into LangChain [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html#langchain_core.documents.base.Document) objects that we can use downstream.
Parsing HTML files often requires specialized tools. Here we demonstrate parsing via [Unstructured](https://unstructured-io.github.io/unstructured/) and [BeautifulSoup4](https://beautiful-soup-4.readthedocs.io/en/latest/), which can be installed via pip. Head over to the integrations page to find integrations with additional services, such as [Azure AI Document Intelligence](/v0.2/docs/integrations/document_loaders/azure_document_intelligence/) or [FireCrawl](/v0.2/docs/integrations/document_loaders/firecrawl/).
Loading HTML with Unstructured[](#loading-html-with-unstructured "Direct link to Loading HTML with Unstructured")
------------------------------------------------------------------------------------------------------------------
%pip install "unstructured[html]"
from langchain_community.document_loaders import UnstructuredHTMLLoaderfile_path = "../../../docs/integrations/document_loaders/example_data/fake-content.html"loader = UnstructuredHTMLLoader(file_path)data = loader.load()print(data)
**API Reference:**[UnstructuredHTMLLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.html.UnstructuredHTMLLoader.html)
[Document(page_content='My First Heading\n\nMy first paragraph.', metadata={'source': '../../../docs/integrations/document_loaders/example_data/fake-content.html'})]
Loading HTML with BeautifulSoup4[](#loading-html-with-beautifulsoup4 "Direct link to Loading HTML with BeautifulSoup4")
------------------------------------------------------------------------------------------------------------------------
We can also use `BeautifulSoup4` to load HTML documents using the `BSHTMLLoader`. This will extract the text from the HTML into `page_content`, and the page title as `title` into `metadata`.
%pip install bs4
from langchain_community.document_loaders import BSHTMLLoaderloader = BSHTMLLoader(file_path)data = loader.load()print(data)
**API Reference:**[BSHTMLLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.html_bs.BSHTMLLoader.html)
[Document(page_content='\nTest Title\n\n\nMy First Heading\nMy first paragraph.\n\n\n', metadata={'source': '../../../docs/integrations/document_loaders/example_data/fake-content.html', 'title': 'Test Title'})]
| null
https://python.langchain.com/v0.2/docs/how_to/document_loader_markdown/ | * [](/v0.2/)
* [How-to guides](/v0.2/docs/how_to/)
* How to load Markdown
On this page
How to load Markdown
====================
[Markdown](https://en.wikipedia.org/wiki/Markdown) is a lightweight markup language for creating formatted text using a plain-text editor.
Here we cover how to load `Markdown` documents into LangChain [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html#langchain_core.documents.base.Document) objects that we can use downstream.
We will cover:
* Basic usage;
* Parsing of Markdown into elements such as titles, list items, and text.
LangChain implements an [UnstructuredMarkdownLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.markdown.UnstructuredMarkdownLoader.html) object which requires the [Unstructured](https://unstructured-io.github.io/unstructured/) package. First we install it:
# !pip install "unstructured[md]"
Basic usage will ingest a Markdown file into a single document. Here we demonstrate on LangChain's readme:
from langchain_community.document_loaders import UnstructuredMarkdownLoaderfrom langchain_core.documents import Documentmarkdown_path = "../../../../README.md"loader = UnstructuredMarkdownLoader(markdown_path)data = loader.load()assert len(data) == 1assert isinstance(data[0], Document)readme_content = data[0].page_contentprint(readme_content[:250])
**API Reference:**[UnstructuredMarkdownLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.markdown.UnstructuredMarkdownLoader.html) | [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html)
🦜️🔗 LangChain⚡ Build context-aware reasoning applications ⚡Looking for the JS/TS library? Check out LangChain.js.To help you ship LangChain apps to production faster, check out LangSmith. LangSmith is a unified developer platform for building,
Retain Elements[](#retain-elements "Direct link to Retain Elements")
---------------------------------------------------------------------
Under the hood, Unstructured creates different "elements" for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying `mode="elements"`.
loader = UnstructuredMarkdownLoader(markdown_path, mode="elements")data = loader.load()print(f"Number of documents: {len(data)}\n")for document in data[:2]: print(f"{document}\n")
Number of documents: 65page_content='🦜️🔗 LangChain' metadata={'source': '../../../../README.md', 'last_modified': '2024-04-29T13:40:19', 'page_number': 1, 'languages': ['eng'], 'filetype': 'text/markdown', 'file_directory': '../../../..', 'filename': 'README.md', 'category': 'Title'}page_content='⚡ Build context-aware reasoning applications ⚡' metadata={'source': '../../../../README.md', 'last_modified': '2024-04-29T13:40:19', 'page_number': 1, 'languages': ['eng'], 'parent_id': 'c3223b6f7100be08a78f1e8c0c28fde1', 'filetype': 'text/markdown', 'file_directory': '../../../..', 'filename': 'README.md', 'category': 'NarrativeText'}
Note that in this case we recover three distinct element types:
print(set(document.metadata["category"] for document in data))
{'Title', 'NarrativeText', 'ListItem'}
| null
https://python.langchain.com/v0.2/docs/how_to/document_loader_json/ | * [](/v0.2/)
* [How-to guides](/v0.2/docs/how_to/)
* How to load JSON
On this page
How to load JSON
================
[JSON (JavaScript Object Notation)](https://en.wikipedia.org/wiki/JSON) is an open standard file format and data interchange format that uses human-readable text to store and transmit data objects consisting of attribute–value pairs and arrays (or other serializable values).
[JSON Lines](https://jsonlines.org/) is a file format where each line is a valid JSON value.
LangChain implements a [JSONLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.json_loader.JSONLoader.html) to convert JSON and JSONL data into LangChain [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html#langchain_core.documents.base.Document) objects. It uses a specified [jq schema](https://en.wikipedia.org/wiki/Jq_\(programming_language\)) to parse the JSON files, allowing for the extraction of specific fields into the content and metadata of the LangChain Document.
It uses the `jq` python package. Check out this [manual](https://stedolan.github.io/jq/manual/#Basicfilters) for detailed documentation of the `jq` syntax.
Here we will demonstrate:
* How to load JSON and JSONL data into the content of a LangChain `Document`;
* How to load JSON and JSONL data into metadata associated with a `Document`.
#!pip install jq
from langchain_community.document_loaders import JSONLoader
**API Reference:**[JSONLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.json_loader.JSONLoader.html)
import jsonfrom pathlib import Pathfrom pprint import pprintfile_path='./example_data/facebook_chat.json'data = json.loads(Path(file_path).read_text())
pprint(data)
{'image': {'creation_timestamp': 1675549016, 'uri': 'image_of_the_chat.jpg'}, 'is_still_participant': True, 'joinable_mode': {'link': '', 'mode': 1}, 'magic_words': [], 'messages': [{'content': 'Bye!', 'sender_name': 'User 2', 'timestamp_ms': 1675597571851}, {'content': 'Oh no worries! Bye', 'sender_name': 'User 1', 'timestamp_ms': 1675597435669}, {'content': 'No Im sorry it was my mistake, the blue one is not ' 'for sale', 'sender_name': 'User 2', 'timestamp_ms': 1675596277579}, {'content': 'I thought you were selling the blue one!', 'sender_name': 'User 1', 'timestamp_ms': 1675595140251}, {'content': 'Im not interested in this bag. Im interested in the ' 'blue one!', 'sender_name': 'User 1', 'timestamp_ms': 1675595109305}, {'content': 'Here is $129', 'sender_name': 'User 2', 'timestamp_ms': 1675595068468}, {'photos': [{'creation_timestamp': 1675595059, 'uri': 'url_of_some_picture.jpg'}], 'sender_name': 'User 2', 'timestamp_ms': 1675595060730}, {'content': 'Online is at least $100', 'sender_name': 'User 2', 'timestamp_ms': 1675595045152}, {'content': 'How much do you want?', 'sender_name': 'User 1', 'timestamp_ms': 1675594799696}, {'content': 'Goodmorning! $50 is too low.', 'sender_name': 'User 2', 'timestamp_ms': 1675577876645}, {'content': 'Hi! Im interested in your bag. Im offering $50. Let ' 'me know if you are interested. Thanks!', 'sender_name': 'User 1', 'timestamp_ms': 1675549022673}], 'participants': [{'name': 'User 1'}, {'name': 'User 2'}], 'thread_path': 'inbox/User 1 and User 2 chat', 'title': 'User 1 and User 2 chat'}
Using `JSONLoader`[](#using-jsonloader "Direct link to using-jsonloader")
--------------------------------------------------------------------------
Suppose we are interested in extracting the values under the `content` field within the `messages` key of the JSON data. This can easily be done through the `JSONLoader` as shown below.
### JSON file[](#json-file "Direct link to JSON file")
loader = JSONLoader( file_path='./example_data/facebook_chat.json', jq_schema='.messages[].content', text_content=False)data = loader.load()
pprint(data)
[Document(page_content='Bye!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 1}), Document(page_content='Oh no worries! Bye', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 2}), Document(page_content='No Im sorry it was my mistake, the blue one is not for sale', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 3}), Document(page_content='I thought you were selling the blue one!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 4}), Document(page_content='Im not interested in this bag. Im interested in the blue one!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 5}), Document(page_content='Here is $129', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 6}), Document(page_content='', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 7}), Document(page_content='Online is at least $100', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 8}), Document(page_content='How much do you want?', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 9}), Document(page_content='Goodmorning! $50 is too low.', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 10}), Document(page_content='Hi! Im interested in your bag. Im offering $50. Let me know if you are interested. Thanks!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 11})]
### JSON Lines file[](#json-lines-file "Direct link to JSON Lines file")
If you want to load documents from a JSON Lines file, you pass `json_lines=True` and specify `jq_schema` to extract `page_content` from a single JSON object.
file_path = './example_data/facebook_chat_messages.jsonl'pprint(Path(file_path).read_text())
('{"sender_name": "User 2", "timestamp_ms": 1675597571851, "content": "Bye!"}\n' '{"sender_name": "User 1", "timestamp_ms": 1675597435669, "content": "Oh no ' 'worries! Bye"}\n' '{"sender_name": "User 2", "timestamp_ms": 1675596277579, "content": "No Im ' 'sorry it was my mistake, the blue one is not for sale"}\n')
loader = JSONLoader( file_path='./example_data/facebook_chat_messages.jsonl', jq_schema='.content', text_content=False, json_lines=True)data = loader.load()
pprint(data)
[Document(page_content='Bye!', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat_messages.jsonl', 'seq_num': 1}), Document(page_content='Oh no worries! Bye', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat_messages.jsonl', 'seq_num': 2}), Document(page_content='No Im sorry it was my mistake, the blue one is not for sale', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat_messages.jsonl', 'seq_num': 3})]
Another option is to set `jq_schema='.'` and provide `content_key`:
loader = JSONLoader( file_path='./example_data/facebook_chat_messages.jsonl', jq_schema='.', content_key='sender_name', json_lines=True)data = loader.load()
pprint(data)
[Document(page_content='User 2', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat_messages.jsonl', 'seq_num': 1}), Document(page_content='User 1', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat_messages.jsonl', 'seq_num': 2}), Document(page_content='User 2', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat_messages.jsonl', 'seq_num': 3})]
### JSON file with jq schema `content_key`[](#json-file-with-jq-schema-content_key "Direct link to json-file-with-jq-schema-content_key")
To load documents from a JSON file using a `content_key` that is itself a jq expression, set `is_content_key_jq_parsable=True`. Ensure that `content_key` is compatible and can be parsed with the jq schema.
file_path = './sample.json'pprint(Path(file_path).read_text())
{"data": [ {"attributes": { "message": "message1", "tags": [ "tag1"]}, "id": "1"}, {"attributes": { "message": "message2", "tags": [ "tag2"]}, "id": "2"}]}
loader = JSONLoader( file_path=file_path, jq_schema=".data[]", content_key=".attributes.message", is_content_key_jq_parsable=True,)data = loader.load()
pprint(data)
[Document(page_content='message1', metadata={'source': '/path/to/sample.json', 'seq_num': 1}), Document(page_content='message2', metadata={'source': '/path/to/sample.json', 'seq_num': 2})]
Extracting metadata[](#extracting-metadata "Direct link to Extracting metadata")
---------------------------------------------------------------------------------
Generally, we want to include metadata available in the JSON file in the documents that we create from the content.
The following demonstrates how metadata can be extracted using the `JSONLoader`.
There are some key changes to note. In the previous example, where we didn't collect the metadata, we could specify directly in the schema where the value for `page_content` should be extracted from.
.messages[].content
In the current example, we have to tell the loader to iterate over the records in the `messages` field. The `jq_schema` then has to be:
.messages[]
This allows us to pass the records (dict) into the `metadata_func` that has to be implemented. The `metadata_func` is responsible for identifying which pieces of information in the record should be included in the metadata stored in the final `Document` object.
Additionally, we now have to explicitly specify in the loader, via the `content_key` argument, the key in the record from which the value for `page_content` should be extracted.
# Define the metadata extraction function.def metadata_func(record: dict, metadata: dict) -> dict: metadata["sender_name"] = record.get("sender_name") metadata["timestamp_ms"] = record.get("timestamp_ms") return metadataloader = JSONLoader( file_path='./example_data/facebook_chat.json', jq_schema='.messages[]', content_key="content", metadata_func=metadata_func)data = loader.load()
pprint(data)
[Document(page_content='Bye!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 1, 'sender_name': 'User 2', 'timestamp_ms': 1675597571851}), Document(page_content='Oh no worries! Bye', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 2, 'sender_name': 'User 1', 'timestamp_ms': 1675597435669}), Document(page_content='No Im sorry it was my mistake, the blue one is not for sale', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 3, 'sender_name': 'User 2', 'timestamp_ms': 1675596277579}), Document(page_content='I thought you were selling the blue one!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 4, 'sender_name': 'User 1', 'timestamp_ms': 1675595140251}), Document(page_content='Im not interested in this bag. Im interested in the blue one!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 5, 'sender_name': 'User 1', 'timestamp_ms': 1675595109305}), Document(page_content='Here is $129', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 6, 'sender_name': 'User 2', 'timestamp_ms': 1675595068468}), Document(page_content='', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 7, 'sender_name': 'User 2', 'timestamp_ms': 1675595060730}), Document(page_content='Online is at least $100', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 8, 'sender_name': 'User 2', 'timestamp_ms': 1675595045152}), Document(page_content='How much do you want?', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 9, 'sender_name': 'User 1', 'timestamp_ms': 1675594799696}), Document(page_content='Goodmorning! $50 is too low.', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 10, 'sender_name': 'User 2', 'timestamp_ms': 1675577876645}), Document(page_content='Hi! Im interested in your bag. Im offering $50. Let me know if you are interested. Thanks!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 11, 'sender_name': 'User 1', 'timestamp_ms': 1675549022673})]
Now, you will see that the documents contain the metadata associated with the content we extracted.
The `metadata_func`[](#the-metadata_func "Direct link to the-metadata_func")
-----------------------------------------------------------------------------
As shown above, the `metadata_func` accepts the default metadata generated by the `JSONLoader`. This gives the user full control over how the metadata is formatted.
For example, the default metadata contains the `source` and the `seq_num` keys. However, it is possible that the JSON data contains these keys as well. The user can then use the `metadata_func` to rename the default keys and use the ones from the JSON data.
The example below shows how we can modify the `source` to contain only the portion of the file path relative to the `langchain` directory.
# Define the metadata extraction function.def metadata_func(record: dict, metadata: dict) -> dict: metadata["sender_name"] = record.get("sender_name") metadata["timestamp_ms"] = record.get("timestamp_ms") if "source" in metadata: source = metadata["source"].split("/") source = source[source.index("langchain"):] metadata["source"] = "/".join(source) return metadataloader = JSONLoader( file_path='./example_data/facebook_chat.json', jq_schema='.messages[]', content_key="content", metadata_func=metadata_func)data = loader.load()
pprint(data)
[Document(page_content='Bye!', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 1, 'sender_name': 'User 2', 'timestamp_ms': 1675597571851}), Document(page_content='Oh no worries! Bye', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 2, 'sender_name': 'User 1', 'timestamp_ms': 1675597435669}), Document(page_content='No Im sorry it was my mistake, the blue one is not for sale', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 3, 'sender_name': 'User 2', 'timestamp_ms': 1675596277579}), Document(page_content='I thought you were selling the blue one!', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 4, 'sender_name': 'User 1', 'timestamp_ms': 1675595140251}), Document(page_content='Im not interested in this bag. Im interested in the blue one!', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 5, 'sender_name': 'User 1', 'timestamp_ms': 1675595109305}), Document(page_content='Here is $129', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 6, 'sender_name': 'User 2', 'timestamp_ms': 1675595068468}), Document(page_content='', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 7, 'sender_name': 'User 2', 'timestamp_ms': 1675595060730}), Document(page_content='Online is at least $100', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 8, 'sender_name': 'User 2', 'timestamp_ms': 1675595045152}), Document(page_content='How much do you want?', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 9, 'sender_name': 'User 1', 'timestamp_ms': 1675594799696}), Document(page_content='Goodmorning! $50 is too low.', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 10, 'sender_name': 'User 2', 'timestamp_ms': 1675577876645}), Document(page_content='Hi! Im interested in your bag. Im offering $50. Let me know if you are interested. Thanks!', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 11, 'sender_name': 'User 1', 'timestamp_ms': 1675549022673})]
Common JSON structures with jq schema[](#common-json-structures-with-jq-schema "Direct link to Common JSON structures with jq schema")
---------------------------------------------------------------------------------------------------------------------------------------
The list below provides a reference to the possible `jq_schema` the user can use to extract content from the JSON data depending on the structure.
JSON -> [{"text": ...}, {"text": ...}, {"text": ...}]jq_schema -> ".[].text"JSON -> {"key": [{"text": ...}, {"text": ...}, {"text": ...}]}jq_schema -> ".key[].text"JSON -> ["...", "...", "..."]jq_schema -> ".[]"
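For instance, here is a minimal sketch of the first structure above, using a hypothetical `items.json` file written on the fly (the file name and contents are assumptions for illustration only):

from pathlib import Path

from langchain_community.document_loaders import JSONLoader

# Write a small JSON array of objects, each with a "text" field.
Path("items.json").write_text('[{"text": "first item"}, {"text": "second item"}]')

# ".[].text" iterates over the array and extracts each "text" value as page_content.
loader = JSONLoader(file_path="items.json", jq_schema=".[].text")
docs = loader.load()
for doc in docs:
    print(doc.page_content)

Under these assumptions, this would produce one `Document` per array element, with the two text values as `page_content`.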
| null
https://python.langchain.com/v0.2/docs/how_to/character_text_splitter/ | * [](/v0.2/)
* [How-to guides](/v0.2/docs/how_to/)
* How to split by character
How to split by character
=========================
This is the simplest method. This splits based on a given character sequence, which defaults to `"\n\n"`. Chunk length is measured by number of characters.
1. How the text is split: by single character separator.
2. How the chunk size is measured: by number of characters.
To obtain the string content directly, use `.split_text`.
To create LangChain [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) objects (e.g., for use in downstream tasks), use `.create_documents`.
%pip install -qU langchain-text-splitters
from langchain_text_splitters import CharacterTextSplitter# Load an example documentwith open("state_of_the_union.txt") as f: state_of_the_union = f.read()text_splitter = CharacterTextSplitter( separator="\n\n", chunk_size=1000, chunk_overlap=200, length_function=len, is_separator_regex=False,)texts = text_splitter.create_documents([state_of_the_union])print(texts[0])
**API Reference:**[CharacterTextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.CharacterTextSplitter.html)
page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.'
Use `.create_documents` to propagate metadata associated with each document to the output chunks:
metadatas = [{"document": 1}, {"document": 2}]documents = text_splitter.create_documents( [state_of_the_union, state_of_the_union], metadatas=metadatas)print(documents[0])
page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.' metadata={'document': 1}
Use `.split_text` to obtain the string content directly:
text_splitter.split_text(state_of_the_union)[0]
'Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.'
| null
https://python.langchain.com/v0.2/docs/how_to/callbacks_runtime/ | * [](/v0.2/)
* [How-to guides](/v0.2/docs/how_to/)
* How to pass callbacks in at runtime
On this page
How to pass callbacks in at runtime
===================================
Prerequisites
This guide assumes familiarity with the following concepts:
* [Callbacks](/v0.2/docs/concepts/#callbacks)
* [Custom callback handlers](/v0.2/docs/how_to/custom_callbacks/)
In many cases, it is advantageous to pass in handlers when running the object instead. When we pass [`CallbackHandlers`](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.base.BaseCallbackHandler.html#langchain-core-callbacks-base-basecallbackhandler) using the `callbacks` keyword argument while executing a run, those callbacks will be issued by all nested objects involved in the execution. For example, when a handler is passed through to an Agent, it will be used for all callbacks related to the agent and all the objects involved in the agent's execution, in this case, the Tools and LLM.
This prevents us from having to manually attach the handlers to each individual nested object. Here's an example:
from typing import Any, Dict, Listfrom langchain_anthropic import ChatAnthropicfrom langchain_core.callbacks import BaseCallbackHandlerfrom langchain_core.messages import BaseMessagefrom langchain_core.outputs import LLMResultfrom langchain_core.prompts import ChatPromptTemplateclass LoggingHandler(BaseCallbackHandler): def on_chat_model_start( self, serialized: Dict[str, Any], messages: List[List[BaseMessage]], **kwargs ) -> None: print("Chat model started") def on_llm_end(self, response: LLMResult, **kwargs) -> None: print(f"Chat model ended, response: {response}") def on_chain_start( self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs ) -> None: print(f"Chain {serialized.get('name')} started") def on_chain_end(self, outputs: Dict[str, Any], **kwargs) -> None: print(f"Chain ended, outputs: {outputs}")callbacks = [LoggingHandler()]llm = ChatAnthropic(model="claude-3-sonnet-20240229")prompt = ChatPromptTemplate.from_template("What is 1 + {number}?")chain = prompt | llmchain.invoke({"number": "2"}, config={"callbacks": callbacks})
**API Reference:**[ChatAnthropic](https://api.python.langchain.com/en/latest/chat_models/langchain_anthropic.chat_models.ChatAnthropic.html) | [BaseCallbackHandler](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.base.BaseCallbackHandler.html) | [BaseMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.base.BaseMessage.html) | [LLMResult](https://api.python.langchain.com/en/latest/outputs/langchain_core.outputs.llm_result.LLMResult.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html)
Chain RunnableSequence startedChain ChatPromptTemplate startedChain ended, outputs: messages=[HumanMessage(content='What is 1 + 2?')]Chat model startedChat model ended, response: generations=[[ChatGeneration(text='1 + 2 = 3', message=AIMessage(content='1 + 2 = 3', response_metadata={'id': 'msg_01D8Tt5FdtBk5gLTfBPm2tac', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}}, id='run-bb0dddd8-85f3-4e6b-8553-eaa79f859ef8-0'))]] llm_output={'id': 'msg_01D8Tt5FdtBk5gLTfBPm2tac', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}} run=NoneChain ended, outputs: content='1 + 2 = 3' response_metadata={'id': 'msg_01D8Tt5FdtBk5gLTfBPm2tac', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}} id='run-bb0dddd8-85f3-4e6b-8553-eaa79f859ef8-0'
AIMessage(content='1 + 2 = 3', response_metadata={'id': 'msg_01D8Tt5FdtBk5gLTfBPm2tac', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}}, id='run-bb0dddd8-85f3-4e6b-8553-eaa79f859ef8-0')
If there are already existing callbacks associated with a module, these will run in addition to any passed in at runtime.
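To illustrate, here is a minimal sketch combining both. The `RuntimeHandler` class below is a hypothetical second handler added only for this example; `LoggingHandler`, `prompt`, `BaseCallbackHandler`, `LLMResult`, and `ChatAnthropic` are reused from the code above. Both handlers fire for the chat model run.

class RuntimeHandler(BaseCallbackHandler):
    # Hypothetical handler used only to show runtime callbacks combining with existing ones
    def on_llm_end(self, response: LLMResult, **kwargs) -> None:
        print("Runtime handler: chat model ended")

# Bind one handler in the constructor...
llm_with_callbacks = ChatAnthropic(model="claude-3-sonnet-20240229", callbacks=[LoggingHandler()])
chain = prompt | llm_with_callbacks
# ...and pass another at runtime; both run for the chat model invocation.
chain.invoke({"number": "2"}, config={"callbacks": [RuntimeHandler()]})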
Next steps[](#next-steps "Direct link to Next steps")
------------------------------------------------------
You've now learned how to pass callbacks at runtime.
Next, check out the other how-to guides in this section, such as how to [pass callbacks into a module constructor](/v0.2/docs/how_to/custom_callbacks/).
https://python.langchain.com/v0.2/docs/how_to/chat_model_caching/
How to cache chat model responses
=================================
Prerequisites
This guide assumes familiarity with the following concepts:
* [Chat models](/v0.2/docs/concepts/#chat-models)
* [LLMs](/v0.2/docs/concepts/#llms)
LangChain provides an optional caching layer for chat models. This is useful for two main reasons:
* It can save you money by reducing the number of API calls you make to the LLM provider, if you're often requesting the same completion multiple times. This is especially useful during app development.
* It can speed up your application by reducing the number of API calls you make to the LLM provider.
This guide will walk you through how to enable this in your apps.
* OpenAI
* Anthropic
* Azure
* Google
* Cohere
* FireworksAI
* Groq
* MistralAI
* TogetherAI
pip install -qU langchain-openai
import getpassimport osos.environ["OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAIllm = ChatOpenAI(model="gpt-3.5-turbo-0125")
pip install -qU langchain-anthropic
import getpassimport osos.environ["ANTHROPIC_API_KEY"] = getpass.getpass()from langchain_anthropic import ChatAnthropicllm = ChatAnthropic(model="claude-3-sonnet-20240229")
pip install -qU langchain-openai
import getpassimport osos.environ["AZURE_OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import AzureChatOpenAIllm = AzureChatOpenAI( azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"], openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],)
pip install -qU langchain-google-vertexai
import getpassimport osos.environ["GOOGLE_API_KEY"] = getpass.getpass()from langchain_google_vertexai import ChatVertexAIllm = ChatVertexAI(model="gemini-pro")
pip install -qU langchain-cohere
import getpassimport osos.environ["COHERE_API_KEY"] = getpass.getpass()from langchain_cohere import ChatCoherellm = ChatCohere(model="command-r")
pip install -qU langchain-fireworks
import getpassimport osos.environ["FIREWORKS_API_KEY"] = getpass.getpass()from langchain_fireworks import ChatFireworksllm = ChatFireworks(model="accounts/fireworks/models/mixtral-8x7b-instruct")
pip install -qU langchain-groq
import getpassimport osos.environ["GROQ_API_KEY"] = getpass.getpass()from langchain_groq import ChatGroqllm = ChatGroq(model="llama3-8b-8192")
pip install -qU langchain-mistralai
import getpassimport osos.environ["MISTRAL_API_KEY"] = getpass.getpass()from langchain_mistralai import ChatMistralAIllm = ChatMistralAI(model="mistral-large-latest")
pip install -qU langchain-openai
import getpassimport osos.environ["TOGETHER_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAIllm = ChatOpenAI( base_url="https://api.together.xyz/v1", api_key=os.environ["TOGETHER_API_KEY"], model="mistralai/Mixtral-8x7B-Instruct-v0.1",)
from langchain.globals import set_llm_cache
**API Reference:**[set\_llm\_cache](https://api.python.langchain.com/en/latest/globals/langchain.globals.set_llm_cache.html)
In Memory Cache[](#in-memory-cache "Direct link to In Memory Cache")
---------------------------------------------------------------------
This is an ephemeral cache that stores model calls in memory. It will be wiped when your environment restarts, and is not shared across processes.
%%timefrom langchain.cache import InMemoryCacheset_llm_cache(InMemoryCache())# The first time, it is not yet in cache, so it should take longerllm.invoke("Tell me a joke")
**API Reference:**[InMemoryCache](https://api.python.langchain.com/en/latest/cache/langchain_community.cache.InMemoryCache.html)
CPU times: user 645 ms, sys: 214 ms, total: 859 msWall time: 829 ms
AIMessage(content="Why don't scientists trust atoms?\n\nBecause they make up everything!", response_metadata={'token_usage': {'completion_tokens': 13, 'prompt_tokens': 11, 'total_tokens': 24}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-b6836bdd-8c30-436b-828f-0ac5fc9ab50e-0')
%%time# The second time it is, so it goes fasterllm.invoke("Tell me a joke")
CPU times: user 822 µs, sys: 288 µs, total: 1.11 msWall time: 1.06 ms
AIMessage(content="Why don't scientists trust atoms?\n\nBecause they make up everything!", response_metadata={'token_usage': {'completion_tokens': 13, 'prompt_tokens': 11, 'total_tokens': 24}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-b6836bdd-8c30-436b-828f-0ac5fc9ab50e-0')
SQLite Cache[](#sqlite-cache "Direct link to SQLite Cache")
------------------------------------------------------------
This cache implementation uses a `SQLite` database to store responses, and will last across process restarts.
!rm .langchain.db
# We can do the same thing with a SQLite cachefrom langchain_community.cache import SQLiteCacheset_llm_cache(SQLiteCache(database_path=".langchain.db"))
**API Reference:**[SQLiteCache](https://api.python.langchain.com/en/latest/cache/langchain_community.cache.SQLiteCache.html)
%%time# The first time, it is not yet in cache, so it should take longerllm.invoke("Tell me a joke")
CPU times: user 9.91 ms, sys: 7.68 ms, total: 17.6 msWall time: 657 ms
AIMessage(content='Why did the scarecrow win an award? Because he was outstanding in his field!', response_metadata={'token_usage': {'completion_tokens': 17, 'prompt_tokens': 11, 'total_tokens': 28}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-39d9e1e8-7766-4970-b1d8-f50213fd94c5-0')
%%time# The second time it is, so it goes fasterllm.invoke("Tell me a joke")
CPU times: user 52.2 ms, sys: 60.5 ms, total: 113 msWall time: 127 ms
AIMessage(content='Why did the scarecrow win an award? Because he was outstanding in his field!', id='run-39d9e1e8-7766-4970-b1d8-f50213fd94c5-0')
Next steps[](#next-steps "Direct link to Next steps")
------------------------------------------------------
You've now learned how to cache model responses to save time and money.
Next, check out the other how-to guides on chat models in this section, like [how to get a model to return structured output](/v0.2/docs/how_to/structured_output/) or [how to create your own custom chat model](/v0.2/docs/how_to/custom_chat_model/).
https://python.langchain.com/v0.2/docs/how_to/chat_models_universal_init/
How to init any model in one line
=================================
Many LLM applications let end users specify what model provider and model they want the application to be powered by. This requires writing some logic to initialize different ChatModels based on some user configuration. The `init_chat_model()` helper method makes it easy to initialize a number of different model integrations without having to worry about import paths and class names.
Supported models
See the [init\_chat\_model()](https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.base.init_chat_model.html) API reference for a full list of supported integrations.
Make sure you have the integration packages installed for any model providers you want to support. E.g. you should have `langchain-openai` installed to init an OpenAI model.
%pip install -qU langchain langchain-openai langchain-anthropic langchain-google-vertexai
Basic usage[](#basic-usage "Direct link to Basic usage")
---------------------------------------------------------
from langchain.chat_models import init_chat_model# Returns a langchain_openai.ChatOpenAI instance.gpt_4o = init_chat_model("gpt-4o", model_provider="openai", temperature=0)# Returns a langchain_anthropic.ChatAnthropic instance.claude_opus = init_chat_model( "claude-3-opus-20240229", model_provider="anthropic", temperature=0)# Returns a langchain_google_vertexai.ChatVertexAI instance.gemini_15 = init_chat_model( "gemini-1.5-pro", model_provider="google_vertexai", temperature=0)# Since all model integrations implement the ChatModel interface, you can use them in the same way.print("GPT-4o: " + gpt_4o.invoke("what's your name").content + "\n")print("Claude Opus: " + claude_opus.invoke("what's your name").content + "\n")print("Gemini 1.5: " + gemini_15.invoke("what's your name").content + "\n")
**API Reference:**[init\_chat\_model](https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.base.init_chat_model.html)
GPT-4o: I'm an AI created by OpenAI, and I don't have a personal name. You can call me Assistant! How can I help you today?Claude Opus: My name is Claude. It's nice to meet you!Gemini 1.5: I am a large language model, trained by Google. I do not have a name.
Simple config example[](#simple-config-example "Direct link to Simple config example")
---------------------------------------------------------------------------------------
user_config = { "model": "...user-specified...", "model_provider": "...user-specified...", "temperature": 0, "max_tokens": 1000,}llm = init_chat_model(**user_config)llm.invoke("what's your name")
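For instance, a concrete version of that config (the model and provider values below are just hypothetical user selections) might look like this:

```python
# Hypothetical user-supplied values; any supported model/provider pair works.
user_config = {
    "model": "claude-3-opus-20240229",
    "model_provider": "anthropic",
    "temperature": 0,
    "max_tokens": 1000,
}
llm = init_chat_model(**user_config)
llm.invoke("what's your name")
```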
Inferring model provider[](#inferring-model-provider "Direct link to Inferring model provider")
------------------------------------------------------------------------------------------------
For common and distinct model names `init_chat_model()` will attempt to infer the model provider. See the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.base.init_chat_model.html) for a full list of inference behavior. E.g. any model that starts with `gpt-3...` or `gpt-4...` will be inferred as using model provider `openai`.
gpt_4o = init_chat_model("gpt-4o", temperature=0)claude_opus = init_chat_model("claude-3-opus-20240229", temperature=0)gemini_15 = init_chat_model("gemini-1.5-pro", temperature=0)
https://python.langchain.com/v0.2/docs/how_to/document_loader_pdf/
How to load PDFs
================
[Portable Document Format (PDF)](https://en.wikipedia.org/wiki/PDF), standardized as ISO 32000, is a file format developed by Adobe in 1992 to present documents, including text formatting and images, in a manner independent of application software, hardware, and operating systems.
This guide covers how to load `PDF` documents into the LangChain [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html#langchain_core.documents.base.Document) format that we use downstream.
LangChain integrates with a host of PDF parsers. Some are simple and relatively low-level; others will support OCR and image-processing, or perform advanced document layout analysis. The right choice will depend on your application. Below we enumerate the possibilities.
Using PyPDF[](#using-pypdf "Direct link to Using PyPDF")
---------------------------------------------------------
Here we load a PDF using `pypdf` into an array of documents, where each document contains the page content and metadata with the `page` number.
%pip install pypdf
from langchain_community.document_loaders import PyPDFLoaderfile_path = ( "../../../docs/integrations/document_loaders/example_data/layout-parser-paper.pdf")loader = PyPDFLoader(file_path)pages = loader.load_and_split()pages[0]
**API Reference:**[PyPDFLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.pdf.PyPDFLoader.html)
Document(page_content='LayoutParser : A Unified Toolkit for Deep\nLearning Based Document Image Analysis\nZejiang Shen1( \x00), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain\nLee4, Jacob Carlson3, and Weining Li5\n1Allen Institute for AI\[email protected]\n2Brown University\nruochen [email protected]\n3Harvard University\n{melissadell,jacob carlson }@fas.harvard.edu\n4University of Washington\[email protected]\n5University of Waterloo\[email protected]\nAbstract. Recent advances in document image analysis (DIA) have been\nprimarily driven by the application of neural networks. Ideally, research\noutcomes could be easily deployed in production and extended for further\ninvestigation. However, various factors like loosely organized codebases\nand sophisticated model configurations complicate the easy reuse of im-\nportant innovations by a wide audience. Though there have been on-going\nefforts to improve reusability and simplify deep learning (DL) model\ndevelopment in disciplines like natural language processing and computer\nvision, none of them are optimized for challenges in the domain of DIA.\nThis represents a major gap in the existing toolkit, as DIA is central to\nacademic research across a wide range of disciplines in the social sciences\nand humanities. This paper introduces LayoutParser , an open-source\nlibrary for streamlining the usage of DL in DIA research and applica-\ntions. The core LayoutParser library comes with a set of simple and\nintuitive interfaces for applying and customizing DL models for layout de-\ntection, character recognition, and many other document processing tasks.\nTo promote extensibility, LayoutParser also incorporates a community\nplatform for sharing both pre-trained models and full document digiti-\nzation pipelines. We demonstrate that LayoutParser is helpful for both\nlightweight and large-scale digitization pipelines in real-word use cases.\nThe library is publicly available at https://layout-parser.github.io .\nKeywords: Document Image Analysis ·Deep Learning ·Layout Analysis\n·Character Recognition ·Open Source library ·Toolkit.\n1 Introduction\nDeep Learning(DL)-based approaches are the state-of-the-art for a wide range of\ndocument image analysis (DIA) tasks including document image classification [ 11,arXiv:2103.15348v2 [cs.CV] 21 Jun 2021', metadata={'source': '../../../docs/integrations/document_loaders/example_data/layout-parser-paper.pdf', 'page': 0})
An advantage of this approach is that documents can be retrieved with page numbers.
### Vector search over PDFs[](#vector-search-over-pdfs "Direct link to Vector search over PDFs")
Once we have loaded PDFs into LangChain `Document` objects, we can index them (e.g., for a RAG application) in the usual way:
%pip install faiss-cpu # use `pip install faiss-gpu` for CUDA GPU support
import getpassimport osos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
from langchain_community.vectorstores import FAISSfrom langchain_openai import OpenAIEmbeddingsfaiss_index = FAISS.from_documents(pages, OpenAIEmbeddings())docs = faiss_index.similarity_search("What is LayoutParser?", k=2)for doc in docs: print(str(doc.metadata["page"]) + ":", doc.page_content[:300])
**API Reference:**[FAISS](https://api.python.langchain.com/en/latest/vectorstores/langchain_community.vectorstores.faiss.FAISS.html) | [OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html)
13: 14 Z. Shen et al.6 ConclusionLayoutParser provides a comprehensive toolkit for deep learning-based documentimage analysis. The off-the-shelf library is easy to install, and can be used tobuild flexible and accurate pipelines for processing documents with complicatedstructures. It also supports hi0: LayoutParser : A Unified Toolkit for DeepLearning Based Document Image AnalysisZejiang Shen1( ), Ruochen Zhang2, Melissa Dell3, Benjamin Charles GermainLee4, Jacob Carlson3, and Weining Li51Allen Institute for [email protected] Universityruochen [email protected] University
### Extract text from images[](#extract-text-from-images "Direct link to Extract text from images")
Some PDFs contain images of text, e.g., within scanned documents or figures. Using the `rapidocr-onnxruntime` package we can extract the text in such images as well:
%pip install rapidocr-onnxruntime
loader = PyPDFLoader("https://arxiv.org/pdf/2103.15348.pdf", extract_images=True)pages = loader.load()pages[4].page_content
'LayoutParser : A Unified Toolkit for DL-Based DIA 5\nTable 1: Current layout detection models in the LayoutParser model zoo\nDataset Base Model1Large Model Notes\nPubLayNet [38] F / M M Layouts of modern scientific documents\nPRImA [3] M - Layouts of scanned modern magazines and scientific reports\nNewspaper [17] F - Layouts of scanned US newspapers from the 20th century\nTableBank [18] F F Table region on modern scientific and business document\nHJDataset [31] F / M - Layouts of history Japanese documents\n1For each dataset, we train several models of different sizes for different needs (the trade-off between accuracy\nvs. computational cost). For “base model” and “large model”, we refer to using the ResNet 50 or ResNet 101\nbackbones [ 13], respectively. One can train models of different architectures, like Faster R-CNN [ 28] (F) and Mask\nR-CNN [ 12] (M). For example, an F in the Large Model column indicates it has a Faster R-CNN model trained\nusing the ResNet 101 backbone. The platform is maintained and a number of additions will be made to the model\nzoo in coming months.\nlayout data structures , which are optimized for efficiency and versatility. 3) When\nnecessary, users can employ existing or customized OCR models via the unified\nAPI provided in the OCR module . 4)LayoutParser comes with a set of utility\nfunctions for the visualization and storage of the layout data. 5) LayoutParser\nis also highly customizable, via its integration with functions for layout data\nannotation and model training . We now provide detailed descriptions for each\ncomponent.\n3.1 Layout Detection Models\nInLayoutParser , a layout model takes a document image as an input and\ngenerates a list of rectangular boxes for the target content regions. Different\nfrom traditional methods, it relies on deep convolutional neural networks rather\nthan manually curated rules to identify content regions. It is formulated as an\nobject detection problem and state-of-the-art models like Faster R-CNN [ 28] and\nMask R-CNN [ 12] are used. This yields prediction results of high accuracy and\nmakes it possible to build a concise, generalized interface for layout detection.\nLayoutParser , built upon Detectron2 [ 35], provides a minimal API that can\nperform layout detection with only four lines of code in Python:\n1import layoutparser as lp\n2image = cv2. imread (" image_file ") # load images\n3model = lp. Detectron2LayoutModel (\n4 "lp :// PubLayNet / faster_rcnn_R_50_FPN_3x / config ")\n5layout = model . detect ( image )\nLayoutParser provides a wealth of pre-trained model weights using various\ndatasets covering different languages, time periods, and document types. Due to\ndomain shift [ 7], the prediction performance can notably drop when models are ap-\nplied to target samples that are significantly different from the training dataset. As\ndocument structures and layouts vary greatly in different domains, it is important\nto select models trained on a dataset similar to the test samples. A semantic syntax\nis used for initializing the model weights in LayoutParser , using both the dataset\nname and model name lp://<dataset-name>/<model-architecture-name> .'
Using PyMuPDF[](#using-pymupdf "Direct link to Using PyMuPDF")
---------------------------------------------------------------
This is the fastest of the PDF parsing options. It returns one document per page and includes detailed metadata about the PDF and its pages.
from langchain_community.document_loaders import PyMuPDFLoaderloader = PyMuPDFLoader("example_data/layout-parser-paper.pdf")data = loader.load()data[0]
**API Reference:**[PyMuPDFLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.pdf.PyMuPDFLoader.html)
Additionally, you can pass along any of the options from the [PyMuPDF documentation](https://pymupdf.readthedocs.io/en/latest/app1.html#plain-text/) as keyword arguments in the `load` call, and they will be passed along to the `get_text()` call.
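For example, a possible sketch (assuming your installed PyMuPDF version supports the `sort` option of `get_text()`):

```python
from langchain_community.document_loaders import PyMuPDFLoader

loader = PyMuPDFLoader("example_data/layout-parser-paper.pdf")
# Extra keyword arguments are forwarded to PyMuPDF's get_text();
# sort=True asks PyMuPDF to return text in natural reading order.
data = loader.load(sort=True)
```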
Using MathPix[](#using-mathpix "Direct link to Using MathPix")
---------------------------------------------------------------
Inspired by Daniel Gross's [https://gist.github.com/danielgross/3ab4104e14faccc12b49200843adab21](https://gist.github.com/danielgross/3ab4104e14faccc12b49200843adab21)
from langchain_community.document_loaders import MathpixPDFLoaderfile_path = ( "../../../docs/integrations/document_loaders/example_data/layout-parser-paper.pdf")loader = MathpixPDFLoader(file_path)data = loader.load()
**API Reference:**[MathpixPDFLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.pdf.MathpixPDFLoader.html)
Using Unstructured[](#using-unstructured "Direct link to Using Unstructured")
------------------------------------------------------------------------------
[Unstructured](https://unstructured-io.github.io/unstructured/) supports a common interface for working with unstructured or semi-structured file formats, such as Markdown or PDF. LangChain's [UnstructuredPDFLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.pdf.UnstructuredPDFLoader.html) integrates with Unstructured to parse PDF documents into LangChain [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) objects.
from langchain_community.document_loaders import UnstructuredPDFLoaderfile_path = ( "../../../docs/integrations/document_loaders/example_data/layout-parser-paper.pdf")loader = UnstructuredPDFLoader(file_path)data = loader.load()
**API Reference:**[UnstructuredPDFLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.pdf.UnstructuredPDFLoader.html)
### Retain Elements[](#retain-elements "Direct link to Retain Elements")
Under the hood, Unstructured creates different "elements" for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying `mode="elements"`.
file_path = ( "../../../docs/integrations/document_loaders/example_data/layout-parser-paper.pdf")loader = UnstructuredPDFLoader(file_path, mode="elements")data = loader.load()data[0]
Document(page_content='1 2 0 2', metadata={'source': '../../../docs/integrations/document_loaders/example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((16.34, 213.36), (16.34, 253.36), (36.34, 253.36), (36.34, 213.36)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'file_directory': '../../../docs/integrations/document_loaders/example_data', 'filename': 'layout-parser-paper.pdf', 'languages': ['eng'], 'last_modified': '2024-03-18T13:22:22', 'page_number': 1, 'filetype': 'application/pdf', 'category': 'UncategorizedText'})
See the full set of element types for this particular document:
set(doc.metadata["category"] for doc in data)
{'ListItem', 'NarrativeText', 'Title', 'UncategorizedText'}
### Fetching remote PDFs using Unstructured[](#fetching-remote-pdfs-using-unstructured "Direct link to Fetching remote PDFs using Unstructured")
This covers how to load online PDFs into a document format that we can use downstream. This can be used for various online PDF sites such as [https://open.umn.edu/opentextbooks/textbooks/](https://open.umn.edu/opentextbooks/textbooks/) and [https://arxiv.org/archive/](https://arxiv.org/archive/)
Note: all other PDF loaders can also be used to fetch remote PDFs, but `OnlinePDFLoader` is a legacy function, and works specifically with `UnstructuredPDFLoader`.
from langchain_community.document_loaders import OnlinePDFLoaderloader = OnlinePDFLoader("https://arxiv.org/pdf/2302.03803.pdf")data = loader.load()
**API Reference:**[OnlinePDFLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.pdf.OnlinePDFLoader.html)
Using PyPDFium2[](#using-pypdfium2 "Direct link to Using PyPDFium2")
---------------------------------------------------------------------
from langchain_community.document_loaders import PyPDFium2Loaderfile_path = ( "../../../docs/integrations/document_loaders/example_data/layout-parser-paper.pdf")loader = PyPDFium2Loader(file_path)data = loader.load()
**API Reference:**[PyPDFium2Loader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.pdf.PyPDFium2Loader.html)
Using PDFMiner[](#using-pdfminer "Direct link to Using PDFMiner")
------------------------------------------------------------------
from langchain_community.document_loaders import PDFMinerLoaderfile_path = ( "../../../docs/integrations/document_loaders/example_data/layout-parser-paper.pdf")loader = PDFMinerLoader(file_path)data = loader.load()
**API Reference:**[PDFMinerLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.pdf.PDFMinerLoader.html)
### Using PDFMiner to generate HTML text[](#using-pdfminer-to-generate-html-text "Direct link to Using PDFMiner to generate HTML text")
This can be helpful for chunking texts semantically into sections, as the output HTML content can be parsed via `BeautifulSoup` to get more structured and rich information about font size, page numbers, PDF headers/footers, etc.
from langchain_community.document_loaders import PDFMinerPDFasHTMLLoaderfile_path = ( "../../../docs/integrations/document_loaders/example_data/layout-parser-paper.pdf")loader = PDFMinerPDFasHTMLLoader(file_path)data = loader.load()[0]
**API Reference:**[PDFMinerPDFasHTMLLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.pdf.PDFMinerPDFasHTMLLoader.html)
from bs4 import BeautifulSoupsoup = BeautifulSoup(data.page_content, "html.parser")content = soup.find_all("div")
import recur_fs = Nonecur_text = ""snippets = [] # first collect all snippets that have the same font sizefor c in content: sp = c.find("span") if not sp: continue st = sp.get("style") if not st: continue fs = re.findall("font-size:(\d+)px", st) if not fs: continue fs = int(fs[0]) if not cur_fs: cur_fs = fs if fs == cur_fs: cur_text += c.text else: snippets.append((cur_text, cur_fs)) cur_fs = fs cur_text = c.textsnippets.append((cur_text, cur_fs))# Note: The above logic is very straightforward. One can also add more strategies such as removing duplicate snippets (as# headers/footers in a PDF appear on multiple pages so if we find duplicates it's safe to assume that it is redundant info)
from langchain_core.documents import Documentcur_idx = -1semantic_snippets = []# Assumption: headings have higher font size than their respective contentfor s in snippets: # if current snippet's font size > previous section's heading => it is a new heading if ( not semantic_snippets or s[1] > semantic_snippets[cur_idx].metadata["heading_font"] ): metadata = {"heading": s[0], "content_font": 0, "heading_font": s[1]} metadata.update(data.metadata) semantic_snippets.append(Document(page_content="", metadata=metadata)) cur_idx += 1 continue # if current snippet's font size <= previous section's content => content belongs to the same section (one can also create # a tree like structure for sub sections if needed but that may require some more thinking and may be data specific) if ( not semantic_snippets[cur_idx].metadata["content_font"] or s[1] <= semantic_snippets[cur_idx].metadata["content_font"] ): semantic_snippets[cur_idx].page_content += s[0] semantic_snippets[cur_idx].metadata["content_font"] = max( s[1], semantic_snippets[cur_idx].metadata["content_font"] ) continue # if current snippet's font size > previous section's content but less than previous section's heading than also make a new # section (e.g. title of a PDF will have the highest font size but we don't want it to subsume all sections) metadata = {"heading": s[0], "content_font": 0, "heading_font": s[1]} metadata.update(data.metadata) semantic_snippets.append(Document(page_content="", metadata=metadata)) cur_idx += 1
**API Reference:**[Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html)
semantic_snippets[4]
Document(page_content='Recently, various DL models and datasets have been developed for layout analysis\ntasks. The dhSegment [22] utilizes fully convolutional networks [20] for segmen-\ntation tasks on historical documents. Object detection-based methods like Faster\nR-CNN [28] and Mask R-CNN [12] are used for identifying document elements [38]\nand detecting tables [30, 26]. Most recently, Graph Neural Networks [29] have also\nbeen used in table detection [27]. However, these models are usually implemented\nindividually and there is no unified framework to load and use such models.\nThere has been a surge of interest in creating open-source tools for document\nimage processing: a search of document image analysis in Github leads to 5M\nrelevant code pieces 6; yet most of them rely on traditional rule-based methods\nor provide limited functionalities. The closest prior research to our work is the\nOCR-D project7, which also tries to build a complete toolkit for DIA. However,\nsimilar to the platform developed by Neudecker et al. [21], it is designed for\nanalyzing historical documents, and provides no supports for recent DL models.\nThe DocumentLayoutAnalysis project8 focuses on processing born-digital PDF\ndocuments via analyzing the stored PDF data. Repositories like DeepLayout9\nand Detectron2-PubLayNet10 are individual deep learning models trained on\nlayout analysis datasets without support for the full DIA pipeline. The Document\nAnalysis and Exploitation (DAE) platform [15] and the DeepDIVA project [2]\naim to improve the reproducibility of DIA methods (or DL models), yet they\nare not actively maintained. OCR engines like Tesseract [14], easyOCR11 and\npaddleOCR12 usually do not come with comprehensive functionalities for other\nDIA tasks like layout analysis.\nRecent years have also seen numerous efforts to create libraries for promoting\nreproducibility and reusability in the field of DL. Libraries like Dectectron2 [35],\n6 The number shown is obtained by specifying the search type as ‘code’.\n7 https://ocr-d.de/en/about\n8 https://github.com/BobLd/DocumentLayoutAnalysis\n9 https://github.com/leonlulu/DeepLayout\n10 https://github.com/hpanwar08/detectron2\n11 https://github.com/JaidedAI/EasyOCR\n12 https://github.com/PaddlePaddle/PaddleOCR\n4\nZ. Shen et al.\nFig. 1: The overall architecture of LayoutParser. For an input document image,\nthe core LayoutParser library provides a set of off-the-shelf tools for layout\ndetection, OCR, visualization, and storage, backed by a carefully designed layout\ndata structure. LayoutParser also supports high level customization via efficient\nlayout annotation and model training functions. These improve model accuracy\non the target samples. The community platform enables the easy sharing of DIA\nmodels and whole digitization pipelines to promote reusability and reproducibility.\nA collection of detailed documentation, tutorials and exemplar projects make\nLayoutParser easy to learn and use.\nAllenNLP [8] and transformers [34] have provided the community with complete\nDL-based support for developing and deploying models for general computer\nvision and natural language processing problems. LayoutParser, on the other\nhand, specializes specifically in DIA tasks. LayoutParser is also equipped with a\ncommunity platform inspired by established model hubs such as Torch Hub [23]\nand TensorFlow Hub [1]. 
It enables the sharing of pretrained models as well as\nfull document processing pipelines that are unique to DIA tasks.\nThere have been a variety of document data collections to facilitate the\ndevelopment of DL models. Some examples include PRImA [3](magazine layouts),\nPubLayNet [38](academic paper layouts), Table Bank [18](tables in academic\npapers), Newspaper Navigator Dataset [16, 17](newspaper figure layouts) and\nHJDataset [31](historical Japanese document layouts). A spectrum of models\ntrained on these datasets are currently available in the LayoutParser model zoo\nto support different use cases.\n', metadata={'heading': '2 Related Work\n', 'content_font': 9, 'heading_font': 11, 'source': '../../../docs/integrations/document_loaders/example_data/layout-parser-paper.pdf'})
PyPDF Directory[](#pypdf-directory "Direct link to PyPDF Directory")
---------------------------------------------------------------------
Load all PDFs from a directory:
from langchain_community.document_loaders import PyPDFDirectoryLoader
**API Reference:**[PyPDFDirectoryLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.pdf.PyPDFDirectoryLoader.html)
directory_path = "../../../docs/integrations/document_loaders/example_data/"loader = PyPDFDirectoryLoader(directory_path)docs = loader.load()
Using PDFPlumber[](#using-pdfplumber "Direct link to Using PDFPlumber")
------------------------------------------------------------------------
Like PyMuPDF, this loader returns one document per page and includes detailed metadata about the PDF and its pages.
from langchain_community.document_loaders import PDFPlumberLoaderfile_path = ( "../../../docs/integrations/document_loaders/example_data/layout-parser-paper.pdf")loader = PDFPlumberLoader(file_path)data = loader.load()data[0]
**API Reference:**[PDFPlumberLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.pdf.PDFPlumberLoader.html)
Using AmazonTextractPDFParser[](#using-amazontextractpdfparser "Direct link to Using AmazonTextractPDFParser")
---------------------------------------------------------------------------------------------------------------
The AmazonTextractPDFLoader calls the [Amazon Textract Service](https://aws.amazon.com/textract/) to convert PDFs into a Document structure. The loader does pure OCR at the moment, with more features like layout support planned, depending on demand. Single and multi-page documents are supported with up to 3000 pages and 512 MB of size.
For the call to be successful an AWS account is required, similar to the [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html) requirements.
Besides the AWS configuration, it is very similar to the other PDF loaders, while also supporting JPEG, PNG, TIFF, and non-native PDF formats.
from langchain_community.document_loaders import AmazonTextractPDFLoaderloader = AmazonTextractPDFLoader("example_data/alejandro_rosalez_sample-small.jpeg")documents = loader.load()
**API Reference:**[AmazonTextractPDFLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.pdf.AmazonTextractPDFLoader.html)
Using AzureAIDocumentIntelligenceLoader[](#using-azureaidocumentintelligenceloader "Direct link to Using AzureAIDocumentIntelligenceLoader")
---------------------------------------------------------------------------------------------------------------------------------------------
[Azure AI Document Intelligence](https://aka.ms/doc-intelligence) (formerly known as `Azure Form Recognizer`) is a machine-learning based service that extracts texts (including handwriting), tables, document structures (e.g., titles, section headings, etc.) and key-value pairs from digital or scanned PDFs, images, Office and HTML files. Document Intelligence supports `PDF`, `JPEG/JPG`, `PNG`, `BMP`, `TIFF`, `HEIF`, `DOCX`, `XLSX`, `PPTX` and `HTML`.
This [current implementation](https://aka.ms/di-langchain) of a loader using `Document Intelligence` can incorporate content page-wise and turn it into LangChain documents. The default output format is markdown, which can be easily chained with `MarkdownHeaderTextSplitter` for semantic document chunking. You can also use `mode="single"` or `mode="page"` to return pure text as a single document or as one document per page.
### Prerequisite[](#prerequisite "Direct link to Prerequisite")
An Azure AI Document Intelligence resource in one of the 3 preview regions: **East US**, **West US2**, **West Europe** - follow [this document](https://learn.microsoft.com/azure/ai-services/document-intelligence/create-document-intelligence-resource?view=doc-intel-4.0.0) to create one if you don't have one. You will be passing `<endpoint>` and `<key>` as parameters to the loader.
%pip install --upgrade --quiet langchain langchain-community azure-ai-documentintelligence
from langchain_community.document_loaders import AzureAIDocumentIntelligenceLoaderfile_path = "<filepath>"endpoint = "<endpoint>"key = "<key>"loader = AzureAIDocumentIntelligenceLoader( api_endpoint=endpoint, api_key=key, file_path=file_path, api_model="prebuilt-layout")documents = loader.load()
**API Reference:**[AzureAIDocumentIntelligenceLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.doc_intelligence.AzureAIDocumentIntelligenceLoader.html)
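As a possible follow-on (not part of the original example), the markdown output can then be chained with `MarkdownHeaderTextSplitter` for semantic chunking; the heading names below are just illustrative:

```python
from langchain_text_splitters import MarkdownHeaderTextSplitter

# Split the markdown produced by the loader on its headings.
splitter = MarkdownHeaderTextSplitter(
    headers_to_split_on=[("#", "Header 1"), ("##", "Header 2")]
)
chunks = splitter.split_text(documents[0].page_content)
```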
https://python.langchain.com/v0.2/docs/how_to/document_loader_office_file/
How to load Microsoft Office files
==================================
The [Microsoft Office](https://www.office.com/) suite of productivity software includes Microsoft Word, Microsoft Excel, Microsoft PowerPoint, Microsoft Outlook, and Microsoft OneNote. It is available for Microsoft Windows and macOS operating systems. It is also available on Android and iOS.
This covers how to load commonly used file formats including `DOCX`, `XLSX` and `PPTX` documents into a LangChain [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html#langchain_core.documents.base.Document) object that we can use downstream.
Loading DOCX, XLSX, PPTX with AzureAIDocumentIntelligenceLoader[](#loading-docx-xlsx-pptx-with-azureaidocumentintelligenceloader "Direct link to Loading DOCX, XLSX, PPTX with AzureAIDocumentIntelligenceLoader")
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[Azure AI Document Intelligence](https://aka.ms/doc-intelligence) (formerly known as `Azure Form Recognizer`) is a machine-learning based service that extracts texts (including handwriting), tables, document structures (e.g., titles, section headings, etc.) and key-value pairs from digital or scanned PDFs, images, Office and HTML files. Document Intelligence supports `PDF`, `JPEG/JPG`, `PNG`, `BMP`, `TIFF`, `HEIF`, `DOCX`, `XLSX`, `PPTX` and `HTML`.
This [current implementation](https://aka.ms/di-langchain) of a loader using `Document Intelligence` can incorporate content page-wise and turn it into LangChain documents. The default output format is markdown, which can be easily chained with `MarkdownHeaderTextSplitter` for semantic document chunking. You can also use `mode="single"` or `mode="page"` to return pure text as a single document or as one document per page.
### Prerequisite[](#prerequisite "Direct link to Prerequisite")
An Azure AI Document Intelligence resource in one of the 3 preview regions: **East US**, **West US2**, **West Europe** - follow [this document](https://learn.microsoft.com/azure/ai-services/document-intelligence/create-document-intelligence-resource?view=doc-intel-4.0.0) to create one if you don't have one. You will be passing `<endpoint>` and `<key>` as parameters to the loader.
%pip install --upgrade --quiet langchain langchain-community azure-ai-documentintelligencefrom langchain_community.document_loaders import AzureAIDocumentIntelligenceLoaderfile_path = "<filepath>"endpoint = "<endpoint>"key = "<key>"loader = AzureAIDocumentIntelligenceLoader( api_endpoint=endpoint, api_key=key, file_path=file_path, api_model="prebuilt-layout")documents = loader.load()
**API Reference:**[AzureAIDocumentIntelligenceLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.doc_intelligence.AzureAIDocumentIntelligenceLoader.html)
https://python.langchain.com/v0.2/docs/how_to/dynamic_chain/
How to create a dynamic (self-constructing) chain
=================================================
Prerequisites
This guide assumes familiarity with the following:
* [LangChain Expression Language (LCEL)](/v0.2/docs/concepts/#langchain-expression-language)
* [How to turn any function into a runnable](/v0.2/docs/how_to/functions/)
Sometimes we want to construct parts of a chain at runtime, depending on the chain inputs ([routing](/v0.2/docs/how_to/routing/) is the most common example of this). We can create dynamic chains like this using a very useful property of RunnableLambdas: if a RunnableLambda returns a Runnable, that Runnable is itself invoked. Let's see an example.
* OpenAI
* Anthropic
* Azure
* Google
* Cohere
* FireworksAI
* Groq
* MistralAI
* TogetherAI
pip install -qU langchain-openai
import getpassimport osos.environ["OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAIllm = ChatOpenAI(model="gpt-3.5-turbo-0125")
pip install -qU langchain-anthropic
import getpassimport osos.environ["ANTHROPIC_API_KEY"] = getpass.getpass()from langchain_anthropic import ChatAnthropicllm = ChatAnthropic(model="claude-3-sonnet-20240229")
pip install -qU langchain-openai
import getpassimport osos.environ["AZURE_OPENAI_API_KEY"] = getpass.getpass()from langchain_openai import AzureChatOpenAIllm = AzureChatOpenAI( azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"], azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"], openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],)
pip install -qU langchain-google-vertexai
import getpassimport osos.environ["GOOGLE_API_KEY"] = getpass.getpass()from langchain_google_vertexai import ChatVertexAIllm = ChatVertexAI(model="gemini-pro")
pip install -qU langchain-cohere
import getpassimport osos.environ["COHERE_API_KEY"] = getpass.getpass()from langchain_cohere import ChatCoherellm = ChatCohere(model="command-r")
pip install -qU langchain-fireworks
import getpassimport osos.environ["FIREWORKS_API_KEY"] = getpass.getpass()from langchain_fireworks import ChatFireworksllm = ChatFireworks(model="accounts/fireworks/models/mixtral-8x7b-instruct")
pip install -qU langchain-groq
import getpassimport osos.environ["GROQ_API_KEY"] = getpass.getpass()from langchain_groq import ChatGroqllm = ChatGroq(model="llama3-8b-8192")
pip install -qU langchain-mistralai
import getpassimport osos.environ["MISTRAL_API_KEY"] = getpass.getpass()from langchain_mistralai import ChatMistralAIllm = ChatMistralAI(model="mistral-large-latest")
pip install -qU langchain-openai
import getpassimport osos.environ["TOGETHER_API_KEY"] = getpass.getpass()from langchain_openai import ChatOpenAIllm = ChatOpenAI( base_url="https://api.together.xyz/v1", api_key=os.environ["TOGETHER_API_KEY"], model="mistralai/Mixtral-8x7B-Instruct-v0.1",)
from langchain_anthropic import ChatAnthropicllm = ChatAnthropic(model="claude-3-sonnet-20240229")
**API Reference:**[ChatAnthropic](https://api.python.langchain.com/en/latest/chat_models/langchain_anthropic.chat_models.ChatAnthropic.html)
from langchain_core.output_parsers import StrOutputParserfrom langchain_core.prompts import ChatPromptTemplatefrom langchain_core.runnables import Runnable, RunnablePassthrough, chaincontextualize_instructions = """Convert the latest user question into a standalone question given the chat history. Don't answer the question, return the question and nothing else (no descriptive text)."""contextualize_prompt = ChatPromptTemplate.from_messages( [ ("system", contextualize_instructions), ("placeholder", "{chat_history}"), ("human", "{question}"), ])contextualize_question = contextualize_prompt | llm | StrOutputParser()qa_instructions = ( """Answer the user question given the following context:\n\n{context}.""")qa_prompt = ChatPromptTemplate.from_messages( [("system", qa_instructions), ("human", "{question}")])@chaindef contextualize_if_needed(input_: dict) -> Runnable: if input_.get("chat_history"): # NOTE: This is returning another Runnable, not an actual output. return contextualize_question else: return RunnablePassthrough()@chaindef fake_retriever(input_: dict) -> str: return "egypt's population in 2024 is about 111 million"full_chain = ( RunnablePassthrough.assign(question=contextualize_if_needed).assign( context=fake_retriever ) | qa_prompt | llm | StrOutputParser())full_chain.invoke( { "question": "what about egypt", "chat_history": [ ("human", "what's the population of indonesia"), ("ai", "about 276 million"), ], })
**API Reference:**[StrOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.string.StrOutputParser.html) | [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) | [Runnable](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html) | [RunnablePassthrough](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html) | [chain](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.chain.html)
"According to the context provided, Egypt's population in 2024 is estimated to be about 111 million."
The key here is that `contextualize_if_needed` returns another Runnable and not an actual output. This returned Runnable is itself run when the full chain is executed.
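Stripped of the retrieval details, the same property can be seen in a minimal sketch (the function and variable names here are purely illustrative):

```python
from langchain_core.runnables import RunnableLambda, chain

@chain
def maybe_uppercase(input_: dict):
    # Returning a Runnable means it is invoked with `input_`,
    # rather than being returned as the chain's output.
    if input_.get("uppercase"):
        return RunnableLambda(lambda x: x["text"].upper())
    return RunnableLambda(lambda x: x["text"])

maybe_uppercase.invoke({"text": "hello", "uppercase": True})  # -> "HELLO"
maybe_uppercase.invoke({"text": "hello"})  # -> "hello"
```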
Looking at the trace we can see that, since we passed in chat\_history, we executed the contextualize\_question chain as part of the full chain: [https://smith.langchain.com/public/9e0ae34c-4082-4f3f-beed-34a2a2f4c991/r](https://smith.langchain.com/public/9e0ae34c-4082-4f3f-beed-34a2a2f4c991/r)
Note that the streaming, batching, etc. capabilities of the returned Runnable are all preserved:
for chunk in contextualize_if_needed.stream( { "question": "what about egypt", "chat_history": [ ("human", "what's the population of indonesia"), ("ai", "about 276 million"), ], }): print(chunk)
What is the population of Egypt?
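Batching the dynamically constructed chain works the same way (a possible extension of the example above, not in the original notebook):

```python
contextualize_if_needed.batch(
    [
        {
            "question": "what about egypt",
            "chat_history": [
                ("human", "what's the population of indonesia"),
                ("ai", "about 276 million"),
            ],
        },
        # No chat history, so this input is passed through unchanged.
        {"question": "what's the population of indonesia", "chat_history": []},
    ]
)
```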
https://python.langchain.com/v0.2/docs/how_to/embed_text/
Text embedding models
=====================
info
Head to [Integrations](/v0.2/docs/integrations/text_embedding/) for documentation on built-in integrations with text embedding model providers.
The Embeddings class is designed for interfacing with text embedding models. There are many embedding model providers (OpenAI, Cohere, Hugging Face, etc.), and this class provides a standard interface for all of them.
Embeddings create a vector representation of a piece of text. This is useful because it means we can think about text in the vector space, and do things like semantic search where we look for pieces of text that are most similar in the vector space.
The base Embeddings class in LangChain provides two methods: one for embedding documents and one for embedding a query. The former, `.embed_documents`, takes as input multiple texts, while the latter, `.embed_query`, takes a single text. The reason for having these as two separate methods is that some embedding providers have different embedding methods for documents (to be searched over) vs queries (the search query itself). `.embed_query` will return a list of floats, whereas `.embed_documents` returns a list of lists of floats.
Get started[](#get-started "Direct link to Get started")
---------------------------------------------------------
### Setup[](#setup "Direct link to Setup")
* OpenAI
* Cohere
* Hugging Face
To start we'll need to install the OpenAI partner package:
pip install langchain-openai
Accessing the API requires an API key, which you can get by creating an account and heading [here](https://platform.openai.com/account/api-keys). Once we have a key we'll want to set it as an environment variable by running:
export OPENAI_API_KEY="..."
If you'd prefer not to set an environment variable you can pass the key in directly via the `api_key` named parameter when initiating the OpenAI LLM class:
from langchain_openai import OpenAIEmbeddingsembeddings_model = OpenAIEmbeddings(api_key="...")
**API Reference:**[OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html)
Otherwise you can initialize without any params:
from langchain_openai import OpenAIEmbeddingsembeddings_model = OpenAIEmbeddings()
**API Reference:**[OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html)
To start we'll need to install the Cohere SDK package:
pip install langchain-cohere
Accessing the API requires an API key, which you can get by creating an account and heading [here](https://dashboard.cohere.com/api-keys). Once we have a key we'll want to set it as an environment variable by running:
export COHERE_API_KEY="..."
If you'd prefer not to set an environment variable you can pass the key in directly via the `cohere_api_key` named parameter when initiating the Cohere LLM class:
from langchain_cohere import CohereEmbeddingsembeddings_model = CohereEmbeddings(cohere_api_key="...")
**API Reference:**[CohereEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_cohere.embeddings.CohereEmbeddings.html)
Otherwise you can initialize without any params:
from langchain_cohere import CohereEmbeddingsembeddings_model = CohereEmbeddings()
**API Reference:**[CohereEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_cohere.embeddings.CohereEmbeddings.html)
To start we'll need to install the Hugging Face partner package:
pip install langchain-huggingface
You can then load any [Sentence Transformers model](https://huggingface.co/models?library=sentence-transformers) from the Hugging Face Hub.
from langchain_huggingface import HuggingFaceEmbeddingsembeddings_model = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")
**API Reference:**[HuggingFaceEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_huggingface.embeddings.huggingface.HuggingFaceEmbeddings.html)
You can also leave the `model_name` blank to use the default [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) model.
from langchain_huggingface import HuggingFaceEmbeddingsembeddings_model = HuggingFaceEmbeddings()
**API Reference:**[HuggingFaceEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_huggingface.embeddings.huggingface.HuggingFaceEmbeddings.html)
### `embed_documents`[](#embed_documents "Direct link to embed_documents")
#### Embed list of texts[](#embed-list-of-texts "Direct link to Embed list of texts")
Use `.embed_documents` to embed a list of strings, returning a list of embeddings:
embeddings = embeddings_model.embed_documents( [ "Hi there!", "Oh, hello!", "What's your name?", "My friends call me World", "Hello World!" ])len(embeddings), len(embeddings[0])
(5, 1536)
### `embed_query`[](#embed_query "Direct link to embed_query")
#### Embed single query[](#embed-single-query "Direct link to Embed single query")
Use `.embed_query` to embed a single piece of text (e.g., for the purpose of comparing to other embedded pieces of texts).
embedded_query = embeddings_model.embed_query("What was the name mentioned in the conversation?")embedded_query[:5]
[0.0053587136790156364, -0.0004999046213924885, 0.038883671164512634, -0.003001077566295862, -0.00900818221271038]
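As a small illustration of the semantic search idea (not part of the original guide; it assumes `numpy` is installed), we can compare the query embedding to each document embedding with cosine similarity and pick the closest document:

```python
import numpy as np

def cosine_similarity(a, b):
    a, b = np.array(a), np.array(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = [cosine_similarity(embedded_query, e) for e in embeddings]
best = max(range(len(scores)), key=scores.__getitem__)
print(best, scores[best])  # index and score of the most similar document
```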
https://python.langchain.com/v0.2/docs/how_to/example_selectors_mmr/
How to select examples by maximal marginal relevance (MMR)
==========================================================
The `MaxMarginalRelevanceExampleSelector` selects examples by balancing similarity to the input against diversity among the selected examples. It does this by finding the examples whose embeddings have the greatest cosine similarity with the input, and then iteratively adding them while penalizing them for closeness to already selected examples.
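To make that trade-off concrete, below is a minimal sketch of the greedy MMR selection loop in plain NumPy. It is an illustration of the idea rather than the selector's internal implementation, and the `lambda_mult` relevance-vs-diversity weight is an assumed parameter name used here only for the sketch:

```python
import numpy as np


def mmr_select(query_vec, example_vecs, k=2, lambda_mult=0.5):
    """Greedily pick k examples, trading off relevance to the query against redundancy."""

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    selected, remaining = [], list(range(len(example_vecs)))
    while remaining and len(selected) < k:

        def score(i):
            relevance = cosine(query_vec, example_vecs[i])
            # Penalty: similarity to the closest example already picked.
            redundancy = max(
                (cosine(example_vecs[i], example_vecs[j]) for j in selected), default=0.0
            )
            return lambda_mult * relevance - (1 - lambda_mult) * redundancy

        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected  # indices of the chosen examples
```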
```python
from langchain_community.vectorstores import FAISS
from langchain_core.example_selectors import (
    MaxMarginalRelevanceExampleSelector,
    SemanticSimilarityExampleSelector,
)
from langchain_core.prompts import FewShotPromptTemplate, PromptTemplate
from langchain_openai import OpenAIEmbeddings

example_prompt = PromptTemplate(
    input_variables=["input", "output"],
    template="Input: {input}\nOutput: {output}",
)

# Examples of a pretend task of creating antonyms.
examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
    {"input": "energetic", "output": "lethargic"},
    {"input": "sunny", "output": "gloomy"},
    {"input": "windy", "output": "calm"},
]
```
**API Reference:**[FAISS](https://api.python.langchain.com/en/latest/vectorstores/langchain_community.vectorstores.faiss.FAISS.html) | [MaxMarginalRelevanceExampleSelector](https://api.python.langchain.com/en/latest/example_selectors/langchain_core.example_selectors.semantic_similarity.MaxMarginalRelevanceExampleSelector.html) | [SemanticSimilarityExampleSelector](https://api.python.langchain.com/en/latest/example_selectors/langchain_core.example_selectors.semantic_similarity.SemanticSimilarityExampleSelector.html) | [FewShotPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.few_shot.FewShotPromptTemplate.html) | [PromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.prompt.PromptTemplate.html) | [OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html)
```python
example_selector = MaxMarginalRelevanceExampleSelector.from_examples(
    # The list of examples available to select from.
    examples,
    # The embedding class used to produce embeddings which are used to measure semantic similarity.
    OpenAIEmbeddings(),
    # The VectorStore class that is used to store the embeddings and do a similarity search over.
    FAISS,
    # The number of examples to produce.
    k=2,
)
mmr_prompt = FewShotPromptTemplate(
    # We provide an ExampleSelector instead of examples.
    example_selector=example_selector,
    example_prompt=example_prompt,
    prefix="Give the antonym of every input",
    suffix="Input: {adjective}\nOutput:",
    input_variables=["adjective"],
)
```
```python
# Input is a feeling, so should select the happy/sad example as the first one
print(mmr_prompt.format(adjective="worried"))
```
```
Give the antonym of every input

Input: happy
Output: sad

Input: windy
Output: calm

Input: worried
Output:
```
```python
# Let's compare this to what we would just get if we went solely off of similarity,
# by using SemanticSimilarityExampleSelector instead of MaxMarginalRelevanceExampleSelector.
example_selector = SemanticSimilarityExampleSelector.from_examples(
    # The list of examples available to select from.
    examples,
    # The embedding class used to produce embeddings which are used to measure semantic similarity.
    OpenAIEmbeddings(),
    # The VectorStore class that is used to store the embeddings and do a similarity search over.
    FAISS,
    # The number of examples to produce.
    k=2,
)
similar_prompt = FewShotPromptTemplate(
    # We provide an ExampleSelector instead of examples.
    example_selector=example_selector,
    example_prompt=example_prompt,
    prefix="Give the antonym of every input",
    suffix="Input: {adjective}\nOutput:",
    input_variables=["adjective"],
)
print(similar_prompt.format(adjective="worried"))
```
```
Give the antonym of every input

Input: happy
Output: sad

Input: sunny
Output: gloomy

Input: worried
Output:
```
https://python.langchain.com/v0.2/docs/how_to/ensemble_retriever/
How to combine results from multiple retrievers
===============================================
The [EnsembleRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.ensemble.EnsembleRetriever.html) supports ensembling of results from multiple retrievers. It is initialized with a list of [BaseRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain_core.retrievers.BaseRetriever.html) objects. EnsembleRetrievers rerank the results of the constituent retrievers based on the [Reciprocal Rank Fusion](https://plg.uwaterloo.ca/~gvcormac/cormacksigir09-rrf.pdf) algorithm.
By leveraging the strengths of different algorithms, the `EnsembleRetriever` can achieve better performance than any single algorithm.
The most common pattern is to combine a sparse retriever (like BM25) with a dense retriever (like embedding similarity), because their strengths are complementary. This combination is also known as "hybrid search": the sparse retriever is good at finding relevant documents based on keywords, while the dense retriever is good at finding relevant documents based on semantic similarity.
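To make the fusion step concrete, here is a minimal sketch of Reciprocal Rank Fusion over two ranked lists of document ids. It illustrates the scoring formula (each document accumulates `weight / (rank + c)` from every retriever that returns it) rather than the `EnsembleRetriever`'s exact code; the constant `c = 60` follows the linked paper, and the example ids are made up for illustration:

```python
def reciprocal_rank_fusion(rankings, weights=None, c=60):
    """Fuse several ranked lists of document ids into a single ranking."""
    weights = weights or [1.0] * len(rankings)
    scores = {}
    for ranking, weight in zip(rankings, weights):
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + weight / (rank + c)
    return sorted(scores, key=scores.get, reverse=True)


# A keyword-based (sparse) ranking and a vector-similarity (dense) ranking.
bm25_ranking = ["doc_a", "doc_c", "doc_b"]
vector_ranking = ["doc_b", "doc_a", "doc_d"]
print(reciprocal_rank_fusion([bm25_ranking, vector_ranking], weights=[0.5, 0.5]))
# doc_a appears near the top of both lists, so it ends up ranked first.
```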
Basic usage
-----------
Below we demonstrate ensembling of a [BM25Retriever](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.bm25.BM25Retriever.html) with a retriever derived from the [FAISS vector store](https://api.python.langchain.com/en/latest/vectorstores/langchain_community.vectorstores.faiss.FAISS.html).
%pip install --upgrade --quiet rank_bm25 > /dev/null
```python
from langchain.retrievers import EnsembleRetriever
from langchain_community.retrievers import BM25Retriever
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings

doc_list_1 = [
    "I like apples",
    "I like oranges",
    "Apples and oranges are fruits",
]

# initialize the bm25 retriever and faiss retriever
bm25_retriever = BM25Retriever.from_texts(
    doc_list_1, metadatas=[{"source": 1}] * len(doc_list_1)
)
bm25_retriever.k = 2

doc_list_2 = [
    "You like apples",
    "You like oranges",
]

embedding = OpenAIEmbeddings()
faiss_vectorstore = FAISS.from_texts(
    doc_list_2, embedding, metadatas=[{"source": 2}] * len(doc_list_2)
)
faiss_retriever = faiss_vectorstore.as_retriever(search_kwargs={"k": 2})

# initialize the ensemble retriever
ensemble_retriever = EnsembleRetriever(
    retrievers=[bm25_retriever, faiss_retriever], weights=[0.5, 0.5]
)
```
**API Reference:**[EnsembleRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.ensemble.EnsembleRetriever.html) | [BM25Retriever](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.bm25.BM25Retriever.html) | [FAISS](https://api.python.langchain.com/en/latest/vectorstores/langchain_community.vectorstores.faiss.FAISS.html) | [OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html)
```python
docs = ensemble_retriever.invoke("apples")
docs
```
```
[Document(page_content='I like apples', metadata={'source': 1}),
 Document(page_content='You like apples', metadata={'source': 2}),
 Document(page_content='Apples and oranges are fruits', metadata={'source': 1}),
 Document(page_content='You like oranges', metadata={'source': 2})]
```
Runtime Configuration
---------------------
We can also configure the individual retrievers at runtime using [configurable fields](/v0.2/docs/how_to/configure/). Below we update the "top-k" parameter for the FAISS retriever specifically:
```python
from langchain_core.runnables import ConfigurableField

faiss_retriever = faiss_vectorstore.as_retriever(
    search_kwargs={"k": 2}
).configurable_fields(
    search_kwargs=ConfigurableField(
        id="search_kwargs_faiss",
        name="Search Kwargs",
        description="The search kwargs to use",
    )
)

ensemble_retriever = EnsembleRetriever(
    retrievers=[bm25_retriever, faiss_retriever], weights=[0.5, 0.5]
)
```
**API Reference:**[ConfigurableField](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.utils.ConfigurableField.html)
```python
config = {"configurable": {"search_kwargs_faiss": {"k": 1}}}
docs = ensemble_retriever.invoke("apples", config=config)
docs
```
```
[Document(page_content='I like apples', metadata={'source': 1}),
 Document(page_content='You like apples', metadata={'source': 2}),
 Document(page_content='Apples and oranges are fruits', metadata={'source': 1})]
```
Notice that this only returns one source from the FAISS retriever, because we passed in the relevant configuration at runtime.
https://python.langchain.com/v0.2/docs/how_to/example_selectors_similarity/
How to select examples by similarity
====================================
The `SemanticSimilarityExampleSelector` selects examples based on their similarity to the input. It does this by finding the examples whose embeddings have the greatest cosine similarity with the input.
```python
from langchain_chroma import Chroma
from langchain_core.example_selectors import SemanticSimilarityExampleSelector
from langchain_core.prompts import FewShotPromptTemplate, PromptTemplate
from langchain_openai import OpenAIEmbeddings

example_prompt = PromptTemplate(
    input_variables=["input", "output"],
    template="Input: {input}\nOutput: {output}",
)

# Examples of a pretend task of creating antonyms.
examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
    {"input": "energetic", "output": "lethargic"},
    {"input": "sunny", "output": "gloomy"},
    {"input": "windy", "output": "calm"},
]
```
**API Reference:**[SemanticSimilarityExampleSelector](https://api.python.langchain.com/en/latest/example_selectors/langchain_core.example_selectors.semantic_similarity.SemanticSimilarityExampleSelector.html) | [FewShotPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.few_shot.FewShotPromptTemplate.html) | [PromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.prompt.PromptTemplate.html) | [OpenAIEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html)
```python
example_selector = SemanticSimilarityExampleSelector.from_examples(
    # The list of examples available to select from.
    examples,
    # The embedding class used to produce embeddings which are used to measure semantic similarity.
    OpenAIEmbeddings(),
    # The VectorStore class that is used to store the embeddings and do a similarity search over.
    Chroma,
    # The number of examples to produce.
    k=1,
)
similar_prompt = FewShotPromptTemplate(
    # We provide an ExampleSelector instead of examples.
    example_selector=example_selector,
    example_prompt=example_prompt,
    prefix="Give the antonym of every input",
    suffix="Input: {adjective}\nOutput:",
    input_variables=["adjective"],
)
```
```python
# Input is a feeling, so should select the happy/sad example
print(similar_prompt.format(adjective="worried"))
```
```
Give the antonym of every input

Input: happy
Output: sad

Input: worried
Output:
```
```python
# Input is a measurement, so should select the tall/short example
print(similar_prompt.format(adjective="large"))
```
```
Give the antonym of every input

Input: tall
Output: short

Input: large
Output:
```
```python
# You can add new examples to the SemanticSimilarityExampleSelector as well
similar_prompt.example_selector.add_example(
    {"input": "enthusiastic", "output": "apathetic"}
)
print(similar_prompt.format(adjective="passionate"))
```
```
Give the antonym of every input

Input: enthusiastic
Output: apathetic

Input: passionate
Output:
```
https://python.langchain.com/v0.2/docs/how_to/example_selectors_length_based/
How to select examples by length
================================
The `LengthBasedExampleSelector` selects which examples to use based on length. This is useful when you are worried about constructing a prompt that will go over the length of the context window. For longer inputs, it will select fewer examples to include, while for shorter inputs it will select more.
```python
from langchain_core.example_selectors import LengthBasedExampleSelector
from langchain_core.prompts import FewShotPromptTemplate, PromptTemplate

# Examples of a pretend task of creating antonyms.
examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
    {"input": "energetic", "output": "lethargic"},
    {"input": "sunny", "output": "gloomy"},
    {"input": "windy", "output": "calm"},
]

example_prompt = PromptTemplate(
    input_variables=["input", "output"],
    template="Input: {input}\nOutput: {output}",
)

example_selector = LengthBasedExampleSelector(
    # The examples it has available to choose from.
    examples=examples,
    # The PromptTemplate being used to format the examples.
    example_prompt=example_prompt,
    # The maximum length that the formatted examples should be.
    # Length is measured by the get_text_length function below.
    max_length=25,
    # The function used to get the length of a string, which is used
    # to determine which examples to include. It is commented out because
    # it is provided as a default value if none is specified.
    # get_text_length: Callable[[str], int] = lambda x: len(re.split("\n| ", x))
)

dynamic_prompt = FewShotPromptTemplate(
    # We provide an ExampleSelector instead of examples.
    example_selector=example_selector,
    example_prompt=example_prompt,
    prefix="Give the antonym of every input",
    suffix="Input: {adjective}\nOutput:",
    input_variables=["adjective"],
)
```
**API Reference:**[LengthBasedExampleSelector](https://api.python.langchain.com/en/latest/example_selectors/langchain_core.example_selectors.length_based.LengthBasedExampleSelector.html) | [FewShotPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.few_shot.FewShotPromptTemplate.html) | [PromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.prompt.PromptTemplate.html)
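Under the hood, the selector measures each formatted example with `get_text_length` (by default a simple whitespace word count, as in the commented-out lambda above) and keeps adding examples until the `max_length` budget would be exceeded. Below is a minimal sketch of that budgeting logic, assuming the default word-count measure; it illustrates the idea rather than the selector's exact implementation:

```python
import re


def default_text_length(text: str) -> int:
    """Default length measure: number of newline/space-separated words."""
    return len(re.split("\n| ", text))


def select_by_length(formatted_examples, user_input, max_length=25):
    """Keep adding formatted examples while the running word count fits the budget."""
    remaining = max_length - default_text_length(user_input)
    selected = []
    for example in formatted_examples:
        remaining -= default_text_length(example)
        if remaining < 0:
            break
        selected.append(example)
    return selected
```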
```python
# An example with small input, so it selects all examples.
print(dynamic_prompt.format(adjective="big"))
```
```
Give the antonym of every input

Input: happy
Output: sad

Input: tall
Output: short

Input: energetic
Output: lethargic

Input: sunny
Output: gloomy

Input: windy
Output: calm

Input: big
Output:
```
```python
# An example with long input, so it selects only one example.
long_string = "big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else"
print(dynamic_prompt.format(adjective=long_string))
```
```
Give the antonym of every input

Input: happy
Output: sad

Input: big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else
Output:
```
```python
# You can add an example to an example selector as well.
new_example = {"input": "big", "output": "small"}
dynamic_prompt.example_selector.add_example(new_example)
print(dynamic_prompt.format(adjective="enthusiastic"))
```
```
Give the antonym of every input

Input: happy
Output: sad

Input: tall
Output: short

Input: energetic
Output: lethargic

Input: sunny
Output: gloomy

Input: windy
Output: calm

Input: big
Output: small

Input: enthusiastic
Output:
```