# LangChain v0.8.2 - Table of Contents

## Guides

- [README](readme.md)
- [Changelog](changelog.md)

- Guides
  - [Evaluating Agent Behavior](evaluation.md)

- Notebooks
  - [Getting Started](getting_started.md)
  - [Executing Custom Elixir Functions](custom_functions.md)
  - [Images: Generating context-specific descriptions](context-specific-image-descriptions.md)

## Modules

- [LangChain.ChatModels.ChatOrq](LangChain.ChatModels.ChatOrq.md): Chat adapter for orq.ai Deployments API.
- [LangChain.ChatModels.ReasoningOptions](LangChain.ChatModels.ReasoningOptions.md): Embedded schema for OpenAI reasoning configuration options.
- [LangChain.Images.ModelsLabImage](LangChain.Images.ModelsLabImage.md): Represents the [ModelsLab Images API](https://docs.modelslab.com/image-generation/overview)
for text-to-image generation using Flux, SDXL, Stable Diffusion, and 10,000+
community fine-tuned models.
- [LangChain.Message.Citation](LangChain.Message.Citation.md): Represents a citation linking a span of response text to a source.
- [LangChain.Message.CitationSource](LangChain.Message.CitationSource.md): Represents the source of a citation - where the cited information came from.
- [LangChain.NativeTool](LangChain.NativeTool.md): Represents built-in tools available from AI/LLM services that can be used within the LangChain framework.
- [LangChain.Telemetry](LangChain.Telemetry.md): Telemetry events for LangChain.
- [LangChain.Tools.DeepResearch.ResearchResult.ToolCall](LangChain.Tools.DeepResearch.ResearchResult.ToolCall.md)
- [LangChain.Tools.DeepResearch.ResearchResult.Usage](LangChain.Tools.DeepResearch.ResearchResult.Usage.md)
- [LangChain.Utils.AwsEventstreamDecoder](LangChain.Utils.AwsEventstreamDecoder.md): Decodes AWS messages in the `application/vnd.amazon.eventstream` content type.
Headers are ignored because, on Bedrock, every message carries the same content-type, event-type, and message-type headers.

- [LangChain.Utils.BedrockStreamDecoder](LangChain.Utils.BedrockStreamDecoder.md)
- [LangChain.Utils.Parser.LLAMA_3_1_CustomToolParser](LangChain.Utils.Parser.LLAMA_3_1_CustomToolParser.md)
- [LangChain.Utils.Parser.LLAMA_3_2_CustomToolParser](LangChain.Utils.Parser.LLAMA_3_2_CustomToolParser.md)

- Chat Models
  - [LangChain.ChatModels.ChatAnthropic](LangChain.ChatModels.ChatAnthropic.md): Module for interacting with [Anthropic models](https://docs.anthropic.com/claude/docs/models-overview#claude-3-a-new-generation-of-ai).
  - [LangChain.ChatModels.ChatAwsMantle](LangChain.ChatModels.ChatAwsMantle.md): Represents a chat model hosted by AWS Bedrock's **Mantle** endpoint — the
OpenAI-compatible gateway AWS introduced for third-party models such as
Moonshot AI's Kimi K2 family and OpenAI's gpt-oss series.
  - [LangChain.ChatModels.ChatBumblebee](LangChain.ChatModels.ChatBumblebee.md): Represents a chat model hosted by Bumblebee and accessed through an
`Nx.Serving`.
  - [LangChain.ChatModels.ChatDeepSeek](LangChain.ChatModels.ChatDeepSeek.md): Module for interacting with [DeepSeek models](https://www.deepseek.com/).
  - [LangChain.ChatModels.ChatGoogleAI](LangChain.ChatModels.ChatGoogleAI.md): Parses and validates inputs for making a request to the Google AI Chat API.
  - [LangChain.ChatModels.ChatGrok](LangChain.ChatModels.ChatGrok.md): Module for interacting with [xAI's Grok models](https://docs.x.ai/docs/models).
  - [LangChain.ChatModels.ChatMistralAI](LangChain.ChatModels.ChatMistralAI.md)
  - [LangChain.ChatModels.ChatModel](LangChain.ChatModels.ChatModel.md)
  - [LangChain.ChatModels.ChatOllamaAI](LangChain.ChatModels.ChatOllamaAI.md): Represents the [Ollama AI Chat model](https://github.com/jmorganca/ollama/blob/main/docs/api.md#generate-a-chat-completion)
  - [LangChain.ChatModels.ChatOpenAI](LangChain.ChatModels.ChatOpenAI.md): Represents the [OpenAI
ChatModel](https://platform.openai.com/docs/api-reference/chat/create).
  - [LangChain.ChatModels.ChatOpenAIResponses](LangChain.ChatModels.ChatOpenAIResponses.md): Represents the OpenAI Responses API.
  - [LangChain.ChatModels.ChatPerplexity](LangChain.ChatModels.ChatPerplexity.md): Represents the [Perplexity Chat model](https://docs.perplexity.ai/api-reference/chat-completions).
  - [LangChain.ChatModels.ChatReqLLM](LangChain.ChatModels.ChatReqLLM.md): ChatModel adapter using the `req_llm` library as the HTTP/LLM backend.
  - [LangChain.ChatModels.ChatVertexAI](LangChain.ChatModels.ChatVertexAI.md): Parses and validates inputs for making a request to the Google Vertex AI Chat API.

- Chains
  - [LangChain.Chains.DataExtractionChain](LangChain.Chains.DataExtractionChain.md): Defines an LLMChain for performing data extraction from a body of text.
  - [LangChain.Chains.LLMChain](LangChain.Chains.LLMChain.md): Define an LLMChain. This is the heart of the LangChain library.
  - [LangChain.Chains.SummarizeConversationChain](LangChain.Chains.SummarizeConversationChain.md): When an AI conversation has many back-and-forth messages (from user to
assistant to user to assistant, etc.), the number of messages and the total
token count can grow large, which this chain addresses by summarizing the conversation.
  - [LangChain.Chains.TextToTitleChain](LangChain.Chains.TextToTitleChain.md): A convenience chain for turning a user's prompt text into a summarized title
for the anticipated conversation.
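
  `LLMChain` is the core workflow the chain modules above build on. A minimal sketch of running a chain, following the shapes in the library's documented examples (the model name and prompt are illustrative):

  ```elixir
  alias LangChain.ChatModels.ChatOpenAI
  alias LangChain.Chains.LLMChain
  alias LangChain.Message

  # Build a chain around a chat model, add a user message, and run it.
  {:ok, updated_chain} =
    %{llm: ChatOpenAI.new!(%{model: "gpt-4o"})}
    |> LLMChain.new!()
    |> LLMChain.add_message(Message.new_user!("Name an Elixir web framework."))
    |> LLMChain.run()

  # The assistant's reply is available as the chain's last_message.
  IO.inspect(updated_chain.last_message.content)
  ```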

- Run Modes
  - [LangChain.Chains.LLMChain.Mode](LangChain.Chains.LLMChain.Mode.md): Behaviour for LLMChain execution modes.
  - [LangChain.Chains.LLMChain.Mode.Steps](LangChain.Chains.LLMChain.Mode.Steps.md): Pipe-friendly building blocks for composing custom execution modes.
  - [LangChain.Chains.LLMChain.Modes.Step](LangChain.Chains.LLMChain.Modes.Step.md): Execution mode that runs a single step at a time.
  - [LangChain.Chains.LLMChain.Modes.UntilSuccess](LangChain.Chains.LLMChain.Modes.UntilSuccess.md): Execution mode that loops until a successful result.
  - [LangChain.Chains.LLMChain.Modes.UntilToolUsed](LangChain.Chains.LLMChain.Modes.UntilToolUsed.md): Execution mode that loops until a specific tool is called.
  - [LangChain.Chains.LLMChain.Modes.WhileNeedsResponse](LangChain.Chains.LLMChain.Modes.WhileNeedsResponse.md): Execution mode that loops while the chain needs a response.

- Messages
  - [LangChain.Message](LangChain.Message.md): Models a complete `Message` for a chat LLM.
  - [LangChain.Message.ContentPart](LangChain.Message.ContentPart.md): Models a `ContentPart`. ContentParts are now used for multi-modal support in
both messages and tool results. This enables richer responses, allowing text,
images, files, and thinking blocks to be combined in a single message or tool
result.
  - [LangChain.Message.ToolCall](LangChain.Message.ToolCall.md): Represents an LLM's request to use a tool. It specifies the tool to execute and
may provide arguments for the tool to use.
  - [LangChain.Message.ToolResult](LangChain.Message.ToolResult.md): Represents the result of running a requested tool. The LLM requests tool
use through a `ToolCall`; a `ToolResult` returns the answer or result from the
application back to the AI.
  - [LangChain.MessageDelta](LangChain.MessageDelta.md): Models a "delta" message from a chat LLM. A delta is a small chunk, or piece
of a much larger complete message. A series of deltas are used to construct
the complete message.
  - [LangChain.MessageProcessors.JsonProcessor](LangChain.MessageProcessors.JsonProcessor.md): A built-in Message processor that processes a received Message for JSON
contents.
  - [LangChain.PromptTemplate](LangChain.PromptTemplate.md): Enables defining a prompt, optionally as a template, but delaying the final
building of it until a later time when input values are substituted in.
  - [LangChain.TokenUsage](LangChain.TokenUsage.md): Contains token usage information returned from an LLM.
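
  To orient around `PromptTemplate`, a brief sketch: the library uses EEx-style `<%= @name %>` placeholders, and the variable and values here are made up for illustration:

  ```elixir
  alias LangChain.PromptTemplate

  # Define the prompt now; delay filling in values until later.
  template =
    PromptTemplate.from_template!(
      "Suggest a name for a company that makes <%= @product %>."
    )

  # format/2 substitutes the inputs and returns the final prompt text.
  PromptTemplate.format(template, %{product: "colorful socks"})
  ```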

- Functions
  - [LangChain.Function](LangChain.Function.md): Defines a "function" that can be provided to an LLM for the LLM to optionally
execute and pass argument data to.
  - [LangChain.FunctionParam](LangChain.FunctionParam.md): Define a function parameter as a struct. Used to generate the expected
JSONSchema data for describing one or more arguments being passed to a
`LangChain.Function`.
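
  A hedged sketch of defining a `Function` with a `FunctionParam`; the field names follow the documented API, while the weather lookup itself is a stand-in:

  ```elixir
  alias LangChain.Function
  alias LangChain.FunctionParam

  # A tool the LLM may optionally choose to execute with arguments.
  function =
    Function.new!(%{
      name: "get_weather",
      description: "Returns the current weather for a city.",
      parameters: [
        FunctionParam.new!(%{name: "city", type: :string, required: true})
      ],
      function: fn %{"city" => city} = _arguments, _context ->
        {:ok, "It is sunny in #{city}."}
      end
    })
  ```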

- Callbacks
  - [LangChain.Callbacks](LangChain.Callbacks.md): Defines the structure of callbacks and provides utilities for executing them.
  - [LangChain.Chains.ChainCallbacks](LangChain.Chains.ChainCallbacks.md): Defines the callbacks fired by an LLMChain and LLM module.

- Routing
  - [LangChain.Chains.RoutingChain](LangChain.Chains.RoutingChain.md): Run a router based on a user's initial prompt to determine what category best
matches from the given options. If there is no good match, the value "DEFAULT"
is returned.
  - [LangChain.Routing.PromptRoute](LangChain.Routing.PromptRoute.md): Defines a route or direction a prompting interaction with an LLM can take.

- File Uploaders
  - [LangChain.FileUploader](LangChain.FileUploader.md): Behaviour for uploading files to LLM providers.
  - [LangChain.FileUploader.FileAnthropic](LangChain.FileUploader.FileAnthropic.md): Uploads files to Anthropic's [Files API](https://docs.anthropic.com/en/api/files-create).
  - [LangChain.FileUploader.FileGoogle](LangChain.FileUploader.FileGoogle.md): Uploads files to Google Gemini's [File API](https://ai.google.dev/gemini-api/docs/files).
  - [LangChain.FileUploader.FileOpenAI](LangChain.FileUploader.FileOpenAI.md): Uploads files to OpenAI's [Files API](https://platform.openai.com/docs/api-reference/files).
  - [LangChain.FileUploader.FileResult](LangChain.FileUploader.FileResult.md): Represents the result of a file upload to an LLM provider.

- Images
  - [LangChain.Images](LangChain.Images.md): Functions for working with `LangChain.GeneratedImage` files.
  - [LangChain.Images.GeneratedImage](LangChain.Images.GeneratedImage.md): Represents a generated image where we have either the base64-encoded contents
or a temporary URL to it.
  - [LangChain.Images.OpenAIImage](LangChain.Images.OpenAIImage.md): Represents the [OpenAI Images API
endpoint](https://platform.openai.com/docs/api-reference/images) for working
with DALL-E-2 and DALL-E-3.

- Text Splitter
  - [LangChain.TextSplitter.CharacterTextSplitter](LangChain.TextSplitter.CharacterTextSplitter.md): The `CharacterTextSplitter` is a length-based text splitter
that divides text on a specified character, producing consistent chunk sizes.
  - [LangChain.TextSplitter.LanguageSeparators](LangChain.TextSplitter.LanguageSeparators.md): Separator lists for programming and markdown languages.
Useful with `LangChain.TextSplitter.RecursiveCharacterTextSplitter`.
  - [LangChain.TextSplitter.RecursiveCharacterTextSplitter](LangChain.TextSplitter.RecursiveCharacterTextSplitter.md): The `RecursiveCharacterTextSplitter` is the recommended splitter for generic text.
It splits the text based on a list of characters,
trying each character sequentially until the text is split
into small enough chunks. The default list is `["\n\n", "\n", " ", ""]`.
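
  A sketch of splitting text, assuming the `new!/1` and `split_text/2` functions described in the splitter module docs (the chunk sizes and sample text are illustrative):

  ```elixir
  alias LangChain.TextSplitter.CharacterTextSplitter

  # Configure a splitter that breaks on spaces into ~20-character chunks
  # with a small overlap between neighboring chunks.
  splitter =
    CharacterTextSplitter.new!(%{separator: " ", chunk_size: 20, chunk_overlap: 4})

  CharacterTextSplitter.split_text(splitter, "A long body of text to divide into chunks.")
  ```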

- Tools
  - [LangChain.Tools.Calculator](LangChain.Tools.Calculator.md): Defines a Calculator tool for performing basic math calculations.
  - [LangChain.Tools.DeepResearch](LangChain.Tools.DeepResearch.md): Defines an OpenAI Deep Research tool for conducting comprehensive research on complex topics.
  - [LangChain.Tools.DeepResearch.ResearchRequest](LangChain.Tools.DeepResearch.ResearchRequest.md): Represents a Deep Research request sent to the OpenAI API.
  - [LangChain.Tools.DeepResearch.ResearchResult](LangChain.Tools.DeepResearch.ResearchResult.md): Represents the final result of a completed Deep Research request.
  - [LangChain.Tools.DeepResearch.ResearchStatus](LangChain.Tools.DeepResearch.ResearchStatus.md): Represents the status of a Deep Research request.
  - [LangChain.Tools.DeepResearchClient](LangChain.Tools.DeepResearchClient.md): HTTP client for OpenAI Deep Research API.
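
  Tools plug into a chain as functions the model can call. A sketch using the `Calculator` tool, following the documented `add_tools/2` and run-mode options (model name is illustrative):

  ```elixir
  alias LangChain.Tools.Calculator
  alias LangChain.Chains.LLMChain
  alias LangChain.ChatModels.ChatOpenAI
  alias LangChain.Message

  # Calculator.new!/0 returns a LangChain.Function the model may invoke.
  # Running :while_needs_response lets the chain execute the tool call
  # and feed the result back until the model produces a final answer.
  {:ok, updated_chain} =
    %{llm: ChatOpenAI.new!(%{model: "gpt-4o"})}
    |> LLMChain.new!()
    |> LLMChain.add_tools([Calculator.new!()])
    |> LLMChain.add_message(Message.new_user!("What is 13 * 7?"))
    |> LLMChain.run(mode: :while_needs_response)
  ```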

- WebSocket
  - [LangChain.WebSocket](LangChain.WebSocket.md): A generic WebSocket client GenServer built on `Mint.WebSocket`.

- Evaluation
  - [LangChain.Trajectory](LangChain.Trajectory.md): Captures the structured sequence of messages and tool calls produced during
an `LLMChain` run for inspection, serialization, and comparison.
  - [LangChain.Trajectory.Assertions](LangChain.Trajectory.Assertions.md): ExUnit assertion helpers for trajectory comparison.

- Utils
  - [LangChain.Config](LangChain.Config.md): Utility that handles interaction with the application's configuration.
  - [LangChain.Gettext](LangChain.Gettext.md): A module providing Internationalization with a gettext-based API.
  - [LangChain.Utils](LangChain.Utils.md): Collection of helpful utilities, mostly for internal use.
  - [LangChain.Utils.BedrockConfig](LangChain.Utils.BedrockConfig.md): Configuration for AWS Bedrock.
  - [LangChain.Utils.ChainResult](LangChain.Utils.ChainResult.md): Module to help when working with the results of a chain.
  - [LangChain.Utils.ChatTemplates](LangChain.Utils.ChatTemplates.md): Functions for converting messages into the various commonly used chat template
formats.

- Exceptions
  - [LangChain.LangChainError](LangChain.LangChainError.md): Exception used for raising LangChain-specific errors.

