API Reference LangChain v0.3.0-rc.1

Modules

Defines the structure of callbacks and provides utilities for executing them.

Defines the callbacks fired by an LLMChain.
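
A sketch of registering a callback handler map on a chain. The callback names, handler arities, and LLMChain.add_callback/2 are assumptions based on the 0.3 series; treat this module's docs as the authoritative list.

    alias LangChain.Chains.LLMChain
    alias LangChain.ChatModels.ChatOpenAI

    # Callback names and handler arities here are assumptions; see this
    # module's docs for the exact set.
    handlers = %{
      on_llm_new_delta: fn _context, delta -> IO.write(delta.content || "") end,
      on_message_processed: fn _context, message -> IO.inspect(message.content) end
    }

    chain =
      %{llm: ChatOpenAI.new!(%{model: "gpt-4o-mini", stream: true})}
      |> LLMChain.new!()
      |> LLMChain.add_callback(handlers)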

Defines an LLMChain for performing data extraction from a body of text.
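
A minimal usage sketch, assuming DataExtractionChain.run/3 takes an LLM, a JSONSchema map, and the source text; the schema shape and model name are illustrative.

    alias LangChain.Chains.DataExtractionChain
    alias LangChain.ChatModels.ChatOpenAI

    # JSONSchema map describing the fields to extract (shape is illustrative)
    schema = %{
      "type" => "object",
      "properties" => %{
        "person_name" => %{"type" => "string"},
        "person_age" => %{"type" => "number"}
      }
    }

    llm = ChatOpenAI.new!(%{model: "gpt-4o", stream: false})

    # Expected to return {:ok, list} with one map per entity found in the text
    {:ok, extracted} =
      DataExtractionChain.run(llm, schema, "Alex is 8 years old. Claudia is 12.")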

Runs a router based on a user's initial prompt to determine which of the given options best matches it. If there is no good match, the value "DEFAULT" is returned.
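
A hedged routing sketch. It assumes RoutingChain.new!/1 accepts llm, input_text, routes, and default_route, that RoutingChain.evaluate/1 returns the matched route, and that routes live in a LangChain.Routing.PromptRoute module; the route names are illustrative.

    alias LangChain.Chains.RoutingChain
    alias LangChain.Routing.PromptRoute
    alias LangChain.ChatModels.ChatOpenAI

    routes = [
      PromptRoute.new!(%{name: "blog", description: "User wants to write a blog post"}),
      PromptRoute.new!(%{name: "support", description: "User is asking a support question"})
    ]

    selected_route =
      RoutingChain.new!(%{
        llm: ChatOpenAI.new!(%{model: "gpt-4o-mini", stream: false}),
        input_text: "Help me draft a post about Elixir releases",
        routes: routes,
        default_route: PromptRoute.new!(%{name: "DEFAULT"})
      })
      |> RoutingChain.evaluate()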

When an AI conversation has many back-and-forth messages (from user to assistant to user to assistant, etc.), the number of messages and the total token count can grow large. Large token counts cost more to process and can exceed a model's context window.

A convenience chain for turning a user's prompt text into a summarized title for the anticipated conversation.
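
A minimal sketch, assuming TextToTitleChain.evaluate/1 returns the generated title string; the model choice is illustrative.

    alias LangChain.Chains.TextToTitleChain
    alias LangChain.ChatModels.ChatOpenAI

    title =
      TextToTitleChain.new!(%{
        llm: ChatOpenAI.new!(%{model: "gpt-4o-mini", stream: false}),
        input_text: "Let's plan a backpacking trip across Europe this summer!"
      })
      |> TextToTitleChain.evaluate()
    # => e.g. "Planning a European Backpacking Trip"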

Represents a chat model hosted by Bumblebee and accessed through an Nx.Serving.
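
A configuration sketch. Llama2ChatModel is a placeholder name for an Nx.Serving started elsewhere, and the serving, template_format, and stream fields are assumptions from the 0.3 docs.

    alias LangChain.ChatModels.ChatBumblebee

    # Llama2ChatModel is a placeholder for an Nx.Serving started in the
    # application's supervision tree
    chat = ChatBumblebee.new!(%{
      serving: Llama2ChatModel,
      template_format: :llama_2,
      stream: true
    })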

Parses and validates inputs for making a request for the Google AI Chat API.
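
A minimal configuration sketch; the model name is illustrative, and the API key is assumed to come from application configuration.

    alias LangChain.ChatModels.ChatGoogleAI

    # Model name is illustrative; the API key is expected to be read from
    # the application's configuration
    chat = ChatGoogleAI.new!(%{model: "gemini-1.5-flash", temperature: 0})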

Parses and validates inputs for making a request for the Google AI Chat API.

Defines the callbacks fired by an LLM module.

Utility that handles interaction with the application's configuration.

Defines a "function" that can be provided to an LLM for the LLM to optionally execute and pass argument data to.

Defines a function parameter as a struct. Used to generate the expected JSONSchema data describing one or more arguments passed to a LangChain.Function.
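
A sketch, assuming FunctionParam structs can be given to Function.new!/1 as parameters so the JSONSchema data is generated for you; the parameter names are illustrative.

    alias LangChain.Function
    alias LangChain.FunctionParam

    params = [
      FunctionParam.new!(%{name: "city", type: :string, description: "City name", required: true}),
      FunctionParam.new!(%{name: "units", type: :string, enum: ["metric", "imperial"]})
    ]

    # Passing the params as `parameters` generates the JSONSchema data
    # sent to the LLM
    fun =
      Function.new!(%{
        name: "get_weather",
        parameters: params,
        function: fn _args, _context -> {:ok, "22°C and sunny"} end
      })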

A module providing Internationalization with a gettext-based API.

Functions for working with LangChain.GeneratedImage files.

Represents a generated image where we have either the base64-encoded contents or a temporary URL to it.

Represents the OpenAI Images API endpoint for working with DALL-E-2 and DALL-E-3.

Exception used for raising LangChain specific errors.
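
A small example of raising and rescuing the exception; the message text is illustrative.

    alias LangChain.LangChainError

    try do
      raise LangChainError, "invalid configuration"
    rescue
      err in LangChainError ->
        IO.puts("LangChain error: " <> err.message)
    end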

Models a complete Message for a chat LLM.
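
Typical constructors; the bang variants raise on invalid data.

    alias LangChain.Message

    system = Message.new_system!("You are a helpful assistant.")
    user = Message.new_user!("What is the capital of France?")
    assistant = Message.new_assistant!("The capital of France is Paris.")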

Models a ContentPart. Some LLMs support combining text, images, and possibly other content as part of a single user message. A ContentPart represents a block, or part, of a message's content that is all of one type.
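
A sketch of a multi-part user message mixing text and an image; the media: :jpg option is an assumption based on the 0.3 docs, and the file path is illustrative.

    alias LangChain.Message
    alias LangChain.Message.ContentPart

    image_data = Base.encode64(File.read!("photo.jpg"))

    message =
      Message.new_user!([
        ContentPart.text!("Describe what is in this image:"),
        ContentPart.image!(image_data, media: :jpg)
      ])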

Represents an LLM's request to use a tool. It specifies the tool to execute and may provide arguments for the tool to use.

Represents the result of running a requested tool. The LLM requests a tool's use through a ToolCall. A ToolResult returns the answer or result from the application back to the AI.
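
A sketch showing how the two halves pair up; the field names (call_id, tool_call_id, arguments, content) are assumptions from the 0.3 structs, and the values are illustrative.

    alias LangChain.Message.ToolCall
    alias LangChain.Message.ToolResult

    # The assistant message carries the request...
    call =
      ToolCall.new!(%{call_id: "call_123", name: "get_weather", arguments: %{"city" => "Paris"}})

    # ...and the application answers with a result tied to the same call_id
    result = ToolResult.new!(%{tool_call_id: "call_123", content: "22°C and sunny"})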

Models a "delta" message from a chat LLM. A delta is a small chunk, or piece of a much larger complete message. A series of deltas are used to construct the complete message.

A built-in Message processor that processes a received Message into the provided Ecto.Changeset.

A built-in Message processor that processes a received Message for JSON contents.
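
A sketch of attaching the processor to a chain, assuming LLMChain.message_processors/2 and that JsonProcessor.new!/1 accepts a Regex for extracting JSON from a fenced code block.

    alias LangChain.Chains.LLMChain
    alias LangChain.ChatModels.ChatOpenAI
    alias LangChain.MessageProcessors.JsonProcessor

    # The regex extracts JSON from a ```json fenced block in the reply;
    # a reply that fails to parse can be fed back to the LLM as an error
    chain =
      %{llm: ChatOpenAI.new!(%{model: "gpt-4o-mini", stream: false})}
      |> LLMChain.new!()
      |> LLMChain.message_processors([JsonProcessor.new!(~r/```json(.*?)```/s)])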

Enables defining a prompt, optionally as a template, but delaying the final building of it until a later time when input values are substituted in.
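
A minimal example using an EEx-style template; the template text is illustrative.

    alias LangChain.PromptTemplate

    template =
      PromptTemplate.from_template!("Suggest a name for a company that makes <%= @product %>.")

    # Substitute the inputs when they become available
    PromptTemplate.format(template, %{product: "colorful socks"})
    # => "Suggest a name for a company that makes colorful socks."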

Defines a route or direction a prompting interaction with an LLM can take.

Contains token usage information returned from an LLM.
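
A small sketch of the struct; the input/output field names are assumptions from the 0.3 struct, and usage values normally arrive through an LLM callback rather than being built by hand.

    alias LangChain.TokenUsage

    # Field names are assumptions; normally received via a callback
    usage = TokenUsage.new!(%{input: 30, output: 15})
    usage.input   # => 30
    usage.output  # => 15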

Defines a Calculator tool for performing basic math calculations.
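
A sketch of giving the tool to a chain and running until the model produces a final answer, assuming the 0.3 return shape {:ok, updated_chain}; the model name is illustrative.

    alias LangChain.Chains.LLMChain
    alias LangChain.ChatModels.ChatOpenAI
    alias LangChain.Message
    alias LangChain.Tools.Calculator

    {:ok, updated_chain} =
      %{llm: ChatOpenAI.new!(%{model: "gpt-4o-mini", stream: false})}
      |> LLMChain.new!()
      |> LLMChain.add_message(Message.new_user!("What is 100 + 300 - 200?"))
      |> LLMChain.add_tools(Calculator.new!())
      |> LLMChain.run(mode: :while_needs_response)

    updated_chain.last_message.content
    # => e.g. "The answer is 200."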

Collection of helpful utilities mostly for internal use.

Decodes AWS messages in the application/vnd.amazon.eventstream content type. Ignores the headers because, on Bedrock, every message carries the same content-type, event-type, and message-type headers.

Configuration for AWS Bedrock.

Module to help when working with the results of a chain.
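
A sketch, assuming ChainResult.to_string/1 extracts the last message's content from a run chain.

    alias LangChain.Utils.ChainResult

    # updated_chain is the result of an earlier LLMChain.run/2
    {:ok, text} = ChainResult.to_string(updated_chain)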

Functions for converting messages into the various commonly used chat template formats.
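
A sketch, assuming ChatTemplates.apply_chat_template!/2 takes a list of messages and a template format name; :llama_2 stands in for whichever formats the module supports.

    alias LangChain.Message
    alias LangChain.Utils.ChatTemplates

    messages = [
      Message.new_system!("You are a helpful assistant."),
      Message.new_user!("Hello!")
    ]

    # The format atom is illustrative; see this module for supported formats
    text = ChatTemplates.apply_chat_template!(messages, :llama_2)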