API Reference letta_api v1.0.0
Modules
API calls for all endpoints tagged Admin.
API calls for all endpoints tagged Agents.
API calls for all endpoints tagged Auth.
API calls for all endpoints tagged Blocks.
API calls for all endpoints tagged Embeddings.
API calls for all endpoints tagged Groups.
API calls for all endpoints tagged Health.
API calls for all endpoints tagged Identities.
API calls for all endpoints tagged Jobs.
API calls for all endpoints tagged Llms.
API calls for all endpoints tagged Messages.
API calls for all endpoints tagged Models.
API calls for all endpoints tagged Organization.
API calls for all endpoints tagged Providers.
API calls for all endpoints tagged Runs.
API calls for all endpoints tagged SandboxConfig.
API calls for all endpoints tagged Sources.
API calls for all endpoints tagged Steps.
API calls for all endpoints tagged Tag.
API calls for all endpoints tagged Tools.
API calls for all endpoints tagged Users.
API calls for all endpoints tagged Voice.
Handle Tesla connections for LettaAPI.
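As a rough orientation, the connection can be thought of as a configured Tesla client. The sketch below builds one by hand, assuming a local Letta server at http://localhost:8283, bearer-token auth read from LETTA_API_KEY, and a /v1/health/ path; all three are assumptions, and the generated connection module may expose its own constructor instead.

```elixir
# Hypothetical hand-rolled Tesla client; the base URL, auth header, and
# health-check path are assumptions, not values taken from this package.
middleware = [
  {Tesla.Middleware.BaseUrl, "http://localhost:8283"},
  Tesla.Middleware.JSON,
  {Tesla.Middleware.Headers,
   [{"authorization", "Bearer " <> System.get_env("LETTA_API_KEY", "")}]}
]

client = Tesla.client(middleware)

# Ping the server (path assumed from the Health module listed above).
{:ok, env} = Tesla.get(client, "/v1/health/")
IO.inspect(env.status)
```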
Helper functions for deserializing responses into models
Action data model.
Action parameter data models.
Action response data model.
Representation of an agent's state. This is the state of the agent at a given time, and is persisted in the DB backend. The state has all the information needed to recreate a persisted agent. Parameters: id (str): The unique identifier of the agent. name (str): The name of the agent (must be unique to the user). created_at (datetime): The datetime the agent was created. message_ids (List[str]): The ids of the messages in the agent's in-context memory. memory (Memory): The in-context memory of the agent. tools (List[str]): The tools used by the agent. This includes any memory editing functions specified in memory. system (str): The system prompt used by the agent. llm_config (LLMConfig): The LLM configuration used by the agent. embedding_config (EmbeddingConfig): The embedding configuration used by the agent.
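Since agent state is returned as JSON, a decoded response is simply a map keyed by the parameter names above. A purely illustrative sketch (all values invented):

```elixir
# Illustrative decoded agent-state map; keys follow the parameter list
# above, values are made up.
agent = %{
  "id" => "agent-123",
  "name" => "demo-agent",
  "message_ids" => ["message-1", "message-2"],
  "tools" => ["core_memory_append", "core_memory_replace"],
  "system" => "You are a helpful assistant."
}

%{"id" => id, "name" => name, "tools" => tools} = agent
IO.puts("#{name} (#{id}) has #{length(tools)} tool(s)")
```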
Enum to represent the type of agent.
Generic type for any value. Used by auto-generated deserializer.
App authentication scheme.
App data model.
The args JSON schema of the function.
A message sent by the LLM in response to user input. Used in the LLM context. Args: id (str): The ID of the message date (datetime): The date the message was created in ISO format name (Optional[str]): The name of the sender of the message content (Union[str, List[LettaAssistantMessageContentUnion]]): The message content sent by the agent (can be a string or an array of content parts)
Auth scheme field.
A Block represents a reserved section of the LLM's context window which is editable. Block objects are contained in the Memory object, which is able to edit the Block values. Parameters: label (str): The label of the block (e.g. 'human', 'persona'). This defines a category for the block. value (str): The value of the block. This is the string that is represented in the context window. limit (int): The character limit of the block. is_template (bool): Whether the block is a template (e.g. saved human/persona options). Non-template blocks are not stored in the database and are ephemeral, while templated blocks are stored in the database. template_name (str): The name of the block template (if it is a template). description (str): Description of the block. metadata (Dict): Metadata of the block. user_id (str): The unique identifier of the user associated with the block.
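A hedged sketch of what a single block payload looks like under the field names above; the concrete values are illustrative only.

```elixir
# Illustrative Block map; the label/value/limit semantics come from the
# docstring above, the concrete values do not.
persona_block = %{
  "label" => "persona",
  "value" => "I am a concise, friendly assistant.",
  "limit" => 2000,
  "is_template" => false,
  "description" => "The agent's persona.",
  "metadata" => %{}
}

# The character limit applies to the block's value.
true = String.length(persona_block["value"]) <= persona_block["limit"]
```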
Update a block
A ToolRule represents a tool that can be invoked by the agent.
A ToolRule that conditionally maps to different child tools based on the output.
The configuration for the sandbox.
The JSON configuration data for the sandbox.
The message content sent by the agent (can be a string or an array of content parts)
The content of the message.
The message content sent by the assistant (can be a string or an array of content parts)
The message content sent by the user (can be a string or an array of multi-modal content parts)
Overview of the context window, including the number of messages and tokens.
Represents a tool rule configuration where if this tool gets called, it must continue the agent loop.
CreateAgent model specifically for POST request body, excluding user_id which comes from headers
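A minimal sketch of a create-agent request posted with Tesla. The /v1/agents/ path and the memory_blocks field are not confirmed by this reference, so treat both as placeholders; user_id is omitted because it comes from headers.

```elixir
# Hypothetical create-agent call; the path and the memory_blocks field are
# assumptions.
client =
  Tesla.client([
    {Tesla.Middleware.BaseUrl, "http://localhost:8283"},
    Tesla.Middleware.JSON
  ])

body = %{
  "name" => "demo-agent",
  "system" => "You are a helpful assistant.",
  "memory_blocks" => [
    %{"label" => "persona", "value" => "I am a concise assistant."},
    %{"label" => "human", "value" => "The user is a new developer."}
  ]
}

{:ok, env} = Tesla.post(client, "/v1/agents/", body)
IO.inspect(env.status)
```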
Create a block
The timestamp when the object was created.
The id of the user that made this object.
The id of the user that made this Tool.
The description of the tool.
Embedding model configuration. This object specifies all the information necessary to access an embedding model for use with Letta, except for secret keys. Attributes: embedding_endpoint_type (str): The endpoint type for the model. embedding_endpoint (str): The endpoint for the model. embedding_model (str): The model for the embedding. embedding_dim (int): The dimension of the embedding. embedding_chunk_size (int): The chunk size of the embedding. azure_endpoint (str, optional): The Azure endpoint for the model (Azure only). azure_version (str): The Azure version for the model (Azure only). azure_deployment (str): The Azure deployment for the model (Azure only).
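For orientation, an embedding configuration amounts to a small map of the attributes above; the endpoint and model values below are placeholders, not package defaults.

```elixir
# Illustrative EmbeddingConfig map; attribute names come from the
# docstring above, values are placeholders.
embedding_config = %{
  "embedding_endpoint_type" => "openai",
  "embedding_endpoint" => "https://api.openai.com/v1",
  "embedding_model" => "text-embedding-ada-002",
  "embedding_dim" => 1536,
  "embedding_chunk_size" => 300
}

IO.inspect(embedding_config)
```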
Representation of a single FileMetadata
Health check response body
Representation of an agent's internal reasoning where reasoning content has been hidden from the response. Args: id (str): The ID of the message date (datetime): The date the message was created in ISO format name (Optional[str]): The name of the sender of the message state (Literal["redacted", "omitted"]): Whether the reasoning content was redacted by the provider or simply omitted by the API hidden_reasoning (Optional[str]): The internal reasoning of the agent
A property of an identity
Enum to represent the type of the identity property.
Enum to represent the type of the identity.
Represents the initial tool rule configuration.
Representation of offline jobs, used for tracking status of data loading tasks (involving parsing and embedding files). Parameters: id (str): The unique identifier of the job. status (JobStatus): The status of the job. created_at (datetime): The unix timestamp of when the job was created. completed_at (datetime): The unix timestamp of when the job was completed. user_id (str): The unique identifier of the user associated with the job.
Status of the job.
Response format for JSON object responses.
The JSON schema of the function.
The JSON schema of the function (auto-generated from source_code if not provided)
Response format for JSON schema-based responses.
The id of the user that made this object.
The id of the user that made this Tool.
Response object from an agent interaction, consisting of the new messages generated by the agent and usage statistics. The type of the returned messages can be either Message or LettaMessage, depending on what was specified in the request. Attributes: messages (List[Union[Message, LettaMessage]]): The messages returned by the agent. usage (LettaUsageStatistics): The usage statistics.
Usage statistics for the agent interaction. Attributes: completion_tokens (int): The number of tokens generated by the agent. prompt_tokens (int): The number of tokens in the prompt. total_tokens (int): The total number of tokens processed by the agent. step_count (int): The number of steps taken by the agent.
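A sketch of reading messages and usage out of a decoded response, using the attribute names from the two entries above; the payload and the message_type key are invented for illustration.

```elixir
# Illustrative decoded response; "message_type" and all values are
# assumptions made for the sake of the example.
response = %{
  "messages" => [
    %{"message_type" => "assistant_message", "content" => "Hello!"}
  ],
  "usage" => %{
    "completion_tokens" => 12,
    "prompt_tokens" => 340,
    "total_tokens" => 352,
    "step_count" => 1
  }
}

%{"messages" => messages, "usage" => %{"total_tokens" => total}} = response
IO.puts("#{length(messages)} message(s), #{total} total tokens")
```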
Configuration for a Language Model (LLM) model. This object specifies all the information necessary to access an LLM model for use with Letta, except for secret keys. Attributes: model (str): The name of the LLM model. model_endpoint_type (str): The endpoint type for the model. model_endpoint (str): The endpoint for the model. model_wrapper (str): The wrapper for the model. This is used to wrap additional text around the input/output of the model. This is useful for text-to-text completions, such as the Completions API in OpenAI. context_window (int): The context window size for the model. put_inner_thoughts_in_kwargs (bool): Puts inner_thoughts as a kwarg in the function call if this is set to True. This helps with function calling performance and also the generation of inner thoughts. temperature (float): The temperature to use when generating text with the model. A higher temperature will result in more random text. max_tokens (int): The maximum number of tokens to generate.
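Likewise, an LLM configuration is a map of the attributes above; the model name and endpoint below are placeholders, not defaults shipped with this package.

```elixir
# Illustrative LLMConfig map; attribute names come from the docstring
# above, values are placeholders.
llm_config = %{
  "model" => "gpt-4o-mini",
  "model_endpoint_type" => "openai",
  "model_endpoint" => "https://api.openai.com/v1",
  "context_window" => 128_000,
  "put_inner_thoughts_in_kwargs" => true,
  "temperature" => 0.7,
  "max_tokens" => 1024
}

IO.inspect(llm_config)
```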
Represents a tool rule configuration which constrains the total number of times this tool can be invoked in a single step.
A simple wrapper around MCP's tool definition (to avoid conflict with our own)
Represents the in-context memory (i.e. core memory) of the agent. This includes both the Block objects (labelled by sections), as well as tools to edit the blocks.
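Conceptually, core memory is a set of labelled blocks. The sketch below shows looking one up by label; the "blocks" key and the contents are assumptions based on the Memory and Block entries above.

```elixir
# Illustrative core-memory map; the "blocks" key and all contents are
# assumptions, not a confirmed wire format.
memory = %{
  "blocks" => [
    %{"label" => "persona", "value" => "I am a concise assistant.", "limit" => 2000},
    %{"label" => "human", "value" => "The user prefers short answers.", "limit" => 2000}
  ]
}

persona = Enum.find(memory["blocks"], fn block -> block["label"] == "persona" end)
IO.puts(persona["value"])
```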
Letta's internal representation of a message. Includes methods to convert to/from LLM provider formats. Attributes: id (str): The unique identifier of the message. role (MessageRole): The role of the participant. text (str): The text of the message. user_id (str): The unique identifier of the user. agent_id (str): The unique identifier of the agent. model (str): The model used to make the function call. name (str): The name of the participant. created_at (datetime): The time the message was created. tool_calls (List[OpenAIToolCall,]): The list of tool calls requested. tool_call_id (str): The id of the tool call. step_id (str): The id of the step that this message was created in. otid (str): The offline threading id associated with this message. tool_returns (List[ToolReturn]): The list of tool returns requested. group_id (str): The multi-agent group that the message was sent in. sender_id (str): The id of the sender of the message, can be an identity id or agent id.
Request to create a message
A dictionary of additional metadata for the tool.
The name of the function.
The name of the tool to run.
The unique identifier of the organization associated with the sandbox.
The unique identifier of the organization associated with the tool.
A ToolRule that only allows a child tool to be called if the parent has been called.
Representation of a passage, which is stored in archival memory. Parameters: text (str): The text of the passage. embedding (List[float]): The embedding of the passage. embedding_config (EmbeddingConfig): The embedding configuration used by the passage. created_at (datetime): The creation date of the passage. user_id (str): The unique identifier of the user associated with the passage. agent_id (str): The unique identifier of the agent associated with the passage. source_id (str): The data source of the passage. file_id (str): The unique identifier of the file associated with the passage.
Representation of an agent's internal reasoning. Args: id (str): The ID of the message date (datetime): The date the message was created in ISO format name (Optional[str]): The name of the sender of the message source (Literal["reasoner_model", "non_reasoner_model"]): Whether the reasoning content was generated natively by a reasoner model or derived via prompting reasoning (str): The internal reasoning of the agent signature (Optional[str]): The model-generated signature of the reasoning step
Representation of a run, which is a job with a 'run' prefix in its ID. Inherits all fields and behavior from Job except for the ID prefix. Parameters: id (str): The unique identifier of the run (prefixed with 'run-'). status (JobStatus): The status of the run. created_at (datetime): The unix timestamp of when the run was created. completed_at (datetime): The unix timestamp of when the run was completed. user_id (str): The unique identifier of the user associated with the run.
Pydantic model for updating SandboxConfig fields.
Representation of a source, which is a collection of files and passages. Parameters: id (str): The ID of the source name (str): The name of the source. embedding_config (EmbeddingConfig): The embedding configuration used by the source. user_id (str): The ID of the user that created the source. metadata (dict): Metadata associated with the source. description (str): The description of the source.
The source code of the function.
Schema for creating a new Source.
The type of the source code.
Schema for updating an existing Source.
A message generated by the system. Never streamed back on a response, only used for cursor pagination. Args: id (str): The ID of the message date (datetime): The date the message was created in ISO format name (Optional[str]): The name of the sender of the message content (str): The message content sent by the system
Represents a terminal tool rule configuration where if this tool gets called, it must end the agent loop.
Response format for plain text responses.
Representation of a tool, which is a function that can be called by the agent. Parameters: id (str): The unique identifier of the tool. name (str): The name of the function. tags (List[str]): Metadata tags. source_code (str): The source code of the function. json_schema (Dict): The JSON schema of the function.
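A hedged sketch of a tool payload under the field names above: a Python function as source code plus its JSON schema. Everything in it is a made-up example.

```elixir
# Illustrative Tool map; field names follow the docstring above, the
# function and schema are invented.
tool = %{
  "name" => "get_weather",
  "tags" => ["example"],
  "source_code" => """
  def get_weather(city: str) -> str:
      return f"The weather in {city} is sunny."
  """,
  "json_schema" => %{
    "name" => "get_weather",
    "parameters" => %{
      "type" => "object",
      "properties" => %{"city" => %{"type" => "string"}},
      "required" => ["city"]
    }
  }
}

IO.inspect(tool["name"])
```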
A message representing a request to call a tool (generated by the LLM to trigger tool execution). Args: id (str): The ID of the message date (datetime): The date the message was created in ISO format name (Optional[str]): The name of the sender of the message tool_call (Union[ToolCall, ToolCallDelta]): The tool call
A message representing the return value of a tool call (generated by Letta executing the requested tool). Args: id (str): The ID of the message date (datetime): The date the message was created in ISO format name (Optional[str]): The name of the sender of the message tool_return (str): The return value of the tool status (Literal["success", "error"]): The status of the tool call tool_call_id (str): A unique identifier for the tool call that generated this message stdout (Optional[List[str]]): Captured stdout (e.g. prints, logs) from the tool invocation stderr (Optional[List[str]]): Captured stderr from the tool invocation
The timestamp when the object was last updated.
Representation of a user. Parameters: id (str): The unique identifier of the user. name (str): The name of the user. created_at (datetime): The creation date of the user.
A message sent by the user. Never streamed back on a response, only used for cursor pagination. Args: id (str): The ID of the message date (datetime): The date the message was created in ISO format name (Optional[str]): The name of the sender of the message content (Union[str, List[LettaUserMessageContentUnion]]): The message content sent by the user (can be a string or an array of multi-modal content parts)
The value of the property
Helper functions for building Tesla requests
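In plain Tesla terms, a built request boils down to a method, a URL relative to the base, optional query parameters, and an optional JSON body. A hedged sketch of the equivalent raw call; the path and the limit parameter are assumptions.

```elixir
# Raw Tesla equivalent of a built request; the /v1/agents/ path and the
# limit parameter are assumptions.
client =
  Tesla.client([
    {Tesla.Middleware.BaseUrl, "http://localhost:8283"},
    Tesla.Middleware.JSON
  ])

{:ok, env} = Tesla.get(client, "/v1/agents/", query: [limit: 10])
IO.inspect(env.status)
```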