LettaAPI.Model.LlmConfig (letta_api v1.0.0)

Configuration for a Language Model (LLM). This struct specifies all the information necessary to access an LLM for use with Letta, except for secret keys.

Attributes:

- `model` (String.t): The name of the LLM model.
- `model_endpoint_type` (String.t): The endpoint type for the model.
- `model_endpoint` (String.t): The endpoint for the model.
- `model_wrapper` (String.t): The wrapper for the model, used to wrap additional text around the model's input/output. This is useful for text-to-text completions, such as OpenAI's Completions API.
- `context_window` (integer): The context window size for the model.
- `put_inner_thoughts_in_kwargs` (boolean): When set to `true`, passes `inner_thoughts` as a kwarg in the function call. This helps with function-calling performance and with the generation of inner thoughts.
- `temperature` (number): The temperature to use when generating text with the model. A higher temperature results in more random text.
- `max_tokens` (integer): The maximum number of tokens to generate.

Summary

Types

t()

@type t() :: %LettaAPI.Model.LlmConfig{
  context_window: integer(),
  enable_reasoner: boolean() | nil,
  handle: String.t() | nil,
  max_reasoning_tokens: integer() | nil,
  max_tokens: integer() | nil,
  model: String.t(),
  model_endpoint: String.t() | nil,
  model_endpoint_type: String.t(),
  model_wrapper: String.t() | nil,
  provider_name: String.t() | nil,
  put_inner_thoughts_in_kwargs: boolean() | nil,
  reasoning_effort: String.t() | nil,
  temperature: number() | nil
}
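Per the typespec above, `model`, `model_endpoint_type`, and `context_window` are the non-nilable fields; everything else may be left as `nil`. A minimal sketch of building the struct directly (the field values here are illustrative examples, not library defaults):

```elixir
# Construct an LlmConfig for an OpenAI-compatible endpoint.
# All values below are illustrative, not defaults shipped with letta_api.
config = %LettaAPI.Model.LlmConfig{
  model: "gpt-4o-mini",
  model_endpoint_type: "openai",
  model_endpoint: "https://api.openai.com/v1",
  context_window: 128_000,
  # Optional tuning fields; nil-able per the typespec.
  temperature: 0.7,
  max_tokens: 1024
}
```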

Functions

decode(value)
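In generated OpenAPI Elixir clients, `decode/1` typically converts a plain map (e.g. the result of JSON decoding) into the typed struct. A hedged sketch of that usage, assuming a JSON library such as Jason is available and that `decode/1` accepts a decoded map (exact behavior depends on the generated client):

```elixir
# Hypothetical usage: turn a JSON payload into an LlmConfig struct.
# Assumes the Jason dependency; the decode/1 contract is inferred,
# not confirmed by this page.
json = ~s({"model": "gpt-4o-mini", "model_endpoint_type": "openai", "context_window": 128000})

config =
  json
  |> Jason.decode!()
  |> LettaAPI.Model.LlmConfig.decode()
```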