AI (AI SDK v0.0.1-rc.0)
AI is an Elixir SDK for building AI-powered applications.
It provides a unified API to interact with various AI model providers like OpenAI, Anthropic, and others.
Summary
Functions
Generates text using an AI model.
Creates an OpenAI model with the specified model ID.
Creates an OpenAI-compatible provider with the specified model ID.
Creates an OpenAI completion model with the specified model ID.
Streams text generation from an AI model, returning chunks as they are generated.
Functions
Generates text using an AI model.
Options
:model - The language model to use
:system - A system message that will be part of the prompt
:prompt - A simple text prompt (can use either prompt or messages)
:messages - A list of messages (can use either prompt or messages)
:max_tokens - Maximum number of tokens to generate
:temperature - Temperature setting for randomness
:top_p - Nucleus sampling
:tools - Tools that are accessible to and can be called by the model
:tool_choice - The tool choice strategy (default: 'auto')
Examples
{:ok, result} = AI.generate_text(%{
  model: AI.openai_compatible("gpt-3.5-turbo", base_url: "https://api.example.com"),
  system: "You are a friendly assistant!",
  prompt: "Why is the sky blue?"
})

IO.puts(result.text)
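The :messages option accepts a conversation history in place of a single prompt. A minimal sketch, assuming the common role/content map shape for messages (the exact message structure is not documented in this section):

```elixir
{:ok, result} =
  AI.generate_text(%{
    model: AI.openai("gpt-4"),
    # Assumed message shape: maps with :role and :content keys
    messages: [
      %{role: "system", content: "You are a terse assistant."},
      %{role: "user", content: "Summarize Elixir in one sentence."}
    ],
    max_tokens: 100,
    temperature: 0.2
  })

IO.puts(result.text)
```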
Creates an OpenAI model with the specified model ID.
This function creates a model that uses the official OpenAI API.
Options
:api_key - The API key to use for authentication (default: OPENAI_API_KEY environment variable)
:base_url - The base URL of the API (default: "https://api.openai.com")
:structured_outputs - Whether the model supports structured outputs (default: false)
:use_legacy_function_calling - Whether to use legacy function calling format (default: false)
:reasoning_effort - For O-series models (o1, o3), controls the reasoning effort (low, medium, high)
Examples
model = AI.openai("gpt-4")
# With custom API key
model = AI.openai("gpt-4", api_key: "your-api-key")
# With structured outputs
model = AI.openai("gpt-4", structured_outputs: true)
# With reasoning effort for O-series models
model = AI.openai("o1-mini", reasoning_effort: "high")
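A model built here is passed to AI.generate_text/1 or AI.stream_text/1 via the :model option, as in the generate_text example above. A short sketch (the prompt is illustrative):

```elixir
model = AI.openai("gpt-4", structured_outputs: true)

{:ok, result} =
  AI.generate_text(%{
    model: model,
    prompt: "List three prime numbers."
  })

IO.puts(result.text)
```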
Creates an OpenAI-compatible provider with the specified model ID.
This function creates a model that can be used with OpenAI-compatible APIs, such as Ollama, LMStudio, and any other API that follows the OpenAI format.
Options
:base_url - The base URL of the API (required)
:api_key - The API key to use for authentication (optional)
:headers - Additional headers to include in requests (optional)
:supports_image_urls - Whether the model supports image URLs (default: false)
:supports_structured_outputs - Whether the model supports structured outputs (default: false)
Examples
model = AI.openai_compatible("gpt-3.5-turbo", base_url: "https://api.example.com")
# With API key
model = AI.openai_compatible("gpt-4",
  base_url: "https://api.openai.com",
  api_key: System.get_env("OPENAI_API_KEY")
)
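Since Ollama is named above as a compatible backend, here is a sketch pointing at a local Ollama server (the model name and the default port/path are assumptions based on Ollama's OpenAI-compatible endpoint, not taken from this doc):

```elixir
model =
  AI.openai_compatible("llama3",
    # Ollama serves an OpenAI-compatible API under /v1 by default
    base_url: "http://localhost:11434/v1"
  )
```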
Creates an OpenAI completion model with the specified model ID.
This function creates a model that uses the official OpenAI completion API.
Options
:api_key
- The API key to use for authentication (default: OPENAI_API_KEY environment variable):base_url
- The base URL of the API (default: "https://api.openai.com")
Examples
model = AI.openai_completion("text-davinci-003")
# With custom API key
model = AI.openai_completion("text-davinci-003", api_key: "your-api-key")
Streams text generation from an AI model, returning chunks as they are generated.
Options
:model
- The language model to use:system
- A system message that will be part of the prompt:prompt
- A simple text prompt (can use either prompt or messages):messages
- A list of messages (can use either prompt or messages):max_tokens
- Maximum number of tokens to generate:temperature
- Temperature setting for randomness:top_p
- Nucleus sampling:top_k
- Top-k sampling:frequency_penalty
- Penalize new tokens based on their frequency:presence_penalty
- Penalize new tokens based on their presence:tools
- Tools that are accessible to and can be called by the model:mode
- Return format::string
(default) for plain text chunks, or:event
for event tuples
Examples
Default String Mode
{:ok, result} = AI.stream_text(%{
  model: AI.openai("gpt-3.5-turbo"),
  system: "You are a friendly assistant!",
  prompt: "Why is the sky blue?"
})

# Process chunks as they arrive - each chunk is a string
result.stream
|> Stream.each(&IO.write/1)
|> Stream.run()

# Or collect all chunks into a single string
full_text = Enum.join(result.stream, "")
Event Mode
{:ok, result} = AI.stream_text(%{
  model: AI.openai("gpt-3.5-turbo"),
  prompt: "Tell me a story",
  mode: :event
})

# Process different event types
result.stream
|> Enum.reduce("", fn
  {:text_delta, chunk}, acc ->
    IO.write(chunk)
    acc <> chunk

  {:finish, reason}, acc ->
    IO.puts("\nFinished: #{reason}")
    acc

  {:error, error}, acc ->
    IO.puts("\nError: #{inspect(error)}")
    acc

  _, acc -> acc
end)
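In the default string mode each element of result.stream is a binary, so it composes with the standard Stream and File pipelines. A sketch that prints chunks as they arrive while also persisting the full text (the file path is illustrative):

```elixir
{:ok, result} =
  AI.stream_text(%{
    model: AI.openai("gpt-3.5-turbo"),
    prompt: "Why is the sky blue?"
  })

result.stream
# Print each chunk to stdout as it is generated
|> Stream.each(&IO.write/1)
# Append the same chunks to a file as a side effect of the stream
|> Stream.into(File.stream!("answer.txt"))
|> Stream.run()
```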