ExLLM.Adapters.Ollama (ex_llm v0.5.0)
Ollama API adapter for ExLLM - provides local model inference via an Ollama server.
Configuration
This adapter requires a running Ollama server. By default, it connects to localhost:11434.
Using Environment Variables
# Set environment variables
export OLLAMA_API_BASE="http://localhost:11434" # optional
export OLLAMA_MODEL="llama2" # optional
# Use with default environment provider
ExLLM.Adapters.Ollama.chat(messages, config_provider: ExLLM.ConfigProvider.Env)
Using Static Configuration
config = %{
  ollama: %{
    base_url: "http://localhost:11434",
    model: "llama2"
  }
}
{:ok, provider} = ExLLM.ConfigProvider.Static.start_link(config)
ExLLM.Adapters.Ollama.chat(messages, config_provider: provider)
Example Usage
messages = [
  %{role: "user", content: "Hello, how are you?"}
]
# Simple chat
{:ok, response} = ExLLM.Adapters.Ollama.chat(messages)
IO.puts(response.content)
# Streaming chat
{:ok, stream} = ExLLM.Adapters.Ollama.stream_chat(messages)
for chunk <- stream do
  if chunk.content, do: IO.write(chunk.content)
end
Available Models
To see available models, ensure Ollama is running and use:
{:ok, models} = ExLLM.Adapters.Ollama.list_models()
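To work with the result, you can enumerate the returned entries. A minimal sketch, assuming each model struct exposes an :id field (inspect a returned struct to confirm its fields):
# Print the identifier of every installed model
{:ok, models} = ExLLM.Adapters.Ollama.list_models()
Enum.each(models, fn model -> IO.puts(model.id) end)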
Configuration Management
The adapter provides functions to generate and update model configurations:
# Generate configuration for all installed models
{:ok, yaml} = ExLLM.Adapters.Ollama.generate_config()
# Save the configuration
{:ok, path} = ExLLM.Adapters.Ollama.generate_config(save: true)
# Update a specific model's configuration
{:ok, yaml} = ExLLM.Adapters.Ollama.update_model_config("llama3.1")
This is useful for keeping your config/models/ollama.yml in sync with your locally installed models and their actual capabilities.
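For example, a typical workflow is to pull a model and then refresh its entry in the configuration. A minimal sketch using the functions documented below, assuming the pull stream is enumerable and must be fully consumed before the download is complete:
# Pull a model, wait for the download to finish, then update ollama.yml
{:ok, stream} = ExLLM.Adapters.Ollama.pull_model("llama3.1")
Stream.run(stream)
{:ok, _yaml} = ExLLM.Adapters.Ollama.update_model_config("llama3.1")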
Summary
Functions
Copy a model to a new name.
Delete a model from Ollama.
Generate a completion for the given prompt using Ollama's /api/generate endpoint.
Generate YAML configuration for all locally installed Ollama models.
List currently loaded models.
Pull a model from the Ollama library.
Push a model to the Ollama library.
Get detailed information about a specific model.
Stream a completion using the /api/generate endpoint.
Update configuration for a specific model in ollama.yml.
Get Ollama version information.
Functions
Copy a model to a new name.
Uses Ollama's /api/copy endpoint to create a copy of an existing model with a new name.
Examples
{:ok, _} = ExLLM.Adapters.Ollama.copy_model("llama3.1", "my-llama")
Delete a model from Ollama.
Uses Ollama's /api/delete endpoint to remove a model from the local model store.
Examples
{:ok, _} = ExLLM.Adapters.Ollama.delete_model("old-model")
Generate a completion for the given prompt using Ollama's /api/generate endpoint.
This is for non-chat completions, useful for base models or specific use cases.
Options
:model - The model to use (required)
:prompt - The prompt text (required)
:suffix - Text to append after the generation
:images - List of base64-encoded images for multimodal models
:format - Response format (e.g., "json")
:options - Model-specific options (temperature, seed, etc.)
:context - Context from previous request for maintaining conversation state
:raw - If true, no formatting will be applied to the prompt
:keep_alive - How long to keep the model loaded (e.g., "5m")
:timeout - Request timeout in milliseconds
Examples
# Simple completion
{:ok, response} = ExLLM.Adapters.Ollama.generate("Complete this: The sky is",
model: "llama3.1")
# With options
{:ok, response} = ExLLM.Adapters.Ollama.generate("Write a haiku about coding",
model: "llama3.1",
options: %{temperature: 0.7, seed: 42})
# Maintain conversation context
{:ok, response1} = ExLLM.Adapters.Ollama.generate("Hi, I'm learning Elixir",
model: "llama3.1")
{:ok, response2} = ExLLM.Adapters.Ollama.generate("What should I learn first?",
model: "llama3.1",
context: response1.context)
Generate YAML configuration for all locally installed Ollama models.
This function fetches information about all installed models and generates a YAML configuration that can be saved to config/models/ollama.yml.
Options
:save - When true, saves the configuration to the file (default: false)
:path - Custom path for the YAML file (default: "config/models/ollama.yml")
:merge - When true, merges with existing configuration (default: true)
Examples
# Generate configuration and return as string
{:ok, yaml} = ExLLM.Adapters.Ollama.generate_config()
# Save directly to config/models/ollama.yml
{:ok, path} = ExLLM.Adapters.Ollama.generate_config(save: true)
# Save to custom location
{:ok, path} = ExLLM.Adapters.Ollama.generate_config(
  save: true,
  path: "my_ollama_config.yml"
)
# Replace existing configuration instead of merging
{:ok, yaml} = ExLLM.Adapters.Ollama.generate_config(merge: false)
List currently loaded models.
Uses Ollama's /api/ps endpoint to show which models are currently loaded in memory.
Examples
{:ok, loaded} = ExLLM.Adapters.Ollama.list_running_models()
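To inspect the result, you can print each loaded model's name. A minimal sketch, assuming the entries mirror Ollama's /api/ps response and use string keys:
# Print the name of each model currently loaded in memory
# (the "name" key is an assumption based on the /api/ps response shape)
{:ok, loaded} = ExLLM.Adapters.Ollama.list_running_models()
Enum.each(loaded, fn model -> IO.puts(model["name"]) end)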
Pull a model from the Ollama library.
Uses Ollama's /api/pull endpoint to download a model. This returns a stream of progress updates.
Examples
{:ok, stream} = ExLLM.Adapters.Ollama.pull_model("llama3.1:latest")
for update <- stream do
  IO.puts("Status: #{update["status"]}")
end
Push a model to the Ollama library.
Uses Ollama's /api/push endpoint to upload a model.
Examples
{:ok, stream} = ExLLM.Adapters.Ollama.push_model("my-model:latest")
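The returned stream can be consumed like the pull_model/2 progress stream; a minimal sketch, assuming the updates have the same shape:
for update <- stream do
  IO.puts("Status: #{update["status"]}")
end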
Get detailed information about a specific model.
Uses Ollama's /api/show endpoint to retrieve model details including modelfile, parameters, template, and more.
Examples
{:ok, info} = ExLLM.Adapters.Ollama.show_model("llama3.1")
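To pull specific fields out of the returned details, something like the following should work; a minimal sketch, assuming the response keeps the string keys from Ollama's /api/show endpoint ("modelfile", "parameters"):
{:ok, info} = ExLLM.Adapters.Ollama.show_model("llama3.1")
IO.puts(info["modelfile"])     # the model's Modelfile contents
IO.inspect(info["parameters"]) # generation parameters such as stop sequences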
Stream a completion using the /api/generate endpoint.
Similar to generate/2, but returns a stream of response chunks.
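A minimal usage sketch, assuming this function is stream_generate/2, that it accepts the same options as generate/2, and that chunks expose :content as in the streaming chat example above:
{:ok, stream} = ExLLM.Adapters.Ollama.stream_generate("Write a haiku about coding",
  model: "llama3.1")

for chunk <- stream do
  if chunk.content, do: IO.write(chunk.content)
end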
Update configuration for a specific model in ollama.yml.
This function fetches the latest information for a specific model and updates its entry in the configuration file.
Options
:save - When true, saves the configuration to the file (default: true)
:path - Custom path for the YAML file (default: "config/models/ollama.yml")
Examples
# Update a specific model's configuration
{:ok, yaml} = ExLLM.Adapters.Ollama.update_model_config("llama3.1")
# Update without saving (preview changes)
{:ok, yaml} = ExLLM.Adapters.Ollama.update_model_config("llama3.1", save: false)
Get Ollama version information.
Examples
{:ok, version} = ExLLM.Adapters.Ollama.version()
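To display it, something like this should work; a minimal sketch, assuming the result carries a "version" key as in Ollama's /api/version response:
{:ok, version} = ExLLM.Adapters.Ollama.version()
IO.puts(version["version"])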