Provider for the Ollama API, using the Omni.Dialects.OllamaChat dialect.
Not loaded by default — must be explicitly enabled. Either add it to your
provider list in application config:

```elixir
config :omni, :providers, [:anthropic, :openai, :google, :ollama]
```

Or load it at runtime:

```elixir
Omni.Provider.load([:ollama])
```

## Configuration
Defaults to a local Ollama instance at `http://localhost:11434` with no
authentication. For cloud-hosted Ollama instances that require an API key:

```elixir
config :omni, Omni.Providers.Ollama,
  base_url: "https://ollama.com",
  api_key: {:system, "OLLAMA_API_KEY"}
```

## Models
By default, models are loaded from priv/models/ollama-cloud.json. For local
Ollama instances, override with a list of models matching what you have
pulled locally. Each entry can be a string (just the model ID) or a keyword
list for full control:
```elixir
config :omni, Omni.Providers.Ollama,
  models: [
    "mistral:7b",
    [id: "llama3.1:8b", name: "Llama 3.1 8B", context_size: 128_000, max_output_tokens: 8192],
    [id: "qwen3.5:4b", name: "Qwen 3.5 4B", context_size: 32_768, reasoning: true]
  ]
```

String entries use the ID as the display name with default values for all
other fields. Keyword entries accept any field from `Omni.Model.new/1`;
only `:id` is required, and everything else has sensible defaults.
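For local instances, the list of pulled models can also be discovered at runtime from Ollama's `/api/tags` endpoint rather than maintained by hand. A minimal sketch using only the standard library (`LocalModels` is a hypothetical helper, not part of Omni; assumes Elixir 1.18+ for the built-in `JSON` module):

```elixir
# Hypothetical helper: query a local Ollama instance for its pulled models
# and return their IDs, suitable for the :models override above.
defmodule LocalModels do
  def ids(base_url \\ "http://localhost:11434") do
    Application.ensure_all_started(:inets)

    {:ok, {{_, 200, _}, _headers, body}} =
      :httpc.request(:get, {~c"#{base_url}/api/tags", []}, [], body_format: :binary)

    body
    |> JSON.decode!()
    |> Map.fetch!("models")
    |> Enum.map(& &1["name"])
  end
end
```

The resulting IDs could then feed the `:models` override from `config/runtime.exs`, where arbitrary code may run at boot.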
## Limitations
- Image attachments: Only base64-encoded images are supported. URL-based
  image attachments (`{:url, url}`) are silently skipped because Ollama's
  API has no URL image input mechanism.
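One workaround for this limitation is to fetch the image yourself and re-attach it as base64 before sending. A sketch using only the standard library — note that the `{:base64, data}` tuple shape is an assumption mirroring the `{:url, url}` form above, so verify it against the dialect's actual attachment type:

```elixir
# Sketch: download an image over HTTP(S) and convert it to a base64
# attachment that Ollama can accept. {:base64, data} is an assumed shape.
defmodule ImageWorkaround do
  def to_base64_attachment(url) do
    Application.ensure_all_started(:inets)
    Application.ensure_all_started(:ssl)

    {:ok, {{_, 200, _}, _headers, body}} =
      :httpc.request(:get, {String.to_charlist(url), []}, [], body_format: :binary)

    {:base64, Base.encode64(body)}
  end
end
```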