Factory for Ollama models served via Ollama's OpenAI-compatible HTTP server.
As with llama.cpp, the available models depend on what has been pulled into the local Ollama instance, so this module provides a factory rather than a static catalog.
Examples
iex> Planck.AI.Models.Ollama.model("llama3.2")
%Planck.AI.Model{provider: :ollama, base_url: "http://localhost:11434", ...}
iex> Planck.AI.Models.Ollama.model("qwen2.5-coder:7b",
...> base_url: "http://10.0.0.5:11434",
...> context_window: 32_768
...> )
Summary
Functions
Builds a Planck.AI.Model for an Ollama-hosted model.
Functions
@spec model(String.t(), keyword()) :: Planck.AI.Model.t()
Builds a Planck.AI.Model for an Ollama-hosted model.
Options
- `:base_url` - base URL of the Ollama server. Defaults to `"http://localhost:11434"`.
- `:context_window` - context window size. Defaults to `4096`.
- `:max_tokens` - max tokens to generate. Defaults to `2048`.
- `:supports_thinking` - whether the model supports thinking blocks. Defaults to `false`.
- `:input_types` - list of supported input modalities. Defaults to `[:text]`.
- `:default_opts` - inference parameters applied on every call unless overridden by the caller (e.g. `[temperature: 0.8, top_p: 0.9]`). Defaults to `[]`.
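As a sketch of how these options combine, a vision-capable model pulled onto a remote host might be configured as below. The model name, host, and `:input_types` value are illustrative assumptions, not defaults shipped by this module:

iex> # Hypothetical model name and host; options as documented above
iex> Planck.AI.Models.Ollama.model("llava:13b",
...>   base_url: "http://gpu-box.local:11434",
...>   input_types: [:text, :image],
...>   default_opts: [temperature: 0.2, top_p: 0.9]
...> )
%Planck.AI.Model{provider: :ollama, base_url: "http://gpu-box.local:11434", ...}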