ExLLM.Providers.Ollama (ex_llm v0.8.1)


Ollama API adapter for ExLLM - provides local model inference via a running Ollama server.

Configuration

This adapter requires a running Ollama server. By default, it connects to http://localhost:11434.

Using Environment Variables

# Set environment variables
export OLLAMA_API_BASE="http://localhost:11434"  # optional
export OLLAMA_MODEL="llama2"  # optional

# Use with default environment provider
ExLLM.Providers.Ollama.chat(messages, config_provider: ExLLM.Infrastructure.ConfigProvider.Env)

Using Static Configuration

config = %{
  ollama: %{
    base_url: "http://localhost:11434",
    model: "llama2"
  }
}
{:ok, provider} = ExLLM.Infrastructure.ConfigProvider.Static.start_link(config)
ExLLM.Providers.Ollama.chat(messages, config_provider: provider)

Example Usage

messages = [
  %{role: "user", content: "Hello, how are you?"}
]

# Simple chat
{:ok, response} = ExLLM.Providers.Ollama.chat(messages)
IO.puts(response.content)

# Streaming chat
{:ok, stream} = ExLLM.Providers.Ollama.stream_chat(messages)
for chunk <- stream do
  if chunk.content, do: IO.write(chunk.content)
end

Available Models

To see available models, ensure Ollama is running and use:

{:ok, models} = ExLLM.Providers.Ollama.list_models()
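Each returned entry describes one installed model. A minimal sketch that prints them, assuming each entry exposes an id field (an assumption about the model struct's shape; inspect an entry to confirm):

# Print the identifier of each locally installed model.
# `id` is an assumed field name, not confirmed by this doc.
Enum.each(models, fn model -> IO.puts(model.id) end)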

Configuration Management

The adapter provides functions to generate and update model configurations:

# Generate configuration for all installed models
{:ok, yaml} = ExLLM.Providers.Ollama.generate_config()

# Save the configuration
{:ok, path} = ExLLM.Providers.Ollama.generate_config(save: true)

# Update a specific model's configuration
{:ok, yaml} = ExLLM.Providers.Ollama.update_model_config("llama3.1")

This is useful for keeping your config/models/ollama.yml in sync with your locally installed models and their actual capabilities.

Summary

Functions

Copy a model to a new name.

Delete a model from Ollama.

Generate a completion for the given prompt using Ollama's /api/generate endpoint.

Generate YAML configuration for all locally installed Ollama models.

List currently loaded models.

Pull a model from the Ollama library.

Push a model to the Ollama library.

Get detailed information about a specific model.

Stream a completion using the /api/generate endpoint.

Update configuration for a specific model in ollama.yml.

Get Ollama version information.

Functions

copy_model(source, destination, options \\ [])

Copy a model to a new name.

Uses Ollama's /api/copy endpoint to create a copy of an existing model with a new name.

Examples

{:ok, _} = ExLLM.Providers.Ollama.copy_model("llama3.1", "my-llama")

delete_model(model_name, options \\ [])

Delete a model from Ollama.

Uses Ollama's /api/delete endpoint to remove a model from the local model store.

Examples

{:ok, _} = ExLLM.Providers.Ollama.delete_model("old-model")

generate(prompt, options \\ [])

Generate a completion for the given prompt using Ollama's /api/generate endpoint.

This endpoint is for non-chat completions; it is useful for base models or workflows that don't fit the chat format.

Options

  • :model - The model to use (required)
  • :suffix - Text to append after the generation
  • :images - List of base64-encoded images for multimodal models
  • :format - Response format (e.g., "json")
  • :options - Model-specific options (temperature, seed, etc.)
  • :context - Context from previous request for maintaining conversation state
  • :raw - If true, no formatting will be applied to the prompt
  • :keep_alive - How long to keep the model loaded (e.g., "5m")
  • :timeout - Request timeout in milliseconds

Examples

# Simple completion
{:ok, response} = ExLLM.Providers.Ollama.generate("Complete this: The sky is", 
  model: "llama3.1")

# With options
{:ok, response} = ExLLM.Providers.Ollama.generate("Write a haiku about coding",
  model: "llama3.1",
  options: %{temperature: 0.7, seed: 42})

# Maintain conversation context
{:ok, response1} = ExLLM.Providers.Ollama.generate("Hi, I'm learning Elixir",
  model: "llama3.1")
{:ok, response2} = ExLLM.Providers.Ollama.generate("What should I learn first?",
  model: "llama3.1",
  context: response1.context)

generate_config(options \\ [])

Generate YAML configuration for all locally installed Ollama models.

This function fetches information about all installed models and generates a YAML configuration that can be saved to config/models/ollama.yml.

Options

  • :save - When true, saves the configuration to the file (default: false)
  • :path - Custom path for the YAML file (default: "config/models/ollama.yml")
  • :merge - When true, merges with existing configuration (default: true)

Examples

# Generate configuration and return as string
{:ok, yaml} = ExLLM.Providers.Ollama.generate_config()

# Save directly to config/models/ollama.yml
{:ok, path} = ExLLM.Providers.Ollama.generate_config(save: true)

# Save to custom location
{:ok, path} = ExLLM.Providers.Ollama.generate_config(
  save: true, 
  path: "my_ollama_config.yml"
)

# Replace existing configuration instead of merging
{:ok, yaml} = ExLLM.Providers.Ollama.generate_config(merge: false)

list_running_models(options \\ [])

List currently loaded models.

Uses Ollama's /api/ps endpoint to show which models are currently loaded in memory.

Examples

{:ok, loaded} = ExLLM.Providers.Ollama.list_running_models()
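The shape of each entry depends on how the adapter maps Ollama's /api/ps payload, so inspect the result before relying on specific keys:

# Entry shape is adapter-defined; inspect before pattern matching on keys.
IO.inspect(loaded)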

pull_model(model_name, options \\ [])

Pull a model from the Ollama library.

Uses Ollama's /api/pull endpoint to download a model. This returns a stream of progress updates.

Examples

{:ok, stream} = ExLLM.Providers.Ollama.pull_model("llama3.1:latest")
for update <- stream do
  IO.puts("Status: #{update["status"]}")
end

push_model(model_name, options \\ [])

Push a model to the Ollama library.

Uses Ollama's /api/push endpoint to upload a model.

Examples

{:ok, stream} = ExLLM.Providers.Ollama.push_model("my-model:latest")
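As with pull_model/2, the returned stream yields progress updates. A sketch of consuming it, assuming the same update shape as pull_model/2:

for update <- stream do
  IO.puts("Status: #{update["status"]}")
end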

show_model(model_name, options \\ [])

Get detailed information about a specific model.

Uses Ollama's /api/show endpoint to retrieve model details including modelfile, parameters, template, and more.

Examples

{:ok, info} = ExLLM.Providers.Ollama.show_model("llama3.1")
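A sketch of reading the result, assuming the adapter returns the /api/show payload as a map with string keys ("modelfile" and "parameters" are assumptions about that shape):

# Assumes the raw /api/show JSON is returned as a map with string keys.
IO.puts(info["modelfile"])
IO.puts(info["parameters"])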

stream_generate(prompt, options \\ [])

Stream a completion using the /api/generate endpoint.

Similar to generate/2, but returns a stream of response chunks instead of a single response.
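
A minimal sketch, assuming stream_generate/2 accepts the same options as generate/2 and yields chunks that expose content, as in the streaming chat example earlier:

{:ok, stream} = ExLLM.Providers.Ollama.stream_generate("Write a haiku about streams",
  model: "llama3.1")
# Write each chunk's content as it arrives.
for chunk <- stream do
  if chunk.content, do: IO.write(chunk.content)
end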

update_model_config(model_name, options \\ [])

Update configuration for a specific model in ollama.yml.

This function fetches the latest information for a specific model and updates its entry in the configuration file.

Options

  • :save - When true, saves the configuration to the file (default: true)
  • :path - Custom path for the YAML file (default: "config/models/ollama.yml")

Examples

# Update a specific model's configuration
{:ok, yaml} = ExLLM.Providers.Ollama.update_model_config("llama3.1")

# Update without saving (preview changes)
{:ok, yaml} = ExLLM.Providers.Ollama.update_model_config("llama3.1", save: false)

version(options \\ [])

Get Ollama version information.

Examples

{:ok, version} = ExLLM.Providers.Ollama.version()
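The shape of the returned value is adapter-defined; inspect it before relying on specific fields:

# May be a bare version string or a map mirroring /api/version; inspect to confirm.
IO.inspect(version)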