Omni.Providers.OllamaGen (Omni v0.1.0)

An alternative Provider implementation for Ollama, using the Ollama Completion API. This Provider is preferred when you need fine-grained control over the prompt templates, which isn't possible using the normal chat API.

This Provider extends the Omni.Providers.Ollama Provider, and so the base URL can be configured in the same way.
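As a sketch of how initialisation might look, assuming Omni exposes an `Omni.init/2` entry point and that this Provider is registered under an `:ollama_gen` atom (both are assumptions; the `:base_url` option follows the parent Ollama Provider's configuration):

```elixir
# Hypothetical sketch: initialise the generation Provider, optionally
# overriding the base URL inherited from Omni.Providers.Ollama.
# The :ollama_gen atom is an assumption, not confirmed by these docs.
provider = Omni.init(:ollama_gen, base_url: "http://localhost:11434/api")
```

Consult the Omni.Providers.Ollama documentation for the authoritative list of configuration options.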

Summary

Functions

Returns the schema for this Provider.

Functions

Returns the schema for this Provider.

Schema

  • :model (String.t/0) - Required. The Ollama model name.

  • :prompt (String.t/0) - Required. Prompt to generate a response for.

  • :images (list of String.t/0) - A list of Base64 encoded images to be included with the prompt (for multimodal models only).

  • :system (String.t/0) - System prompt, overriding the model default.

  • :template (String.t/0) - Prompt template, overriding the model default.

  • :context - The context parameter returned from a previous call (enabling short conversational memory).

  • :format (String.t/0) - Set the expected format of the response (json).

  • :raw (boolean/0) - Set true if specifying a fully templated prompt (:template is ignored).

  • :stream (boolean/0) - Whether to stream the response. The default value is false.

  • :keep_alive - How long to keep the model loaded.

  • :options - Additional advanced model parameters.
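The schema options above might be put together as follows. This is a hedged sketch only: the `Omni.init/2` and `Omni.generate/2` calls and the `:ollama_gen` atom are assumptions about Omni's API, not confirmed by this page, though the keyword options themselves come from the schema.

```elixir
# Hypothetical sketch: generate a completion with a custom system prompt
# and template, using the schema fields documented above.
provider = Omni.init(:ollama_gen)

{:ok, response} =
  Omni.generate(provider,
    model: "llama3",                       # required: the Ollama model name
    prompt: "Why is the sky blue?",        # required: prompt to respond to
    system: "You are a concise assistant.",# overrides the model default
    template: "{{ .System }}\n{{ .Prompt }}", # overrides the model template
    keep_alive: "5m"                       # keep the model loaded for 5 minutes
  )
```

Passing `raw: true` instead would tell Ollama that the prompt is already fully templated, in which case `:template` is ignored.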