Omni.Provider behaviour (Omni v0.1.0)
A Provider represents an LLM provider service. By fully implementing the Provider behaviour, a module can add support for any available LLM backend.
Out of the box, Omni ships Providers for:
- Anthropic - chat with any of the Claude models
- Google - chat with any of the Gemini models
- Ollama - use Ollama to chat with any local model
- OpenAI - configurable with any other OpenAI compatible chat API
Implementing a Provider
This module also provides a number of macros that streamline creating a Provider implementation. Most callbacks can be implemented simply by calling the relevant macro.
defmodule MyLLM do
  use Omni.Provider

  @api_key Application.compile_env(:omni, [__MODULE__, :api_key])

  # Macros
  base_url "http://localhost:1234/api"
  headers %{authorization: "Bearer #{@api_key}"}
  endpoint "/chat"
  stream_endpoint "/chat", stream: true

  schema [
    # define NimbleOptions schema for chat request params
  ]

  # Callbacks
  @impl true
  def parse_stream(data) do
    # parse binary data stream into chunks
    {:cont, [chunks]}
  end

  @impl true
  def merge_stream(body, data) do
    # merge stream data chunks into single response
    body
  end
end
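Once defined, the module can be initialized like any other Provider (a minimal sketch; it assumes MyLLM compiles with a complete schema):

# Initialize an instance of the custom Provider
provider = Omni.Provider.init(MyLLM)
#=> %Omni.Provider{mod: MyLLM, req: %Req.Request{}}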
Extending a Provider
The extends/2 macro can be used to inherit from an already implemented Provider. For example, you may be implementing an LLM service that is OpenAI compatible but requires a different kind of authorization header.
defmodule MyLLM do
  use Omni.Provider

  @api_key Application.compile_env(:omni, [__MODULE__, :api_key])

  extends Omni.Providers.OpenAI, except: [:headers]
  headers %{x_auth_token: @api_key}
end
Summary
Types
Alt name
Request headers
Initialization options.
An arbitrary keyword list of request options for the LLM.
An arbitrary map representing the response from the LLM provider.
Provider struct
Callbacks
Invoked to return the Provider base URL as a string.
Invoked to return the Provider request body as a map.
Invoked to return the Provider chat endpoint and any default request options for that endpoint. Returns a tuple.
Invoked to return the Provider request headers/0.
Invoked to reconstruct a streaming data chunk back into a full response body.
Invoked to parse a streaming request data chunk into a list of one or more structured messages.
Invoked to return the Provider chat streaming endpoint and any default request options for that endpoint. Returns a tuple.
Functions
Defines the base URL for the Provider.
Defines the chat endpoint for the Provider.
Extends an existing Provider module by delegating all callbacks to the parent module.
Defines the request headers/0 for the Provider.
Initializes a new instance of a Provider, using the given module or alt/0 alias.
Defines the schema for the Provider request/0 options.
Defines the streaming chat endpoint for the Provider.
Types
@type alt() :: :anthropic | :google | :openai | :ollama
Alt name
Accepted aliases for the built-in Provider modules.
@type headers() :: Enumerable.t({atom() | String.t(), String.t() | [String.t()]})
Request headers
Can be any Enumerable containing keys and values. Handled by Req as follows:
- atom header names are turned into strings, replacing _ with -. For example, :user_agent becomes "user-agent".
- string header names are downcased.
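For example, the following is a valid headers value (the keys shown are purely illustrative):

# Req sends these as "x-api-key" and "user-agent"
%{"X-Api-Key" => "secret", user_agent: "omni"}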
@type init_opts() :: keyword()
Initialization options.
An arbitrary keyword list of options to initialize the Provider.
@type request() :: keyword()
An arbitrary keyword list of request options for the LLM.
Whilst there are some similarities across providers, refer to the documentation for each provider to ensure you construct a valid request.
@type response() :: map()
An arbitrary map representing the response from the LLM provider.
Refer to the documentation for each provider to understand the expected response format.
@type t() :: %Omni.Provider{mod: module(), req: Req.Request.t()}
Provider struct
Once initialized, is used to make subsequent API requests.
Callbacks
Invoked to return the Provider base URL as a string.
For a simple implementation, prefer the base_url/1 macro. Manually implementing the callback allows using the initialization options to dynamically generate the return value.
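As a rough sketch, a manual implementation might derive the URL from an init option (the :base_url option and default URL are illustrative assumptions):

@impl true
def base_url(opts) do
  # derive the base URL from an (illustrative) init option, with a fallback
  Keyword.get(opts, :base_url, "http://localhost:1234/api")
end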
Invoked to return the Provider request body as a map.
This callback is optional. If not implemented, the default implementation returns the user request/0 options as a map/0.
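As a minimal sketch, a manual implementation mirroring that default might look like this (assuming the callback receives the request/0 options):

@impl true
def body(request) do
  # convert the request keyword list into a plain map
  Enum.into(request, %{})
end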
Invoked to return the Provider chat endpoint and any default request options for that endpoint. Returns a tuple.
For a simple implementation, prefer the endpoint/2 macro. Manually implementing the callback allows using the user request/0 options to dynamically generate the return value.
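For example, a manual implementation might switch endpoints based on a request option (a sketch only; the :model option and paths are illustrative):

@impl true
def endpoint(request) do
  # choose a path based on an (illustrative) request option
  case Keyword.get(request, :model) do
    "my-vision-model" -> {"/vision/chat", []}
    _other -> {"/chat", []}
  end
end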
Invoked to return the Provider request headers/0.
For a simple implementation, prefer the headers/1 macro. Manually implementing the callback allows using the initialization options to dynamically generate the return value.
This callback is optional. If not implemented, the default implementation returns an empty set of headers.
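As a rough sketch, a manual implementation might build the headers from an init option (the :api_key option is an illustrative assumption):

@impl true
def headers(opts) do
  # build an authorization header from an (illustrative) init option
  %{authorization: "Bearer #{Keyword.fetch!(opts, :api_key)}"}
end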
Invoked to reconstruct a streaming data chunk back into a full response body.
Receives the response body and a single streaming message. The body should be returned with the message merged into it.
This callback is optional. If not implemented, the default implementation always returns the body as an empty string. If a Provider does not implement this callback, calling Task.await(stream_request_task) will return the body as an empty string, but will still send streaming messages to the specified process.
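As a hedged sketch, assuming each streamed message is a map with a "content" field (the field name varies by provider):

@impl true
def merge_stream(body, data) when is_binary(body) do
  # append the chunk's (illustrative) "content" field to the accumulated body
  body <> Map.get(data, "content", "")
end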
Invoked to parse a streaming request data chunk into a list of one or more structured messages.
Receives a binary data chunk.
Returning {:cont, messages} emits each of the messages and continues streaming chunks.
Returning {:halt, messages} emits each of the messages and cancels the streaming request.
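For instance, a provider that streams server-sent events might be handled roughly like this (a sketch only; it assumes SSE-formatted chunks and the Jason JSON library):

@impl true
def parse_stream(data) do
  messages =
    data
    |> String.split("\n", trim: true)
    |> Enum.flat_map(fn
      "data: [DONE]" -> []
      "data: " <> json -> [Jason.decode!(json)]
      _other -> []
    end)

  {:cont, messages}
end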
@callback schema() :: NimbleOptions.t()
Invoked to return the Provider chat schema, used to validate the request/0 options.
Prefer the schema/1 macro over a manual implementation.
Invoked to return the Provider chat streaming endpoint and any default request options for that endpoint. Returns a tuple.
For a simple implementation, prefer the stream_endpoint/2 macro. Manually implementing the callback allows using the user request/0 options to dynamically generate the return value.
Functions
Defines the base URL for the Provider.
Defines the chat endpoint for the Provider.
Optionally accepts a list of default request/0 options for this endpoint.
Extends an existing Provider module by delegating all callbacks to the parent module.
Use the :except option to specify which callbacks shouldn't be delegated to the parent, allowing you to override them with a tailored implementation.
Defines the request headers/0 for the Provider.
Initializes a new instance of a Provider, using the given module or alt/0 alias.
Accepts a list of initialization options. Refer to the Provider module docs for details of the accepted options.
Example
iex> Omni.Provider.init(:openai)
%Omni.Provider{mod: Omni.Providers.OpenAI, req: %Req.Request{}}
@spec schema(NimbleOptions.schema()) :: Macro.t()
Defines the schema for the Provider request/0 options.
Schemas should be defined as a NimbleOptions.schema/0.
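For example, a chat schema might look roughly like this (the option names are illustrative, not required by any particular Provider):

schema [
  model: [type: :string, required: true, doc: "Name of the model to use"],
  messages: [type: {:list, :map}, required: true, doc: "List of chat messages"],
  stream: [type: :boolean, default: false, doc: "Whether to stream the response"]
]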
Defines the streaming chat endpoint for the Provider.
Optionally accepts a list of default request/0 options for this endpoint.