Omni.Providers.OpenAI (Omni v0.1.1)
Provider implementation for the OpenAI Chat API. Use this Provider to chat with any of OpenAI's GPT chat models.
Authorization
Obtain an API key from the OpenAI Developer Dashboard and add it to your application's config.exs:
config :omni, Omni.Providers.OpenAI, "sk-proj-notarealkey"
Alternatively, pass the API key to Omni.init/2:
iex> Omni.init(:openai, api_key: api_key)
%Omni.Provider{mod: Omni.Providers.OpenAI, req: %Req.Request{}}
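In the example above, api_key is a plain string. At runtime you might read it from an environment variable instead (a sketch using the standard library's System.fetch_env!/1; the variable name is illustrative):
iex> api_key = System.fetch_env!("OPENAI_API_KEY")
iex> Omni.init(:openai, api_key: api_key)
%Omni.Provider{mod: Omni.Providers.OpenAI, req: %Req.Request{}}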
OpenAI configuration
This Provider accepts the following initialization options:
:organization_id - When given, sets the openai-organization header.
:project_id - When given, sets the openai-project header.
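Both options are passed to Omni.init/2 alongside the API key. For example (a sketch; the organization and project IDs are placeholders):
iex> Omni.init(:openai, api_key: api_key, organization_id: "org-myorg", project_id: "proj_myproject")
%Omni.Provider{mod: Omni.Providers.OpenAI, req: %Req.Request{}}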
Base URL
Many LLM providers mirror OpenAI's API. In such cases, you can use this Provider module and pass the :base_url option to Omni.init/2:
iex> Omni.init(:openai, base_url: "https://api.together.xyz/v1")
%Omni.Provider{mod: Omni.Providers.OpenAI, req: %Req.Request{}}
Summary
Functions
Returns the schema for this Provider.
Schema
:model (String.t/0) - Required. Name of the model to use.
:messages (list of map/0) - Required. A list of messages comprising the conversation so far.
:logit_bias (map of String.t/0 keys and integer/0 values) - Modify the likelihood of specified tokens appearing in the completion.
:log_probs (boolean/0) - Whether to return log probabilities of the output tokens or not.
:top_logprobs - An integer between 0 and 20 specifying the number of most likely tokens to return at each token position.
:max_tokens (non_neg_integer/0) - The maximum number of tokens that can be generated in the chat completion.
:n (non_neg_integer/0) - How many chat completion choices to generate for each input message.
:frequency_penalty (float/0) - Number between -2.0 and 2.0.
:presence_penalty (float/0) - Number between -2.0 and 2.0.
:response_format (map/0) - An object specifying the format that the model must output.
  :type
:seed (integer/0) - If specified, the system will make a best effort to sample deterministically.
:stop - Up to 4 sequences where the API will stop generating further tokens.
:stream (boolean/0) - If set, partial message deltas will be sent.
:stream_options (map/0) - Options for streaming response.
  :include_usage (boolean/0) - If set, an additional usage stats chunk will be streamed.
:temperature (float/0) - What sampling temperature to use, between 0 and 2.
:top_p (float/0) - An alternative to sampling with temperature, called nucleus sampling.
:tools (list of map/0) - A list of tools the model may call.
:tool_choice - Controls which (if any) tool is called by the model.
:user (String.t/0) - A unique identifier representing your end-user.
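As an example, a chat request combining a few of these options might look like the following (a sketch: Omni.generate/2 is assumed to be the request entry point, and the model name is illustrative):
iex> provider = Omni.init(:openai, api_key: api_key)
iex> Omni.generate(provider, model: "gpt-4o", messages: [
...>   %{role: "user", content: "Write a haiku about the sea"}
...> ], temperature: 0.7, max_tokens: 128)
{:ok, %{"choices" => [...]}}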