InstructorLite.Adapters.OpenAI (instructor_lite v0.2.0)
OpenAI adapter.
This adapter is implemented using the chat completions endpoint and structured outputs.
JSON mode
Even though the adapter uses strict JSON Schema mode by default, it respects all explicitly provided keys in params. To switch to a less strict JSON mode, simply provide the response_format key in your params.
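For instance, a minimal sketch of switching to plain JSON mode, based on the Example below (response_format: %{type: "json_object"} is the chat completions JSON mode setting, and that mode expects the prompt itself to mention JSON):

InstructorLite.instruct(%{
    messages: [%{role: "user", content: "John is 25yo. Reply in JSON."}],
    response_format: %{type: "json_object"}
  },
  response_model: %{name: :string, age: :integer},
  adapter: InstructorLite.Adapters.OpenAI,
  adapter_context: [api_key: Application.fetch_env!(:instructor_lite, :openai_key)]
)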
Params
The params argument should be shaped as a Create chat completion request body.
Example
InstructorLite.instruct(%{
    messages: [%{role: "user", content: "John is 25yo"}],
    model: "gpt-4o-mini",
    service_tier: "default"
  },
  response_model: %{name: :string, age: :integer},
  adapter: InstructorLite.Adapters.OpenAI,
  adapter_context: [api_key: Application.fetch_env!(:instructor_lite, :openai_key)]
)
{:ok, %{name: "John", age: 25}}
Summary
Functions
Updates params with prompt based on json_schema and notes.
Parse chat completion endpoint response.
Updates params with prompt for retrying a request.
Make a request to the OpenAI API.
Functions
Updates params with prompt based on json_schema and notes.
Also specifies the default gpt-4o-mini model if not provided by the user.
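For example, sticking with the call from the Example above, the default can be overridden simply by naming a model explicitly in params (gpt-4o here is only an illustration):

InstructorLite.instruct(%{
    messages: [%{role: "user", content: "John is 25yo"}],
    model: "gpt-4o"
  },
  response_model: %{name: :string, age: :integer},
  adapter: InstructorLite.Adapters.OpenAI,
  adapter_context: [api_key: Application.fetch_env!(:instructor_lite, :openai_key)]
)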
Parse chat completion endpoint response.
Can return:
{:ok, parsed_json} on success.
{:error, :refusal, reason} on refusal.
{:error, :unexpected_response, response} if response is of unexpected shape.
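A minimal sketch of handling these return shapes, assuming the callback is exposed as parse_response/2 (the function name is not shown in this extract); response, opts, and the handle_* helpers are hypothetical placeholders:

# parse_response/2 is assumed here; response stands for a decoded chat
# completion body and the handle_* functions are hypothetical placeholders.
case InstructorLite.Adapters.OpenAI.parse_response(response, opts) do
  {:ok, parsed_json} -> handle_success(parsed_json)
  {:error, :refusal, reason} -> handle_refusal(reason)
  {:error, :unexpected_response, raw} -> handle_unexpected(raw)
end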
Updates params with prompt for retrying a request.
Make a request to the OpenAI API.
Options
:api_key (String.t/0) - Required. OpenAI API key.
:http_client (atom/0) - Any module that follows Req.post/2 interface. The default value is Req.
:http_options (keyword/0) - Options passed to http_client.post/2. The default value is [receive_timeout: 60000].
:url (String.t/0) - API endpoint to use for sending requests. The default value is "https://api.openai.com/v1/chat/completions".
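For instance, a sketch of supplying these options alongside the required api_key in adapter_context (the timeout value here is only an illustration):

InstructorLite.instruct(%{messages: [%{role: "user", content: "John is 25yo"}]},
  response_model: %{name: :string, age: :integer},
  adapter: InstructorLite.Adapters.OpenAI,
  adapter_context: [
    api_key: Application.fetch_env!(:instructor_lite, :openai_key),
    http_options: [receive_timeout: 120_000],
    url: "https://api.openai.com/v1/chat/completions"
  ]
)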