Instructor (Instructor v0.0.2)
Instructor.ex is a spiritual port of the great Instructor Python library by @jxnlco. This library brings structured prompting to LLMs. Instead of receiving text as output, Instructor coaxes the LLM to output valid JSON that maps directly to the Ecto schema you provide. If the LLM fails to do so, or returns values that do not pass your validations, Instructor provides utilities to automatically retry with the LLM so it can correct its errors. By default it is designed to be used with the OpenAI API, but it provides an extendable adapter behavior to work with ggerganov/llama.cpp and Bumblebee (coming soon!).
At its simplest, usage is pretty straightforward,
defmodule SpamPrediction do
  use Ecto.Schema
  use Instructor.Validator

  @doc """
  ## Field Descriptions:
  - class: Whether or not the email is spam
  - reason: A short, less than 10 word rationalization for the classification
  - score: A confidence score between 0.0 and 1.0 for the classification
  """
  @primary_key false
  embedded_schema do
    field(:class, Ecto.Enum, values: [:spam, :not_spam])
    field(:reason, :string)
    field(:score, :float)
  end

  @impl true
  def validate_changeset(changeset) do
    changeset
    |> Ecto.Changeset.validate_number(:score,
      greater_than_or_equal_to: 0.0,
      less_than_or_equal_to: 1.0
    )
  end
end

is_spam? = fn text ->
  Instructor.chat_completion(
    model: "gpt-3.5-turbo",
    response_model: SpamPrediction,
    max_retries: 3,
    messages: [
      %{
        role: "user",
        content: """
        Your purpose is to classify customer support emails as either spam or not.
        This is for a clothing retailer business.
        They sell all types of clothing.

        Classify the following email: #{text}
        """
      }
    ]
  )
end

is_spam?.("Hello I am a Nigerian prince and I would like to send you money")
# => {:ok, %SpamPrediction{class: :spam, reason: "Nigerian prince email scam", score: 0.98}}
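Per the spec of Instructor.chat_completion/1, the call can also fail: you get back the failing changeset when validations never pass, or an error string for transport-level failures. A minimal sketch of handling all three shapes in a hypothetical caller (the traverse_errors formatting is just one way to render the errors):

```elixir
# Assumes `is_spam?` and `email_body` are defined as in the example above.
case is_spam?.(email_body) do
  {:ok, prediction} ->
    # Validated struct: safe to use the fields directly.
    {:ok, prediction.class}

  {:error, %Ecto.Changeset{} = changeset} ->
    # The LLM's output never passed validations, even after retries.
    errors = Ecto.Changeset.traverse_errors(changeset, fn {msg, _opts} -> msg end)
    {:error, errors}

  {:error, reason} when is_binary(reason) ->
    # Adapter or transport failure (e.g. the API was unreachable).
    {:error, reason}
end
```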
Simply create an Ecto schema, optionally provide a @doc on the schema definition (which we pass down to the LLM), then call Instructor.chat_completion/1 with context about the task you'd like the LLM to complete.
You can also provide a validate_changeset/1 function via use Instructor.Validator, which gives you code-level Ecto changeset validations. Used in conjunction with max_retries: 3, Instructor will automatically and iteratively go back and forth with the LLM up to n times with any validation errors so that it has a chance to fix them.
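Any validation from Ecto.Changeset works here, not just numeric checks. As a sketch, a hypothetical schema (invented for illustration) could cap the length of a string field; with max_retries set, the errors produced here are fed back to the LLM on each retry:

```elixir
defmodule EmailSummary do
  use Ecto.Schema
  use Instructor.Validator

  @primary_key false
  embedded_schema do
    field(:summary, :string)
  end

  # Each retry re-prompts the LLM with the changeset errors produced here,
  # so the model gets a chance to shorten an over-long summary.
  @impl true
  def validate_changeset(changeset) do
    Ecto.Changeset.validate_length(changeset, :summary, max: 100)
  end
end
```

Calling Instructor.chat_completion(response_model: EmailSummary, max_retries: 2, ...) would then re-prompt up to twice if the summary came back longer than 100 characters.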
Curious to learn more? Unsure of how you'd use this? Check out our extensive set of tutorials.
Configuration
To configure the default OpenAI adapter you can set the configuration,
config :openai, api_key: "sk-........"
config :openai, http_options: [recv_timeout: 10 * 60 * 1000]
To use a local LLM, you can install and run the llama.cpp server and tell Instructor to use it,
config :instructor, adapter: Instructor.Adapters.Llamacpp
Summary
Functions
Create a new chat completion for the provided messages and parameters.
Functions
@spec chat_completion(Keyword.t()) :: {:ok, Ecto.Schema.t()} | {:error, Ecto.Changeset.t()} | {:error, String.t()}
Create a new chat completion for the provided messages and parameters.
The parameters are passed directly to the LLM adapter. By default they shadow the OpenAI API parameters. For more information on the parameters, see the OpenAI API docs.
Additionally, the following parameters are supported:
  * :response_model - The Ecto schema to validate the response against.
  * :max_retries - The maximum number of times to retry the LLM call if it fails, or does not pass validations (defaults to 0).
Examples
iex> Instructor.chat_completion(
...>   model: "gpt-3.5-turbo",
...>   response_model: Instructor.Demos.SpamPrediction,
...>   messages: [
...>     %{
...>       role: "user",
...>       content: "Classify the following text: Hello, I am a Nigerian prince and I would like to give you $1,000,000."
...>     }
...>   ]
...> )
{:ok,
 %Instructor.Demos.SpamPrediction{
   class: :spam,
   score: 0.999
 }}