OpenaiEx User Guide
Mix.install([
# {:openai_ex, git: "https://github.com/restlessronin/openai_ex.git", tag: "v0.4.0"},
{:openai_ex, "~> 0.4.0"},
# {:openai_ex, path: Path.join(__DIR__, "..")},
{:kino, "~> 0.11.0"}
])
Introduction
OpenaiEx is an Elixir library that provides a community-maintained OpenAI API client, especially for Livebook development.
At this point, all API endpoints and features (as of Nov 11, 2023) are supported, including the Assistants API Beta, the tools support in chat completions, and the streaming version of the completion and chat completion endpoints.
There are some differences compared to other Elixir OpenAI wrappers.
- I tried to faithfully mirror the naming/structure of the official Python API. For example, content that is already in memory can be uploaded as part of a request; it doesn't have to be read from a file at a local path.
- I was developing for a Livebook use case, so there is no config; everything is set through environment variables.
- Streaming API versions are fully supported.
To learn how to use OpenaiEx, you can refer to the relevant parts of the official OpenAI API reference documentation, which we link to throughout this document.
This file is an executable Livebook, which means you can interactively run and modify the code samples provided. We encourage you to open it in Livebook and try out the code for yourself!
Installation
You can install OpenaiEx using Mix:
In Livebook
Add the following code to the first connection cell:
Mix.install([
  {:openai_ex, "~> 0.4.0"}
])
In a Mix Project
Add the following to your mix.exs file:
def deps do
[
{:openai_ex, "~> 0.4.0"}
]
end
Authentication
To authenticate with the OpenAI API, you will need an API key. We recommend storing your API key in an environment variable. Since we are using Livebook, we can store this and other environment variables as Livebook Hub Secrets.
apikey = System.fetch_env!("LB_OPENAI_API_KEY")
openai = OpenaiEx.new(apikey)
You can also specify an organization if you are a member of more than one:
# organization = System.fetch_env!("LB_OPENAI_ORGANIZATION")
# openai = OpenaiEx.new(apikey, organization)
For more information on authentication, see the OpenAI API Authentication reference.
Model
List Models
To list all available models, use the Model.list() function:
alias OpenaiEx.Model
openai |> Model.list()
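The response is a plain map decoded from the endpoint's JSON. As a quick sketch (assuming the standard OpenAI list shape, where the models sit under a "data" key), you can pull out just the model ids:
# hypothetical convenience snippet, not part of the library API
openai |> Model.list() |> Map.get("data") |> Enum.map(fn m -> m["id"] end)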
Retrieve Models
To retrieve information about a specific model, use the Model.retrieve() function:
openai |> Model.retrieve("text-davinci-003")
For more information on using models, see the OpenAI API Models reference.
Completion
To generate a completion, you first need to define a completion request structure using the Completion.new() function. This function takes several parameters, such as the model ID, the prompt, the maximum number of tokens, etc.
alias OpenaiEx.Completion
completion_req =
Completion.new(
model: "text-davinci-003",
prompt:
"Give me some background on the elixir language. Why was it created? What is it used for? What distinguishes it from other languages? How popular is it?",
max_tokens: 500,
temperature: 0
)
Once you have defined the completion request structure, you can generate a completion using the Completion.create() function:
comp_response = openai |> Completion.create(completion_req)
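The response is a map that mirrors the API's JSON. A minimal sketch for pulling out the generated text, assuming the standard completions shape (a "choices" list whose entries carry a "text" field):
# extract the text of the first choice
comp_response["choices"] |> Enum.at(0) |> Map.get("text")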
You can also call the endpoint and have it return a stream. This returns the result as a series of tokens, which have to be put together in code.
To use the stream option, call the Completion.create() function with stream: true:
completion_stream = openai |> Completion.create(completion_req, stream: true)
completion_stream |> Stream.flat_map(& &1) |> Enum.each(fn x -> IO.puts(inspect(x)) end)
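If you want the assembled text rather than the raw events, the following sketch is one way to do it. It assumes each event is a map with a :data key holding the decoded chunk, as in the Completions Bot livebook; verify the event shape against your version of the library:
completion_stream
|> Stream.flat_map(& &1)
# each event is assumed to look like %{data: %{"choices" => [%{"text" => ...}]}}
|> Stream.map(fn %{data: d} -> d |> Map.get("choices") |> Enum.at(0) |> Map.get("text") end)
|> Enum.join()
|> IO.puts()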
For an example of how to programmatically work with this stream, check out the Completions Bot livebook which builds a ChatBot UI using the Completion API (with and without streaming).
For more information on generating completions, see the OpenAI API Completions reference.
Chat Completion
To generate a chat completion, you need to define a chat completion request structure using the ChatCompletion.new() function. This function takes several parameters, such as the model ID and a list of chat messages. We have a module, ChatMessage, which helps create messages in the chat format.
alias OpenaiEx.ChatCompletion
alias OpenaiEx.ChatMessage
alias OpenaiEx.MsgContent
chat_req =
ChatCompletion.new(
model: "gpt-3.5-turbo",
messages: [
ChatMessage.user(
"Give me some background on the elixir language. Why was it created? What is it used for? What distinguishes it from other languages? How popular is it?"
)
]
)
You can also pass images to the API by creating a message with image content:
ChatMessage.user(
MsgContent.image_url(
"https://raw.githubusercontent.com/restlessronin/openai_ex/main/assets/images/starmask.png"
)
)
You can generate a chat completion using the ChatCompletion.create() function:
chat_response = openai |> ChatCompletion.create(chat_req)
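As with completions, the result is a plain map. Assuming the standard chat response shape, the assistant's reply can be read out like this:
# the assistant's message content from the first choice
chat_response["choices"] |> Enum.at(0) |> Map.get("message") |> Map.get("content")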
You can also call the endpoint and have it stream the response. This returns the result as a series of tokens, which have to be put together in code.
To use the stream option, call the ChatCompletion.create() function with stream: true:
chat_stream = openai |> ChatCompletion.create(chat_req, stream: true)
chat_stream |> Stream.flat_map(& &1) |> Enum.each(fn x -> IO.puts(inspect(x)) end)
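As a sketch, the streamed deltas can be stitched back into a single reply. This assumes the same %{data: ...} event shape as above, with chat chunks carrying a "delta" rather than a "message":
chat_stream
|> Stream.flat_map(& &1)
|> Stream.map(fn %{data: d} -> d |> Map.get("choices") |> Enum.at(0) |> get_in(["delta", "content"]) end)
# drop the nil content of role-only and finish chunks
|> Stream.reject(&is_nil/1)
|> Enum.join()
|> IO.puts()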
For a more in-depth example of ChatCompletion, check out the Deeplearning.AI OrderBot Livebook.
For a detailed example of the use of the streaming ChatCompletion API, check out Streaming Orderbot, the streaming equivalent of the prior example.
For more information on generating chat completions, see the OpenAI API Chat Completions reference.
Function (Tool) Calling
In OpenAI's ChatCompletion endpoint, you can use the function calling feature to call a custom function and pass its result as part of the conversation. Here's an example of how to use the function calling feature:
First, we set up the function specification and completion request. The function specification defines the name, description, and parameters of the function we want to call. In this example, we define a function called get_current_weather that takes a location parameter and an optional unit parameter. The completion request includes the function specification, the conversation history, and the model we want to use.
tool_spec =
Jason.decode!("""
{"type": "function",
"function": {
"name": "get_current_weather",
"description": "Get the current weather in a given location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
},
"unit": {
"type": "string",
"enum": ["celsius", "fahrenheit"]
}
},
"required": ["location"]
}
}
}
""")
rev_msgs = [
ChatMessage.user("What's the weather like in Boston today?")
]
fn_req =
ChatCompletion.new(
model: "gpt-3.5-turbo",
messages: rev_msgs |> Enum.reverse(),
tools: [tool_spec],
tool_choice: "auto"
)
Next, we call the OpenAI endpoint to get a response that includes the function call.
fn_response = openai |> ChatCompletion.create(fn_req)
We extract the function call from the response and call the appropriate function with the given parameters. In this example, we define a map of functions that maps function names to their implementations. We then use the function name and arguments from the function call to look up the appropriate function and call it with the given parameters.
fn_message = fn_response["choices"] |> Enum.at(0) |> Map.get("message")
tool_call = fn_message |> Map.get("tool_calls") |> List.first()
tool_id = tool_call |> Map.get("id")
fn_call = tool_call |> Map.get("function")
functions = %{
"get_current_weather" => fn location, unit ->
%{
"location" => location,
"temperature" => "72",
"unit" => unit,
"forecast" => ["sunny", "windy"]
}
|> Jason.encode!()
end
}
fn_name = fn_call["name"]
fn_args = fn_call["arguments"] |> Jason.decode!()
location = fn_args["location"]
unit = fn_args["unit"] || "fahrenheit"
fn_value = functions[fn_name].(location, unit)
We then pass the returned value back to the ChatCompletion endpoint with the conversation history to that point to get the final response.
latest_msgs = [ChatMessage.tool(tool_id, fn_name, fn_value) | [fn_message | rev_msgs]]
fn_req_2 =
ChatCompletion.new(
model: "gpt-3.5-turbo",
messages: latest_msgs |> Enum.reverse()
)
fn_response_2 = openai |> ChatCompletion.create(fn_req_2)
The final response includes the result of the function call integrated into the conversation.
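Assuming the same response shape as before, it can be extracted like this:
# the assistant's final answer, with the weather data folded in
fn_response_2["choices"] |> Enum.at(0) |> Map.get("message") |> Map.get("content")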
Image
Generate Image
We define the image creation request structure using the Image.new() function:
alias OpenaiEx.Image
img_req = Image.new(prompt: "A cute baby sea otter", size: "256x256", n: 2)
Then call the Image.create() function to generate the images.
img_response = openai |> Image.create(img_req)
For more information on generating images, see the OpenAI API Image reference.
Fetch the generated images
With the information in the image response, we can fetch the images from their URLs:
fetch_blob = fn url ->
Finch.build(:get, url) |> Finch.request!(OpenaiEx.Finch) |> Map.get(:body)
end
fetched_images = img_response["data"] |> Enum.map(fn i -> i["url"] |> fetch_blob.() end)
View the generated images
Finally, we can render the images using Kino
fetched_images
|> Enum.map(fn r -> r |> Kino.Image.new("image/png") |> Kino.render() end)
img_to_expmt = fetched_images |> List.first()
Edit Image
We define an image edit request structure using the Image.Edit.new() function. This function requires an image and a mask. For the image, we will use the one that we received. Let's load the mask from a URL.
star_mask =
fetch_blob.(
"https://raw.githubusercontent.com/restlessronin/openai_ex/main/assets/images/starmask.png"
)
# star_mask = OpenaiEx.new_file(path: Path.join(__DIR__, "../assets/images/starmask.png"))
Set up the image edit request with image, mask and prompt.
img_edit_req =
Image.Edit.new(
image: img_to_expmt,
mask: star_mask,
size: "256x256",
prompt: "Image shows a smiling Otter"
)
We then call the Image.create_edit() function:
img_edit_response = openai |> Image.create_edit(img_edit_req)
and view the result
img_edit_response["data"]
|> Enum.map(fn i -> i["url"] |> fetch_blob.() |> Kino.Image.new("image/png") |> Kino.render() end)
Image Variations
We define an image variation request structure using the Image.Variation.new() function. This function requires an image.
img_var_req = Image.Variation.new(image: img_to_expmt, size: "256x256")
Then call the Image.create_variation() function to generate the images.
img_var_response = openai |> Image.create_variation(img_var_req)
img_var_response["data"]
|> Enum.map(fn i -> i["url"] |> fetch_blob.() |> Kino.Image.new("image/png") |> Kino.render() end)
For more information on image variations, see the OpenAI API Image Variations reference.
Embedding
Define the embedding request structure using Embedding.new().
alias OpenaiEx.Embedding
emb_req =
Embedding.new(
model: "text-embedding-ada-002",
input: "The food was delicious and the waiter..."
)
Then call the Embedding.create() function.
emb_response = openai |> Embedding.create(emb_req)
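The vector itself is nested in the response. Assuming the standard embedding response shape (a "data" list whose first entry holds the "embedding"), it can be read out like this:
# a list of floats, one entry per dimension of the embedding
emb_response["data"] |> List.first() |> Map.get("embedding")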
For more information on generating embeddings, see the OpenAI API Embedding reference
Audio
Transcription
To define an Audio request structure, we first need to create a file parameter using OpenaiEx.new_file().
alias OpenaiEx.Audio
audio_url = "https://raw.githubusercontent.com/restlessronin/openai_ex/main/assets/transcribe.mp3"
audio_file = OpenaiEx.new_file(name: audio_url, content: fetch_blob.(audio_url))
# audio_file = OpenaiEx.new_file(path: Path.join(__DIR__, "../assets/transcribe.mp3"))
The file parameter is used to create the Audio request structure
audio_req = Audio.new(file: audio_file, model: "whisper-1")
We then call the Audio.transcribe() function to create a transcription.
audio_response = openai |> Audio.transcribe(audio_req)
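Assuming the standard transcription response shape, the transcribed text sits under the "text" key:
audio_response["text"]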
Translation
The translation call uses practically the same request structure, but calls the Audio.translate() endpoint.
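As a sketch, re-using the request structure from above (a real translation request would normally carry non-English audio):
translation_response = openai |> Audio.translate(audio_req)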
For more information on the audio endpoints, see the OpenAI API Audio reference.
File
List files
To request all files that belong to the user's organization, call the File.list() function:
alias OpenaiEx.File
openai |> File.list()
Upload files
To upload a file, we need to create a file parameter, and then the upload request
# fine_tune_file = OpenaiEx.new_file(path: Path.join(__DIR__, "../assets/fine-tune.jsonl"))
ftf_url = "https://raw.githubusercontent.com/restlessronin/openai_ex/main/assets/fine-tune.jsonl"
fine_tune_file = OpenaiEx.new_file(name: ftf_url, content: fetch_blob.(ftf_url))
upload_req = File.new_upload(file: fine_tune_file, purpose: "fine-tune")
Then we call the File.create() function to upload the file:
upload_res = openai |> File.create(upload_req)
We can verify that the file has been uploaded by calling
openai |> File.list()
We grab the file id from the previous response value to use in the following samples
file_id = upload_res["id"]
Retrieve files
In order to retrieve meta information on a file, we simply call the File.retrieve() function with the given id:
openai |> File.retrieve(file_id)
Retrieve file content
Similarly, to download the file contents, we call File.download():
openai |> File.download(file_id)
Delete file
Finally, we can delete the file by calling File.delete()
openai |> File.delete(file_id)
Verify that the file has been deleted by listing files again
openai |> File.list()
FineTuning Job
To run a fine-tuning job, we minimally need a training file. We will re-run the file creation request above.
upload_res = openai |> File.create(upload_req)
Next we call FineTuning.Job.new() to create a new request structure:
alias OpenaiEx.FineTuning
ft_req = FineTuning.Job.new(model: "davinci-002", training_file: upload_res["id"])
To begin the fine-tune, we call the FineTuning.Job.create() function:
ft_res = openai |> FineTuning.Job.create(ft_req)
We can list all fine-tuning jobs by calling FineTuning.Job.list():
openai |> FineTuning.Job.list()
The function FineTuning.Job.retrieve() gets the details of a particular fine-tuning job.
ft_id = ft_res["id"]
openai |> FineTuning.Job.retrieve(fine_tuning_job_id: ft_id)
and FineTuning.Job.list_events() can be called to get the events:
openai |> FineTuning.Job.list_events(fine_tuning_job_id: ft_id)
To cancel a fine-tuning job, call FineTuning.Job.cancel():
openai |> FineTuning.Job.cancel(fine_tuning_job_id: ft_id)
A fine-tuned model can be deleted by calling Model.delete():
ft_model = ft_res["fine_tuned_model"]
unless is_nil(ft_model) do
openai |> Model.delete(ft_model)
end
For more information on the fine-tuning endpoints, see the OpenAI API Fine-tuning reference.
Moderation
We use the moderation API by calling Moderation.new() to create a new request:
alias OpenaiEx.Moderation
mod_req = Moderation.new(input: "I want to kill people")
Then call the Moderation.create() function:
mod_res = openai |> Moderation.create(mod_req)
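Assuming the standard moderation response shape (a "results" list whose entries carry a "flagged" boolean and per-category scores), the verdict can be read out like this:
# true if the input was flagged by any moderation category
mod_res["results"] |> List.first() |> Map.get("flagged")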
For more information on the moderation endpoints, see the OpenAI API Moderation reference.