OpenaiEx User Guide
Mix.install([
# {:openai_ex, git: "https://github.com/restlessronin/openai_ex.git", tag: "v0.1.5"},
{:openai_ex, "== 0.1.5"},
# {:openai_ex, path: Path.join(__DIR__, "..")},
{:kino, "~> 0.9.2"}
])
Introduction
OpenaiEx
is an Elixir library that provides a community-maintained client for the OpenAI API.
The library closely follows the structure of the official OpenAI API client libraries for Python and JavaScript, making it easy to understand and reuse existing documentation.
To learn how to use OpenaiEx, you can refer to the relevant parts of the official OpenAI API reference documentation, which we link to throughout this document.
This file is an executable Livebook, which means you can interactively run and modify the code samples provided. We encourage you to open it in Livebook and try out the code for yourself!
Installation
You can install OpenaiEx using Mix:
In Livebook
Add the following code to the first connection cell:
Mix.install([
  {:openai_ex, "~> 0.1.5"}
])
In a Mix Project
Add the following to your mix.exs file:
def deps do
[
{:openai_ex, "~> 0.1.5"}
]
end
Authentication
To authenticate with the OpenAI API, you will need an API key. We recommend storing your API key in an environment variable. Since we are using Livebook, we can store this and other environment variables as Livebook Hub Secrets.
apikey = System.fetch_env!("LB_OPENAI_API_KEY")
openai = OpenaiEx.new(apikey)
You can also specify an organization if you are a member of more than one:
# organization = System.fetch_env!("LB_OPENAI_ORGANIZATION")
# openai = OpenaiEx.new(apikey, organization)
For more information on authentication, see the OpenAI API Authentication reference.
Model
List Models
To list all available models, use the Model.list() function:
alias OpenaiEx.Model
openai |> Model.list()
Retrieve Models
To retrieve information about a specific model, use the Model.retrieve() function:
openai |> Model.retrieve("text-davinci-003")
For more information on using models, see the OpenAI API Models reference.
Completion
To generate a completion, you first need to define a completion request structure using the Completion.new() function. This function takes several parameters, such as the model ID, the prompt, the maximum number of tokens, etc.
alias OpenaiEx.Completion
completion_req =
Completion.new(
model: "text-davinci-003",
prompt: "Say this is a test",
max_tokens: 100,
temperature: 0
)
Once you have defined the completion request structure, you can generate a completion using the Completion.create() function:
comp_response = openai |> Completion.create(completion_req)
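The response is a plain map decoded from the API's JSON. As a sketch, assuming the standard completions response shape (a "choices" list whose entries carry a "text" field), we can pull out the generated text:

```elixir
# Hypothetical extraction, assuming the response shape
# %{"choices" => [%{"text" => text} | _], ...}
comp_response["choices"]
|> List.first()
|> Map.get("text")
```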
For more information on generating completions, see the OpenAI API Completions reference.
Chat Completion
To generate a chat completion, you need to define a chat completion request structure using the ChatCompletion.new() function. This function takes several parameters, such as the model ID and a list of chat messages. The ChatMessage module helps create messages in the chat format.
alias OpenaiEx.ChatCompletion
alias OpenaiEx.ChatMessage
chat_req = ChatCompletion.new(model: "gpt-3.5-turbo", messages: [ChatMessage.user("Hello")])
You can generate a chat completion using the ChatCompletion.create() function:
chat_response = openai |> ChatCompletion.create(chat_req)
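As with completions, the response is a plain map. A sketch of extracting the assistant's reply, assuming the standard chat completion response shape:

```elixir
# Hypothetical extraction, assuming the response shape
# %{"choices" => [%{"message" => %{"content" => content}} | _], ...}
chat_response["choices"]
|> List.first()
|> get_in(["message", "content"])
```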
For more information on generating chat completions, see the OpenAI API Chat Completions reference.
Edit
First, you need to define an edit request structure using the Edit.new() function. This function takes several parameters, such as the model ID, an input, and an instruction.
alias OpenaiEx.Edit
edit_req =
Edit.new(
model: "text-davinci-edit-001",
input: "What day of the wek is it?",
instruction: "Fix the spelling mistakes"
)
To generate the edit, call the Edit.create() function.
edit_response = openai |> Edit.create(edit_req)
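A sketch of reading the corrected text out of the response, assuming the standard edits response shape:

```elixir
# Hypothetical extraction, assuming the response shape
# %{"choices" => [%{"text" => text} | _], ...}
edit_response["choices"]
|> List.first()
|> Map.get("text")
```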
For more information on generating edits, see the OpenAI API Edit reference.
Image
Generate Image
We define the image creation request structure using the Image.new() function:
alias OpenaiEx.Image
img_req = Image.new(prompt: "A cute baby sea otter", size: "256x256", n: 2)
Then call the Image.create() function to generate the images.
img_response = openai |> Image.create(img_req)
For more information on generating images, see the OpenAI API Image reference.
Fetch the generated images
With the information in the image response, we can fetch the images from their URLs
fetch_blob = fn url -> Tesla.client([]) |> Tesla.get!(url) |> Map.get(:body) end
fetched_images = img_response["data"] |> Enum.map(fn i -> i["url"] |> fetch_blob.() end)
View the generated images
Finally, we can render the images using Kino
fetched_images
|> Enum.map(fn r -> r |> Kino.Image.new("image/png") |> Kino.render() end)
img_to_expmt = fetched_images |> List.first()
Edit Image
We define an image edit request structure using the Image.Edit.new() function. This function requires an image and a mask. For the image, we will use the one that we received. Let's load the mask from a URL.
# star_mask = File.read!(Path.join(__DIR__, "../assets/images/starmask.png"))
star_mask =
fetch_blob.(
"https://raw.githubusercontent.com/restlessronin/openai_ex/main/assets/images/starmask.png"
)
Set up the image edit request with image, mask and prompt.
img_edit_req =
Image.Edit.new(
image: img_to_expmt,
mask: star_mask,
size: "256x256",
prompt: "Image shows a smiling Otter"
)
We then call the Image.create_edit() function
img_edit_response = openai |> Image.create_edit(img_edit_req)
and view the result
img_edit_response["data"]
|> Enum.map(fn i -> i["url"] |> fetch_blob.() |> Kino.Image.new("image/png") |> Kino.render() end)
Image Variations
We define an image variation request structure using the Image.Variation.new() function. This function requires an image.
img_var_req = Image.Variation.new(image: img_to_expmt, size: "256x256")
Then call the Image.create_variation() function to generate the images.
img_var_response = openai |> Image.create_variation(img_var_req)
img_var_response["data"]
|> Enum.map(fn i -> i["url"] |> fetch_blob.() |> Kino.Image.new("image/png") |> Kino.render() end)
For more information on image variations, see the OpenAI API Image Variations reference.
Embedding
Define the embedding request structure using Embedding.new().
alias OpenaiEx.Embedding
emb_req =
Embedding.new(
model: "text-embedding-ada-002",
input: "The food was delicious and the waiter..."
)
Then call the Embedding.create() function.
emb_response = openai |> Embedding.create(emb_req)
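A sketch of pulling the embedding vector (a list of floats) out of the response, assuming the standard embeddings response shape:

```elixir
# Hypothetical extraction, assuming the response shape
# %{"data" => [%{"embedding" => [floats]} | _], ...}
emb_vector =
  emb_response["data"]
  |> List.first()
  |> Map.get("embedding")

length(emb_vector)
```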
For more information on generating embeddings, see the OpenAI API Embedding reference.
Audio
Transcription
To define an Audio request structure, we need to create a file parameter using OpenaiEx.new_file().
alias OpenaiEx.Audio
# audio_file = OpenaiEx.new_file(path: Path.join(__DIR__, "../assets/transcribe.mp3"))
audio_url = "https://raw.githubusercontent.com/restlessronin/openai_ex/main/assets/transcribe.mp3"
audio_file = OpenaiEx.new_file(name: audio_url, content: fetch_blob.(audio_url))
The file parameter is used to create the Audio request structure
audio_req = Audio.new(file: audio_file, model: "whisper-1")
We then call the Audio.transcribe() function to create a transcription.
audio_response = openai |> Audio.transcribe(audio_req)
Translation
The translation call uses practically the same request structure, but calls the Audio.translate() endpoint.
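As a sketch, the call mirrors the transcription example above; we reuse the same file parameter in a fresh request (the sample file is already in English, so the translation should read the same as the transcription):

```elixir
translation_req = Audio.new(file: audio_file, model: "whisper-1")
translation_response = openai |> Audio.translate(translation_req)
```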
For more information on the audio endpoints, see the OpenAI API Audio reference.
File
List files
To request all files that belong to the user organization, call the File.list() function:
alias OpenaiEx.File
openai |> File.list()
Upload files
To upload a file, we first create a file parameter and then the upload request:
# fine_tune_file = OpenaiEx.new_file(path: Path.join(__DIR__, "../assets/fine-tune.jsonl"))
ftf_url = "https://raw.githubusercontent.com/restlessronin/openai_ex/main/assets/fine-tune.jsonl"
fine_tune_file = OpenaiEx.new_file(name: ftf_url, content: fetch_blob.(ftf_url))
upload_req = File.new_upload(file: fine_tune_file, purpose: "fine-tune")
Then we call the File.create() function to upload the file:
upload_res = openai |> File.create(upload_req)
We can verify that the file has been uploaded by calling
openai |> File.list()
We grab the file id from the previous response value to use in the following samples
file_id = upload_res["id"]
Retrieve files
To retrieve meta information on a file, we simply call the File.retrieve() function with the given id:
openai |> File.retrieve(file_id)
Retrieve file content
Similarly, to download the file contents, we call File.download():
openai |> File.download(file_id)
Delete file
Finally, we can delete the file by calling File.delete()
openai |> File.delete(file_id)
Verify that the file has been deleted by listing files again
openai |> File.list()
Moderation
We use the moderation API by calling Moderation.new() to create a new request:
alias OpenaiEx.Moderation
mod_req = Moderation.new(input: "I want to kill people")
Then call the Moderation.create() function:
mod_res = openai |> Moderation.create(mod_req)
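A sketch of checking the verdict, assuming the standard moderations response shape:

```elixir
# Hypothetical extraction, assuming the response shape
# %{"results" => [%{"flagged" => flagged, "categories" => %{...}} | _]}
mod_res["results"]
|> List.first()
|> Map.get("flagged")
```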
For more information on the moderation endpoints, see the OpenAI API Moderation reference.