OpenAI.Responses (openai_responses v0.2.1)
Client for the OpenAI Responses API.
This module provides functions to interact with OpenAI's Responses API, allowing you to create, retrieve, and manage AI-generated responses.
Examples
# Create a simple text response
{:ok, response} = OpenAI.Responses.create("gpt-4o", "Write a haiku about programming")
# Extract the text from the response
text = OpenAI.Responses.Helpers.output_text(response)
# Create a response with tools and options
{:ok, response} = OpenAI.Responses.create("gpt-4o", "What's the weather like in Paris?",
tools: [%{type: "web_search_preview"}],
temperature: 0.7
)
# Stream a response
stream = OpenAI.Responses.stream("gpt-4o", "Tell me a story")
Enum.each(stream, fn event -> IO.inspect(event) end)
Summary
Functions
Collects a complete response from a streaming response.
Creates a new response with the specified model and input.
Creates a streaming response with the specified model and input.
Deletes a specific response by ID.
Retrieves a specific response by ID.
Lists input items for a specific response.
Creates a response with structured output.
Creates a streaming response with structured output.
Creates a streaming response and returns a proper Enumerable stream of events.
Extracts text deltas from a streaming response.
Functions
@spec collect_stream(Enumerable.t()) :: map()
Collects a complete response from a streaming response.
This is a convenience function that consumes a stream and returns a complete response, similar to what would be returned by the non-streaming API. All events are processed and combined into a final response object.
Parameters
stream
- The stream from OpenAI.Responses.stream/3
Returns
- The complete response map
Examples
# Get a streaming response
stream = OpenAI.Responses.stream("gpt-4o", "Tell me a story")
# Collect all events into a single response object
response = OpenAI.Responses.collect_stream(stream)
# Process the complete response
text = OpenAI.Responses.Helpers.output_text(response)
IO.puts(text)
Creates a new response with the specified model and input.
Parameters
model
- The model ID to use (e.g., "gpt-4o")
input
- The text prompt or structured input message
opts
- Optional parameters for the request:
:tools
- List of tools to make available to the model
:instructions
- System instructions for the model
:temperature
- Sampling temperature (0.0 to 2.0)
:max_output_tokens
- Maximum number of tokens to generate
:stream
- Whether to stream the response
:previous_response_id
- ID of a previous response for continuation
- All other parameters supported by the API
Returns
{:ok, response}
- On success, returns the response
{:error, error}
- On failure
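Putting the options above together, a typical call might look like the following sketch (the prompt, instructions, and option values are illustrative, and an API key is assumed to be configured for the client):

```elixir
# Create a response with instructions and sampling options
case OpenAI.Responses.create("gpt-4o", "Summarize the plot of Hamlet",
       instructions: "You are a concise literary assistant.",
       temperature: 0.5,
       max_output_tokens: 200
     ) do
  {:ok, response} ->
    # Extract and print the generated text
    IO.puts(OpenAI.Responses.Helpers.output_text(response))

  {:error, error} ->
    IO.inspect(error, label: "Request failed")
end
```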
Creates a streaming response with the specified model and input.
This function is maintained for backward compatibility; new code should use stream/3 instead.
Returns
- A stream of events representing the model's response
Deletes a specific response by ID.
Parameters
response_id
- The ID of the response to delete
opts
- Optional parameters for the request
Returns
{:ok, result}
- On success, returns deletion confirmation
{:error, error}
- On failure
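A minimal usage sketch (the response ID is a placeholder; the exact shape of the confirmation map comes from the API, not this library):

```elixir
# Delete a response once it is no longer needed
{:ok, result} = OpenAI.Responses.delete("resp_abc123")

# Inspect the deletion confirmation returned by the API
IO.inspect(result)
```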
Retrieves a specific response by ID.
Parameters
response_id
- The ID of the response to retrieve
opts
- Optional parameters for the request:
:include
- Additional data to include in the response
Returns
{:ok, response}
- On success, returns the response
{:error, error}
- On failure
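A minimal usage sketch (the response ID is a placeholder):

```elixir
# Retrieve a previously created response by its ID
{:ok, response} = OpenAI.Responses.get("resp_abc123")

# The retrieved response can be processed like one from create/3
IO.puts(OpenAI.Responses.Helpers.output_text(response))
```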
Lists input items for a specific response.
Parameters
response_id
- The ID of the response
opts
- Optional parameters for the request:
:before
- List input items before this ID
:after
- List input items after this ID
:limit
- Number of objects to return (1-100)
:order
- Sort order ("asc" or "desc")
Returns
{:ok, items}
- On success, returns the input items
{:error, error}
- On failure
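A usage sketch combining the pagination options above (the response ID is a placeholder, and the assumption that items are wrapped in a "data" list follows the API's usual list envelope):

```elixir
# List the most recent input items for a response, newest first
{:ok, items} = OpenAI.Responses.list_input_items("resp_abc123",
  limit: 20,
  order: "desc"
)

# Inspect each input item in the returned page
Enum.each(items["data"], &IO.inspect/1)
```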
@spec parse(String.t(), String.t() | map() | list(), map(), keyword()) :: {:ok, map()} | {:error, any()}
Creates a response with structured output.
This function is similar to create/3 but automatically parses the response according to the provided schema and returns the parsed data.
Parameters
model
- The model ID to use (e.g., "gpt-4o")
input
- The text prompt or structured input message
schema
- The schema definition for structured output
opts
- Optional parameters for the request:
:schema_name
- Optional name for the schema (default: "data")
- All other options supported by create/3
Returns
{:ok, parsed_data}
- On success, returns the parsed data
{:error, error}
- On failure
Examples
# Define a schema
calendar_event_schema = OpenAI.Responses.Schema.object(%{
name: :string,
date: :string,
participants: {:array, :string}
})
# Create a response with structured output
{:ok, event} = OpenAI.Responses.parse(
"gpt-4o",
"Alice and Bob are going to a science fair on Friday.",
calendar_event_schema,
schema_name: "event"
)
# Access the parsed data
IO.puts("Event: #{event["name"]} on #{event["date"]}")
IO.puts("Participants: #{Enum.join(event["participants"], ", ")}")
Creates a streaming response with structured output.
This function is similar to stream/3 but automatically parses each chunk according to the provided schema.
Parameters
model
- The model ID to use (e.g., "gpt-4o")
input
- The text prompt or structured input message
schema
- The schema definition for structured output
opts
- Optional parameters for the request:
:schema_name
- Optional name for the schema (default: "data")
- All other options supported by stream/3
Returns
- A stream that yields parsed data chunks
Examples
# Define a schema
math_reasoning_schema = OpenAI.Responses.Schema.object(%{
steps: {:array, OpenAI.Responses.Schema.object(%{
explanation: :string,
output: :string
})},
final_answer: :string
})
# Stream a response with structured output
stream = OpenAI.Responses.parse_stream(
"gpt-4o",
"Solve 8x + 7 = -23",
math_reasoning_schema,
schema_name: "math_reasoning"
)
# Process the stream
Enum.each(stream, fn chunk ->
IO.inspect(chunk)
end)
Creates a streaming response and returns a proper Enumerable stream of events.
This function returns a stream that yields individual events as they arrive from the API, making it suitable for real-time processing of responses.
Parameters
model
- The model ID to use (e.g., "gpt-4o")
input
- The text prompt or structured input message
opts
- Optional parameters for the request (same as create/3)
Examples
# Print each event as it arrives
stream = OpenAI.Responses.stream("gpt-4o", "Tell me a story")
Enum.each(stream, &IO.inspect/1)
# Process text deltas in real-time
stream = OpenAI.Responses.stream("gpt-4o", "Tell me a story")
text_stream = OpenAI.Responses.Stream.text_deltas(stream)
# This preserves streaming behavior (one chunk at a time)
text_stream
|> Stream.each(fn delta ->
IO.write(delta)
end)
|> Stream.run()
Returns
- An Enumerable stream that yields events as they arrive
@spec text_deltas(Enumerable.t()) :: Enumerable.t(String.t())
Extracts text deltas from a streaming response.
This is a convenience function that returns a stream of text chunks as they arrive, useful for real-time display of model outputs. The function ensures text is not duplicated in the final output.
Parameters
stream
- The stream from OpenAI.Responses.stream/3
Returns
- A stream of text deltas
Examples
stream = OpenAI.Responses.stream("gpt-4o", "Tell me a story")
text_stream = OpenAI.Responses.text_deltas(stream)
# Print text deltas as they arrive (real-time output)
text_stream
|> Stream.each(fn delta ->
IO.write(delta)
end)
|> Stream.run()
IO.puts("") # Add a newline at the end
# Create a typing effect
stream = OpenAI.Responses.stream("gpt-4o", "Tell me a story")
text_stream = OpenAI.Responses.text_deltas(stream)
text_stream
|> Stream.each(fn delta ->
IO.write(delta)
Process.sleep(10) # Add delay for typing effect
end)
|> Stream.run()