Standardized response struct for all LLM providers.
This struct provides a unified format for responses from any provider, whether CLI-based (like Claude Code) or API-based (like OpenAI).
Fields
* `content` - The main text content of the response
* `provider` - Atom identifying the provider (e.g., `:claude_code`, `:openai`)
* `model` - String identifying the model used (e.g., `"claude-3-opus"`, `"gpt-4"`)
* `usage` - Map with token usage info (`prompt_tokens`, `completion_tokens`, `total_tokens`)
* `raw` - The raw response from the provider, kept for debugging/passthrough
* `metadata` - Additional provider-specific metadata (latency, request_id, etc.)
* `structured` - Parsed/validated structured output (when requested)
* `tool_calls` - List of tool call requests from the LLM, or `nil` when no tools were invoked
Example
    response = Response.new(
      content: "Hello! How can I help you?",
      provider: :openai,
      model: "gpt-4",
      usage: %{prompt_tokens: 10, completion_tokens: 8, total_tokens: 18}
    )
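
A caller can pattern match on the struct to branch on whether the LLM requested tool execution. This is a sketch using the field names documented above; `handle_tool_call/1` is a hypothetical helper, not part of this module.

    # When tool_calls is a non-empty list, dispatch each request;
    # otherwise treat the response as plain text content.
    case response do
      %Response{tool_calls: calls} when is_list(calls) and calls != [] ->
        Enum.map(calls, &handle_tool_call/1)

      %Response{content: content} ->
        IO.puts(content)
    end

Matching on `tool_calls` first matters because a provider may return both tool calls and partial text in one response.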