LlmCore.LLM.OpenAI (llm_core v0.3.0)


OpenAI-compatible API provider implementing the Provider behaviour.

Works with OpenAI, OpenRouter, Together, Groq, local vLLM — any endpoint that speaks the OpenAI chat completions format.

Configuration

Defaults to OpenAI. Override per-call via opts or globally via app config:

# Per-call
OpenAI.send(prompt, base_url: "https://openrouter.ai/api/v1",
                    api_key: System.get_env("OPENROUTER_API_KEY"),
                    model: "anthropic/claude-sonnet-4-20250514")

# Global (application config)
config :llm_core, :openai_base_url, "https://openrouter.ai/api/v1"
config :llm_core, :openai_api_key, System.get_env("OPENROUTER_API_KEY")

Auth Resolution Order

  1. opts[:api_key] (per-call)
  2. Application.get_env(:llm_core, :openai_api_key)
  3. System.get_env("OPENAI_API_KEY")

URL Resolution Order

  1. opts[:base_url] (per-call)
  2. Application.get_env(:llm_core, :openai_base_url)
  3. "https://api.openai.com/v1" (default)
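The two resolution chains above can be sketched as plain `||` fallbacks. This is an illustrative reconstruction, not the module's actual internals; the helper names are hypothetical:

```elixir
defmodule ResolutionSketch do
  # Hypothetical helpers mirroring the documented resolution order.

  # 1. per-call opt, 2. app config, 3. environment variable
  def resolve_api_key(opts) do
    opts[:api_key] ||
      Application.get_env(:llm_core, :openai_api_key) ||
      System.get_env("OPENAI_API_KEY")
  end

  # 1. per-call opt, 2. app config, 3. built-in default
  def resolve_base_url(opts) do
    opts[:base_url] ||
      Application.get_env(:llm_core, :openai_base_url) ||
      "https://api.openai.com/v1"
  end
end
```

Because `||` short-circuits on the first truthy value, a per-call option always wins over application config, which in turn wins over the environment variable or built-in default.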

Summary

Functions

Checks if an OpenAI-compatible API key is configured.

Returns the OpenAI capability map including streaming, structured output, tool use, vision, and supported models.

Returns :api — OpenAI is a cloud API provider.

Sends a prompt to the OpenAI-compatible chat completions endpoint.

Streams a response from the OpenAI-compatible chat completions endpoint.

Functions

available?()

@spec available?() :: boolean()

Checks if an OpenAI-compatible API key is configured.
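One way to use this is as a guard before attempting a request, so callers can fail fast when no key is configured. A minimal sketch (the error atom is an illustrative choice, not part of the library's API):

```elixir
# Guard a call on key availability; :no_api_key is a hypothetical error value.
if LlmCore.LLM.OpenAI.available?() do
  LlmCore.LLM.OpenAI.send("Hello!")
else
  {:error, :no_api_key}
end
```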

capabilities()

@spec capabilities() :: LlmCore.LLM.Provider.capabilities()

Returns the OpenAI capability map including streaming, structured output, tool use, vision, and supported models.

provider_type()

@spec provider_type() :: :api

Returns :api — OpenAI is a cloud API provider.

send(prompt, opts \\ [])

@spec send(
  LlmCore.LLM.Provider.prompt(),
  keyword()
) :: {:ok, LlmCore.LLM.Response.t()} | {:error, LlmCore.LLM.Error.t()}

Sends a prompt to the OpenAI-compatible chat completions endpoint.

When opts[:tools] contains a list of LlmToolkit.Tool structs, tool definitions are encoded into the request body. If the model responds with finish_reason: "tool_calls", the returned Response.tool_calls will contain decoded LlmToolkit.Tool.Call structs.
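A call with tools might look like the following. This is a hedged sketch: the model name, tool struct fields, and the `Response` fields matched on (`tool_calls`, `text`) are assumptions based on the summary above, and the exact shape of `LlmToolkit.Tool` depends on that library:

```elixir
alias LlmCore.LLM.OpenAI

tools = [
  # Hypothetical LlmToolkit.Tool struct; real fields depend on LlmToolkit.
  %LlmToolkit.Tool{name: "get_weather"}
]

case OpenAI.send("What's the weather in Lisbon?", model: "gpt-4o-mini", tools: tools) do
  {:ok, %LlmCore.LLM.Response{tool_calls: calls}} when calls != [] ->
    # The model finished with "tool_calls"; each entry is a decoded
    # LlmToolkit.Tool.Call struct ready to dispatch.
    Enum.each(calls, &IO.inspect/1)

  {:ok, response} ->
    # Ordinary text completion (field name assumed).
    IO.puts(response.text)

  {:error, %LlmCore.LLM.Error{} = error} ->
    IO.inspect(error)
end
```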

stream(prompt, opts \\ [])

@spec stream(
  LlmCore.LLM.Provider.prompt(),
  keyword()
) :: {:ok, Enumerable.t()} | {:error, LlmCore.LLM.Error.t()}

Streams a response from the OpenAI-compatible chat completions endpoint.
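Since `stream/2` returns `{:ok, Enumerable.t()}`, chunks can be consumed lazily with the `Stream` module. A sketch, assuming each element of the enumerable is (or can be rendered as) a binary text delta:

```elixir
{:ok, stream} = LlmCore.LLM.OpenAI.stream("Tell me a short story.")

# Print each chunk as it arrives; Stream.run/1 forces the lazy enumerable.
stream
|> Stream.each(&IO.write/1)
|> Stream.run()
```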