Generic OpenAI-compatible model provider.
Mirrors the `LiteLlm` wrapper class from Google's Python ADK. Speaks the
OpenAI Chat Completions wire format (`POST /chat/completions`), which is
also the format emulated by the LiteLLM proxy and every OpenAI-compatible
endpoint (Groq, Together, OpenRouter, Ollama, vLLM, LM Studio, Azure
OpenAI, and 100+ others via LiteLLM).
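Concretely, each generation becomes one JSON-encoded request body in the Chat Completions shape. A minimal sketch as an Elixir literal (the model name and messages are illustrative placeholders):

```elixir
# Request body sent as JSON via POST {base_url}/chat/completions.
# "gpt-4o" and the messages below are illustrative, not fixed by this module.
%{
  model: "gpt-4o",
  messages: [
    %{role: "system", content: "You are a helpful assistant."},
    %{role: "user", content: "Hello"}
  ]
}
```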
## Usage

### Direct OpenAI

```elixir
%ADK.Model.LiteLlm{
  model_name: "gpt-4o",
  api_key: System.fetch_env!("OPENAI_API_KEY"),
  base_url: "https://api.openai.com/v1"
}
```

(Or resolve via `ADK.Model.Registry.resolve("gpt-4o", api_key: key)` — `gpt-*`,
`o1*`, and `o3*` model names default to the OpenAI base URL.)
### Through a LiteLLM proxy

```elixir
%ADK.Model.LiteLlm{
  model_name: "openai/gpt-4o",
  api_key: "sk-litellm-master-key",
  base_url: "http://localhost:4000"
}
```

The proxy inspects the provider/model prefix to route to the correct
upstream. Any model string LiteLLM recognizes works here (e.g.
`"anthropic/claude-3-5-sonnet-20241022"`, `"ollama/llama3"`,
`"groq/mixtral-8x7b-32768"`).
### Any OpenAI-compatible endpoint

```elixir
%ADK.Model.LiteLlm{
  model_name: "llama3",
  api_key: "none",
  base_url: "http://localhost:11434/v1"
}
```

## Function calling
Tools defined via `ADK.Tool.FunctionTool` are serialized to OpenAI's
`tools: [{type: "function", function: {...}}]` schema. Assistant-emitted
`tool_calls` are parsed back into `ADK.Types.FunctionCall` parts, and
`ADK.Types.FunctionResponse` parts are serialized as `role: "tool"`
messages with `tool_call_id`.
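The three wire shapes involved can be sketched as Elixir literals (the tool name, arguments, and call id are illustrative, not part of the ADK API):

```elixir
# 1. Request: a FunctionTool serialized into the tools array.
%{
  type: "function",
  function: %{
    name: "get_weather",
    description: "Look up current weather for a city",
    parameters: %{
      type: "object",
      properties: %{city: %{type: "string"}},
      required: ["city"]
    }
  }
}

# 2. Response: an assistant message whose tool_calls are parsed into
#    ADK.Types.FunctionCall parts (per the OpenAI spec, arguments
#    arrive as a JSON-encoded string, not a map).
%{
  role: "assistant",
  tool_calls: [
    %{id: "call_abc123", type: "function",
      function: %{name: "get_weather", arguments: ~s({"city": "Paris"})}}
  ]
}

# 3. Follow-up request: the ADK.Types.FunctionResponse serialized as a
#    role: "tool" message echoing the matching tool_call_id.
%{role: "tool", tool_call_id: "call_abc123", content: ~s({"temp_c": 18})}
```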