# Changelog

All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com) and this project adheres to [Semantic Versioning](https://semver.org).
## Unreleased
## 1.3.2 - 2026-05-01

### Added
- Updated model catalog — refreshed across all providers, including GPT-5.5 Pro and Deepseek v4 Pro.
## 1.3.1 - 2026-04-27

### Added
- Alibaba provider — opt-in built-in provider for Alibaba Cloud's hosted models.
- Updated model catalog — refreshed across all providers, including GPT-5.5 and Deepseek v4 (OpenRouter).
## 1.3.0 - 2026-04-21

### Added
- Groq provider — opt-in built-in provider for Groq's hosted models.
- Moonshot AI provider — opt-in built-in provider for Moonshot's Kimi models.
- Z.ai provider — opt-in built-in provider for Z.ai.
- `Omni.Codec` — lossless encode/decode of messages, content blocks, and usage to JSON-safe maps for persistence.
- `:xhigh` thinking level — new level between `:high` and `:max`.
- Claude Opus 4.7 support — full support including adaptive thinking.
- Updated model catalog — refreshed across all providers, including Claude Opus 4.7 and Kimi K2.6.
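The codec entry above lends itself to a short illustration. A hedged sketch, assuming `Omni.Codec` exposes `encode/1` and `decode/1` (only the module name appears in this entry; the function names and the use of `Jason` for JSON serialization are assumptions):

```elixir
# Sketch only — encode/1, decode/1, and the Jason calls are assumptions.
{:ok, map} = Omni.Codec.encode(message)   # message -> JSON-safe map
json = Jason.encode!(map)                 # persist as JSON

# Later: restore the message losslessly from storage.
{:ok, restored} = json |> Jason.decode!() |> Omni.Codec.decode()
```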
### Changed
- Google Gemini API version — now defaults to `v1beta` unconditionally, removing the previous `v1`/`v1beta` option.
### Fixed
- OpenAI Completions tool use parsing — arguments were dropped when name and arguments arrived in a single streaming event.
- OpenAI structured output — `additionalProperties: false` now applied on object schemas for strict mode compatibility.
- OpenAI Responses PDF attachments — added missing `filename` field on file content blocks.
- Google Gemini API version — hardcoded `v1beta` path prefix, as many features require the beta API.
## 1.2.1 - 2026-04-02

### Added
- Dynamic tool descriptions — override `description/1` to incorporate `init/1` state into the tool description at construction time.
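A minimal sketch of the feature above, assuming a tool behaviour with `init/1` and an overridable `description/1` callback (`use Omni.Tool` and the state shape are assumptions; this entry only names the two callbacks):

```elixir
defmodule MyApp.SearchTool do
  use Omni.Tool  # assumed entry point — not confirmed by this entry

  @impl true
  def init(opts) do
    # State captured at construction time.
    %{index: Keyword.fetch!(opts, :index)}
  end

  @impl true
  def description(state) do
    # init/1 state flows into the tool description.
    "Searches the #{state.index} index and returns matching documents."
  end
end
```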
### Fixed
- Google Gemini structured output — auto-upgrade to the `v1beta` API when `:output` is set, fixing 400 errors caused by `responseMimeType`/`responseSchema` fields not existing in `v1`.
## 1.2.0 - 2026-03-23

### Added
- `release_date` on `%Model{}` — optional `Date.t()` field populated from models.dev release date data. Enables filtering models by release date.
- New models — GPT-5.4 Mini and GPT-5.4 Nano added to model data.
### Removed
- Agent extracted — `Omni.Agent`, `Omni.Agent.*`, and `Omni.MessageTree` have been extracted into the standalone `omni_agent` package. This package is now purely the stateless LLM API layer.
- `%Turn{}` struct removed.
- `Loop` — `:turn_id` and `:turn_parent` options removed.
### Changed
- `%Attachment{}` — removed unused `description` field. Renamed `opts` to `meta` — an application-layer map that dialects do not read or send to providers (e.g. for filenames or display labels).
- `%Response{}` — `turn` field replaced by `messages` (list), `usage` (`%Usage{}`), and `node_ids` (list of tree node IDs, used by `omni_agent`). `:message` is now optional. Added `:cancelled` stop_reason.
## 1.1.0 - 2026-03-06

### Added
- `%Turn{}` struct — a conversation turn carrying messages, usage, and tree position (`id`, `parent`). Used as tree nodes in `MessageTree` and on `Response` for accumulated generation data.
- `MessageTree` — tree-structured conversation history with branching, navigation, and `Enumerable` support.
- Agent conversation tree — agents manage a `%MessageTree{}` internally. State restructured around `system`, `tools`, `tree`, `meta`, `private` fields. `assigns` replaced by `meta` (user data) + `private` (runtime state).
- `Agent.set_state/2,3` — replace agent configuration fields (model, system, tools, tree, opts, meta). Always replaces, atomic, idle-only. `/3` accepts a value or updater function.
- `Agent.navigate/2` — set the active conversation path to any turn in the tree.
- `Agent.usage/1` — cumulative token usage across all turns in the tree.
- Turn data on agent events — `:turn` and `:done` events carry a `%Turn{}` with tree position, enabling external persistence without built-in storage.
- `:tree` start option — hydrate an agent with a pre-built `%MessageTree{}`.
- `Model.to_ref/1` — convert a resolved `%Model{}` back to its `{provider_id, model_id}` lookup reference.
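Taken together, the tree APIs above might be used as follows. This is a hedged sketch: only `Agent.navigate/2`, `Agent.usage/1`, and the tree start option appear in this entry; `Omni.Agent.start_link/1`, the model-reference shape, and the variable names are assumptions.

```elixir
# Sketch — start_link/1 and its option names are assumptions.
{:ok, agent} = Omni.Agent.start_link(model: model, tree: saved_tree)

# Move the active conversation path back to an earlier turn, then branch
# by sending the next prompt from there.
:ok = Omni.Agent.navigate(agent, earlier_turn_id)

# Cumulative token usage across all turns in the tree.
usage = Omni.Agent.usage(agent)
```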
### Changed
- `%Response{}` — `messages` and `usage` fields replaced by a `turn` field containing a `%Turn{}`. Access via `response.turn.messages` and `response.turn.usage`.
### Fixed
- OpenAI Completions dialect — tool use name lost when provider sends `id` on every SSE chunk (affected Kimi models via OpenCode/Fireworks).
## 1.0.0 - 2026-03-06
Complete rewrite of Omni as a production-ready, multi-provider LLM client for Elixir.
### Added
- Text generation — `generate_text/3` and `stream_text/3` top-level API.
- Streaming — `StreamingResponse` implementing `Enumerable` with composable event handlers, text stream extraction, cancel support, and incomplete stream detection.
- Tool use — define tools with JSON schemas and handlers; automatic execution loop runs tools in parallel and feeds results back to the model.
- Agents — `Omni.Agent` GenServer for stateful multi-turn conversations with lifecycle callbacks, tool approval flow, pause/resume, and prompt queuing/steering.
- Structured output — constrained decoding via JSON Schema with per-dialect wire format handling and automatic validation/retry.
- Providers — behaviour-based provider system with six built-in providers: Anthropic, OpenAI, Google Gemini, OpenRouter, Ollama, OpenCode Zen. Multi-dialect provider support for gateways that serve models with different wire formats.
- Dialects — wire format translation separated from provider identity: Anthropic Messages, OpenAI Chat Completions, OpenAI Responses, Google Gemini, Ollama Chat.
- Model catalog — hundreds of models loaded from bundled JSON data (sourced from models.dev) at startup.
- Messages and content — two-role message model (`:user`, `:assistant`) with typed content blocks: `Text`, `Thinking`, `Attachment`, `ToolUse`, `ToolResult`.
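As an illustration of the top-level API, a hedged sketch: only the arities `generate_text/3` and `stream_text/3` come from this entry; the `Omni` module name as call site, the model-reference shape, and the option names are assumptions.

```elixir
# Blocking call — sketch; argument shapes are assumptions.
{:ok, response} =
  Omni.generate_text({:anthropic, "claude-opus-4-7"}, "Say hello.", max_tokens: 64)

# Streaming — stream_text/3 yields a StreamingResponse, which is Enumerable,
# so it composes with Enum/Stream.
{:ok, stream} = Omni.stream_text({:anthropic, "claude-opus-4-7"}, "Say hello.", [])
Enum.each(stream, &IO.inspect/1)
```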
Versions 0.1.0 and 0.1.1, released in 2024, were early prototypes with a different architecture. Version 1.0 is a complete rewrite and is not compatible with 0.1.x.