API Reference for agentic v0.2.2
Modules
Agentic — A composable AI agent runtime for Elixir.
Behaviour for agent communication protocols.
Behaviour for CLI-based local agent protocols.
Per-tool circuit breaker for agent tool execution.
Bounded concurrency semaphore using a GenServer.
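A bounded-concurrency semaphore on top of a GenServer can be sketched as below; the module name and the `try_acquire/1`/`release/1` functions are illustrative assumptions, not the library's actual API.

```elixir
# Hypothetical sketch: a GenServer that hands out a fixed number of slots.
defmodule Sem do
  use GenServer

  def start_link(limit), do: GenServer.start_link(__MODULE__, limit)

  # Non-blocking acquire: :ok if a slot is free, :busy otherwise.
  def try_acquire(pid), do: GenServer.call(pid, :try_acquire)
  def release(pid), do: GenServer.cast(pid, :release)

  @impl true
  def init(limit), do: {:ok, %{free: limit, limit: limit}}

  @impl true
  def handle_call(:try_acquire, _from, %{free: 0} = s), do: {:reply, :busy, s}
  def handle_call(:try_acquire, _from, s), do: {:reply, :ok, %{s | free: s.free - 1}}

  @impl true
  def handle_cast(:release, s), do: {:noreply, %{s | free: min(s.free + 1, s.limit)}}
end
```

Because all acquire/release traffic is serialized through one process mailbox, no extra locking is needed.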
Runtime config surface for agentic, loaded from
Application.get_all_env(:agentic).
Calculates LLM costs from token counts and model pricing data.
Top-level entry point for chat and embedding calls.
Unified model catalog backed by a GenServer.
Resolved credentials for a single provider.
Normalized error returned from a transport's parse_chat_response/3
(or surfaced from a transport network failure).
Unified error classification combining three sources
Generic pattern tables for classifying LLM provider errors from
response body text. Ported from openclaw's failover-matches.ts.
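A pattern-table classifier of this kind might look like the sketch below; the regexes and category atoms are invented for illustration, and the real tables ported from failover-matches.ts differ.

```elixir
# Illustrative sketch: scan a response body against a table of
# {regex, category} pairs and return the first matching class.
defmodule ErrorPatterns do
  @patterns [
    {~r/rate.?limit|too many requests/i, :rate_limited},
    {~r/insufficient (credit|quota)|billing/i, :quota_exhausted},
    {~r/context length|maximum .*tokens/i, :context_overflow},
    {~r/overloaded|capacity/i, :overloaded}
  ]

  def classify(body) do
    Enum.find_value(@patterns, :unknown, fn {re, class} ->
      if Regex.match?(re, body), do: class
    end)
  end
end
```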
Transparent LLM API proxy that sits between external coding agents (Claude Code, OpenCode, Codex, Kimi, etc.) and the actual LLM providers.
Shared struct describing a single LLM model.
Behaviour describing one LLM service provider.
Anthropic Messages API provider.
Groq provider — the Phase 2 forcing function.
Ollama provider — local-first chat and embeddings.
OpenAI Chat Completions provider.
OpenRouter provider.
Snapshot of rate-limit headers returned by a provider on the most
recent response. Transports populate whichever fields they can parse;
missing fields stay nil.
Normalized chat response shape produced by every transport.
Behaviour describing one wire-protocol family used to talk to LLM
providers. A transport is pure: it knows how to translate a
canonical request shape into an HTTP request and how to parse the
HTTP response back into the shared Agentic.LLM.Response /
Agentic.LLM.Error structs. It does not perform any network I/O,
does not look up credentials, and does not implement any
provider-specific business logic.
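The purity contract above suggests a behaviour along the following lines. Only `parse_chat_response/3` is named in this reference; the `build_chat_request` callback and all type shapes are assumptions for the sketch.

```elixir
# Hypothetical transport behaviour: pure translation in both directions,
# no network I/O, no credential lookup.
defmodule DemoTransport do
  @type request :: map()
  @type http_response :: %{status: non_neg_integer(), body: term()}

  # Canonical request -> HTTP method/path/body description.
  @callback build_chat_request(request(), keyword()) ::
              {:ok, %{method: atom(), path: String.t(), body: map()}} | {:error, term()}

  # Raw HTTP response -> normalized response or error struct.
  @callback parse_chat_response(http_response(), request(), keyword()) ::
              {:ok, map()} | {:error, map()}
end
```

Keeping transports pure means a provider module can pair any credential source with any wire format, and transports can be unit-tested with canned HTTP fixtures.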
Transport for the Anthropic Messages API
(POST {base_url}/messages).
Transport for the Ollama wire format
(POST {base_url}/api/chat, POST {base_url}/api/embed).
Transport for the OpenAI Chat Completions wire format
(POST {base_url}/chat/completions).
Provider quota / spend snapshot.
Periodically polls every enabled provider that implements
fetch_usage/1 and caches the latest snapshot. Worth's status
sidebar reads from this cache.
A single rate-limit / quota window for one provider. Anthropic has rolling 5-hour and 7-day windows; OpenRouter has a single credit pool; Groq has per-minute RPM caps. They all map to this struct.
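A unified window struct of this shape might be sketched as follows; the field names are assumptions, not the library's actual struct.

```elixir
# Sketch: one struct that can hold an Anthropic rolling window, an
# OpenRouter credit pool, or a Groq per-minute RPM cap.
defmodule Window do
  defstruct [:provider, :label, :used, :limit, :resets_at]

  # Fraction of the window consumed; nil when either side is unknown.
  def utilization(%__MODULE__{used: u, limit: l})
      when is_number(u) and is_number(l) and l > 0,
      do: u / l

  def utilization(_), do: nil
end
```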
Shared state threaded through all loop stages.
Two-tier context compression: truncation for moderate overflow, LLM-based summarization for severe overflow.
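The moderate-vs-severe split could be decided by a rule like the one below; the thresholds (100% and 150% of the token budget) are invented for the sketch.

```elixir
# Illustrative two-tier decision: cheap truncation for moderate overflow,
# an LLM summarization pass only for severe overflow.
defmodule Compaction do
  def strategy(used_tokens, budget) do
    cond do
      used_tokens <= budget -> :none
      used_tokens <= budget * 1.5 -> :truncate   # moderate overflow
      true -> :summarize                         # severe overflow
    end
  end
end
```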
Detects plan steps and completion signals in LLM text output.
Composable pipeline engine for agent loops.
Shared utility functions for pipeline stages.
Phase state machine with per-mode validated transitions.
Defines loop profiles -- named compositions of stages and config.
Behaviour for loop pipeline stages.
Executes agent prompts via ACP (Agent Client Protocol).
Executes agent prompts via CLI-based local agent protocol.
Intercepts unfulfilled commitments in agent responses.
Checks context window usage and triggers compaction if needed.
Human-in-the-loop yield stage for :turn_by_turn mode.
Makes the LLM API call and stores the response in context.
Mode-aware routing stage. Replaces StopReasonRouter.
Injects a structured plan-request prompt for :agentic_planned mode.
Tracks plan step completion for :agentic_planned mode.
Injects a system reminder after tool calls to prevent context drift.
Executes pending tool calls and re-enters the loop.
Records session events to a transcript backend for session resumption.
Post-execution verification stage for :agentic_planned mode.
Gathers workspace context and injects it into the conversation.
Detects unfulfilled action commitments in agent responses.
Per-workspace in-process working memory.
Fact extraction from tool results and LLM responses.
Retrieves relevant context from the Knowledge store before LLM calls.
Smart model routing for Agentic with two selection modes.
Analyzes a user request using a fast, ideally free model to determine complexity, required capabilities (vision, audio, reasoning, etc.), and context requirements.
Defines model selection preferences and the scoring logic for each.
Scores and ranks catalog models based on an Analyzer.analysis() result
and a user Preference.
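The capability-filter-then-rank idea can be sketched as below; the model map shape, the `prefer_cheap?` flag, and the sort keys are assumptions, not the real Preference scoring logic.

```elixir
# Minimal sketch: drop models missing any required capability, then
# order by cost (cheap-first) or by context size (large-first).
defmodule Score do
  def rank(models, required_caps, prefer_cheap? \\ true) do
    required = MapSet.new(required_caps)

    models
    |> Enum.filter(fn m -> MapSet.subset?(required, MapSet.new(m.caps)) end)
    |> Enum.sort_by(fn m -> if prefer_cheap?, do: m.cost_per_mtok, else: -m.context end)
  end
end
```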
Behaviour for knowledge storage — entries, edges, search, and supersession.
File-based knowledge backend.
Recollect-backed knowledge storage backend.
Behaviour for CRUD operations on structured plans with step-level status tracking.
JSON file-based plan backend.
Behaviour for append-only session event logging.
JSONL file-based transcript backend.
Defines transport types for agent communication.
Generic ACP (Agent Client Protocol) implementation.
JSON-RPC 2.0 client over stdio for ACP communication.
Auto-discovery of ACP-compatible agents on the system.
Bridges ACP permission requests to Agentic tool permission system.
Agent-specific quirks and workarounds for ACP implementations.
ACP session lifecycle management.
ACP (Agent Client Protocol) type definitions and conversions.
Claude Code CLI protocol implementation.
Codex CLI protocol implementation (one-shot mode).
Protocol-specific errors.
Raised when a requested protocol is not registered.
Raised when a protocol session encounters an error.
Raised when a CLI-based protocol binary is not found or not executable.
LLM protocol implementation that wraps existing callback-based LLM calls.
OpenCode CLI protocol implementation.
Registry for agent protocol implementations.
Validates that tool-requested paths stay within an explicit allowlist of roots.
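Allowlist validation of this kind typically expands both sides before comparing, so `..` traversal cannot escape a root. A minimal sketch (module and function names are hypothetical):

```elixir
# Sketch: a path is allowed only if, after expansion, it equals an
# allowlisted root or sits strictly underneath one.
defmodule PathGuard do
  def allowed?(path, roots) do
    expanded = Path.expand(path)

    Enum.any?(roots, fn root ->
      root = Path.expand(root)
      expanded == root or String.starts_with?(expanded, root <> "/")
    end)
  end
end
```

Appending `"/"` to the root before the prefix check prevents `/work/app2` from passing as a child of `/work/app`.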
OS-level sandbox capability detection.
Cross-platform sandbox wrapper for agent subprocesses.
Analyzes skill content to determine model tier requirements.
Manages bundled core skills that ship with every agent.
Parses SKILL.md files with YAML frontmatter and markdown body.
Workspace-scoped skill management.
Behaviour for storage backend implementations.
Bundles a storage backend module with its config for a specific workspace.
Local filesystem storage backend.
Behaviour for orchestration strategies.
Identity strategy. Passes opts through unchanged, matching current
Agentic.run/1 behavior exactly.
Experiment runner for head-to-head strategy comparison.
Process registry for strategy modules.
Per-workspace subagent coordinator.
Dynamic supervisor for per-workspace Coordinators.
Tool definition for delegating tasks to subagents.
Centralized telemetry helpers for Agentic.
GenServer that maintains running aggregates of orchestration telemetry events.
Tool definitions and execution for the agent loop.
Tracks which external tools are "activated" (promoted to first-class tool schemas) in the current agent session.
Gateway tools for lazy tool discovery and execution.
Memory tools for the agent: query knowledge store, write entries, and use in-process working memory (ContextKeeper).
Skill management tools for the agent: list, read, search, install, remove, analyze.
Workspace identity file detection and status.
Validates workspace paths are within the allowed base directory.
Manages workspace file structure and policy.
File templates for workspace initialization.
Mix Tasks
Sets up the Recollect test database for integration tests.