Modules
Public facade for LlmCore capabilities.
Agent struct representing a registered LLM provider with a human-friendly name.
Enforces iteration budget limits.
Executes validated tool calls via the resolver function.
Builds the messages to append for the next LLM turn.
Determines whether the outer loop should continue or stop.
Extracts tool calls from the LLM response.
Validates parsed tool calls against available tool definitions.
Carries data through the agentic iteration pipeline.
Agentic tool-calling loop.
ALF pipeline for processing a single agentic loop iteration.
ALF pipeline for orchestrated tool dispatch.
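Taken together, the loop stages above (budget enforcement, tool-call extraction, execution via the resolver, message building, and the continue/stop decision) form one iteration. A minimal sketch of how they might compose, with illustrative struct fields and return shapes that are assumptions rather than the actual llm_core API:

```elixir
defmodule AgentLoopSketch do
  # Illustrative state only; the real loop threads a richer token struct
  # through the ALF stages listed above.
  defstruct messages: [], iteration: 0, budget: 5, resolver: nil

  # Budget stage: stop once the iteration limit is reached.
  def run(%__MODULE__{iteration: i, budget: b} = state, _llm_fun) when i >= b,
    do: {:halt, :budget_exhausted, state}

  # llm_fun.(messages) is assumed to return %{content: binary, tool_calls: list}.
  def run(%__MODULE__{} = state, llm_fun) do
    response = llm_fun.(state.messages)

    case response.tool_calls do
      [] ->
        # Continuation stage: no tool calls means the loop is done.
        {:done, response.content}

      calls ->
        # Execute each call via the resolver, append the results as tool
        # messages, and hand control back to the LLM for the next turn.
        results = Enum.map(calls, fn %{name: n, arguments: a} -> state.resolver.(n, a) end)
        tool_msgs = Enum.map(results, &%{role: "tool", content: inspect(&1)})

        run(%{state | messages: state.messages ++ tool_msgs, iteration: state.iteration + 1}, llm_fun)
    end
  end
end
```

Recursing with an explicit iteration counter keeps the budget check a pure function of the state rather than a side effect.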
GenServer for runtime agent management and discovery.
Evaluates the dispatch recipe to produce an execution plan.
Composer that collects parallel execution results back into a single event.
Formats collected results into a single consolidated output string.
Executes a tool call directly via the resolver function.
Executes a single parallel sub-tool call.
Executes serial tool call steps sequentially.
Composer that fans out one dispatch event into N parallel call events.
Determines the dispatch strategy for a tool call.
Event struct flowing through the ToolDispatch pipeline.
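The serial/parallel split above can be approximated with Task.async_stream. The real pipeline expresses the fan-out and collection as ALF composers, so this is only a sketch with assumed names:

```elixir
defmodule DispatchSketch do
  def dispatch(calls, resolver, strategy \\ :parallel)

  def dispatch(calls, resolver, :serial) do
    # Serial strategy: execute steps one after another.
    Enum.map(calls, fn {name, args} -> resolver.(name, args) end)
  end

  def dispatch(calls, resolver, :parallel) do
    # Parallel strategy: fan one event out into N tasks, then collect
    # the results back into a single consolidated list.
    calls
    |> Task.async_stream(fn {name, args} -> resolver.(name, args) end,
      timeout: 30_000,
      ordered: true
    )
    |> Enum.map(fn {:ok, result} -> result end)
  end
end
```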
Query surface for CLI-based LLM providers.
Helpers for reading and mutating llm_core.toml files.
Loads llm_core configuration files (routing, providers, etc.) from disk and updates the store.
Lightweight ETS-backed storage for runtime configuration.
Watches configuration directories for changes and triggers reloads.
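A sketch of the store's likely shape, assuming a named public ETS table; the table name and function names are illustrative, not llm_core's actual API:

```elixir
defmodule ConfigStoreSketch do
  @table :llm_core_config_sketch

  def init do
    # read_concurrency favors the read-heavy access pattern of runtime config.
    :ets.new(@table, [:named_table, :set, :public, read_concurrency: true])
  end

  def put(key, value), do: :ets.insert(@table, {key, value})

  def get(key, default \\ nil) do
    case :ets.lookup(@table, key) do
      [{^key, value}] -> value
      [] -> default
    end
  end
end
```

The loader and watcher above would write through `put/2` whenever a file changes, while callers read with `get/2` at no coordination cost.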
Minimal execution control registry used by CLI providers to support HALT semantics.
Anthropic Claude API provider implementing LlmCore.LLM.Provider.
Generic local inference appliance provider (DGX Spark, future devices).
Helpers for running CLI-based LLM providers via Port.
Universal CLI-based LLM provider.
Configuration for a CLI-based LLM provider.
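A sketch of the Port mechanics behind a CLI provider, with a placeholder executable, arguments, and timeout. Real CLI handling also needs stdin close semantics and the HALT support mentioned above:

```elixir
defmodule CliRunnerSketch do
  def run(executable, args, input) do
    port =
      Port.open({:spawn_executable, executable}, [
        :binary,
        :exit_status,
        args: args
      ])

    # Send the prompt on stdin; many CLIs also need stdin closed before
    # they respond, which real code handles per tool.
    Port.command(port, input)
    collect(port, [])
  end

  defp collect(port, acc) do
    receive do
      {^port, {:data, chunk}} -> collect(port, [acc | chunk])
      {^port, {:exit_status, 0}} -> {:ok, IO.iodata_to_binary(acc)}
      {^port, {:exit_status, code}} -> {:error, {:exit, code, IO.iodata_to_binary(acc)}}
    after
      60_000 ->
        Port.close(port)
        {:error, :timeout}
    end
  end
end
```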
Standardized error struct for all LLM providers.
Normalizes prompts into the chat message format used by API providers.
In-process agentic provider — runs the agent loop inside the BEAM VM.
Config-driven provider resolution for the Native agentic loop.
Ollama provider implementing the LlmCore.LLM.Provider behaviour.
OpenAI-compatible API provider implementing the Provider behaviour.
Behaviour module defining the contract for LLM providers.
Standardized response struct for all LLM providers.
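A plausible shape for the provider contract and the shared structs, assuming callback names and fields that the summaries imply but do not confirm:

```elixir
defmodule ProviderSketch do
  # Assumed callbacks; the real behaviour's contract may differ.
  @type response :: %{text: binary, tool_calls: list, usage: map}

  @callback generate(prompt :: term, opts :: keyword) :: {:ok, response} | {:error, term}
  @callback stream(prompt :: term, opts :: keyword) :: {:ok, Enumerable.t()} | {:error, term}
end

defmodule ProviderErrorSketch do
  # A single error shape across providers keeps retry and fallback logic uniform.
  defexception [:provider, :reason, :status, message: "provider error"]
end
```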
Helper module for parsing Server-Sent Events (SSE) from LLM APIs.
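SSE frames are newline-delimited: events are separated by a blank line and payloads arrive on `data:` lines, with `[DONE]` as the usual end sentinel. A minimal parser sketch (a real parser must also buffer partial frames across chunk boundaries):

```elixir
defmodule SseSketch do
  def parse(chunk) do
    chunk
    |> String.split("\n\n", trim: true)
    |> Enum.flat_map(&event_data/1)
  end

  defp event_data(event) do
    # The pattern in the generator drops non-data lines such as "event:" and ":".
    for "data: " <> payload <- String.split(event, "\n", trim: true),
        payload != "[DONE]" do
      payload
    end
  end
end
```

For example, `SseSketch.parse("data: {\"a\":1}\n\ndata: [DONE]\n\n")` returns `["{\"a\":1}"]`.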
Hindsight 0.4+ integration for semantic memory capabilities.
Smart caching layer for Hindsight operations.
Circuit breaker pattern for Hindsight MCP connections.
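A sketch of the closed/open/half-open state machine behind this pattern; the thresholds and cooldowns are invented defaults:

```elixir
defmodule BreakerSketch do
  defstruct state: :closed, failures: 0, threshold: 5, opened_at: nil, cooldown_ms: 30_000

  def allow?(%__MODULE__{state: :closed}), do: true
  def allow?(%__MODULE__{state: :half_open}), do: true

  def allow?(%__MODULE__{state: :open, opened_at: t, cooldown_ms: cd}) do
    # After the cooldown elapses, let one probe call through (half-open).
    System.monotonic_time(:millisecond) - t >= cd
  end

  def record(%__MODULE__{} = b, :ok), do: %{b | state: :closed, failures: 0}

  def record(%__MODULE__{failures: f, threshold: th} = b, :error) when f + 1 >= th do
    %{b | state: :open, failures: f + 1, opened_at: System.monotonic_time(:millisecond)}
  end

  def record(%__MODULE__{failures: f} = b, :error), do: %{b | failures: f + 1}
end
```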
Hindsight-specific configuration with multi-level precedence.
Auto-discovery of Hindsight API endpoints.
Retry logic with exponential backoff for Hindsight operations.
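A sketch of the backoff pattern, with illustrative attempt counts and delays:

```elixir
defmodule RetrySketch do
  def with_backoff(fun, attempt \\ 1, max_attempts \\ 4, base_ms \\ 100) do
    case fun.() do
      {:ok, _} = ok ->
        ok

      {:error, _} = err when attempt >= max_attempts ->
        err

      {:error, _} ->
        # Delay doubles each attempt (100ms, 200ms, 400ms, ...) plus jitter,
        # so concurrent retries do not hammer the endpoint in lockstep.
        delay = base_ms * Integer.pow(2, attempt - 1) + :rand.uniform(50)
        Process.sleep(delay)
        with_backoff(fun, attempt + 1, max_attempts, base_ms)
    end
  end
end
```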
Supervisor for Hindsight MCP integration components.
Write-behind buffer for Hindsight retain operations.
Cross-project path helpers for llm_core.
ALF pipeline that normalizes a request, resolves routing, and dispatches it to the selected provider using either blocking or streaming mode.
Carries inference data through the ALF inference pipeline.
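A plain-function approximation of the stage order (normalize, route, dispatch). The real module expresses this as an ALF pipeline, and every name below is an assumption:

```elixir
defmodule InferenceSketch do
  # Token struct and stage functions are assumptions mirroring the summaries.
  defstruct [:request, :agent, :mode, :result]

  def call(request, mode \\ :blocking) do
    %__MODULE__{request: request, mode: mode}
    |> normalize()
    |> route()
    |> dispatch()
  end

  # Normalize: make sure the request has the fields later stages expect.
  defp normalize(t), do: %{t | request: Map.put_new(t.request, :messages, [])}

  # Route: resolve the task type to an agent (stubbed lookup).
  defp route(t), do: %{t | agent: %{provider: :stub, task: t.request[:task_type] || :default}}

  # Dispatch: blocking returns a result; streaming returns an enumerable.
  defp dispatch(%{mode: :blocking} = t), do: %{t | result: {:ok, "stub reply"}}
  defp dispatch(%{mode: :streaming} = t), do: %{t | result: {:stream, Stream.take(Stream.cycle(["chunk"]), 3)}}
end
```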
ALF pipeline orchestrating all Hindsight memory operations (retain, recall, reflect). It centralizes caching, circuit breaker gating, retries, and async buffering to match the architecture requirements of llm_core.
ALF pipeline that resolves task types to provider/agent configurations.
Internal state passed through the routing pipeline stages.
Normalized provider metadata loaded from TOML configuration and runtime discovery.
In-memory accessors for provider definitions loaded from TOML configuration.
GenServer that resolves task types to full LLM agent configurations.
Fully-realized routing decision containing the agent metadata.
Represents a routing rule entry mapping a task type to an agent alias.
In-memory representation of the routing rules loaded from TOML.
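A sketch of the resolution path from task type through rule entry to full agent metadata; the rule and agent tables are invented, since the real data comes from the loaded configuration:

```elixir
defmodule RoutingSketch do
  @rules %{coding: "agent_a", summarize: "agent_b"}
  @agents %{
    "agent_a" => %{provider: :anthropic, model: "model-a"},
    "agent_b" => %{provider: :ollama, model: "model-b"}
  }

  def resolve(task_type) do
    # Two lookups: task type -> agent alias, then alias -> agent metadata.
    with {:ok, agent_alias} <- Map.fetch(@rules, task_type),
         {:ok, agent} <- Map.fetch(@agents, agent_alias) do
      {:ok, Map.put(agent, :alias, agent_alias)}
    else
      :error -> {:error, {:no_route, task_type}}
    end
  end
end
```

For example, `RoutingSketch.resolve(:coding)` yields `{:ok, %{provider: :anthropic, model: "model-a", alias: "agent_a"}}`, while an unknown task type yields `{:error, {:no_route, ...}}`.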
Utilities for extracting structured data from LLM responses.
Placeholder adapter returned when the optional Instructor dependency is not available.
Helpers for working with providers that support JSON-mode outputs.
Normalizes schema declarations and validates decoded structured data.
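A sketch of JSON-mode extraction plus a minimal required-key check, assuming Jason as the JSON library and a deliberately simplified schema shape:

```elixir
defmodule StructuredSketch do
  def extract(raw, required_keys) do
    with {:ok, data} <- Jason.decode(raw),
         missing = required_keys -- Map.keys(data),
         [] <- missing do
      {:ok, data}
    else
      {:error, %Jason.DecodeError{}} -> {:error, :invalid_json}
      missing when is_list(missing) -> {:error, {:missing_keys, missing}}
    end
  end
end
```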
Helper utilities for emitting telemetry events from llm_core.
Simple telemetry handler that logs llm_core pipeline events.
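A sketch of emitting and logging one pipeline event with `:telemetry`; the event name and metadata keys are illustrative:

```elixir
defmodule TelemetrySketch do
  require Logger

  def attach do
    :telemetry.attach(
      "llm-core-log-sketch",
      [:llm_core, :inference, :stop],
      &__MODULE__.handle_event/4,
      nil
    )
  end

  def handle_event(event, measurements, metadata, _config) do
    Logger.info(
      "#{inspect(event)} duration=#{measurements[:duration]} provider=#{metadata[:provider]}"
    )
  end

  def emit_example do
    :telemetry.execute([:llm_core, :inference, :stop], %{duration: 1234}, %{provider: :demo})
  end
end
```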
Translates between provider-neutral tool structs and provider-specific wire formats.
Validates tool call arguments against a tool's JSON Schema parameters.
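A sketch of translating one neutral tool struct into the public OpenAI and Anthropic tool-calling wire shapes, plus a minimal required-argument check; the neutral field names are assumptions:

```elixir
defmodule ToolWireSketch do
  def to_openai(tool) do
    %{
      "type" => "function",
      "function" => %{
        "name" => tool.name,
        "description" => tool.description,
        "parameters" => tool.parameters
      }
    }
  end

  def to_anthropic(tool) do
    %{
      "name" => tool.name,
      "description" => tool.description,
      "input_schema" => tool.parameters
    }
  end

  # Minimal required-field check against the tool's JSON Schema; a real
  # validator would also enforce types, enums, and nested schemas.
  def validate_args(args, %{"required" => required}) do
    case required -- Map.keys(args) do
      [] -> :ok
      missing -> {:error, {:missing_arguments, missing}}
    end
  end

  def validate_args(_args, _schema), do: :ok
end
```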
Mix Tasks
Runs ALF routing and inference pipeline benchmarks.
Mutates llm_core.toml entries and reloads the runtime configuration.
Displays llm_core configuration loaded from llm_core.toml (merged with overrides).
Validates llm_core.toml configuration and prints provider availability.
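A sketch of how such a validation task is typically shaped; the task module name and the check it performs are hypothetical:

```elixir
defmodule Mix.Tasks.LlmCore.ValidateSketch do
  use Mix.Task

  @shortdoc "Validates an llm_core.toml file (sketch)"

  @impl Mix.Task
  def run(args) do
    path = List.first(args, "llm_core.toml")

    if File.exists?(path) do
      Mix.shell().info("#{path} found; a real task would parse and validate it here")
    else
      Mix.raise("config file not found: #{path}")
    end
  end
end
```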