In-process agentic provider — runs the agent loop inside the BEAM VM.
- Reads the agent .md system prompt
- Resolves an LLM API provider (Appliance → Z.ai → Anthropic)
- Calls LlmCore.Agent.Loop.run with LlmToolkit.CodeTools
- Returns a standard LlmCore.LLM.Response
Zero CLI cost. Uses whatever API provider is available.
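The loop above can be sketched in a few lines. This is illustrative only: the resolve_provider/1 helper and the exact LlmCore.Agent.Loop.run signature are assumptions made for the sketch, not this module's real internals.

```elixir
defmodule NativeSketch do
  # Hedged sketch of the in-process flow; not the real implementation.
  def send(task, opts) do
    # 1. Read the agent .md system prompt from disk.
    prompt = File.read!(Keyword.fetch!(opts, :system_prompt_file))

    # 2. Resolve an API provider in priority order (assumed helper).
    {provider, model} = resolve_provider(opts)

    # 3. Run the agent loop in-process with the code tools;
    #    the keyword names here are assumptions about Loop.run's options.
    LlmCore.Agent.Loop.run(task,
      system_prompt: prompt,
      tools: LlmToolkit.CodeTools,
      provider: provider,
      model: model
    )
  end

  # Stub standing in for the real resolution logic.
  def resolve_provider(opts) do
    {:appliance, Keyword.get(opts, :model, "qwen3-vl-32b-thinking")}
  end
end
```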
Provider Resolution Priority
1. Appliance (local, free) — Qwen and similar models on DGX Spark / LM Studio
2. Z.ai (plan-covered) — GLM-5.1, included in the coding plan
3. Anthropic (direct API) — Claude models, pay-per-token
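In candidate-tuple form (the {module, model, opts} shape used by try_cascade/2 in this module), the priority order might look like the list below. The module names, the Anthropic model name, and the base_url option are hypothetical stand-ins, not the actual configuration.

```elixir
# Hypothetical candidates mirroring the priority order above.
candidates = [
  # 1. Local appliance first (free); base_url is an illustrative option.
  {LlmCore.LLM.Appliance, "qwen3-vl-32b-thinking", base_url: "http://localhost:1234/v1"},
  # 2. Z.ai next (covered by the coding plan).
  {LlmCore.LLM.Zai, "glm-5.1", []},
  # 3. Anthropic last (pay-per-token); model name is illustrative.
  {LlmCore.LLM.Anthropic, "claude-sonnet", []}
]
```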
Usage
This provider implements LlmCore.LLM.Provider and is selected by
provider routing when provider: "native" is specified or when no
CLI providers are available.
{:ok, response} = LlmCore.LLM.Native.send(task,
  system_prompt_file: "/path/to/agent.md",
  cwd: "/path/to/project",
  model: "qwen3-vl-32b-thinking"
)
Functions
@spec try_cascade([{module(), String.t(), keyword()}], (tuple() -> term())) :: {:ok, LlmCore.LLM.Response.t(), [map()]} | {:error, term()}
Walk a list of {module, model, opts} candidates and invoke run_fn
on each until one succeeds.
- {:ok, response, messages} from run_fn → returned immediately.
- {:error, :max_iterations_reached} → returned immediately (a reasoning failure, not a provider outage — retrying elsewhere won't help).
- Any other {:error, reason} → logged, then the cascade advances to the next candidate.
- Empty candidate list → {:error, :no_provider_succeeded}.
Exposed for direct testing; production call sites go through send/2.
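The contract above can be exercised with a stub run_fn. To keep the example self-contained, a local cascade/2 reimplements the walk described above (minus logging) in place of try_cascade/2, and the FakeProvider modules are hypothetical.

```elixir
defmodule CascadeSketch do
  # Minimal sketch of the cascade contract; the real try_cascade/2 also
  # logs each failure before advancing.
  def cascade([], _run_fn), do: {:error, :no_provider_succeeded}

  def cascade([candidate | rest], run_fn) do
    case run_fn.(candidate) do
      {:ok, _resp, _msgs} = ok -> ok
      # Reasoning failure: returned as-is, no point retrying elsewhere.
      {:error, :max_iterations_reached} = err -> err
      # Any other error: advance to the next candidate.
      {:error, _reason} -> cascade(rest, run_fn)
    end
  end
end

candidates = [
  {FakeProvider.Down, "model-a", []},
  {FakeProvider.Up, "model-b", []}
]

# Stub run_fn: the first provider is unreachable, the second succeeds.
run_fn = fn
  {FakeProvider.Down, _model, _opts} -> {:error, :connection_refused}
  {FakeProvider.Up, _model, _opts} -> {:ok, %{text: "hi"}, []}
end

CascadeSketch.cascade(candidates, run_fn)
# → {:ok, %{text: "hi"}, []}
```

The first candidate's :connection_refused is swallowed and the cascade moves on; an empty list short-circuits to {:error, :no_provider_succeeded}.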