LlmCore.LLM.Native (llm_core v0.3.0)


In-process agentic provider — runs the agent loop inside the BEAM VM.

  1. Reads the agent .md system prompt
  2. Resolves an LLM API provider (Appliance → Zai → Anthropic)
  3. Calls LlmCore.Agent.Loop.run with LlmToolkit.CodeTools
  4. Returns a standard LlmCore.LLM.Response

Zero CLI cost. Uses whatever API provider is available.
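Sketched in Elixir, that flow might look like the following. Everything here is illustrative: resolve_candidates/0, the exact LlmCore.Agent.Loop.run signature, and the option keys are assumptions, not the module's actual internals.

defp run_agent(task, opts) do
  # 1. Read the agent .md system prompt (path supplied by the caller)
  system_prompt = File.read!(Keyword.fetch!(opts, :system_prompt_file))

  # 2. Resolve API providers in priority order (see the next section)
  candidates = resolve_candidates()

  # 3-4. Run the in-process agent loop against each candidate until one
  # succeeds, yielding a standard LlmCore.LLM.Response on success
  try_cascade(candidates, fn {module, model, provider_opts} ->
    LlmCore.Agent.Loop.run(
      task,
      Keyword.merge(provider_opts,
        provider: module,
        model: model,
        system_prompt: system_prompt,
        tools: LlmToolkit.CodeTools,
        cwd: Keyword.get(opts, :cwd)
      )
    )
  end)
end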

Provider Resolution Priority

  1. Appliance (local, free) — Qwen, etc. on DGX Spark / LM Studio
  2. Z.ai (plan-covered) — GLM-5.1, included in coding plan
  3. Anthropic (direct API) — Claude models, pay-per-token
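In code, that priority order amounts to an ordered candidate list handed to try_cascade/2 (documented below). A minimal sketch, assuming hypothetical provider module names; the Anthropic model string is likewise illustrative:

defp resolve_candidates do
  [
    # 1. Local appliance: free, runs on DGX Spark / LM Studio
    {LlmCore.LLM.Appliance, "qwen3-vl-32b-thinking", []},
    # 2. Z.ai: GLM-5.1, covered by the coding plan
    {LlmCore.LLM.Zai, "glm-5.1", []},
    # 3. Anthropic direct API: pay-per-token (model name illustrative)
    {LlmCore.LLM.Anthropic, "claude-sonnet-4-5", []}
  ]
end

Presumably the real resolver also skips candidates that are not configured or reachable, since the provider "uses whatever API provider is available".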

Usage

This provider implements the LlmCore.LLM.Provider behaviour and is selected by provider routing when provider: "native" is specified, or when no CLI providers are available.

{:ok, response} = LlmCore.LLM.Native.send(task,
  system_prompt_file: "/path/to/agent.md",
  cwd: "/path/to/project",
  model: "qwen3-vl-32b-thinking"
)

Summary

Functions

try_cascade(list, run_fn)

Walk a list of {module, model, opts} candidates and invoke run_fn on each until one succeeds.

Functions

try_cascade(list, run_fn)

@spec try_cascade([{module(), String.t(), keyword()}], (tuple() -> term())) ::
  {:ok, LlmCore.LLM.Response.t(), [map()]} | {:error, term()}

Walk a list of {module, model, opts} candidates and invoke run_fn on each until one succeeds.

  • {:ok, response, messages} from run_fn → returned immediately.
  • {:error, :max_iterations_reached} → returned immediately (reasoning failure, not a provider outage — retrying elsewhere won't help).
  • Any other {:error, reason} → logs and advances to the next candidate.
  • Empty list → {:error, :no_provider_succeeded}.

Exposed for direct testing; production call sites go through send/2.
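A direct test call might look like the following. The candidate modules and the stub_run/1 helper are hypothetical, and the code assumes LlmCore.LLM.Response is a struct; only the tuple shape and the return contract come from the spec above.

candidates = [
  {LlmCore.LLM.Appliance, "qwen3-vl-32b-thinking", []},
  {LlmCore.LLM.Zai, "glm-5.1", []}
]

run_fn = fn {_module, model, _opts} ->
  # Must return {:ok, response, messages} or {:error, reason};
  # stub_run/1 stands in for a real agent-loop invocation.
  stub_run(model)
end

case LlmCore.LLM.Native.try_cascade(candidates, run_fn) do
  {:ok, %LlmCore.LLM.Response{} = response, messages} ->
    {response, messages}

  {:error, :max_iterations_reached} ->
    # Reasoning failure: surfaced immediately, no fallback attempted
    :gave_up

  {:error, :no_provider_succeeded} ->
    # Every candidate failed (or the list was empty)
    :all_failed
end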