Planck.AI (Planck.AI v0.1.0)

Typed LLM provider abstraction built on top of req_llm.

Planck.AI provides a provider-agnostic interface for streaming and completing LLM requests. It defines the canonical types (Model, Message, Context, Tool) and streaming event protocol (Stream) consumed by Planck.Agent.

Streaming

model = Planck.AI.list_models(:anthropic) |> hd()

context = %Planck.AI.Context{
  system: "You are a helpful assistant.",
  messages: [%Planck.AI.Message{role: :user, content: [{:text, "Hello!"}]}]
}

model
|> Planck.AI.stream(context, temperature: 0.7)
|> Enum.each(fn
  {:text_delta, text} -> IO.write(text)
  {:done, _} -> IO.puts("")
  _ -> :ok
end)

Completing

{:ok, message} = Planck.AI.complete(model, context)
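
To extract the response text, filter the message's content parts. This sketch assumes the assistant message's content uses the same {:text, binary} part shape as the user message in the streaming example; any other part types fall through the catch-all clause.

# Join the text parts of the response, ignoring other part types.
text =
  message.content
  |> Enum.flat_map(fn
    {:text, t} -> [t]
    _ -> []
  end)
  |> Enum.join()

IO.puts(text)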

Model catalog

Planck.AI.list_providers()
#=> [:anthropic, :openai, :google, :ollama, :llama_cpp]

Planck.AI.list_models(:anthropic)
#=> [%Planck.AI.Model{id: "claude-opus-4-5", ...}, ...]

{:ok, model} = Planck.AI.get_model(:anthropic, "claude-sonnet-4-6")

Summary

Functions

complete(model, context, opts \\ [])
Sends a request to the LLM and blocks until the full response is received.

get_model(provider, id, opts \\ [])
Looks up a model by provider and id.

list_models(provider, opts \\ [])
Returns all known models for a given provider.

list_providers()
Returns all supported provider atoms.

stream(model, context, opts \\ [])
Streams a request to the LLM, returning a lazy stream of StreamEvent tuples.

Functions

complete(model, context, opts \\ [])

@spec complete(Planck.AI.Model.t(), Planck.AI.Context.t(), keyword()) ::
  {:ok, Planck.AI.Message.t()} | {:error, term()}

Sends a request to the LLM and blocks until the full response is received.

Internally consumes stream/3 and assembles a Message from the events. Returns {:ok, message} on success or {:error, reason} on failure.

Examples

{:ok, %Planck.AI.Message{} = message} = Planck.AI.complete(model, context)
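
For intuition, a rough sketch of the text-only path (illustrative, not the actual implementation; the real assembly also handles tool calls and errors, and the :assistant role atom is an assumption):

complete_sketch = fn model, context, opts ->
  text =
    model
    |> Planck.AI.stream(context, opts)
    |> Enum.reduce([], fn
      # Accumulate text deltas as iodata; ignore all other events.
      {:text_delta, t}, acc -> [acc, t]
      _, acc -> acc
    end)
    |> IO.iodata_to_binary()

  {:ok, %Planck.AI.Message{role: :assistant, content: [{:text, text}]}}
end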

get_model(provider, id, opts \\ [])

@spec get_model(atom(), String.t(), keyword()) ::
  {:ok, Planck.AI.Model.t()} | {:error, :not_found}

Looks up a model by provider and id.

Returns {:ok, model} if found, {:error, :not_found} otherwise.

Examples

iex> Planck.AI.get_model(:anthropic, "claude-sonnet-4-6")
{:ok, %Planck.AI.Model{id: "claude-sonnet-4-6", ...}}

iex> Planck.AI.get_model(:anthropic, "does-not-exist")
{:error, :not_found}

iex> Planck.AI.get_model(:llama_cpp, "mistral-7b", base_url: "http://10.0.0.5:8080")
{:ok, %Planck.AI.Model{id: "mistral-7b", ...}}
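
A typical call site matches on both documented outcomes:

case Planck.AI.get_model(:anthropic, model_id) do
  {:ok, model} -> Planck.AI.complete(model, context)
  {:error, :not_found} -> {:error, "unknown model: #{model_id}"}
end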

list_models(provider, opts \\ [])

@spec list_models(atom(), keyword()) :: [Planck.AI.Model.t()]

Returns all known models for a given provider.

Cloud providers (:anthropic, :openai, :google) source their catalogs from LLMDB, a bundled snapshot loaded into :persistent_term on first call.

Local providers (:ollama, :llama_cpp) query the running server at call time. Pass base_url: in opts to target a non-default server address.

Examples

iex> Planck.AI.list_models(:anthropic)
[%Planck.AI.Model{provider: :anthropic, ...}, ...]
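
The cloud path follows the usual load-once :persistent_term pattern; a minimal sketch (LLMDB.load/0 is a hypothetical stand-in for whatever parses the bundled snapshot, and the key shape is illustrative):

defmodule CatalogSketch do
  # Illustrative only: load the snapshot on first call, then serve
  # every later call from :persistent_term without re-parsing.
  def models(provider) do
    snapshot =
      case :persistent_term.get({__MODULE__, :llmdb}, nil) do
        nil ->
          snapshot = LLMDB.load()
          :persistent_term.put({__MODULE__, :llmdb}, snapshot)
          snapshot

        snapshot ->
          snapshot
      end

    Map.get(snapshot, provider, [])
  end
end

For local providers, the catalog is whatever the server reports at call time, so base_url: selects which server to ask (11434 is Ollama's default port):

Planck.AI.list_models(:ollama, base_url: "http://localhost:11434")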

list_providers()

@spec list_providers() :: [atom()]

Returns all supported provider atoms.

Examples

iex> Planck.AI.list_providers()
[:anthropic, :openai, :google, :ollama, :llama_cpp]

stream(model, context, opts \\ [])

Streams a request to the LLM, returning a lazy stream of StreamEvent tuples.

Keyword opts (e.g. temperature: 0.7, max_tokens: 2048) are forwarded directly to req_llm, which handles per-provider parameter translation.

Examples

model
|> Planck.AI.stream(context, temperature: 1.0)
|> Enum.each(fn
  {:text_delta, t} -> IO.write(t)
  _ -> :ok
end)
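
Since the return value is lazy, it composes with Stream functions before anything is forced; for example, client-side truncation after a fixed number of text deltas (a sketch; halting early typically stops consuming the underlying response):

model
|> Planck.AI.stream(context, max_tokens: 2048)
|> Stream.filter(&match?({:text_delta, _}, &1))
|> Stream.take(20)
|> Enum.each(fn {:text_delta, t} -> IO.write(t) end)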