Lotus.AI.Tool (Lotus v0.16.5)


Tool utilities for LLM interactions.

Provides two concerns:

  • from_action/2 — Converts a Lotus.AI.Action module into a ReqLLM.tool(), with support for binding context parameters.
  • run/4 — Runs the recursive tool-calling loop: sends messages to the LLM, executes tool calls, appends results, and repeats until the LLM produces a final text response or the iteration limit is reached.

Examples

# Build a tool from an action, hiding data_source from the LLM
tool = Tool.from_action(GetTableSchema, bind: %{data_source: "postgres"})

# Run the tool-calling loop
{:ok, response} = Tool.run("openai:gpt-4o", context, tools, api_key: "sk-...")

Summary

Functions

Converts an action module to a ReqLLM.tool().

Normalizes ReqLLM usage stats into a consistent format.

Runs the tool-calling loop.

Functions

from_action(action_module, opts \\ [])

@spec from_action(module(), keyword()) :: map()

Converts an action module to a ReqLLM.tool().

Options

  • :bind - Map of parameter values to inject into every call. Bound parameters are removed from the tool's parameter schema so the LLM doesn't see or fill them.
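The effect of :bind can be sketched as follows. The schema shape and variable names here are illustrative assumptions, not the actual Lotus implementation:

```elixir
# Sketch of :bind semantics (schema shape and names are assumptions,
# not Lotus internals).
schema = %{"table" => %{type: :string}, "data_source" => %{type: :string}}
bind = %{"data_source" => "postgres"}

# What the LLM sees: the parameter schema minus the bound keys.
visible_schema = Map.drop(schema, Map.keys(bind))

# What the action receives: the LLM's arguments merged with the bound values.
call_args = Map.merge(%{"table" => "users"}, bind)
```

Here the LLM is only asked to fill in "table"; "data_source" is injected on every call.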

normalize_usage(usage)

@spec normalize_usage(map() | nil) :: map()

Normalizes ReqLLM usage stats into a consistent format.

ReqLLM returns input_tokens/output_tokens, but Lotus uses prompt_tokens/completion_tokens for consistency with OpenAI conventions.
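The renaming can be sketched as below; any fields beyond the two documented token counts are an assumption about the real normalize_usage/1:

```elixir
# Minimal sketch of the documented key renaming (assumed shape; the
# real normalize_usage/1 may carry additional fields through).
normalize = fn
  nil -> %{}
  usage ->
    %{
      prompt_tokens: Map.get(usage, :input_tokens, 0),
      completion_tokens: Map.get(usage, :output_tokens, 0)
    }
end

normalize.(%{input_tokens: 12, output_tokens: 34})
# => %{prompt_tokens: 12, completion_tokens: 34}
```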

run(model, context, tools, opts \\ [])

@spec run(String.t(), struct(), [map()], keyword()) :: {:ok, struct()} | {:error, term()}

Runs the tool-calling loop.

Sends messages to the LLM with the given tools. If the LLM responds with tool calls, executes them, appends results to the context, and repeats. Stops when the LLM produces a text response or the iteration limit is reached.
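The control flow described above can be sketched with a stubbed LLM. Everything here (the stub, exec_tool/1, the context as a plain list) is a hypothetical simplification, not Lotus internals:

```elixir
# Runnable sketch of the tool-calling loop's control flow with a
# stubbed LLM (helper names are assumptions, not Lotus functions).
defmodule LoopSketch do
  # Stub LLM: asks for a tool twice, then returns a final text answer.
  def call_llm(_model, context, _tools, _opts) do
    rounds = Enum.count(context, &match?({:tool_result, _}, &1))
    if rounds < 2, do: %{tool_calls: [:get_schema]}, else: %{text: "done"}
  end

  def exec_tool(:get_schema), do: {:tool_result, "schema"}

  def run(model, context, tools, opts, iteration \\ 1) do
    max = Keyword.get(opts, :max_iterations, 10)
    response = call_llm(model, context, tools, opts)

    case response do
      # Tool calls requested and budget remains: execute, append, repeat.
      %{tool_calls: calls} when iteration < max ->
        results = Enum.map(calls, &exec_tool/1)
        run(model, context ++ results, tools, opts, iteration + 1)

      # Tool calls requested but the iteration limit is hit.
      %{tool_calls: _} ->
        on_max = Keyword.get(opts, :on_max_iterations, fn _ -> :warned end)
        on_max.(response)
        {:ok, response}

      # Final text response: stop.
      _ ->
        {:ok, response}
    end
  end
end

{:ok, %{text: "done"}} = LoopSketch.run("openai:gpt-4o", [], [], [])
```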

Options

  • :api_key (required) - API key for the LLM provider
  • :temperature - LLM temperature (default: 0.2)
  • :max_iterations - Maximum number of tool-call rounds (default: 10)
  • :on_max_iterations - Callback (response -> term) called when limit is hit. Defaults to logging a warning.

Returns

  • {:ok, response} - Final LLM response
  • {:error, reason} - LLM API error