Tool utilities for LLM interactions.
Provides two concerns:
- `from_action/2` - Converts a `Lotus.AI.Action` module into a `ReqLLM.tool()`, with support for binding context parameters.
- `run/4` - Runs the recursive tool-calling loop: sends messages to the LLM, executes tool calls, appends results, and repeats until the LLM produces a final text response or the iteration limit is reached.
Examples
    # Build a tool from an action, hiding data_source from the LLM
    tool = Tool.from_action(GetTableSchema, bind: %{data_source: "postgres"})

    # Run the tool-calling loop
    {:ok, response} = Tool.run("openai:gpt-4o", context, tools, api_key: "sk-...")
Summary
Functions
Converts an action module to a ReqLLM.tool().
Normalizes ReqLLM usage stats into a consistent format.
Runs the tool-calling loop.
Functions
Converts an action module to a ReqLLM.tool().
Options
- `:bind` - Map of parameter values to inject into every call. Bound parameters are removed from the tool's parameter schema so the LLM doesn't see or fill them.
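As a rough sketch of what binding implies (the module and function names below are hypothetical illustrations, not Lotus internals): bound keys are dropped from the schema the LLM sees, then merged back into the arguments when the tool actually runs.

```elixir
# Hypothetical sketch of parameter binding; not the actual Lotus implementation.
defmodule BindSketch do
  # Drop bound keys from the parameter schema so the LLM never sees them.
  def visible_schema(schema, bind), do: Map.drop(schema, Map.keys(bind))

  # Merge bound values back in when the tool call is executed.
  def merged_args(llm_args, bind), do: Map.merge(llm_args, bind)
end

schema = %{data_source: :string, table: :string}
bind = %{data_source: "postgres"}

BindSketch.visible_schema(schema, bind)
# => %{table: :string}

BindSketch.merged_args(%{table: "users"}, bind)
# => %{table: "users", data_source: "postgres"}
```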
Normalizes ReqLLM usage stats into a consistent format.
ReqLLM returns `input_tokens`/`output_tokens`, but Lotus uses
`prompt_tokens`/`completion_tokens` for consistency with OpenAI conventions.
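A minimal sketch of that key mapping (the module name and the `total_tokens` fallback are assumptions for illustration, not the real implementation):

```elixir
# Hypothetical usage-normalization sketch following the key names above.
defmodule UsageSketch do
  def normalize(%{input_tokens: input, output_tokens: output} = usage) do
    %{
      prompt_tokens: input,
      completion_tokens: output,
      # Fall back to summing when the provider omits a total.
      total_tokens: Map.get(usage, :total_tokens, input + output)
    }
  end
end

UsageSketch.normalize(%{input_tokens: 120, output_tokens: 30})
# => %{prompt_tokens: 120, completion_tokens: 30, total_tokens: 150}
```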
Runs the tool-calling loop.
Sends messages to the LLM with the given tools. If the LLM responds with tool calls, executes them, appends results to the context, and repeats. Stops when the LLM produces a text response or the iteration limit is reached.
Options
- `:api_key` (required) - API key for the LLM provider
- `:temperature` - LLM temperature (default: `0.2`)
- `:max_iterations` - Maximum number of tool-call rounds (default: `10`)
- `:on_max_iterations` - Callback `(response -> term)` called when the limit is hit. Defaults to logging a warning.
Returns
- `{:ok, response}` - Final LLM response
- `{:error, reason}` - LLM API error
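The loop described above can be sketched as a recursive function. Everything here is an illustrative stand-in, assuming a made-up `call_llm/4` and response shape rather than the real ReqLLM API:

```elixir
# Skeleton of the tool-calling loop. call_llm/4, execute/1, and the response
# maps are stubs for illustration, not the real ReqLLM interface.
defmodule LoopSketch do
  def run(model, context, tools, opts \\ []) do
    max = Keyword.get(opts, :max_iterations, 10)
    loop(model, context, tools, opts, max)
  end

  # Iteration budget exhausted: stop with an error.
  defp loop(_model, _context, _tools, _opts, 0), do: {:error, :max_iterations}

  defp loop(model, context, tools, opts, remaining) do
    case call_llm(model, context, tools, opts) do
      {:ok, %{tool_calls: []} = response} ->
        # Final text response: stop and return it.
        {:ok, response}

      {:ok, %{tool_calls: calls}} ->
        # Execute each tool call, append results, and ask the LLM again.
        results = Enum.map(calls, &execute/1)
        loop(model, context ++ results, tools, opts, remaining - 1)

      {:error, reason} ->
        {:error, reason}
    end
  end

  # Stub LLM: requests a tool call once, then returns a final text response.
  defp call_llm(_model, context, _tools, _opts) do
    if Enum.any?(context, &match?({:tool_result, _}, &1)) do
      {:ok, %{tool_calls: [], text: "done"}}
    else
      {:ok, %{tool_calls: [:get_schema]}}
    end
  end

  defp execute(call), do: {:tool_result, call}
end

LoopSketch.run("openai:gpt-4o", [], [])
# => {:ok, %{tool_calls: [], text: "done"}}
```

The stubbed run takes two rounds: one tool-call round whose result is appended to the context, then a final text response.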