LlmCore.LLM.Messages (llm_core v0.3.0)


Normalizes prompts into the chat message format used by API providers.

Handles standard roles (:system, :user, :assistant) and the :tool role natively — tool messages are preserved with their tool_call_id rather than being collapsed into :user messages. Provider adapters are responsible for any further format translation (e.g. Anthropic wraps tool results in user role with tool_result content blocks).
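For illustration, a normalized tool message and a possible adapter-side translation might look like the following. The map shapes are inferred from the description above; the Anthropic-style wrapping is a sketch of the kind of translation an adapter performs, not this library's actual output:

```elixir
# A tool result in the normalized format, preserved with its tool_call_id:
tool_msg = %{role: :tool, tool_call_id: "call_123", content: ~s({"temp": 21})}

# A provider adapter might translate it for Anthropic's API by wrapping the
# result in a :user message with a tool_result content block (hypothetical
# shape, shown only to illustrate the adapter's responsibility):
anthropic_msg = %{
  role: :user,
  content: [
    %{type: :tool_result, tool_use_id: tool_msg.tool_call_id, content: tool_msg.content}
  ]
}
```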

Summary

Functions

Normalizes prompts into the chat message format used by API providers.

Renders prompts for CLI providers.

Returns true if the given map looks like a valid chat message.

Functions

normalize_chat(prompt)

@spec normalize_chat(String.t() | [map()] | any()) :: [map()]

Normalizes prompts into the chat message format used by API providers.

Accepts a string, a list of message maps (atom or string keys), or any term that implements String.Chars. Tool-role messages retain their tool_call_id field when present.
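A minimal sketch of the normalization described above, under stated assumptions: strings become a single :user message, string keys are mapped to the four known role atoms, and any other term is coerced via String.Chars. The module name and internals here are illustrative, not the library's implementation:

```elixir
defmodule NormalizeSketch do
  @roles %{"system" => :system, "user" => :user, "assistant" => :assistant, "tool" => :tool}

  # A bare string becomes a single user message.
  def normalize_chat(prompt) when is_binary(prompt), do: [%{role: :user, content: prompt}]

  # A list of maps is normalized message by message.
  def normalize_chat(messages) when is_list(messages), do: Enum.map(messages, &normalize_message/1)

  # Any other term is coerced through String.Chars.
  def normalize_chat(other), do: [%{role: :user, content: to_string(other)}]

  # String-keyed maps: convert the role to its atom, keep tool_call_id if present.
  defp normalize_message(%{"role" => role, "content" => content} = msg) do
    base = %{role: Map.fetch!(@roles, role), content: content}

    case msg do
      %{"tool_call_id" => id} -> Map.put(base, :tool_call_id, id)
      _ -> base
    end
  end

  # Atom-keyed maps pass through, restricted to the known fields.
  defp normalize_message(%{role: _, content: _} = msg) do
    Map.take(msg, [:role, :content, :tool_call_id])
  end
end
```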

render_cli_prompt(prompt)

@spec render_cli_prompt(String.t() | [map()] | any()) :: String.t()

Renders prompts for CLI providers.
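One way such rendering could work is to flatten the message list into a single labeled string; the exact format below is an assumption for illustration, not the library's actual CLI output:

```elixir
defmodule RenderSketch do
  # A bare string passes through unchanged.
  def render_cli_prompt(prompt) when is_binary(prompt), do: prompt

  # Message lists are joined into one prompt string with role labels
  # (label format is an assumption made for this sketch).
  def render_cli_prompt(messages) when is_list(messages) do
    Enum.map_join(messages, "\n\n", fn %{role: role, content: content} ->
      "[#{role}]\n#{content}"
    end)
  end

  # Anything else is coerced through String.Chars.
  def render_cli_prompt(other), do: to_string(other)
end
```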

valid_message?(arg1)

@spec valid_message?(map()) :: boolean()

Returns true if the given map looks like a valid chat message.

Accepts both atom-keyed (%{role: :user, content: "..."}) and string-keyed (%{"role" => "user", "content" => "..."}) maps. Tool-role messages are accepted with or without a tool_call_id.
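The check described above might be sketched as follows. Assumptions beyond the doc: content is a binary, and only the four standard roles are accepted:

```elixir
defmodule ValidSketch do
  @atom_roles ~w(system user assistant tool)a
  @string_roles ~w(system user assistant tool)

  # Atom-keyed maps with a known role and string content pass;
  # tool_call_id is optional, so it is not checked here.
  def valid_message?(%{role: role, content: content})
      when role in @atom_roles and is_binary(content),
      do: true

  # String-keyed maps are accepted the same way.
  def valid_message?(%{"role" => role, "content" => content})
      when role in @string_roles and is_binary(content),
      do: true

  # Everything else is rejected.
  def valid_message?(_), do: false
end
```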