Render a list of conversation turns into an episode body suitable for ingestion into the knowledge graph.
Each turn that contains a "behaviour" message is distilled by the
configured LLM into a first-person, past-tense summary and rendered as
{agent_name}: (behaviour: {summary}) before the assistant text. Turns
without behaviour skip the LLM entirely.
Distillation is best-effort per turn: any failure (LLM error tuple or raised exception) drops that turn's behaviour line while preserving its user/assistant text, and the surrounding turns still produce output.
Turns with behaviour are distilled in parallel via Task.async_stream.
See ex-format-transcript in gralkor/TEST_TREES.md.
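The parallel, best-effort strategy described above can be sketched as follows. This is an illustrative sketch, not the module's real internals: the map-based message shape (`:role`/`:content` keys), the helper names, and the exact error handling are assumptions.

```elixir
defmodule DistillSketch do
  # Distill each behaviour-bearing turn concurrently. Any failure (a raise
  # inside distill_fn, a non-:ok result, or a task timeout) yields nil, so
  # the caller simply omits that turn's behaviour line.
  def distill_all(turns, distill_fn) do
    summaries =
      turns
      |> Task.async_stream(
        fn turn ->
          with text when is_binary(text) <- behaviour_text(turn),
               {:ok, summary} <- safe_distill(distill_fn, text) do
            summary
          else
            _ -> nil
          end
        end,
        on_timeout: :kill_task
      )
      |> Enum.map(fn
        {:ok, summary} -> summary
        {:exit, _reason} -> nil
      end)

    # async_stream preserves order by default, so zipping is safe.
    Enum.zip(turns, summaries)
  end

  # Catch exceptions raised by the LLM caller inside the task,
  # so one bad turn cannot crash the whole stream.
  defp safe_distill(fun, text) do
    {:ok, fun.(text)}
  rescue
    _ -> :error
  end

  # Find the first behaviour message's content in a turn, if any.
  defp behaviour_text(turn) do
    Enum.find_value(turn, fn
      %{role: :behaviour, content: content} -> content
      _ -> nil
    end)
  end
end
```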
Summary
Functions
Schema for the structured-output response the LLM returns when distilling a behaviour-containing turn.
Render turns (a list of turns, where each turn is a list of canonical Messages)
into the episode body string.
Types
@type turn() :: [Gralkor.Message.t()]
Functions
@spec distill_schema() :: keyword()
Schema for the structured-output response the LLM returns when distilling a behaviour-containing turn.
@spec format_transcript([turn()], distill_fn() | nil, String.t(), String.t()) :: String.t()
Render turns (a list of turns, where each turn is a list of canonical Messages)
into the episode body string.
distill_fn is the LLM caller used to summarise behaviour messages. Pass
nil to skip distillation entirely (behaviour lines are silently omitted).
agent_name is required and must be non-blank; it labels assistant and
behaviour lines (e.g. "Susu: hello", "Susu: (behaviour: thought)").
user_name is required and must be non-blank; it labels user lines
(e.g. "Eli: hi"). The rendered transcript is fed to graphiti's entity
extraction, and a generic "User:" label would collapse every user across
the deployment into a single graph node, destroying graph quality. Every
consumer must name the human.
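The labelling convention these naming rules imply can be sketched as below. The line shapes come straight from the examples in this doc; the helper names and map-based message shape are illustrative assumptions, not the module's real internals.

```elixir
defmodule LabelSketch do
  # One rendered line per message, using the caller-supplied names:
  # user messages get user_name, assistant messages get agent_name.
  def label(%{role: :user, content: c}, _agent_name, user_name),
    do: "#{user_name}: #{c}"

  def label(%{role: :assistant, content: c}, agent_name, _user_name),
    do: "#{agent_name}: #{c}"

  # Distilled behaviour summaries render before the assistant text.
  def behaviour_line(summary, agent_name),
    do: "#{agent_name}: (behaviour: #{summary})"
end
```

For example, `LabelSketch.label(%{role: :user, content: "hi"}, "Susu", "Eli")` produces `"Eli: hi"`, matching the example above.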