Filter retrieved graph facts down to those relevant to the conversation, using the configured LLM.
Two responsibilities, each its own tree:

- build_interpretation_context/3 — pure: assemble the LLM prompt from conversation messages and a formatted facts string, dropping the oldest messages until the prompt fits the configured char budget.
- interpret_facts/3 — call the LLM with that prompt and a structured-output schema; return the list of relevant facts the LLM selected.
See ex-interpret and ex-interpret-context in gralkor/TEST_TREES.md.
Summary

Functions

- build_interpretation_context/3 — Assemble the LLM prompt from conversation messages and the formatted facts.
- interpret_facts/3 — Run the LLM over the conversation context + facts text, returning the filtered list of relevant facts.
- interpret_schema/0 — Schema for the structured-output response the LLM returns.

Types

- interpret_fn()

Functions
@spec build_interpretation_context([Gralkor.Message.t()], String.t(), keyword()) :: String.t()
Assemble the LLM prompt from conversation messages and the formatted facts.
Drops the oldest messages until the assembled prompt fits the char budget (opts[:budget], default 8000).
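The drop-oldest-until-it-fits behavior can be sketched as follows. This is a minimal illustration, not the real implementation: the module name, the prompt wording, and the assumption that a Gralkor.Message exposes :role and :content fields are all hypothetical.

```elixir
defmodule InterpretContextSketch do
  @default_budget 8_000

  # Assemble the prompt, dropping the oldest messages until it fits
  # the char budget (opts[:budget], default 8000).
  def build_interpretation_context(messages, facts_text, opts \\ []) do
    budget = Keyword.get(opts, :budget, @default_budget)
    build(messages, facts_text, budget)
  end

  defp build(messages, facts_text, budget) do
    prompt = render(messages, facts_text)

    cond do
      String.length(prompt) <= budget -> prompt
      # Nothing left to drop: return the over-budget prompt as-is.
      messages == [] -> prompt
      # Drop the oldest message (head of the list) and retry.
      true -> build(tl(messages), facts_text, budget)
    end
  end

  defp render(messages, facts_text) do
    convo =
      Enum.map_join(messages, "\n", fn %{role: role, content: content} ->
        "#{role}: #{content}"
      end)

    "Conversation:\n#{convo}\n\nCandidate facts:\n#{facts_text}\n\nSelect the relevant facts."
  end
end
```

Because messages are ordered oldest-first, trimming the head preserves the most recent conversational context, which is usually what matters for relevance filtering.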
@spec interpret_facts([Gralkor.Message.t()], String.t(), interpret_fn(), keyword()) :: [String.t()]
Run the LLM over the conversation context + facts text, returning the filtered list of relevant facts.
Raises if the LLM call returns {:error, _} or a non-list response.
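The call-and-validate flow described above can be sketched like this. The module name and the assumed interpret_fn contract (a 1-arity function returning {:ok, list} or {:error, reason}) are illustrative assumptions; the real contract is given by the interpret_fn() typespec in this module.

```elixir
defmodule InterpretFactsSketch do
  # Hypothetical sketch: the real function builds the prompt via
  # build_interpretation_context/3; here we inline a trivial version.
  def interpret_facts(messages, facts_text, interpret_fn, opts \\ []) do
    _budget = Keyword.get(opts, :budget, 8_000)
    prompt = Enum.map_join(messages, "\n", & &1.content) <> "\n\n" <> facts_text

    case interpret_fn.(prompt) do
      # Happy path: the LLM returned a list of relevant facts.
      {:ok, facts} when is_list(facts) -> facts
      # Raise on an {:error, _} result from the LLM call.
      {:error, reason} -> raise "LLM call failed: #{inspect(reason)}"
      # Raise on any non-list response shape.
      other -> raise "expected a list of relevant facts, got: #{inspect(other)}"
    end
  end
end
```

Injecting the LLM call as a function argument keeps the tree testable: tests can pass a stub that returns canned {:ok, facts} or {:error, reason} values without touching the network.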
@spec interpret_schema() :: keyword()
Schema for the structured-output response the LLM returns.
Wired up by callers that drive interpret_facts/3 via req_llm:
schema = Gralkor.Interpret.interpret_schema()
{:ok, response} = ReqLLM.generate_object(model, prompt, schema)
ReqLLM.Response.object(response).relevantFacts
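For context, a schema returning a relevantFacts list might look like the sketch below. This is an assumption about shape only: the key names beyond relevantFacts, the NimbleOptions-style type syntax, and the module name are all hypothetical, and the actual definition lives in Gralkor.Interpret.interpret_schema/0.

```elixir
defmodule InterpretSchemaSketch do
  # Hypothetical keyword schema for the structured-output response;
  # relevantFacts is the field read off the response by callers.
  def interpret_schema do
    [
      relevantFacts: [
        type: {:list, :string},
        required: true,
        doc: "The subset of candidate facts relevant to the conversation."
      ]
    ]
  end
end
```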