Ragex.Agent.Core
(Ragex v0.13.0)
Main entry point for Ragex Agent operations.
Orchestrates the full project analysis pipeline:
1. Analyze project (build knowledge graph, embeddings)
2. Discover issues (dead code, duplicates, security, smells, complexity)
3. Generate AI-polished report (the AI may use Ragex MCP RAG tools for evidence)
4. Enable a conversation session for follow-up
Report generation and RAG
During step 3, the AI assistant is given access to a restricted set of
read-only Ragex MCP query tools (ToolSchema.rag_query_tools/1). This lets
the AI look up concrete code details — reading a flagged file, checking
coupling metrics, or finding callers of a complex function — and produce
evidence-based findings rather than relying solely on pre-computed statistics.
Heavy re-analysis tools are excluded so the pipeline cannot be re-triggered.
Usage
# Full project analysis with report
{:ok, result} = Agent.Core.analyze_project("/path/to/project")
# Skip report generation (e.g. before streaming it separately)
{:ok, result} = Agent.Core.analyze_project("/path/to/project", skip_report: true)
# Continue conversation (agent uses full tool set)
{:ok, response} = Agent.Core.chat(result.session_id, "Tell me more about the security issues")
# Get just the report
{:ok, report} = Agent.Core.get_report(result.session_id)
Summary
Functions
Analyze a project and generate an AI-polished report.
Continue a conversation with the agent in an existing session.
Clear/end a session.
Get the generated report from a session.
Get session details.
List all active agent sessions.
Quick analysis - runs all detectors without AI polishing.
Continue a conversation with streaming support.
Generate an AI audit report, optionally notifying a callback when ready.
Types
Functions
@spec analyze_project(String.t(), keyword()) :: {:ok, analysis_result()} | {:error, term()}
Analyze a project and generate an AI-polished report.
The AI report is generated using a restricted set of read-only Ragex MCP
query tools so the AI can retrieve concrete code evidence. Pass
skip_report: true to skip report generation (e.g. when you intend to
stream it later via stream_generate_report/3).
Parameters
path - Project root path
opts - Options:
- :provider - AI provider (:deepseek_r1, :openai, :anthropic, :ollama)
- :model - Model name override
- :include_suggestions - Include refactoring suggestions (default: true)
- :max_files - Maximum files to analyze (default: 500)
- :skip_embeddings - Skip embedding generation (default: false)
- :skip_report - Skip AI report generation (default: false)
- :include_dead_code - Enable dead code analysis (default: false)
- :exclude_patterns - Patterns to exclude (default: standard ignores)
Returns
{:ok, result} - Analysis completed with session ID and report
{:error, reason} - Analysis failed
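The options above can be combined in one call. A minimal sketch (the option values here are illustrative, not recommendations):

```elixir
alias Ragex.Agent.Core

# Analyze with a specific provider, a tighter file budget, and dead-code
# detection enabled; defer the AI report so it can be streamed later.
{:ok, result} =
  Core.analyze_project("/path/to/project",
    provider: :anthropic,
    max_files: 200,
    include_dead_code: true,
    skip_report: true
  )

# result.session_id can then be passed to chat/3, get_report/1,
# or stream_generate_report/3.
```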
Continue a conversation with the agent in an existing session.
Parameters
session_id - Active session ID
message - User message
opts - Options (same as analyze_project)
Returns
{:ok, response} - Agent response
{:error, reason} - Chat failed
@spec clear_session(String.t()) :: :ok
Clear/end a session.
Get the generated report from a session.
If the report has not yet been generated, it is generated on demand.
Get session details.
List all active agent sessions.
Quick analysis - runs all detectors without AI polishing.
Useful for programmatic access to raw issue data.
Continue a conversation with streaming support.
Same as chat/3 but streams the final AI response in real-time via callbacks.
Intermediate tool-call steps use blocking calls, but the final text response
is streamed chunk-by-chunk.
Additional Options
:on_chunk - (chunk -> :ok) callback for real-time content/thinking delivery
:on_phase - (:thinking | :answering | :done -> :ok) phase transition callback
:on_tool_progress - (map() -> :ok) callback when tools are being called
Returns
Same as chat/3.
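Callback wiring can be sketched as follows. This is a hedged example: the streaming function's exact name is not shown in this page, so stream_chat/3 is an assumption based on "same as chat/3"; the callback shapes follow the option list above.

```elixir
# Assumed name: stream_chat/3 (streaming variant of chat/3).
on_chunk = fn chunk -> IO.write(chunk) end
on_phase = fn phase -> IO.puts("\n[phase: #{phase}]") end
on_tool_progress = fn info -> IO.inspect(info, label: "tool call") end

{:ok, response} =
  Agent.Core.stream_chat(session_id, "Summarize the duplicate-code findings",
    on_chunk: on_chunk,
    on_phase: on_phase,
    on_tool_progress: on_tool_progress
  )
```

The final AI answer arrives chunk-by-chunk through :on_chunk; intermediate tool-call steps complete as blocking calls before streaming begins.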
@spec stream_generate_report(String.t(), map(), keyword()) :: {:ok, String.t(), map()} | {:error, term()}
Generate an AI audit report, optionally notifying a callback when ready.
Requires the knowledge graph and embeddings to be populated first
(call analyze_project/2 with skip_report: true).
The executor runs in blocking mode so that RAG tool calls (read_file,
semantic_search, hybrid_search, etc.) are executed correctly before the
final report is written. Streaming parsers drop tool_call deltas, so
a streaming executor would mis-identify preamble text as the final report
and never execute the tool calls.
The :on_chunk callback is fired once after the blocking run completes,
with the full report content, so callers can use it as a completion signal
(e.g. to stop a spinner).
Options
:on_chunk - (chunk -> :ok) completion callback, fired once with %{content: report_string} when the report is ready
:provider - AI provider override
:model - Model override
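The two-step flow described above can be sketched as follows. The second argument's shape is not documented here, so an empty map is used as a placeholder assumption:

```elixir
# Step 1: populate the knowledge graph and embeddings, deferring the report.
{:ok, result} = Agent.Core.analyze_project("/path/to/project", skip_report: true)

# Step 2: generate the report in blocking mode; :on_chunk fires once,
# with the full report content, as a completion signal.
on_chunk = fn %{content: _report} -> IO.puts("report ready") end

{:ok, report, _meta} =
  Agent.Core.stream_generate_report(result.session_id, %{}, on_chunk: on_chunk)
```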