Ragex.AI.Features.Cache
(Ragex v0.14.0)
Feature-aware wrapper around Ragex.AI.Cache.
Provides convenience functions that automatically use feature-specific cache TTLs and configuration from Features.Config.
Usage
alias Ragex.AI.Features.Cache
# Get cached response for validation errors
case Cache.get(:validation_error_explanation, error, context) do
{:ok, response} -> response
{:error, :not_found} ->
# Generate and cache
response = generate_ai_response(...)
Cache.put(:validation_error_explanation, error, context, response)
response
end
# Or use fetch! helper
response = Cache.fetch!(:refactor_preview_commentary, params, context, fn ->
generate_ai_response(...)
end)
Summary
Functions
Clear cache for a specific feature.
Check if caching is enabled for a specific feature.
Fetch from cache or generate if not found (with error handling).
Fetch from cache or generate if not found.
Get a cached AI response for a specific feature.
Store an AI response in the cache for a specific feature.
Get cache statistics for all features.
Warm up the cache with pre-computed responses.
Types
@type cache_result() :: {:ok, any()} | {:error, :not_found}
@type feature() :: Ragex.AI.Features.Config.feature()
Functions
@spec clear(feature()) :: :ok
Clear cache for a specific feature.
Note: Currently clears the entire AI cache. Future versions may implement per-feature cache partitioning.
Parameters
- feature - Feature identifier atom
Returns
:ok
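Given the @spec clear(feature()) :: :ok above, a minimal usage sketch:

```elixir
alias Ragex.AI.Features.Cache

# Clears cached responses for this feature. Note: per the doc above,
# this currently clears the entire AI cache, not just one feature.
:ok = Cache.clear(:validation_error_explanation)
```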
Check if caching is enabled for a specific feature.
Takes into account:
- Global AI cache enabled flag
- Feature-specific enabled flag
- Per-call overrides
Parameters
- feature - Feature identifier atom
- opts - Options with potential overrides
Returns
- true if caching should be used
- false otherwise
Fetch from cache or generate if not found (with error handling).
Like fetch!/4 but propagates errors from the generator function.
Returns
- {:ok, response} on success (cached or generated)
- {:error, reason} if generation fails
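A sketch of the error-propagating variant. Assumption: the function is named fetch/4, mirroring fetch!/4 per Elixir's bang/non-bang naming convention; the extracted page omits this signature, so verify the name against the module source.

```elixir
alias Ragex.AI.Features.Cache

# Assumption: fetch/4 is the non-raising counterpart of fetch!/4.
case Cache.fetch(:validation_error_explanation, error, context, fn ->
       ValidationAI.generate_explanation(error, context)
     end) do
  {:ok, response} ->
    response

  {:error, reason} ->
    # Generation failed; fall back rather than raising
    {:fallback, "Could not generate explanation: #{inspect(reason)}"}
end
```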
Fetch from cache or generate if not found.
This is the recommended way to use the cache - it handles both retrieval and storage in one call.
Parameters
- feature - Feature identifier atom
- query - Query or input data
- context - Context map
- generator_fn - Function to call on a cache miss (arity 0)
- opts - Additional options
Returns
- Cached or freshly generated response
Examples
response = Cache.fetch!(
:validation_error_explanation,
error,
context,
fn -> ValidationAI.generate_explanation(error, context) end
)
@spec get(feature(), any(), any(), keyword()) :: cache_result()
Get a cached AI response for a specific feature.
Automatically uses feature-specific TTL and configuration.
Parameters
- feature - Feature identifier atom
- query - Query or input data
- context - Context map
- opts - Additional options (merged with feature config)
Returns
- {:ok, response} if cached
- {:error, :not_found} if not cached or expired
Store an AI response in the cache for a specific feature.
Automatically uses feature-specific TTL and configuration.
Parameters
- feature - Feature identifier atom
- query - Query or input data
- context - Context map
- response - Response to cache
- opts - Additional options
Returns
:ok
@spec stats() :: map()
Get cache statistics for all features.
Returns general cache stats plus per-feature breakdown if available.
Returns
- Map of statistics
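Given the @spec stats() :: map() above, a minimal call. The exact keys of the returned map are not documented on this page, so inspect the result rather than pattern-matching on assumed keys:

```elixir
alias Ragex.AI.Features.Cache

# Returns general cache stats plus a per-feature breakdown if available.
stats = Cache.stats()
IO.inspect(stats, label: "AI cache stats")
```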
Warm up the cache with pre-computed responses.
Useful for seeding the cache with known common patterns.
Parameters
- entries - List of {feature, query, context, response} tuples
Returns
:ok
Examples
Cache.warm_up([
{:validation_error_explanation, error1, context1, response1},
{:refactor_preview_commentary, params1, context1, response1}
])