ExLLM.Testing.CachingInterceptor (ex_llm v0.8.1)
Interceptor module for automatically caching provider responses.
This module provides functions that wrap adapter calls and automatically cache their responses, so they can later be replayed in tests via the Mock adapter.
Usage
Environment-based Auto-caching
# Enable caching for all providers
export EX_LLM_CACHE_RESPONSES=true
export EX_LLM_CACHE_DIR="/path/to/cache"
# Normal ExLLM usage will automatically cache responses
{:ok, response} = ExLLM.chat(messages, provider: :openai)
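The same switches can be set from test code instead of the shell. A minimal sketch, assuming the interceptor reads these variables at call time (System.put_env/2 is standard Elixir; the variable names come from the example above, the cache path is arbitrary):

# e.g. in test/test_helper.exs
System.put_env("EX_LLM_CACHE_RESPONSES", "true")
System.put_env("EX_LLM_CACHE_DIR", "/tmp/ex_llm_cache")

# Subsequent calls are cached transparently
{:ok, response} = ExLLM.chat(messages, provider: :openai)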
Manual Caching Wrapper
# Wrap a specific call to cache its response
{:ok, response} = ExLLM.Testing.CachingInterceptor.with_caching(:openai, fn ->
ExLLM.Providers.OpenAI.chat(messages)
end)
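Because with_caching/2 wraps the call, the caller still receives the wrapped function's return value. A hedged sketch of handling both outcomes, under the assumption that errors pass through unchanged (and uncached):

case ExLLM.Testing.CachingInterceptor.with_caching(:openai, fn ->
       ExLLM.Providers.OpenAI.chat(messages)
     end) do
  {:ok, response} ->
    # The response is returned to the caller and written to the cache
    IO.inspect(response)

  {:error, reason} ->
    # Failures propagate unchanged
    IO.inspect(reason, label: "provider error")
end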
Batch Response Collection
# Collect responses for testing scenarios
ExLLM.Testing.CachingInterceptor.collect_test_responses(:openai, [
{[%{role: "user", content: "Hello"}], []},
{[%{role: "user", content: "What is 2+2?"}], [max_tokens: 10]},
{[%{role: "user", content: "Tell me a joke"}], [temperature: 0.8]}
])
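Each tuple pairs a messages list with per-call options, matching the {messages, options} shape above. A small sketch that cuts the repetition with a helper (build_case is hypothetical, not part of the library):

# Hypothetical helper to build {messages, options} test cases from plain prompts
build_case = fn prompt, opts ->
  {[%{role: "user", content: prompt}], opts}
end

ExLLM.Testing.CachingInterceptor.collect_test_responses(:openai, [
  build_case.("Hello", []),
  build_case.("What is 2+2?", max_tokens: 10)
])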
Summary
Functions
- Collects responses for common test scenarios.
- Creates a comprehensive test response collection for a provider.
- Enables automatic caching for a specific provider.
- Wraps an adapter call to automatically cache the response.
- Wraps a streaming adapter call to cache the complete stream.
Functions
Collects responses for common test scenarios.
Executes a list of {messages, options} test cases against the given provider and caches each response for later replay in tests.
Creates a comprehensive test response collection for a provider.
Enables automatic caching for a specific provider.
This modifies the adapter's behavior to automatically cache all responses.
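A hedged sketch of turning this on from test setup; the function name enable_auto_caching/1 is an assumption inferred from the summary above, not confirmed by the source:

# Hypothetical call shape; the actual function name and arity may differ
ExLLM.Testing.CachingInterceptor.enable_auto_caching(:openai)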
Wraps an adapter call to automatically cache the response.
Wraps a streaming adapter call to cache the complete stream.
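A hedged sketch of the streaming wrapper; the name with_streaming_cache and its argument shape are assumptions based on the summary, as is the provider's stream_chat call:

# Hypothetical call shape; actual name, arity, and chunk fields may differ
{:ok, stream} =
  ExLLM.Testing.CachingInterceptor.with_streaming_cache(:openai, fn ->
    ExLLM.Providers.OpenAI.stream_chat(messages)
  end)

# Chunks reach the caller while the complete stream is written to the cache
Enum.each(stream, fn chunk -> IO.write(chunk.content || "") end)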