ExLLM.Retry (ex_llm v0.5.0)


Request retry logic with exponential backoff for ExLLM.

Provides configurable retry mechanisms for failed API requests with support for different backoff strategies and retry conditions.

Features

  • Exponential backoff with jitter
  • Configurable retry conditions
  • Per-provider retry policies
  • Circuit breaker pattern
  • Request deduplication

Usage

# With default retry policy
ExLLM.Retry.with_retry fn ->
  ExLLM.chat(:openai, messages)
end

# With custom retry options
ExLLM.Retry.with_retry(
  fn -> ExLLM.chat(:anthropic, messages) end,
  max_attempts: 5,
  base_delay: 1000,
  max_delay: 30_000,
  jitter: true
)

Summary

Functions

calculate_delay(attempt, policy)

Calculates the delay before the next retry attempt.

get_provider_policy(provider)

Default retry policies for providers.

should_retry?(error, attempt, policy)

Checks if an error should trigger a retry.

with_provider_retry(provider, fun, opts \\ [])

Executes a function with retry logic for a specific provider.

with_retry(fun, opts \\ [])

Executes a function with retry logic.

Functions

calculate_delay(attempt, policy)

Calculates the delay before the next retry attempt.
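Given the options listed below, the delay presumably follows the standard exponential-backoff formula `base_delay * multiplier^(attempt - 1)`, capped at `max_delay`, with optional jitter. A minimal sketch; the policy-map shape here is an assumption, not ExLLM's actual struct:

```elixir
defmodule BackoffSketch do
  # Sketch of exponential backoff with jitter. Assumes a policy map with
  # :base_delay, :multiplier, :max_delay, and :jitter keys (hypothetical shape).
  def calculate_delay(attempt, policy) do
    base = policy.base_delay * :math.pow(policy.multiplier, attempt - 1)
    capped = min(round(base), policy.max_delay)

    if policy.jitter do
      # Subtract up to 25% at random so concurrent clients spread out
      # instead of retrying in lockstep (thundering herd).
      trunc(capped * (0.75 + :rand.uniform() * 0.25))
    else
      capped
    end
  end
end
```

With `jitter: false`, attempts 1, 2, 3 at `base_delay: 1000` and `multiplier: 2` yield 1000, 2000, 4000 ms, and later attempts saturate at `max_delay`.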

get_provider_policy(provider)

Default retry policies for providers.
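Per-provider policies of this kind are typically a lookup keyed by provider atom with a shared default for unknown providers. A sketch with made-up values; the real defaults in ex_llm may differ:

```elixir
defmodule PolicySketch do
  # Hypothetical defaults and per-provider overrides; illustrative only.
  @default %{max_attempts: 3, base_delay: 1000, max_delay: 60_000,
             multiplier: 2, jitter: true}

  @policies %{
    openai: Map.put(@default, :max_attempts, 5),
    anthropic: Map.put(@default, :base_delay, 2000)
  }

  # Fall back to the shared default for providers without an override.
  def get_provider_policy(provider), do: Map.get(@policies, provider, @default)
end
```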

should_retry?(error, attempt, policy)

Checks if an error should trigger a retry.
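Retry predicates like this usually allow retries for rate limits (HTTP 429), server errors (5xx), and timeouts, and refuse once `max_attempts` is reached. A sketch; the error shapes and classification below are assumptions, not ExLLM's actual ones:

```elixir
defmodule RetryCheckSketch do
  # Never retry past the attempt budget.
  def should_retry?(_error, attempt, %{max_attempts: max}) when attempt >= max,
    do: false

  # Rate limits and server errors are transient; retry them.
  def should_retry?({:api_error, %{status: status}}, _attempt, _policy)
      when status == 429 or status >= 500,
      do: true

  # Timeouts are retryable; other errors (e.g. 4xx client errors) are not.
  def should_retry?(:timeout, _attempt, _policy), do: true
  def should_retry?(_error, _attempt, _policy), do: false
end
```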

with_provider_retry(provider, fun, opts \\ [])

Executes a function with retry logic for a specific provider.

with_retry(fun, opts \\ [])

Executes a function with retry logic.

Options

  • :max_attempts - Maximum number of attempts (default: 3)
  • :base_delay - Initial delay in milliseconds (default: 1000)
  • :max_delay - Maximum delay in milliseconds (default: 60_000)
  • :multiplier - Backoff multiplier (default: 2)
  • :jitter - Add random jitter to delays (default: true)
  • :retry_on - Function to determine if error is retryable
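Putting these options together, a retry loop of this shape can be sketched as follows. It assumes the wrapped function returns `{:ok, _}` or `{:error, _}` tuples; this is illustrative, not ExLLM's implementation:

```elixir
defmodule WithRetrySketch do
  # Retry loop sketch: reads the documented options, retries on {:error, _}
  # with exponential backoff, and gives up after :max_attempts.
  def with_retry(fun, opts \\ []) do
    max = Keyword.get(opts, :max_attempts, 3)
    base = Keyword.get(opts, :base_delay, 1000)
    mult = Keyword.get(opts, :multiplier, 2)
    max_delay = Keyword.get(opts, :max_delay, 60_000)
    attempt(fun, 1, max, base, mult, max_delay)
  end

  defp attempt(fun, n, max, base, mult, max_delay) do
    case fun.() do
      {:ok, _} = ok -> ok
      {:error, _} = err when n >= max -> err
      {:error, _} ->
        # Exponential backoff before the next attempt (jitter omitted here).
        delay = min(round(base * :math.pow(mult, n - 1)), max_delay)
        Process.sleep(delay)
        attempt(fun, n + 1, max, base, mult, max_delay)
    end
  end
end
```

A successful call returns immediately; a persistently failing call is invoked `max_attempts` times before the final `{:error, _}` is returned to the caller.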