# Internal Modules Guide

This document lists all internal modules in ExLLM that should NOT be used directly. These are implementation details subject to change without notice.
> **⚠️ Warning:** All modules listed here are internal to ExLLM. Always use the public API through the main `ExLLM` module instead.
## Core Internal Modules

### Infrastructure Layer

- `ExLLM.Infrastructure.Cache.*` - Internal caching implementation
- `ExLLM.Infrastructure.CircuitBreaker.*` - Fault tolerance internals
- `ExLLM.Infrastructure.Config.*` - Configuration management
- `ExLLM.Infrastructure.Logger` - Internal logging
- `ExLLM.Infrastructure.Retry` - Retry logic implementation
- `ExLLM.Infrastructure.Streaming.*` - Streaming implementation details
- `ExLLM.Infrastructure.Error` - Error structure definitions
### Provider Shared Utilities

- `ExLLM.Providers.Shared.ConfigHelper` - Provider config utilities
- `ExLLM.Providers.Shared.ErrorHandler` - Error handling
- `ExLLM.Providers.Shared.HTTPClient` - HTTP implementation
- `ExLLM.Providers.Shared.MessageFormatter` - Message formatting
- `ExLLM.Providers.Shared.ModelFetcher` - Model fetching logic
- `ExLLM.Providers.Shared.ModelUtils` - Model utilities
- `ExLLM.Providers.Shared.ResponseBuilder` - Response construction
- `ExLLM.Providers.Shared.StreamingBehavior` - Streaming behavior
- `ExLLM.Providers.Shared.StreamingCoordinator` - Stream coordination
- `ExLLM.Providers.Shared.Validation` - Input validation
- `ExLLM.Providers.Shared.VisionFormatter` - Vision formatting
### Provider Internals

- `ExLLM.Providers.Gemini.*` - Gemini-specific internals
- `ExLLM.Providers.Bumblebee.*` - Bumblebee internals
- `ExLLM.Providers.OpenAICompatible` - Base module for OpenAI-compatible providers
### Testing Infrastructure

- `ExLLM.Testing.Cache.*` - Test caching system
- `ExLLM.Testing.ResponseCache` - Response caching for tests
- All modules in `test/support/*` - Test helpers
## Why These Are Internal

- **Implementation Details**: These modules contain implementation-specific logic that may change between versions
- **No Stability Guarantees**: Internal APIs can change without deprecation notices
- **Complex Dependencies**: Many internal modules have complex interdependencies
- **Provider-Specific**: Provider internals are tailored to specific API requirements
## Migration Guide

If you're currently using any internal modules, here's how to migrate:
### Cache Access

```elixir
# ❌ Don't use internal cache modules
ExLLM.Infrastructure.Cache.get(key)

# ✅ Use the public API; caching is handled automatically by ExLLM
{:ok, response} = ExLLM.chat(:openai, "Hello")
```
### Error Handling

```elixir
# ❌ Don't create internal error types
ExLLM.Infrastructure.Error.api_error(500, "Error")

# ✅ Pattern match on public API return values.
# Always include an {:ok, ...} clause: a case with only an
# error clause raises CaseClauseError on success.
case ExLLM.chat(:openai, "Hello") do
  {:ok, response} ->
    # Use the response
    response

  {:error, {:api_error, status, message}} ->
    # Handle the error
    {status, message}
end
```
### Configuration

```elixir
# ❌ Don't access internal config modules
ExLLM.Infrastructure.Config.ModelConfig.get_model(:openai, "gpt-4")

# ✅ Use the public configuration API
{:ok, info} = ExLLM.get_model_info(:openai, "gpt-4")
```
### HTTP Requests

```elixir
# ❌ Don't use internal HTTP client
ExLLM.Providers.Shared.HTTPClient.post_json(url, body, headers)

# ✅ Use the public API, which handles HTTP internally
{:ok, response} = ExLLM.chat(:openai, "Hello")
```
### Provider Implementation

```elixir
# ❌ Don't call provider internals directly
ExLLM.Providers.Anthropic.chat(messages, options)

# ✅ Use the unified public API
{:ok, response} = ExLLM.chat(:anthropic, messages, options)
```
## For Library Contributors

If you're contributing to ExLLM:

- Keep internal modules marked with `@moduledoc false`
- Don't expose internal functions in the public API
- Add new public functionality to the main `ExLLM` module
- Document any new internal modules in this guide
- Ensure internal modules are properly namespaced
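
The `@moduledoc false` convention above can be illustrated with a minimal sketch (the module name `ExLLM.Infrastructure.Example` is hypothetical, not a real ExLLM module):

```elixir
defmodule ExLLM.Infrastructure.Example do
  # `@moduledoc false` hides this module from generated documentation,
  # signalling that it is internal and carries no stability guarantees.
  @moduledoc false

  # `@doc false` does the same for an individual public function.
  @doc false
  def helper(value), do: {:ok, value}
end
```

With `@moduledoc false` set, ExDoc omits the module from the published docs entirely, so users are never pointed at internal APIs in the first place.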
## Questions?

If you need functionality that's only available in internal modules, please:

1. Check whether the public API already provides it
2. Open an issue requesting the feature
3. Consider contributing a PR that exposes it properly through the public API