# arcanum v0.1.0 - Table of Contents

Provider-agnostic AI inference library for Elixir, with adapters for OpenAI-compatible APIs (DeepSeek, Z.AI/Zhipu, OpenRouter, Ollama, vLLM) and the Anthropic Messages API.

## Modules

- [Arcanum](Arcanum.md): Entry point for the inference layer.
- [Arcanum.Adapters.Anthropic](Arcanum.Adapters.Anthropic.md): Inference adapter for the Anthropic Messages API.
- [Arcanum.Adapters.Ollama](Arcanum.Adapters.Ollama.md): Inference adapter for Ollama.
- [Arcanum.Adapters.OpenAI](Arcanum.Adapters.OpenAI.md): Inference adapter for OpenAI-compatible APIs.
- [Arcanum.Auth.Copilot](Arcanum.Auth.Copilot.md): GitHub Copilot OAuth device code authentication.
- [Arcanum.EnsureModel](Arcanum.EnsureModel.md): Ensures a model is loaded with the correct configuration on local providers (LM Studio, Ollama, vLLM) before inference begins.
- [Arcanum.Gateway](Arcanum.Gateway.md): Single entry point for all inference calls.
- [Arcanum.Intent](Arcanum.Intent.md): Canonical request struct for inference calls.
- [Arcanum.ModelProfile](Arcanum.ModelProfile.md): Declares model capabilities upfront so adapters serialize correctly on the first attempt, with no runtime detection or retry-on-error branching.
- [Arcanum.ModelProfile.Registry](Arcanum.ModelProfile.Registry.md): Fetches and caches model capabilities from models.dev.
- [Arcanum.ModelProfile.Resolver](Arcanum.ModelProfile.Resolver.md): Resolves a `ModelProfile` for a given provider and model.
- [Arcanum.Probe](Arcanum.Probe.md): Probes inference providers to determine availability.
- [Arcanum.Provider](Arcanum.Provider.md): Behaviour for LLM inference providers.
- [Arcanum.Response](Arcanum.Response.md): Canonical response struct from inference calls.
- [Arcanum.Response.Normalizer](Arcanum.Response.Normalizer.md): Profile-driven post-processing of inference responses.