This guide walks through installing ExAtlas, configuring a provider, and spawning your first GPU pod.
1. Add the dependency
```elixir
# mix.exs
def deps do
  [
    {:ex_atlas, "~> 0.1"}
  ]
end
```

For the orchestrator and LiveDashboard features, also add:
```elixir
{:phoenix_pubsub, "~> 2.1"},
{:phoenix_live_dashboard, "~> 0.8"} # optional
```

Run `mix deps.get`.
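To confirm the dependency resolved, one quick check from `iex -S mix` (a sketch; the exact version returned depends on which release `~> 0.1` resolves to):

```elixir
# In iex -S mix: confirm the :ex_atlas application compiled and is loadable.
# Returns the version as a charlist, or nil if the app is missing.
Application.spec(:ex_atlas, :vsn)
```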
2. Configure a provider
```elixir
# config/config.exs
config :ex_atlas, default_provider: :runpod
config :ex_atlas, :runpod, api_key: System.get_env("RUNPOD_API_KEY")

# Opt-in orchestrator (one-GenServer-per-pod supervision tree)
config :ex_atlas, start_orchestrator: true
```

Resolution order for the API key:
1. The per-call `api_key:` option.
2. `config :ex_atlas, :runpod, api_key: ...`.
3. The `RUNPOD_API_KEY` environment variable.
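The per-call override can be sketched like this (assuming `spawn_compute/1` accepts `api_key:` alongside its other options; the second credential name is hypothetical):

```elixir
# Hedged sketch: a per-call key takes precedence over both the application
# config and the RUNPOD_API_KEY environment variable.
{:ok, compute} =
  ExAtlas.spawn_compute(
    gpu: :h100,
    image: "pytorch/pytorch:2.5.0-cuda12.1-cudnn9-runtime",
    api_key: System.fetch_env!("RUNPOD_STAGING_KEY") # hypothetical second credential
  )
```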
3. Spawn a pod
```elixir
{:ok, compute} =
  ExAtlas.spawn_compute(
    gpu: :h100,
    image: "pytorch/pytorch:2.5.0-cuda12.1-cudnn9-runtime",
    ports: [{8000, :http}],
    cloud_type: :secure,
    auth: :bearer
  )

compute.id         # "pod_abc123"
compute.status     # :running
compute.ports      # [%{internal: 8000, external: nil, protocol: :http,
                   #    url: "https://pod_abc123-8000.proxy.runpod.net"}]
compute.auth.token # preshared key, handed to the pod as the ATLAS_PRESHARED_KEY env var
```

4. Terminate
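GPU pods typically bill for as long as they run, so it is worth guaranteeing cleanup even when the workload crashes. A minimal sketch using only the calls above (`do_work/2` is a hypothetical workload function that hits the proxied port with the preshared key):

```elixir
[%{url: base_url} | _] = compute.ports
headers = [{"authorization", "Bearer " <> compute.auth.token}]

try do
  # do_work/2 is hypothetical: e.g. it POSTs inference requests to base_url,
  # authenticated with the pod's preshared bearer token.
  do_work(base_url, headers)
after
  # Runs whether do_work/2 returns or raises, so the pod never leaks.
  :ok = ExAtlas.terminate(compute.id)
end
```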
```elixir
:ok = ExAtlas.terminate(compute.id)
```

5. Next steps
- Transient per-user pods — the production pattern.
- Writing a provider — implementing `ExAtlas.Provider` for your own cloud.
- Telemetry — wiring the emitted events into Grafana, StatsD, etc.
- Testing — conformance suite + Mock provider.