API Reference huggingface_client v0.1.0


Modules

Estimate depth from an image (monocular depth estimation).

Top-level public API for the HuggingFace Elixir client.

Agentic chat loop with tool use — an Elixir port of @huggingface/mcp-client Agent.

OTP Application supervisor for HuggingfaceClient.

Represents a configured HuggingFace Inference client.

HuggingFace Client configuration and environment variable management.

Base exception for all HuggingFace Inference errors.

Base for errors that carry HTTP request/response context. The Authorization header is automatically redacted.

Raised when the HTTP request to the Hugging Face Hub fails.

Raised when the caller supplies invalid arguments.

Raised when the HTTP request to the provider fails (4xx/5xx).

Raised when the provider returns a response that doesn't match expectations.

Raised when the inference request cannot be routed to any provider.

Agent-backed store for runtime dev overrides of provider/model mappings.

Top-level facade for all HuggingFace Hub operations.

Authentication and user info (/api/whoami-v2).

HuggingFace AutoTrain API — no-code/low-code model training.

HuggingFace Storage Buckets — S3-like object storage powered by Xet.

HuggingFace Hub local cache management.

Core HTTP client for HuggingFace Hub API interactions.

CRUD operations for HuggingFace Hub Collections.

High-level commit helpers: upload files, delete files.

HuggingFace Hub Jobs API — run Docker workloads on HF infrastructure.

HuggingFace Training Stack — configuration helpers for fine-tuning and training.

HuggingFace Dataset Viewer API.

HuggingFace Datasets Hub — access and manage dataset repositories. Delegates to HuggingfaceClient.Hub.Models, which handles all repo types.

Collections API — list, create, get, add/remove items, delete.

Discussions and Pull Requests on the HuggingFace Hub.

OAuth PKCE flow for "Sign in with HuggingFace".

HuggingFace Enterprise Security & Compliance (7.2).

HuggingFace Evaluate — metrics computation API.

HuggingFace FileSystem — fsspec-compatible interface to Hub repos and Buckets.

File listing, existence checks, downloads, paths-info, and safetensors parsing.

Manage access requests for gated repositories on the HuggingFace Hub.

HuggingFace Hub — complete API client for the HF platform.

Manage Dedicated Inference Endpoints on the HuggingFace Hub.

HuggingFace Hub Inference Jobs API.

HuggingFace Kernels API — load and run custom compute kernels from the Hub.

HuggingFace Hub Leaderboards API.

Model Cards and metadata management for HuggingFace repositories.

HuggingFace Hub Integration Mixins.

Search and retrieve model metadata from the HuggingFace Hub.

Model, dataset, and Space listing and info.

PKCE OAuth 2.0 login flow for HuggingFace Hub.
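The verifier/challenge pair at the heart of PKCE can be sketched with OTP's standard `:crypto` and `Base` modules alone (module and function names below are illustrative, not the library's actual API):

```elixir
defmodule PkceSketch do
  # High-entropy code_verifier, URL-safe base64 without padding.
  def code_verifier do
    :crypto.strong_rand_bytes(32) |> Base.url_encode64(padding: false)
  end

  # S256 code_challenge: BASE64URL(SHA-256(verifier)), no padding.
  def code_challenge(verifier) do
    :crypto.hash(:sha256, verifier) |> Base.url_encode64(padding: false)
  end
end

verifier = PkceSketch.code_verifier()
challenge = PkceSketch.code_challenge(verifier)
```

The client sends the challenge in the authorization request and the verifier in the token exchange, so an intercepted authorization code is useless without the verifier.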

HuggingFace Organization management.

Access and search research papers on the HuggingFace Hub.

Repository management on the HuggingFace Hub.

Repository creation, deletion, branch management, and commit history.

Snapshot download — mirrors snapshotDownload from @huggingface/hub.

Enhanced search across models, datasets, and spaces on the HuggingFace Hub.

Cache path utilities for HuggingFace Hub snapshots.

Manage Hugging Face Spaces — hosted ML applications.

Manage git tags and list refs (branches/tags) on HuggingFace repositories.

HuggingFace TensorBoard Logger — push TensorBoard logs to the Hub.

Community user profile APIs for the HuggingFace Hub.

Manage webhooks on the HuggingFace Hub.

HuggingFace Webhook payload handling and signature verification.
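As a generic illustration of signature verification (the exact header name and scheme this module implements are not shown in this index; HMAC-SHA256 over the raw request body is a common pattern), using only standard OTP calls:

```elixir
defmodule WebhookSketch do
  # Hex-encoded HMAC-SHA256 of the raw (unparsed) request body.
  def signature(secret, raw_body) do
    :crypto.mac(:hmac, :sha256, secret, raw_body)
    |> Base.encode16(case: :lower)
  end

  # Constant-time comparison; :crypto.hash_equals/2 (OTP 25+) raises on
  # unequal lengths, so length-check first.
  def valid?(secret, raw_body, header_sig) do
    expected = signature(secret, raw_body)

    byte_size(expected) == byte_size(header_sig) and
      :crypto.hash_equals(expected, header_sig)
  end
end
```

Always verify against the raw body bytes, not a re-serialized JSON map, since re-encoding can reorder keys and change the digest.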

Runtime configuration for the inference layer.

Behaviour for the underlying HTTP client, enabling Mox-based testing.

Shared HTTP POST helper for TGI and TEI dedicated server clients.

Elixir client for the Hugging Face Inference API.

Fetches model metadata and provider mapping information from the HuggingFace Hub.

Behaviour that every inference provider module must implement.

Base implementation for providers that use the OpenAI-compatible chat completions endpoint.

Base implementation for providers that use an OpenAI-compatible completions endpoint.

Baseten inference provider.

Black Forest Labs (FLUX) inference provider.

Cerebras inference provider.

Clarifai inference provider.

Cohere inference provider.

DeepInfra inference provider.

Fal.ai inference provider. Uses async queue polling for image/video tasks.

Featherless.ai inference provider.

Fireworks AI inference provider.

Groq inference provider. Supports conversational and text-generation tasks.

Provider implementation for HuggingFace's own serverless Inference API.

Hyperbolic inference provider.

Nebius AI inference provider.

Novita AI inference provider.

Nscale inference provider.

NVIDIA inference provider.

OpenAI inference provider.

OVHcloud inference provider.

PublicAI inference provider.

Replicate inference provider.

SambaNova inference provider.

Scaleway inference provider.

Together.ai inference provider.

Wavespeed AI inference provider.

Zai.com inference provider.

ETS-backed GenServer that caches inferenceProviderMapping data fetched from the HuggingFace Hub API (GET /api/models/:model_id).

Server-Sent Events (SSE) parser for streaming inference responses.
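To illustrate the framing such a parser deals with: SSE events arrive as `data:` lines, and OpenAI-style streams terminate with `data: [DONE]`. A standalone sketch (not the library's actual module):

```elixir
defmodule SseSketch do
  # Extract data payloads from a complete SSE chunk; a real parser must
  # also buffer partial lines split across network chunks.
  def parse(chunk) do
    chunk
    |> String.split("\n")
    |> Enum.flat_map(fn
      "data: [DONE]" -> [:done]
      "data: " <> payload -> [{:data, payload}]
      _other -> []
    end)
  end
end

SseSketch.parse("data: {\"token\":\"Hi\"}\n\ndata: [DONE]\n")
# => [{:data, "{\"token\":\"Hi\"}"}, :done]
```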

Utility functions for working with streaming inference responses.

Client for HuggingFace Text Embeddings Inference (TEI) servers.

Client for HuggingFace Text Generation Inference (TGI) servers.

Shared helpers for task modules.

Audio classification. Returns label + score pairs.

Audio-to-audio transformation (source separation, enhancement).

Chat completion task — OpenAI-compatible /v1/chat/completions.
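A hypothetical usage sketch — the module path, function name, and option keys below are assumptions (this index does not show them); only the message shape follows the OpenAI-compatible `/v1/chat/completions` contract:

```elixir
# Assumed API: HuggingfaceClient.Inference.ChatCompletion.run/1 with a
# keyword list; the real function signature may differ.
{:ok, response} =
  HuggingfaceClient.Inference.ChatCompletion.run(
    model: "meta-llama/Llama-3.1-8B-Instruct",
    provider: :auto,
    messages: [
      %{role: "system", content: "You are a helpful assistant."},
      %{role: "user", content: "Hello!"}
    ]
  )
```

With an OpenAI-compatible response, the generated text would live under the first choice's message content.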

Document question answering from scanned documents.

Dense embedding / feature extraction (returns float arrays).

Fill-mask (masked language modeling) task.

Image classification. Returns [%{"label" => ..., "score" => ...}].

Image segmentation. Returns masks with labels and scores.

Image + text to image. Takes an image and a text prompt, returns a new image.

Image + text to video. Takes an image and a text prompt, returns a video.

Image-to-image transformation (e.g. style transfer, super-resolution).

Image captioning / visual to text.

Animates a still image into a short video clip.

Object detection with bounding boxes.

Extractive question answering from context.

Sentence similarity scoring (returns a list of similarity scores).

Abstractive text summarization.

Question answering over tabular data (TAPAS-style).

Tabular data classification. Returns class indices.

Tabular data regression. Returns predicted float values.

Text classification task. Returns label + score pairs.

Text generation (completion) task.

Text-to-audio generation (music, sound effects). Returns audio bytes.

Text-to-speech synthesis. Returns audio bytes.

Text-to-video generation. Returns video bytes.

Named entity recognition / token classification.

Neural machine translation.

Visual question answering (image + question → answer).

Zero-shot text classification with candidate labels.

Zero-shot image classification with candidate labels.

Shared run/stream/resolve_provider logic used by all inference task modules.

Shared type definitions for the HuggingFace Inference API.

A Jinja2-compatible template engine for rendering HuggingFace chat templates.

Jinja2 template engine for HuggingFace chat templates.

HuggingFace Library Integration Helpers.

HuggingFace MCP Client — Model Context Protocol integration.

Optional Plug middleware for Phoenix / Plug applications.

Behaviour that every inference provider module must implement.

Baseten – client-side routing, conversational task.

Black Forest Labs – text-to-image task (client-side routing).

Cerebras – conversational task.

Clarifai – conversational task.

Cohere – conversational task.

DeepInfra – conversational task.

Fal.ai – multiple tasks including image/video/speech.

Featherless AI – conversational task.

Fireworks AI – conversational task.

Groq inference provider (conversational / chat completions).

Groq provider – text-generation task.

Provider implementation for HuggingFace's own inference router / API.

Hyperbolic – conversational task.

Hyperbolic – text-to-image task.

Nebius Studio – conversational task.

Nebius Studio – feature-extraction / embeddings task.

Nebius Studio – text-to-image task.

Novita AI – conversational task.

Novita AI – text-to-video task.

Nscale – conversational task.

Nscale – text-to-image task.

NVIDIA NIM – conversational task.

OpenAI – conversational task.

OVHcloud – conversational task.

PublicAI – conversational task.

Replicate – client-side routing, multiple tasks.

SambaNova – conversational task.

SambaNova – feature-extraction task.

Scaleway – conversational task.

Together AI – conversational task.

Together AI – text-generation task.

Together AI – text-to-image task.

Wavespeed – text-to-image and text-to-video (client-side routing).

Zai.org – conversational task.

Lookup table mapping {provider_id, task} pairs to provider module atoms.

Core HTTP execution layer for all inference requests.

Resolves provider response tuples, including async/polling patterns used by providers like Fal.ai, Novita, and Black Forest Labs.

Server-Sent Events (SSE) stream parser following the W3C EventSource specification.

HuggingFace Serialization — save and load ML model weights.

DDUF (Diffusion model Distributed Unified Format) file operations.

Utilities for consuming streaming chat-completion responses.

Telemetry integration for HuggingfaceClient inference requests.

Multimodal vision-language models (VLMs).

Generate segmentation masks (SAM-style). Returns masks for objects in an image.

Classify a video clip into predefined categories.

Mix Tasks

Downloads a file or an entire repository snapshot from the HuggingFace Hub.

Saves a HuggingFace API token to ~/.cache/huggingface/token.

Removes the saved HuggingFace API token.

Fetches metadata for a model from the HuggingFace Hub and displays it, including all available inference providers.

Lists all providers registered in HuggingfaceClient.Inference.ProviderRegistry.

Lists repositories for the authenticated user or a specific author.

List and manage HuggingFace Spaces.

Uploads a file or folder to a HuggingFace Hub repository.

Displays information about the currently authenticated user.

Manage the local HuggingFace Hub cache directory.