ExLLM.Providers.Shared.RequestBuilder behaviour (ex_llm v0.8.1)
Unified request building for LLM providers.
This module provides common patterns for building API requests across different LLM providers, reducing code duplication and ensuring consistency.
Features:
- Common request body construction
- Optional parameter handling
- Provider-specific extensions via callbacks
- Function/tool formatting
- Message formatting
Summary
Callbacks
Callback for provider-specific message formatting.
Callback for provider-specific request transformations.
Functions
Add common optional parameters to a request.
Add function calling parameters to a request.
Add an optional parameter to the request if it exists in options.
Build a standard chat completion request.
Build a completion request (for non-chat models).
Build an embeddings request.
Build a request with provider-specific transformations.
Extract system message from a list of messages.
Format messages for API requests.
Format functions as tools for the newer OpenAI tools API.
Callbacks
Callback for provider-specific message formatting.
Some providers need custom message structures.
Callback for provider-specific request transformations.
Allows adapters to modify the request after standard building.
Functions
Add common optional parameters to a request.
Only adds parameters that are present in options.
Add function calling parameters to a request.
Handles both the older "functions" format and the newer "tools" format.
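The two wire formats place the definitions under different request keys. The sketch below follows the OpenAI convention and is illustrative only; it is not necessarily this module's exact output:
request = %{"model" => "gpt-4"}

function_defs = [
  %{
    "name" => "get_weather",
    "description" => "Get the current weather for a city",
    "parameters" => %{"type" => "object", "properties" => %{"city" => %{"type" => "string"}}}
  }
]

# Older API: definitions are sent under the "functions" key.
Map.put(request, "functions", function_defs)

# Newer API: each definition is wrapped and sent under the "tools" key.
Map.put(request, "tools", Enum.map(function_defs, &%{"type" => "function", "function" => &1}))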
Add an optional parameter to the request if it exists in options.
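The underlying idea is a conditional merge. The module and function below are a hypothetical sketch of the pattern, not this module's actual signature:
defmodule OptionalParamSketch do
  # Hypothetical helper: copy a key from the keyword options into the
  # request map only when the caller actually set it.
  def maybe_put(request, options, key) do
    case Keyword.fetch(options, key) do
      {:ok, value} -> Map.put(request, to_string(key), value)
      :error -> request
    end
  end
end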
Build a standard chat completion request.
Options
Common options supported across providers:
:model - Model to use
:temperature - Sampling temperature (0.0 to 2.0)
:max_tokens - Maximum tokens to generate
:top_p - Nucleus sampling parameter
:frequency_penalty - Frequency penalty (-2.0 to 2.0)
:presence_penalty - Presence penalty (-2.0 to 2.0)
:stop - Stop sequences
:user - User identifier
:functions - Function definitions for function calling
:stream - Whether to stream the response
Examples
RequestBuilder.build_chat_request(
  messages,
  "gpt-4",
  temperature: 0.7,
  max_tokens: 1000
)
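The call returns a request map ready to be serialized as the JSON body. The exact key style (string vs. atom keys) is provider-specific, so the shape below is only illustrative:
%{
  "model" => "gpt-4",
  "messages" => [%{"role" => "user", "content" => "Hello"}],
  "temperature" => 0.7,
  "max_tokens" => 1000
}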
Build a completion request (for non-chat models).
Options
Similar to chat requests, but takes a prompt instead of messages.
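A hedged example, assuming the function is exposed as build_completion_request and takes the same argument order as the chat variant (prompt, then model, then options):
RequestBuilder.build_completion_request(
  "Write a haiku about autumn",
  "gpt-3.5-turbo-instruct",
  max_tokens: 60,
  temperature: 0.7
)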
Build an embeddings request.
Options
:model - Embedding model to use
:encoding_format - Format for the embeddings (e.g., "float", "base64")
:dimensions - Number of dimensions for the embeddings
:user - User identifier
Examples
RequestBuilder.build_embeddings_request(
  ["Hello world", "How are you?"],
  "text-embedding-3-small",
  dimensions: 512
)
Build a request with provider-specific transformations.
This is the main entry point for adapters that implement the transform_request callback.
Examples
defmodule MyAdapter do
  @behaviour ExLLM.Providers.Shared.RequestBuilder

  def build_request(messages, model, options) do
    RequestBuilder.build_provider_request(__MODULE__, messages, model, options)
  end

  @impl true
  def transform_request(request, options) do
    # Add provider-specific fields
    Map.put(request, "custom_field", "value")
  end
end
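With this adapter in place, a call runs the shared builder and then transform_request/2, so the custom field ends up in the final request map. An illustrative call (the message shape shown here is an assumption):
MyAdapter.build_request(
  [%{role: "user", content: "Hello"}],
  "gpt-4",
  temperature: 0.2
)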
Extract system message from a list of messages.
Returns {system_content, other_messages} tuple. Some providers (like Anthropic) handle system messages differently.
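A sketch of the split, assuming the function is exposed as extract_system_message/1 and that messages are role/content maps:
{system, rest} =
  RequestBuilder.extract_system_message([
    %{role: "system", content: "You are terse."},
    %{role: "user", content: "Hi"}
  ])
# system => "You are terse."
# rest   => [%{role: "user", content: "Hi"}]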
Format messages for API requests.
Ensures messages have the correct structure and handles various input formats.
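For illustration only, and assuming the function is exposed as format_messages/1, mixed atom- and string-keyed input would be normalized into a uniform list of role/content entries:
RequestBuilder.format_messages([
  %{role: :user, content: "Hello"},
  %{"role" => "assistant", "content" => "Hi there"}
])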
Format functions as tools for the newer OpenAI tools API.
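Assuming the function is exposed as format_functions_as_tools/1, the wrapping looks roughly like this (the output shape follows the OpenAI tools convention):
RequestBuilder.format_functions_as_tools([
  %{"name" => "get_weather", "parameters" => %{"type" => "object", "properties" => %{}}}
])
# => [%{"type" => "function", "function" => %{"name" => "get_weather", ...}}]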