Alchemind behaviour (Alchemind v0.1.0-rc1)


Alchemind provides a unified interface for interacting with various LLM providers.

This module defines the base behaviours and types for working with different LLM implementations in a unified way, similar to LiteLLM or LlamaIndex.

Summary

Callbacks

complete(client, messages, opts)
Completes a conversation with the LLM provider.

complete(client, messages, callback, opts)
Completes a conversation with the LLM provider, streaming the response.

new(opts)
Creates a new client for the LLM provider.

speech(client, input, opts)
Converts text to speech.

transcribe(client, audio_binary, opts)
Transcribes audio to text.

Functions

complete(client, messages, callback_or_opts \\ [], opts \\ [])
Completes a conversation using the specified client, with optional streaming.

new(provider, opts \\ [])
Creates a new client for the specified provider.

speech(client, input, opts \\ [])
Converts text to speech using the specified client.

transcribe(client, audio_binary, opts \\ [])
Transcribes audio to text using the specified client.

Types

completion_choice()

@type completion_choice() :: %{
  :index => non_neg_integer(),
  :message => message(),
  optional(:finish_reason) => String.t()
}

completion_error()

@type completion_error() :: %{
  error: %{
    :message => String.t(),
    optional(:type) => String.t(),
    optional(:code) => String.t()
  }
}

completion_response()

@type completion_response() :: %{
  id: String.t(),
  object: String.t(),
  created: pos_integer(),
  model: String.t(),
  choices: [completion_choice()]
}

completion_result()

@type completion_result() ::
  {:ok, completion_response()} | {:error, completion_error() | any()}
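On success, the nested shapes above can be destructured directly. A minimal sketch, where result stands for any value of this type and at least one choice is assumed:

{:ok, %{choices: [first_choice | _]}} = result
first_choice.message.content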

message()

@type message() :: %{role: role(), content: String.t() | nil}

role()

@type role() :: :system | :user | :assistant

stream_callback()

@type stream_callback() :: (stream_delta() -> any())

stream_delta()

@type stream_delta() :: %{
  optional(:id) => String.t(),
  optional(:model) => String.t(),
  optional(:content) => String.t(),
  optional(:role) => role(),
  optional(:finish_reason) => String.t()
}
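A stream callback is just a one-argument function. For example, a minimal sketch that prints content deltas as they arrive and reports the final chunk (field names follow stream_delta() above):

handle_delta = fn
  %{content: content} when is_binary(content) -> IO.write(content)
  %{finish_reason: reason} when is_binary(reason) -> IO.puts("\n[finished: #{reason}]")
  _delta -> :ok
end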

Callbacks

complete(client, messages, opts)

@callback complete(
  client :: term(),
  messages :: [message()],
  opts :: keyword()
) :: completion_result()

Completes a conversation with the LLM provider.

Each provider module must implement this callback to handle completion requests.

complete(client, messages, callback, opts)

@callback complete(
  client :: term(),
  messages :: [message()],
  callback :: stream_callback(),
  opts :: keyword()
) :: completion_result()

Completes a conversation with the LLM provider, streaming the response.

Each provider module must implement this callback to handle streaming completion requests. The callback is invoked with a stream_delta() for each chunk as it arrives.

new(opts)

@callback new(opts :: keyword()) :: {:ok, term()} | {:error, term()}

Creates a new client for the LLM provider.

Each provider must implement this callback to build the client term that is passed to the other callbacks.
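Taken together, the required callbacks can be satisfied by a very small module. The sketch below is purely illustrative (the module name and its echo behaviour are hypothetical; a real provider would call its HTTP API in complete/3 and complete/4):

defmodule MyApp.EchoProvider do
  # Hypothetical provider used to illustrate the callback shapes:
  # it echoes the last message back instead of calling a real LLM.
  @behaviour Alchemind

  defstruct [:model]

  @impl true
  def new(opts) do
    {:ok, %__MODULE__{model: Keyword.get(opts, :model, "echo-1")}}
  end

  @impl true
  def complete(client, messages, opts) do
    content = messages |> List.last() |> Map.get(:content)

    {:ok,
     %{
       id: "echo-#{System.unique_integer([:positive])}",
       object: "chat.completion",
       created: System.os_time(:second),
       model: Keyword.get(opts, :model, client.model),
       choices: [
         %{index: 0, message: %{role: :assistant, content: content}, finish_reason: "stop"}
       ]
     }}
  end

  @impl true
  def complete(client, messages, callback, opts) when is_function(callback, 1) do
    # Emit the whole reply as a single delta, then signal completion.
    callback.(%{role: :assistant, content: messages |> List.last() |> Map.get(:content)})
    callback.(%{finish_reason: "stop"})
    complete(client, messages, opts)
  end
end

The optional speech/3 and transcribe/3 callbacks are not implemented here; only providers that support those features need to define them.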

speech(client, input, opts)

(optional)
@callback speech(
  client :: term(),
  input :: String.t(),
  opts :: keyword()
) :: {:ok, binary()} | {:error, term()}

Converts text to speech.

Optional callback that providers can implement to support text-to-speech conversion.

transcribe(client, audio_binary, opts)

(optional)
@callback transcribe(
  client :: term(),
  audio_binary :: binary(),
  opts :: keyword()
) :: {:ok, String.t()} | {:error, term()}

Transcribes audio to text.

Optional callback that providers can implement to support audio transcription.

Functions

complete(client, messages, callback_or_opts \\ [], opts \\ [])

@spec complete(term(), [message()], stream_callback() | keyword(), keyword()) ::
  completion_result()

Completes a conversation using the specified client, with optional streaming.

Parameters

  • client: Client created with new/2
  • messages: List of messages in the conversation
  • callback_or_opts: Either a callback function for streaming or options for the request
  • opts: Additional options for the completion request (when callback is provided)

Options

  • :model - The model to use (required unless specified in the client)
  • :temperature - Controls randomness (0.0 to 2.0)
  • :max_tokens - Maximum number of tokens to generate

Examples

Without streaming:

iex> {:ok, client} = Alchemind.new(Alchemind.OpenAI, api_key: "sk-...")
iex> messages = [
...>   %{role: :system, content: "You are a helpful assistant."},
...>   %{role: :user, content: "Hello, world!"}
...> ]
iex> Alchemind.complete(client, messages, model: "gpt-4o", temperature: 0.7)

With streaming:

iex> {:ok, client} = Alchemind.new(Alchemind.OpenAI, api_key: "sk-...")
iex> messages = [
...>   %{role: :system, content: "You are a helpful assistant."},
...>   %{role: :user, content: "Hello, world!"}
...> ]
iex> callback = fn delta -> IO.write(delta.content) end
iex> Alchemind.complete(client, messages, callback, model: "gpt-4o", temperature: 0.7)

Returns

  • {:ok, response} - Successful completion with response data
  • {:error, reason} - Error with reason
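A typical way to consume this return value covers both branches. A sketch, assuming at least one choice on success and the completion_error() shape for provider errors:

case Alchemind.complete(client, messages, model: "gpt-4o") do
  {:ok, %{choices: [%{message: %{content: content}} | _]}} ->
    IO.puts(content)

  {:error, %{error: %{message: message}}} ->
    IO.puts("Provider error: #{message}")

  {:error, reason} ->
    IO.inspect(reason, label: "unexpected error")
end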

new(provider, opts \\ [])

@spec new(
  module(),
  keyword()
) :: {:ok, term()} | {:error, term()}

Creates a new client for the specified provider.

Parameters

  • provider: Module implementing Alchemind provider behaviour
  • opts: Provider-specific options (like api_key, base_url, etc.)

Examples

iex> Alchemind.new(Alchemind.OpenAI, api_key: "sk-...")
{:ok, %Alchemind.OpenAI.Client{...}}

Returns

  • {:ok, client} - Client for the specified provider
  • {:error, reason} - Error with reason

speech(client, input, opts \\ [])

@spec speech(term(), String.t(), keyword()) :: {:ok, binary()} | {:error, term()}

Converts text to speech using the specified client.

Parameters

  • client: Client created with new/2
  • input: Text to convert to speech
  • opts: Options for the speech request

Options

Options are provider-specific. For OpenAI:

  • :model - OpenAI text-to-speech model to use (default: "gpt-4o-mini-tts")
  • :voice - Voice to use (default: "alloy")
  • :response_format - Format of the audio (default: "mp3")
  • :speed - Speed of the generated audio (optional)

Examples

iex> {:ok, client} = Alchemind.new(Alchemind.OpenAI, api_key: "sk-...")
iex> Alchemind.speech(client, "Hello, world!", voice: "echo")
{:ok, <<binary audio data>>}

Returns

  • {:ok, audio_binary} - Successful speech generation with audio binary
  • {:error, reason} - Error with reason
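The returned binary can be written straight to disk, for example (file name is illustrative):

iex> {:ok, audio} = Alchemind.speech(client, "Hello, world!", voice: "echo")
iex> File.write!("hello.mp3", audio)
:ok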

transcribe(client, audio_binary, opts \\ [])

@spec transcribe(term(), binary(), keyword()) :: {:ok, String.t()} | {:error, term()}

Transcribes audio to text using the specified client.

Parameters

  • client: Client created with new/2
  • audio_binary: Binary audio data
  • opts: Options for the transcription request

Options

Options are provider-specific. For OpenAI:

  • :model - Transcription model to use (default: "whisper-1")
  • :language - Language of the audio (default: nil, auto-detect)
  • :prompt - Optional text to guide the model's transcription
  • :response_format - Format of the transcript (default: "json")
  • :temperature - Controls randomness (0.0 to 1.0, default: 0)

Examples

iex> {:ok, client} = Alchemind.new(Alchemind.OpenAI, api_key: "sk-...")
iex> audio_binary = File.read!("audio.mp3")
iex> Alchemind.transcribe(client, audio_binary, language: "en")
{:ok, "This is a transcription of the audio."}

Returns

  • {:ok, text} - Successful transcription with text
  • {:error, reason} - Error with reason
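Because the transcript is plain text, it can be fed back into Alchemind.complete as a user message. A sketch combining the two calls (system prompt and model name are illustrative):

iex> {:ok, transcript} = Alchemind.transcribe(client, audio_binary, language: "en")
iex> messages = [
...>   %{role: :system, content: "Summarise the user's message."},
...>   %{role: :user, content: transcript}
...> ]
iex> Alchemind.complete(client, messages, model: "gpt-4o")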