# LlmEx

## 💫 Overview
LlmEx is a powerful, expressive, and modular Elixir client for Large Language Models. It provides a clean, consistent interface to interact with various LLM providers while embracing Elixir's functional programming paradigm and the BEAM ecosystem's strengths.
Built by the community, for the community - LlmEx aims to be the definitive open source solution for LLM integration in Elixir applications.
With LlmEx, you can:
- 🚀 Interact with multiple LLM providers using a consistent interface
- 🔧 Define and use tools with LLMs (function calling) through a declarative API
- 📺 Stream responses for real-time interactions
- 🧩 Create composable, pipeable client configurations
- ⚡ Handle Server-Sent Events (SSE) with ease
- 🔄 Integrate seamlessly with Phoenix LiveView, GenServers, and more
## 📦 Installation
Add `llm_ex` to your list of dependencies in `mix.exs`:
```elixir
def deps do
  [
    {:llm_ex, "~> 0.1.0"}
  ]
end
```
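Then run `mix deps.get` to fetch the dependency.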
## 🌟 Key Features
- Provider-agnostic interface: Use the same code with different LLM providers
- Elegant client builder API: Build clients using a clean, fluent interface
- First-class tool support: Define tools that LLMs can use to perform actions
- Declarative tool definitions: Express tool schemas in a clear, concise way
- Streaming support: Stream responses in real-time for better UX
- Modular design: Small, focused modules that are easy to test and maintain
- Pure Elixir: No external dependencies beyond HTTP clients
## 🚀 Getting Started

### Creating Your Client Module

LlmEx uses a simple behaviour-based approach. Create your own client in seconds:
```elixir
defmodule MyApp.LlmClient do
  use LlmEx.ClientBehaviour

  alias MyApp.Tools.Calculator

  # Specify which provider to use (can be overridden in options)
  @provider LlmEx.Providers.OllamaClient

  @doc """
  Get a simple response from the LLM.
  """
  def get_response(prompt) do
    with_system("You are a helpful assistant.")
    |> with_user_message(prompt)
    |> chat()
  end

  @doc """
  Get a streamed response sent to the calling process.
  """
  def get_streamed_response(prompt) do
    with_system("You are a helpful assistant.")
    |> with_user_message(prompt)
    |> stream_chat(self())

    # Handle the streamed response in the calling process
    listen_for_response()
  end

  defp listen_for_response do
    receive do
      {:llm_content, content} ->
        IO.write(content)
        listen_for_response()

      {:llm_done, _} ->
        IO.puts("\nResponse complete")
    after
      30_000 -> IO.puts("Timeout")
    end
  end

  @doc """
  Use a calculator tool to solve math problems.
  """
  def solve_math_problem(problem) do
    with_system("You are a math assistant that can solve problems.")
    |> with_tool(Calculator)
    |> with_user_message(problem)
    |> chat()
  end
end
```
Then use it in your application:
```elixir
# Simple response
MyApp.LlmClient.get_response("What is Elixir?")

# Streamed response
MyApp.LlmClient.get_streamed_response("Tell me a story about a time-traveling programmer.")

# With tool usage
MyApp.LlmClient.solve_math_problem("What is 25 * 16?")
```
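Because streamed chunks arrive as ordinary process messages, the same flow drops straight into a GenServer or Phoenix LiveView. Here is a minimal LiveView sketch, assuming a hypothetical non-blocking `MyApp.LlmClient.stream_to/2` helper that ends the builder pipeline with `stream_chat(pid)` instead of blocking in `listen_for_response/0`:

```elixir
defmodule MyAppWeb.ChatLive do
  use Phoenix.LiveView

  def mount(_params, _session, socket) do
    {:ok, assign(socket, response: "")}
  end

  def render(assigns) do
    ~H"""
    <form phx-submit="ask">
      <input type="text" name="prompt" />
      <button>Ask</button>
    </form>
    <pre><%= @response %></pre>
    """
  end

  def handle_event("ask", %{"prompt" => prompt}, socket) do
    # Hypothetical helper: pipes the builder into stream_chat(self()) and returns
    MyApp.LlmClient.stream_to(prompt, self())
    {:noreply, assign(socket, response: "")}
  end

  # Chunks stream in as messages; append each one and let LiveView re-render
  def handle_info({:llm_content, chunk}, socket) do
    {:noreply, update(socket, :response, &(&1 <> chunk))}
  end

  def handle_info({:llm_done, _}, socket) do
    {:noreply, socket}
  end
end
```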
### Defining Tools
LlmEx offers two ways to define tools that LLMs can use: a standard approach and a declarative approach.
#### Standard Approach with ToolBehaviour
Use the `LlmEx.Tools.ToolBehaviour` module to implement tools:
```elixir
defmodule MyApp.Tools.Calculator do
  @behaviour LlmEx.Tools.ToolBehaviour

  # Implement the required callback
  @impl LlmEx.Tools.ToolBehaviour
  def tool_definition do
    %{
      "name" => "calculator",
      "description" => "A calculator that can perform basic operations.",
      "inputSchema" => %{
        "type" => "object",
        "properties" => %{
          "operation" => %{
            "type" => "string",
            "enum" => ["add", "subtract", "multiply", "divide"]
          },
          "a" => %{"type" => "number"},
          "b" => %{"type" => "number"}
        },
        "required" => ["operation", "a", "b"]
      }
    }
  end

  # Initialize the tool (for stateful tools)
  @impl LlmEx.Tools.ToolBehaviour
  def init(_opts) do
    {:ok, %{}}
  end

  # Handle tool calls
  @impl LlmEx.Tools.ToolBehaviour
  def handle_call(params, state) do
    a = params["a"]
    b = params["b"]

    result =
      case params["operation"] do
        "add" -> a + b
        "subtract" -> a - b
        "multiply" -> a * b
        "divide" when b != 0 -> a / b
        "divide" -> "Cannot divide by zero"
        _ -> "Invalid operation"
      end

    response = %{
      "content" => [%{"type" => "text", "text" => "The result is: #{result}"}],
      "isError" => false
    }

    {:ok, response, state}
  end
end
```
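Because `init/1` and `handle_call/2` are plain functions, a tool can be unit-tested directly, with no LLM in the loop. A quick ExUnit sketch against the module above:

```elixir
defmodule MyApp.Tools.CalculatorTest do
  use ExUnit.Case, async: true

  alias MyApp.Tools.Calculator

  test "multiplies two numbers" do
    {:ok, state} = Calculator.init([])
    params = %{"operation" => "multiply", "a" => 25, "b" => 16}

    {:ok, response, _state} = Calculator.handle_call(params, state)

    assert response["isError"] == false
    assert [%{"text" => "The result is: 400"}] = response["content"]
  end
end
```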
#### Declarative Approach
The declarative approach provides a more concise and expressive way to define tools:
```elixir
defmodule MyApp.Tools.DeclarativeCalculator do
  use LlmEx.Tools.ToolBehaviour

  # Define the tool metadata
  tool_name "calculator"
  tool_description "A calculator that can perform basic operations."

  # Define parameters
  param :operation, :string,
    description: "The operation to perform",
    enum: ["add", "subtract", "multiply", "divide"],
    required: true

  param :a, :number,
    description: "First number",
    required: true

  param :b, :number,
    description: "Second number",
    required: true

  # Initialize the tool (required by the behaviour)
  @impl LlmEx.Tools.ToolBehaviour
  def init(_opts) do
    # Initialize with empty state
    {:ok, %{}}
  end

  # Handle tool calls
  @impl LlmEx.Tools.ToolBehaviour
  def handle_call(params, state) do
    a = params["a"]
    b = params["b"]

    result =
      case params["operation"] do
        "add" -> a + b
        "subtract" -> a - b
        "multiply" -> a * b
        "divide" when b != 0 -> a / b
        "divide" -> "Cannot divide by zero"
        _ -> "Invalid operation"
      end

    response = %{
      "content" => [%{"type" => "text", "text" => "The result is: #{result}"}],
      "isError" => false
    }

    {:ok, response, state}
  end
end
```
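Both modules satisfy the same behaviour: the `tool_name`, `tool_description`, and `param` declarations take the place of a hand-written `tool_definition/0`, while `init/1` and `handle_call/2` are still implemented by hand.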
## 📚 API Reference

### Using the ClientBehaviour

The `LlmEx.ClientBehaviour` module provides these functions:
- `with_provider(state, provider)` - Set the LLM provider
- `with_system(state, system_message)` - Set the system message
- `with_user_message(state, content)` - Add a user message
- `with_assistant_message(state, content)` - Add an assistant message
- `with_tool(state, tool)` - Add a tool to the client
- `with_prompt(state, prompt)` - Set a single prompt
- `chat(state, opts \\ [])` - Send a chat request and wait for the response
- `stream_chat(state, pid, opts \\ [])` - Stream a chat response to the given process
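Putting several of these together, here is a sketch of a few-shot classifier inside a module that calls `use LlmEx.ClientBehaviour`, following the pipeline style from Getting Started (the prompt and labels are illustrative):

```elixir
def classify_sentiment(text) do
  with_system("Classify each message as positive or negative. Reply with one word.")
  |> with_user_message("I love this library!")
  |> with_assistant_message("positive")
  |> with_user_message("This keeps crashing.")
  |> with_assistant_message("negative")
  |> with_user_message(text)
  |> chat()
end
```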
### Defining Tools

Tools must implement the `LlmEx.Tools.ToolBehaviour` behaviour, which requires:
- `tool_definition/0` - Returns the tool definition (name, description, input schema)
- `init/1` - Initializes the tool state
- `handle_call/2` - Handles a tool call with parameters and state
## 🔍 Supported Providers
- Ollama: Local models through the Ollama API
- Anthropic: Claude models through Anthropic's API
- More Coming Soon: OpenAI, Mistral, and many more!
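Because clients are provider-agnostic, switching providers is a one-attribute change in your client module. A sketch (the Anthropic module name here is an assumption; check the provider modules shipped with your LlmEx version):

```elixir
defmodule MyApp.ClaudeClient do
  use LlmEx.ClientBehaviour

  # Assumed module name for the Anthropic provider; verify against your version
  @provider LlmEx.Providers.AnthropicClient

  def get_response(prompt) do
    with_system("You are a helpful assistant.")
    |> with_user_message(prompt)
    |> chat()
  end
end
```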
## 🤝 Contributing
Contributions are welcome and appreciated! LlmEx is an open source project that aims to provide the best possible LLM experience for Elixir developers.
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
Please make sure your code follows the Elixir style guide and includes tests.
## 📜 License

Distributed under the MIT License. See `LICENSE` for more information.
## 🙏 Acknowledgements
- The Elixir community for its supportive and collaborative spirit
- All contributors who have helped make this project better
- José Valim and the Elixir core team for creating such an amazing language