Gemini (GeminiEx v0.0.1)
Gemini Elixir Client
A comprehensive Elixir client for Google's Gemini AI API with dual authentication support, advanced streaming capabilities, type safety, and built-in telemetry.
Features
- 🔐 Dual Authentication: Seamless support for both Gemini API keys and Vertex AI OAuth/Service Accounts
- ⚡ Advanced Streaming: Production-grade Server-Sent Events streaming with real-time processing
- 🛡️ Type Safety: Complete type definitions with runtime validation
- 📊 Built-in Telemetry: Comprehensive observability and metrics out of the box
- 💬 Chat Sessions: Multi-turn conversation management with state persistence
- 🎭 Multimodal: Full support for text, image, audio, and video content
- 🚀 Production Ready: Robust error handling, retry logic, and performance optimizations
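The built-in telemetry can be consumed with a standard `:telemetry` handler. A minimal sketch — the event name `[:gemini, :request, :stop]` is an assumption for illustration, not taken from this doc; check the library's telemetry documentation for the events it actually emits:

```elixir
# Hypothetical event name — verify against the library's telemetry docs.
:telemetry.attach(
  "gemini-request-logger",
  [:gemini, :request, :stop],
  fn _event, measurements, metadata, _config ->
    IO.inspect({measurements, metadata}, label: "gemini request")
  end,
  nil
)
```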
Quick Start
Installation
Add to your `mix.exs`:

```elixir
def deps do
  [
    {:gemini, "~> 0.0.1"}
  ]
end
```
Basic Configuration
Configure your API key in `config/runtime.exs`:

```elixir
import Config

config :gemini,
  api_key: System.get_env("GEMINI_API_KEY")
```

Or set the environment variable:

```bash
export GEMINI_API_KEY="your_api_key_here"
```
Simple Usage
```elixir
# Basic text generation
{:ok, response} = Gemini.generate("Tell me about Elixir programming")
{:ok, text} = Gemini.extract_text(response)
IO.puts(text)

# With options
{:ok, response} =
  Gemini.generate("Explain quantum computing",
    model: "gemini-1.5-pro",
    temperature: 0.7,
    max_output_tokens: 1000
  )
```
Streaming
```elixir
# Start a managed streaming session and subscribe to its events
{:ok, stream_id} = Gemini.start_stream("Write a long story")
:ok = Gemini.subscribe_stream(stream_id)
```
Authentication
This client supports two authentication methods:
1. Gemini API Key (Simple)
Best for development and simple applications:
```bash
# Environment variable (recommended)
export GEMINI_API_KEY="your_api_key"
```

```elixir
# Application config
config :gemini, api_key: "your_api_key"

# Per-request override
Gemini.generate("Hello", api_key: "specific_key")
```
2. Vertex AI (Production)
Best for production Google Cloud applications:
```bash
# Service account JSON file
export VERTEX_SERVICE_ACCOUNT="/path/to/service-account.json"
export VERTEX_PROJECT_ID="your-gcp-project"
export VERTEX_LOCATION="us-central1"
```

```elixir
# Application config
config :gemini, :auth,
  type: :vertex_ai,
  credentials: %{
    service_account_key: System.get_env("VERTEX_SERVICE_ACCOUNT"),
    project_id: System.get_env("VERTEX_PROJECT_ID"),
    location: "us-central1"
  }
```
Error Handling
The client provides detailed error information with recovery suggestions:
```elixir
case Gemini.generate("Hello world") do
  {:ok, response} ->
    {:ok, text} = Gemini.extract_text(response)
    IO.puts(text)

  {:error, %Gemini.Error{type: :rate_limit} = error} ->
    IO.puts("Rate limited. Retry after: #{error.retry_after}")

  {:error, %Gemini.Error{type: :authentication} = error} ->
    IO.puts("Auth error: #{error.message}")

  {:error, error} ->
    IO.puts("Unexpected error: #{inspect(error)}")
end
```
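Since the error struct carries a `retry_after` hint for rate limits, a simple retry wrapper can be layered on top. A sketch, assuming `retry_after` is a number of seconds (the unit is not specified in this doc):

```elixir
defmodule RetryExample do
  @doc "Retry generation on rate-limit errors, sleeping for the hinted interval."
  def generate_with_retry(prompt, attempts \\ 3) do
    case Gemini.generate(prompt) do
      {:ok, response} ->
        {:ok, response}

      {:error, %Gemini.Error{type: :rate_limit} = error} when attempts > 1 ->
        # Assumes retry_after is in seconds; falls back to 1s if absent.
        Process.sleep(round((error.retry_after || 1) * 1_000))
        generate_with_retry(prompt, attempts - 1)

      {:error, error} ->
        {:error, error}
    end
  end
end
```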
Advanced Features
Multimodal Content
```elixir
content = [
  %{type: "text", text: "What's in this image?"},
  %{type: "image", source: %{type: "base64", data: base64_image}}
]

{:ok, response} = Gemini.generate(content)
```
Model Management
```elixir
# List available models
{:ok, models} = Gemini.list_models()

# Get model details
{:ok, model_info} = Gemini.get_model("gemini-1.5-pro")

# Count tokens
{:ok, token_count} = Gemini.count_tokens("Your text", model: "gemini-1.5-pro")
```
This module provides backward-compatible access to the Gemini API while routing requests through the unified coordinator for maximum flexibility and performance.
Summary
Functions
Start a new chat session.
Configure authentication for the client.
Count tokens in the given content.
Extract text from a GenerateContentResponse or raw streaming data.
Generate content using the configured authentication.
Get information about a specific model.
Get stream status.
List available models.
Check if a model exists.
Send a message in a chat session.
Start the streaming manager (for compatibility).
Start a managed streaming session.
Generate content with streaming response (synchronous collection).
Subscribe to streaming events.
Generate text content and return only the text.
Functions
Start a new chat session.
Configure authentication for the client.
Examples
```elixir
# Gemini API
Gemini.configure(:gemini, %{api_key: "your_api_key"})

# Vertex AI
Gemini.configure(:vertex_ai, %{
  service_account_key: "/path/to/key.json",
  project_id: "your-project",
  location: "us-central1"
})
```
```elixir
@spec count_tokens(String.t() | [Gemini.Types.Content.t()], keyword()) ::
        {:ok, map()} | {:error, Gemini.Error.t()}
```
Count tokens in the given content.
```elixir
@spec extract_text(Gemini.Types.Response.GenerateContentResponse.t() | map()) ::
        {:ok, String.t()} | {:error, String.t()}
```
Extract text from a GenerateContentResponse or raw streaming data.
```elixir
@spec generate(String.t() | [Gemini.Types.Content.t()], keyword()) ::
        {:ok, Gemini.Types.Response.GenerateContentResponse.t()} | {:error, Gemini.Error.t()}
```
Generate content using the configured authentication.
```elixir
@spec get_model(String.t()) :: {:ok, map()} | {:error, Gemini.Error.t()}
```
Get information about a specific model.
```elixir
@spec get_stream_status(String.t()) :: {:ok, map()} | {:error, Gemini.Error.t()}
```
Get stream status.
```elixir
@spec list_models(keyword()) :: {:ok, map()} | {:error, Gemini.Error.t()}
```
List available models.
Check if a model exists.
```elixir
@spec send_message(map(), String.t()) ::
        {:ok, Gemini.Types.Response.GenerateContentResponse.t(), map()}
        | {:error, Gemini.Error.t()}
```
Send a message in a chat session.
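Per the spec above, `send_message/2` takes the session map and returns the response together with an updated session, which is threaded into the next call. A sketch of a multi-turn exchange — the session constructor is assumed here to be `Gemini.chat/0`, since its name is not shown in this doc:

```elixir
# `Gemini.chat/0` is an assumed constructor name — verify against the API.
{:ok, chat} = Gemini.chat()
{:ok, response, chat} = Gemini.send_message(chat, "Hi! Who are you?")
{:ok, _response, _chat} = Gemini.send_message(chat, "What did I just ask?")
```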
Start the streaming manager (for compatibility).
```elixir
@spec start_stream(String.t() | [Gemini.Types.Content.t()], keyword()) ::
        {:ok, String.t()} | {:error, Gemini.Error.t()}
```
Start a managed streaming session.
```elixir
@spec stream_generate(String.t() | [Gemini.Types.Content.t()], keyword()) ::
        {:ok, [map()]} | {:error, Gemini.Error.t()}
```
Generate content with streaming response (synchronous collection).
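Because `extract_text/1` also accepts raw streaming maps, the chunks collected by `stream_generate/2` can be reduced to plain text. A sketch based on the specs in this doc:

```elixir
{:ok, chunks} = Gemini.stream_generate("Write a haiku about OTP")

text =
  chunks
  |> Enum.map(fn chunk ->
    # extract_text/1 handles raw streaming data; skip chunks with no text.
    case Gemini.extract_text(chunk) do
      {:ok, t} -> t
      {:error, _} -> ""
    end
  end)
  |> Enum.join()

IO.puts(text)
```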
```elixir
@spec subscribe_stream(String.t()) :: :ok | {:error, Gemini.Error.t()}
```
Subscribe to streaming events.
```elixir
@spec text(String.t() | [Gemini.Types.Content.t()], keyword()) ::
        {:ok, String.t()} | {:error, Gemini.Error.t()}
```
Generate text content and return only the text.
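When only the generated text is needed, `text/2` skips the manual extraction step:

```elixir
{:ok, text} = Gemini.text("Summarize Elixir in one sentence")
IO.puts(text)
```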