Gemini Elixir Client


A comprehensive Elixir client for Google's Gemini AI API with dual authentication support, advanced streaming capabilities, type safety, and built-in telemetry.

✨ Features

  • ๐Ÿ” Dual Authentication: Seamless support for both Gemini API keys and Vertex AI OAuth/Service Accounts
  • โšก Advanced Streaming: Production-grade Server-Sent Events streaming with real-time processing
  • ๐Ÿ›ก๏ธ Type Safety: Complete type definitions with runtime validation
  • ๐Ÿ“Š Built-in Telemetry: Comprehensive observability and metrics out of the box
  • ๐Ÿ’ฌ Chat Sessions: Multi-turn conversation management with state persistence
  • ๐ŸŽญ Multimodal: Full support for text, image, audio, and video content
  • ๐Ÿš€ Production Ready: Robust error handling, retry logic, and performance optimizations
  • ๐Ÿ”ง Flexible Configuration: Environment variables, application config, and per-request overrides
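
Since the library emits telemetry out of the box, metrics can be consumed with a standard `:telemetry` handler. A minimal sketch; the event name `[:gemini, :request, :stop]` and the measurement/metadata keys are assumptions for illustration, not the library's documented event list:

```elixir
# Sketch: attach a logging handler to a (hypothetical) request event.
# Check the library's telemetry docs for the actual event names and keys.
:telemetry.attach(
  "gemini-request-logger",
  [:gemini, :request, :stop],
  fn _event, measurements, metadata, _config ->
    IO.puts(
      "Gemini request (model: #{inspect(metadata[:model])}) " <>
        "took #{inspect(measurements[:duration])}"
    )
  end,
  nil
)
```

Handlers attached this way run inline in the caller's process, so they should stay cheap; heavier processing belongs in a separate reporter.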

📦 Installation

Add gemini to your list of dependencies in mix.exs:

def deps do
  [
    {:gemini, "~> 0.0.1"}
  ]
end

🚀 Quick Start

Basic Configuration

Configure your API key in config/runtime.exs:

import Config

config :gemini,
  api_key: System.get_env("GEMINI_API_KEY")

Or set the environment variable:

export GEMINI_API_KEY="your_api_key_here"

Simple Content Generation

# Basic text generation
{:ok, response} = Gemini.generate("Tell me about Elixir programming")
{:ok, text} = Gemini.extract_text(response)
IO.puts(text)

# With options
{:ok, response} = Gemini.generate("Explain quantum computing", [
  model: "gemini-1.5-pro",
  temperature: 0.7,
  max_output_tokens: 1000
])

Advanced Streaming

# Start a streaming session
{:ok, stream_id} = Gemini.stream_generate("Write a long story about AI", [
  on_chunk: fn chunk -> IO.write(chunk) end,
  on_complete: fn -> IO.puts("\n✅ Stream complete!") end,
  on_error: fn error -> IO.puts("❌ Error: #{inspect(error)}") end
])

# Stream management
Gemini.Streaming.pause_stream(stream_id)
Gemini.Streaming.resume_stream(stream_id)
Gemini.Streaming.stop_stream(stream_id)

Multi-turn Conversations

# Create a chat session
{:ok, session} = Gemini.create_chat_session([
  model: "gemini-1.5-pro",
  system_instruction: "You are a helpful programming assistant."
])

# Send messages
{:ok, response1} = Gemini.send_message(session, "What is functional programming?")
{:ok, response2} = Gemini.send_message(session, "Show me an example in Elixir")

# Get conversation history
history = Gemini.get_conversation_history(session)

๐Ÿ” Authentication

# Environment variable (recommended)
export GEMINI_API_KEY="your_api_key"

# Application config
config :gemini, api_key: "your_api_key"

# Per-request override
Gemini.generate("Hello", api_key: "specific_key")
# Service Account JSON file
export VERTEX_SERVICE_ACCOUNT="/path/to/service-account.json"
export VERTEX_PROJECT_ID="your-gcp-project"
export VERTEX_LOCATION="us-central1"

# Application config
config :gemini, :auth,
  type: :vertex_ai,
  credentials: %{
    service_account_key: System.get_env("VERTEX_SERVICE_ACCOUNT"),
    project_id: System.get_env("VERTEX_PROJECT_ID"),
    location: "us-central1"
  }

📚 Documentation

๐Ÿ—๏ธ Architecture

The library features a modular, layered architecture:

  • Authentication Layer: Multi-strategy auth with automatic credential resolution
  • Coordination Layer: Unified API coordinator for all operations
  • Streaming Layer: Advanced SSE processing with state management
  • HTTP Layer: Dual client system for standard and streaming requests
  • Type Layer: Comprehensive schemas with runtime validation
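
The layering above can be pictured as a `with` pipeline, where each clause corresponds to one layer. This is purely illustrative; the module and function names below are assumptions about the design, not the library's internal API:

```elixir
# Conceptual sketch of a request flowing through the layers:
#   Authentication -> Coordination -> HTTP -> Types
# All helpers here are stand-ins, not real library functions.
defmodule LayeredFlowSketch do
  def generate(prompt) do
    with {:ok, creds} <- resolve_credentials(),           # Authentication Layer
         {:ok, request} <- build_request(prompt, creds),  # Coordination Layer
         {:ok, raw} <- post(request),                     # HTTP Layer
         {:ok, response} <- validate(raw) do              # Type Layer
      {:ok, response}
    end
  end

  defp resolve_credentials, do: {:ok, %{api_key: System.get_env("GEMINI_API_KEY")}}
  defp build_request(prompt, creds), do: {:ok, %{prompt: prompt, creds: creds}}
  defp post(request), do: {:ok, request}
  defp validate(raw), do: {:ok, raw}
end
```

Because each clause returns `{:ok, _}` or `{:error, _}`, a failure in any layer short-circuits the pipeline and surfaces as a single error tuple to the caller.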

🔧 Advanced Usage

Custom Model Configuration

# List available models
{:ok, models} = Gemini.list_models()

# Get model details
{:ok, model_info} = Gemini.get_model("gemini-1.5-pro")

# Count tokens
{:ok, token_count} = Gemini.count_tokens("Your text here", model: "gemini-1.5-pro")

Multimodal Content

# Text with images
content = [
  %{type: "text", text: "What's in this image?"},
  %{type: "image", source: %{type: "base64", data: base64_image}}
]

{:ok, response} = Gemini.generate(content)

Error Handling

case Gemini.generate("Hello world") do
  {:ok, response} -> 
    # Handle success
    {:ok, text} = Gemini.extract_text(response)
    
  {:error, %Gemini.Error{type: :rate_limit} = error} -> 
    # Handle rate limiting
    IO.puts("Rate limited. Retry after: #{error.retry_after}")
    
  {:error, %Gemini.Error{type: :authentication} = error} -> 
    # Handle auth errors
    IO.puts("Auth error: #{error.message}")
    
  {:error, error} -> 
    # Handle other errors
    IO.puts("Unexpected error: #{inspect(error)}")
end
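
The `:rate_limit` branch above pairs naturally with the retry logic mentioned in the features. A minimal backoff sketch; `Gemini.generate/1` and `Gemini.Error` come from the examples above, while the attempt count and delay policy are arbitrary choices for illustration:

```elixir
defmodule GeminiRetrySketch do
  # Retry a generation call after rate-limit errors, with a growing delay.
  # Sketch only: the backoff schedule here is an assumption, not library behavior.
  def generate_with_retry(prompt, attempts \\ 3) do
    case Gemini.generate(prompt) do
      {:error, %Gemini.Error{type: :rate_limit}} when attempts > 1 ->
        Process.sleep(backoff_ms(attempts))
        generate_with_retry(prompt, attempts - 1)

      other ->
        other
    end
  end

  # With 3 attempts remaining -> 500 ms; with 2 remaining -> 1000 ms.
  defp backoff_ms(attempts_left), do: 500 * (4 - attempts_left)
end
```

When the error carries a `retry_after` value, as in the rate-limit example above, honoring it instead of a fixed schedule is usually the better policy.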

🧪 Testing

# Run all tests
mix test

# Run with coverage
mix test --cover

# Run integration tests (requires API key)
GEMINI_API_KEY="your_key" mix test --only integration

๐Ÿค Contributing

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

๐Ÿ™ Acknowledgments

  • Google AI team for the Gemini API
  • Elixir community for excellent tooling and libraries
  • Contributors and maintainers