Ragex.AI.Provider.Ollama (Ragex v0.11.0)


Ollama provider for running local LLMs.

Supports a local Ollama server for models such as llama2, mistral, and codellama.

Configuration

config :ragex, :ai_providers,
  ollama: [
    endpoint: "http://localhost:11434",
    model: "codellama",
    options: [
      temperature: 0.7
    ]
  ]
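At runtime the provider reads this configuration from the application environment. The snippet below is an illustrative sketch only, not the provider's actual internals; the fallback values simply repeat the example configuration above:

config = Application.get_env(:ragex, :ai_providers, [])[:ollama] || []

endpoint = Keyword.get(config, :endpoint, "http://localhost:11434")
model    = Keyword.get(config, :model, "codellama")
options  = Keyword.get(config, :options, [])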

No API Key Required

Ollama runs locally, so no API key is needed.

Supported Models

Any model installed in your local Ollama instance:

  • llama2 - Meta's Llama 2
  • mistral - Mistral AI's model
  • codellama - Code-specialized Llama
  • phi - Microsoft's small language model
  • And many more from https://ollama.ai/library
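To check which models are installed, Ollama exposes a GET /api/tags endpoint. The sketch below uses the Req HTTP client purely for illustration; Ragex may use a different client internally:

# List locally installed models via Ollama's /api/tags endpoint.
# Req is assumed here only for illustration.
%{"models" => models} = Req.get!("http://localhost:11434/api/tags").body

Enum.map(models, & &1["name"])
#=> ["codellama:latest", "mistral:latest", ...]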

Installation

Install Ollama from https://ollama.ai and pull a model:

ollama pull codellama

API Documentation

https://github.com/ollama/ollama/blob/main/docs/api.md
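Under the hood the provider talks to this HTTP API. Below is a minimal sketch of a non-streaming completion request against POST /api/generate, again using Req only for illustration; the exact request Ragex builds may differ:

# Send a prompt to the configured model and read back the generated text.
response =
  Req.post!("http://localhost:11434/api/generate",
    json: %{
      model: "codellama",
      prompt: "Write an Elixir function that reverses a list.",
      stream: false,
      options: %{temperature: 0.7}
    }
  )

response.body["response"]
#=> the generated completion as a string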