config() = #{endpoint => string(),
             chat_endpoint => string(),
             model => binary(),
             stream => boolean(),
             temperature => float(),
             max_tokens => integer(),
             system_prompt => binary(),
             additional_options => map()}
message() = #{role => binary(), content => binary()}
messages() = [message()]
ollama_result() = {ok, binary()} | {error, term()}
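As a rough illustration of these shapes (field values below are placeholders, not the module's defaults):

  Config = #{model => <<"llama2">>, temperature => 0.2, max_tokens => 512, stream => false},
  Messages = [#{role => <<"system">>, content => <<"You are terse.">>},
              #{role => <<"user">>, content => <<"Hello!">>}].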
chat/1 | Chat completion using messages format with default/environment configuration.
chat/2 | Chat completion using messages format with custom configuration.
default_config/0 | Get default hardcoded configuration.
format_prompt/2 | Format a prompt template with given arguments.
generate/1 | Generate text using a simple prompt with default/environment configuration.
generate/2 | Generate text using a simple prompt with custom configuration.
generate_with_context/2 | Generate text with additional context using default/environment configuration.
generate_with_context/3 | Generate text with additional context using custom configuration.
get_env_config/0 | Get configuration from environment variables with fallback to defaults.
merge_config/2 | Merge two configurations, with the second one taking precedence.
print_result/1 | Print the result of an Ollama operation to stdout.
chat(Messages::messages()) -> ollama_result()
Chat completion using messages format with default/environment configuration.
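A minimal usage sketch, assuming the module is named ollama:

  Messages = [#{role => <<"user">>, content => <<"Name one prime number.">>}],
  case ollama:chat(Messages) of
      {ok, Reply}     -> io:format("~s~n", [Reply]);
      {error, Reason} -> io:format("chat failed: ~p~n", [Reason])
  end.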
chat(Messages::messages(), Config::config()) -> ollama_result()
Chat completion using messages format with custom configuration.
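As above, but with an explicit configuration (module name and model value are assumptions):

  Messages = [#{role => <<"user">>, content => <<"Hello!">>}],
  Config = #{model => <<"llama2">>, temperature => 0.2},
  ollama:chat(Messages, Config).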
default_config() -> config()
Get default hardcoded configuration.
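For example (module name assumed), the returned map can be inspected or used as a base for merge_config/2:

  Base = ollama:default_config(),
  maps:get(model, Base).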
format_prompt(Template::string(), Args::list()) -> binary()
Format a prompt template with given arguments. Similar to io_lib:format/2 but returns a binary.
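A sketch of the intended use (module name assumed):

  Prompt = ollama:format_prompt("Summarize ~s in at most ~p words.", ["the following text", 20]),
  ollama:generate(Prompt).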
generate(Prompt::string() | binary()) -> ollama_result()
Generate text using a simple prompt with default/environment configuration.
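A minimal call, assuming the module is named ollama:

  ollama:generate(<<"Why is the sky blue?">>).
  %% -> {ok, ResponseBinary} or {error, Reason}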
generate(Prompt::string() | binary(), Config::config()) -> ollama_result()
Generate text using a simple prompt with custom configuration.
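The same call with per-request overrides (module name and model value are assumptions):

  Config = #{model => <<"llama2">>, max_tokens => 200},
  ollama:generate("Write a haiku about Erlang.", Config).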
generate_with_context(Context::string() | binary(), Prompt::string() | binary()) -> ollama_result()
Generate text with additional context using default/environment configuration.
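A short sketch, assuming the module is named ollama:

  Context = <<"The meeting was moved to Friday at 14:00.">>,
  ollama:generate_with_context(Context, <<"When is the meeting?">>).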
generate_with_context(Context::string() | binary(), Prompt::string() | binary(), Config::config()) -> ollama_result()
Generate text with additional context using custom configuration.
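As above, with an explicit configuration (module name and values are assumptions):

  Context = <<"The meeting was moved to Friday at 14:00.">>,
  Config = #{temperature => 0.0},
  ollama:generate_with_context(Context, <<"When is the meeting?">>, Config).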
get_env_config() -> config()
Get configuration from environment variables with fallback to defaults.

Environment variables:
- OLLAMA_ENDPOINT: Ollama API endpoint (default: http://localhost:11434/api/generate)
- OLLAMA_CHAT_ENDPOINT: Ollama Chat API endpoint (default: http://localhost:11434/api/chat)
- OLLAMA_MODEL: Model name to use (default: llama2)
- OLLAMA_TEMPERATURE: Temperature for generation (default: 0.7)
- OLLAMA_MAX_TOKENS: Maximum tokens to generate (default: 1000)
- OLLAMA_STREAM: Whether to stream responses (default: false)
- OLLAMA_SYSTEM_PROMPT: System prompt to use
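A sketch of overriding a value through the environment (module name assumed; this presumes the variables are read from the process environment, e.g. via os:getenv/1):

  os:putenv("OLLAMA_MODEL", "mistral"),
  Config = ollama:get_env_config(),
  maps:get(model, Config).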
merge_config(Config1::config(), Config2::config()) -> config()
Merge two configurations, with the second one taking precedence.
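For example (module name assumed):

  Base = ollama:default_config(),
  Overrides = #{model => <<"mistral">>, temperature => 0.1},
  Config = ollama:merge_config(Base, Overrides).
  %% values from Overrides win over those in Base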
print_result(Result::ollama_result()) -> ok | error
Print the result of an Ollama operation to stdout. Returns 'ok' if successful, 'error' otherwise.
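A typical pairing with generate/1 (module name assumed):

  Result = ollama:generate(<<"Hello">>),
  ollama:print_result(Result).
  %% prints the response text (or the error term) and returns ok on success, error otherwise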
Generated by EDoc