HyperLLM.Provider.LlamaCPP (hyper_llm v0.6.0)
Provider implementation for LlamaCPP.

It targets the llama.cpp server, which implements the OpenAI API format: https://github.com/ggerganov/llama.cpp/tree/master/examples/server
Configuration

api_key - The API key for the LlamaCPP API (optional).

base_url - The base URL for the LlamaCPP API.

```elixir
config :hyper_llm,
  llama_cpp: [
    api_key: "llamacpp",
    base_url: "http://localhost:8080"
  ]
```
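Once configured, the provider can be driven through the chat API. A minimal sketch, assuming models are routed to this provider with a `llama_cpp/` prefix and that HyperLLM.Chat.completion/3 takes a model, a message list, and options (both assumptions; see the HyperLLM.Chat docs for the authoritative signature):

```elixir
messages = [%{role: "user", content: "Hello from llama.cpp!"}]

# The "llama_cpp/..." model prefix and the argument order are assumptions;
# consult HyperLLM.Chat.completion/3 for the exact contract.
case HyperLLM.Chat.completion("llama_cpp/local-model", messages, []) do
  {:ok, response} -> IO.inspect(response)
  {:error, reason} -> IO.inspect(reason, label: "completion failed")
end
```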
Functions
```elixir
@spec completion(
        HyperLLM.Provider.completion_params(),
        HyperLLM.Provider.completion_config()
      ) :: {:ok, binary()} | {:error, binary()}
```

See HyperLLM.Chat.completion/3 for more information.
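Judging from the spec, the provider-level function takes a params map and a config, and returns the response body as a binary. A hedged sketch; the specific keys shown for completion_params/0 and completion_config/0 are assumptions modeled on the OpenAI-compatible chat format, not a confirmed contract:

```elixir
# The keys below (:model, :messages, :base_url, :api_key) are assumptions
# modeled on the OpenAI-compatible API, not a confirmed contract.
params = %{
  model: "local-model",
  messages: [%{role: "user", content: "Say hello."}]
}

case HyperLLM.Provider.LlamaCPP.completion(params,
       base_url: "http://localhost:8080",
       api_key: "llamacpp"
     ) do
  {:ok, body} -> IO.puts(body)
  {:error, reason} -> IO.puts("completion failed: #{inspect(reason)}")
end
```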
Check whether a model is supported by the provider.

All models are supported, because llama.cpp loads its model directly into the server at startup rather than selecting one per request.
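Since the check is unconditional, any model name passes. A sketch; the function name model_supported?/1 is an assumption based on the HyperLLM.Provider behaviour naming and is not stated on this page:

```elixir
# model_supported?/1 is assumed from the provider behaviour; any name
# returns true because the server loads its model at startup.
iex> HyperLLM.Provider.LlamaCPP.model_supported?("any-model")
true
```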