LMStudio (lmstudio v0.1.0)
Elixir client for LM Studio's chat completions API with streaming support.
Summary
Functions
- Convenience function for a single user message.
- Sends a chat completion request with optional streaming.
- Lists available models from LM Studio.
- Streams a completion response to a process.
Functions
Convenience function for a single user message.
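The signature for this function was lost in extraction; assuming it simply wraps a plain string into a single user message, the call below (the name ask/1 is hypothetical) would be equivalent to using complete/2 directly:
# Hypothetical name and equivalence; not confirmed by this page.
# LMStudio.ask("Hello!") would behave like:
LMStudio.complete([%{role: "user", content: "Hello!"}])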
complete(messages, opts \\ [])
Sends a chat completion request with optional streaming.
Parameters
- messages: List of message maps with :role and :content keys
- opts: Keyword list of options
  - :model - Model to use (default: from config)
  - :temperature - Temperature for sampling (default: from config)
  - :max_tokens - Maximum tokens to generate (default: from config)
  - :stream - Whether to stream the response (default: false)
  - :system_prompt - Convenience option to prepend a system message
  - :stream_callback - Function to handle streaming chunks (required if stream: true)
Examples
# Non-streaming request
{:ok, response} = LMStudio.complete([
  %{role: "user", content: "Hello, how are you?"}
])

# Streaming request
LMStudio.complete(
  [%{role: "user", content: "Tell me a story"}],
  stream: true,
  stream_callback: fn chunk -> IO.write(chunk) end
)

# With system prompt
LMStudio.complete(
  [%{role: "user", content: "What is Elixir?"}],
  system_prompt: "You are a helpful programming assistant."
)
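The shape of a successful non-streaming response is not shown on this page; a sketch for extracting the assistant's reply, assuming an OpenAI-compatible chat completions response decoded into string-keyed maps:
# Assumption: the response mirrors the OpenAI chat completions shape
# with string keys; adjust if this client decodes differently.
{:ok, response} = LMStudio.complete([%{role: "user", content: "Hello!"}])

response
|> get_in(["choices", Access.at(0), "message", "content"])
|> IO.puts()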
Lists available models from LM Studio.
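No signature or example survives on this page for the model-listing function; a sketch where the name list_models/0 and the {:ok, models} return shape are assumptions for illustration:
# Hypothetical: list_models/0 and its return shape are assumed,
# not confirmed by this page.
{:ok, models} = LMStudio.list_models()
Enum.each(models, &IO.inspect/1)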
stream_to_process(messages, pid, opts \\ [])
Streams a completion response to a process.
Parameters
- messages: List of message maps with :role and :content keys
- pid: The process to send tokens to
- opts: Keyword list of options (same as complete/2 but without :stream_callback)
Message Format
The target process will receive messages in the following formats:
- {:lmstudio_token, token} for each token
- {:lmstudio_done, :ok} when streaming is complete
- {:lmstudio_error, reason} if an error occurs
Example
# Spawn a receiver process (this code is assumed to live inside a
# module, since receive_tokens/0 below is private)
pid = spawn(fn -> receive_tokens() end)

LMStudio.stream_to_process(
  [%{role: "user", content: "Hello!"}],
  pid
)

defp receive_tokens do
  receive do
    {:lmstudio_token, token} ->
      # Write without a trailing newline so streamed tokens join up
      IO.write(token)
      receive_tokens()

    {:lmstudio_done, :ok} ->
      IO.puts("\nDone!")

    {:lmstudio_error, reason} ->
      IO.puts("Error: #{inspect(reason)}")
  end
end
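Because the tokens arrive as ordinary messages, the caller can also stream to itself; a minimal sketch that sends to self() and folds the tokens into a single binary (the collect loop is illustrative, not part of the library):
LMStudio.stream_to_process([%{role: "user", content: "Hello!"}], self())

# Illustrative accumulation loop; not part of the library API.
collect = fn collect, acc ->
  receive do
    {:lmstudio_token, token} -> collect.(collect, [acc, token])
    {:lmstudio_done, :ok} -> {:ok, IO.iodata_to_binary(acc)}
    {:lmstudio_error, reason} -> {:error, reason}
  end
end

{:ok, text} = collect.(collect, [])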