Slop.Streaming (slop v0.1.0)

Provides helpers for streaming responses in SLOP protocol endpoints.

The SLOP protocol supports streaming responses through various mechanisms:

  1. Server-Sent Events (SSE)
  2. Chunked transfers
  3. WebSockets

This module provides helpers for implementing streaming in Phoenix controllers that serve SLOP endpoints.

Usage

defmodule MyAppWeb.SlopChatController do
  use Slop.Controller, :chat
  import Slop.Streaming

  def handle_chat(conn, params) do
    # Start streaming response
    conn = start_streaming(conn)

    # Incoming messages (a real implementation would use these)
    _messages = params["messages"] || []

    # Stream back tokens one by one (simulating AI response)
    response = "Hello, I am an AI assistant. How can I help you today?"

    for word <- String.split(response) do
      :timer.sleep(100) # Simulate thinking time
      stream_chunk(conn, %{
        type: "content",
        content: word <> " "
      })
    end

    # Signal completion
    stream_chunk(conn, %{
      type: "done",
      content: ""
    })

    # Return the connection
    conn
  end
end

Summary

Functions

Starts a plain HTTP chunked-transfer response for streaming (an alternative to SSE).

Starts streaming a response using Server-Sent Events (SSE).

Sends a chunk of data in the standard SLOP streaming format.

Helper for streaming a complete AI response word by word.

Functions

start_chunked_streaming(conn)

Starts a plain HTTP chunked-transfer response for streaming (an alternative to SSE).

Returns a connection that's ready for streaming chunks.
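
For intuition, the equivalent hand-rolled setup in plain Plug looks roughly like the sketch below; the content type and status code are assumptions, and the function may set different values.

import Plug.Conn

# A plain chunked response: headers are sent immediately and the body
# is delivered piece by piece. The content type here is an assumption.
conn
|> put_resp_content_type("application/json")
|> send_chunked(200)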

start_streaming(conn)

Starts streaming a response using Server-Sent Events (SSE).

Returns a connection that's ready for streaming chunks.
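
For comparison, a hand-rolled SSE setup in plain Plug looks roughly like this; the exact headers set by start_streaming/1 are assumptions.

import Plug.Conn

# Typical SSE response setup (headers are assumptions):
conn
|> put_resp_content_type("text/event-stream")
|> put_resp_header("cache-control", "no-cache")
|> send_chunked(200)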

stream_chunk(conn, data)

Sends a chunk of data in the standard SLOP streaming format.

The chunk should be a map that will be encoded as JSON, and it should include a type field indicating the kind of chunk:

  • "content" - A piece of content being streamed
  • "done" - Signals that the stream is complete
  • "error" - Indicates an error occurred
  • "thinking" - Indicates the AI is processing (optional)
  • "tool_start" - Indicates a tool is being called (optional)
  • "tool_result" - Contains the result of a tool call (optional)

Examples

stream_chunk(conn, %{type: "content", content: "Hello, "})
stream_chunk(conn, %{type: "content", content: "world!"})
stream_chunk(conn, %{type: "done", content: ""})
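
The optional chunk types use the same map shape. The content values below are illustrative assumptions; only the type values are specified above.

stream_chunk(conn, %{type: "thinking", content: "Searching the docs..."})
stream_chunk(conn, %{type: "tool_start", content: "search"})
stream_chunk(conn, %{type: "tool_result", content: "3 matches found"})
stream_chunk(conn, %{type: "error", content: "Upstream request timed out"})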

stream_response(conn, response, word_transformer \\ &(&1 <> " "))

Helper for streaming a complete AI response word by word.

This function takes care of the streaming logic, including starting the stream, sending content chunks, and ending the stream.

Example

def handle_chat(conn, params) do
  _messages = params["messages"] || []

  # In a real implementation, call an AI API that streams tokens
  response = "Hello, I am an AI assistant. How can I help you today?"

  stream_response(conn, response, fn word ->
    :timer.sleep(100) # Simulate thinking time
    word <> " "
  end)
end