BatchedCommunication (batched_communication v0.1.4)

Mostly-transparent batching of remote messages in an Erlang/Elixir cluster.


Features & Design

For high-throughput use cases, the Erlang VM's default remote messaging may not be optimal. Messages that are not latency-sensitive can be sent in compressed batches to save network bandwidth and reduce TCP overhead.

  • BatchedCommunication.cast/2, BatchedCommunication.call/3, etc. (which behave similarly to GenServer.cast/2, GenServer.call/3, etc.) are provided; a usage sketch follows this list.

  • Remote messages sent using BatchedCommunication are relayed through the following processes:

    • A BatchedCommunication.Sender process on the sender node, which buffers messages for a while, (optionally) compresses them, and sends the batch to the Receiver process on the destination node.
    • A BatchedCommunication.Receiver process on the receiver node, which decodes the received batch and dispatches messages to each destination process.
  • There are 32 Senders and 32 Receivers on each node (for concurrency within a node); the actual Sender and Receiver processes are chosen by hashing the receiver node and the sender node, respectively. Message passing via BatchedCommunication therefore preserves the order of messages between each pair of processes (just as ordinary Erlang message passing does), since all messages between a given pair of nodes go through the same Sender and Receiver.

  • The following configuration parameters can be modified at runtime:

    • maximum wait time before sending messages as a batch
    • maximum number of accumulated messages (during wait time) in a batch
    • whether to compress batched messages or not
  • Of course, local messages (i.e., messages sent within a single node) are delivered immediately, bypassing the batching mechanism described above.
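
For illustration, here is a minimal sketch of a client module whose remote requests go through the batching layer. The module and node names are placeholders, not part of this library:

    defmodule MyApp.RemoteClient do
      # Hypothetical wrapper module; MyApp.Worker is assumed to be a GenServer
      # registered under that name on the destination node.
      @worker MyApp.Worker

      # Asynchronous: buffered by the local Sender and shipped in a batch.
      def put(node, key, value) do
        BatchedCommunication.cast({@worker, node}, {:put, key, value})
      end

      # Synchronous: behaves like GenServer.call/3 with a 5-second timeout.
      def get(node, key) do
        BatchedCommunication.call({@worker, node}, {:get, key}, 5_000)
      end
    end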

Summary

Functions

Sends the same message to multiple destination processes

Makes a synchronous request to the given destination process

Sends an asynchronous message to the given destination process

Changes whether to compress each batch of messages or not

Sets maximum number of messages to accumulate in one batch

Sets maximum wait time (in milliseconds) before sending messages as a batch

Collects statistics of batches sent from this node to the specified node during the specified duration (in milliseconds)

Gets the current configurations

Sends a reply to a client that has sent a synchronous request

Sends message to the destination process dest with a batching mechanism

Types

batch_stats()
batch_stats() ::
  {n_messages :: pos_integer(), raw_bytes :: pos_integer(),
   sent_bytes :: pos_integer()}

configurations()
configurations() :: %{
  max_wait_time: pos_integer(),
  max_messages_per_batch: pos_integer(),
  compression: BatchedCommunication.Compression.t()
}

dest()
dest() :: pid() | atom() | {atom(), node()}

message()
message() :: any()

Functions

broadcast(dests, message)
broadcast([dest()], message()) :: :ok

Sends the same message to multiple destination processes.
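
For example (a sketch; the registered name and node names are placeholders):

    # Send the same notification to the :event_handler process on several nodes.
    dests = for n <- [:"app@host1", :"app@host2"], do: {:event_handler, n}
    :ok = BatchedCommunication.broadcast(dests, {:notify, :cache_invalidated})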

call(dest, msg, t \\ 5000)
call(
  dest(),
  message(),
  timeout() | {:clean_timeout, timeout()} | {:dirty_timeout, timeout()}
) :: reply()

Makes a synchronous request to the given destination process.

When you want to batch multiple messages to the same destination node, you can use this function as a replacement for GenServer.call/3.
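
For example (a sketch; the registered name :price_cache and the node name are placeholders):

    # Behaves like GenServer.call/3 with a 10-second timeout; the request is
    # shipped in a batch together with other messages bound for the same node.
    BatchedCommunication.call({:price_cache, :"app@other-host"}, {:fetch, :some_key}, 10_000)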

cast(dest, msg)
cast(dest(), message()) :: :ok

Sends an asynchronous message to the given destination process.

When you want to batch multiple messages to the same destination node, you can use this function as a replacement for GenServer.cast/2.
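
For example (a sketch; the registered name and node name are placeholders):

    # Fire-and-forget; buffered by the local Sender and delivered as part of a batch.
    :ok = BatchedCommunication.cast({:metrics_collector, :"app@other-host"}, {:increment, :requests})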

change_compression(compression)
change_compression(BatchedCommunication.Compression.t()) :: :ok

Changes whether to compress each batch of messages or not.

Currently supported values are :gzip and :raw (no compression). Defaults to :gzip.
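
For example:

    :ok = BatchedCommunication.change_compression(:raw)   # disable compression
    :ok = BatchedCommunication.change_compression(:gzip)  # back to the default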

change_max_messages_per_batch(max)
change_max_messages_per_batch(pos_integer()) :: :ok

Sets maximum number of messages to accumulate in one batch.

When a BatchedCommunication.Sender process has accumulated at least this many messages, it immediately sends them in one batch (i.e., without waiting for the timer; see also change_max_wait_time/1). Defaults to 100 messages.
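
For example (the value is only an illustration):

    # Flush a batch as soon as 500 messages have accumulated for a destination node.
    :ok = BatchedCommunication.change_max_messages_per_batch(500)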

change_max_wait_time(time)
change_max_wait_time(pos_integer()) :: :ok

Sets maximum wait time (in milliseconds) before sending messages as a batch.

When a BatchedCommunication.Sender process receives a message for a particular destination node, it starts a timer with the maximum wait time. When the timer fires, the accumulated messages are sent in one batch. Defaults to 100 milliseconds.
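
For example (the value is only an illustration):

    # Wait at most 20 ms before flushing buffered messages as a batch.
    :ok = BatchedCommunication.change_max_wait_time(20)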

collect_sending_stats(dest_node, duration)
collect_sending_stats(node(), pos_integer()) :: [batch_stats()]

Collects statistics of batches sent from this node to the specified node during the specified duration (in milliseconds).

Each element of the returned list is a 3-tuple consisting of:

  • the number of messages in the batch
  • the byte size of the batch before compression
  • the byte size of the batch after compression (equal to the previous value when the :raw compression option is used)
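
A sketch of inspecting one second's worth of batches sent to another node (the node name is a placeholder):

    # Observe batches sent to :"app@other-host" for 1000 ms.
    stats = BatchedCommunication.collect_sending_stats(:"app@other-host", 1_000)

    Enum.each(stats, fn {n_messages, raw_bytes, sent_bytes} ->
      IO.puts("#{n_messages} messages: #{raw_bytes} -> #{sent_bytes} bytes")
    end)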

get_configurations()
get_configurations() :: configurations()

Gets the current configurations.
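
For example, with the documented defaults (per the configurations() type above):

    # Returns a map of the current settings, e.g.
    # %{max_wait_time: 100, max_messages_per_batch: 100, compression: :gzip}
    BatchedCommunication.get_configurations()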

reply(a0, reply)
reply({pid(), reference()}, reply()) :: :ok

Sends a reply to a client that has sent a synchronous request.

When you want to batch multiple messages to the same destination node, you can use this function as a replacement for GenServer.reply/2.
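
A sketch of a deferred reply inside a GenServer on the destination node, assuming the request arrived via BatchedCommunication.call/3 and that from has the {pid, reference} shape given in the spec above (do_long_job/1 is a placeholder):

    def handle_call({:long_job, arg}, from, state) do
      # Defer the reply; send it later through the batching layer instead of
      # GenServer.reply/2.
      Task.start(fn ->
        BatchedCommunication.reply(from, do_long_job(arg))
      end)
      {:noreply, state}
    end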

send(dest, message)
send(dest(), message()) :: :ok

Sends message to the destination process dest with a batching mechanism.
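
For example (a sketch; remote_pid is a placeholder for the pid of a process on another node):

    # Batched counterpart of plain message sending with send/2.
    :ok = BatchedCommunication.send(remote_pid, {:event, :user_signed_in})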