raft_fleet v0.3.1 RaftFleet

Public interface functions of RaftFleet.

Summary

Functions

Activates Node.self()

Queries the nodes in the cluster that have been activated using activate/1

Registers a new consensus group identified by name

Executes a command on the replicated value identified by name

Queries already registered consensus groups

Deactivates Node.self()

Executes a read-only query on the replicated value identified by name

Removes an existing consensus group identified by name

Called when an application is started

Tries to find the current leader of the consensus group specified by name

Functions

activate(zone)
activate(RaftFleet.ZoneId.t) :: :ok | {:error, :not_inactive}

Activates Node.self().

When :raft_fleet is started as an OTP application, the node is not yet active; to host consensus group members, each node must be explicitly activated. zone is the ID of the data center zone to which this node belongs. zone is used to determine the nodes on which to place replicas: RaftFleet tries to distribute the members of each consensus group across multiple zones for maximum availability.

Node activation by calling this function should be done after the node is fully connected to the other existing nodes; otherwise there is a small possibility that the cluster forms a partitioned subset of active nodes.
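A minimal usage sketch (the node name :"node1@host1" and the zone ID "zone-a" are illustrative):

    # Connect to the existing cluster first, then activate this node
    # so that it can host consensus group members.
    true = Node.connect(:"node1@host1")
    :ok = RaftFleet.activate("zone-a")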

active_nodes()
active_nodes() :: %{optional(RaftFleet.ZoneId.t) => [node]}

Queries the nodes in the cluster that have been activated using activate/1.

This function sends a query to a leader of the “cluster consensus group”, which is managed internally by raft_fleet. The returned value is grouped by the zone IDs that have been passed to activate/1. This function exits if no active node exists in the cluster.
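For illustration (the zone IDs and node names shown are hypothetical):

    RaftFleet.active_nodes()
    # => %{"zone-a" => [:"node1@host1", :"node2@host2"], "zone-b" => [:"node3@host3"]}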

add_consensus_group(name, n_replica, rv_config)
add_consensus_group(atom, pos_integer, RaftedValue.Config.t) ::
  :ok |
  {:error, :already_added | :no_leader | any}

Registers a new consensus group identified by name.

name is used as the registered name for the member processes of the new consensus group. n_replica is the number of replicas (Raft member processes, implemented as RaftedValue.Server processes). For an explanation of rv_config, see RaftedValue.make_config/2.

If you configure raft_fleet to persist Raft logs & snapshots (see :persistence_dir_parent in RaftFleet.Config) and a consensus group with the same name had previously been removed by remove_consensus_group/1, then add_consensus_group/3 restores the state of the consensus group from the snapshot and log files.
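A minimal sketch of registering a 3-replica group; the data module MyApp.Counter, its callback bodies and the group name :my_counter are illustrative assumptions built on the RaftedValue.Data behaviour:

    defmodule MyApp.Counter do
      @behaviour RaftedValue.Data

      @impl true
      def new(), do: 0

      @impl true
      def command(count, :increment), do: {count, count + 1}  # {return value, new state}

      @impl true
      def query(count, :get), do: count
    end

    rv_config = RaftedValue.make_config(MyApp.Counter)
    :ok = RaftFleet.add_consensus_group(:my_counter, 3, rv_config)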

command(name, command_arg, timeout \\ 500, retry \\ 3, retry_interval \\ 1000)
command(atom, RaftedValue.Data.command_arg, pos_integer, non_neg_integer, pos_integer) ::
  {:ok, RaftedValue.Data.command_ret} |
  {:error, :no_leader}

Executes a command on the replicated value identified by name.

The target consensus group identified by name must be registered beforehand using add_consensus_group/3. This function automatically resolves the leader process of the consensus group, caches the PID of the current leader in a local ETS table, and sends the given command to the leader.

timeout is used for each synchronous message exchange. In order to tolerate the temporary absence of a leader during Raft leader elections, this function retries the request up to retry times, sleeping for retry_interval milliseconds before each retry. In the worst case this function therefore blocks the caller for timeout * (retry + 1) + retry_interval * retry milliseconds; with the default arguments this is 500 * 4 + 1000 * 3 = 5000 milliseconds. Note that to completely mask leader elections, retry_interval * retry must be sufficiently longer than the time scale of a leader election (:election_timeout in RaftedValue.Config.t).

See also RaftedValue.command/4.
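For example, assuming the :my_counter group from the add_consensus_group/3 sketch above:

    # Increment the replicated counter; the value in the :ok tuple is whatever
    # the data module's command/2 callback returns (here, the previous count).
    {:ok, previous_count} = RaftFleet.command(:my_counter, :increment)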

consensus_groups()
consensus_groups() :: %{optional(atom) => pos_integer}

Queries already registered consensus groups.

This function sends a query to a leader of the “cluster consensus group”, which is managed internally by raft_fleet. The returned value is a map whose keys are consensus group names and whose values are the number of replicas in each group. This function exits if no active node exists in the cluster.
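For illustration (the group names and replica counts shown are hypothetical):

    RaftFleet.consensus_groups()
    # => %{my_counter: 3, my_other_group: 5}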

deactivate()
deactivate() :: :ok | {:error, :inactive}

Deactivates Node.self().

Call this function before removing an Erlang VM from your cluster. Note that calling this function does not immediately remove the consensus member processes on this node; these processes are gradually migrated to other nodes by periodic rebalancing.
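A sketch of a graceful node removal (the shutdown strategy around it is an assumption, not part of the raft_fleet API):

    :ok = RaftFleet.deactivate()
    # Member processes hosted on this node are migrated away by periodic
    # rebalancing; shut the node down only after giving them time to move.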

query(name, query_arg, timeout \\ 500, retry \\ 3, retry_interval \\ 1000)
query(atom, RaftedValue.Data.query_arg, pos_integer, non_neg_integer, pos_integer) ::
  {:ok, RaftedValue.Data.query_ret} |
  {:error, :no_leader}

Executes a read-only query on the replicated value identified by name.

See command/5 for explanations of name, timeout, retry and retry_interval. See also RaftedValue.query/3.
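For example, reading the counter from the sketches above:

    {:ok, current_count} = RaftFleet.query(:my_counter, :get)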

remove_consensus_group(name)
remove_consensus_group(atom) ::
  :ok |
  {:error, :not_found | :no_leader}

Removes an existing consensus group identified by name.

Removing a consensus group will eventually trigger terminations of all members of the group. The replicated value held by the group will be discarded.

Note that remove_consensus_group/1 does not immediately terminate the existing member processes; they are terminated afterward by a background worker process (see also :balancing_interval in RaftFleet.Config). Note also that, if Raft logs and snapshots have been created (see :persistence_dir_parent in RaftFleet.Config), remove_consensus_group/1 does not remove those files.
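For example, removing the group registered in the add_consensus_group/3 sketch above:

    :ok = RaftFleet.remove_consensus_group(:my_counter)
    # Member processes are terminated later by the background worker;
    # persisted log/snapshot files (if any) remain on disk.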

start(start_type, start_args)

Called when an application is started.

This function is called when an application is started using Application.start/2 (and functions on top of that, such as Application.ensure_started/2). This function should start the top-level process of the application (which should be the top supervisor of the application’s supervision tree if the application follows the OTP design principles around supervision).

start_type defines how the application is started:

  • :normal - used if the startup is a normal startup or if the application is distributed and is started on the current node because of a failover from another node and the application specification key :start_phases is :undefined.
  • {:takeover, node} - used if the application is distributed and is started on the current node because of a takeover on node node.
  • {:failover, node} - used if the application is distributed and is started on the current node because of a failover on node node, and the application specification key :start_phases is not :undefined.

start_args are the arguments passed to the application in the :mod specification key (e.g., mod: {MyApp, [:my_args]}).

This function should either return {:ok, pid} or {:ok, pid, state} if startup is successful. pid should be the PID of the top supervisor. state can be an arbitrary term, and if omitted will default to []; if the application is later stopped, state is passed to the stop/1 callback (see the documentation for the c:stop/1 callback for more information).

use Application provides no default implementation for the start/2 callback.

Callback implementation for Application.start/2.
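As a generic sketch of such a callback module (unrelated to RaftFleet's own implementation; MyApp and MyApp.Worker are hypothetical):

    defmodule MyApp do
      use Application

      @impl true
      def start(_start_type, start_args) do
        children = [
          {MyApp.Worker, start_args}
        ]
        # The returned pid is the top supervisor of MyApp's supervision tree.
        Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
      end
    end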

whereis_leader(name)
whereis_leader(atom) :: nil | pid

Tries to find the current leader of the consensus group specified by name.
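For example (the group name is taken from the sketches above):

    case RaftFleet.whereis_leader(:my_counter) do
      nil        -> :no_leader_currently_known
      leader_pid -> node(leader_pid)  # e.g. :"node2@host2"
    end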