Module sthrottle

This module provides a concurrency limiting service.

Behaviours: gen_fsm.

Description

This module provides a concurrency limiting service. A process joins a queue and remains there until the number of active processes goes below the limit. The queue can be actively managed using an squeue callback module, and passively managed using head or tail drop. Processes that die while in the queue are automatically removed and active processes that die are replaced by processes in the queue. The concurrency limit can be altered using the built-in algorithm and/or a custom (manual) feedback loop.

To join the queue a process calls ask/1, which blocks until the process can start the task. Once the task is completed, done/2 releases the lock.
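For example, a worker could gain the lock, run its task and then release the lock as sketched below (Throttle is assumed to be the pid or registered name of a throttle; do_task/0 is a placeholder for the actual work):

  case sthrottle:ask(Throttle) of
      {go, Ref, Pid, _SojournTime} ->
          %% Lock acquired: run the task and always release the lock.
          try
              do_task()
          after
              sthrottle:done(Pid, Ref)
          end;
      {drop, _SojournTime} ->
          %% Dropped from the queue: the task is not run.
          {error, dropped}
  end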

To increase the concurrency limit (up to the maximum) call positive/1, and to decrease it (down to the minimum) call negative/1.

To use the built-in feedback loop, call signal/3 on the result of another queue attempt (e.g. sbroker:ask/1). This reduces the concurrency limit when processes are dropped, and increases it when two queue attempts in a row succeed with a sojourn time of 0.

Data Types

name()

name() = {local, atom()} | {global, any()} | {via, module(), any()}

queue_spec()

queue_spec() = {module(), any(), out | out_r, non_neg_integer() | infinity, drop | drop_r}

throttle()

throttle() = pid() | atom() | {atom(), node()} | {global, any()} | {via, module(), any()}

Function Index

ask/1         Tries to gain access to a work lock.
async_ask/1   Sends an asynchronous request to gain access to a work lock.
cancel/2      Cancels an asynchronous request.
done/2        Releases the lock represented by Ref.
erase/1       Removes process dictionary entries relating to the lock Ref.
negative/1    Applies negative feedback to the throttle.
positive/1    Applies positive feedback to the throttle.
signal/3      Sends a signal to the throttle based on a queue attempt response.
start_link/0  Starts a throttle with default limits and queues.
start_link/1  Starts a registered throttle with default limits and queue.
start_link/4  Starts a throttle with custom limits and queue.
start_link/5  Starts a registered throttle with custom limits and queue.

Function Details

ask/1

ask(Throttle) -> {go, Ref, Pid, SojournTime} | {drop, SojournTime}

Tries to gain access to a work lock. Returns {go, Ref, Pid, SojournTime} on success or {drop, SojournTime} on failure.

Ref is the lock reference, which is a reference(). Pid is the pid() of the throttle. SojournTime is the time spent in the queue in milliseconds.

A process should stop the task if Pid exits, as the lock is lost. Usually this can be achieved by using a rest_for_one or one_for_all supervisor that will shut down the worker if the throttle exits. If this is not in place, a monitor or link (warning: the throttle does not trap exits) can be used.

The Pid should be used as the Throttle in future calls that use the lock, such as done/2 and signal/3.
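If no supervisor ties the caller to the throttle, one possible sketch is to monitor the returned Pid and run the task in a separate process, stopping it if the throttle exits (do_task/0 is again a placeholder):

  {go, Ref, Pid, _SojournTime} = sthrottle:ask(Throttle),
  ThrottleMRef = monitor(process, Pid),
  {TaskPid, TaskMRef} = spawn_monitor(fun do_task/0),
  receive
      {'DOWN', ThrottleMRef, process, Pid, _Reason} ->
          %% The throttle exited so the lock is lost: stop the task.
          exit(TaskPid, shutdown);
      {'DOWN', TaskMRef, process, TaskPid, _Reason} ->
          %% The task finished (or crashed): release the lock.
          demonitor(ThrottleMRef, [flush]),
          sthrottle:done(Pid, Ref)
  end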

async_ask/1

async_ask(Throttle) -> ARef

Sends an asynchronous request to gain access to a work lock. Returns a reference(), ARef, which can be used to identify the reply containing the result of the request, or to cancel the request using cancel/2.

The reply is of the form {ARef, {go, Ref, Pid, SojournTime}} or {ARef, {drop, SojournTime}}.

Multiple asynchronous requests can be made from a single process to a throttle and no guarantee is made of the order of replies. If the throttle exits or is on a disconnected node there is no guarantee of a reply, so the caller should take appropriate steps to handle this scenario.
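For example, the reply could be matched on the returned ARef (a sketch; other work can be done before waiting):

  ARef = sthrottle:async_ask(Throttle),
  %% ... do other work while queued ...
  receive
      {ARef, {go, Ref, Pid, _SojournTime}} ->
          {go, Ref, Pid};
      {ARef, {drop, _SojournTime}} ->
          drop
  end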

See also: cancel/2.

cancel/2

cancel(Throttle, ARef) -> ok | {error, not_found}

Cancels an asynchronous request. Returns ok on success and {error, not_found} if the request does not exist. In the latter case a caller may wish to check its message queue for an existing reply.
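For example, after a failed cancel the message queue can be flushed in case the lock was granted before the request was cancelled (a sketch):

  case sthrottle:cancel(Throttle, ARef) of
      ok ->
          cancelled;
      {error, not_found} ->
          %% The reply may already have arrived: flush it and
          %% release the lock if it was granted.
          receive
              {ARef, {go, Ref, Pid, _SojournTime}} ->
                  sthrottle:done(Pid, Ref);
              {ARef, {drop, _SojournTime}} ->
                  ok
          after
              0 -> ok
          end
  end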

See also: async_ask/1.

done/2

done(Throttle, Ref) -> ok | {error, not_found}

Releases the lock represented by Ref. Returns ok on success and {error, not_found} if the request does not exist.

See also: ask/1.

erase/1

erase(Ref) -> ok

Removes process dictionary entries relating to the lock Ref.

signal/3 may use the process dictionary to store state. This is cleaned up by signal/3 when it returns {done, SojournTime} and by done/2. However, if the throttle exits and the owner of the lock does not, this function should be called to prevent a leak.

This function can also be used to forget any data used by signal/3 while the lock is still active. Future calls to signal/3 on the same lock will still work and may re-add an entry to the process dictionary.
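For example, a lock owner monitoring the throttle (with monitor reference MRef) could clean up when the throttle exits (a sketch):

  receive
      {'DOWN', MRef, process, ThrottlePid, _Reason} ->
          %% The throttle exited but this process lives on: remove any
          %% process dictionary entries stored by signal/3 for Ref.
          sthrottle:erase(Ref)
  end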

negative/1

negative(Throttle) -> ok

Applies negative feedback to the throttle. Decreases the concurrency limit by 1, down to the minimum.

positive/1

positive(Throttle) -> ok

Applies positive feedback to the throttle. Increases the concurrency limit by 1, up to the maximum.

signal/3

signal(Throttle, Ref, Response) -> Response

Sends a signal to the throttle based on a queue attempt response. Returns the response unchanged if it is a go tuple. If the response is a drop tuple, returns a new response, which might be the same drop tuple or one of the following:

{done, SojournTime} means the concurrency lock is lost and SojournTime is the sojourn time from the initial response.

{not_found, SojournTime} means the concurrency lock did not exist on the throttle and SojournTime is the sojourn time from the initial response.

This function is designed to control the concurrency limit of a throttle process based on a queue attempt on a different queue (such as the result of sbroker:ask/1), not on queue attempts on the throttle itself.
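For example, a process holding lock Ref on throttle Pid could pass the result of an sbroker queue attempt straight through signal/3 and react to the possible responses described above (a sketch; Broker is assumed to be an sbroker process):

  case sthrottle:signal(Pid, Ref, sbroker:ask(Broker)) of
      {go, _, _, _} = Go ->
          %% The broker attempt succeeded; the throttle lock is kept.
          Go;
      {drop, _SojournTime} = Drop ->
          %% The broker attempt was dropped but the throttle lock is kept.
          Drop;
      {done, _SojournTime} ->
          %% The broker attempt was dropped and the throttle lock is lost.
          done;
      {not_found, _SojournTime} ->
          %% The lock Ref did not exist on the throttle.
          not_found
  end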

start_link/0

start_link() -> {ok, Pid} | {error, Reason}

Starts a throttle with default limits and queues. The default queue uses squeue_timeout with a timeout of 5000, which means that items are dropped if they spend longer than 5000ms in the queue. The queue has a size of infinity and uses out to dequeue items. The tick interval is 200, so the active queue management timeout strategy is applied at least every 200ms. The minimum (and initial) concurrency limit is 0 and the maximum is infinity.
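Since the default queue size is infinity the drop strategy is never used; assuming drop, these defaults correspond approximately to the following explicit start_link/4 call:

  {ok, Pid} = sthrottle:start_link(0, infinity,
                                   {squeue_timeout, 5000, out, infinity, drop},
                                   200)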

start_link/1

start_link(Name) -> {ok, Pid} | {error, Reason}

Starts a registered throttle with default limits and queue.

See also: start_link/0.

start_link/4

start_link(Min, Max, AskingSpec, Interval) -> {ok, Pid} | {error, Reason}

Starts a throttle with custom limits and queue.

The first argument, Min, is the minimum (and initial) concurrency limit and is a non_neg_integer(). The second argument, Max, is the maximum concurrency limit; it is a non_neg_integer() or infinity and must be greater than or equal to Min.

The third argument, AskingSpec, is the queue specification for the queue. Processes that call ask/1 (or async_ask/1) join this queue until they gain a lock or are dropped. The fourth argument, Interval, is the interval in milliseconds at which the queue is polled. This ensures that the active queue management strategy is applied even if no processes are enqueued or dequeued.

A queue specification takes the following form: {Module, Args, Out, Size, Drop}. Module is the squeue callback module and Args are its arguments. The queue is created using squeue:new(Module, Args). Out defines the method of dequeuing: either the atom out (dequeue items from the head, i.e. FIFO) or the atom out_r (dequeue items from the tail, i.e. LIFO). Size is the maximum size of the queue: either a non_neg_integer() or infinity. Drop defines the strategy to take when the maximum size, Size, of the queue is exceeded. It is either the atom drop (drop from the head of the queue, i.e. head drop) or drop_r (drop from the tail of the queue, i.e. tail drop).
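For example, a throttle allowing between 2 and 16 concurrent tasks, with a LIFO queue of at most 64 waiting processes that tail-drops on overflow and drops processes queued for longer than 1000 milliseconds, polled every 100 milliseconds, could be started as follows (a sketch; the numbers are arbitrary):

  {ok, Pid} = sthrottle:start_link(2, 16,
                                   {squeue_timeout, 1000, out_r, 64, drop_r},
                                   100)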

start_link/5

start_link(Name, Min, Max, AskingSpec, Interval) -> {ok, Pid} | {error, Reason}

Starts a registered throttle with custom limits and queue.

See also: start_link/4.


Generated by EDoc, Jan 17 2015, 16:58:31.