Evision.CUDA.Stream (Evision v0.1.12)

Summary

Types

t()

Type that represents an Evision.CUDA.Stream struct.

Functions

cudaPtr/1

Returns the stream's raw CUDA pointer (retval: void*).

null/0

Adds a callback to be called on the host after all currently enqueued items in the stream have completed.

queryIfComplete/1

Returns true if the current stream queue is finished. Otherwise, it returns false.

stream/0

Creates a new Stream.

stream/1

Creates a new Stream using either the cudaFlags argument to determine the behaviors of the stream (variant 1) or a custom GpuMat allocator (variant 2).

waitEvent/2

Makes a compute stream wait on an event.

waitForCompletion/1

Blocks the current CPU thread until all operations in the stream are complete.

Types

@type t() :: %Evision.CUDA.Stream{ref: reference()}

Type that represents an Evision.CUDA.Stream struct.

  • ref: reference()

    The underlying Erlang resource variable.
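
For illustration, a small sketch (using the stream/0 constructor documented below) of matching this struct to reach the underlying resource:

# The struct wraps a native resource; `ref` is a reference() per the type above.
%Evision.CUDA.Stream{ref: ref} = Evision.CUDA.Stream.stream()
true = is_reference(ref)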

Functions

cudaPtr/1

@spec cudaPtr(t()) :: :ok | {:error, String.t()}
Return
  • retval: void*

Python prototype (for reference):

cudaPtr() -> retval
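
A hedged usage sketch (the stream is created with stream/0, documented below); the Elixir return shape follows the @spec above:

stream = Evision.CUDA.Stream.stream()
# Query the stream's native CUDA handle; the underlying OpenCV call yields a void*.
result = Evision.CUDA.Stream.cudaPtr(stream)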
null/0

@spec null() :: t() | {:error, String.t()}

Adds a callback to be called on the host after all currently enqueued items in the stream have completed.

Return

Note: Callbacks must not make any CUDA API calls. Callbacks must not perform any synchronization that may depend on outstanding device work or other callbacks that are not mandated to run earlier. Callbacks without a mandated order (in independent streams) execute in undefined order and may be serialized.

Python prototype (for reference):

Null() -> retval
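
A minimal sketch of obtaining the stream returned by Null(), per the @spec above:

# Per the spec, null/0 yields an Evision.CUDA.Stream.t() or an {:error, reason} tuple.
default_stream = Evision.CUDA.Stream.null()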
queryIfComplete/1

@spec queryIfComplete(t()) :: boolean() | {:error, String.t()}

Returns true if the current stream queue is finished. Otherwise, it returns false.

Return
  • retval: bool

Python prototype (for reference):

queryIfComplete() -> retval
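
A sketch of non-blocking polling, assuming a stream created with stream/0; the call returns a boolean per the @spec above:

stream = Evision.CUDA.Stream.stream()
# Returns immediately: true once everything queued on the stream has finished.
if Evision.CUDA.Stream.queryIfComplete(stream) do
  IO.puts("stream queue is finished")
end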
stream/0

@spec stream() :: t() | {:error, String.t()}
Return

Python prototype (for reference):

Stream() -> <cuda_Stream object>
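
A minimal sketch of the zero-argument constructor, handling both shapes allowed by the @spec above:

stream =
  case Evision.CUDA.Stream.stream() do
    %Evision.CUDA.Stream{} = s -> s
    {:error, reason} -> raise "could not create CUDA stream: #{reason}"
  end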
stream/1

@spec stream(integer()) :: t() | {:error, String.t()}
@spec stream(reference()) :: t() | {:error, String.t()}

Variant 1:

Creates a new Stream using the cudaFlags argument to determine the behaviors of the stream.

Positional Arguments
  • cudaFlags: size_t
Return

Note: The cudaFlags parameter is passed to the underlying API cudaStreamCreateWithFlags() and supports the same parameter values.

// creates an OpenCV cuda::Stream that manages an asynchronous, non-blocking,
// non-default CUDA stream
cv::cuda::Stream cvStream(cudaStreamNonBlocking);

Python prototype (for reference):

Stream(cudaFlags) -> <cuda_Stream object>

Variant 2:

Positional Arguments
  • allocator: Ptr<GpuMat::Allocator>
Return

Python prototype (for reference):

Stream(allocator) -> <cuda_Stream object>
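
A hedged Elixir sketch of Variant 1, mirroring the C++ snippet above. The numeric value of cudaStreamNonBlocking (1 in the CUDA runtime headers) is an assumption about CUDA, not something exported by Evision:

# cudaStreamNonBlocking == 1 in the CUDA runtime API (assumption, see above).
cuda_stream_non_blocking = 1
# Creates an asynchronous, non-blocking, non-default stream, as in the C++ example.
stream = Evision.CUDA.Stream.stream(cuda_stream_non_blocking)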
waitEvent/2

@spec waitEvent(t(), Evision.CUDA.Event.t()) :: :ok | {:error, String.t()}

Makes a compute stream wait on an event.

Positional Arguments
  • event: Evision.CUDA.Event.t()

Python prototype (for reference):

waitEvent(event) -> None
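
A sketch of ordering work on a stream behind a previously recorded event. Only the Evision.CUDA.Event.t() type is confirmed by the @spec above; the event/0 constructor used here is an assumption:

stream = Evision.CUDA.Stream.stream()
# Assumed constructor name for Evision.CUDA.Event; only the type is confirmed above.
event = Evision.CUDA.Event.event()
# Work enqueued on `stream` after this call waits until `event` has completed.
Evision.CUDA.Stream.waitEvent(stream, event)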
waitForCompletion/1

@spec waitForCompletion(t()) :: :ok | {:error, String.t()}

Blocks the current CPU thread until all operations in the stream are complete.

Python prototype (for reference):

waitForCompletion() -> None
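
A sketch of host-side synchronization before touching results produced asynchronously on a stream:

stream = Evision.CUDA.Stream.stream()
# ... enqueue asynchronous GPU work on `stream` here ...
# Block the calling CPU thread until all operations in the stream are complete.
Evision.CUDA.Stream.waitForCompletion(stream)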