Evision.CUDA.HostMem (Evision v0.1.25)
Summary
Types
Type that represents an Evision.CUDA.HostMem struct.
Functions
channels
clone
create
createMatHeader
depth
elemSize1
elemSize
empty
HostMem
isContinuous
Maps CPU memory to GPU address space and creates the cuda::GpuMat header without reference counting for it.
reshape
size
step1
swap
type
Types
@type t() :: %Evision.CUDA.HostMem{ref: reference()}
Type that represents an Evision.CUDA.HostMem struct.
- ref:
reference()
The underlying Erlang resource variable.
Functions
channels
Positional Arguments
- self:
Evision.CUDA.HostMem.t()
Return
- retval:
int
Python prototype (for reference only):
channels() -> retval
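Example (a minimal sketch; assumes a CUDA-enabled Evision build and uses 16 as the raw OpenCV code for CV_8UC3):

mem = Evision.CUDA.HostMem.hostMem(480, 640, 16)  # 480x640, 8-bit, 3-channel host buffer
Evision.CUDA.HostMem.channels(mem)
#=> 3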
clone
Positional Arguments
- self:
Evision.CUDA.HostMem.t()
Return
- retval:
Evision.CUDA.HostMem
Python prototype (for reference only):
clone() -> retval
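Example (a sketch under the same CUDA-build assumption; clone is expected to return an independent copy with the same geometry and type):

mem = Evision.CUDA.HostMem.hostMem(480, 640, 16)
copy = Evision.CUDA.HostMem.clone(mem)
Evision.CUDA.HostMem.size(copy) == Evision.CUDA.HostMem.size(mem)
#=> true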
create
Positional Arguments
- self:
Evision.CUDA.HostMem.t()
- rows:
int
- cols:
int
- type:
int
Python prototype (for reference only):
create(rows, cols, type) -> None
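Example (a hedged sketch; create/4 (re)allocates the buffer for the given geometry and type, and since its Elixir return value is not documented above, the result is ignored here):

mem = Evision.CUDA.HostMem.hostMem(1, 1, 0)         # start with a tiny CV_8UC1 buffer
_ = Evision.CUDA.HostMem.create(mem, 480, 640, 16)  # reallocate as 480x640 CV_8UC3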
@spec createMatHeader(t()) :: Evision.Mat.t() | {:error, String.t()}
createMatHeader
Positional Arguments
- self:
Evision.CUDA.HostMem.t()
Return
- retval:
Evision.Mat
Python prototype (for reference only):
createMatHeader() -> retval
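Example (a minimal sketch, assuming a CUDA-enabled build; per the @spec above the result is a plain Evision.Mat):

mem = Evision.CUDA.HostMem.hostMem(480, 640, 16)
mat = Evision.CUDA.HostMem.createMatHeader(mem)
# mat is an Evision.Mat header that can be handed to regular CPU-side Evision functions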
depth
Positional Arguments
- self:
Evision.CUDA.HostMem.t()
Return
- retval:
int
Python prototype (for reference only):
depth() -> retval
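Example (sketch; 0 is OpenCV's depth code for CV_8U):

mem = Evision.CUDA.HostMem.hostMem(480, 640, 16)  # CV_8UC3
Evision.CUDA.HostMem.depth(mem)
#=> 0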
elemSize1
Positional Arguments
- self:
Evision.CUDA.HostMem.t()
Return
- retval:
size_t
Python prototype (for reference only):
elemSize1() -> retval
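Example (sketch; elemSize1 is the size in bytes of a single channel element):

mem = Evision.CUDA.HostMem.hostMem(480, 640, 16)  # 8-bit channels
Evision.CUDA.HostMem.elemSize1(mem)
#=> 1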
elemSize
Positional Arguments
- self:
Evision.CUDA.HostMem.t()
Return
- retval:
size_t
Python prototype (for reference only):
elemSize() -> retval
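Example (sketch; elemSize is the size in bytes of one element across all channels, i.e. elemSize1 * channels):

mem = Evision.CUDA.HostMem.hostMem(480, 640, 16)  # 8-bit, 3 channels
Evision.CUDA.HostMem.elemSize(mem)
#=> 3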
empty
Positional Arguments
- self:
Evision.CUDA.HostMem.t()
Return
- retval:
bool
Python prototype (for reference only):
empty() -> retval
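Example (sketch; a buffer created with explicit dimensions has allocated data, so empty returns false):

mem = Evision.CUDA.HostMem.hostMem(480, 640, 16)
Evision.CUDA.HostMem.empty(mem)
#=> false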
HostMem
Keyword Arguments
- alloc_type:
HostMem_AllocType
Return
- self:
Evision.CUDA.HostMem
Python prototype (for reference only):
HostMem([, alloc_type]) -> <cuda_HostMem object>
@spec hostMem([{atom(), term()}, ...] | nil) :: t() | {:error, String.t()}
@spec hostMem(Evision.Mat.maybe_mat_in()) :: t() | {:error, String.t()}
@spec hostMem(Evision.CUDA.GpuMat.t()) :: t() | {:error, String.t()}
Variant 1:
HostMem
Positional Arguments
- arr:
Evision.Mat
Keyword Arguments
- alloc_type:
HostMem_AllocType
Return
- self:
Evision.CUDA.HostMem
Python prototype (for reference only):
HostMem(arr[, alloc_type]) -> <cuda_HostMem object>
Variant 2:
HostMem
Positional Arguments
- arr:
Evision.CUDA.GpuMat
Keyword Arguments
- alloc_type:
HostMem_AllocType
Return
- self:
Evision.CUDA.HostMem
Python prototype (for reference only):
HostMem(arr[, alloc_type]) -> <cuda_HostMem object>
Variant 3:
HostMem
Keyword Arguments
- alloc_type:
HostMem_AllocType
Return
- self:
Evision.CUDA.HostMem
Python prototype (for reference only):
HostMem([, alloc_type]) -> <cuda_HostMem object>
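Example for Variant 1 above, constructing from an Evision.Mat (a hedged sketch; "input.png" is a placeholder path, and no alloc_type is passed so the default allocation type is used):

mat = Evision.imread("input.png")        # placeholder input image
mem = Evision.CUDA.HostMem.hostMem(mat)  # copies the Mat's data into page-locked host memory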
@spec hostMem(Evision.Mat.maybe_mat_in(), [{atom(), term()}, ...] | nil) :: t() | {:error, String.t()}
@spec hostMem(Evision.CUDA.GpuMat.t(), [{atom(), term()}, ...] | nil) :: t() | {:error, String.t()}
@spec hostMem({number(), number()}, integer()) :: t() | {:error, String.t()}
Variant 1:
HostMem
Positional Arguments
- size:
Size
- type:
int
Keyword Arguments
- alloc_type:
HostMem_AllocType
Return
- self:
Evision.CUDA.HostMem
Python prototype (for reference only):
HostMem(size, type[, alloc_type]) -> <cuda_HostMem object>
Variant 2:
HostMem
Positional Arguments
- arr:
Evision.Mat
Keyword Arguments
- alloc_type:
HostMem_AllocType
Return
- self:
Evision.CUDA.HostMem
Python prototype (for reference only):
HostMem(arr[, alloc_type]) -> <cuda_HostMem object>
Variant 3:
HostMem
Positional Arguments
- arr:
Evision.CUDA.GpuMat
Keyword Arguments
- alloc_type:
HostMem_AllocType
Return
- self:
Evision.CUDA.HostMem
Python prototype (for reference only):
HostMem(arr[, alloc_type]) -> <cuda_HostMem object>
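Example for the size/type variant documented above (a sketch; per the @spec, size is passed as a two-element tuple, assumed here to follow OpenCV's Size order of {width, height}):

mem = Evision.CUDA.HostMem.hostMem({640, 480}, 16)  # 640x480 CV_8UC3 host buffer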
@spec hostMem({number(), number()}, integer(), [{atom(), term()}, ...] | nil) :: t() | {:error, String.t()}
@spec hostMem(integer(), integer(), integer()) :: t() | {:error, String.t()}
Variant 1:
HostMem
Positional Arguments
- rows:
int
- cols:
int
- type:
int
Keyword Arguments
- alloc_type:
HostMem_AllocType
Return
- self:
Evision.CUDA.HostMem
Python prototype (for reference only):
HostMem(rows, cols, type[, alloc_type]) -> <cuda_HostMem object>
Variant 2:
HostMem
Positional Arguments
- size:
Size
- type:
int
Keyword Arguments
- alloc_type:
HostMem_AllocType
Return
- self:
Evision.CUDA.HostMem
Python prototype (for reference only):
HostMem(size, type[, alloc_type]) -> <cuda_HostMem object>
@spec hostMem(integer(), integer(), integer(), [{atom(), term()}, ...] | nil) :: t() | {:error, String.t()}
HostMem
Positional Arguments
- rows:
int
- cols:
int
- type:
int
Keyword Arguments
- alloc_type:
HostMem_AllocType
Return
- self:
Evision.CUDA.HostMem
Python prototype (for reference only):
HostMem(rows, cols, type[, alloc_type]) -> <cuda_HostMem object>
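Example (a hedged sketch; the keyword value 2 is assumed to correspond to OpenCV's cv::cuda::HostMem SHARED allocation type):

# 480x640 CV_8UC3 buffer requested with a non-default allocation type
mem = Evision.CUDA.HostMem.hostMem(480, 640, 16, alloc_type: 2)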
isContinuous
Maps CPU memory to GPU address space and creates the cuda::GpuMat header without reference counting for it.
Positional Arguments
- self:
Evision.CUDA.HostMem.t()
Return
- retval:
bool
This can be done only if memory was allocated with the SHARED flag and if it is supported by the hardware. Laptops often share video and CPU memory, so address spaces can be mapped, which eliminates an extra copy.
Python prototype (for reference only):
isContinuous() -> retval
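Example (a minimal sketch; the call returns the bool listed above for an existing buffer):

mem = Evision.CUDA.HostMem.hostMem(480, 640, 16)
Evision.CUDA.HostMem.isContinuous(mem)
# returns a boolean (true for a freshly allocated, unpadded buffer)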
reshape
Positional Arguments
- self:
Evision.CUDA.HostMem.t()
- cn:
int
Keyword Arguments
- rows:
int
Return
- retval:
Evision.CUDA.HostMem
Python prototype (for reference only):
reshape(cn[, rows]) -> retval
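Example (a minimal sketch; reshape reinterprets the same data with a different channel count, and the rows keyword can additionally change the row count):

mem = Evision.CUDA.HostMem.hostMem(480, 640, 16)  # 480x640, 3 channels
flat = Evision.CUDA.HostMem.reshape(mem, 1)       # same data viewed as 1 channel
Evision.CUDA.HostMem.channels(flat)
#=> 1
# Evision.CUDA.HostMem.reshape(mem, 1, rows: 1440) would also change the row count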
reshape
Positional Arguments
- self:
Evision.CUDA.HostMem.t()
- cn:
int
Keyword Arguments
- rows:
int
Return
- retval:
Evision.CUDA.HostMem
Python prototype (for reference only):
reshape(cn[, rows]) -> retval
size
Positional Arguments
- self:
Evision.CUDA.HostMem.t()
Return
- retval:
Size
Python prototype (for reference only):
size() -> retval
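Example (sketch; the exact Elixir representation of the returned Size is not pinned down above, so the value is only bound here):

mem = Evision.CUDA.HostMem.hostMem(480, 640, 16)
size = Evision.CUDA.HostMem.size(mem)  # the buffer's 2D geometry as an OpenCV Size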
step1
Positional Arguments
- self:
Evision.CUDA.HostMem.t()
Return
- retval:
size_t
Python prototype (for reference only):
step1() -> retval
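Example (sketch; step1 is the row stride counted in elemSize1-sized units):

mem = Evision.CUDA.HostMem.hostMem(480, 640, 16)
step = Evision.CUDA.HostMem.step1(mem)
# for this 640-column, 3-channel buffer the stride is at least 640 * 3 = 1920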
swap
Positional Arguments
- self:
Evision.CUDA.HostMem.t()
- b:
Evision.CUDA.HostMem
Python prototype (for reference only):
swap(b) -> None
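Example (a hedged sketch; the Python prototype returns None, so the Elixir return value is ignored here and the two structs end up referring to each other's former buffers):

a = Evision.CUDA.HostMem.hostMem(480, 640, 16)
b = Evision.CUDA.HostMem.hostMem(240, 320, 0)
_ = Evision.CUDA.HostMem.swap(a, b)  # a and b exchange their underlying host buffers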
type
Positional Arguments
- self:
Evision.CUDA.HostMem.t()
Return
- retval:
int
Python prototype (for reference only):
type() -> retval
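Example (sketch; type returns the combined OpenCV type code the buffer was created with):

mem = Evision.CUDA.HostMem.hostMem(480, 640, 16)
Evision.CUDA.HostMem.type(mem)
#=> 16  (CV_8UC3)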