Evision.DNN.Model (Evision v0.1.15)
Summary
Types
Type that represents an Evision.DNN.Model struct.
Functions
model: Create a model from a deep learning network, or from model files in one of the supported formats.
predict: Given the input frame, create an input blob, run the net and return the output blobs.
setInputCrop: Set the crop flag for frame preprocessing.
setInputMean: Set the mean value for frame preprocessing.
setInputParams: Set preprocessing parameters for frame.
setInputScale: Set the scalefactor value for frame preprocessing.
setInputSize: Set the input size for frame preprocessing.
setInputSwapRB: Set the swapRB flag for frame preprocessing.
setPreferableBackend: Set the preferable backend for inference.
setPreferableTarget: Set the preferable target device for inference.
Types
@type t() :: %Evision.DNN.Model{ref: reference()}
Type that represents an Evision.DNN.Model struct.
- ref: reference(). The underlying Erlang resource variable.
Functions
@spec model(Evision.DNN.Net.t()) :: t() | {:error, String.t()}
@spec model(binary()) :: t() | {:error, String.t()}
Variant 1:
Create a model from a deep learning network.
Positional Arguments
- network: Evision.DNN.Net. Net object.
Return
- self: Evision.DNN.Model
Python prototype (for reference):
Model(network) -> <dnn_Model object>
Variant 2:
Create a model from a deep learning network represented in one of the supported formats. The order of the model and config arguments does not matter.
Positional Arguments
- model: String. Binary file containing trained weights.
Keyword Arguments
- config: String. Text file containing network configuration.
Return
- self: Evision.DNN.Model
Python prototype (for reference):
Model(model[, config]) -> <dnn_Model object>
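Example (Elixir, for illustration): a minimal sketch of creating a model. The file paths are placeholders, not files shipped with Evision, and the keyword form assumes Evision's usual opts-list convention for optional arguments.
# Create a model from a single weights file (e.g. an ONNX export):
model = Evision.DNN.Model.model("model.onnx")
# Or from trained weights plus a separate text config (e.g. Caffe):
model = Evision.DNN.Model.model("weights.caffemodel", config: "deploy.prototxt")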
@spec predict(t(), Evision.Mat.maybe_mat_in()) :: [Evision.Mat.t()] | {:error, String.t()}
Given the input frame, create an input blob, run the net and return the output blobs.
Positional Arguments
- frame: Evision.Mat
Return
- outs: [Evision.Mat]. Allocated output blobs, which store the results of the computation.
Python prototype (for reference):
predict(frame[, outs]) -> outs
@spec predict(t(), Evision.Mat.maybe_mat_in(), [{atom(), term()}, ...] | nil) :: [Evision.Mat.t()] | {:error, String.t()}
Given the input frame, create an input blob, run the net and return the output blobs.
Positional Arguments
- frame: Evision.Mat
Return
- outs: [Evision.Mat]. Allocated output blobs, which store the results of the computation.
Python prototype (for reference):
predict(frame[, outs]) -> outs
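Example (Elixir, for illustration): a sketch of running inference; the image path is a placeholder, and the input preprocessing must match what the network was trained with.
frame = Evision.imread("input.jpg")
outs = Evision.DNN.Model.predict(model, frame)
# outs is a list of Evision.Mat output blobs, one per network output layer.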
Set the crop flag for frame preprocessing.
Positional Arguments
- crop: bool. Flag which indicates whether the image will be cropped after resize or not.
Return
- retval: Evision.DNN.Model
Python prototype (for reference):
setInputCrop(crop) -> retval
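Example (Elixir, for illustration): model is assumed to be a previously created Evision.DNN.Model struct.
# Enable cropping of the resized frame:
model = Evision.DNN.Model.setInputCrop(model, true)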
@spec setInputMean( t(), {number()} | {number(), number()} | {number(), number(), number()} | {number(), number(), number(), number()} ) :: t() | {:error, String.t()}
Set the mean value for frame preprocessing.
Positional Arguments
- mean: Scalar. Scalar with mean values which are subtracted from channels.
Return
- retval: Evision.DNN.Model
Python prototype (for reference):
setInputMean(mean) -> retval
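Example (Elixir, for illustration): the mean values below are purely illustrative; use the means the network was trained with.
# Subtract a per-channel mean from the input frame:
model = Evision.DNN.Model.setInputMean(model, {104.0, 117.0, 123.0})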
Set preprocessing parameters for frame.
Keyword Arguments
- scale: double. Multiplier for frame values.
- size: Size. New input size.
- mean: Scalar. Scalar with mean values which are subtracted from channels.
- swapRB: bool. Flag which indicates that the first and last channels should be swapped.
- crop: bool. Flag which indicates whether the image will be cropped after resize or not.
The resulting blob is computed as blob(n, c, y, x) = scale * (resize(frame(y, x, c)) - mean(c)).
Python prototype (for reference):
setInputParams([, scale[, size[, mean[, swapRB[, crop]]]]]) -> None
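Example (Elixir, for illustration): a sketch that sets all preprocessing parameters in one call. The values are illustrative, the keyword list follows Evision's usual convention for optional arguments, and the return value is ignored here since the struct wraps the underlying network object.
Evision.DNN.Model.setInputParams(model,
  scale: 1.0 / 255.0,
  size: {300, 300},
  mean: {0.0, 0.0, 0.0},
  swapRB: true,
  crop: false
)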
Set the scalefactor value for frame preprocessing.
Positional Arguments
- scale: double. Multiplier for frame values.
Return
- retval: Evision.DNN.Model
Python prototype (for reference):
setInputScale(scale) -> retval
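Example (Elixir, for illustration): scale raw 0..255 pixel values into the 0..1 range; the factor depends on the network and is illustrative only.
model = Evision.DNN.Model.setInputScale(model, 1.0 / 255.0)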
Set the input size for frame.
Positional Arguments
- size: Size. New input size.
Return
- retval: Evision.DNN.Model
Note: if a dimension of the new size is less than 0, the frame size is not changed.
Python prototype (for reference):
setInputSize(size) -> retval
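Example (Elixir, for illustration): assuming Size is written as a {width, height} tuple, as elsewhere in Evision; 416x416 is an arbitrary illustrative size.
model = Evision.DNN.Model.setInputSize(model, {416, 416})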
Positional Arguments
- width: int. New input width.
- height: int. New input height.
Return
- retval: Evision.DNN.Model
Has overloading in C++
Python prototype (for reference):
setInputSize(width, height) -> retval
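Example (Elixir, for illustration): the width/height overload, using the same arbitrary 416x416 size as above.
model = Evision.DNN.Model.setInputSize(model, 416, 416)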
Set the swapRB flag for frame preprocessing.
Positional Arguments
- swapRB: bool. Flag which indicates that the first and last channels should be swapped.
Return
- retval: Evision.DNN.Model
Python prototype (for reference):
setInputSwapRB(swapRB) -> retval
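Example (Elixir, for illustration): because each setter returns the model, the whole preprocessing setup can be written as a pipeline; paths and values are placeholders.
model =
  Evision.DNN.Model.model("model.onnx")
  |> Evision.DNN.Model.setInputSize(416, 416)
  |> Evision.DNN.Model.setInputScale(1.0 / 255.0)
  |> Evision.DNN.Model.setInputSwapRB(true)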
Set the preferable backend for inference.
Positional Arguments
- backendId: dnn_Backend
Return
- retval: Evision.DNN.Model
Python prototype (for reference):
setPreferableBackend(backendId) -> retval
Set the preferable target device for inference.
Positional Arguments
- targetId: dnn_Target
Return
- retval: Evision.DNN.Model
Python prototype (for reference):
setPreferableTarget(targetId) -> retval
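Example (Elixir, for illustration): backendId and targetId are the integer values of OpenCV's cv::dnn::Backend and cv::dnn::Target enums; 0 corresponds to DNN_BACKEND_DEFAULT and DNN_TARGET_CPU. Prefer the constants exposed by your Evision version over hard-coded integers.
model =
  model
  |> Evision.DNN.Model.setPreferableBackend(0)
  |> Evision.DNN.Model.setPreferableTarget(0)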