Evision.DNN.ClassificationModel (Evision v0.1.18)
Summary
Types
Type that represents an Evision.DNN.ClassificationModel struct.
Functions
Variant 1:
Create model from deep learning network.
Create classification model from a network represented in one of the supported formats. The order of the model and config arguments does not matter.
classify
Get enable/disable softmax post processing option.
Given the input frame, create an input blob, run the net and return the output blobs.
Given the input frame, create an input blob, run the net and return the output blobs.
Set enable/disable softmax post processing option.
Set flag crop for frame.
Set mean value for frame.
Set preprocessing parameters for frame.
Set preprocessing parameters for frame.
Set scalefactor value for frame.
Set input size for frame.
setInputSize
Set flag swapRB for frame.
setPreferableBackend
setPreferableTarget
Types
@type t() :: %Evision.DNN.ClassificationModel{ref: reference()}
Type that represents an Evision.DNN.ClassificationModel struct.
ref: reference()
The underlying Erlang resource variable.
Functions
@spec classificationModel(Evision.DNN.Net.t()) :: t() | {:error, String.t()}
@spec classificationModel(binary()) :: t() | {:error, String.t()}
Variant 1:
Create model from deep learning network.
Positional Arguments
network: Evision.DNN.Net. Net object.
Return
Python prototype (for reference only):
ClassificationModel(network) -> <dnn_ClassificationModel object>
Variant 2:
Create classification model from a network represented in one of the supported formats. The order of the model and config arguments does not matter.
Positional Arguments
model: String. Binary file containing trained weights.
Keyword Arguments
config: String. Text file containing the network configuration.
Return
Python prototype (for reference only):
ClassificationModel(model[, config]) -> <dnn_ClassificationModel object>
Create classification model from a network represented in one of the supported formats. The order of the model and config arguments does not matter.
Positional Arguments
model: String. Binary file containing trained weights.
Keyword Arguments
config: String. Text file containing the network configuration.
Return
Python prototype (for reference only):
ClassificationModel(model[, config]) -> <dnn_ClassificationModel object>
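A minimal usage sketch for both variants; the file names below are placeholders and not files shipped with Evision.

    # Variant 1: wrap an existing Evision.DNN.Net (assumed to have been built elsewhere)
    model = Evision.DNN.ClassificationModel.classificationModel(net)

    # Variant 2: load directly from a weights file, with an optional config file
    model = Evision.DNN.ClassificationModel.classificationModel("squeezenet.onnx")
    model = Evision.DNN.ClassificationModel.classificationModel("model.caffemodel", config: "model.prototxt")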
@spec classify(t(), Evision.Mat.maybe_mat_in()) :: {integer(), number()} | {:error, String.t()}
classify
Positional Arguments
- self:
Evision.DNN.ClassificationModel.t()
- frame:
Evision.Mat
Return
- classId:
int
- conf:
float
Has overloading in C++
Python prototype (for reference only):
classify(frame) -> classId, conf
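A sketch of a single classification call, assuming frame is an Evision.Mat you have already loaded (for example with Evision.imread/1) and model was created as shown above.

    {class_id, confidence} = Evision.DNN.ClassificationModel.classify(model, frame)
    # class_id is the index of the best-scoring class; confidence is its raw score
    # (or a probability if softmax post-processing is enabled)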
Get enable/disable softmax post processing option.
Positional Arguments
- self:
Evision.DNN.ClassificationModel.t()
Return
- retval:
bool
This option defaults to false, meaning softmax post-processing is not applied within the classify() function.
Python prototype (for reference only):
getEnableSoftmaxPostProcessing() -> retval
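For example, to check the current setting (a sketch, assuming model was created as above):

    softmax? = Evision.DNN.ClassificationModel.getEnableSoftmaxPostProcessing(model)
    # false by default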
@spec predict(t(), Evision.Mat.maybe_mat_in()) :: [Evision.Mat.t()] | {:error, String.t()}
Given the input frame, create an input blob, run the net and return the output blobs.
Positional Arguments
- self:
Evision.DNN.ClassificationModel.t()
- frame:
Evision.Mat
Return
outs: [Evision.Mat]. Allocated output blobs, which will store results of the computation.
Python prototype (for reference only):
predict(frame[, outs]) -> outs
@spec predict(t(), Evision.Mat.maybe_mat_in(), [{atom(), term()}, ...] | nil) :: [Evision.Mat.t()] | {:error, String.t()}
Given the input frame, create an input blob, run the net and return the output blobs.
Positional Arguments
- self:
Evision.DNN.ClassificationModel.t()
- frame:
Evision.Mat
Return
outs: [Evision.Mat]. Allocated output blobs, which will store results of the computation.
Python prototype (for reference only):
predict(frame[, outs]) -> outs
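A sketch of running raw inference instead of classify/2, assuming frame is an Evision.Mat; predict returns the network's output blobs as a list of Evision.Mat.

    outs = Evision.DNN.ClassificationModel.predict(model, frame)
    # outs is a list of Evision.Mat, one per network output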
Set enable/disable softmax post processing option.
Positional Arguments
self: Evision.DNN.ClassificationModel.t()
enable: bool. Whether to enable softmax post-processing within the classify() function.
Return
- retval:
Evision.DNN.ClassificationModel
If this option is true, softmax is applied after the forward pass within the classify() function to map the confidence values to the [0.0, 1.0] range. This function allows you to toggle this behavior. Set it to true when the model does not contain a softmax layer.
Python prototype (for reference only):
setEnableSoftmaxPostProcessing(enable) -> retval
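A sketch for models whose last layer is not a softmax; the setter configures the underlying native model, so subsequent classify/2 calls should return probabilities in [0.0, 1.0].

    Evision.DNN.ClassificationModel.setEnableSoftmaxPostProcessing(model, true)
    {class_id, prob} = Evision.DNN.ClassificationModel.classify(model, frame)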
@spec setInputCrop(t(), boolean()) :: Evision.DNN.Model.t() | {:error, String.t()}
Set flag crop for frame.
Positional Arguments
self: Evision.DNN.ClassificationModel.t()
crop: bool. Flag which indicates whether the image will be cropped after resize or not.
Return
- retval:
Evision.DNN.Model
Python prototype (for reference only):
setInputCrop(crop) -> retval
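For example, to enable cropping after resize (a sketch; the setter configures the underlying model object in place):

    Evision.DNN.ClassificationModel.setInputCrop(model, true)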
@spec setInputMean( t(), {number()} | {number(), number()} | {number(), number(), number()} | {number(), number(), number(), number()} ) :: Evision.DNN.Model.t() | {:error, String.t()}
Set mean value for frame.
Positional Arguments
self: Evision.DNN.ClassificationModel.t()
mean: Scalar. Scalar with mean values which are subtracted from channels.
Return
- retval:
Evision.DNN.Model
Python prototype (for reference only):
setInputMean(mean) -> retval
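A sketch using ImageNet-style per-channel means; the concrete values are only an example and must match how the network was trained.

    Evision.DNN.ClassificationModel.setInputMean(model, {123.675, 116.28, 103.53})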
Set preprocessing parameters for frame.
Positional Arguments
- self:
Evision.DNN.ClassificationModel.t()
Keyword Arguments
scale: double. Multiplier for frame values.
size: Size. New input size.
mean: Scalar. Scalar with mean values which are subtracted from channels.
swapRB: bool. Flag which indicates whether to swap the first and last channels.
crop: bool. Flag which indicates whether the image will be cropped after resize or not. blob(n, c, y, x) = scale * (resize(frame(y, x, c)) - mean(c))
Python prototype (for reference only):
setInputParams([, scale[, size[, mean[, swapRB[, crop]]]]]) -> None
Set preprocessing parameters for frame.
Positional Arguments
- self:
Evision.DNN.ClassificationModel.t()
Keyword Arguments
scale: double. Multiplier for frame values.
size: Size. New input size.
mean: Scalar. Scalar with mean values which are subtracted from channels.
swapRB: bool. Flag which indicates whether to swap the first and last channels.
crop: bool. Flag which indicates whether the image will be cropped after resize or not. blob(n, c, y, x) = scale * (resize(frame(y, x, c)) - mean(c))
Python prototype (for reference only):
setInputParams([, scale[, size[, mean[, swapRB[, crop]]]]]) -> None
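A sketch configuring the whole preprocessing pipeline in one call; every value below is illustrative and must match the network's training-time preprocessing.

    Evision.DNN.ClassificationModel.setInputParams(model,
      scale: 1.0 / 255.0,
      size: {224, 224},
      mean: {123.675, 116.28, 103.53},
      swapRB: true,
      crop: false
    )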
@spec setInputScale(t(), number()) :: Evision.DNN.Model.t() | {:error, String.t()}
Set scalefactor value for frame.
Positional Arguments
self: Evision.DNN.ClassificationModel.t()
scale: double. Multiplier for frame values.
Return
- retval:
Evision.DNN.Model
Python prototype (for reference only):
setInputScale(scale) -> retval
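For example, to scale pixel values into [0, 1] (a common choice, shown here only as an illustration):

    Evision.DNN.ClassificationModel.setInputScale(model, 1.0 / 255.0)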
@spec setInputSize( t(), {number(), number()} ) :: Evision.DNN.Model.t() | {:error, String.t()}
Set input size for frame.
Positional Arguments
self: Evision.DNN.ClassificationModel.t()
size: Size. New input size.
Return
- retval:
Evision.DNN.Model
Note: If any dimension of the new input size is less than 0, the frame size is not changed.
Python prototype (for reference only):
setInputSize(size) -> retval
@spec setInputSize(t(), integer(), integer()) :: Evision.DNN.Model.t() | {:error, String.t()}
setInputSize
Positional Arguments
self: Evision.DNN.ClassificationModel.t()
width: int. New input width.
height: int. New input height.
Return
- retval:
Evision.DNN.Model
Has overloading in C++
Python prototype (for reference only):
setInputSize(width, height) -> retval
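Both arities in a sketch; {224, 224} is just an illustrative input resolution.

    Evision.DNN.ClassificationModel.setInputSize(model, {224, 224})
    Evision.DNN.ClassificationModel.setInputSize(model, 224, 224)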
@spec setInputSwapRB(t(), boolean()) :: Evision.DNN.Model.t() | {:error, String.t()}
Set flag swapRB for frame.
Positional Arguments
self: Evision.DNN.ClassificationModel.t()
swapRB: bool. Flag which indicates whether to swap the first and last channels.
Return
- retval:
Evision.DNN.Model
Python prototype (for reference only):
setInputSwapRB(swapRB) -> retval
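For example, when frames are loaded as BGR (OpenCV's default channel order) but the network expects RGB:

    Evision.DNN.ClassificationModel.setInputSwapRB(model, true)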
@spec setPreferableBackend(t(), integer()) :: Evision.DNN.Model.t() | {:error, String.t()}
setPreferableBackend
Positional Arguments
- self:
Evision.DNN.ClassificationModel.t()
- backendId:
dnn_Backend
Return
- retval:
Evision.DNN.Model
Python prototype (for reference only):
setPreferableBackend(backendId) -> retval
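A sketch; 0 corresponds to OpenCV's DNN_BACKEND_DEFAULT enum value (the raw integer is used here to avoid assuming how enum constants are exposed in this Evision version).

    Evision.DNN.ClassificationModel.setPreferableBackend(model, 0)  # DNN_BACKEND_DEFAULT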
@spec setPreferableTarget(t(), integer()) :: Evision.DNN.Model.t() | {:error, String.t()}
setPreferableTarget
Positional Arguments
- self:
Evision.DNN.ClassificationModel.t()
- targetId:
dnn_Target
Return
- retval:
Evision.DNN.Model
Python prototype (for reference only):
setPreferableTarget(targetId) -> retval
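A sketch; 0 corresponds to OpenCV's DNN_TARGET_CPU enum value.

    Evision.DNN.ClassificationModel.setPreferableTarget(model, 0)  # DNN_TARGET_CPU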