Evision.DNN.ClassificationModel (Evision v0.1.14)
Summary
Types
Type that represents an Evision.DNN.ClassificationModel
struct.
Functions
Variant 1:
Create a model from a deep learning network.
Create a classification model from a network represented in one of the supported formats. The order of the model and config arguments does not matter.
Get the enable/disable softmax post-processing option.
Set the enable/disable softmax post-processing option.
Types
@type t() :: %Evision.DNN.ClassificationModel{ref: reference()}
Type that represents an Evision.DNN.ClassificationModel
struct.
ref: reference(). The underlying Erlang resource variable.
Functions
@spec classificationModel(Evision.DNN.Net.t()) :: t() | {:error, String.t()}
@spec classificationModel(binary()) :: t() | {:error, String.t()}
Variant 1:
Create a model from a deep learning network.
Positional Arguments
network: Evision.DNN.Net. The Net object.
Return
Python prototype (for reference):
ClassificationModel(network) -> <dnn_ClassificationModel object>
Variant 2:
Create a classification model from a network represented in one of the supported formats. The order of the model and config arguments does not matter.
Positional Arguments
model: String. Binary file containing the trained weights.
Keyword Arguments
config: String. Text file containing the network configuration.
Return
Python prototype (for reference):
ClassificationModel(model[, config]) -> <dnn_ClassificationModel object>
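Example (a minimal sketch; the file names below are placeholders for your own network files, Evision.DNN.readNet/1 is assumed to be available for loading a Net, and the keyword form with config: is assumed to match the keyword argument listed above):

# Variant 1: wrap an already loaded network (readNet/1 assumed available)
net = Evision.DNN.readNet("model.onnx")
model = Evision.DNN.ClassificationModel.classificationModel(net)

# Variant 2: build directly from model files; config is optional
model = Evision.DNN.ClassificationModel.classificationModel("model.caffemodel", config: "model.prototxt")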
@spec classify(t(), Evision.Mat.maybe_mat_in()) :: {integer(), number()} | {:error, String.t()}
Positional Arguments
- frame: Evision.Mat
Return
- classId: int
- conf: float
Has overloading in C++
Python prototype (for reference):
classify(frame) -> classId, conf
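Example (sketch; assumes the model built above and a placeholder image file, with Evision.imread/1 assumed available for loading the frame):

frame = Evision.imread("example.jpg")
{class_id, confidence} = Evision.DNN.ClassificationModel.classify(model, frame)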
Get the enable/disable softmax post-processing option.
Return
- retval: bool
This option defaults to false; softmax post-processing is not applied within the classify() function.
Python prototype (for reference):
getEnableSoftmaxPostProcessing() -> retval
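Example (sketch; assumes the struct is passed as the first argument, following the calling convention used by classify/2 above):

# Returns false for a freshly constructed model, since the option defaults to false
Evision.DNN.ClassificationModel.getEnableSoftmaxPostProcessing(model)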
Set the enable/disable softmax post-processing option.
Positional Arguments
enable: bool. Enables or disables softmax post-processing within the classify() function.
Return
- retval: Evision.DNN.ClassificationModel
If this option is true, softmax is applied after the forward pass within the classify() function to convert the confidence values to the [0.0, 1.0] range. This function allows you to toggle this behavior. Set it to true when the model does not contain a softmax layer.
Python prototype (for reference):
setEnableSoftmaxPostProcessing(enable) -> retval
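Example (sketch, continuing with the model and frame from above; enabling this is useful when the network itself has no softmax layer, so classify/2 then returns confidences in [0.0, 1.0]):

model = Evision.DNN.ClassificationModel.setEnableSoftmaxPostProcessing(model, true)
{class_id, probability} = Evision.DNN.ClassificationModel.classify(model, frame)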