Evision.DNN.ClassificationModel (Evision v0.1.10)

Summary

Types

t()

Type that represents an Evision.DNN.ClassificationModel struct.

Functions

classificationModel(network)

  Create model from deep learning network.

classificationModel(model, opts)

  Create classification model from network represented in one of the supported formats. The order of the model and config arguments does not matter.

classify(self, frame)

  Classify an input frame, returning the top class ID (int) and its confidence (float). Has overloading in C++.

getEnableSoftmaxPostProcessing(self)

  Get enable/disable softmax post processing option.

setEnableSoftmaxPostProcessing(self, enable)

  Set enable/disable softmax post processing option.

Types

@type t() :: %Evision.DNN.ClassificationModel{ref: reference()}

Type that represents an Evision.DNN.ClassificationModel struct.

  • ref: reference()

    The underlying Erlang resource variable.

Functions

classificationModel(network)
@spec classificationModel(Evision.DNN.Net.t()) :: t() | {:error, String.t()}
@spec classificationModel(binary()) :: t() | {:error, String.t()}

Variant 1:

Create model from deep learning network.

Positional Arguments
  • network: Evision.DNN.Net.t().

    The deep learning network.

Return
  • Classification model: t()

Python prototype (for reference):

ClassificationModel(network) -> <dnn_ClassificationModel object>

Variant 2:

Create classification model from network represented in one of the supported formats. The order of the model and config arguments does not matter.

Positional Arguments
  • model: String.

    Binary file containing trained weights.

Keyword Arguments
  • config: String.

    Text file containing the network configuration.

Return
  • Classification model: t()

Python prototype (for reference):

ClassificationModel(model[, config]) -> <dnn_ClassificationModel object>
classificationModel(model, opts)
@spec classificationModel(binary(), [{atom(), term()}, ...] | nil) ::
  t() | {:error, String.t()}

Create classification model from network represented in one of the supported formats. The order of the model and config arguments does not matter.

Positional Arguments
  • model: String.

    Binary file containing trained weights.

Keyword Arguments
  • config: String.

    Text file containing the network configuration.

Return
  • Classification model: t()

Python prototype (for reference):

ClassificationModel(model[, config]) -> <dnn_ClassificationModel object>
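A minimal sketch of constructing a model from a weights file plus a configuration file. The file names here are hypothetical placeholders; substitute any trained network in a format supported by OpenCV's DNN module (e.g. Caffe, ONNX, TensorFlow):

```elixir
alias Evision.DNN.ClassificationModel

# Hypothetical file paths -- replace with your own trained network.
result =
  ClassificationModel.classificationModel(
    "squeezenet.caffemodel",
    config: "squeezenet.prototxt"
  )

case result do
  {:error, reason} -> IO.puts("failed to load model: #{inspect(reason)}")
  model -> model
end
```

Because the `model` and `config` arguments may be given in either order, passing the configuration file as the positional argument should work equally well.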
classify(self, frame)

@spec classify(t(), Evision.Mat.maybe_mat_in()) ::
  {integer(), number()} | {:error, String.t()}

Positional Arguments
  • self: t()

  • frame: Evision.Mat.maybe_mat_in().

    The input frame.

Return
  • classId: int
  • conf: float

Has overloading in C++

Python prototype (for reference):

classify(frame) -> classId, conf
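A sketch of running top-1 classification on an image. It assumes `model` was created as shown above; `"cat.jpg"` is a hypothetical input image path:

```elixir
alias Evision.DNN.ClassificationModel

# Assumes `model` is a ClassificationModel built earlier and that
# "cat.jpg" (hypothetical) exists on disk.
frame = Evision.imread("cat.jpg")

case ClassificationModel.classify(model, frame) do
  {class_id, confidence} ->
    IO.puts("top-1 class #{class_id} with confidence #{confidence}")

  {:error, reason} ->
    IO.puts("classification failed: #{inspect(reason)}")
end
```

Note that `confidence` is a raw network score unless softmax post-processing is enabled (see setEnableSoftmaxPostProcessing/2 below).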
getEnableSoftmaxPostProcessing(self)
@spec getEnableSoftmaxPostProcessing(t()) :: boolean() | {:error, String.t()}

Get enable/disable softmax post processing option.

Return
  • retval: bool

This option defaults to false; when disabled, softmax post-processing is not applied within the classify() function.

Python prototype (for reference):

getEnableSoftmaxPostProcessing() -> retval
setEnableSoftmaxPostProcessing(self, enable)
@spec setEnableSoftmaxPostProcessing(t(), boolean()) :: t() | {:error, String.t()}

Set enable/disable softmax post processing option.

Positional Arguments
  • enable: bool.

    Enable or disable softmax post-processing within the classify() function.

Return
  • self: t()

If this option is true, softmax is applied after forward inference within the classify() function to map the confidences to the range [0.0, 1.0]. Enable this option when the model does not contain a softmax layer of its own.

Python prototype (for reference):

setEnableSoftmaxPostProcessing(enable) -> retval
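A sketch of toggling and reading back the option, assuming `model` is a ClassificationModel built as shown earlier (for example, from a network that ends in raw logits rather than a softmax layer):

```elixir
alias Evision.DNN.ClassificationModel

# Assumes `model` was created earlier. Enable softmax post-processing so
# the confidences returned by classify/2 fall within [0.0, 1.0].
model = ClassificationModel.setEnableSoftmaxPostProcessing(model, true)

# The getter reflects the new setting.
true = ClassificationModel.getEnableSoftmaxPostProcessing(model)
```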