Evision.ML.ANNMLP (Evision v0.1.14)

Summary

Types

t()

Type that represents an Evision.ML.ANNMLP struct.

Functions

  • calcError(self, data, test) - Computes error on the training or test dataset
  • calcError(self, data, test, opts) - Computes error on the training or test dataset
  • clear(self) - Clears the algorithm state
  • create() - Creates empty model
  • empty(self)
  • getAnnealCoolingRatio(self) - @see setAnnealCoolingRatio/2
  • getAnnealFinalT(self) - @see setAnnealFinalT/2
  • getAnnealInitialT(self) - @see setAnnealInitialT/2
  • getAnnealItePerStep(self) - @see setAnnealItePerStep/2
  • getBackpropMomentumScale(self) - @see setBackpropMomentumScale/2
  • getBackpropWeightScale(self) - @see setBackpropWeightScale/2
  • getDefaultName(self) - Returns the algorithm string identifier
  • getLayerSizes(self) - Integer vector specifying the number of neurons in each layer
  • getRpropDW0(self) - @see setRpropDW0/2
  • getRpropDWMax(self) - @see setRpropDWMax/2
  • getRpropDWMin(self) - @see setRpropDWMin/2
  • getRpropDWMinus(self) - @see setRpropDWMinus/2
  • getRpropDWPlus(self) - @see setRpropDWPlus/2
  • getTermCriteria(self) - @see setTermCriteria/2
  • getTrainMethod(self) - Returns current training method
  • getVarCount(self) - Returns the number of variables in training samples
  • getWeights(self, layerIdx)
  • isClassifier(self) - Returns true if the model is classifier
  • isTrained(self) - Returns true if the model is trained
  • load(filepath) - Loads and creates a serialized ANN from a file
  • predict(self, samples) - Predicts response(s) for the provided sample(s)
  • predict(self, samples, opts) - Predicts response(s) for the provided sample(s)
  • read(self, fn_) - Reads algorithm parameters from a file storage
  • save(self, filename) - Saves the algorithm to a file
  • setActivationFunction(self, type) - Initialize the activation function for each neuron
  • setActivationFunction(self, type, opts) - Initialize the activation function for each neuron
  • setAnnealCoolingRatio(self, val) - @see getAnnealCoolingRatio/1
  • setAnnealFinalT(self, val) - @see getAnnealFinalT/1
  • setAnnealInitialT(self, val) - @see getAnnealInitialT/1
  • setAnnealItePerStep(self, val) - @see getAnnealItePerStep/1
  • setBackpropMomentumScale(self, val) - @see getBackpropMomentumScale/1
  • setBackpropWeightScale(self, val) - @see getBackpropWeightScale/1
  • setLayerSizes(self, layer_sizes) - Integer vector specifying the number of neurons in each layer
  • setRpropDW0(self, val) - @see getRpropDW0/1
  • setRpropDWMax(self, val) - @see getRpropDWMax/1
  • setRpropDWMin(self, val) - @see getRpropDWMin/1
  • setRpropDWMinus(self, val) - @see getRpropDWMinus/1
  • setRpropDWPlus(self, val) - @see getRpropDWPlus/1
  • setTermCriteria(self, val) - @see getTermCriteria/1
  • setTrainMethod(self, method) - Sets training method and common parameters
  • setTrainMethod(self, method, opts) - Sets training method and common parameters
  • train(self, trainData) - Trains the statistical model
  • train(self, trainData, opts) - Trains the statistical model
  • train(self, samples, layout, responses) - Trains the statistical model
  • write(self, fs) - simplified API for language bindings
  • write(self, fs, opts) - simplified API for language bindings

Types

@type t() :: %Evision.ML.ANNMLP{ref: reference()}

Type that represents an Evision.ML.ANNMLP struct.

  • ref: reference()

    The underlying erlang resource variable.

Functions

calcError(self, data, test)
@spec calcError(t(), Evision.ML.TrainData.t(), boolean()) ::
  {number(), Evision.Mat.t()} | {:error, String.t()}

Computes error on the training or test dataset

Positional Arguments
  • data: Evision.ML.TrainData.

    the training data

  • test: bool.

    if true, the error is computed over the test subset of the data; otherwise it is computed over the training subset. Note that if you load a completely different dataset to evaluate an already trained classifier, you will probably want to skip calling TrainData::setTrainTestSplitRatio and pass test=false, so that the error is computed over the whole new set.

Return
  • retval: float

  • resp: Evision.Mat.

    the optional output responses.

The method uses StatModel::predict to compute the error. For regression models the error is computed as RMS, for classifiers as the percentage of misclassified samples (0%-100%).

Python prototype (for reference):

calcError(data, test[, resp]) -> retval, resp

calcError(self, data, test, opts)
@spec calcError(
  t(),
  Evision.ML.TrainData.t(),
  boolean(),
  [{atom(), term()}, ...] | nil
) ::
  {number(), Evision.Mat.t()} | {:error, String.t()}

Computes error on the training or test dataset

Positional Arguments
  • data: Evision.ML.TrainData.

    the training data

  • test: bool.

    if true, the error is computed over the test subset of the data; otherwise it is computed over the training subset. Note that if you load a completely different dataset to evaluate an already trained classifier, you will probably want to skip calling TrainData::setTrainTestSplitRatio and pass test=false, so that the error is computed over the whole new set.

Return
  • retval: float

  • resp: Evision.Mat.

    the optional output responses.

The method uses StatModel::predict to compute the error. For regression models the error is computed as RMS, for classifiers as the percentage of misclassified samples (0%-100%).

Python prototype (for reference):

calcError(data, test[, resp]) -> retval, resp
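
For illustration, a minimal sketch of evaluating a trained model. The `model`, `samples`, and `responses` bindings are assumed to exist, Evision.ML.TrainData.create/3 is assumed to mirror the OpenCV TrainData::create overload, and 0 is assumed to be cv::ml::ROW_SAMPLE:

    # `samples` and `responses` are CV_32F Evision.Mat matrices, one sample per row
    data = Evision.ML.TrainData.create(samples, 0, responses)  # 0: cv::ml::ROW_SAMPLE (assumed)

    # test = false: compute the error over the training subset (or the whole set
    # when no train/test split was defined on `data`)
    {error, _resp} = Evision.ML.ANNMLP.calcError(model, data, false)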

clear(self)

@spec clear(t()) :: :ok | {:error, String.t()}

Clears the algorithm state

Python prototype (for reference):

clear() -> None

create()

@spec create() :: t() | {:error, String.t()}

Creates empty model

Return

Use StatModel::train to train the model, Algorithm::load<ANN_MLP>(filename) to load the pre-trained model. Note that the train method has optional flags: ANN_MLP::TrainFlags.

Python prototype (for reference):

create() -> retval
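
As a rough sketch of going from create/0 to a configured network: the integer constants below are the OpenCV enum values, and Evision.Mat.from_binary/5 is assumed as the Mat constructor; adapt both to your Evision version.

    model = Evision.ML.ANNMLP.create()

    # 2 inputs -> 8 hidden neurons -> 1 output, as a 1x3 CV_32S Mat
    # (little-endian 32-bit integers, i.e. native byte order on most machines)
    layer_sizes =
      Evision.Mat.from_binary(
        <<2::little-signed-integer-size(32), 8::little-signed-integer-size(32),
          1::little-signed-integer-size(32)>>,
        {:s, 32}, 1, 3, 1)

    :ok = Evision.ML.ANNMLP.setLayerSizes(model, layer_sizes)
    :ok = Evision.ML.ANNMLP.setActivationFunction(model, 1)  # 1: ANN_MLP::SIGMOID_SYM (assumed)
    :ok = Evision.ML.ANNMLP.setTrainMethod(model, 1)         # 1: ANN_MLP::RPROP (assumed)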

empty(self)

@spec empty(t()) :: boolean() | {:error, String.t()}
Return
  • retval: bool

Python prototype (for reference):

empty() -> retval

getAnnealCoolingRatio(self)
@spec getAnnealCoolingRatio(t()) :: number() | {:error, String.t()}
Return
  • retval: double

@see setAnnealCoolingRatio/2

Python prototype (for reference):

getAnnealCoolingRatio() -> retval

getAnnealFinalT(self)

@spec getAnnealFinalT(t()) :: number() | {:error, String.t()}
Return
  • retval: double

@see setAnnealFinalT/2

Python prototype (for reference):

getAnnealFinalT() -> retval

getAnnealInitialT(self)

@spec getAnnealInitialT(t()) :: number() | {:error, String.t()}
Return
  • retval: double

@see setAnnealInitialT/2

Python prototype (for reference):

getAnnealInitialT() -> retval

getAnnealItePerStep(self)
@spec getAnnealItePerStep(t()) :: integer() | {:error, String.t()}
Return
  • retval: int

@see setAnnealItePerStep/2

Python prototype (for reference):

getAnnealItePerStep() -> retval

getBackpropMomentumScale(self)
@spec getBackpropMomentumScale(t()) :: number() | {:error, String.t()}
Return
  • retval: double

@see setBackpropMomentumScale/2

Python prototype (for reference):

getBackpropMomentumScale() -> retval

getBackpropWeightScale(self)
@spec getBackpropWeightScale(t()) :: number() | {:error, String.t()}
Return
  • retval: double

@see setBackpropWeightScale/2

Python prototype (for reference):

getBackpropWeightScale() -> retval

getDefaultName(self)

@spec getDefaultName(t()) :: binary() | {:error, String.t()}
Return

Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string.

Python prototype (for reference):

getDefaultName() -> retval

getLayerSizes(self)

@spec getLayerSizes(t()) :: Evision.Mat.t() | {:error, String.t()}
Return
  • retval: cv::Mat

Integer vector specifying the number of neurons in each layer, including the input and output layers. The very first element specifies the number of elements in the input layer, and the last element specifies the number of elements in the output layer. @see setLayerSizes/2

Python prototype (for reference):

getLayerSizes() -> retval

getRpropDW0(self)

@spec getRpropDW0(t()) :: number() | {:error, String.t()}
Return
  • retval: double

@see setRpropDW0/2

Python prototype (for reference):

getRpropDW0() -> retval

getRpropDWMax(self)

@spec getRpropDWMax(t()) :: number() | {:error, String.t()}
Return
  • retval: double

@see setRpropDWMax/2

Python prototype (for reference):

getRpropDWMax() -> retval

getRpropDWMin(self)

@spec getRpropDWMin(t()) :: number() | {:error, String.t()}
Return
  • retval: double

@see setRpropDWMin/2

Python prototype (for reference):

getRpropDWMin() -> retval

getRpropDWMinus(self)

@spec getRpropDWMinus(t()) :: number() | {:error, String.t()}
Return
  • retval: double

@see setRpropDWMinus/2

Python prototype (for reference):

getRpropDWMinus() -> retval

getRpropDWPlus(self)

@spec getRpropDWPlus(t()) :: number() | {:error, String.t()}
Return
  • retval: double

@see setRpropDWPlus/2

Python prototype (for reference):

getRpropDWPlus() -> retval

getTermCriteria(self)

@spec getTermCriteria(t()) :: {integer(), integer(), number()} | {:error, String.t()}
Return
  • retval: TermCriteria

@see setTermCriteria/2

Python prototype (for reference):

getTermCriteria() -> retval

getTrainMethod(self)

@spec getTrainMethod(t()) :: integer() | {:error, String.t()}
Return
  • retval: int

Returns current training method

Python prototype (for reference):

getTrainMethod() -> retval

getVarCount(self)

@spec getVarCount(t()) :: integer() | {:error, String.t()}

Returns the number of variables in training samples

Return
  • retval: int

Python prototype (for reference):

getVarCount() -> retval

getWeights(self, layerIdx)
@spec getWeights(t(), integer()) :: Evision.Mat.t() | {:error, String.t()}
Positional Arguments
  • layerIdx: int
Return
  • retval: cv::Mat

Python prototype (for reference):

getWeights(layerIdx) -> retval

isClassifier(self)

@spec isClassifier(t()) :: boolean() | {:error, String.t()}

Returns true if the model is classifier

Return
  • retval: bool

Python prototype (for reference):

isClassifier() -> retval

isTrained(self)

@spec isTrained(t()) :: boolean() | {:error, String.t()}

Returns true if the model is trained

Return
  • retval: bool

Python prototype (for reference):

isTrained() -> retval

load(filepath)

@spec load(binary()) :: t() | {:error, String.t()}

Loads and creates a serialized ANN from a file

Positional Arguments
  • filepath: String.

    path to serialized ANN

Return

Use ANN::save to serialize and store an ANN to disk. Load the ANN from this file again by calling this function with the path to the file.

Python prototype (for reference):

load(filepath) -> retval
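
A hypothetical save/load round trip; the file name is illustrative, and save/2 and isTrained/1 are the functions documented on this page:

    :ok = Evision.ML.ANNMLP.save(model, "ann_mlp.xml")
    loaded = Evision.ML.ANNMLP.load("ann_mlp.xml")
    true = Evision.ML.ANNMLP.isTrained(loaded)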

predict(self, samples)

@spec predict(t(), Evision.Mat.maybe_mat_in()) ::
  {number(), Evision.Mat.t()} | {:error, String.t()}

Predicts response(s) for the provided sample(s)

Positional Arguments
  • samples: Evision.Mat.

    The input samples, floating-point matrix

Keyword Arguments
  • flags: int.

    The optional flags, model-dependent. See cv::ml::StatModel::Flags.

Return
  • retval: float

  • results: Evision.Mat.

    The optional output matrix of results.

Python prototype (for reference):

predict(samples[, results[, flags]]) -> retval, results

predict(self, samples, opts)
@spec predict(t(), Evision.Mat.maybe_mat_in(), [{atom(), term()}, ...] | nil) ::
  {number(), Evision.Mat.t()} | {:error, String.t()}

Predicts response(s) for the provided sample(s)

Positional Arguments
  • samples: Evision.Mat.

    The input samples, floating-point matrix

Keyword Arguments
  • flags: int.

    The optional flags, model-dependent. See cv::ml::StatModel::Flags.

Return
  • retval: float

  • results: Evision.Mat.

    The optional output matrix of results.

Python prototype (for reference):

predict(samples[, results[, flags]]) -> retval, results
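
For example, assuming `model` has been trained and `samples` is a CV_32F Mat with one sample per row (a sketch; Evision.Mat.to_binary/1 is only one possible way to inspect the result):

    {_retval, results} = Evision.ML.ANNMLP.predict(model, samples)
    # `results` contains one output row per input row; inspect it e.g. via
    # Evision.Mat.to_binary(results) or your preferred Mat conversion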

read(self, fn_)

@spec read(t(), Evision.FileNode.t()) :: :ok | {:error, String.t()}

Reads algorithm parameters from a file storage

Positional Arguments
  • fn_: Evision.FileNode

Python prototype (for reference):

read(fn_) -> None

save(self, filename)

@spec save(t(), binary()) :: :ok | {:error, String.t()}

Positional Arguments
  • filename: String

Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs).

Python prototype (for reference):

save(filename) -> None

setActivationFunction(self, type)
@spec setActivationFunction(t(), integer()) :: :ok | {:error, String.t()}
Positional Arguments
  • type: int.

    The type of activation function. See ANN_MLP::ActivationFunctions.

Keyword Arguments
  • param1: double.

    The first parameter of the activation function, α. Default value is 0.

  • param2: double.

    The second parameter of the activation function, β. Default value is 0.

Initialize the activation function for each neuron. Currently the default and the only fully supported activation function is ANN_MLP::SIGMOID_SYM.

Python prototype (for reference):

setActivationFunction(type[, param1[, param2]]) -> None

setActivationFunction(self, type, opts)
@spec setActivationFunction(t(), integer(), [{atom(), term()}, ...] | nil) ::
  :ok | {:error, String.t()}
Positional Arguments
  • type: int.

    The type of activation function. See ANN_MLP::ActivationFunctions.

Keyword Arguments
  • param1: double.

    The first parameter of the activation function, α. Default value is 0.

  • param2: double.

    The second parameter of the activation function, β. Default value is 0.

Initialize the activation function for each neuron. Currently the default and the only fully supported activation function is ANN_MLP::SIGMOID_SYM.

Python prototype (for reference):

setActivationFunction(type[, param1[, param2]]) -> None
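
A small sketch of the opts form, assuming the keyword keys mirror the Python keyword arguments param1/param2 and that 1 is ANN_MLP::SIGMOID_SYM:

    # symmetrical sigmoid with explicit alpha/beta parameters
    :ok = Evision.ML.ANNMLP.setActivationFunction(model, 1, param1: 1.0, param2: 1.0)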

setAnnealCoolingRatio(self, val)
@spec setAnnealCoolingRatio(t(), number()) :: :ok | {:error, String.t()}
Positional Arguments
  • val: double

@see getAnnealCoolingRatio/1

Python prototype (for reference):

setAnnealCoolingRatio(val) -> None

setAnnealFinalT(self, val)
@spec setAnnealFinalT(t(), number()) :: :ok | {:error, String.t()}
Positional Arguments
  • val: double

@see getAnnealFinalT/1

Python prototype (for reference):

setAnnealFinalT(val) -> None

setAnnealInitialT(self, val)
@spec setAnnealInitialT(t(), number()) :: :ok | {:error, String.t()}
Positional Arguments
  • val: double

@see getAnnealInitialT/1

Python prototype (for reference):

setAnnealInitialT(val) -> None

setAnnealItePerStep(self, val)
@spec setAnnealItePerStep(t(), integer()) :: :ok | {:error, String.t()}
Positional Arguments
  • val: int

@see getAnnealItePerStep/1

Python prototype (for reference):

setAnnealItePerStep(val) -> None

setBackpropMomentumScale(self, val)
@spec setBackpropMomentumScale(t(), number()) :: :ok | {:error, String.t()}
Positional Arguments
  • val: double

@see getBackpropMomentumScale/1

Python prototype (for reference):

setBackpropMomentumScale(val) -> None

setBackpropWeightScale(self, val)
@spec setBackpropWeightScale(t(), number()) :: :ok | {:error, String.t()}
Positional Arguments
  • val: double

@see getBackpropWeightScale/1

Python prototype (for reference):

setBackpropWeightScale(val) -> None

setLayerSizes(self, layer_sizes)
@spec setLayerSizes(t(), Evision.Mat.maybe_mat_in()) :: :ok | {:error, String.t()}
Positional Arguments
  • layer_sizes: Evision.Mat

    Integer vector specifying the number of neurons in each layer, including the input and output layers. The very first element specifies the number of elements in the input layer, and the last element specifies the number of elements in the output layer. Default value is empty Mat. @see getLayerSizes/1

Python prototype (for reference):

setLayerSizes(_layer_sizes) -> None

setRpropDW0(self, val)

@spec setRpropDW0(t(), number()) :: :ok | {:error, String.t()}
Positional Arguments
  • val: double

@see getRpropDW0/1

Python prototype (for reference):

setRpropDW0(val) -> None

setRpropDWMax(self, val)
@spec setRpropDWMax(t(), number()) :: :ok | {:error, String.t()}
Positional Arguments
  • val: double

@see getRpropDWMax/1

Python prototype (for reference):

setRpropDWMax(val) -> None

setRpropDWMin(self, val)
@spec setRpropDWMin(t(), number()) :: :ok | {:error, String.t()}
Positional Arguments
  • val: double

@see getRpropDWMin/1

Python prototype (for reference):

setRpropDWMin(val) -> None

setRpropDWMinus(self, val)
@spec setRpropDWMinus(t(), number()) :: :ok | {:error, String.t()}
Positional Arguments
  • val: double

@see getRpropDWMinus/1

Python prototype (for reference):

setRpropDWMinus(val) -> None

setRpropDWPlus(self, val)
@spec setRpropDWPlus(t(), number()) :: :ok | {:error, String.t()}
Positional Arguments
  • val: double

@see getRpropDWPlus/1

Python prototype (for reference):

setRpropDWPlus(val) -> None

setTermCriteria(self, val)
@spec setTermCriteria(t(), {integer(), integer(), number()}) ::
  :ok | {:error, String.t()}
Positional Arguments
  • val: TermCriteria

@see getTermCriteria/1

Python prototype (for reference):

setTermCriteria(val) -> None
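
The TermCriteria value is the {type, maxCount, epsilon} tuple from the spec above; a sketch, assuming the type flags are cv::TermCriteria::MAX_ITER (1) and cv::TermCriteria::EPS (2):

    # stop after 1000 iterations or when the weight update falls below 1.0e-6
    # (type 3 = MAX_ITER + EPS, assumed from the OpenCV TermCriteria flags)
    :ok = Evision.ML.ANNMLP.setTermCriteria(model, {3, 1000, 1.0e-6})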

setTrainMethod(self, method)
@spec setTrainMethod(t(), integer()) :: :ok | {:error, String.t()}
Positional Arguments
  • method: int.

    Default value is ANN_MLP::RPROP. See ANN_MLP::TrainingMethods.

Keyword Arguments
  • param1: double.

    passed to setRpropDW0 for ANN_MLP::RPROP, to setBackpropWeightScale for ANN_MLP::BACKPROP, and to initialT for ANN_MLP::ANNEAL.

  • param2: double.

    passed to setRpropDWMin for ANN_MLP::RPROP, to setBackpropMomentumScale for ANN_MLP::BACKPROP, and to finalT for ANN_MLP::ANNEAL.

Sets training method and common parameters.

Python prototype (for reference):

setTrainMethod(method[, param1[, param2]]) -> None

setTrainMethod(self, method, opts)
@spec setTrainMethod(t(), integer(), [{atom(), term()}, ...] | nil) ::
  :ok | {:error, String.t()}
Positional Arguments
  • method: int.

    Default value is ANN_MLP::RPROP. See ANN_MLP::TrainingMethods.

Keyword Arguments
  • param1: double.

    passed to setRpropDW0 for ANN_MLP::RPROP, to setBackpropWeightScale for ANN_MLP::BACKPROP, and to initialT for ANN_MLP::ANNEAL.

  • param2: double.

    passed to setRpropDWMin for ANN_MLP::RPROP, to setBackpropMomentumScale for ANN_MLP::BACKPROP, and to finalT for ANN_MLP::ANNEAL.

Sets training method and common parameters.

Python prototype (for reference):

setTrainMethod(method[, param1[, param2]]) -> None
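
A sketch of the opts form, assuming the keyword keys mirror the Python param1/param2 arguments and that 1 is ANN_MLP::RPROP:

    # RPROP with an explicit initial update value (param1 is forwarded to setRpropDW0)
    :ok = Evision.ML.ANNMLP.setTrainMethod(model, 1, param1: 0.1)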

train(self, trainData)

@spec train(t(), Evision.ML.TrainData.t()) :: boolean() | {:error, String.t()}

Trains the statistical model

Positional Arguments
  • trainData: Evision.ML.TrainData.

    training data that can be loaded from file using TrainData::loadFromCSV or created with TrainData::create.

Keyword Arguments
  • flags: int.

    optional flags, depending on the model. Some of the models can be updated with the new training samples, not completely overwritten (such as NormalBayesClassifier or ANN_MLP).

Return
  • retval: bool

Python prototype (for reference):

train(trainData[, flags]) -> retval

train(self, trainData, opts)
@spec train(t(), Evision.ML.TrainData.t(), [{atom(), term()}, ...] | nil) ::
  boolean() | {:error, String.t()}

Trains the statistical model

Positional Arguments
  • trainData: Evision.ML.TrainData.

    training data that can be loaded from file using TrainData::loadFromCSV or created with TrainData::create.

Keyword Arguments
  • flags: int.

    optional flags, depending on the model. Some of the models can be updated with the new training samples, not completely overwritten (such as NormalBayesClassifier or ANN_MLP).

Return
  • retval: bool

Python prototype (for reference):

train(trainData[, flags]) -> retval

train(self, samples, layout, responses)
@spec train(t(), Evision.Mat.maybe_mat_in(), integer(), Evision.Mat.maybe_mat_in()) ::
  boolean() | {:error, String.t()}

Trains the statistical model

Positional Arguments
  • samples: Evision.Mat.

    training samples

  • layout: int.

    See ml::SampleTypes.

  • responses: Evision.Mat.

    vector of responses associated with the training samples.

Return
  • retval: bool

Python prototype (for reference):

train(samples, layout, responses) -> retval
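
Continuing the configuration sketch under create/0, a minimal XOR-style training example. Evision.Mat.from_binary/5 and the ROW_SAMPLE value 0 are assumptions; any other way of producing CV_32F matrices works as well:

    # pack a list of floats as little-endian 32-bit floats (native order on most machines)
    f32 = fn list ->
      for x <- list, into: <<>>, do: <<x::little-float-size(32)>>
    end

    # four 2-feature samples (rows) and their 1-dimensional targets
    samples   = Evision.Mat.from_binary(f32.([0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0]), {:f, 32}, 4, 2, 1)
    responses = Evision.Mat.from_binary(f32.([0.0, 1.0, 1.0, 0.0]), {:f, 32}, 4, 1, 1)

    true = Evision.ML.ANNMLP.train(model, samples, 0, responses)  # 0: cv::ml::ROW_SAMPLE (assumed)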

write(self, fs)

@spec write(t(), Evision.FileStorage.t()) :: :ok | {:error, String.t()}

simplified API for language bindings

Positional Arguments
  • fs: Evision.FileStorage

Keyword Arguments
  • name: String.

Has overloading in C++

Python prototype (for reference):

write(fs[, name]) -> None

write(self, fs, opts)

@spec write(t(), Evision.FileStorage.t(), [{atom(), term()}, ...] | nil) ::
  :ok | {:error, String.t()}

simplified API for language bindings

Positional Arguments
  • fs: Evision.FileStorage

Keyword Arguments
  • name: String.

Has overloading in C++

Python prototype (for reference):

write(fs[, name]) -> None