Evision.ML.EM (Evision v0.1.7)
Summary
cv
Clears the algorithm state
Returns the algorithm string identifier. This string is used as the top-level xml/yml node tag when the object is saved to a file or string.
Reads algorithm parameters from a file storage
Positional Arguments
filename:
String
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs).
Simplified API for language bindings
Simplified API for language bindings
cv.ml
Computes error on the training or test dataset
Computes error on the training or test dataset
Creates an empty EM model. The model should then be trained using the StatModel::train(trainData, flags) method. Alternatively, you can use one of the EM::train* methods, or load it from a file using Algorithm::load<EM>(filename).
Python prototype (for reference):
See also: setClustersNumber/2
See also: setCovarianceMatrixType/2
Returns covariance matrices
Returns the cluster centers (means of the Gaussian mixture)
See also: setTermCriteria/2
Returns the number of variables in training samples
Returns weights of the mixtures
Returns true if the model is a classifier
Returns true if the model is trained
Loads and creates a serialized EM from a file
Loads and creates a serialized EM from a file
Returns a likelihood logarithm value and an index of the most probable mixture component for the given sample.
Returns a likelihood logarithm value and an index of the most probable mixture component for the given sample.
Returns posterior probabilities for the provided samples
Returns posterior probabilities for the provided samples
Positional Arguments
val:
int
Sets the number of mixture components in the Gaussian mixture model. See also: getClustersNumber/1
Positional Arguments
val:
int
Sets the constraint on the covariance matrices, which defines the type of the matrices. See also: getCovarianceMatrixType/1
Positional Arguments
val:
TermCriteria
Sets the termination criteria of the EM algorithm. See also: getTermCriteria/1
Trains the statistical model
Trains the statistical model
Trains the statistical model
Estimates the Gaussian mixture parameters from a set of samples.
Estimates the Gaussian mixture parameters from a set of samples.
Estimates the Gaussian mixture parameters from a set of samples.
Estimates the Gaussian mixture parameters from a set of samples.
Estimates the Gaussian mixture parameters from a set of samples.
Estimates the Gaussian mixture parameters from a set of samples.
Functions
Raising version of calcError/3.
Raising version of calcError/4.
Raising version of clear/1.
Raising version of create/0.
Raising version of empty/1.
Raising version of getClustersNumber/1.
Raising version of getCovarianceMatrixType/1.
Raising version of getCovs/1.
Raising version of getDefaultName/1.
Raising version of getMeans/1.
Raising version of getTermCriteria/1.
Raising version of getVarCount/1.
Raising version of getWeights/1.
Raising version of isClassifier/1.
Raising version of isTrained/1.
Raising version of load/1.
Raising version of load/2.
Raising version of predict2/2.
Raising version of predict2/3.
Raising version of predict/2.
Raising version of predict/3.
Raising version of read/2.
Raising version of save/2.
Raising version of setClustersNumber/2.
Raising version of setCovarianceMatrixType/2.
Raising version of setTermCriteria/2.
Raising version of train/2.
Raising version of train/3.
Raising version of train/4.
Raising version of trainE/3.
Raising version of trainE/4.
Raising version of trainEM/2.
Raising version of trainEM/3.
Raising version of trainM/3.
Raising version of trainM/4.
Raising version of write/2.
Raising version of write/3.
cv
Clears the algorithm state
Python prototype (for reference):
clear() -> None
Returns the algorithm string identifier. This string is used as the top-level xml/yml node tag when the object is saved to a file or string.
Python prototype (for reference):
getDefaultName() -> retval
Reads algorithm parameters from a file storage
Positional Arguments
fn_:
FileNode
Python prototype (for reference):
read(fn_) -> None
Positional Arguments
filename:
String
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs).
Python prototype (for reference):
save(filename) -> None
Simplified API for language bindings
Positional Arguments
fs:
Ptr<FileStorage>
Keyword Arguments
name:
String
Has overloading in C++
Python prototype (for reference):
write(fs[, name]) -> None
Simplified API for language bindings
Positional Arguments
fs:
Ptr<FileStorage>
Keyword Arguments
name:
String
Has overloading in C++
Python prototype (for reference):
write(fs[, name]) -> None
cv.ml
Computes error on the training or test dataset
Positional Arguments
data:
Ptr<TrainData>
the training data
test:
bool
if true, the error is computed over the test subset of the data; otherwise it is computed over the training subset. Note that if you load a completely different dataset to evaluate an already trained classifier, you will probably want to skip TrainData::setTrainTestSplitRatio and pass test=false, so that the error is computed for the whole new set.
Return
resp:
Evision.Mat
the optional output responses.
The method uses StatModel::predict to compute the error. For regression models the error is computed as RMS; for classifiers, as the percentage of misclassified samples (0%-100%).
Python prototype (for reference):
calcError(data, test[, resp]) -> retval, resp
Computes error on the training or test dataset
Positional Arguments
data:
Ptr<TrainData>
the training data
test:
bool
if true, the error is computed over the test subset of the data; otherwise it is computed over the training subset. Note that if you load a completely different dataset to evaluate an already trained classifier, you will probably want to skip TrainData::setTrainTestSplitRatio and pass test=false, so that the error is computed for the whole new set.
Return
resp:
Evision.Mat
the optional output responses.
The method uses StatModel::predict to compute the error. For regression models the error is computed as RMS; for classifiers, as the percentage of misclassified samples (0%-100%).
Python prototype (for reference):
calcError(data, test[, resp]) -> retval, resp
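A hedged usage sketch. Evision.ML.TrainData.create/3 mirroring cv::ml::TrainData::create, the {:ok, result} return convention, and the samples/responses matrices are all assumptions, not confirmed API for this version.
Example (for illustration):
# samples and responses are hypothetical Evision.Mat values;
# layout 0 = cv::ml::ROW_SAMPLE (one sample per row).
{:ok, data} = Evision.ML.TrainData.create(samples, 0, responses)
# Error over the training subset (test = false) of an already trained `em`;
# for classifiers this is the percentage of misclassified samples.
{:ok, {error, _resp}} = Evision.ML.EM.calcError(em, data, false)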
Creates an empty EM model. The model should then be trained using the StatModel::train(trainData, flags) method. Alternatively, you can use one of the EM::train* methods, or load it from a file using Algorithm::load<EM>(filename).
Python prototype (for reference):
create() -> retval
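A minimal configuration sketch covering create/0 and the setters documented below. It assumes the non-raising functions return {:ok, value} tuples (the raising variants listed under Functions raise instead) and passes OpenCV's raw constant values, since how this binding version exposes the named constants is not confirmed here.
Example (for illustration):
{:ok, em} = Evision.ML.EM.create()
# 3 mixture components (OpenCV's default is 5).
{:ok, em} = Evision.ML.EM.setClustersNumber(em, 3)
# Covariance type: 0 = spherical, 1 = diagonal (default), 2 = generic.
{:ok, em} = Evision.ML.EM.setCovarianceMatrixType(em, 1)
# Termination criteria as {type, max_count, epsilon}; type 3 =
# TermCriteria::COUNT + TermCriteria::EPS. The tuple shape is an assumption.
{:ok, em} = Evision.ML.EM.setTermCriteria(em, {3, 100, 1.0e-6})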
Python prototype (for reference):
empty() -> retval
See also: setClustersNumber/2
Python prototype (for reference):
getClustersNumber() -> retval
See also: setCovarianceMatrixType/2
Python prototype (for reference):
getCovarianceMatrixType() -> retval
Returns covariance matrices
Return
covs:
[Evision.Mat]
Returns a vector of covariance matrices. The number of matrices equals the number of Gaussian mixtures; each matrix is a square floating-point $N \times N$ matrix, where N is the space dimensionality.
Python prototype (for reference):
getCovs([, covs]) -> covs
Returns the cluster centers (means of the Gaussian mixture)
Returns a matrix with the number of rows equal to the number of mixtures and the number of columns equal to the space dimensionality.
Python prototype (for reference):
getMeans() -> retval
See also: setTermCriteria/2
Python prototype (for reference):
getTermCriteria() -> retval
Returns the number of variables in training samples
Python prototype (for reference):
getVarCount() -> retval
Returns weights of the mixtures
Returns vector with the number of elements equal to the number of mixtures.
Python prototype (for reference):
getWeights() -> retval
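Once a model is trained, the getters above expose the fitted mixture. A short inspection sketch; Evision.Mat.to_nx/1 as the Mat-to-tensor converter is an assumption about this binding version.
Example (for illustration):
{:ok, means} = Evision.ML.EM.getMeans(em)     # nclusters x dims matrix
{:ok, weights} = Evision.ML.EM.getWeights(em) # mixture weights, sum to 1
{:ok, covs} = Evision.ML.EM.getCovs(em)       # list of dims x dims matrices
IO.inspect(Evision.Mat.to_nx(means), label: "means")  # assumed converter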
Returns true if the model is a classifier
Python prototype (for reference):
isClassifier() -> retval
Returns true if the model is trained
Python prototype (for reference):
isTrained() -> retval
Loads and creates a serialized EM from a file
Positional Arguments
filepath:
String
path to the serialized EM
Keyword Arguments
nodeName:
String
name of the node containing the classifier
Use EM::save to serialize and store an EM to disk. Load the EM from this file again by calling this function with the path to the file. Optionally, specify the node for the file containing the classifier.
Python prototype (for reference):
load(filepath[, nodeName]) -> retval
Loads and creates a serialized EM from a file
Positional Arguments
filepath:
String
path to the serialized EM
Keyword Arguments
nodeName:
String
name of the node containing the classifier
Use EM::save to serialize and store an EM to disk. Load the EM from this file again by calling this function with the path to the file. Optionally, specify the node for the file containing the classifier.
Python prototype (for reference):
load(filepath[, nodeName]) -> retval
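A save/load round-trip sketch using save/2 from the cv section above. The {:ok, model} return shape for load/1 and the isTrained/1 result shape are assumptions; the file format follows the extension (.xml / .yml / .json), as with other OpenCV Algorithms.
Example (for illustration):
_ = Evision.ML.EM.save(em, "em_model.yml")   # return shape not asserted here
{:ok, em2} = Evision.ML.EM.load("em_model.yml")
{:ok, true} = Evision.ML.EM.isTrained(em2)   # assumed result shape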
Returns a likelihood logarithm value and an index of the most probable mixture component for the given sample.
Positional Arguments
sample:
Evision.Mat
A sample for classification. It should be a one-channel matrix of size $1 \times dims$ or $dims \times 1$.
Return
probs:
Evision.Mat
Optional output matrix that contains posterior probabilities of each component given the sample. It has size $1 \times nclusters$ and type CV_64FC1.
The method returns a two-element double vector: the zeroth element is the likelihood logarithm value for the sample, and the first element is the index of the most probable mixture component for the given sample.
Python prototype (for reference):
predict2(sample[, probs]) -> retval, probs
Returns a likelihood logarithm value and an index of the most probable mixture component for the given sample.
Positional Arguments
sample:
Evision.Mat
A sample for classification. It should be a one-channel matrix of size $1 \times dims$ or $dims \times 1$.
Return
probs:
Evision.Mat
Optional output matrix that contains posterior probabilities of each component given the sample. It has size $1 \times nclusters$ and type CV_64FC1.
The method returns a two-element double vector: the zeroth element is the likelihood logarithm value for the sample, and the first element is the index of the most probable mixture component for the given sample.
Python prototype (for reference):
predict2(sample[, probs]) -> retval, probs
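A single-sample sketch assuming a model `em` trained as in the trainEM sketch further down. Destructuring retval into {log_likelihood, best_k} follows the two-element double vector described above, but the exact Elixir tuple shape and Evision.Mat.from_nx/1 are assumptions.
Example (for illustration):
# A 1 x dims (here dims = 2) one-channel sample.
sample = Evision.Mat.from_nx(Nx.tensor([[0.5, -0.2]], type: :f64))  # assumed constructor
{:ok, {{log_likelihood, best_k}, _probs}} = Evision.ML.EM.predict2(em, sample)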
Returns posterior probabilities for the provided samples
Positional Arguments
samples:
Evision.Mat
The input samples, a floating-point matrix
Keyword Arguments
flags:
int
This parameter will be ignored
Return
results:
Evision.Mat
The optional output $nsamples \times nclusters$ matrix of results. It contains posterior probabilities for each sample from the input.
Python prototype (for reference):
predict(samples[, results[, flags]]) -> retval, results
Returns posterior probabilities for the provided samples
Positional Arguments
samples:
Evision.Mat
The input samples, a floating-point matrix
Keyword Arguments
flags:
int
This parameter will be ignored
Return
results:
Evision.Mat
The optional output $nsamples \times nclusters$ matrix of results. It contains posterior probabilities for each sample from the input.
Python prototype (for reference):
predict(samples[, results[, flags]]) -> retval, results
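A batch sketch assuming a trained model `em` and an nsamples x dims `samples` matrix (see the trainEM sketch further down); the {retval, results} tuple shape and Evision.Mat.to_nx/1 are assumptions.
Example (for illustration):
{:ok, {_retval, results}} = Evision.ML.EM.predict(em, samples)
posteriors = Evision.Mat.to_nx(results)  # nsamples x nclusters; rows sum to 1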
Positional Arguments
val:
int
Sets the number of mixture components in the Gaussian mixture model. See also: getClustersNumber/1
Python prototype (for reference):
setClustersNumber(val) -> None
Positional Arguments
val:
int
Sets the constraint on the covariance matrices, which defines the type of the matrices. See also: getCovarianceMatrixType/1
Python prototype (for reference):
setCovarianceMatrixType(val) -> None
Positional Arguments
val:
TermCriteria
Sets the termination criteria of the EM algorithm. See also: getTermCriteria/1
Python prototype (for reference):
setTermCriteria(val) -> None
Trains the statistical model
Positional Arguments
trainData:
Ptr<TrainData>
training data that can be loaded from a file using TrainData::loadFromCSV or created with TrainData::create.
Keyword Arguments
flags:
int
optional flags, depending on the model. Some of the models can be updated with new training samples rather than completely overwritten (such as NormalBayesClassifier or ANN_MLP).
Python prototype (for reference):
train(trainData[, flags]) -> retval
Trains the statistical model
Positional Arguments
trainData:
Ptr<TrainData>
training data that can be loaded from a file using TrainData::loadFromCSV or created with TrainData::create.
Keyword Arguments
flags:
int
optional flags, depending on the model. Some of the models can be updated with new training samples rather than completely overwritten (such as NormalBayesClassifier or ANN_MLP).
Python prototype (for reference):
train(trainData[, flags]) -> retval
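For EM, the dedicated trainEM/trainE/trainM variants below are the usual entry points, but the generic StatModel entry point can be exercised too. A sketch reusing the hypothetical TrainData handle from the calcError example above; the return shape is an assumption.
Example (for illustration):
{:ok, _retval} = Evision.ML.EM.train(em, data)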
Trains the statistical model
Positional Arguments
samples:
Evision.Mat
training samples
layout:
int
See ml::SampleTypes.
responses:
Evision.Mat
vector of responses associated with the training samples.
Python prototype (for reference):
train(samples, layout, responses) -> retval
Estimates the Gaussian mixture parameters from a set of samples.
Positional Arguments
samples:
Evision.Mat
Samples from which the Gaussian mixture model will be estimated. It should be a one-channel matrix, each row of which is a sample. If the matrix does not have type CV_64F, it will be converted to an internal matrix of that type before further computation.
means0:
Evision.Mat
Initial means $a_k$ of the mixture components. It is a one-channel matrix of size $nclusters \times dims$. If the matrix does not have type CV_64F, it will be converted to an internal matrix of that type before further computation.
Keyword Arguments
covs0:
Evision.Mat
The vector of initial covariance matrices $S_k$ of the mixture components. Each covariance matrix is a one-channel matrix of size $dims \times dims$. If the matrices do not have type CV_64F, they will be converted to internal matrices of that type before further computation.
weights0:
Evision.Mat
Initial weights $\pi_k$ of the mixture components. It should be a one-channel floating-point matrix of size $1 \times nclusters$ or $nclusters \times 1$.
Return
logLikelihoods:
Evision.Mat
The optional output matrix that contains a likelihood logarithm value for each sample. It has size $nsamples \times 1$ and type CV_64FC1.
labels:
Evision.Mat
The optional output "class label" for each sample: $\texttt{labels}_i = \arg\max_k(p_{i,k}),\ i = 1..N$ (indices of the most probable mixture component for each sample). It has size $nsamples \times 1$ and type CV_32SC1.
probs:
Evision.Mat
The optional output matrix that contains posterior probabilities of each Gaussian mixture component given each sample. It has size $nsamples \times nclusters$ and type CV_64FC1.
This variation starts with the Expectation step. You need to provide initial means $a_k$ of the mixture components. Optionally you can pass initial weights $\pi_k$ and covariance matrices $S_k$ of the mixture components.
Python prototype (for reference):
trainE(samples, means0[, covs0[, weights0[, logLikelihoods[, labels[, probs]]]]]) -> retval, logLikelihoods, labels, probs
Estimates the Gaussian mixture parameters from a set of samples.
Positional Arguments
samples:
Evision.Mat
Samples from which the Gaussian mixture model will be estimated. It should be a one-channel matrix, each row of which is a sample. If the matrix does not have type CV_64F, it will be converted to an internal matrix of that type before further computation.
means0:
Evision.Mat
Initial means $a_k$ of the mixture components. It is a one-channel matrix of size $nclusters \times dims$. If the matrix does not have type CV_64F, it will be converted to an internal matrix of that type before further computation.
Keyword Arguments
covs0:
Evision.Mat
The vector of initial covariance matrices $S_k$ of the mixture components. Each covariance matrix is a one-channel matrix of size $dims \times dims$. If the matrices do not have type CV_64F, they will be converted to internal matrices of that type before further computation.
weights0:
Evision.Mat
Initial weights $\pi_k$ of the mixture components. It should be a one-channel floating-point matrix of size $1 \times nclusters$ or $nclusters \times 1$.
Return
logLikelihoods:
Evision.Mat
The optional output matrix that contains a likelihood logarithm value for each sample. It has size $nsamples \times 1$ and type CV_64FC1.
labels:
Evision.Mat
The optional output "class label" for each sample: $\texttt{labels}_i = \arg\max_k(p_{i,k}),\ i = 1..N$ (indices of the most probable mixture component for each sample). It has size $nsamples \times 1$ and type CV_32SC1.
probs:
Evision.Mat
The optional output matrix that contains posterior probabilities of each Gaussian mixture component given each sample. It has size $nsamples \times nclusters$ and type CV_64FC1.
This variation starts with the Expectation step. You need to provide initial means $a_k$ of the mixture components. Optionally you can pass initial weights $\pi_k$ and covariance matrices $S_k$ of the mixture components.
Python prototype (for reference):
trainE(samples, means0[, covs0[, weights0[, logLikelihoods[, labels[, probs]]]]]) -> retval, logLikelihoods, labels, probs
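A sketch of starting from the Expectation step with hypothetical initial means for a 2-component, 2-D model, using the `samples` matrix from the trainEM sketch below; Evision.Mat.from_nx/1 and the return shape are assumptions.
Example (for illustration):
# One row per component: nclusters x dims = 2 x 2, CV_64F.
means0 = Evision.Mat.from_nx(Nx.tensor([[0.0, 0.0], [8.0, 8.0]], type: :f64))
{:ok, _result} = Evision.ML.EM.trainE(em, samples, means0)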
Estimates the Gaussian mixture parameters from a set of samples.
Positional Arguments
samples:
Evision.Mat
Samples from which the Gaussian mixture model will be estimated. It should be a one-channel matrix, each row of which is a sample. If the matrix does not have type CV_64F, it will be converted to an internal matrix of that type before further computation.
Return
logLikelihoods:
Evision.Mat
The optional output matrix that contains a likelihood logarithm value for each sample. It has size $nsamples \times 1$ and type CV_64FC1.
labels:
Evision.Mat
The optional output "class label" for each sample: $\texttt{labels}_i = \arg\max_k(p_{i,k}),\ i = 1..N$ (indices of the most probable mixture component for each sample). It has size $nsamples \times 1$ and type CV_32SC1.
probs:
Evision.Mat
The optional output matrix that contains posterior probabilities of each Gaussian mixture component given each sample. It has size $nsamples \times nclusters$ and type CV_64FC1.
This variation starts with the Expectation step. Initial values of the model parameters will be estimated by the k-means algorithm. Unlike many of the ML models, EM is an unsupervised learning algorithm and it does not take responses (class labels or function values) as input. Instead, it computes the Maximum Likelihood Estimate of the Gaussian mixture parameters from an input sample set and stores all the parameters inside the structure: $p_{i,k}$ in probs, $a_k$ in means, $S_k$ in covs[k], and $\pi_k$ in weights. Optionally it computes the output "class label" for each sample: $\texttt{labels}_i = \arg\max_k(p_{i,k}),\ i = 1..N$ (indices of the most probable mixture component for each sample). The trained model can then be used for prediction, just like any other classifier. The trained model is similar to the NormalBayesClassifier.
Python prototype (for reference):
trainEM(samples[, logLikelihoods[, labels[, probs]]]) -> retval, logLikelihoods, labels, probs
Estimates the Gaussian mixture parameters from a set of samples.
Positional Arguments
samples:
Evision.Mat
Samples from which the Gaussian mixture model will be estimated. It should be a one-channel matrix, each row of which is a sample. If the matrix does not have type CV_64F, it will be converted to an internal matrix of that type before further computation.
Return
logLikelihoods:
Evision.Mat
The optional output matrix that contains a likelihood logarithm value for each sample. It has size $nsamples \times 1$ and type CV_64FC1.
labels:
Evision.Mat
The optional output "class label" for each sample: $\texttt{labels}_i = \arg\max_k(p_{i,k}),\ i = 1..N$ (indices of the most probable mixture component for each sample). It has size $nsamples \times 1$ and type CV_32SC1.
probs:
Evision.Mat
The optional output matrix that contains posterior probabilities of each Gaussian mixture component given each sample. It has size $nsamples \times nclusters$ and type CV_64FC1.
This variation starts with the Expectation step. Initial values of the model parameters will be estimated by the k-means algorithm. Unlike many of the ML models, EM is an unsupervised learning algorithm and it does not take responses (class labels or function values) as input. Instead, it computes the Maximum Likelihood Estimate of the Gaussian mixture parameters from an input sample set and stores all the parameters inside the structure: $p_{i,k}$ in probs, $a_k$ in means, $S_k$ in covs[k], and $\pi_k$ in weights. Optionally it computes the output "class label" for each sample: $\texttt{labels}_i = \arg\max_k(p_{i,k}),\ i = 1..N$ (indices of the most probable mixture component for each sample). The trained model can then be used for prediction, just like any other classifier. The trained model is similar to the NormalBayesClassifier.
Python prototype (for reference):
trainEM(samples[, logLikelihoods[, labels[, probs]]]) -> retval, logLikelihoods, labels, probs
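A hedged end-to-end sketch of unsupervised training. The synthetic two-blob data, Evision.Mat.from_nx/1, and the {:ok, result} return convention are assumptions about this binding version; the Python prototype above is the ground truth for the output tuple.
Example (for illustration):
# Two well-separated 2-D blobs as a 200 x 2 one-channel f64 matrix
# (trainEM converts non-CV_64F input internally, but we build f64 directly).
samples_nx =
  Nx.tensor(
    for mu <- [0.0, 8.0], _ <- 1..100 do
      [mu + :rand.normal(), mu + :rand.normal()]
    end,
    type: :f64
  )
samples = Evision.Mat.from_nx(samples_nx)  # assumed Mat constructor

{:ok, em} = Evision.ML.EM.create()
{:ok, em} = Evision.ML.EM.setClustersNumber(em, 2)
# Per the prototype, the result carries retval, logLikelihoods, labels
# (most probable component per sample, CV_32SC1) and probs; the exact
# Elixir tuple shape is an assumption.
{:ok, _result} = Evision.ML.EM.trainEM(em, samples)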
Estimates the Gaussian mixture parameters from a set of samples.
Positional Arguments
samples:
Evision.Mat
Samples from which the Gaussian mixture model will be estimated. It should be a one-channel matrix, each row of which is a sample. If the matrix does not have type CV_64F, it will be converted to an internal matrix of that type before further computation.
probs0:
Evision.Mat
the initial probabilities $p_{i,k}$ of each sample belonging to each mixture component
Return
logLikelihoods:
Evision.Mat
The optional output matrix that contains a likelihood logarithm value for each sample. It has size $nsamples \times 1$ and type CV_64FC1.
labels:
Evision.Mat
The optional output "class label" for each sample: $\texttt{labels}_i = \arg\max_k(p_{i,k}),\ i = 1..N$ (indices of the most probable mixture component for each sample). It has size $nsamples \times 1$ and type CV_32SC1.
probs:
Evision.Mat
The optional output matrix that contains posterior probabilities of each Gaussian mixture component given each sample. It has size $nsamples \times nclusters$ and type CV_64FC1.
This variation starts with the Maximization step. You need to provide the initial probabilities $p_{i,k}$ to use this option.
Python prototype (for reference):
trainM(samples, probs0[, logLikelihoods[, labels[, probs]]]) -> retval, logLikelihoods, labels, probs
Estimates the Gaussian mixture parameters from a set of samples.
Positional Arguments
samples:
Evision.Mat
Samples from which the Gaussian mixture model will be estimated. It should be a one-channel matrix, each row of which is a sample. If the matrix does not have type CV_64F, it will be converted to an internal matrix of that type before further computation.
probs0:
Evision.Mat
the initial probabilities $p_{i,k}$ of each sample belonging to each mixture component
Return
logLikelihoods:
Evision.Mat
The optional output matrix that contains a likelihood logarithm value for each sample. It has size $nsamples \times 1$ and type CV_64FC1.
labels:
Evision.Mat
The optional output "class label" for each sample: $\texttt{labels}_i = \arg\max_k(p_{i,k}),\ i = 1..N$ (indices of the most probable mixture component for each sample). It has size $nsamples \times 1$ and type CV_32SC1.
probs:
Evision.Mat
The optional output matrix that contains posterior probabilities of each Gaussian mixture component given each sample. It has size $nsamples \times nclusters$ and type CV_64FC1.
This variation starts with the Maximization step. You need to provide the initial probabilities $p_{i,k}$ to use this option.
Python prototype (for reference):
trainM(samples, probs0[, logLikelihoods[, labels[, probs]]]) -> retval, logLikelihoods, labels, probs
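A sketch of starting from the Maximization step, reusing the synthetic 200 x 2 `samples` from the trainEM sketch above; Evision.Mat.from_nx/1 and the return shape are assumptions.
Example (for illustration):
# Initial posteriors p_{i,k}: a hard 50/50 assignment for the two blobs,
# giving an nsamples x nclusters = 200 x 2 matrix.
half = Nx.broadcast(Nx.tensor([1.0, 0.0], type: :f64), {100, 2})
other = Nx.broadcast(Nx.tensor([0.0, 1.0], type: :f64), {100, 2})
probs0 = Evision.Mat.from_nx(Nx.concatenate([half, other]))  # assumed constructor
{:ok, _result} = Evision.ML.EM.trainM(em, samples, probs0)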
Functions
Raising version of calcError/3.
Raising version of calcError/4.
Raising version of clear/1.
Raising version of create/0.
Raising version of empty/1.
Raising version of getClustersNumber/1.
Raising version of getCovarianceMatrixType/1.
Raising version of getCovs/1.
Raising version of getDefaultName/1.
Raising version of getMeans/1.
Raising version of getTermCriteria/1.
Raising version of getVarCount/1.
Raising version of getWeights/1.
Raising version of isClassifier/1.
Raising version of isTrained/1.
Raising version of load/1.
Raising version of load/2.
Raising version of predict2/2.
Raising version of predict2/3.
Raising version of predict/2.
Raising version of predict/3.
Raising version of read/2.
Raising version of save/2.
Raising version of setClustersNumber/2.
Raising version of setCovarianceMatrixType/2.
Raising version of setTermCriteria/2.
Raising version of train/2.
Raising version of train/3.
Raising version of train/4.
Raising version of trainE/3.
Raising version of trainE/4.
Raising version of trainEM/2.
Raising version of trainEM/3.
Raising version of trainM/3.
Raising version of trainM/4.
Raising version of write/2.
Raising version of write/3.