Evision.ML.KNearest (Evision v0.1.9)
Summary
cv
Clears the algorithm state
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string.
Reads algorithm parameters from a file storage
Positional Arguments
- filename: String
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs).
simplified API for language bindings
simplified API for language bindings
cv.ml
Computes error on the training or test dataset
Computes error on the training or test dataset
Creates the empty model
Returns true if the algorithm is empty (e.g. in the very beginning or after unloading)
Finds the neighbors and predicts responses for input vectors.
Finds the neighbors and predicts responses for input vectors.
Algorithm type, one of KNearest::Types. @see setAlgorithmType
Default number of neighbors to use in the predict method. @see setDefaultK
Parameter for the KDTree implementation. @see setEmax
Whether a classification or regression model should be trained. @see setIsClassifier
Returns the number of variables in training samples
Returns true if the model is a classifier
Returns true if the model is trained
Loads and creates a serialized KNearest from a file
Predicts response(s) for the provided sample(s)
Predicts response(s) for the provided sample(s)
Positional Arguments
- val: int. Algorithm type, one of KNearest::Types.
@see getAlgorithmType
Positional Arguments
- val: int. Default number of neighbors to use in the predict method.
@see getDefaultK
Positional Arguments
- val: int. Parameter for the KDTree implementation.
@see getEmax
Positional Arguments
- val: bool. Whether a classification or regression model should be trained.
@see getIsClassifier
Trains the statistical model
Trains the statistical model
Trains the statistical model
Functions
Raising version of calcError/3.
Raising version of calcError/4.
Raising version of clear/1.
Raising version of create/0.
Raising version of empty/1.
Raising version of findNearest/3.
Raising version of findNearest/4.
Raising version of getAlgorithmType/1.
Raising version of getDefaultK/1.
Raising version of getDefaultName/1.
Raising version of getEmax/1.
Raising version of getIsClassifier/1.
Raising version of getVarCount/1.
Raising version of isClassifier/1.
Raising version of isTrained/1.
Raising version of load/1.
Raising version of predict/2.
Raising version of predict/3.
Raising version of read/2.
Raising version of save/2.
Raising version of setAlgorithmType/2.
Raising version of setDefaultK/2.
Raising version of setEmax/2.
Raising version of setIsClassifier/2.
Raising version of train/2.
Raising version of train/3.
Raising version of train/4.
Raising version of write/2.
Raising version of write/3.
cv
Clears the algorithm state
Python prototype (for reference):
clear() -> None
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string.
Python prototype (for reference):
getDefaultName() -> retval
Reads algorithm parameters from a file storage
Positional Arguments
- fn_: FileNode
Python prototype (for reference):
read(fn_) -> None
Positional Arguments
- filename: String
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs).
Python prototype (for reference):
save(filename) -> None
simplified API for language bindings
Positional Arguments
- fs: Ptr<FileStorage>
Keyword Arguments
- name: String
Has overloading in C++
Python prototype (for reference):
write(fs[, name]) -> None
simplified API for language bindings
Positional Arguments
- fs: Ptr<FileStorage>
Keyword Arguments
- name: String
Has overloading in C++
Python prototype (for reference):
write(fs[, name]) -> None
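To make the write overloads above concrete, here is a minimal sketch using the OpenCV Python API that these prototypes refer to; the file name knn_state.yml and the random training data are placeholder assumptions:

import cv2 as cv
import numpy as np

# Train a tiny model so there is state worth serializing.
samples = np.random.rand(10, 2).astype(np.float32)
responses = np.random.randint(0, 2, (10, 1)).astype(np.float32)
knn = cv.ml.KNearest_create()
knn.train(samples, cv.ml.ROW_SAMPLE, responses)

# Write the algorithm state into an explicitly managed FileStorage.
fs = cv.FileStorage("knn_state.yml", cv.FILE_STORAGE_WRITE)
knn.write(fs)
fs.release()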
cv.ml
Computes error on the training or test dataset
Positional Arguments
- data: Ptr<TrainData>. The training data.
- test: bool. If true, the error is computed over the test subset of the data; otherwise it is computed over the training subset. Note that if you load a completely different dataset to evaluate an already trained classifier, you will probably want to skip splitting it with TrainData::setTrainTestSplitRatio and pass test=false, so that the error is computed for the whole new set.
Return
- resp: Evision.Mat. The optional output responses.
The method uses StatModel::predict to compute the error. For regression models the error is computed as RMS, for classifiers as a percentage of misclassified samples (0%-100%).
Python prototype (for reference):
calcError(data, test[, resp]) -> retval, resp
Computes error on the training or test dataset
Positional Arguments
- data: Ptr<TrainData>. The training data.
- test: bool. If true, the error is computed over the test subset of the data; otherwise it is computed over the training subset. Note that if you load a completely different dataset to evaluate an already trained classifier, you will probably want to skip splitting it with TrainData::setTrainTestSplitRatio and pass test=false, so that the error is computed for the whole new set.
Return
- resp: Evision.Mat. The optional output responses.
The method uses StatModel::predict to compute the error. For regression models the error is computed as RMS, for classifiers as a percentage of misclassified samples (0%-100%).
Python prototype (for reference):
calcError(data, test[, resp]) -> retval, resp
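As a sketch of how calcError is typically driven through a TrainData object, expressed with the OpenCV Python API that the prototype above references (the synthetic data and the 0.8 split ratio are illustrative assumptions):

import cv2 as cv
import numpy as np

samples = np.random.rand(100, 2).astype(np.float32)
responses = (samples[:, :1] > 0.5).astype(np.float32)

data = cv.ml.TrainData_create(samples, cv.ml.ROW_SAMPLE, responses)
data.setTrainTestSplitRatio(0.8)            # hold out 20% as the test subset

knn = cv.ml.KNearest_create()
knn.train(data)

train_err, _ = knn.calcError(data, False)   # error over the training subset
test_err, _ = knn.calcError(data, True)     # error over the held-out test subset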
Creates the empty model
The static method creates an empty KNearest classifier. It should then be trained using the StatModel::train method.
Python prototype (for reference):
create() -> retval
Returns true if the algorithm is empty (e.g. in the very beginning or after unloading).
Python prototype (for reference):
empty() -> retval
Finds the neighbors and predicts responses for input vectors.
Positional Arguments
- samples: Evision.Mat. Input samples stored by rows. It is a single-precision floating-point matrix of <number_of_samples> * k size.
- k: int. Number of used nearest neighbors. Should be greater than 1.
Return
- results: Evision.Mat. Vector with results of prediction (regression or classification) for each input sample. It is a single-precision floating-point vector with <number_of_samples> elements.
- neighborResponses: Evision.Mat. Optional output values for corresponding neighbors. It is a single-precision floating-point matrix of <number_of_samples> * k size.
- dist: Evision.Mat. Optional output distances from the input vectors to the corresponding neighbors. It is a single-precision floating-point matrix of <number_of_samples> * k size.
For each input vector (a row of the matrix samples), the method finds the k nearest neighbors. In case of regression, the predicted result is a mean value of the particular vector's neighbor responses. In case of classification, the class is determined by voting. For each input vector, the neighbors are sorted by their distances to the vector. In case of the C++ interface you can use output pointers to empty matrices and the function will allocate memory itself. If only a single input vector is passed, all output matrices are optional and the predicted value is returned by the method. The function is parallelized with the TBB library.
Python prototype (for reference):
findNearest(samples, k[, results[, neighborResponses[, dist]]]) -> retval, results, neighborResponses, dist
Finds the neighbors and predicts responses for input vectors.
Positional Arguments
- samples: Evision.Mat. Input samples stored by rows. It is a single-precision floating-point matrix of <number_of_samples> * k size.
- k: int. Number of used nearest neighbors. Should be greater than 1.
Return
- results: Evision.Mat. Vector with results of prediction (regression or classification) for each input sample. It is a single-precision floating-point vector with <number_of_samples> elements.
- neighborResponses: Evision.Mat. Optional output values for corresponding neighbors. It is a single-precision floating-point matrix of <number_of_samples> * k size.
- dist: Evision.Mat. Optional output distances from the input vectors to the corresponding neighbors. It is a single-precision floating-point matrix of <number_of_samples> * k size.
For each input vector (a row of the matrix samples), the method finds the k nearest neighbors. In case of regression, the predicted result is a mean value of the particular vector's neighbor responses. In case of classification, the class is determined by voting. For each input vector, the neighbors are sorted by their distances to the vector. In case of the C++ interface you can use output pointers to empty matrices and the function will allocate memory itself. If only a single input vector is passed, all output matrices are optional and the predicted value is returned by the method. The function is parallelized with the TBB library.
Python prototype (for reference):
findNearest(samples, k[, results[, neighborResponses[, dist]]]) -> retval, results, neighborResponses, dist
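A short sketch of findNearest, mirroring the Python prototype above (the random 2-D points and k=3 are illustrative):

import cv2 as cv
import numpy as np

# 25 random 2-D points with binary labels.
train = np.random.randint(0, 100, (25, 2)).astype(np.float32)
labels = np.random.randint(0, 2, (25, 1)).astype(np.float32)

knn = cv.ml.KNearest_create()
knn.train(train, cv.ml.ROW_SAMPLE, labels)

newcomer = np.random.randint(0, 100, (1, 2)).astype(np.float32)
retval, results, neighbor_responses, dist = knn.findNearest(newcomer, 3)
# results[0, 0] is the voted class; neighbor_responses and dist describe the 3 neighbors.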
Algorithm type, one of KNearest::Types.
@see setAlgorithmType
Python prototype (for reference):
getAlgorithmType() -> retval
Default number of neighbors to use in the predict method.
@see setDefaultK
Python prototype (for reference):
getDefaultK() -> retval
Parameter for the KDTree implementation.
@see setEmax
Python prototype (for reference):
getEmax() -> retval
Whether a classification or regression model should be trained.
@see setIsClassifier
Python prototype (for reference):
getIsClassifier() -> retval
Returns the number of variables in training samples
Python prototype (for reference):
getVarCount() -> retval
Returns true if the model is a classifier
Python prototype (for reference):
isClassifier() -> retval
Returns true if the model is trained
Python prototype (for reference):
isTrained() -> retval
Loads and creates a serialized KNearest from a file
Positional Arguments
- filepath: String. Path to the serialized KNearest.
Use KNearest::save to serialize and store a KNearest to disk. Load the KNearest from this file again by calling this function with the path to the file.
Python prototype (for reference):
load(filepath) -> retval
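A sketch of the save/load round trip described above, using the OpenCV Python API referenced by the prototype (the path knn_model.yml is a placeholder):

import cv2 as cv
import numpy as np

samples = np.random.rand(20, 2).astype(np.float32)
responses = np.random.randint(0, 2, (20, 1)).astype(np.float32)

knn = cv.ml.KNearest_create()
knn.train(samples, cv.ml.ROW_SAMPLE, responses)
knn.save("knn_model.yml")                    # serialize the trained model to disk

restored = cv.ml.KNearest_load("knn_model.yml")
assert restored.isTrained()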
Predicts response(s) for the provided sample(s)
Positional Arguments
- samples: Evision.Mat. The input samples, floating-point matrix.
Keyword Arguments
- flags: int. The optional flags, model-dependent. See cv::ml::StatModel::Flags.
Return
- results: Evision.Mat. The optional output matrix of results.
Python prototype (for reference):
predict(samples[, results[, flags]]) -> retval, results
Predicts response(s) for the provided sample(s)
Positional Arguments
- samples: Evision.Mat. The input samples, floating-point matrix.
Keyword Arguments
- flags: int. The optional flags, model-dependent. See cv::ml::StatModel::Flags.
Return
- results: Evision.Mat. The optional output matrix of results.
Python prototype (for reference):
predict(samples[, results[, flags]]) -> retval, results
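A minimal sketch of predict under the prototype above; for KNearest the number of neighbors it uses is the value configured with setDefaultK (the data and k=5 are illustrative):

import cv2 as cv
import numpy as np

train = np.random.rand(50, 2).astype(np.float32)
labels = np.random.randint(0, 2, (50, 1)).astype(np.float32)

knn = cv.ml.KNearest_create()
knn.setDefaultK(5)                           # predict() votes among 5 neighbors
knn.train(train, cv.ml.ROW_SAMPLE, labels)

query = np.random.rand(3, 2).astype(np.float32)
retval, results = knn.predict(query)         # results is a (3, 1) float32 matrix of responses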
Positional Arguments
- val: int. Algorithm type, one of KNearest::Types.
@see getAlgorithmType
Python prototype (for reference):
setAlgorithmType(val) -> None
Positional Arguments
- val: int. Default number of neighbors to use in the predict method.
@see getDefaultK
Python prototype (for reference):
setDefaultK(val) -> None
Positional Arguments
- val: int. Parameter for the KDTree implementation.
@see getEmax
Python prototype (for reference):
setEmax(val) -> None
Positional Arguments
- val: bool. Whether a classification or regression model should be trained.
@see getIsClassifier
Python prototype (for reference):
setIsClassifier(val) -> None
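The setters above are normally applied to a freshly created model before training. A sketch, assuming OpenCV's Python constant naming (cv.ml.KNearest_BRUTE_FORCE) for KNearest::Types; the chosen values are illustrative:

import cv2 as cv

knn = cv.ml.KNearest_create()
knn.setAlgorithmType(cv.ml.KNearest_BRUTE_FORCE)  # one of KNearest::Types
knn.setDefaultK(5)                                # k used by predict()
knn.setIsClassifier(True)                         # vote instead of averaging responses
knn.setEmax(1000)                                 # only relevant for the KDTree implementation
print(knn.getDefaultK(), knn.getIsClassifier())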
Trains the statistical model
Positional Arguments
- trainData: Ptr<TrainData>. Training data that can be loaded from file using TrainData::loadFromCSV or created with TrainData::create.
Keyword Arguments
- flags: int. Optional flags, depending on the model. Some of the models can be updated with the new training samples, not completely overwritten (such as NormalBayesClassifier or ANN_MLP).
Python prototype (for reference):
train(trainData[, flags]) -> retval
Trains the statistical model
Positional Arguments
- trainData: Ptr<TrainData>. Training data that can be loaded from file using TrainData::loadFromCSV or created with TrainData::create.
Keyword Arguments
- flags: int. Optional flags, depending on the model. Some of the models can be updated with the new training samples, not completely overwritten (such as NormalBayesClassifier or ANN_MLP).
Python prototype (for reference):
train(trainData[, flags]) -> retval
Trains the statistical model
Positional Arguments
- samples: Evision.Mat. Training samples.
- layout: int. See ml::SampleTypes.
- responses: Evision.Mat. Vector of responses associated with the training samples.
Python prototype (for reference):
train(samples, layout, responses) -> retval
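Putting the pieces together, a sketch of the samples/layout/responses form of train described above; the two Gaussian blobs are synthetic, and ROW_SAMPLE indicates that each training sample occupies one row:

import cv2 as cv
import numpy as np

# Two toy classes: blobs centered at (0, 0) and (4, 4).
class0 = np.random.randn(30, 2).astype(np.float32)
class1 = (np.random.randn(30, 2) + 4.0).astype(np.float32)
samples = np.vstack([class0, class1])
responses = np.vstack([np.zeros((30, 1)), np.ones((30, 1))]).astype(np.float32)

knn = cv.ml.KNearest_create()
ok = knn.train(samples, cv.ml.ROW_SAMPLE, responses)   # layout = ROW_SAMPLE
print(ok, knn.getVarCount())                           # 2 variables per training sample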
Functions
Raising version of calcError/3.
Raising version of calcError/4.
Raising version of clear/1.
Raising version of create/0.
Raising version of empty/1.
Raising version of findNearest/3.
Raising version of findNearest/4.
Raising version of getAlgorithmType/1.
Raising version of getDefaultK/1.
Raising version of getDefaultName/1.
Raising version of getEmax/1.
Raising version of getIsClassifier/1.
Raising version of getVarCount/1.
Raising version of isClassifier/1.
Raising version of isTrained/1.
Raising version of load/1.
Raising version of predict/2.
Raising version of predict/3.
Raising version of read/2.
Raising version of save/2.
Raising version of setAlgorithmType/2.
Raising version of setDefaultK/2.
Raising version of setEmax/2.
Raising version of setIsClassifier/2.
Raising version of train/2.
Raising version of train/3.
Raising version of train/4.
Raising version of write/2.
Raising version of write/3.