Evision.SIFT (Evision v0.1.8)

Summary

cv

  • compute(self, images, keypoints)

    Computes the descriptors for a set of keypoints detected in an image (first variant) or image set (second variant).

  • compute(self, images, keypoints, opts)

    Computes the descriptors for a set of keypoints detected in an image (first variant) or image set (second variant).

  • create()

  • create(opts)

  • create(nfeatures, nOctaveLayers, contrastThreshold, edgeThreshold, sigma, descriptorType)

    Create SIFT with specified descriptorType.

  • defaultNorm(self)

  • descriptorSize(self)

  • descriptorType(self)

  • detect(self, images)

    Detects keypoints in an image (first variant) or image set (second variant).

  • detect(self, images, opts)

    Detects keypoints in an image (first variant) or image set (second variant).

  • detectAndCompute(self, image, mask)

    Detects keypoints and computes the descriptors.

  • detectAndCompute(self, image, mask, opts)

    Detects keypoints and computes the descriptors.

  • empty(self)

  • getDefaultName(self)

  • read/2

  • write/2

  • write/3

cv

compute(self, images, keypoints)

Variant 1:

Positional Arguments
  • images: [Evision.Mat].

    Image set.

Return
  • keypoints: [vector_KeyPoint].

    Input collection of keypoints. Keypoints for which a descriptor cannot be computed are removed. Sometimes new keypoints can be added, for example: SIFT duplicates a keypoint that has several dominant orientations (one copy for each orientation).

  • descriptors: [Evision.Mat].

    Computed descriptors. In the second variant of the method, descriptors[i] are the descriptors computed for keypoints[i]. Row j of descriptors (or descriptors[i]) is the descriptor for the j-th keypoint.

Has overloading in C++

Python prototype (for reference):

compute(images, keypoints[, descriptors]) -> keypoints, descriptors

Variant 2:

Computes the descriptors for a set of keypoints detected in an image (first variant) or image set (second variant).

Positional Arguments
  • image: Evision.Mat.

    Image.

Return
  • keypoints: [KeyPoint].

    Input collection of keypoints. Keypoints for which a descriptor cannot be computed are removed. Sometimes new keypoints can be added, for example: SIFT duplicates a keypoint that has several dominant orientations (one copy for each orientation).

  • descriptors: Evision.Mat.

    Computed descriptors. In the second variant of the method, descriptors[i] are the descriptors computed for keypoints[i]. Row j of descriptors (or descriptors[i]) is the descriptor for the j-th keypoint.

Python prototype (for reference):

compute(image, keypoints[, descriptors]) -> keypoints, descriptors
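
As a rough usage sketch of the detect-then-compute flow in Elixir (the image path is a placeholder; Evision.imread!/1 and the exact shape of the returned values follow Evision's general conventions and are assumptions rather than something this page states):

    # Load an image, detect SIFT keypoints, then compute their descriptors.
    # `Evision.imread!/1` and the tuple returned by `compute!/3` are assumed here.
    img = Evision.imread!("scene.jpg")
    sift = Evision.SIFT.create!()

    keypoints = Evision.SIFT.detect!(sift, img)

    # compute/3 may drop keypoints it cannot describe, so use the keypoints it
    # returns (not the original list) together with the descriptor matrix.
    {keypoints, descriptors} = Evision.SIFT.compute!(sift, img, keypoints)
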
compute(self, images, keypoints, opts)

Variant 1:

Positional Arguments
  • images: [Evision.Mat].

    Image set.

Return
  • keypoints: [vector_KeyPoint].

    Input collection of keypoints. Keypoints for which a descriptor cannot be computed are removed. Sometimes new keypoints can be added, for example: SIFT duplicates a keypoint that has several dominant orientations (one copy for each orientation).

  • descriptors: [Evision.Mat].

    Computed descriptors. In the second variant of the method, descriptors[i] are the descriptors computed for keypoints[i]. Row j of descriptors (or descriptors[i]) is the descriptor for the j-th keypoint.

Has overloading in C++

Python prototype (for reference):

compute(images, keypoints[, descriptors]) -> keypoints, descriptors

Variant 2:

Computes the descriptors for a set of keypoints detected in an image (first variant) or image set (second variant).

Positional Arguments
  • image: Evision.Mat.

    Image.

Return
  • keypoints: [KeyPoint].

    Input collection of keypoints. Keypoints for which a descriptor cannot be computed are removed. Sometimes new keypoints can be added, for example: SIFT duplicates a keypoint that has several dominant orientations (one copy for each orientation).

  • descriptors: Evision.Mat.

    Computed descriptors. In the second variant of the method, descriptors[i] are the descriptors computed for keypoints[i]. Row j of descriptors (or descriptors[i]) is the descriptor for the j-th keypoint.

Python prototype (for reference):

compute(image, keypoints[, descriptors]) -> keypoints, descriptors

create()

Keyword Arguments

  • nfeatures: int.

    The number of best features to retain. The features are ranked by their scores (measured in the SIFT algorithm as the local contrast).

  • nOctaveLayers: int.

    The number of layers in each octave. 3 is the value used in D. Lowe's paper. The number of octaves is computed automatically from the image resolution.

  • contrastThreshold: double.

    The contrast threshold used to filter out weak features in semi-uniform (low-contrast) regions. The larger the threshold, the fewer features the detector produces.

  • edgeThreshold: double.

    The threshold used to filter out edge-like features. Note that its meaning is different from contrastThreshold: the larger the edgeThreshold, the fewer features are filtered out (more features are retained).

  • sigma: double.

    The sigma of the Gaussian applied to the input image at octave #0. If your image is captured with a weak camera with soft lenses, you might want to reduce this value.

Note: The contrast threshold will be divided by nOctaveLayers when the filtering is applied. When nOctaveLayers is set to its default and you want to use the value from D. Lowe's paper (0.03), set this argument to 0.09.

Python prototype (for reference):

create([, nfeatures[, nOctaveLayers[, contrastThreshold[, edgeThreshold[, sigma]]]]]) -> retval

create(opts)

Keyword Arguments

  • nfeatures: int.

    The number of best features to retain. The features are ranked by their scores (measured in the SIFT algorithm as the local contrast).

  • nOctaveLayers: int.

    The number of layers in each octave. 3 is the value used in D. Lowe's paper. The number of octaves is computed automatically from the image resolution.

  • contrastThreshold: double.

    The contrast threshold used to filter out weak features in semi-uniform (low-contrast) regions. The larger the threshold, the fewer features the detector produces.

  • edgeThreshold: double.

    The threshold used to filter out edge-like features. Note that its meaning is different from contrastThreshold: the larger the edgeThreshold, the fewer features are filtered out (more features are retained).

  • sigma: double.

    The sigma of the Gaussian applied to the input image at octave #0. If your image is captured with a weak camera with soft lenses, you might want to reduce this value.

Note: The contrast threshold will be divided by nOctaveLayers when the filtering is applied. When nOctaveLayers is set to its default and you want to use the value from D. Lowe's paper (0.03), set this argument to 0.09.

Python prototype (for reference):

create([, nfeatures[, nOctaveLayers[, contrastThreshold[, edgeThreshold[, sigma]]]]]) -> retval
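
A small sketch of constructing a tuned detector, assuming the keyword arguments above are passed to create/1 as an Elixir keyword list (this page lists the options but not the calling convention):

    # Keep at most 1000 features and use the stricter contrast threshold from the
    # note above (0.09 here corresponds to Lowe's 0.03 once divided by nOctaveLayers).
    sift = Evision.SIFT.create!(nfeatures: 1000, contrastThreshold: 0.09)
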
create(nfeatures, nOctaveLayers, contrastThreshold, edgeThreshold, sigma, descriptorType)

Create SIFT with specified descriptorType.

Positional Arguments
  • nfeatures: int.

    The number of best features to retain. The features are ranked by their scores (measured in the SIFT algorithm as the local contrast).

  • nOctaveLayers: int.

    The number of layers in each octave. 3 is the value used in D. Lowe's paper. The number of octaves is computed automatically from the image resolution.

  • contrastThreshold: double.

    The contrast threshold used to filter out weak features in semi-uniform (low-contrast) regions. The larger the threshold, the fewer features the detector produces.

  • edgeThreshold: double.

    The threshold used to filter out edge-like features. Note that its meaning is different from contrastThreshold: the larger the edgeThreshold, the fewer features are filtered out (more features are retained).

  • sigma: double.

    The sigma of the Gaussian applied to the input image at octave #0. If your image is captured with a weak camera with soft lenses, you might want to reduce this value.

  • descriptorType: int.

    The type of descriptors. Only CV_32F and CV_8U are supported.

Note: The contrast threshold will be divided by nOctaveLayers when the filtering is applied. When nOctaveLayers is set to its default and you want to use the value from D. Lowe's paper (0.03), set this argument to 0.09.

Python prototype (for reference):

create(nfeatures, nOctaveLayers, contrastThreshold, edgeThreshold, sigma, descriptorType) -> retval
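
With create/6 every parameter is positional, including descriptorType. A sketch using OpenCV's numeric type codes directly (CV_32F is 5, CV_8U is 0); whether this Evision version exposes named constants for them is not stated here, and the other values are OpenCV's commonly documented SIFT defaults:

    # nfeatures = 0 (keep all), 3 octave layers, contrast threshold 0.04,
    # edge threshold 10, sigma 1.6, and 32-bit float descriptors (CV_32F = 5).
    sift_f32 = Evision.SIFT.create!(0, 3, 0.04, 10, 1.6, 5)

    # Same detector but with 8-bit descriptors (CV_8U = 0) to save memory.
    sift_u8 = Evision.SIFT.create!(0, 3, 0.04, 10, 1.6, 0)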

defaultNorm(self)

Python prototype (for reference):

defaultNorm() -> retval

descriptorSize(self)

Python prototype (for reference):

descriptorSize() -> retval

descriptorType(self)

Python prototype (for reference):

descriptorType() -> retval
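
These introspection calls take only the detector itself. A brief sketch (return values are whatever the underlying OpenCV object reports; for SIFT the descriptor is 128-dimensional):

    sift = Evision.SIFT.create!()

    # Properties of the descriptors this detector will produce.
    Evision.SIFT.descriptorSize!(sift)   # descriptor length, 128 for SIFT
    Evision.SIFT.descriptorType!(sift)   # OpenCV type code of the descriptor matrix
    Evision.SIFT.defaultNorm!(sift)      # norm type recommended when matching these descriptors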

detect(self, images)

Variant 1:

Positional Arguments
  • images: [Evision.Mat].

    Image set.

Keyword Arguments
  • masks: [Evision.Mat].

    Masks for each input image specifying where to look for keypoints (optional). masks[i] is a mask for images[i].

Return
  • keypoints: [vector_KeyPoint].

    The detected keypoints. In the second variant of the method, keypoints[i] is a set of keypoints detected in images[i].

Has overloading in C++

Python prototype (for reference):

detect(images[, masks]) -> keypoints

Variant 2:

Detects keypoints in an image (first variant) or image set (second variant).

Positional Arguments
  • image: Evision.Mat.

    Image.

Keyword Arguments
  • mask: Evision.Mat.

    Mask specifying where to look for keypoints (optional). It must be an 8-bit integer matrix with non-zero values in the region of interest.

Return
  • keypoints: [KeyPoint].

    The detected keypoints. In the second variant of the method, keypoints[i] is a set of keypoints detected in images[i].

Python prototype (for reference):

detect(image[, mask]) -> keypoints
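
A sketch of the image-set variant (Variant 1), where a list of Evision.Mat inputs yields one keypoint collection per image; the file names are placeholders and Evision.imread!/1 is assumed:

    # Detect over several images at once; the result is expected to be a list
    # with one set of keypoints per input image.
    images = Enum.map(["left.png", "right.png"], &Evision.imread!/1)

    sift = Evision.SIFT.create!()
    keypoints_per_image = Evision.SIFT.detect!(sift, images)
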
detect(self, images, opts)

Variant 1:

Positional Arguments
  • images: [Evision.Mat].

    Image set.

Keyword Arguments
  • masks: [Evision.Mat].

    Masks for each input image specifying where to look for keypoints (optional). masks[i] is a mask for images[i].

Return
  • keypoints: [vector_KeyPoint].

    The detected keypoints. In the second variant of the method, keypoints[i] is a set of keypoints detected in images[i].

Has overloading in C++

Python prototype (for reference):

detect(images[, masks]) -> keypoints

Variant 2:

Detects keypoints in an image (first variant) or image set (second variant).

Positional Arguments
  • image: Evision.Mat.

    Image.

Keyword Arguments
  • mask: Evision.Mat.

    Mask specifying where to look for keypoints (optional). It must be an 8-bit integer matrix with non-zero values in the region of interest.

Return
  • keypoints: [KeyPoint].

    The detected keypoints. In the second variant of the method, keypoints[i] is a set of keypoints detected in images[i].

Python prototype (for reference):

detect(image[, mask]) -> keypoints
detectAndCompute(self, image, mask)

Detects keypoints and computes the descriptors.

Positional Arguments
  • image: Evision.Mat.
  • mask: Evision.Mat.

Keyword Arguments
  • useProvidedKeypoints: bool.

Return
  • keypoints: [KeyPoint].
  • descriptors: Evision.Mat.

Python prototype (for reference):

detectAndCompute(image, mask[, descriptors[, useProvidedKeypoints]]) -> keypoints, descriptors
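
A sketch of the one-call path. This page does not say how an "empty" mask is expressed in Elixir, so the example sidesteps the question by reusing a grayscale load of the same photo as the mask (an 8-bit single-channel matrix whose non-zero pixels mark the region of interest); the flags: 0 option to imread (IMREAD_GRAYSCALE) is likewise an assumption:

    img  = Evision.imread!("scene.jpg")
    # 8-bit single-channel mask; here simply the grayscale photo, so detection is
    # restricted to its non-black pixels. Loading with flags: 0 (IMREAD_GRAYSCALE)
    # is an assumption about imread's options.
    mask = Evision.imread!("scene.jpg", flags: 0)

    sift = Evision.SIFT.create!()

    # Detect keypoints and compute their descriptors in one call.
    {keypoints, descriptors} = Evision.SIFT.detectAndCompute!(sift, img, mask)
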
detectAndCompute(self, image, mask, opts)

Detects keypoints and computes the descriptors.

Positional Arguments
  • image: Evision.Mat.
  • mask: Evision.Mat.

Keyword Arguments
  • useProvidedKeypoints: bool.

Return
  • keypoints: [KeyPoint].
  • descriptors: Evision.Mat.

Python prototype (for reference):

detectAndCompute(image, mask[, descriptors[, useProvidedKeypoints]]) -> keypoints, descriptors

empty(self)

Python prototype (for reference):

empty() -> retval

getDefaultName(self)

Python prototype (for reference):

getDefaultName() -> retval

Variant 1:

Positional Arguments
  • arg1: FileNode

Python prototype (for reference):

read(arg1) -> None

Variant 2:

Positional Arguments
  • fileName: String

Python prototype (for reference):

read(fileName) -> None

Variant 1:

Positional Arguments
  • fs: Ptr<FileStorage>
Keyword Arguments
  • name: String

Python prototype (for reference):

write(fs[, name]) -> None

Variant 2:

Positional Arguments
  • fileName: String

Python prototype (for reference):

write(fileName) -> None
Positional Arguments
  • fs: Ptr<FileStorage>
Keyword Arguments
  • name: String

Python prototype (for reference):

write(fs[, name]) -> None
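
The fileName variants of write and read suggest persisting the detector's configuration to a YAML/XML file and restoring it later; a hedged sketch (passing a plain path string, and what the calls actually serialize for SIFT, is assumed rather than confirmed by this page):

    sift = Evision.SIFT.create!(nfeatures: 800)

    # Write the detector's state to a file (fileName variant of write/2), then
    # load it back into a freshly created instance (fileName variant of read/2).
    Evision.SIFT.write!(sift, "sift_params.yml")

    restored = Evision.SIFT.create!()
    Evision.SIFT.read!(restored, "sift_params.yml")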

Functions

compute!(self, images, keypoints)

Raising version of compute/3.
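
Each cv function above has a raising counterpart in this section. A sketch of the difference, assuming the non-raising flavor returns {:ok, result} or {:error, reason} tuples (the tuple shape is inferred from the naming convention, not stated on this page):

    img = Evision.imread!("scene.jpg")
    sift = Evision.SIFT.create!()

    # Raising flavor: returns the keypoints directly and raises on an OpenCV error.
    keypoints_a = Evision.SIFT.detect!(sift, img)

    # Non-raising flavor: assumed to return a tagged tuple for explicit handling.
    keypoints_b =
      case Evision.SIFT.detect(sift, img) do
        {:ok, keypoints} -> keypoints
        {:error, reason} -> raise "SIFT detection failed: #{inspect(reason)}"
      end

    IO.puts("detected #{length(keypoints_a)} / #{length(keypoints_b)} keypoints")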

compute!(self, images, keypoints, opts)

Raising version of compute/4.

Raising version of create/0.

Raising version of create/1.

create!(nfeatures, nOctaveLayers, contrastThreshold, edgeThreshold, sigma, descriptorType)

Raising version of create/6.

Raising version of defaultNorm/1.

Raising version of descriptorSize/1.

Raising version of descriptorType/1.

Raising version of detect/2.

detect!(self, images, opts)

Raising version of detect/3.

detectAndCompute!(self, image, mask)

Raising version of detectAndCompute/3.

detectAndCompute!(self, image, mask, opts)

Raising version of detectAndCompute/4.

Raising version of empty/1.

Raising version of getDefaultName/1.

Raising version of read/2.

Raising version of write/2.

Raising version of write/3.