Evision.DNN.Net (Evision v0.1.7)
Summary
Functions
Raising version of connect/3.
Connects output of the first layer to input of the second layer.
Raising version of dnn_Net/0.
Python prototype (for reference):
Raising version of dump/1.
Dump net to String
Raising version of dumpToFile/2.
Dump net structure, hyperparameters, backend, target and fusion to dot file
Raising version of empty/1.
Returns true if there are no layers in the network.
Raising version of enableFusion/2.
Enables or disables layer fusion in the network.
Raising version of forward/2.
Runs forward pass to compute outputs of layers listed in @p outBlobNames.
Raising version of forwardAndRetrieve/2.
Runs forward pass to compute outputs of layers listed in @p outBlobNames.
Raising version of forwardAsync/1.
Runs forward pass to compute output of layer with name @p outputName.
Raising version of getFLOPS/2.
Raising version of getFLOPS/3.
Variant 1:
Positional Arguments
- netInputShape: MatShape
Has overloading in C++
Variant 1:
Positional Arguments
- layerId: int
- netInputShape: MatShape
Has overloading in C++
Raising version of getInputDetails/1.
Returns input scale and zeropoint for a quantized Net.
Raising version of getLayer/2.
Raising version of getLayerId/2.
Converts string name of the layer to the integer identifier.
Raising version of getLayerNames/1.
Python prototype (for reference):
Raising version of getLayerShapes/2.
Positional Arguments
- netInputShapes: [MatShape]
- layerId: int
Return
- inLayerShapes: [MatShape]
- outLayerShapes: [MatShape]
Has overloading in C++
Raising version of getLayersCount/2.
Returns count of layers of specified type.
Raising version of getLayersShapes/2.
Positional Arguments
- netInputShape: MatShape
Return
- layersIds: [int]
- inLayersShapes: [vector_MatShape]
- outLayersShapes: [vector_MatShape]
Has overloading in C++
Raising version of getLayerTypes/1.
Returns the list of layer types used in the model.
Raising version of getMemoryConsumption/2.
Raising version of getMemoryConsumption/3.
Positional Arguments
- netInputShape: MatShape
Return
- weights: size_t
- blobs: size_t
Has overloading in C++
Variant 1:
Positional Arguments
- layerId: int
- netInputShape: MatShape
Return
- weights: size_t
- blobs: size_t
Has overloading in C++
Raising version of getOutputDetails/1.
Returns output scale and zeropoint for a quantized Net.
Raising version of getParam/2.
Raising version of getParam/3.
Variant 1:
Positional Arguments
- layerName: String
Keyword Arguments
- numParam: int
Python prototype (for reference):
Variant 1:
Positional Arguments
- layerName: String
Keyword Arguments
- numParam: int
Python prototype (for reference):
Raising version of getPerfProfile/1.
Returns overall time for inference and timings (in ticks) for layers.
Raising version of getUnconnectedOutLayers/1.
Returns indexes of layers with unconnected outputs.
Raising version of getUnconnectedOutLayersNames/1.
Returns names of layers with unconnected outputs.
Raising version of quantize/4.
Returns a quantized Net from a floating-point Net.
Raising version of readFromModelOptimizer/2.
Variant 1:
Create a network from Intel's Model Optimizer in-memory buffers with intermediate representation (IR).
Raising version of setHalideScheduler/2.
Compile Halide layers.
Raising version of setInput/2.
Raising version of setInput/3.
Sets the new input value for the network.
Sets the new input value for the network.
Raising version of setInputShape/3.
Specify shape of network input.
Raising version of setInputsNames/2.
Sets output names of the network input pseudo-layer.
Raising version of setParam/4.
Variant 1:
Positional Arguments
- layerName: String
- numParam: int
- blob: Evision.Mat
Python prototype (for reference):
Raising version of setPreferableBackend/2.
Asks the network to use a specific computation backend where supported.
Raising version of setPreferableTarget/2.
Asks the network to make computations on a specific target device.
Functions
Raising version of connect/3.
Connects output of the first layer to input of the second layer.
Positional Arguments
outPin: String
descriptor of the first layer output.
inpPin: String
descriptor of the second layer input.
Descriptors have the following template: <layer_name>[.input_number]
the first part of the template, layer_name, is the string name of the added layer. If this part is empty then the network input pseudo-layer will be used;
the second, optional part of the template, input_number, is either the number of the layer input or its label. If this part is omitted then the first layer input will be used.
@see setNetInputs(), Layer::inputNameToIndex(), Layer::outputNameToIndex()
Python prototype (for reference):
connect(outPin, inpPin) -> None
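As an illustrative Elixir sketch (the `!` raising variants mirror this page; the layer names "conv1" and "relu1" are hypothetical, chosen only to show the descriptor template):

```elixir
# Build an empty net and wire two layers together.
# "conv1" and "relu1" are hypothetical layer names; descriptors follow
# the <layer_name>[.input_number] template, and an empty layer_name
# would refer to the network input pseudo-layer.
net = Evision.DNN.Net.dnn_Net!()

# Connect the first output of "conv1" to the first input of "relu1".
Evision.DNN.Net.connect!(net, "conv1.0", "relu1.0")
```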
Raising version of dnn_Net/0.
Python prototype (for reference):
Net() -> <dnn_Net object>
Raising version of dump/1.
Dump net to String
@returns String with structure, hyperparameters, backend, target and fusion. Call this method after setInput(). To see the correct backend, target and fusion, run after forward().
Python prototype (for reference):
dump() -> retval
Raising version of dumpToFile/2.
Dump net structure, hyperparameters, backend, target and fusion to dot file
Positional Arguments
path: String
path to output file with .dot extension
@see dump()
Python prototype (for reference):
dumpToFile(path) -> None
Raising version of empty/1.
Returns true if there are no layers in the network.
Python prototype (for reference):
empty() -> retval
Raising version of enableFusion/2.
Enables or disables layer fusion in the network.
Positional Arguments
fusion: bool
true to enable the fusion, false to disable. The fusion is enabled by default.
Python prototype (for reference):
enableFusion(fusion) -> None
Raising version of forward/2.
Runs forward pass to compute outputs of layers listed in @p outBlobNames.
Positional Arguments
outBlobNames: [String]
names of the layers whose outputs are needed.
Return
outputBlobs: [Evision.Mat]
contains blobs for first outputs of specified layers.
Python prototype (for reference):
forward(outBlobNames[, outputBlobs]) -> outputBlobs
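A hedged end-to-end sketch of fetching multiple named outputs from Elixir. The model path, input/output layer names, and the readNetFromONNX binding under Evision.DNN are assumptions; only the Net functions documented on this page are taken as given:

```elixir
# Placeholder model path; readNetFromONNX is assumed to be bound.
net = Evision.DNN.readNetFromONNX!("model.onnx")

# blob is an Evision.Mat prepared elsewhere (e.g. via blobFromImage);
# "input", "output1" and "output2" are placeholder layer names.
Evision.DNN.Net.setInput!(net, blob, name: "input")

# One pass computes the outputs of all listed layers.
[out1, out2] = Evision.DNN.Net.forward!(net, ["output1", "output2"])
```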
Raising version of forwardAndRetrieve/2.
Runs forward pass to compute outputs of layers listed in @p outBlobNames.
Positional Arguments
outBlobNames: [String]
names of the layers whose outputs are needed.
Return
outputBlobs: [vector_Mat]
contains all output blobs for each layer specified in @p outBlobNames.
Python prototype (for reference):
forwardAndRetrieve(outBlobNames) -> outputBlobs
Raising version of forwardAsync/1.
Runs forward pass to compute output of layer with name @p outputName.
Keyword Arguments
outputName: String
name of the layer whose output is needed.
@details By default runs forward pass for the whole network. This is an asynchronous version of forward(const String&). dnn::DNN_BACKEND_INFERENCE_ENGINE backend is required.
Python prototype (for reference):
forwardAsync([, outputName]) -> retval
Raising version of getFLOPS/2.
Raising version of getFLOPS/3.
Variant 1:
Positional Arguments
- netInputShape: MatShape
Has overloading in C++
Python prototype (for reference):
getFLOPS(netInputShape) -> retval
Variant 2:
Computes FLOP for whole loaded model with specified input shapes.
Positional Arguments
netInputShapes: [MatShape]
vector of shapes for all net inputs.
@returns computed FLOP.
Python prototype (for reference):
getFLOPS(netInputShapes) -> retval
Variant 1:
Positional Arguments
- layerId: int
- netInputShape: MatShape
Has overloading in C++
Python prototype (for reference):
getFLOPS(layerId, netInputShape) -> retval
Variant 2:
Positional Arguments
- layerId: int
- netInputShapes: [MatShape]
Has overloading in C++
Python prototype (for reference):
getFLOPS(layerId, netInputShapes) -> retval
Raising version of getInputDetails/1.
Returns input scale and zeropoint for a quantized Net.
Return
scales: [float]
output parameter for returning input scales.
zeropoints: [int]
output parameter for returning input zeropoints.
Python prototype (for reference):
getInputDetails() -> scales, zeropoints
Raising version of getLayer/2.
Variant 1:
Positional Arguments
- layerName: String
Has overloading in C++
@deprecated Use int getLayerId(const String &layer)
Python prototype (for reference):
getLayer(layerName) -> retval
Variant 2:
Returns pointer to layer with specified id or name which the network uses.
Positional Arguments
- layerId: int
Python prototype (for reference):
getLayer(layerId) -> retval
Variant 3:
Positional Arguments
- layerId: LayerId
Has overloading in C++
@deprecated to be removed
Python prototype (for reference):
getLayer(layerId) -> retval
Raising version of getLayerId/2.
Converts string name of the layer to the integer identifier.
Positional Arguments
- layer: String
@returns id of the layer, or -1 if the layer wasn't found.
Python prototype (for reference):
getLayerId(layer) -> retval
Raising version of getLayerNames/1.
Python prototype (for reference):
getLayerNames() -> retval
Raising version of getLayerShapes/2.
Positional Arguments
- netInputShapes: [MatShape]
- layerId: int
Return
- inLayerShapes: [MatShape]
- outLayerShapes: [MatShape]
Has overloading in C++
Python prototype (for reference):
getLayerShapes(netInputShapes, layerId) -> inLayerShapes, outLayerShapes
Raising version of getLayersCount/2.
Returns count of layers of specified type.
Positional Arguments
layerType: String
type.
@returns count of layers
Python prototype (for reference):
getLayersCount(layerType) -> retval
Raising version of getLayersShapes/2.
Positional Arguments
- netInputShape: MatShape
Return
- layersIds: [int]
- inLayersShapes: [vector_MatShape]
- outLayersShapes: [vector_MatShape]
Has overloading in C++
Python prototype (for reference):
getLayersShapes(netInputShape) -> layersIds, inLayersShapes, outLayersShapes
Raising version of getLayerTypes/1.
Returns the list of layer types used in the model.
Return
layersTypes: [String]
output parameter for returning types.
Python prototype (for reference):
getLayerTypes() -> layersTypes
Raising version of getMemoryConsumption/2.
Raising version of getMemoryConsumption/3.
Positional Arguments
- netInputShape: MatShape
Return
- weights: size_t
- blobs: size_t
Has overloading in C++
Python prototype (for reference):
getMemoryConsumption(netInputShape) -> weights, blobs
Variant 1:
Positional Arguments
- layerId: int
- netInputShape: MatShape
Return
- weights: size_t
- blobs: size_t
Has overloading in C++
Python prototype (for reference):
getMemoryConsumption(layerId, netInputShape) -> weights, blobs
Variant 2:
Positional Arguments
- layerId: int
- netInputShapes: [MatShape]
Return
- weights: size_t
- blobs: size_t
Has overloading in C++
Python prototype (for reference):
getMemoryConsumption(layerId, netInputShapes) -> weights, blobs
Raising version of getOutputDetails/1.
Returns output scale and zeropoint for a quantized Net.
Return
scales: [float]
output parameter for returning output scales.
zeropoints: [int]
output parameter for returning output zeropoints.
Python prototype (for reference):
getOutputDetails() -> scales, zeropoints
Raising version of getParam/2.
Raising version of getParam/3.
Variant 1:
Positional Arguments
- layerName: String
Keyword Arguments
- numParam: int
Python prototype (for reference):
getParam(layerName[, numParam]) -> retval
Variant 2:
Returns parameter blob of the layer.
Positional Arguments
layer: int
name or id of the layer.
Keyword Arguments
numParam: int
index of the layer parameter in the Layer::blobs array.
@see Layer::blobs
Python prototype (for reference):
getParam(layer[, numParam]) -> retval
Variant 1:
Positional Arguments
- layerName: String
Keyword Arguments
- numParam: int
Python prototype (for reference):
getParam(layerName[, numParam]) -> retval
Variant 2:
Returns parameter blob of the layer.
Positional Arguments
layer: int
name or id of the layer.
Keyword Arguments
numParam: int
index of the layer parameter in the Layer::blobs array.
@see Layer::blobs
Python prototype (for reference):
getParam(layer[, numParam]) -> retval
Raising version of getPerfProfile/1.
Returns overall time for inference and timings (in ticks) for layers.
Return
timings: [double]
vector of tick timings for all layers.
Indexes in the returned vector correspond to layer ids. Some layers can be fused with others; in that case a zero tick count will be returned for those skipped layers. Supported by DNN_BACKEND_OPENCV on DNN_TARGET_CPU only. @return overall ticks for model inference.
Python prototype (for reference):
getPerfProfile() -> retval, timings
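A sketch of converting the returned tick counts to milliseconds. It assumes the net has already executed a forward pass and that cv::getTickFrequency is bound as Evision.getTickFrequency/0 (an assumption; this binding is not documented on this page):

```elixir
# getPerfProfile returns {overall_ticks, per_layer_ticks}; fused
# (skipped) layers report zero ticks.
{total_ticks, layer_ticks} = Evision.DNN.Net.getPerfProfile!(net)

# Assumed binding of cv::getTickFrequency (ticks per second).
ticks_per_ms = Evision.getTickFrequency!() / 1000.0
total_ms = total_ticks / ticks_per_ms
```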
Raising version of getUnconnectedOutLayers/1.
Returns indexes of layers with unconnected outputs.
FIXIT: Rework API to registerOutput() approach, deprecate this call
Python prototype (for reference):
getUnconnectedOutLayers() -> retval
Raising version of getUnconnectedOutLayersNames/1.
Returns names of layers with unconnected outputs.
FIXIT: Rework API to registerOutput() approach, deprecate this call
Python prototype (for reference):
getUnconnectedOutLayersNames() -> retval
Raising version of quantize/4.
Returns a quantized Net from a floating-point Net.
Positional Arguments
calibData: [Evision.Mat]
Calibration data to compute the quantization parameters.
inputsDtype: int
Datatype of quantized net's inputs. Can be CV_32F or CV_8S.
outputsDtype: int
Datatype of quantized net's outputs. Can be CV_32F or CV_8S.
Python prototype (for reference):
quantize(calibData, inputsDtype, outputsDtype) -> retval
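A hedged sketch of quantizing a loaded floating-point net. Here calib_blobs is a placeholder list of calibration Evision.Mat blobs, and since named depth constants may not be bound in this version, the raw OpenCV values CV_32F = 5 and CV_8S = 1 are written out directly:

```elixir
cv_32f = 5  # OpenCV depth constant CV_32F
cv_8s = 1   # OpenCV depth constant CV_8S

# calib_blobs: placeholder list of Evision.Mat calibration blobs
# prepared the same way as the net's normal inputs.
quantized_net = Evision.DNN.Net.quantize!(net, calib_blobs, cv_32f, cv_8s)
```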
Raising version of readFromModelOptimizer/2.
Variant 1:
Create a network from Intel's Model Optimizer in-memory buffers with intermediate representation (IR).
Positional Arguments
bufferModelConfig: [uchar]
buffer with model's configuration.
bufferWeights: [uchar]
buffer with model's trained weights.
@returns Net object.
Python prototype (for reference):
readFromModelOptimizer(bufferModelConfig, bufferWeights) -> retval
Variant 2:
Create a network from Intel's Model Optimizer intermediate representation (IR).
Positional Arguments
xml: String
XML configuration file with network's topology.
bin: String
Binary file with trained weights. Networks imported from Intel's Model Optimizer are launched in Intel's Inference Engine backend.
Python prototype (for reference):
readFromModelOptimizer(xml, bin) -> retval
Raising version of setHalideScheduler/2.
Compile Halide layers.
Positional Arguments
scheduler: String
Path to YAML file with scheduling directives.
@see setPreferableBackend. Schedule layers that support the Halide backend, then compile them for a specific target. For layers not represented in the scheduling file, or if no manual scheduling is used at all, automatic scheduling will be applied.
Python prototype (for reference):
setHalideScheduler(scheduler) -> None
Raising version of setInput/2.
Raising version of setInput/3.
Sets the new input value for the network.
Positional Arguments
blob: Evision.Mat
A new blob. Should have CV_32F or CV_8U depth.
Keyword Arguments
name: String
A name of input layer.
scalefactor: double
An optional normalization scale.
mean: Scalar
Optional mean subtraction values.
@see connect(String, String) to know the format of the descriptor. If scale or mean values are specified, a final input blob is computed as: input(n,c,h,w) = scalefactor * (blob(n,c,h,w) - mean_c)
Python prototype (for reference):
setInput(blob[, name[, scalefactor[, mean]]]) -> None
Sets the new input value for the network.
Positional Arguments
blob: Evision.Mat
A new blob. Should have CV_32F or CV_8U depth.
Keyword Arguments
name: String
A name of input layer.
scalefactor: double
An optional normalization scale.
mean: Scalar
Optional mean subtraction values.
@see connect(String, String) to know the format of the descriptor. If scale or mean values are specified, a final input blob is computed as: input(n,c,h,w) = scalefactor * (blob(n,c,h,w) - mean_c)
Python prototype (for reference):
setInput(blob[, name[, scalefactor[, mean]]]) -> None
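A sketch of the normalization formula in use. The image path, the input layer name "data", and the keyword-argument form of blobFromImage are assumptions for illustration:

```elixir
# Placeholder image path; blobFromImage is assumed to be bound under
# Evision.DNN with Python-style keyword options.
img = Evision.imread!("input.jpg")
blob = Evision.DNN.blobFromImage!(img, size: {224, 224}, swapRB: true)

# The final input becomes scalefactor * (blob - mean_c) per channel,
# so mean is given on the same 0..255 scale as the raw blob.
Evision.DNN.Net.setInput!(net, blob,
  name: "data",
  scalefactor: 1.0 / 255.0,
  mean: {123.675, 116.28, 103.53}
)
```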
Raising version of setInputShape/3.
Specify shape of network input.
Positional Arguments
- inputName: String
- shape: MatShape
Python prototype (for reference):
setInputShape(inputName, shape) -> None
Raising version of setInputsNames/2.
Sets output names of the network input pseudo-layer.
Positional Arguments
- inputBlobNames: [String]
Each net always has its own special network input pseudo-layer with id=0. This layer stores the user blobs only and doesn't make any computations. In fact, this layer provides the only way to pass user data into the network. As with any other layer, this layer can label its outputs, and this function provides an easy way to do this.
Python prototype (for reference):
setInputsNames(inputBlobNames) -> None
Raising version of setParam/4.
Variant 1:
Positional Arguments
- layerName: String
- numParam: int
- blob: Evision.Mat
Python prototype (for reference):
setParam(layerName, numParam, blob) -> None
Variant 2:
Sets the new value for the learned param of the layer.
Positional Arguments
layer: int
name or id of the layer.
numParam: int
index of the layer parameter in the Layer::blobs array.
blob: Evision.Mat
the new value.
@see Layer::blobs Note: If shape of the new blob differs from the previous shape, then the following forward pass may fail.
Python prototype (for reference):
setParam(layer, numParam, blob) -> None
Raising version of setPreferableBackend/2.
Asks the network to use a specific computation backend where supported.
Positional Arguments
backendId: int
backend identifier.
@see Backend. If OpenCV is compiled with Intel's Inference Engine library, DNN_BACKEND_DEFAULT means DNN_BACKEND_INFERENCE_ENGINE. Otherwise it equals DNN_BACKEND_OPENCV.
Python prototype (for reference):
setPreferableBackend(backendId) -> None
Raising version of setPreferableTarget/2.
Asks the network to make computations on a specific target device.
Positional Arguments
targetId: int
target identifier.
@see Target. List of supported backend/target combinations:

|                        | DNN_BACKEND_OPENCV | DNN_BACKEND_INFERENCE_ENGINE | DNN_BACKEND_HALIDE | DNN_BACKEND_CUDA |
|------------------------|--------------------|------------------------------|--------------------|------------------|
| DNN_TARGET_CPU         | +                  | +                            | +                  |                  |
| DNN_TARGET_OPENCL      | +                  | +                            | +                  |                  |
| DNN_TARGET_OPENCL_FP16 | +                  | +                            |                    |                  |
| DNN_TARGET_MYRIAD      |                    | +                            |                    |                  |
| DNN_TARGET_FPGA        |                    | +                            |                    |                  |
| DNN_TARGET_CUDA        |                    |                              |                    | +                |
| DNN_TARGET_CUDA_FP16   |                    |                              |                    | +                |
| DNN_TARGET_HDDL        |                    | +                            |                    |                  |
Python prototype (for reference):
setPreferableTarget(targetId) -> None
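Backend and target selection can be sketched as follows. Since named enum constants may not be bound in this version, the raw OpenCV enum values DNN_BACKEND_OPENCV = 3 and DNN_TARGET_CPU = 0 are used directly (both values taken from cv::dnn's Backend/Target enums):

```elixir
dnn_backend_opencv = 3  # cv::dnn::DNN_BACKEND_OPENCV
dnn_target_cpu = 0      # cv::dnn::DNN_TARGET_CPU

# net is an already-loaded Evision.DNN.Net reference.
Evision.DNN.Net.setPreferableBackend!(net, dnn_backend_opencv)
Evision.DNN.Net.setPreferableTarget!(net, dnn_target_cpu)
```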