OnnxInterp (onnx_interp v0.1.8)
ONNX runtime interpreter for Elixir. A deep learning inference framework.
Basic Usage
Get your trained ONNX model and save it in a directory that your application can read; "your-app/priv" may be a good choice.
$ cp your-trained-model.onnx ./priv
Next, you create a module that interfaces with the deep learning model. In addition to the inference step, the module will need pre-processing and post-processing, as in the example below; OnnxInterp provides the inference step only.
Put use OnnxInterp at the beginning of your module and specify the model path as an optional argument. In the inference section, you put data input to the model (OnnxInterp.set_input_tensor/3), inference execution (OnnxInterp.invoke/1), and inference result retrieval (OnnxInterp.get_output_tensor/2).
defmodule YourApp.YourModel do
  use OnnxInterp, model: "priv/your-trained-model.onnx"

  def predict(data) do
    # preprocess:
    # convert the data to be inferred into the input format of the model.
    input_bin = convert_to_float32_binary(data)

    # inference:
    # typical I/O data for ONNX models is a serialized 32-bit float tensor.
    output_bin =
      __MODULE__
      |> OnnxInterp.set_input_tensor(0, input_bin)
      |> OnnxInterp.invoke()
      |> OnnxInterp.get_output_tensor(0)

    # postprocess:
    # add your post-processing here.
    # you may need to reshape output_bin into a tensor first.
    tensor =
      output_bin
      |> Nx.from_binary({:f, 32})
      |> Nx.reshape({size_x, size_y, :auto})

    # ... your post-processing ...
  end
end
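With the module above in place, calling it could look like the sketch below; the input file path and the raw-bytes input format are assumptions for illustration only.

```elixir
# read raw input bytes for the model (hypothetical sample file)
data = File.read!("priv/sample-input.bin")

# run preprocess -> inference -> postprocess as defined in predict/1
result = YourApp.YourModel.predict(data)
```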
Summary
Functions
Adjust NMS result to aspect of the input image. (letterbox)
Get name of backend NN framework.
Ensure that the back-end framework is as expected.
Get the flat binary from the output tensor on the interpreter.
Get the property of the model.
Invoke prediction.
Execute post processing: nms.
Put a flat binary to the input tensor on the interpreter.
Stop the onnx-runtime interpreter.
Ensure that the model matches the back-end framework.
Functions
Adjust NMS result to aspect of the input image. (letterbox)
Parameters:
- nms_result - NMS result {:ok, result}
- [rx, ry] - aspect ratio of the input image
Get name of backend NN framework.
Ensure that the back-end framework is as expected.
Get the flat binary from the output tensor on the interpreter.
Parameters:
- mod - module name or session.
- index - index of output tensor in the model
Get the property of the model.
Parameters:
- mod - module name
Invoke prediction.
Two modes are toggled depending on the type of input data. One is the stateful mode, in which input/output data are stored as model states. The other mode is stateless, where input/output data is stored in a session structure assigned to the application.
Parameters:
- mod/session - module name (stateful) or session structure (stateless).
Examples
output_bin = session() # stateless mode
|> OnnxInterp.set_input_tensor(0, input_bin)
|> OnnxInterp.invoke()
|> OnnxInterp.get_output_tensor(0)
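For comparison, a stateful-mode sketch of the same pipeline, assuming YourApp.YourModel is a module set up with use OnnxInterp: the module holds the model state, so the module name is passed instead of a session structure.

```elixir
output_bin = YourApp.YourModel       # stateful mode: module name, not a session
  |> OnnxInterp.set_input_tensor(0, input_bin)
  |> OnnxInterp.invoke()
  |> OnnxInterp.get_output_tensor(0)
```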
non_max_suppression_multi_class(mod, arg, boxes, scores, opts \\ [])
Execute post processing: nms.
Parameters:
- mod - module name
- num_boxes - number of candidate boxes
- num_class - number of category class
- boxes - binaries, serialized box tensor [num_boxes][4]; dtype: float32
- scores - binaries, serialized score tensor [num_boxes][num_class]; dtype: float32
- opts
- iou_threshold: - IOU threshold
- score_threshold: - score cutoff threshold
- sigma: - soft IOU parameter
- boxrepr: - type of box representation
- :center - center pos and width/height
- :topleft - top-left pos and width/height
- :corner - top-left and bottom-right corner pos
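A hedged usage sketch follows. The signature above names the second argument only as arg; representing it here as a {num_boxes, num_class} tuple is an assumption based on the num_boxes/num_class parameters listed, as are the concrete numbers, which come from a typical 80-class detector.

```elixir
# boxes_bin: serialized [3549][4] float32 tensor (candidate boxes)
# scores_bin: serialized [3549][80] float32 tensor (per-class scores)
{:ok, result} =
  OnnxInterp.non_max_suppression_multi_class(
    __MODULE__,
    {3549, 80},            # assumption: arg = {num_boxes, num_class}
    boxes_bin,
    scores_bin,
    iou_threshold: 0.5,    # drop overlapping boxes above this IOU
    score_threshold: 0.25, # drop low-confidence boxes
    boxrepr: :center       # boxes given as center pos + width/height
  )
```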
Put a flat binary to the input tensor on the interpreter.
Parameters:
- mod - module name or session.
- index - index of input tensor in the model
- bin - input data - a flat binary, i.e. a serialized tensor
- opts - data conversion
Stop the onnx-runtime interpreter.
Parameters:
- mod - module name
Ensure that the model matches the back-end framework.