TFLiteElixir.Interpreter (tflite_elixir v0.1.3)
Summary
Functions
Allocate memory for tensors in the graph
Raising version of allocateTensors/1.
Raising version of get_full_signature_list/1.
Get the name of the input tensor
Raising version of getInputName/2.
Get the name of the output tensor
Raising version of getOutputName/2.
Raising version of getSignatureDefs/1.
Fill data into the specified input tensor
Raising version of input_tensor/3.
Get the list of input tensors.
Raising version of inputs/1.
Run inference (forward pass)
Raising version of invoke/1.
New interpreter
New interpreter with model
Raising version of new/1.
Get the data of the output tensor
Raising version of output_tensor/2.
Get the list of output tensors.
Raising version of outputs/1.
Set the number of threads available to the interpreter.
Raising version of setNumThreads/2.
Get any tensor in the graph by its id
Raising version of tensor/2.
Types
Functions
Allocate memory for tensors in the graph
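A minimal usage sketch, assuming the interpreter was created from a model file (the path below is illustrative) and that the call returns :ok on success, as the raising variant allocateTensors!/1 suggests:
{:ok, interpreter} = TFLiteElixir.Interpreter.new("mobilenet_v2_1.0_224.tflite")
:ok = TFLiteElixir.Interpreter.allocateTensors(interpreter)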
Raising version of allocateTensors/1.
@spec get_full_signature_list(reference()) :: Map.t()
Raising version of get_full_signature_list/1.
@spec getInputName(reference(), non_neg_integer()) :: {:ok, String.t()} | nif_error()
Get the name of the input tensor.
Note that the index here means the index in the result list of inputs/1. For example, if inputs/1 returns [42, 314], then 0 should be passed here to get the name of tensor 42.
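For illustration, a hedged sketch reusing the ids above (the actual ids depend on the loaded model):
{:ok, [42, 314]} = TFLiteElixir.Interpreter.inputs(interpreter)
# index 0 refers to tensor 42, the first entry returned by inputs/1
{:ok, name} = TFLiteElixir.Interpreter.getInputName(interpreter, 0)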
Raising version of getInputName/2.
@spec getOutputName(reference(), non_neg_integer()) :: {:ok, String.t()} | nif_error()
Get the name of the output tensor.
Note that the index here means the index in the result list of outputs/1. For example, if outputs/1 returns [42, 314], then 0 should be passed here to get the name of tensor 42.
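For illustration, a hedged one-liner; index 0 refers to the first id returned by outputs/1:
{:ok, name} = TFLiteElixir.Interpreter.getOutputName(interpreter, 0)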
Raising version of getOutputName/2.
@spec getSignatureDefs(reference()) :: Map.t()
Raising version of getSignatureDefs/1.
@spec input_tensor(reference(), non_neg_integer(), binary()) :: :ok | nif_error()
Fill data into the specified input tensor
Note: although the C++ API exposes typed_input_tensor, what is actually passed to the NIF here is binary data, so no type information is assumed.
Example: Get the expected data type and shape for the input tensor
{:ok, tensor} = TFLiteElixir.Interpreter.tensor(interpreter, 0)
{:ok, [1, 224, 224, 3]} = TFLiteElixir.TFLiteTensor.dims(tensor)
{:u, 8} = TFLiteElixir.TFLiteTensor.type(tensor)
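Building on that shape, a hedged sketch of filling the tensor with a binary of matching size; the zero-filled binary is only a placeholder for real image bytes:
# 1 * 224 * 224 * 3 unsigned 8-bit values
input_data = :binary.copy(<<0>>, 1 * 224 * 224 * 3)
:ok = TFLiteElixir.Interpreter.input_tensor(interpreter, 0, input_data)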
Raising version of input_tensor/3.
@spec inputs(reference()) :: {:ok, [non_neg_integer()]} | nif_error()
Get the list of input tensors.
Returns a list of input tensor ids.
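For illustration (the returned ids are model-dependent):
{:ok, input_tensor_ids} = TFLiteElixir.Interpreter.inputs(interpreter)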
Raising version of inputs/1.
Run inference (forward pass)
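A hedged sketch of a complete inference step, assuming the input tensor has already been filled via input_tensor/3 and that invoke/1 returns :ok on success:
:ok = TFLiteElixir.Interpreter.invoke(interpreter)
{:ok, output_type, output_data} = TFLiteElixir.Interpreter.output_tensor(interpreter, 0)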
Raising version of invoke/1.
@spec new() :: nif_resource_ok() | nif_error()
New interpreter
@spec new(String.t()) :: nif_resource_ok() | nif_error()
New interpreter with model
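For illustration, assuming nif_resource_ok() is the usual {:ok, reference} tuple (the model path is hypothetical):
{:ok, interpreter} = TFLiteElixir.Interpreter.new()
{:ok, interpreter} = TFLiteElixir.Interpreter.new("mobilenet_v2_1.0_224.tflite")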
Raising version of new/0.
Raising version of new/1.
@spec output_tensor(reference(), non_neg_integer()) :: {:ok, tensor_type(), binary()} | nif_error()
Get the data of the output tensor.
Note that the index here means the index in the result list of outputs/1. For example, if outputs/1 returns [42, 314], then 0 should be passed here to get the data of tensor 42.
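For illustration, a hedged sketch reusing the ids above; the data comes back as a raw binary together with its type:
{:ok, [42, 314]} = TFLiteElixir.Interpreter.outputs(interpreter)
# index 0 refers to tensor 42
{:ok, type, data} = TFLiteElixir.Interpreter.output_tensor(interpreter, 0)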
Raising version of output_tensor/2.
@spec outputs(reference()) :: {:ok, [non_neg_integer()]} | nif_error()
Get the list of output tensors.
Returns a list of output tensor ids.
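For illustration (the returned ids are model-dependent):
{:ok, output_tensor_ids} = TFLiteElixir.Interpreter.outputs(interpreter)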
Raising version of outputs/1.
Set the number of threads available to the interpreter.
NOTE: num_threads should be >= -1. Setting num_threads to 0 has the effect of disabling multithreading, which is equivalent to setting num_threads to 1. If set to the value -1, the number of threads used will be implementation-defined and platform-dependent.
As the TfLite interpreter may internally apply a TfLite delegate by default (e.g. XNNPACK), the number of threads available to the default delegate should be set via the InterpreterBuilder APIs as follows:
interpreter = TFLiteElixir.Interpreter.new!()
builder = TFLiteElixir.InterpreterBuilder.new!(tflite_model, op_resolver)
TFLiteElixir.InterpreterBuilder.setNumThreads(builder, ...)
assert :ok == TFLiteElixir.InterpreterBuilder.build!(builder, interpreter)
Raising version of setNumThreads/2.
@spec tensor(reference(), non_neg_integer()) ::
        {:ok,
         %TFLiteElixir.TFLiteTensor{
           index: term(),
           name: term(),
           quantization_params: term(),
           reference: term(),
           shape: term(),
           shape_signature: term(),
           sparsity_params: term(),
           type: term()
         }}
        | nif_error()
Get any tensor in the graph by its id
Note that the tensor_index here means the id of a tensor. For example, if inputs/1 returns [42, 314], then 42 should be passed here to get tensor 42.
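For illustration, a hedged sketch; the struct fields follow the spec above:
{:ok, [42, 314]} = TFLiteElixir.Interpreter.inputs(interpreter)
{:ok, tflite_tensor} = TFLiteElixir.Interpreter.tensor(interpreter, 42)
%TFLiteElixir.TFLiteTensor{name: name, shape: shape, type: type} = tflite_tensor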
Raising version of tensor/2.