TFLiteElixir.Interpreter (tflite_elixir v0.1.2)

Summary

Functions

Allocate memory for tensors in the graph

Get the name of the input tensor

Get the name of the output tensor

Fill data into the specified input tensor

Get the list of input tensors.

Raising version of inputs/1.

Run inference (the forward pass)

Raising version of invoke/1.

New interpreter

New interpreter with model

Raising version of new/0.

Raising version of new/1.

Get the data of the output tensor

Get the list of output tensors.

Raising version of outputs/1.

Set the number of threads available to the interpreter.

Get any tensor in the graph by its id

Types

@type nif_error() :: {:error, String.t()}
@type nif_resource_ok() :: {:ok, reference()}
@type tensor_type() ::
  :no_type
  | {:f, 32}
  | {:s, 32}
  | {:u, 8}
  | {:s, 64}
  | :string
  | :bool
  | {:s, 16}
  | {:c, 64}
  | {:s, 8}
  | {:f, 16}
  | {:f, 64}
  | {:c, 128}
  | {:u, 64}
  | :resource
  | :variant
  | {:u, 32}

Functions

@spec allocateTensors(reference()) :: :ok | nif_error()

Allocate memory for tensors in the graph

Raising version of allocateTensors/1.

get_full_signature_list(self)

@spec get_full_signature_list(reference()) :: Map.t()

get_full_signature_list!(self)

Raising version of get_full_signature_list/1.

getInputName(self, index)
@spec getInputName(reference(), non_neg_integer()) :: {:ok, String.t()} | nif_error()

Get the name of the input tensor

Note that the index here means the index in the result list of inputs/1. For example, if inputs/1 returns [42, 314], then 0 should be passed here to get the name of tensor 42
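As a sketch of the index-vs-id distinction (the model path is a placeholder, not part of this API's docs):

```elixir
# Hypothetical model file; any .tflite model works here
{:ok, interpreter} = TFLiteElixir.Interpreter.new("model.tflite")

# inputs/1 returns tensor ids, e.g. {:ok, [42, 314]}
{:ok, input_ids} = TFLiteElixir.Interpreter.inputs(interpreter)

# Pass the position in `input_ids` (0), not the tensor id (42)
{:ok, name} = TFLiteElixir.Interpreter.getInputName(interpreter, 0)
```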

getInputName!(self, index)

Raising version of getInputName/2.

getOutputName(self, index)
@spec getOutputName(reference(), non_neg_integer()) :: {:ok, String.t()} | nif_error()

Get the name of the output tensor

Note that the index here means the index in the result list of outputs/1. For example, if outputs/1 returns [42, 314], then 0 should be passed here to get the name of tensor 42.

getOutputName!(self, index)

Raising version of getOutputName/2.

@spec getSignatureDefs(reference()) :: Map.t()

Raising version of getSignatureDefs/1.

input_tensor(self, index, data)
@spec input_tensor(reference(), non_neg_integer(), binary()) :: :ok | nif_error()

Fill data into the specified input tensor

Note: although typed_input_tensor is available on the C++ side, what is actually passed to the NIF here is binary data, so no type information is assumed for this call.


Example: Get the expected data type and shape for the input tensor

{:ok, tensor} = TFLiteElixir.Interpreter.tensor(interpreter, 0)
{:ok, [1, 224, 224, 3]} = TFLiteElixir.TFLiteTensor.dims(tensor)
{:u, 8} = TFLiteElixir.TFLiteTensor.type(tensor)
input_tensor!(self, index, data)

Raising version of input_tensor/3.

@spec inputs(reference()) :: {:ok, [non_neg_integer()]} | nif_error()

Get the list of input tensors.

Returns a list of input tensor ids.

Raising version of inputs/1.

@spec invoke(reference()) :: :ok | nif_error()

Run inference (the forward pass)

Raising version of invoke/1.
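Putting the calls in this module together, a minimal inference run might look like the following sketch (the model path and the `input_data` binary are placeholders; `input_data` must match the input tensor's byte size):

```elixir
{:ok, interpreter} = TFLiteElixir.Interpreter.new("model.tflite")
:ok = TFLiteElixir.Interpreter.allocateTensors(interpreter)

# Fill input 0 with raw binary data, then run the forward pass
:ok = TFLiteElixir.Interpreter.input_tensor(interpreter, 0, input_data)
:ok = TFLiteElixir.Interpreter.invoke(interpreter)

# Read back output 0 as {type, binary}
{:ok, type, output} = TFLiteElixir.Interpreter.output_tensor(interpreter, 0)
```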

@spec new() :: nif_resource_ok() | nif_error()

New interpreter

@spec new(String.t()) :: nif_resource_ok() | nif_error()

New interpreter with model

Raising version of new/0.

Raising version of new/1.

output_tensor(self, index)
@spec output_tensor(reference(), non_neg_integer()) ::
  {:ok, tensor_type(), binary()} | nif_error()

Get the data of the output tensor

Note that the index here means the index in the result list of outputs/1. For example, if outputs/1 returns [42, 314], then 0 should be passed here to get the name of tensor 42
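Since the result is a raw binary, the caller decodes it according to the returned type. For instance, assuming a quantized {:u, 8} classifier output, the scores can be extracted with :binary.bin_to_list/1 (a sketch):

```elixir
{:ok, {:u, 8}, output} = TFLiteElixir.Interpreter.output_tensor(interpreter, 0)

# Decode one unsigned byte per class score and find the best class index
scores = :binary.bin_to_list(output)

{_best_score, best_index} =
  scores
  |> Enum.with_index()
  |> Enum.max_by(fn {score, _index} -> score end)
```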

output_tensor!(self, index)

Raising version of output_tensor/2.

@spec outputs(reference()) :: {:ok, [non_neg_integer()]} | nif_error()

Get the list of output tensors.

Returns a list of output tensor ids.

Raising version of outputs/1.

predict(interpreter, input)

setNumThreads(self, num_threads)
@spec setNumThreads(reference(), integer()) :: :ok | nif_error()

Set the number of threads available to the interpreter.

NOTE: num_threads should be >= -1. Setting num_threads to 0 disables multithreading, which is equivalent to setting num_threads to 1. If set to -1, the number of threads used is implementation-defined and platform-dependent.

Since the TfLite interpreter may internally apply a TfLite delegate by default (e.g. XNNPACK), the number of threads available to the default delegate should be set via the InterpreterBuilder API, as follows:

interpreter = TFLiteElixir.Interpreter.new!()
builder = TFLiteElixir.InterpreterBuilder.new!(model, resolver)
TFLiteElixir.InterpreterBuilder.setNumThreads(builder, num_threads)
:ok = TFLiteElixir.InterpreterBuilder.build!(builder, interpreter)
setNumThreads!(self, num_threads)

Raising version of setNumThreads/2.

tensor(self, tensor_index)
@spec tensor(reference(), non_neg_integer()) ::
  {:ok,
   %TFLiteElixir.TFLiteTensor{
     index: term(),
     name: term(),
     quantization_params: term(),
     reference: term(),
     shape: term(),
     shape_signature: term(),
     sparsity_params: term(),
     type: term()
   }}
  | nif_error()

Get any tensor in the graph by its id

Note that the tensor_index here means the id of a tensor. For example, if inputs/1 returns [42, 314], then 42 should be passed here to get tensor 42.
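A sketch of fetching a tensor by id and pattern-matching the returned struct (the field values naturally depend on the loaded model):

```elixir
# Take the first input tensor id reported by inputs/1...
{:ok, [first_id | _]} = TFLiteElixir.Interpreter.inputs(interpreter)

# ...and fetch that tensor by its id, not by its position
{:ok, %TFLiteElixir.TFLiteTensor{name: name, shape: shape, type: type}} =
  TFLiteElixir.Interpreter.tensor(interpreter, first_id)
```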

tensor!(self, tensor_index)

Raising version of tensor/2.