TFLiteElixir.Interpreter (tflite_elixir v0.1.4)

An interpreter for a graph of nodes that input and output from tensors.

Summary

Functions

allocate_tensors/1 - Allocate memory for the tensors in the graph.

get_input_name/2 - Get the name of the input tensor.

get_output_name/2 - Get the name of the output tensor.

input_tensor/3 - Fill data into the specified input tensor.

inputs/1 - Get the list of input tensors.

inputs!/1 - Raising version of inputs/1.

invoke/1 - Run the inference (forward pass).

invoke!/1 - Raising version of invoke/1.

new/0 - New interpreter.

new/1 - New interpreter with a model.

new!/0 - Raising version of new/0.

new!/1 - Raising version of new/1.

output_tensor/2 - Get the data of the output tensor.

outputs/1 - Get the list of output tensors.

outputs!/1 - Raising version of outputs/1.

set_num_threads/2 - Set the number of threads available to the interpreter.

tensor/2 - Get any tensor in the graph by its id.

Types

@type nif_error() :: {:error, String.t()}
@type nif_resource_ok() :: {:ok, reference()}
@type tensor_type() ::
  :no_type
  | {:f, 32}
  | {:s, 32}
  | {:u, 8}
  | {:s, 64}
  | :string
  | :bool
  | {:s, 16}
  | {:c, 64}
  | {:s, 8}
  | {:f, 16}
  | {:f, 64}
  | {:c, 128}
  | {:u, 64}
  | :resource
  | :variant
  | {:u, 32}

Functions

@spec allocate_tensors(reference()) :: :ok | nif_error()

Allocate memory for the tensors in the graph.

Raising version of allocate_tensors/1.

get_full_signature_list(self)

@spec get_full_signature_list(reference()) :: Map.t()

get_full_signature_list!(self)

Raising version of get_full_signature_list/1.

get_input_name(self, index)
@spec get_input_name(reference(), non_neg_integer()) ::
  {:ok, String.t()} | nif_error()

Get the name of the input tensor.

Note that the index here is the position in the list returned by inputs/1. For example, if inputs/1 returns [42, 314], then 0 should be passed here to get the name of tensor 42.
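A minimal sketch of this index-to-id mapping, assuming an interpreter whose inputs/1 returns the hypothetical ids [42, 314]:

```elixir
{:ok, [42, 314]} = Interpreter.inputs(interpreter)
# 0 is the position in the list above, so this looks up the name of tensor 42
{:ok, name} = Interpreter.get_input_name(interpreter, 0)
```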

get_input_name!(self, index)

Raising version of get_input_name/2.

get_output_name(self, index)
@spec get_output_name(reference(), non_neg_integer()) ::
  {:ok, String.t()} | nif_error()

Get the name of the output tensor.

Note that the index here is the position in the list returned by outputs/1. For example, if outputs/1 returns [42, 314], then 0 should be passed here to get the name of tensor 42.

get_output_name!(self, index)

Raising version of get_output_name/2.

get_signature_defs(self)

@spec get_signature_defs(reference()) :: Map.t()

get_signature_defs!(self)

Raising version of get_signature_defs/1.

input_tensor(self, index, data)
@spec input_tensor(reference(), non_neg_integer(), binary()) :: :ok | nif_error()

Fill data into the specified input tensor.

Note: although typed_input_tensor exists on the C++ side, the data passed to the NIF is a plain binary, so no type information is assumed here.

Example: Get the expected data type and shape for the input tensor

{:ok, tensor} = Interpreter.tensor(interpreter, 0)
{:ok, [1, 224, 224, 3]} = TFLiteTensor.dims(tensor)
{:u, 8} = TFLiteTensor.type(tensor)
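Continuing the example above, a hedged sketch of filling that input tensor with zeroed bytes matching the queried [1, 224, 224, 3] shape and {:u, 8} type:

```elixir
# 1 * 224 * 224 * 3 bytes of {:u, 8} data, all zeros
data = :binary.copy(<<0>>, 1 * 224 * 224 * 3)
:ok = Interpreter.input_tensor(interpreter, 0, data)
```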
input_tensor!(self, index, data)

Raising version of input_tensor/3.

@spec inputs(reference()) :: {:ok, [non_neg_integer()]} | nif_error()

Get the list of input tensors.

Returns a list of input tensor ids.

Raising version of inputs/1.

@spec invoke(reference()) :: :ok | nif_error()

Run the inference (forward pass).

Raising version of invoke/1.
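Put together, a typical forward pass might look like the sketch below; the model path is hypothetical, new/1 is assumed to accept a model file path, and input_binary is assumed to be a binary matching the input tensor's shape and type:

```elixir
{:ok, interpreter} = Interpreter.new("mobilenet_v2.tflite")
:ok = Interpreter.allocate_tensors(interpreter)
:ok = Interpreter.input_tensor(interpreter, 0, input_binary)
:ok = Interpreter.invoke(interpreter)
{:ok, type, output_binary} = Interpreter.output_tensor(interpreter, 0)
```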

@spec new() :: nif_resource_ok() | nif_error()

New interpreter.

@spec new(String.t()) :: nif_resource_ok() | nif_error()

New interpreter with a model.

Raising version of new/0.

Raising version of new/1.

output_tensor(self, index)
@spec output_tensor(reference(), non_neg_integer()) ::
  {:ok, tensor_type(), binary()} | nif_error()

Get the data of the output tensor.

Note that the index here is the position in the list returned by outputs/1. For example, if outputs/1 returns [42, 314], then 0 should be passed here to get the data of tensor 42.

output_tensor!(self, index)

Raising version of output_tensor/2.

@spec outputs(reference()) :: {:ok, [non_neg_integer()]} | nif_error()

Get the list of output tensors.

Returns a list of output tensor ids.

Raising version of outputs/1.

predict(interpreter, input)

set_num_threads(self, num_threads)
@spec set_num_threads(reference(), integer()) :: :ok | nif_error()

Set the number of threads available to the interpreter.

NOTE: num_threads should be >= -1. Setting num_threads to 0 disables multithreading, which is equivalent to setting num_threads to 1. If set to -1, the number of threads used will be implementation-defined and platform-dependent.

As the TfLite interpreter may internally apply a TfLite delegate by default (i.e., XNNPACK), the number of threads available to the default delegate should be set via the InterpreterBuilder APIs as follows:

interpreter = Interpreter.new!()
builder = InterpreterBuilder.new!(model, resolver)
InterpreterBuilder.set_num_threads(builder, ...)
:ok = InterpreterBuilder.build!(builder, interpreter)
set_num_threads!(self, num_threads)

Raising version of set_num_threads/2.

tensor(self, tensor_index)
@spec tensor(reference(), non_neg_integer()) ::
  {:ok,
   %TFLiteElixir.TFLiteTensor{
     index: term(),
     name: term(),
     quantization_params: term(),
     reference: term(),
     shape: term(),
     shape_signature: term(),
     sparsity_params: term(),
     type: term()
   }}
  | nif_error()

Get any tensor in the graph by its id.

Note that the tensor_index here means the id of a tensor. For example, if inputs/1 returns [42, 314], then 42 should be passed here to get tensor 42.
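A sketch of the difference between a tensor id and a position, again assuming the hypothetical ids [42, 314]:

```elixir
{:ok, [42, 314]} = Interpreter.inputs(interpreter)
# tensor/2 takes the id itself, not its position in the list above
{:ok, tensor} = Interpreter.tensor(interpreter, 42)
```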

tensor!(self, tensor_index)

Raising version of tensor/2.