tflite_beam_interpreter (tflite_beam v0.3.4)

An interpreter for a graph of nodes that input and output from tensors.

Summary

Functions

Allocate memory for tensors in the graph

Return the execution plan of the model.

Get the name of the input tensor

Get the name of the output tensor

Get SignatureDef map from the Metadata of a TfLite FlatBuffer buffer.

Fill the specified input tensor with data

Get the list of input tensors.

Run inference (forward pass)
Create a new interpreter
Create a new interpreter from a model file path
Create a new interpreter from a model buffer
Return the number of ops in the model.

Get the data of the output tensor

Get the list of output tensors.

Fill the corresponding input tensors of the interpreter with the given data, call tflite_beam_interpreter:invoke/1, and return the output tensor(s).
Provide a list of tensor indexes that are inputs to the model. Each index is bounds-checked, and this call modifies the interpreter's consistent_ flag.

Set the number of threads available to the interpreter.

Provide a list of tensor indexes that are outputs to the model. Each index is bounds-checked, and this call modifies the interpreter's consistent_ flag.
Provide a list of tensor indexes that are variable tensors. Each index is bounds-checked, and this call modifies the interpreter's consistent_ flag.

Returns list of all keys of different method signatures defined in the model.

Get any tensor in the graph by its id

Return the number of tensors in the model.
Get the list of variable tensors.

Types

tflite_beam_tensor_type/0
-type tflite_beam_tensor_type() ::
    no_type |
    {f, 32} |
    {s, 32} |
    {u, 8} |
    {s, 64} |
    string | bool |
    {s, 16} |
    {c, 64} |
    {s, 8} |
    {f, 16} |
    {f, 64} |
    {c, 128} |
    {u, 64} |
    resource | variant |
    {u, 32}.

Functions

-spec allocate_tensors(reference()) -> ok | {error, binary()}.
Allocate memory for tensors in the graph
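A minimal sketch of the typical setup sequence. The model path "mobilenet_v2.tflite" is a placeholder for illustration, not a file shipped with the library:

```erlang
%% Build an interpreter from a model file, then allocate memory for all
%% tensors in the graph before filling inputs or invoking the model.
{ok, Interpreter} = tflite_beam_interpreter:new("mobilenet_v2.tflite"),
ok = tflite_beam_interpreter:allocate_tensors(Interpreter).
```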
-spec execution_plan(reference()) -> [non_neg_integer()] | {error, binary()}.

Return the execution plan of the model.

Experimental interface, subject to change.
get_input_name(Self, Index)
-spec get_input_name(reference(), non_neg_integer()) -> {ok, binary()} | {error, binary()}.

Get the name of the input tensor

Note that the index here is the index into the result list of inputs/1. For example, if inputs/1 returns [42, 314], then 0 should be passed here to get the name of tensor 42.
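For example, input names can be enumerated by list position. This is a sketch; Interpreter is assumed to be an already-built interpreter reference:

```erlang
%% List input tensor ids, then look up each input's name by its position
%% in that list (not by the tensor id itself).
{ok, InputIds} = tflite_beam_interpreter:inputs(Interpreter),
[begin
     {ok, Name} = tflite_beam_interpreter:get_input_name(Interpreter, Pos),
     {Pos, Name}
 end || Pos <- lists:seq(0, length(InputIds) - 1)].
```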
get_output_name(Self, Index)
-spec get_output_name(reference(), non_neg_integer()) -> {ok, binary()} | {error, binary()}.

Get the name of the output tensor

Note that the index here is the index into the result list of outputs/1. For example, if outputs/1 returns [42, 314], then 0 should be passed here to get the name of tensor 42.
get_signature_defs(Self)
-spec get_signature_defs(reference()) -> {ok, map()} | nil | {error, binary()}.
Get SignatureDef map from the Metadata of a TfLite FlatBuffer buffer.
input_tensor(Self, Index, Data)
-spec input_tensor(reference(), non_neg_integer(), binary()) -> ok | {error, binary()}.

Fill the specified input tensor with data

Note: although typed_input_tensor is available in C++, what is actually passed to the NIF here is plain binary data, so no type information is assumed.
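A sketch of filling an input with raw bytes, assuming input 0 is a {f, 32} tensor with four elements (so 16 bytes); the caller is responsible for matching the tensor's element type and byte order:

```erlang
%% Encode four 32-bit native-endian floats and write them to input 0.
Data = <<1.0:32/float-native, 2.0:32/float-native,
         3.0:32/float-native, 4.0:32/float-native>>,
ok = tflite_beam_interpreter:input_tensor(Interpreter, 0, Data).
```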
-spec inputs(reference()) -> {ok, [non_neg_integer()]} | {error, binary()}.

Get the list of input tensors.

Returns a list of input tensor ids.
-spec invoke(reference()) -> ok | {error, binary()}.
Run inference (forward pass).
-spec new() -> {ok, reference()} | {error, binary()}.
Create a new interpreter.
-spec new(list() | binary()) -> {ok, reference()} | {error, binary()}.
Create a new interpreter from a model file path.
-spec new_from_buffer(binary()) -> {ok, reference()} | {error, binary()}.
Create a new interpreter from a model buffer.
-spec nodes_size(reference()) -> non_neg_integer() | {error, binary()}.
Return the number of ops in the model.
output_tensor(Self, Index)
-spec output_tensor(reference(), non_neg_integer()) -> {ok, binary()} | {error, binary()}.

Get the data of the output tensor

Note that the index here is the index into the result list of outputs/1. For example, if outputs/1 returns [42, 314], then 0 should be passed here to get the data of tensor 42.
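A sketch of reading an output back after inference, assuming output 0 is a {f, 32} tensor:

```erlang
%% Run the forward pass, fetch output 0 as raw bytes, and decode the
%% binary into a list of native-endian 32-bit floats.
ok = tflite_beam_interpreter:invoke(Interpreter),
{ok, Bin} = tflite_beam_interpreter:output_tensor(Interpreter, 0),
Floats = [F || <<F:32/float-native>> <= Bin].
```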
-spec outputs(reference()) -> {ok, [non_neg_integer()]} | {error, binary()}.

Get the list of output tensors.

Returns a list of output tensor ids.
-spec predict(reference(), [binary()] | binary() | map()) ->
           [#tflite_beam_tensor{} | {error, binary()}] | #tflite_beam_tensor{} | {error, binary()}.
Fill the corresponding input tensors of the interpreter with the given data, call tflite_beam_interpreter:invoke/1, and return the output tensor(s).
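A sketch of the one-shot path; Interpreter and InputBinary are placeholders for an already-built interpreter reference and suitably encoded input data:

```erlang
%% predict/2 fills the inputs, runs invoke/1, and returns the output
%% tensor(s). Per the spec, the second argument may be a single binary,
%% a list of binaries (one per input tensor), or a map.
Output = tflite_beam_interpreter:predict(Interpreter, InputBinary).
```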
set_inputs(Self, Inputs)
-spec set_inputs(reference(), [integer()]) -> ok | {error, binary()}.
Provide a list of tensor indexes that are inputs to the model. Each index is bounds-checked, and this call modifies the interpreter's consistent_ flag.
set_num_threads(Self, NumThreads)
-spec set_num_threads(reference(), integer()) -> ok | {error, binary()}.

Set the number of threads available to the interpreter.

As the TfLite interpreter may internally apply a TfLite delegate by default (e.g., XNNPACK), the number of threads available to the default delegate should be set via the InterpreterBuilder APIs, as follows:

  {ok, Interpreter} = tflite_beam_interpreter:new(),
  {ok, Builder} = tflite_beam_interpreter_builder:new(Model, Resolver),
  tflite_beam_interpreter_builder:set_num_threads(Builder, NumThreads),
  tflite_beam_interpreter_builder:build(Builder, Interpreter)
set_outputs(Self, Outputs)
-spec set_outputs(reference(), [integer()]) -> ok | {error, binary()}.
Provide a list of tensor indexes that are outputs to the model. Each index is bounds-checked, and this call modifies the interpreter's consistent_ flag.
set_variables(Self, Variables)
-spec set_variables(reference(), [integer()]) -> ok | {error, binary()}.
Provide a list of tensor indexes that are variable tensors. Each index is bounds-checked, and this call modifies the interpreter's consistent_ flag.
-spec signature_keys(reference()) -> [binary()] | {error, binary()}.

Returns list of all keys of different method signatures defined in the model.

WARNING: Experimental interface, subject to change
tensor(Self, TensorIndex)
-spec tensor(reference(), non_neg_integer()) -> #tflite_beam_tensor{} | {error, binary()}.

Get any tensor in the graph by its id

Note that the tensor_index here means the id of a tensor. For example, if inputs/1 returns [42, 314], then 42 should be passed here to get tensor 42.
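A sketch of fetching a tensor record by id, reusing the tensor id 42 from the example above; Interpreter is assumed to be an already-built interpreter reference:

```erlang
%% tensor/2 takes the tensor id directly (not a position in inputs/1),
%% and returns a #tflite_beam_tensor{} record or {error, Reason}.
Tensor = tflite_beam_interpreter:tensor(Interpreter, 42).
```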
-spec tensors_size(reference()) -> non_neg_integer() | {error, binary()}.
Return the number of tensors in the model.
-spec variables(reference()) -> {ok, [non_neg_integer()]} | {error, binary()}.
Get the list of variable tensors.