Extensor v0.1.4 Extensor.Session

The session module provides functions for loading TensorFlow graphs into a session and executing operations within it. Graphs are represented as protocol buffers, and named tensors are used for all inputs and outputs.

There are two primary methods for serializing models in TensorFlow: frozen graph_defs and saved_models. A frozen graph_def is a protocol buffer containing a compute graph with no variables (all variables have been frozen into constants). A saved_model is a directory containing the metadata used by TensorFlow Serving (TFS) as well as the weights for graph variables. For more information on these formats, see the TensorFlow documentation.

This module can load either type of model: parse_frozen_graph/load_frozen_graph import a graph_def protocol buffer using TF_GraphImportGraphDef, and load_saved_model loads a saved_model using TF_LoadSessionFromSavedModel. Each function creates and returns a reference to a TensorFlow session that can then be used to run operations, such as model inference. A Tensorflow.ConfigProto struct can be passed to either function in order to configure the session (GPU options, etc.).
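For example, the two model formats can be loaded interchangeably; both calls below return a session reference (the model paths here are hypothetical, for illustration only):

```elixir
# load a frozen graph_def protocol buffer from a file
# (hypothetical path)
{:ok, session} = Extensor.Session.load_frozen_graph("priv/model.pb")

# or load a TFS-style saved_model directory, using the default
# "serve" tag (hypothetical path)
{:ok, session} = Extensor.Session.load_saved_model("priv/saved_model")
```

The bang variants (load_frozen_graph!/load_saved_model!) return the session directly and raise on failure.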

Once the session has been created, it can be executed any number of times using the run function. TensorFlow sessions are thread-safe and maintain graph state per call, so they can also be executed in parallel. The run function accepts a map of named input tensors and a list of output tensor names to evaluate.

Example (Pythagorean Triple):

iex> session = Extensor.Session.load_saved_model!("test/data/pythagoras")
iex> input = %{
...>   "a" => Extensor.Tensor.from_list([5]),
...>   "b" => Extensor.Tensor.from_list([12])
...> }
iex> output = Extensor.Session.run!(session, input, ["c"])
iex> Extensor.Tensor.to_list(output["c"])
[13.0]
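Since the session is thread-safe, concurrent inference against a single session can be sketched with Task.async_stream, reusing the same test model as above (the extra input triples are illustrative):

```elixir
# run the same session concurrently from multiple processes;
# each run call maintains its own graph state
session = Extensor.Session.load_saved_model!("test/data/pythagoras")

results =
  [{3, 4}, {5, 12}, {8, 15}]
  |> Task.async_stream(fn {a, b} ->
    input = %{
      "a" => Extensor.Tensor.from_list([a]),
      "b" => Extensor.Tensor.from_list([b])
    }

    output = Extensor.Session.run!(session, input, ["c"])
    Extensor.Tensor.to_list(output["c"])
  end)
  |> Enum.map(fn {:ok, result} -> result end)

# results => [[5.0], [13.0], [17.0]]
```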

See the Tensorflow.ConfigProto module documentation for more information on how to pass configuration when creating a new session. The TensorFlow protocol buffer modules were generated with the protobuf-elixir library.
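For instance, session placement and GPU memory behavior could be configured through the generated protobuf structs. This is a sketch: the `Tensorflow.GPUOptions` module name and its `allow_growth` field are assumed to follow the same protobuf-elixir naming as Tensorflow.ConfigProto.

```elixir
# let tensorflow fall back to CPU when no GPU kernel exists, and
# grow GPU memory allocation on demand instead of reserving it all
config = %Tensorflow.ConfigProto{
  allow_soft_placement: true,
  gpu_options: %Tensorflow.GPUOptions{allow_growth: true}
}

session = Extensor.Session.load_saved_model!("test/data/pythagoras", config)
```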

Summary

Functions

load_frozen_graph, load_frozen_graph!: load a graph_def from a file path

load_library, load_library!: load a custom op kernel library

load_saved_model, load_saved_model!: load a saved_model from a directory path

parse_frozen_graph, parse_frozen_graph!: load a graph_def from a binary string

run, run!: execute a tensorflow session

Types

t(): a reference to a tensorflow session

Functions

load_frozen_graph(path, config \\ %Tensorflow.ConfigProto{})
load_frozen_graph(
  path :: String.t(),
  config :: Tensorflow.ConfigProto.t()
) :: {:ok, t()} | {:error, any()}

loads a graph_def from a file path

load_frozen_graph!(path, config \\ %Tensorflow.ConfigProto{})
load_frozen_graph!(
  path :: String.t(),
  config :: Tensorflow.ConfigProto.t()
) :: t() | no_return()

loads a graph_def from a file path

load_library(name)
load_library(name :: String.t()) :: :ok | {:error, any()}

loads a custom op kernel library

load_library!(name)
load_library!(name :: String.t()) :: nil

loads a custom op kernel library

load_saved_model(path, config \\ %Tensorflow.ConfigProto{}, tag \\ "serve")
load_saved_model(
  path :: String.t(),
  config :: Tensorflow.ConfigProto.t(),
  tag :: String.t()
) :: {:ok, t()} | {:error, any()}

loads a saved_model from a directory path

load_saved_model!(path, config \\ %Tensorflow.ConfigProto{}, tag \\ "serve")
load_saved_model!(
  path :: String.t(),
  config :: Tensorflow.ConfigProto.t(),
  tag :: String.t()
) :: t() | no_return()

loads a saved_model from a directory path

parse_frozen_graph(graph_pb, config \\ %Tensorflow.ConfigProto{})
parse_frozen_graph(
  graph_pb :: binary(),
  config :: Tensorflow.ConfigProto.t()
) :: {:ok, t()} | {:error, any()}

loads a graph_def from a binary string

parse_frozen_graph!(graph_pb, config \\ %Tensorflow.ConfigProto{})
parse_frozen_graph!(
  graph_pb :: binary(),
  config :: Tensorflow.ConfigProto.t()
) :: t() | no_return()

loads a graph_def from a binary string

run(session, input_tensors, output_names)
run(
  session :: t(),
  input_tensors :: %{optional(String.t()) => Extensor.Tensor.t()},
  output_names :: [String.t(), ...]
) :: {:ok, %{optional(String.t()) => Extensor.Tensor.t()}} | {:error, any()}

executes a tensorflow session

run!(session, input_tensors, output_names)
run!(
  session :: t(),
  input_tensors :: %{optional(String.t()) => Extensor.Tensor.t()},
  output_names :: [String.t(), ...]
) :: %{optional(String.t()) => Extensor.Tensor.t()}

executes a tensorflow session