ExTorch (extorch v0.3.0)


The ExTorch namespace contains data structures for multi-dimensional tensors and defines mathematical operations over them. Additionally, it provides many utilities for the efficient serialization of tensors and arbitrary types, among other useful utilities.

It has a CUDA counterpart that enables you to run your tensor computations on an NVIDIA GPU with compute capability >= 3.0.

Summary

Per-process settings

Get the default device on which ExTorch.Tensor structs are allocated.

Get the current default floating point dtype for the current process.

Sets the default device on which ExTorch.Tensor structs are allocated.

Sets the default floating point dtype of the current process to dtype.

Tensor creation

Returns a 1-D tensor of size $\left\lceil \frac{\text{end} - \text{start}}{\text{step}} \right\rceil$ with values from the interval [start, end) taken with common difference step beginning from start.

Constructs a complex tensor with its real part equal to real and its imaginary part equal to imag.

Returns a tensor filled with uninitialized data. The shape of the tensor is defined by the tuple argument size.

Returns an uninitialized tensor, with the same size as input.

Returns a 2-D tensor with ones on the diagonal and zeros elsewhere.

Returns a tensor filled with the scalar value scalar, with the shape defined by the variable argument size.

Returns a tensor filled with the scalar value fill_value, with the same size as input.

Creates a one-dimensional tensor of size steps whose values are evenly spaced from start to end, inclusive.

Creates a one-dimensional tensor of size steps whose values are evenly spaced from $\text{base}^{\text{start}}$ to $\text{base}^{\text{end}}$, inclusive, on a logarithmic scale with base base.

Returns a tensor filled with the scalar value 1, with the shape defined by the variable argument size.

Returns a tensor filled with the scalar value 1, with the same size as input.

Constructs a complex tensor whose elements are Cartesian coordinates corresponding to the polar coordinates with absolute value abs and angle angle.

Returns a tensor filled with random numbers from a uniform distribution on the interval $[0, 1)$

Returns a tensor filled with random numbers from a uniform distribution on the interval $[0, 1)$, with the same size as input.

Returns a tensor filled with random integers generated uniformly between low (inclusive) and high (exclusive).

Returns a tensor filled with random integers generated uniformly between low (inclusive) and high (exclusive), with the same size as input.

Returns a tensor filled with random numbers from a normal distribution with mean 0 and variance 1 (also called the standard normal distribution).

Returns a tensor filled with random numbers from a normal distribution with mean 0 and variance 1 (also called the standard normal distribution), with the same size as input.

Constructs a tensor with data.

Returns a tensor filled with the scalar value 0, with the shape defined by the variable argument size.

Returns a tensor filled with the scalar value 0, with the same size as input.

Tensor manipulation

Returns a view of the tensor conjugated and with the last two dimensions transposed.

Concatenates the given sequence of tensors in the given dimension. All tensors must either have the same shape (except in the concatenating dimension) or be empty.

Attempts to split a tensor into the specified number of chunks. Each chunk is a view of the input tensor.

Creates a new tensor by horizontally stacking the tensors in tensors.

Alias for cat/1.

Returns a view of input with a flipped conjugate bit. If input has a non-complex dtype, this function just returns input.

Embeds the values of the src tensor into input along the diagonal elements of input, with respect to dim1 and dim2.

Splits input, a tensor with three or more dimensions, into multiple tensors depthwise according to indices_or_sections. Each split is a view of input.

Stack tensors in sequence depthwise (along third axis).

Gathers values along an axis specified by dim.

Splits input, a tensor with one or more dimensions, into multiple tensors horizontally according to indices_or_sections. Each split is a view of input.

Stack tensors in sequence horizontally (column wise).

Moves the dimension(s) of input at the position(s) in source to the position(s) in destination.

Returns a new tensor that is a narrowed version of input tensor.

Same as ExTorch.narrow/4 except this returns a copy rather than shared storage. This is primarily for sparse tensors, which do not have a shared-storage narrow method.

Retrieve the indices of all non-zero elements in a tensor.

Returns a view of the original tensor input with its dimensions permuted.

Returns a tensor with the same data and number of elements as input, but with the specified shape.

Writes all values from the tensor src into input at the indices specified in the index tensor. For each value in src, its output index is specified by its index in src for dimension != dim and by the corresponding value in index for dimension = dim.

Adds all values from the tensor src into input at the indices specified in the index tensor in a similar fashion to ExTorch.scatter/6.

Reduces all values from the src tensor to the indices specified in the index tensor in the input tensor using the applied reduction defined via the reduce argument (:sum, :prod, :mean, :amax, :amin). For each value in src, it is reduced to an index in input which is specified by its index in src for dimension != dim and by the corresponding value in index for dimension = dim. If include_self: true, the values in the input tensor are included in the reduction.

Embeds the values of the src tensor into input at the given index. This function returns a tensor with fresh storage; it does not create a view.

Embeds the values of the src tensor into input at the given dimension. This function returns a tensor with fresh storage; it does not create a view.

Splits the tensor into chunks. Each chunk is a view of the original tensor.

Returns a tensor with all specified dimensions of input of size 1 removed.

Concatenates a sequence of tensors along a new dimension.

Expects input to be <= 2-D tensor and transposes dimensions 0 and 1.

Returns a new tensor with the elements of input at the given indices.

Selects values from input at the 1-dimensional indices from indices along the given dim.

Splits a tensor into multiple sub-tensors, all of which are views of input, along dimension dim according to the indices or number of sections specified by indices_or_sections.

Returns a tensor that is a transposed version of input. The given dimensions dim0 and dim1 are swapped.

Inserts a dimension of size one into a tensor at the given position.

Stack tensors in sequence vertically (row wise).

Tensor indexing

Index a tensor given a list of integers, ranges, tensors, nil or :ellipsis.

Accumulate the elements of alpha times source into the input tensor by adding to the indices in the order given in index.

Copies the elements of source into the input tensor by selecting the indices in the order given in index.

Assign a value into a tensor given a single or a sequence of indices.

Accumulate the elements of source into the input tensor by accumulating to the indices in the order given in index, using the reduction given by the reduce argument.

Returns a new tensor which indexes the input tensor along dimension dim using the entries in index (whose dtype is :long).

Returns a new 1-D tensor which indexes the input tensor according to the boolean mask mask which has dtype :bool.

Slices the input tensor along the selected dimension at the given index. This function returns a view of the original tensor with the given dimension removed.

Create a slice to index a tensor.

Pointwise math operations

Adds other to input, scaled by alpha: $out = input + alpha \times other$.

Performs a batch matrix-matrix product of matrices stored in input and other.

Clamps all elements in input into the range [min, max].

Returns a deep copy of input. The returned tensor has the same data and type but does not share storage with the original.

Returns a contiguous in memory tensor containing the same data. If the tensor is already contiguous, returns itself.

Returns a new tensor detached from the current computation graph. The result will never require gradient.

Sums the product of the elements of the input tensors along dimensions specified using a notation based on the Einstein summation convention.

Returns a new view of the tensor with singleton dimensions expanded to a larger size. Pass -1 for dimensions you don't want to change.

Applies LogSoftmax along dim: $LogSoftmax(x_i) = \log\left(\frac{e^{x_i}}{\sum_j e^{x_j}}\right)$.

Applies the rectified linear unit function element-wise: $ReLU(x) = \max(0, x)$.

Applies the Softmax function along dim: $Softmax(x_i) = \frac{e^{x_i}}{\sum_j e^{x_j}}$.

Returns a new tensor containing imaginary values of the input tensor. The returned tensor and input share the same underlying storage.

Fills elements of input tensor with value where mask is true.

Matrix product of two tensors.

Performs a matrix multiplication of the matrices input and other.

Multiplies input by other element-wise: $out_i = input_i \times other_i$.

Returns the negative of input element-wise: $out = -input$.

Takes the power of each element by exponent: $out_i = input_i^{exponent}$.

Returns a new tensor containing real values of the input tensor. The returned tensor and input share the same underlying storage.

Subtracts other from input, scaled by alpha: $out = input - alpha \times other$.

Computes the absolute value of each element: $out_i = |input_i|$.

Returns a new tensor with the cosine: $out_i = \cos(input_i)$.

Divides input by other element-wise: $out_i = \frac{input_i}{other_i}$.

Returns a new tensor with the exponential: $out_i = e^{input_i}$.

Returns a new tensor with the natural logarithm: $out_i = \ln(input_i)$.

Returns a new tensor with the sine: $out_i = \sin(input_i)$.

Returns a new tensor with the square root: $out_i = \sqrt{input_i}$.

Returns a tensor of elements selected from either x or y, depending on condition.

Returns a new tensor with the same data but of a different shape.

Reduction operations

Check if all elements (or in a dimension) in input evaluate to true.

Returns the maximum value of each slice of the input tensor in the given dimension(s) dim.

Returns the minimum value of each slice of the input tensor in the given dimension(s) dim.

Computes the minimum and maximum values of the input tensor.

Check if at least one element (or element in a dimension) in input evaluates to true.

Returns the indices of the maximum value of all elements (or elements in a dimension) in the input tensor.

Returns the indices of the minimum value of all elements (or elements in a dimension) in the input tensor.

Counts the number of non-zero values in the tensor input along the given dim. If no dim is specified then all non-zeros in the tensor are counted.

Returns the p-norm of (input - other).

Returns the log of summed exponentials of each row of the input tensor in the given dimension dim. The computation is numerically stabilized.

Returns the maximum value of all elements (or elements in a dimension) in the input tensor.

Returns the mean value of all elements (or alongside an axis) in the input tensor.

Returns the median of the values in input.

Returns the minimum value of all elements (or elements in a dimension) in the input tensor.

Returns the mode of the values in input across dimension dim.

Computes the mean of all non-NaN elements along the specified dimensions.

Returns the median of the values in input, ignoring NaN values.

This is a variant of ExTorch.quantile/6 that “ignores” NaN values, computing the quantiles q as if NaN values in input did not exist. If all values in a reduced row are NaN then the quantiles for that reduction will be NaN. See the documentation for ExTorch.quantile/6.

Returns the sum of all elements (or alongside an axis) in the input tensor, treating NaNs as zeros.

Returns the product of all elements (or alongside an axis) in the input tensor.

Computes the q-th quantiles of each row of the input tensor along the dimension dim.

Calculates the standard deviation over the dimensions specified by dim.

Calculates the standard deviation and mean over the dimensions specified by dim.

Returns the sum of all elements (or alongside an axis) in the input tensor.

Returns the unique elements of the input tensor.

Eliminates all but the first element from every consecutive group of equivalent elements.

Calculates the variance over the dimensions specified by dim.

Calculates the variance and mean over the dimensions specified by dim.

Comparison operations

This function checks if input and other satisfy the condition: $$ |\text{input} - \text{other}| \leq \texttt{atol} + \texttt{rtol} \times |\text{other}| $$ elementwise, for all elements of input and other.

Returns the indices that sort a tensor along a given dimension in ascending order by value.

Computes element-wise equality.

Strict element-wise equality for two tensors.

Computes the element-wise maximum of input and other.

Computes the element-wise minimum of input and other.

Computes input >= other element-wise.

Computes input > other element-wise.

Returns a new tensor with boolean elements representing if each element of input is “close” to the corresponding element of other. Closeness is defined as $|\text{input} - \text{other}| \leq \texttt{atol} + \texttt{rtol} \times |\text{other}|$.

Returns a new tensor with boolean elements representing if each element is finite or not.

Tests if each element of elements is in test_elements. Returns a boolean tensor of the same shape as elements that is true for elements in test_elements and false otherwise.

Returns a new tensor with boolean elements representing if each element is infinity (both positive and negative).

Returns a new tensor with boolean elements representing if each element is :nan.

Returns a new tensor with boolean elements representing if each element is negative infinity.

Returns a new tensor with boolean elements representing if each element is positive infinity.

Returns a new tensor with boolean elements representing if each element is real valued.

Returns a tuple {values, indices} where values is the kth smallest element of each row of the input tensor in the given dimension dim, and indices is the index location of each element found.

Computes input <= other element-wise.

Computes input < other element-wise.

Computes the element-wise maximum of input and other.

Computes the element-wise minimum of input and other.

Sorts the elements of the input tensor along its first dimension in ascending order by value.

Computes input != other element-wise.

Sorts the elements of the input tensor along a given dimension in ascending order by value.

Returns the k largest elements of the given input tensor along a given dimension.

Other operations

Returns a new tensor with materialized conjugation if input’s conjugate bit is set to true, else returns input. The output tensor will always have its conjugate bit set to false.

Returns a view of input as a complex tensor.

Per-process settings

get_default_device()

@spec get_default_device() :: ExTorch.Device.device()

Get the default device on which ExTorch.Tensor structs are allocated.

Notes

By default, ExTorch will set :cpu as the default device.
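For instance, assuming the default has not been changed in the current process:

```elixir
# Query the default allocation device; :cpu unless changed via set_default_device/1
iex> ExTorch.get_default_device()
:cpu
```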

get_default_dtype()

@spec get_default_dtype() :: ExTorch.DType.dtype()

Get the current default floating point dtype for the current process.

Notes

By default, ExTorch will set the default dtype to :float32.
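For instance, assuming the default has not been changed in the current process:

```elixir
# Query the process-wide default floating point dtype; :float32 unless
# changed via set_default_dtype/1
iex> ExTorch.get_default_dtype()
:float32
```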

set_default_device(device)

@spec set_default_device(ExTorch.Device.device()) :: ExTorch.Device.device()

Sets the default device on which ExTorch.Tensor structs are allocated.

This function does not affect factory function calls which are called with an explicit device argument. Factory calls without an explicit device will be performed as if they had been passed device as an argument.

The default device is initially :cpu. If you set the default tensor device to another device (e.g., :cuda) without a device index, tensors will be allocated on whatever the current device for that device type is.

Examples

# By default, the device will be :cpu
iex> a = ExTorch.tensor([1.2, 3])
iex> a.device
:cpu

# Change the default device to :cuda
iex> ExTorch.set_default_device(:cuda)

# Check that tensors are now being created on gpu
iex> a = ExTorch.tensor([1.2, 3])
iex> a.device
{:cuda, 0}

set_default_dtype(dtype)

@spec set_default_dtype(ExTorch.DType.dtype()) :: ExTorch.DType.dtype()

Sets the default floating point dtype of the current process to dtype.

Supports :float32 and :float64 as inputs. Other dtypes may be accepted without complaint but are not supported and are unlikely to work as expected.

When ExTorch is initialized its default floating point dtype is :float32, and the intent of set_default_dtype(:float64) is to facilitate NumPy-like type inference. The default floating point dtype is used to:

  1. Implicitly determine the default complex dtype. When the default floating point type is :float32 the default complex dtype is :complex64, and when the default floating point type is :float64 the default complex type is :complex128.

  2. Infer the dtype for tensors constructed using Elixir floats or ExTorch.Complex numbers. See examples below.

  3. Determine the result of type promotion between bool and integer tensors and Elixir floats and ExTorch.Complex numbers.

Examples

# Initial default for floating point is :float32
iex> a = ExTorch.tensor([1.2, 3])
iex> a.dtype
:float

# Initial default for floating point complex numbers is :complex64
iex> b = ExTorch.tensor([1.2, ExTorch.Complex.complex(0, 3)])
iex> b.dtype
:complex_float

# Changing the default dtype to :float64
iex> ExTorch.set_default_dtype(:float64)

# Floats are now interpreted as float64
iex> a = ExTorch.tensor([1.2, 3])
iex> a.dtype
:double

# Complex numbers are now interpreted as :complex128
iex> b = ExTorch.tensor([1.2, ExTorch.Complex.complex(0, 3)])
iex> b.dtype
:complex_double

Tensor creation

arange(end_bound)

@spec arange(number()) :: ExTorch.Tensor.t()

See ExTorch.arange/4

Available signature calls:

  • arange(end_bound)

arange(end_bound, kwargs)

@spec arange(number(),
  step: number(),
  device: ExTorch.Device.device(),
  dtype: ExTorch.DType.dtype(),
  layout: ExTorch.Layout.layout(),
  memory_format: ExTorch.MemoryFormat.memory_format(),
  pin_memory: boolean(),
  requires_grad: boolean()
) :: ExTorch.Tensor.t()
@spec arange(
  number(),
  number()
) :: ExTorch.Tensor.t()

See ExTorch.arange/4

Available signature calls:

  • arange(start, end_bound)
  • arange(end_bound, kwargs)

arange(end_bound, step, opts)

@spec arange(
  number(),
  number(),
  ExTorch.Tensor.Options.t()
) :: ExTorch.Tensor.t()
@spec arange(
  number(),
  number(),
  step: number(),
  device: ExTorch.Device.device(),
  dtype: ExTorch.DType.dtype(),
  layout: ExTorch.Layout.layout(),
  memory_format: ExTorch.MemoryFormat.memory_format(),
  pin_memory: boolean(),
  requires_grad: boolean()
) :: ExTorch.Tensor.t()
@spec arange(
  number(),
  number(),
  number()
) :: ExTorch.Tensor.t()

See ExTorch.arange/4

Available signature calls:

  • arange(start, end_bound, step)
  • arange(start, end_bound, kwargs)
  • arange(end_bound, step, opts)

arange(start, end_bound, step, kwargs)

@spec arange(
  number(),
  number(),
  number(),
  device: ExTorch.Device.device(),
  dtype: ExTorch.DType.dtype(),
  layout: ExTorch.Layout.layout(),
  memory_format: ExTorch.MemoryFormat.memory_format(),
  pin_memory: boolean(),
  requires_grad: boolean()
) :: ExTorch.Tensor.t()
@spec arange(
  number(),
  number(),
  number(),
  ExTorch.Tensor.Options.t()
) :: ExTorch.Tensor.t()

Returns a 1-D tensor of size $\left\lceil \frac{\text{end} - \text{start}}{\text{step}} \right\rceil$ with values from the interval [start, end) taken with common difference step beginning from start.

Note that non-integer step is subject to floating point rounding errors when comparing against end; to avoid inconsistency, we advise adding a small epsilon to end in such cases.

$$ out_{i + 1} = out_i + step $$
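As a sketch of the epsilon advice above (the exact element counts depend on floating point rounding on your platform, so no outputs are shown):

```elixir
# 0.3 is not exactly representable in binary floating point, so the last
# accumulated value may land slightly below or slightly above the end bound,
# changing the length of the result
iex> a = ExTorch.arange(0, 0.9, 0.3)

# Adding a small epsilon to the end bound makes the boundary comparison
# unambiguous
iex> b = ExTorch.arange(0, 0.9 + 1.0e-6, 0.3)
```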

Arguments

  • start: the starting value for the set of points. Default: 0.
  • end: the ending value for the set of points.
  • step: the gap between each pair of adjacent points. Default: 1.

Keyword args

  • dtype (ExTorch.DType, optional): the desired data type of returned tensor. Default: if nil, uses a global default (see ExTorch.set_default_dtype).

  • layout (ExTorch.Layout, optional): the desired layout of returned Tensor. Default: :strided.

  • device (ExTorch.Device, optional): the desired device of returned tensor. Default: if nil, uses the current device for the default tensor type (see ExTorch.set_default_device). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

  • requires_grad (boolean(), optional): If autograd should record operations on the returned tensor. Default: false.

  • pin_memory (bool, optional): If true, the returned tensor will be allocated in pinned memory. Works only for CPU tensors. Default: false.

  • memory_format (ExTorch.MemoryFormat, optional): the desired memory format of returned Tensor. Default: :contiguous.

Examples

# Single argument, end only
iex> ExTorch.arange(5)
#Tensor<
[0., 1., 2., 3., 4.]
[
  size: {5},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

# End only with options
iex> ExTorch.arange(5, dtype: :uint8)
#Tensor<
[0, 1, 2, 3, 4]
[
  size: {5},
  dtype: :byte,
  device: :cpu,
  requires_grad: false
]>

# Start to end
iex> ExTorch.arange(1, 7)
#Tensor<
[1., 2., 3., 4., 5., 6.]
[
  size: {6},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

# Start to end with options
iex> ExTorch.arange(1, 7, device: :cuda, dtype: :float16)
#Tensor<
[1., 2., 3., 4., 5., 6.]
[
  size: {6},
  dtype: :half,
  device: {:cuda, 0},
  requires_grad: false
]>

# Start to end with step
iex> ExTorch.arange(-1.3, 2.4, 0.5)
#Tensor<
[-1.3000, -0.8000, -0.3000,  0.2000,  0.7000,  1.2000,  1.7000,  2.2000]
[
  size: {8},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>


# Start to end with step and options
iex> ExTorch.arange(-1.3, 2.4, 0.5, dtype: :float64)
#Tensor<
[-1.3000, -0.8000, -0.3000,  0.2000,  0.7000,  1.2000,  1.7000,  2.2000]
[
  size: {8},
  dtype: :double,
  device: :cpu,
  requires_grad: false
]>

complex(real, imag)

Constructs a complex tensor with its real part equal to real and its imaginary part equal to imag.

Arguments

  • real: the real part of the complex tensor (ExTorch.Tensor).
  • imag: the imaginary part of the complex tensor (ExTorch.Tensor). Must have the same dtype as real.

Notes

If both the inputs are :float32, the output will be :complex64. Similarly, if both the inputs are :float64, the output will be :complex128.

Examples

iex> real = ExTorch.arange(5)
iex> imag = ExTorch.arange(-5, 0)
iex> ExTorch.complex(real, imag)
#Tensor<
[0.-5.j, 1.-4.j, 2.-3.j, 3.-2.j, 4.-1.j]
[
  size: {5},
  dtype: :complex_float,
  device: :cpu,
  requires_grad: false
]>

empty(size)

@spec empty(tuple() | [integer()]) :: ExTorch.Tensor.t()

See ExTorch.empty/2

Available signature calls:

  • empty(size)

empty(size, kwargs)

@spec empty(tuple() | [integer()],
  device: ExTorch.Device.device(),
  dtype: ExTorch.DType.dtype(),
  layout: ExTorch.Layout.layout(),
  memory_format: ExTorch.MemoryFormat.memory_format(),
  pin_memory: boolean(),
  requires_grad: boolean()
) :: ExTorch.Tensor.t()
@spec empty(
  tuple() | [integer()],
  ExTorch.Tensor.Options.t()
) :: ExTorch.Tensor.t()

Returns a tensor filled with uninitialized data. The shape of the tensor is defined by the tuple argument size.

Arguments

  • size: a tuple/list of integers defining the shape of the output tensor.

Keyword args

  • dtype (ExTorch.DType, optional): the desired data type of returned tensor. Default: if nil, uses a global default (see ExTorch.set_default_dtype).

  • layout (ExTorch.Layout, optional): the desired layout of returned Tensor. Default: :strided.

  • device (ExTorch.Device, optional): the desired device of returned tensor. Default: if nil, uses the current device for the default tensor type (see ExTorch.set_default_device). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

  • requires_grad (boolean(), optional): If autograd should record operations on the returned tensor. Default: false.

  • pin_memory (bool, optional): If true, the returned tensor will be allocated in pinned memory. Works only for CPU tensors. Default: false.

  • memory_format (ExTorch.MemoryFormat, optional): the desired memory format of returned Tensor. Default: :contiguous.

Examples

iex> ExTorch.empty({2, 3})
#Tensor<
[[ 6.7262e-44,  0.0000e+00,  7.2868e-44],
 [ 0.0000e+00, -2.7524e+24,  4.5880e-41]]
[
  size: {2, 3},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

iex> ExTorch.empty({2, 3}, dtype: :int64, device: :cuda)
#Tensor<
[[0, 0, 0],
 [0, 0, 0]]
[
  size: {2, 3},
  dtype: :long,
  device: {:cuda, 0},
  requires_grad: false
]>

empty_like(input)

@spec empty_like(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()

See ExTorch.empty_like/2

Available signature calls:

  • empty_like(input)

empty_like(input, kwargs)

@spec empty_like(ExTorch.Tensor.t(),
  device: ExTorch.Device.device(),
  dtype: ExTorch.DType.dtype(),
  layout: ExTorch.Layout.layout(),
  memory_format: ExTorch.MemoryFormat.memory_format(),
  pin_memory: boolean(),
  requires_grad: boolean()
) :: ExTorch.Tensor.t()
@spec empty_like(ExTorch.Tensor.t(), ExTorch.Tensor.Options.t()) :: ExTorch.Tensor.t()

Returns an uninitialized tensor, with the same size as input.

ExTorch.empty_like(input) is equivalent to ExTorch.empty(input.size, dtype: input.dtype, layout: input.layout, device: input.device)

Arguments

  • input: The input tensor (ExTorch.Tensor)

Keyword args

  • dtype (ExTorch.DType, optional): the desired data type of returned tensor. Default: auto. If auto, it will use the same data type as the input. If nil, it will use a global default (see ExTorch.set_default_dtype).

  • layout (ExTorch.Layout, optional): the desired layout of returned Tensor. Default: nil. If nil, it will use the same layout as the input.

  • device (ExTorch.Device, optional): the desired device of returned tensor. Default: auto. If auto, it will use the same device as the input. If nil, it will use the current device for the default tensor type (see ExTorch.set_default_device). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

  • requires_grad (boolean(), optional): If autograd should record operations on the returned tensor. Default: false.

  • pin_memory (bool, optional): If true, the returned tensor will be allocated in pinned memory. Works only for CPU tensors. Default: false.

  • memory_format (ExTorch.MemoryFormat, optional): the desired memory format of returned Tensor. Default: :preserve. If preserve, it will use the same memory format as the input.

Examples

# Create an empty tensor from another
iex> a = ExTorch.empty({4, 5})
iex> ExTorch.empty_like(a)
#Tensor<
[[ 8.3624e+06,  4.5880e-41, -2.8874e+24,  4.5880e-41,  2.5223e-44],
 [ 0.0000e+00,  2.5223e-44,  0.0000e+00,  5.1482e+22,  1.6816e-43],
 [ 9.8511e-43,  0.0000e+00,  8.3624e+06,  4.5880e-41, -3.1780e+24],
 [ 4.5880e-41,  2.5223e-44,  0.0000e+00,  2.5223e-44,  0.0000e+00]]
[
  size: {4, 5},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

# Create an empty tensor in GPU from a CPU one
iex> a = ExTorch.empty({3, 3})
iex> ExTorch.empty_like(a, device: :cuda)
#Tensor<
[[0., 0., 0.],
 [0., 0., 0.],
 [0., 0., 0.]]
[
  size: {3, 3},
  dtype: :float,
  device: {:cuda, 0},
  requires_grad: false
]>

eye(n)

@spec eye(integer()) :: ExTorch.Tensor.t()

See ExTorch.eye/3

Available signature calls:

  • eye(n)

eye(n, kwargs)

@spec eye(integer(),
  m: integer(),
  device: ExTorch.Device.device(),
  dtype: ExTorch.DType.dtype(),
  layout: ExTorch.Layout.layout(),
  memory_format: ExTorch.MemoryFormat.memory_format(),
  pin_memory: boolean(),
  requires_grad: boolean()
) :: ExTorch.Tensor.t()
@spec eye(
  integer(),
  integer()
) :: ExTorch.Tensor.t()

See ExTorch.eye/3

Available signature calls:

  • eye(n, m)
  • eye(n, kwargs)

eye(n, m, kwargs)

@spec eye(
  integer(),
  integer(),
  device: ExTorch.Device.device(),
  dtype: ExTorch.DType.dtype(),
  layout: ExTorch.Layout.layout(),
  memory_format: ExTorch.MemoryFormat.memory_format(),
  pin_memory: boolean(),
  requires_grad: boolean()
) :: ExTorch.Tensor.t()
@spec eye(
  integer(),
  integer(),
  ExTorch.Tensor.Options.t()
) :: ExTorch.Tensor.t()

Returns a 2-D tensor with ones on the diagonal and zeros elsewhere.

Arguments

  • n: the number of rows
  • m: the number of columns

Keyword args

  • dtype (ExTorch.DType, optional): the desired data type of returned tensor. Default: if nil, uses a global default (see ExTorch.set_default_dtype).

  • layout (ExTorch.Layout, optional): the desired layout of returned Tensor. Default: :strided.

  • device (ExTorch.Device, optional): the desired device of returned tensor. Default: if nil, uses the current device for the default tensor type (see ExTorch.set_default_device). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

  • requires_grad (boolean(), optional): If autograd should record operations on the returned tensor. Default: false.

  • pin_memory (bool, optional): If true, the returned tensor will be allocated in pinned memory. Works only for CPU tensors. Default: false.

  • memory_format (ExTorch.MemoryFormat, optional): the desired memory format of returned Tensor. Default: :contiguous.

Examples

iex> ExTorch.eye(3)
#Tensor<
[[1., 0., 0.],
 [0., 1., 0.],
 [0., 0., 1.]]
[
  size: {3, 3},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

iex> ExTorch.eye(3, 3)
#Tensor<
[[1., 0., 0.],
 [0., 1., 0.],
 [0., 0., 1.]]
[
  size: {3, 3},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

iex> ExTorch.eye(4, 6, dtype: :uint8, device: :cuda)
#Tensor<
[[1, 0, 0, 0, 0, 0],
 [0, 1, 0, 0, 0, 0],
 [0, 0, 1, 0, 0, 0],
 [0, 0, 0, 1, 0, 0]]
[
  size: {4, 6},
  dtype: :byte,
  device: {:cuda, 0},
  requires_grad: false
]>

full(size, scalar)

@spec full(
  tuple() | [integer()],
  ExTorch.Scalar.t()
) :: ExTorch.Tensor.t()

See ExTorch.full/3

Available signature calls:

  • full(size, scalar)

full(size, scalar, kwargs)

@spec full(
  tuple() | [integer()],
  ExTorch.Scalar.t(),
  device: ExTorch.Device.device(),
  dtype: ExTorch.DType.dtype(),
  layout: ExTorch.Layout.layout(),
  memory_format: ExTorch.MemoryFormat.memory_format(),
  pin_memory: boolean(),
  requires_grad: boolean()
) :: ExTorch.Tensor.t()
@spec full(
  tuple() | [integer()],
  ExTorch.Scalar.t(),
  ExTorch.Tensor.Options.t()
) :: ExTorch.Tensor.t()

Returns a tensor filled with the scalar value scalar, with the shape defined by the variable argument size.

Arguments

  • size: a tuple/list of integers defining the shape of the output tensor.
  • scalar: the value to fill the output tensor with.

Keyword args

  • dtype (ExTorch.DType, optional): the desired data type of returned tensor. Default: if nil, uses a global default (see ExTorch.set_default_dtype).

  • layout (ExTorch.Layout, optional): the desired layout of returned Tensor. Default: :strided.

  • device (ExTorch.Device, optional): the desired device of returned tensor. Default: if nil, uses the current device for the default tensor type (see ExTorch.set_default_device). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

  • requires_grad (boolean(), optional): If autograd should record operations on the returned tensor. Default: false.

  • pin_memory (bool, optional): If true, the returned tensor will be allocated in pinned memory. Works only for CPU tensors. Default: false.

  • memory_format (ExTorch.MemoryFormat, optional): the desired memory format of returned Tensor. Default: :contiguous.

Examples

iex> ExTorch.full({2, 3}, 2)
#Tensor<
[[2., 2., 2.],
 [2., 2., 2.]]
[
  size: {2, 3},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

iex> ExTorch.full({2, 3}, 23, dtype: :uint8, device: :cuda)
#Tensor<
[[2, 2, 2],
 [2, 2, 2]]
[
  size: {2, 3},
  dtype: :byte,
  device: {:cuda, 0},
  requires_grad: false
]>

full_like(input, fill_value)

@spec full_like(
  ExTorch.Tensor.t(),
  ExTorch.Scalar.t()
) :: ExTorch.Tensor.t()

See ExTorch.full_like/3

Available signature calls:

  • full_like(input, fill_value)

full_like(input, fill_value, kwargs)

@spec full_like(
  ExTorch.Tensor.t(),
  ExTorch.Scalar.t(),
  device: ExTorch.Device.device(),
  dtype: ExTorch.DType.dtype(),
  layout: ExTorch.Layout.layout(),
  memory_format: ExTorch.MemoryFormat.memory_format(),
  pin_memory: boolean(),
  requires_grad: boolean()
) :: ExTorch.Tensor.t()
@spec full_like(
  ExTorch.Tensor.t(),
  ExTorch.Scalar.t(),
  ExTorch.Tensor.Options.t()
) :: ExTorch.Tensor.t()

Returns a tensor filled with the scalar value fill_value, with the same size as input.

ExTorch.full_like(input, fill_value) is equivalent to ExTorch.full(input.size, fill_value, dtype: input.dtype, layout: input.layout, device: input.device)

Arguments

  • input: The input tensor (ExTorch.Tensor)
  • fill_value: the value to fill the output tensor with.

Keyword args

  • dtype (ExTorch.DType, optional): the desired data type of returned tensor. Default: auto. If auto, it will use the same data type as the input. If nil, it will use a global default (see ExTorch.set_default_dtype).

  • layout (ExTorch.Layout, optional): the desired layout of returned Tensor. Default: nil. If nil, it will use the same layout as the input.

  • device (ExTorch.Device, optional): the desired device of returned tensor. Default: auto. If auto, it will use the same device as the input. If nil, it will use the current device for the default tensor type (see ExTorch.set_default_device). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

  • requires_grad (boolean(), optional): If autograd should record operations on the returned tensor. Default: false.

  • pin_memory (bool, optional): If set, the returned tensor will be allocated in pinned memory. Works only for CPU tensors. Default: false.

  • memory_format (ExTorch.MemoryFormat, optional): the desired memory format of returned Tensor. Default: :preserve. If preserve, it will use the same memory format as the input.

Examples

# Create a tensor filled with -1 from an int64 input.
iex> a = ExTorch.empty({1, 2, 2}, dtype: :int64)
iex> ExTorch.full_like(a, -1)
#Tensor<
[[[-1, -1],
  [-1, -1]]]
[
  size: {1, 2, 2},
  dtype: :long,
  device: :cpu,
  requires_grad: false
]>

# Create a CUDA complex tensor filled with a given value from a CPU input.
iex> b = ExTorch.ones({3, 3}, dtype: :complex128)
iex> ExTorch.full_like(b, ExTorch.Complex.complex(0.8, -0.5), device: :cuda)
#Tensor<
[[0.8000-0.5000j, 0.8000-0.5000j, 0.8000-0.5000j],
 [0.8000-0.5000j, 0.8000-0.5000j, 0.8000-0.5000j],
 [0.8000-0.5000j, 0.8000-0.5000j, 0.8000-0.5000j]]
[
  size: {3, 3},
  dtype: :complex_double,
  device: {:cuda, 0},
  requires_grad: false
]>

linspace(start, end_bound, steps)

@spec linspace(
  number(),
  number(),
  integer()
) :: ExTorch.Tensor.t()

See ExTorch.linspace/4

Available signature calls:

  • linspace(start, end_bound, steps)

linspace(start, end_bound, steps, kwargs)

@spec linspace(
  number(),
  number(),
  integer(),
  device: ExTorch.Device.device(),
  dtype: ExTorch.DType.dtype(),
  layout: ExTorch.Layout.layout(),
  memory_format: ExTorch.MemoryFormat.memory_format(),
  pin_memory: boolean(),
  requires_grad: boolean()
) :: ExTorch.Tensor.t()
@spec linspace(
  number(),
  number(),
  integer(),
  ExTorch.Tensor.Options.t()
) :: ExTorch.Tensor.t()

Creates a one-dimensional tensor of size steps whose values are evenly spaced from start to end, inclusive. That is, the values are:

$$ (\text{start}, \text{start} + \frac{\text{end} - \text{start}}{\text{steps} - 1}, \ldots, \text{start} + (\text{steps} - 2) * \frac{\text{end} - \text{start}}{\text{steps} - 1}, \text{end}) $$

Arguments

  • start: the starting value for the set of points.
  • end: the ending value for the set of points.
  • steps: size of the constructed tensor.

Keyword args

  • dtype (ExTorch.DType, optional): the desired data type of returned tensor. Default: if nil, uses a global default (see ExTorch.set_default_dtype).

  • layout (ExTorch.Layout, optional): the desired layout of returned Tensor. Default: :strided.

  • device (ExTorch.Device, optional): the desired device of returned tensor. Default: if nil, uses the current device for the default tensor type (see ExTorch.set_default_device). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

  • requires_grad (boolean(), optional): If autograd should record operations on the returned tensor. Default: false.

  • pin_memory (bool, optional): If set, the returned tensor will be allocated in pinned memory. Works only for CPU tensors. Default: false.

  • memory_format (ExTorch.MemoryFormat, optional): the desired memory format of returned Tensor. Default: :contiguous.

Examples

# Returns a tensor with 10 evenly-spaced values between -2 and 10
iex> ExTorch.linspace(-2, 10, 10)
#Tensor<
[-2.0000, -0.6667,  0.6667,  2.0000,  3.3333,  4.6667,  6.0000,  7.3333,
  8.6667, 10.0000]
[
  size: {10},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

# Returns a tensor with 10 evenly-spaced values between -2 and 10, cast to int32
iex> ExTorch.linspace(-2, 10, 10, dtype: :int32)
#Tensor<
[-2,  0,  0,  1,  3,  4,  6,  7,  8, 10]
[
  size: {10},
  dtype: :int,
  device: :cpu,
  requires_grad: false
]>
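As a cross-check of the closed form above, a small pure-Python sketch (stdlib only, not an ExTorch call) reproduces the values of the first example:

```python
# Hypothetical pure-Python mirror of the linspace formula; not ExTorch's implementation.
def linspace(start, stop, steps):
    """Evenly spaced values from start to stop, inclusive (requires steps >= 2)."""
    step = (stop - start) / (steps - 1)
    return [start + i * step for i in range(steps)]

# Matches ExTorch.linspace(-2, 10, 10): endpoints are exact, spacing is
# (end - start) / (steps - 1).
print([round(v, 4) for v in linspace(-2, 10, 10)])
```

Note that the divisor is `steps - 1`, which is what makes both endpoints land exactly on `start` and `end`.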

logspace(start, end_bound, steps)

@spec logspace(
  number(),
  number(),
  integer()
) :: ExTorch.Tensor.t()

See ExTorch.logspace/5

Available signature calls:

  • logspace(start, end_bound, steps)

logspace(start, end_bound, steps, base)

@spec logspace(
  number(),
  number(),
  integer(),
  number()
) :: ExTorch.Tensor.t()
@spec logspace(
  number(),
  number(),
  integer(),
  base: number(),
  device: ExTorch.Device.device(),
  dtype: ExTorch.DType.dtype(),
  layout: ExTorch.Layout.layout(),
  memory_format: ExTorch.MemoryFormat.memory_format(),
  pin_memory: boolean(),
  requires_grad: boolean()
) :: ExTorch.Tensor.t()

See ExTorch.logspace/5

Available signature calls:

  • logspace(start, end_bound, steps, kwargs)
  • logspace(start, end_bound, steps, base)

logspace(start, end_bound, steps, base, kwargs)

@spec logspace(
  number(),
  number(),
  integer(),
  number(),
  device: ExTorch.Device.device(),
  dtype: ExTorch.DType.dtype(),
  layout: ExTorch.Layout.layout(),
  memory_format: ExTorch.MemoryFormat.memory_format(),
  pin_memory: boolean(),
  requires_grad: boolean()
) :: ExTorch.Tensor.t()
@spec logspace(
  number(),
  number(),
  integer(),
  number(),
  ExTorch.Tensor.Options.t()
) :: ExTorch.Tensor.t()

Creates a one-dimensional tensor of size steps whose values are evenly spaced from $\text{base}^{\text{start}}$ to $\text{base}^{\text{end}}$, inclusive, on a logarithmic scale with base base. That is, the values are:

$$ (\text{base}^{\text{start}}, \text{base}^{(\text{start} + \frac{\text{end} - \text{start}}{ \text{steps} - 1})}, \ldots, \text{base}^{(\text{start} + (\text{steps} - 2) * \frac{\text{end} - \text{start}}{ \text{steps} - 1})}, \text{base}^{\text{end}}) $$

Arguments

  • start: the starting value for the set of points.
  • end: the ending value for the set of points.
  • steps: size of the constructed tensor.
  • base: base of the logarithm function. Default: 10.0.

Keyword args

  • dtype (ExTorch.DType, optional): the desired data type of returned tensor. Default: if nil, uses a global default (see ExTorch.set_default_dtype).

  • layout (ExTorch.Layout, optional): the desired layout of returned Tensor. Default: :strided.

  • device (ExTorch.Device, optional): the desired device of returned tensor. Default: if nil, uses the current device for the default tensor type (see ExTorch.set_default_device). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

  • requires_grad (boolean(), optional): If autograd should record operations on the returned tensor. Default: false.

  • pin_memory (bool, optional): If set, the returned tensor will be allocated in pinned memory. Works only for CPU tensors. Default: false.

  • memory_format (ExTorch.MemoryFormat, optional): the desired memory format of returned Tensor. Default: :contiguous.

Examples

# Returns a tensor containing five logarithmic-spaced values between -10 and 10
iex> ExTorch.logspace(-10, 10, 5)
#Tensor<
[1.0000e-10, 1.0000e-05, 1.0000e+00, 1.0000e+05, 1.0000e+10]
[
  size: {5},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

# Returns a tensor containing five logarithmic-spaced values between 0.1 and 1.0
iex> ExTorch.logspace(0.1, 1.0, 5)
#Tensor<
[ 1.2589,  2.1135,  3.5481,  5.9566, 10.0000]
[
  size: {5},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

# Returns a tensor containing three logarithmic-spaced (base 2) values between 0.1 and 1.0
iex> ExTorch.logspace(0.1, 1.0, 3, base: 2)
#Tensor<
[1.0718, 1.4641, 2.0000]
[
  size: {3},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

# Returns a float64 tensor containing three logarithmic-spaced (base 2) values between 0.1 and 1.0
iex> ExTorch.logspace(0.1, 1.0, 3, base: 2, dtype: :float64)
#Tensor<
[1.0718, 1.4641, 2.0000]
[
  size: {3},
  dtype: :double,
  device: :cpu,
  requires_grad: false
]>
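The formula above is simply base raised to linearly spaced exponents, which a stdlib-only Python sketch (not an ExTorch call) makes explicit; it reproduces the base-2 example:

```python
# Hypothetical pure-Python mirror of the logspace formula; not ExTorch's implementation.
def logspace(start, stop, steps, base=10.0):
    """base ** v for v evenly spaced from start to stop, inclusive."""
    step = (stop - start) / (steps - 1)
    return [base ** (start + i * step) for i in range(steps)]

# Matches ExTorch.logspace(0.1, 1.0, 3, base: 2)
print([round(v, 4) for v in logspace(0.1, 1.0, 3, base=2)])
```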

ones(size)

@spec ones(tuple() | [integer()]) :: ExTorch.Tensor.t()

See ExTorch.ones/2

Available signature calls:

  • ones(size)

ones(size, kwargs)

@spec ones(tuple() | [integer()],
  device: ExTorch.Device.device(),
  dtype: ExTorch.DType.dtype(),
  layout: ExTorch.Layout.layout(),
  memory_format: ExTorch.MemoryFormat.memory_format(),
  pin_memory: boolean(),
  requires_grad: boolean()
) :: ExTorch.Tensor.t()
@spec ones(
  tuple() | [integer()],
  ExTorch.Tensor.Options.t()
) :: ExTorch.Tensor.t()

Returns a tensor filled with the scalar value 1, with the shape defined by the variable argument size.

Arguments

  • size: a tuple/list of integers defining the shape of the output tensor.

Keyword args

  • dtype (ExTorch.DType, optional): the desired data type of returned tensor. Default: if nil, uses a global default (see ExTorch.set_default_dtype).

  • layout (ExTorch.Layout, optional): the desired layout of returned Tensor. Default: :strided.

  • device (ExTorch.Device, optional): the desired device of returned tensor. Default: if nil, uses the current device for the default tensor type (see ExTorch.set_default_device). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

  • requires_grad (boolean(), optional): If autograd should record operations on the returned tensor. Default: false.

  • pin_memory (bool, optional): If set, the returned tensor will be allocated in pinned memory. Works only for CPU tensors. Default: false.

  • memory_format (ExTorch.MemoryFormat, optional): the desired memory format of returned Tensor. Default: :contiguous.

Examples

iex> ExTorch.ones({2, 3})
#Tensor<
[[1., 1., 1.],
 [1., 1., 1.]]
[
  size: {2, 3},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

iex> ExTorch.ones({2, 3}, dtype: :uint8, device: :cuda)
#Tensor<
[[1, 1, 1],
 [1, 1, 1]]
[
  size: {2, 3},
  dtype: :byte,
  device: {:cuda, 0},
  requires_grad: false
]>

ones_like(input)

@spec ones_like(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()

See ExTorch.ones_like/2

Available signature calls:

  • ones_like(input)

ones_like(input, kwargs)

@spec ones_like(ExTorch.Tensor.t(),
  device: ExTorch.Device.device(),
  dtype: ExTorch.DType.dtype(),
  layout: ExTorch.Layout.layout(),
  memory_format: ExTorch.MemoryFormat.memory_format(),
  pin_memory: boolean(),
  requires_grad: boolean()
) :: ExTorch.Tensor.t()
@spec ones_like(ExTorch.Tensor.t(), ExTorch.Tensor.Options.t()) :: ExTorch.Tensor.t()

Returns a tensor filled with the scalar value 1, with the same size as input.

ExTorch.ones_like(input) is equivalent to ExTorch.ones(input.size, dtype: input.dtype, layout: input.layout, device: input.device)

Arguments

  • input: The input tensor (ExTorch.Tensor)

Keyword args

  • dtype (ExTorch.DType, optional): the desired data type of returned tensor. Default: auto. If auto, it will use the same data type as the input. If nil, it will use a global default (see ExTorch.set_default_dtype).

  • layout (ExTorch.Layout, optional): the desired layout of returned Tensor. Default: nil. If nil, it will use the same layout as the input.

  • device (ExTorch.Device, optional): the desired device of returned tensor. Default: auto. If auto, it will use the same device as the input. If nil, it will use the current device for the default tensor type (see ExTorch.set_default_device). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

  • requires_grad (boolean(), optional): If autograd should record operations on the returned tensor. Default: false.

  • pin_memory (bool, optional): If set, the returned tensor will be allocated in pinned memory. Works only for CPU tensors. Default: false.

  • memory_format (ExTorch.MemoryFormat, optional): the desired memory format of returned Tensor. Default: :preserve. If preserve, it will use the same memory format as the input.

Examples

# Create a tensor filled with ones from another float64 tensor.
iex> a = ExTorch.rand({3, 4}, dtype: :float64)
iex> ExTorch.ones_like(a)
#Tensor<
[[1., 1., 1., 1.],
 [1., 1., 1., 1.],
 [1., 1., 1., 1.]]
[
  size: {3, 4},
  dtype: :double,
  device: :cpu,
  requires_grad: false
]>

# Create a complex tensor with real part equal to one on the GPU from another CPU tensor.
iex> a = ExTorch.rand({3, 4}, dtype: :complex64)
iex> ExTorch.ones_like(a, device: :cuda)
#Tensor<
[[1.+0.j, 1.+0.j, 1.+0.j, 1.+0.j],
 [1.+0.j, 1.+0.j, 1.+0.j, 1.+0.j],
 [1.+0.j, 1.+0.j, 1.+0.j, 1.+0.j]]
[
  size: {3, 4},
  dtype: :complex_float,
  device: {:cuda, 0},
  requires_grad: false
]>

polar(abs, angle)

Constructs a complex tensor whose elements are Cartesian coordinates corresponding to the polar coordinates with absolute value abs and angle angle.

$$ \text{out} = \text{abs} \cdot \cos(\text{angle}) + \text{abs} \cdot \sin(\text{angle}) \cdot j $$

Arguments

  • abs: the absolute values (modulus) of each complex element (ExTorch.Tensor)
  • angle: the angles (phase, in radians) of each complex element (ExTorch.Tensor)

Notes

If both the inputs are :float32, the output will be :complex64. Similarly, if both the inputs are :float64, the output will be :complex128.

Examples

iex> abs = ExTorch.arange(5)
iex> angle = ExTorch.arange(-5, 0)
iex> ExTorch.polar(abs, angle)
#Tensor<
[ 0.0000+0.0000j, -0.6536+0.7568j, -1.9800-0.2822j, -1.2484-2.7279j,
  2.1612-3.3659j]
[
  size: {5},
  dtype: :complex_float,
  device: :cpu,
  requires_grad: false
]>
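The formula above can be verified elementwise with a stdlib-only Python sketch (not an ExTorch call), using the same inputs as the example: absolute values 0..4 and angles -5..-1 radians:

```python
import math

# Hypothetical pure-Python check of the polar formula; not ExTorch's implementation.
def polar(abs_vals, angles):
    """abs * cos(angle) + abs * sin(angle) * j, elementwise."""
    return [a * math.cos(t) + 1j * a * math.sin(t)
            for a, t in zip(abs_vals, angles)]

out = polar(range(5), range(-5, 0))
# Second element is 1 * (cos(-4) + j*sin(-4)), matching the tensor output above.
print(out[1])
```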

rand(size)

@spec rand(tuple() | [integer()]) :: ExTorch.Tensor.t()

See ExTorch.rand/2

Available signature calls:

  • rand(size)

rand(size, kwargs)

@spec rand(tuple() | [integer()],
  device: ExTorch.Device.device(),
  dtype: ExTorch.DType.dtype(),
  layout: ExTorch.Layout.layout(),
  memory_format: ExTorch.MemoryFormat.memory_format(),
  pin_memory: boolean(),
  requires_grad: boolean()
) :: ExTorch.Tensor.t()
@spec rand(
  tuple() | [integer()],
  ExTorch.Tensor.Options.t()
) :: ExTorch.Tensor.t()

Returns a tensor filled with random numbers from a uniform distribution on the interval $[0, 1)$.

The shape of the tensor is defined by the variable argument size.

Arguments

  • size: a tuple/list of integers defining the shape of the output tensor.

Keyword args

  • dtype (ExTorch.DType, optional): the desired data type of returned tensor. Default: if nil, uses a global default (see ExTorch.set_default_dtype).

  • layout (ExTorch.Layout, optional): the desired layout of returned Tensor. Default: :strided.

  • device (ExTorch.Device, optional): the desired device of returned tensor. Default: if nil, uses the current device for the default tensor type (see ExTorch.set_default_device). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

  • requires_grad (boolean(), optional): If autograd should record operations on the returned tensor. Default: false.

  • pin_memory (bool, optional): If set, the returned tensor will be allocated in pinned memory. Works only for CPU tensors. Default: false.

  • memory_format (ExTorch.MemoryFormat, optional): the desired memory format of returned Tensor. Default: :contiguous.

Examples

iex> ExTorch.rand({3, 3, 3})
#Tensor<
[[[0.4099, 0.8473, 0.6221],
  [0.9906, 0.3174, 0.9849],
  [0.6988, 0.1157, 0.9424]],

 [[0.0550, 0.9723, 0.4380],
  [0.9304, 0.2973, 0.4920],
  [0.1860, 0.9460, 0.2602]],

 [[0.9208, 0.9713, 0.8194],
  [0.8109, 0.1395, 0.1245],
  [0.5742, 0.5222, 0.0937]]]
[
  size: {3, 3, 3},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

iex> ExTorch.rand({2, 3}, dtype: :float32, device: :cuda)
#Tensor<
[[0.1583, 0.5184, 0.6711],
 [0.3829, 0.3248, 0.3524]]
[
  size: {2, 3},
  dtype: :float,
  device: {:cuda, 0},
  requires_grad: false
]>

rand_like(input)

@spec rand_like(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()

See ExTorch.rand_like/2

Available signature calls:

  • rand_like(input)

rand_like(input, kwargs)

@spec rand_like(ExTorch.Tensor.t(),
  device: ExTorch.Device.device(),
  dtype: ExTorch.DType.dtype(),
  layout: ExTorch.Layout.layout(),
  memory_format: ExTorch.MemoryFormat.memory_format(),
  pin_memory: boolean(),
  requires_grad: boolean()
) :: ExTorch.Tensor.t()
@spec rand_like(ExTorch.Tensor.t(), ExTorch.Tensor.Options.t()) :: ExTorch.Tensor.t()

Returns a tensor filled with random numbers from a uniform distribution on the interval $[0, 1)$, with the same size as input.

ExTorch.rand_like(input) is equivalent to ExTorch.rand(input.size, dtype: input.dtype, layout: input.layout, device: input.device)

Arguments

  • input: The input tensor (ExTorch.Tensor)

Keyword args

  • dtype (ExTorch.DType, optional): the desired data type of returned tensor. Default: auto. If auto, it will use the same data type as the input. If nil, it will use a global default (see ExTorch.set_default_dtype).

  • layout (ExTorch.Layout, optional): the desired layout of returned Tensor. Default: nil. If nil, it will use the same layout as the input.

  • device (ExTorch.Device, optional): the desired device of returned tensor. Default: auto. If auto, it will use the same device as the input. If nil, it will use the current device for the default tensor type (see ExTorch.set_default_device). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

  • requires_grad (boolean(), optional): If autograd should record operations on the returned tensor. Default: false.

  • pin_memory (bool, optional): If set, the returned tensor will be allocated in pinned memory. Works only for CPU tensors. Default: false.

  • memory_format (ExTorch.MemoryFormat, optional): the desired memory format of returned Tensor. Default: :preserve. If preserve, it will use the same memory format as the input.

Examples

# Derive a new float64 tensor from another one
iex> a = ExTorch.empty({3, 2, 2}, dtype: :float64)
iex> ExTorch.rand_like(a)
#Tensor<
[[[0.6495, 0.9480],
  [0.3083, 0.7135]],

 [[0.5482, 0.3676],
  [0.2825, 0.1806]],

 [[0.4742, 0.8673],
  [0.4542, 0.4239]]]
[
  size: {3, 2, 2},
  dtype: :double,
  device: :cpu,
  requires_grad: false
]>

# Derive a GPU tensor from a CPU one
iex> b = ExTorch.ones({2, 3}, dtype: :complex64)
iex> ExTorch.rand_like(b, device: :cuda)
#Tensor<
[[0.1554+0.6794j, 0.5356+0.2049j, 0.7555+0.3877j],
 [0.0148+0.0772j, 0.8368+0.3802j, 0.6820+0.1727j]]
[
  size: {2, 3},
  dtype: :complex_float,
  device: {:cuda, 0},
  requires_grad: false
]>

randint(high, size)

@spec randint(
  integer(),
  tuple() | [integer()]
) :: ExTorch.Tensor.t()

See ExTorch.randint/4

Available signature calls:

  • randint(high, size)

randint(high, size, kwargs)

@spec randint(
  integer(),
  tuple() | [integer()],
  device: ExTorch.Device.device(),
  dtype: ExTorch.DType.dtype(),
  layout: ExTorch.Layout.layout(),
  memory_format: ExTorch.MemoryFormat.memory_format(),
  pin_memory: boolean(),
  requires_grad: boolean()
) :: ExTorch.Tensor.t()
@spec randint(
  integer(),
  tuple() | [integer()],
  ExTorch.Tensor.Options.t()
) :: ExTorch.Tensor.t()
@spec randint(
  integer(),
  integer(),
  tuple() | [integer()]
) :: ExTorch.Tensor.t()

See ExTorch.randint/4

Available signature calls:

  • randint(low, high, size)
  • randint(high, size, opts)
  • randint(high, size, kwargs)

randint(low, high, size, kwargs)

@spec randint(
  integer(),
  integer(),
  tuple() | [integer()],
  device: ExTorch.Device.device(),
  dtype: ExTorch.DType.dtype(),
  layout: ExTorch.Layout.layout(),
  memory_format: ExTorch.MemoryFormat.memory_format(),
  pin_memory: boolean(),
  requires_grad: boolean()
) :: ExTorch.Tensor.t()
@spec randint(
  integer(),
  integer(),
  tuple() | [integer()],
  ExTorch.Tensor.Options.t()
) :: ExTorch.Tensor.t()

Returns a tensor filled with random integers generated uniformly between low (inclusive) and high (exclusive).

The shape of the tensor is defined by the variable argument size.

Arguments

  • low: Lowest integer to be drawn from the distribution. Default: 0.
  • high: One above the highest integer to be drawn from the distribution.
  • size: a tuple/list of integers defining the shape of the output tensor.

Keyword args

  • dtype (ExTorch.DType, optional): the desired data type of returned tensor. Default: if nil, uses a global default (see ExTorch.set_default_dtype).

  • layout (ExTorch.Layout, optional): the desired layout of returned Tensor. Default: :strided.

  • device (ExTorch.Device, optional): the desired device of returned tensor. Default: if nil, uses the current device for the default tensor type (see ExTorch.set_default_device). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

  • requires_grad (boolean(), optional): If autograd should record operations on the returned tensor. Default: false.

  • pin_memory (bool, optional): If set, the returned tensor will be allocated in pinned memory. Works only for CPU tensors. Default: false.

  • memory_format (ExTorch.MemoryFormat, optional): the desired memory format of returned Tensor. Default: :contiguous.

Examples

# Sample integers between 0 (inclusive) and 3 (exclusive)
iex> ExTorch.randint(3, {3, 3, 4})
#Tensor<
[[[0., 2., 0., 0.],
  [1., 2., 0., 2.],
  [2., 2., 1., 2.]],

 [[2., 1., 1., 1.],
  [2., 1., 0., 1.],
  [1., 1., 0., 0.]],

 [[1., 1., 1., 1.],
  [2., 2., 0., 1.],
  [1., 2., 1., 1.]]]
[
  size: {3, 3, 4},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

# Sample integers between 0 (inclusive) and 3 (exclusive) of type int64
iex> ExTorch.randint(3, {3, 3, 4}, dtype: :int64)
#Tensor<
[[[0, 0, 0, 2],
  [1, 1, 0, 0],
  [1, 0, 1, 1]],

 [[0, 1, 1, 1],
  [2, 0, 2, 0],
  [2, 1, 0, 2]],

 [[1, 0, 2, 1],
  [2, 2, 1, 0],
  [0, 0, 0, 2]]]
[
  size: {3, 3, 4},
  dtype: :long,
  device: :cpu,
  requires_grad: false
]>

# Sample integers between -2 (inclusive) and 3 (exclusive)
iex> ExTorch.randint(-2, 3, {2, 2, 4})
#Tensor<
[[[ 2.,  1.,  0., -1.],
  [ 2.,  2., -2.,  2.]],

 [[-1., -1.,  1., -1.],
  [ 2., -1.,  1., -1.]]]
[
  size: {2, 2, 4},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

# Sample integers between -2 (inclusive) and 3 (exclusive) on the GPU
iex> ExTorch.randint(-2, 3, {2, 2, 4}, device: :cuda)
#Tensor<
[[[-2.,  2.,  0., -2.],
  [ 0.,  0.,  0., -2.]],

 [[ 0.,  0., -2.,  0.],
  [ 0.,  1., -2.,  1.]]]
[
  size: {2, 2, 4},
  dtype: :float,
  device: {:cuda, 0},
  requires_grad: false
]>
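The bounds convention (low inclusive, high exclusive) matches Python's random.randrange; a quick stdlib check (not an ExTorch call) confirms that high is never drawn:

```python
import random

# low is inclusive, high is exclusive -- same convention as ExTorch.randint(-2, 3, ...)
random.seed(0)  # seed chosen here for reproducibility; ExTorch manages its own RNG state
samples = [random.randrange(-2, 3) for _ in range(1000)]
# The drawn values stay in {-2, -1, 0, 1, 2}; 3 never appears.
print(min(samples), max(samples))
```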

randint_like(input, high)

@spec randint_like(ExTorch.Tensor.t(), integer()) :: ExTorch.Tensor.t()

See ExTorch.randint_like/4

Available signature calls:

  • randint_like(input, high)

randint_like(input, high, kwargs)

@spec randint_like(ExTorch.Tensor.t(), integer(),
  device: ExTorch.Device.device(),
  dtype: ExTorch.DType.dtype(),
  layout: ExTorch.Layout.layout(),
  memory_format: ExTorch.MemoryFormat.memory_format(),
  pin_memory: boolean(),
  requires_grad: boolean()
) :: ExTorch.Tensor.t()
@spec randint_like(ExTorch.Tensor.t(), integer(), ExTorch.Tensor.Options.t()) ::
  ExTorch.Tensor.t()
@spec randint_like(ExTorch.Tensor.t(), integer(), integer()) :: ExTorch.Tensor.t()

See ExTorch.randint_like/4

Available signature calls:

  • randint_like(input, low, high)
  • randint_like(input, high, opts)
  • randint_like(input, high, kwargs)

randint_like(input, low, high, kwargs)

@spec randint_like(ExTorch.Tensor.t(), integer(), integer(),
  device: ExTorch.Device.device(),
  dtype: ExTorch.DType.dtype(),
  layout: ExTorch.Layout.layout(),
  memory_format: ExTorch.MemoryFormat.memory_format(),
  pin_memory: boolean(),
  requires_grad: boolean()
) :: ExTorch.Tensor.t()
@spec randint_like(
  ExTorch.Tensor.t(),
  integer(),
  integer(),
  ExTorch.Tensor.Options.t()
) ::
  ExTorch.Tensor.t()

Returns a tensor filled with random integers generated uniformly between low (inclusive) and high (exclusive), with the same size as input.

ExTorch.randint_like(input, low, high) is equivalent to ExTorch.randint(low, high, input.size, dtype: input.dtype, layout: input.layout, device: input.device)

Arguments

  • input: The input tensor (ExTorch.Tensor)
  • low: Lowest integer to be drawn from the distribution. Default: 0.
  • high: One above the highest integer to be drawn from the distribution.

Keyword args

  • dtype (ExTorch.DType, optional): the desired data type of returned tensor. Default: auto. If auto, it will use the same data type as the input. If nil, it will use a global default (see ExTorch.set_default_dtype).

  • layout (ExTorch.Layout, optional): the desired layout of returned Tensor. Default: nil. If nil, it will use the same layout as the input.

  • device (ExTorch.Device, optional): the desired device of returned tensor. Default: auto. If auto, it will use the same device as the input. If nil, it will use the current device for the default tensor type (see ExTorch.set_default_device). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

  • requires_grad (boolean(), optional): If autograd should record operations on the returned tensor. Default: false.

  • pin_memory (bool, optional): If set, the returned tensor will be allocated in pinned memory. Works only for CPU tensors. Default: false.

  • memory_format (ExTorch.MemoryFormat, optional): the desired memory format of returned Tensor. Default: :preserve. If preserve, it will use the same memory format as the input.

Examples

# Create a random tensor with values between 0 (inclusive) and 10 (exclusive) from a float32 one.
iex> a = ExTorch.zeros({3, 4, 5}, dtype: :float32)
iex> ExTorch.randint_like(a, 10)
#Tensor<
[[[2., 5., 0., 7., 5.],
  [9., 0., 9., 1., 4.],
  [6., 3., 6., 0., 2.],
  [2., 6., 5., 9., 0.]],

 [[9., 4., 7., 9., 8.],
  [2., 8., 0., 8., 3.],
  [6., 6., 1., 9., 0.],
  [5., 2., 1., 7., 8.]],

 [[8., 3., 6., 8., 9.],
  [5., 7., 0., 7., 6.],
  [5., 4., 0., 3., 3.],
  [4., 3., 7., 3., 5.]]]
[
  size: {3, 4, 5},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

# Create a CUDA random tensor with values between 0 (inclusive) and 5 (exclusive) from a CPU one
iex> b = ExTorch.rand({3, 3})
iex> ExTorch.randint_like(b, 5, device: :cuda)
#Tensor<
[[4., 1., 4.],
 [1., 1., 1.],
 [2., 4., 3.]]
[
  size: {3, 3},
  dtype: :float,
  device: {:cuda, 0},
  requires_grad: false
]>

# Create a random tensor with values between -1 (inclusive) and 5 (exclusive) from an int32 one.
iex> c = ExTorch.ones({3, 3}, dtype: :int32)
iex> ExTorch.randint_like(c, -1, 5)
#Tensor<
[[ 2,  2,  4],
 [-1,  4, -1],
 [ 0,  3,  1]]
[
  size: {3, 3},
  dtype: :int,
  device: :cpu,
  requires_grad: false
]>

# Create a float32 CUDA random tensor with values between -1 (inclusive) and 5 (exclusive) from an int32 one.
iex> ExTorch.randint_like(c, -1, 5, dtype: :float32, device: :cuda)
#Tensor<
[[4., 0., 4.],
 [0., 1., 2.],
 [1., 2., 1.]]
[
  size: {3, 3},
  dtype: :float,
  device: {:cuda, 0},
  requires_grad: false
]>

randn(size)

@spec randn(tuple() | [integer()]) :: ExTorch.Tensor.t()

See ExTorch.randn/2

Available signature calls:

  • randn(size)

randn(size, kwargs)

@spec randn(tuple() | [integer()],
  device: ExTorch.Device.device(),
  dtype: ExTorch.DType.dtype(),
  layout: ExTorch.Layout.layout(),
  memory_format: ExTorch.MemoryFormat.memory_format(),
  pin_memory: boolean(),
  requires_grad: boolean()
) :: ExTorch.Tensor.t()
@spec randn(
  tuple() | [integer()],
  ExTorch.Tensor.Options.t()
) :: ExTorch.Tensor.t()

Returns a tensor filled with random numbers from a normal distribution with mean 0 and variance 1 (also called the standard normal distribution).

$$ \text{out}_{i} \sim \mathcal{N}(0, 1) $$

The shape of the tensor is defined by the variable argument size.

Arguments

  • size: a tuple/list of integers defining the shape of the output tensor.

Keyword args

  • dtype (ExTorch.DType, optional): the desired data type of returned tensor. Default: if nil, uses a global default (see ExTorch.set_default_dtype).

  • layout (ExTorch.Layout, optional): the desired layout of returned Tensor. Default: :strided.

  • device (ExTorch.Device, optional): the desired device of returned tensor. Default: if nil, uses the current device for the default tensor type (see ExTorch.set_default_device). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

  • requires_grad (boolean(), optional): If autograd should record operations on the returned tensor. Default: false.

  • pin_memory (bool, optional): If set, the returned tensor will be allocated in pinned memory. Works only for CPU tensors. Default: false.

  • memory_format (ExTorch.MemoryFormat, optional): the desired memory format of returned Tensor. Default: :contiguous.

Examples

iex> ExTorch.randn({3, 3, 5})
#Tensor<
[[[ 0.6246, -0.4914,  1.1007, -0.0740, -1.6833],
  [-0.3883,  1.2653, -0.7250,  0.4994, -0.0219],
  [-1.3880,  1.8336, -1.7369, -0.2781, -0.0703]],

 [[ 0.2841,  0.7564, -0.3294,  0.1375,  2.0717],
  [-0.6085, -0.8361,  0.5009,  1.5529,  0.5856],
  [-0.3905, -0.3704,  1.1392,  0.3159, -0.5587]],

 [[ 0.8050, -0.0064, -0.6925, -0.0121, -1.2824],
  [-1.7309, -1.4089, -1.0207,  0.2222, -0.5027],
  [-0.4363, -0.1095,  1.3950, -0.4580,  0.2475]]]
[
  size: {3, 3, 5},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

iex> ExTorch.randn({3, 3, 5}, device: :cuda)
#Tensor<
[[[ 3.5948e-01,  1.9308e-01, -1.0206e-01, -8.1509e-01, -1.6322e+00],
  [ 5.3390e-02, -1.2340e-01, -4.0909e-01,  3.5126e-01, -1.4023e-01],
  [ 6.5496e-01,  1.4283e+00, -1.2375e+00,  1.3729e+00,  4.2116e-01]],

 [[ 1.4638e+00,  6.9129e-03, -1.4147e+00, -1.8253e+00, -1.9235e+00],
  [-1.3941e-01, -7.3455e-01,  3.7658e-01, -1.0569e-01,  6.8978e-01],
  [ 3.7640e-01, -3.5241e-01, -1.1376e-01, -5.2477e-01, -1.6157e-01]],

 [[-2.8951e-01, -1.5665e+00,  3.4778e-01, -2.1329e+00, -1.0400e+00],
  [ 4.7831e-04,  1.2714e+00,  1.6693e+00, -2.1787e+00,  4.4486e-01],
  [-3.2052e-01,  2.3278e+00,  6.2929e-01,  2.5321e-01, -1.4433e+00]]]
[
  size: {3, 3, 5},
  dtype: :float,
  device: {:cuda, 0},
  requires_grad: false
]>
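A stdlib-only Python sketch (using random.gauss, not ExTorch's RNG) shows the empirical mean and variance of such samples converging to the distribution's parameters 0 and 1:

```python
import random

# Hypothetical stdlib illustration of standard-normal sampling; not ExTorch's implementation.
random.seed(0)  # seed chosen here for reproducibility
samples = [random.gauss(0.0, 1.0) for _ in range(10_000)]

mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
# With 10,000 draws, both statistics land close to 0 and 1.
print(round(mean, 2), round(var, 2))
```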

randn_like(input)

@spec randn_like(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()

See ExTorch.randn_like/2

Available signature calls:

  • randn_like(input)

randn_like(input, kwargs)

@spec randn_like(ExTorch.Tensor.t(),
  device: ExTorch.Device.device(),
  dtype: ExTorch.DType.dtype(),
  layout: ExTorch.Layout.layout(),
  memory_format: ExTorch.MemoryFormat.memory_format(),
  pin_memory: boolean(),
  requires_grad: boolean()
) :: ExTorch.Tensor.t()
@spec randn_like(ExTorch.Tensor.t(), ExTorch.Tensor.Options.t()) :: ExTorch.Tensor.t()

Returns a tensor filled with random numbers from a normal distribution with mean 0 and variance 1 (also called the standard normal distribution), with the same size as input.

ExTorch.randn_like(input) is equivalent to ExTorch.randn(input.size, dtype: input.dtype, layout: input.layout, device: input.device)

Arguments

  • input (ExTorch.Tensor) - the size of input will determine the size of the output tensor.

Keyword args

  • dtype (ExTorch.DType, optional): the desired data type of returned tensor. Default: auto. If auto, it will use the same data type as the input. If nil, it will use a global default (see ExTorch.set_default_dtype).

  • layout (ExTorch.Layout, optional): the desired layout of returned Tensor. Default: nil. If nil, it will use the same layout as the input.

  • device (ExTorch.Device, optional): the desired device of returned tensor. Default: auto. If auto, it will use the same device as the input. If nil, it will use the current device for the default tensor type (see ExTorch.set_default_device). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

  • requires_grad (boolean(), optional): If autograd should record operations on the returned tensor. Default: false.

  • pin_memory (bool, optional): If true, the returned tensor will be allocated in pinned memory. Works only for CPU tensors. Default: false.

  • memory_format (ExTorch.MemoryFormat, optional): the desired memory format of returned Tensor. Default: :preserve. If preserve, it will use the same memory format as the input.

Examples

# Derive a new float64 tensor from another one
iex> a = ExTorch.empty({3, 2, 2}, dtype: :float64)
iex> ExTorch.randn_like(a)
#Tensor<
[[[0.6394, 0.0540],
  [0.8050, 0.6426]],

 [[0.7196, 0.6789],
  [0.2813, 0.4029]],

 [[0.0898, 0.4235],
  [0.3301, 0.2744]]]
[
  size: {3, 2, 2},
  dtype: :double,
  device: :cpu,
  requires_grad: false
]>

# Derive a new cuda float64 tensor from another one
iex> b = ExTorch.empty({3, 2}, device: :cuda)
iex> ExTorch.randn_like(b, dtype: :float64)
#Tensor<
[[0.2639, 0.7628],
 [0.5935, 0.4772],
 [0.0176, 0.2496]]
[
  size: {3, 2},
  dtype: :double,
  device: {:cuda, 0},
  requires_grad: false
]>

tensor(list)

See ExTorch.tensor/2

Available signature calls:

  • tensor(list)

tensor(list, kwargs)

Constructs a tensor from the data in list.

Arguments

  • list: Initial data for the tensor. Can be a list, tuple or number.

Keyword args

  • dtype (ExTorch.DType, optional): the desired data type of returned tensor. Default: if nil, uses a global default (see ExTorch.set_default_dtype).

  • layout (ExTorch.Layout, optional): the desired layout of returned Tensor. Default: :strided.

  • device (ExTorch.Device, optional): the desired device of returned tensor. Default: if nil, uses the current device for the default tensor type (see ExTorch.set_default_device). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

  • requires_grad (boolean(), optional): If autograd should record operations on the returned tensor. Default: false.

  • pin_memory (bool, optional): If true, the returned tensor will be allocated in pinned memory. Works only for CPU tensors. Default: false.

  • memory_format (ExTorch.MemoryFormat, optional): the desired memory format of returned Tensor. Default: :contiguous

Examples

iex> ExTorch.tensor([[0.1, 1.2], [2.2, 3.1], [4.9, 5.2]])
#Tensor<
[[0.1000, 1.2000],
 [2.2000, 3.1000],
 [4.9000, 5.2000]]
[
  size: {3, 2},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

# Type inference
iex> ExTorch.tensor([0, 1])
#Tensor<
[0, 1]
[size: {2}, dtype: :byte, device: :cpu, requires_grad: false]>

iex> ExTorch.tensor([[0.11111, 0.222222, 0.3333333]], dtype: :float64)
#Tensor<
[[0.1111, 0.2222, 0.3333]]
[
  size: {1, 3},
  dtype: :double,
  device: :cpu,
  requires_grad: false
]>

zeros(size)

@spec zeros(tuple() | [integer()]) :: ExTorch.Tensor.t()

See ExTorch.zeros/2

Available signature calls:

  • zeros(size)

zeros(size, kwargs)

@spec zeros(tuple() | [integer()],
  device: ExTorch.Device.device(),
  dtype: ExTorch.DType.dtype(),
  layout: ExTorch.Layout.layout(),
  memory_format: ExTorch.MemoryFormat.memory_format(),
  pin_memory: boolean(),
  requires_grad: boolean()
) :: ExTorch.Tensor.t()
@spec zeros(
  tuple() | [integer()],
  ExTorch.Tensor.Options.t()
) :: ExTorch.Tensor.t()

Returns a tensor filled with the scalar value 0, with the shape defined by the variable argument size.

Arguments

  • size: a tuple/list of integers defining the shape of the output tensor.

Keyword args

  • dtype (ExTorch.DType, optional): the desired data type of returned tensor. Default: if nil, uses a global default (see ExTorch.set_default_dtype).

  • layout (ExTorch.Layout, optional): the desired layout of returned Tensor. Default: :strided.

  • device (ExTorch.Device, optional): the desired device of returned tensor. Default: if nil, uses the current device for the default tensor type (see ExTorch.set_default_device). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

  • requires_grad (boolean(), optional): If autograd should record operations on the returned tensor. Default: false.

  • pin_memory (bool, optional): If true, the returned tensor will be allocated in pinned memory. Works only for CPU tensors. Default: false.

  • memory_format (ExTorch.MemoryFormat, optional): the desired memory format of returned Tensor. Default: :contiguous

Examples

iex> ExTorch.zeros({2, 3})
#Tensor<
[[0., 0., 0.],
 [0., 0., 0.]]
[
  size: {2, 3},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>


iex> ExTorch.zeros({2, 3}, dtype: :uint8, device: :cuda)
#Tensor<
[[0, 0, 0],
 [0, 0, 0]]
[
  size: {2, 3},
  dtype: :byte,
  device: {:cuda, 0},
  requires_grad: false
]>

zeros_like(input)

@spec zeros_like(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()

See ExTorch.zeros_like/2

Available signature calls:

  • zeros_like(input)

zeros_like(input, kwargs)

@spec zeros_like(ExTorch.Tensor.t(),
  device: ExTorch.Device.device(),
  dtype: ExTorch.DType.dtype(),
  layout: ExTorch.Layout.layout(),
  memory_format: ExTorch.MemoryFormat.memory_format(),
  pin_memory: boolean(),
  requires_grad: boolean()
) :: ExTorch.Tensor.t()
@spec zeros_like(ExTorch.Tensor.t(), ExTorch.Tensor.Options.t()) :: ExTorch.Tensor.t()

Returns a tensor filled with the scalar value 0, with the same size as input.

ExTorch.zeros_like(input) is equivalent to ExTorch.zeros(input.size, dtype: input.dtype, layout: input.layout, device: input.device)

Arguments

  • input (ExTorch.Tensor) - the size of input will determine the size of the output tensor.

Keyword args

  • dtype (ExTorch.DType, optional): the desired data type of returned tensor. Default: auto. If auto, it will use the same data type as the input. If nil, it will use a global default (see ExTorch.set_default_dtype).

  • layout (ExTorch.Layout, optional): the desired layout of returned Tensor. Default: nil. If nil, it will use the same layout as the input.

  • device (ExTorch.Device, optional): the desired device of returned tensor. Default: auto. If auto, it will use the same device as the input. If nil, it will use the current device for the default tensor type (see ExTorch.set_default_device). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

  • requires_grad (boolean(), optional): If autograd should record operations on the returned tensor. Default: false.

  • pin_memory (bool, optional): If true, the returned tensor will be allocated in pinned memory. Works only for CPU tensors. Default: false.

  • memory_format (ExTorch.MemoryFormat, optional): the desired memory format of returned Tensor. Default: :preserve. If preserve, it will use the same memory format as the input.

Examples

# Create a tensor filled with zeros from another float64 tensor.
iex> a = ExTorch.rand({3, 4}, dtype: :float64)
iex> ExTorch.zeros_like(a)
#Tensor<
[[0., 0., 0., 0.],
 [0., 0., 0., 0.],
 [0., 0., 0., 0.]]
[
  size: {3, 4},
  dtype: :double,
  device: :cpu,
  requires_grad: false
]>

# Create a complex tensor filled with zeros on the GPU from another CPU tensor.
iex> a = ExTorch.rand({3, 4}, dtype: :complex64)
iex> ExTorch.zeros_like(a, device: :cuda)
#Tensor<
[[0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],
 [0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],
 [0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j]]
[
  size: {3, 4},
  dtype: :complex_float,
  device: {:cuda, 0},
  requires_grad: false
]>

Tensor manipulation

adjoint(input)

@spec adjoint(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()

Returns a view of the tensor conjugated and with the last two dimensions transposed.

ExTorch.adjoint(x) is equivalent to ExTorch.conj(ExTorch.transpose(x, -2, -1)) and to ExTorch.transpose(x, -2, -1) for real tensors.
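The conjugate-transpose rule can be sketched in plain Python over nested lists of complex numbers (an illustration of the semantics only, not the ExTorch implementation):

```python
def adjoint2d(m):
    # Conjugate transpose of a 2-D list of Python complex numbers:
    # entry (j, i) of the result is the conjugate of entry (i, j) of the input.
    rows, cols = len(m), len(m[0])
    return [[m[i][j].conjugate() for i in range(rows)] for j in range(cols)]

a = [[0 + 0j, 1 + 1j], [2 + 2j, 3 + 3j]]
print(adjoint2d(a))
```

For real inputs the conjugation is a no-op, which is why adjoint reduces to a plain transpose of the last two dimensions.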

Arguments

  • input (ExTorch.Tensor) - the input tensor.

Examples

iex> x = ExTorch.arange(4)
iex> a = ExTorch.complex(x, x) |> ExTorch.reshape({2, 2})
#Tensor<
[[0.0000+0.0000j, 1.0000+1.0000j],
 [2.0000+2.0000j, 3.0000+3.0000j]]
[
  size: {2, 2},
  dtype: :complex_float,
  device: :cpu,
  requires_grad: false
]>

iex> ExTorch.adjoint(a)
#Tensor<
[[0.0000-0.0000j, 2.0000-2.0000j],
 [1.0000-1.0000j, 3.0000-3.0000j]]
[
  size: {2, 2},
  dtype: :complex_float,
  device: :cpu,
  requires_grad: false
]>

cat(input)

@spec cat([ExTorch.Tensor.t()] | tuple()) :: ExTorch.Tensor.t()

See ExTorch.cat/3

Available signature calls:

  • cat(input)

cat(input, dim)

@spec cat([ExTorch.Tensor.t()] | tuple(), integer()) :: ExTorch.Tensor.t()
@spec cat([ExTorch.Tensor.t()] | tuple(),
  dim: integer(),
  out: ExTorch.Tensor.t() | nil
) ::
  ExTorch.Tensor.t()

See ExTorch.cat/3

Available signature calls:

  • cat(input, kwargs)
  • cat(input, dim)

cat(input, dim, kwargs)

@spec cat([ExTorch.Tensor.t()] | tuple(), integer(), [
  {:out, ExTorch.Tensor.t() | nil}
]) ::
  ExTorch.Tensor.t()
@spec cat([ExTorch.Tensor.t()] | tuple(), integer(), ExTorch.Tensor.t() | nil) ::
  ExTorch.Tensor.t()

Concatenates the given sequence of tensors along the given dimension. All tensors must either have the same shape (except in the concatenating dimension) or be empty.

Arguments

  • tensors ([ExTorch.Tensor] | tuple()) - A sequence of tensors of the same type. Non-empty tensors provided must have the same shape, except in the cat dimension.

Optional arguments

  • dim (integer()) - the dimension over which the tensors are concatenated. Default: 0
  • out (ExTorch.Tensor | nil) - an optional pre-allocated tensor used to store the concatenation output. Default: nil

Examples

iex> x = ExTorch.arange(5) |> ExTorch.unsqueeze(-1)
#Tensor<
[[0.0000],
 [1.0000],
 [2.0000],
 [3.0000],
 [4.0000]]
[
  size: {5, 1},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

iex> ExTorch.cat([x, x], -1)
#Tensor<
[[0.0000, 0.0000],
 [1.0000, 1.0000],
 [2.0000, 2.0000],
 [3.0000, 3.0000],
 [4.0000, 4.0000]]
[
  size: {5, 2},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

chunk(input, chunks)

@spec chunk(ExTorch.Tensor.t(), integer()) :: [ExTorch.Tensor.t()]

See ExTorch.chunk/3

Available signature calls:

  • chunk(input, chunks)

chunk(input, chunks, dim)

@spec chunk(ExTorch.Tensor.t(), integer(), integer()) :: [ExTorch.Tensor.t()]
@spec chunk(ExTorch.Tensor.t(), integer(), [{:dim, integer()}]) :: [
  ExTorch.Tensor.t()
]

Attempts to split a tensor into the specified number of chunks. Each chunk is a view of the input tensor.

If the tensor size along the given dimension dim is divisible by chunks, all returned chunks will be the same size. If the tensor size along the given dimension dim is not divisible by chunks, all returned chunks will be the same size, except the last one. If such division is not possible, this function may return fewer than the specified number of chunks.

Arguments

  • input (ExTorch.Tensor) - the tensor to split
  • chunks (integer) - number of chunks to return

Optional arguments

  • dim (integer) - dimension along which to split the tensor. Default: 0

Notes

  • This function may return fewer than the specified number of chunks!
  • Use ExTorch.tensor_split/3 to ensure that the result will have the exact number of chunks.
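The size rule above can be sketched in plain Python (a hypothetical helper illustrating the libtorch behavior, not part of ExTorch):

```python
import math

def chunk_sizes(total: int, chunks: int) -> list[int]:
    # Every full chunk holds ceil(total / chunks) elements; the final chunk
    # holds whatever remains, and chunks that would be empty are dropped,
    # which is why fewer than `chunks` pieces can come back.
    full = math.ceil(total / chunks)
    sizes = []
    remaining = total
    while remaining > 0:
        sizes.append(min(full, remaining))
        remaining -= full
    return sizes

print(chunk_sizes(11, 6))  # six chunks, matching the example below
print(chunk_sizes(6, 4))   # only three chunks come back
```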

Examples

iex> ExTorch.arange(11) |> ExTorch.chunk(6)
[
  #Tensor<
[0.0000, 1.0000]
[
    size: {2},
    dtype: :float,
    device: :cpu,
    requires_grad: false
  ]>,
  #Tensor<
[2., 3.]
[
    size: {2},
    dtype: :float,
    device: :cpu,
    requires_grad: false
  ]>,
  #Tensor<
[4., 5.]
[
    size: {2},
    dtype: :float,
    device: :cpu,
    requires_grad: false
  ]>,
  #Tensor<
[6., 7.]
[
    size: {2},
    dtype: :float,
    device: :cpu,
    requires_grad: false
  ]>,
  #Tensor<
[8., 9.]
[
    size: {2},
    dtype: :float,
    device: :cpu,
    requires_grad: false
  ]>,
  #Tensor<
[10.]
[size: {1}, dtype: :float, device: :cpu, requires_grad: false]>
]

column_stack(tensors)

@spec column_stack([ExTorch.Tensor.t()] | tuple()) :: ExTorch.Tensor.t()

See ExTorch.column_stack/2

Available signature calls:

  • column_stack(tensors)

column_stack(tensors, kwargs)

@spec column_stack(
  [ExTorch.Tensor.t()] | tuple(),
  [{:out, ExTorch.Tensor.t() | nil}]
) :: ExTorch.Tensor.t()
@spec column_stack([ExTorch.Tensor.t()] | tuple(), ExTorch.Tensor.t() | nil) ::
  ExTorch.Tensor.t()

Creates a new tensor by horizontally stacking the tensors in tensors.

Equivalent to ExTorch.hstack(tensors), except each zero or one dimensional tensor t in tensors is first reshaped into a (ExTorch.Tensor.numel(t), 1) column before being stacked horizontally.
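The reshape-then-stack rule can be sketched in plain Python over nested lists (an illustration of the semantics, not the ExTorch implementation):

```python
def as_column(t):
    # 0-D and 1-D inputs become an (n, 1) column; 2-D inputs pass through.
    if not isinstance(t, list):           # scalar -> 1x1 column
        return [[t]]
    if t and not isinstance(t[0], list):  # 1-D -> (n, 1) column
        return [[v] for v in t]
    return t

def column_stack(tensors):
    cols = [as_column(t) for t in tensors]
    # Concatenate the columns row by row (horizontal stacking).
    return [sum((c[i] for c in cols), []) for i in range(len(cols[0]))]

print(column_stack([[1, 2, 3], [4, 5, 6]]))  # [[1, 4], [2, 5], [3, 6]]
```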

Arguments

  • tensors ([ExTorch.Tensor] | tuple()) - sequence of tensors to concatenate.

Optional arguments

  • out (ExTorch.Tensor | nil) - an optional pre-allocated tensor used to store the output result. Default: nil

Examples

# Stack two 1D tensors
iex> a = ExTorch.tensor([1, 2, 3])
iex> b = ExTorch.tensor([4, 5, 6])
iex> ExTorch.column_stack([a, b])
#Tensor<
[[1, 4],
 [2, 5],
 [3, 6]]
[size: {3, 2}, dtype: :byte, device: :cpu, requires_grad: false]>

# Stack 2D tensors
iex> a = ExTorch.arange(5)
iex> b = ExTorch.arange(10) |> ExTorch.reshape({5, 2})
iex> ExTorch.column_stack({a, b, b})
#Tensor<
[[0.0000, 0.0000, 1.0000, 0.0000, 1.0000],
 [1.0000, 2.0000, 3.0000, 2.0000, 3.0000],
 [2.0000, 4.0000, 5.0000, 4.0000, 5.0000],
 [3.0000, 6.0000, 7.0000, 6.0000, 7.0000],
 [4.0000, 8.0000, 9.0000, 8.0000, 9.0000]]
[size: {5, 5}, dtype: :float, device: :cpu, requires_grad: false]>

concat(input)

@spec concat([ExTorch.Tensor.t()] | tuple()) :: ExTorch.Tensor.t()

Alias for cat/1

concat(input, dim)

@spec concat([ExTorch.Tensor.t()] | tuple(), integer()) :: ExTorch.Tensor.t()
@spec concat([ExTorch.Tensor.t()] | tuple(),
  dim: integer(),
  out: ExTorch.Tensor.t() | nil
) ::
  ExTorch.Tensor.t()

Alias for cat/2

concat(input, dim, kwargs)

@spec concat([ExTorch.Tensor.t()] | tuple(), integer(), [
  {:out, ExTorch.Tensor.t() | nil}
]) ::
  ExTorch.Tensor.t()
@spec concat([ExTorch.Tensor.t()] | tuple(), integer(), ExTorch.Tensor.t() | nil) ::
  ExTorch.Tensor.t()

Alias for cat/3

concatenate(input)

@spec concatenate([ExTorch.Tensor.t()] | tuple()) :: ExTorch.Tensor.t()

Alias for cat/1

concatenate(input, dim)

@spec concatenate([ExTorch.Tensor.t()] | tuple(), integer()) :: ExTorch.Tensor.t()
@spec concatenate([ExTorch.Tensor.t()] | tuple(),
  dim: integer(),
  out: ExTorch.Tensor.t() | nil
) ::
  ExTorch.Tensor.t()

Alias for cat/2

concatenate(input, dim, kwargs)

@spec concatenate([ExTorch.Tensor.t()] | tuple(), integer(), [
  {:out, ExTorch.Tensor.t() | nil}
]) ::
  ExTorch.Tensor.t()
@spec concatenate([ExTorch.Tensor.t()] | tuple(), integer(), ExTorch.Tensor.t() | nil) ::
  ExTorch.Tensor.t()

Alias for cat/3

conj(tensor)

@spec conj(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()

Returns a view of input with a flipped conjugate bit. If input has a non-complex dtype, this function just returns input.

Arguments

  • tensor (ExTorch.Tensor) - the input tensor.

Notes

ExTorch.conj/1 performs a lazy conjugation, but the actual conjugated tensor can be materialized at any time using ExTorch.resolve_conj/1.

Examples

iex> a = ExTorch.rand({2, 2}, dtype: :complex64)
#Tensor<
[[0.5885+0.0263j, 0.8141+0.0605j],
 [0.9169+0.3126j, 0.6344+0.2768j]]
[
  size: {2, 2},
  dtype: :complex_float,
  device: :cpu,
  requires_grad: false
]>

# Conjugate the input
iex> b = ExTorch.conj(a)
#Tensor<
[[0.5885-0.0263j, 0.8141-0.0605j],
 [0.9169-0.3126j, 0.6344-0.2768j]]
[
  size: {2, 2},
  dtype: :complex_float,
  device: :cpu,
  requires_grad: false
]>

# Check that conj bit is set to true
iex> ExTorch.Tensor.is_conj(b)
true

diagonal_scatter(input, src)

@spec diagonal_scatter(
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t()
) :: ExTorch.Tensor.t()

See ExTorch.diagonal_scatter/6

Available signature calls:

  • diagonal_scatter(input, src)

diagonal_scatter(input, src, kwargs)

@spec diagonal_scatter(
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t(),
  offset: integer(),
  dim1: integer(),
  dim2: integer(),
  out: ExTorch.Tensor.t() | nil
) :: ExTorch.Tensor.t()
@spec diagonal_scatter(
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t(),
  integer()
) :: ExTorch.Tensor.t()

See ExTorch.diagonal_scatter/6

Available signature calls:

  • diagonal_scatter(input, src, offset)
  • diagonal_scatter(input, src, kwargs)

diagonal_scatter(input, src, offset, dim1)

@spec diagonal_scatter(
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t(),
  integer(),
  integer()
) :: ExTorch.Tensor.t()
@spec diagonal_scatter(
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t(),
  integer(),
  dim1: integer(),
  dim2: integer(),
  out: ExTorch.Tensor.t() | nil
) :: ExTorch.Tensor.t()

See ExTorch.diagonal_scatter/6

Available signature calls:

  • diagonal_scatter(input, src, offset, kwargs)
  • diagonal_scatter(input, src, offset, dim1)

diagonal_scatter(input, src, offset, dim1, dim2)

@spec diagonal_scatter(
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t(),
  integer(),
  integer(),
  integer()
) :: ExTorch.Tensor.t()
@spec diagonal_scatter(
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t(),
  integer(),
  integer(),
  dim2: integer(),
  out: ExTorch.Tensor.t() | nil
) :: ExTorch.Tensor.t()

See ExTorch.diagonal_scatter/6

Available signature calls:

  • diagonal_scatter(input, src, offset, dim1, kwargs)
  • diagonal_scatter(input, src, offset, dim1, dim2)

diagonal_scatter(input, src, offset, dim1, dim2, kwargs)

@spec diagonal_scatter(
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t(),
  integer(),
  integer(),
  integer(),
  [{:out, ExTorch.Tensor.t() | nil}]
) :: ExTorch.Tensor.t()
@spec diagonal_scatter(
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t(),
  integer(),
  integer(),
  integer(),
  ExTorch.Tensor.t() | nil
) :: ExTorch.Tensor.t()

Embeds the values of the src tensor into input along the diagonal elements of input, with respect to dim1 and dim2.

This function returns a tensor with fresh storage; it does not return a view.

The argument offset controls which diagonal to consider:

  • If offset = 0, it is the main diagonal.
  • If offset > 0, it is above the main diagonal.
  • If offset < 0, it is below the main diagonal.
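For the 2-D case, the offset rule can be sketched in plain Python over nested lists (an illustration of the semantics only, not the ExTorch implementation):

```python
import copy

def diagonal_scatter(matrix, src, offset=0):
    # Copy the input, then overwrite the selected diagonal with src values.
    out = copy.deepcopy(matrix)
    # A positive offset starts in row 0 to the right of the main diagonal;
    # a negative offset starts below it in column 0.
    row, col = (0, offset) if offset >= 0 else (-offset, 0)
    for v in src:
        out[row][col] = v
        row += 1
        col += 1
    return out

a = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
print(diagonal_scatter(a, [1, 1, 1], 0))  # main diagonal
print(diagonal_scatter(a, [1, 1], 1))     # diagonal above the main one
```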

Arguments

  • input (ExTorch.Tensor) - the input tensor. Must be at least 2-dimensional.
  • src (ExTorch.Tensor) - the tensor to embed into input.
  • offset (integer) - which diagonal to consider. Default: 0 (main diagonal).
  • dim1 (integer) - first dimension with respect to which to take diagonal. Default: 0.
  • dim2 (integer) - second dimension with respect to which to take diagonal. Default: 1.

Optional arguments

  • out (ExTorch.Tensor or nil) - an optional pre-allocated tensor used to store the output result. Default: nil

Notes

src must be of the proper size in order to be embedded into input. Specifically, it should have the same shape as ExTorch.diagonal(input, offset, dim1, dim2)

Examples

iex> a = ExTorch.zeros({3, 3})
#Tensor<
[[   0.,    0.,    0.],
 [   0.,    0.,    0.],
 [   0.,    0.,    0.]]
[size: {3, 3}, dtype: :float, device: :cpu, requires_grad: false]>

iex> ExTorch.diagonal_scatter(a, ExTorch.ones(3), 0)
#Tensor<
[[1.0000, 0.0000, 0.0000],
 [0.0000, 1.0000, 0.0000],
 [0.0000, 0.0000, 1.0000]]
[size: {3, 3}, dtype: :float, device: :cpu, requires_grad: false]>

iex> ExTorch.diagonal_scatter(a, ExTorch.ones(2), 1)
#Tensor<
[[0.0000, 1.0000, 0.0000],
 [0.0000, 0.0000, 1.0000],
 [0.0000, 0.0000, 0.0000]]
[size: {3, 3}, dtype: :float, device: :cpu, requires_grad: false]>

dsplit(input, indices_or_sections)

@spec dsplit(ExTorch.Tensor.t(), integer() | [integer()] | tuple()) :: [
  ExTorch.Tensor.t()
]

Splits input, a tensor with three or more dimensions, into multiple tensors depthwise according to indices_or_sections. Each split is a view of input.

This is equivalent to calling ExTorch.tensor_split(input, indices_or_sections, dim: 2) (the split dimension is 2), except that if indices_or_sections is an integer it must evenly divide the split dimension or a runtime error will be thrown.

Arguments

  • input (ExTorch.Tensor) - the tensor to split. Must have at least three dimensions.
  • indices_or_sections (integer() | [integer()] | tuple()) - if an integer, the number of equal depthwise sections to produce; if a list or tuple of integers, the depth indices at which to split input.

Examples

iex> t = ExTorch.arange(16) |> ExTorch.reshape({2, 2, 4})
#Tensor<
[[[ 0.0000,  1.0000,  2.0000,  3.0000],
  [ 4.0000,  5.0000,  6.0000,  7.0000]],

 [[ 8.0000,  9.0000, 10.0000, 11.0000],
  [12.0000, 13.0000, 14.0000, 15.0000]]]
[size: {2, 2, 4}, dtype: :float, device: :cpu, requires_grad: false]>

iex> ExTorch.dsplit(t, 2)
[
  #Tensor<
  [[[ 0.0000,  1.0000],
    [ 4.0000,  5.0000]],

   [[ 8.0000,  9.0000],
    [12.0000, 13.0000]]]
  [size: {2, 2, 2}, dtype: :float, device: :cpu, requires_grad: false]>,
  #Tensor<
  [[[ 2.,  3.],
    [ 6.,  7.]],

   [[10., 11.],
    [14., 15.]]]
  [size: {2, 2, 2}, dtype: :float, device: :cpu, requires_grad: false]>
]

dstack(tensors)

@spec dstack([ExTorch.Tensor.t()] | tuple()) :: ExTorch.Tensor.t()

See ExTorch.dstack/2

Available signature calls:

  • dstack(tensors)

dstack(tensors, kwargs)

@spec dstack(
  [ExTorch.Tensor.t()] | tuple(),
  [{:out, ExTorch.Tensor.t() | nil}]
) :: ExTorch.Tensor.t()
@spec dstack([ExTorch.Tensor.t()] | tuple(), ExTorch.Tensor.t() | nil) ::
  ExTorch.Tensor.t()

Stack tensors in sequence depthwise (along third axis).

This is equivalent to concatenation along the third axis after 1-D and 2-D tensors have been reshaped so that each tensor has at least 3 dimensions.

Arguments

  • tensors ([ExTorch.Tensor.t()] | tuple()) - sequence of tensors to concatenate.

Optional arguments

  • out (ExTorch.Tensor | nil) - an optional pre-allocated tensor used to store the output result. Default: nil

Examples

iex> a = ExTorch.tensor([1, 2, 3])
iex> b = ExTorch.tensor([4, 5, 6])
iex> ExTorch.dstack({a, b})
#Tensor<
[[[1, 4],
  [2, 5],
  [3, 6]]]
[size: {1, 3, 2}, dtype: :byte, device: :cpu, requires_grad: false]>

iex> a = ExTorch.tensor([[1],[2],[3]])
#Tensor<
[[1],
 [2],
 [3]]
[size: {3, 1}, dtype: :byte, device: :cpu, requires_grad: false]>
iex> b = ExTorch.tensor([[4],[5],[6]])
#Tensor<
[[4],
 [5],
 [6]]
[size: {3, 1}, dtype: :byte, device: :cpu, requires_grad: false]>
iex> ExTorch.dstack([a, b])
#Tensor<
[[[1, 4]],

 [[2, 5]],

 [[3, 6]]]
[size: {3, 1, 2}, dtype: :byte, device: :cpu, requires_grad: false]>

gather(input, dim, index)

See ExTorch.gather/5

Available signature calls:

  • gather(input, dim, index)

gather(input, dim, index, kwargs)

@spec gather(
  ExTorch.Tensor.t(),
  integer(),
  ExTorch.Tensor.t(),
  sparse_grad: boolean(),
  out: ExTorch.Tensor.t() | nil
) :: ExTorch.Tensor.t()
@spec gather(
  ExTorch.Tensor.t(),
  integer(),
  ExTorch.Tensor.t(),
  boolean()
) :: ExTorch.Tensor.t()

See ExTorch.gather/5

Available signature calls:

  • gather(input, dim, index, sparse_grad)
  • gather(input, dim, index, kwargs)

gather(input, dim, index, sparse_grad, kwargs)

@spec gather(
  ExTorch.Tensor.t(),
  integer(),
  ExTorch.Tensor.t(),
  boolean(),
  [{:out, ExTorch.Tensor.t() | nil}]
) :: ExTorch.Tensor.t()
@spec gather(
  ExTorch.Tensor.t(),
  integer(),
  ExTorch.Tensor.t(),
  boolean(),
  ExTorch.Tensor.t() | nil
) :: ExTorch.Tensor.t()

Gathers values along an axis specified by dim.

For a 3-D tensor the output is specified by:

out[i][j][k] = input[index[i][j][k]][j][k]  # if dim == 0
out[i][j][k] = input[i][index[i][j][k]][k]  # if dim == 1
out[i][j][k] = input[i][j][index[i][j][k]]  # if dim == 2

input and index must have the same number of dimensions. It is also required that ExTorch.Tensor.size(index, d) <= ExTorch.Tensor.size(input, d) for all dimensions d != dim. out will have the same shape as index. Note that input and index do not broadcast against each other.
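For the 2-D case, the indexing rule can be sketched in plain Python over nested lists (an illustration of the semantics only, not the ExTorch implementation):

```python
def gather2d(input, dim, index):
    # out[i][j] = input[index[i][j]][j]  if dim == 0
    # out[i][j] = input[i][index[i][j]]  if dim == 1
    rows, cols = len(index), len(index[0])
    if dim == 0:
        return [[input[index[i][j]][j] for j in range(cols)] for i in range(rows)]
    return [[input[i][index[i][j]] for j in range(cols)] for i in range(rows)]

t = [[1, 2], [3, 4]]
print(gather2d(t, 1, [[0, 0], [1, 0]]))  # [[1, 1], [4, 3]]
```

Note that the output takes its shape from index, not from input, exactly as described above.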

Arguments

  • input (ExTorch.Tensor) - the source tensor.
  • dim (integer()) - the axis along which to index.
  • index (ExTorch.Tensor) - the indices of elements to gather. Its dtype must be :int64 or :long

Optional arguments

  • sparse_grad (boolean()) - if true, then the gradient w.r.t. input will be a sparse tensor. Default: false
  • out (ExTorch.Tensor | nil) - an optional pre-allocated tensor used to store the output result. Default: nil

Examples

iex> t = ExTorch.tensor([[1, 2], [3, 4]])
iex> ExTorch.gather(t, 1, ExTorch.tensor([[0, 0], [1, 0]], dtype: :int64))
#Tensor<
[[1, 1],
 [4, 3]]
[size: {2, 2}, dtype: :byte, device: :cpu, requires_grad: false]>

hsplit(input, indices_or_sections)

@spec hsplit(ExTorch.Tensor.t(), integer() | [integer()] | tuple()) :: [
  ExTorch.Tensor.t()
]

Splits input, a tensor with one or more dimensions, into multiple tensors horizontally according to indices_or_sections. Each split is a view of input.

If input is one dimensional, this is equivalent to calling ExTorch.tensor_split(input, indices_or_sections, dim: 0) (the split dimension is zero); if input has two or more dimensions, it is equivalent to calling ExTorch.tensor_split(input, indices_or_sections, dim: 1) (the split dimension is 1). In either case, if indices_or_sections is an integer it must evenly divide the split dimension or a runtime error will be thrown.

Arguments

  • input (ExTorch.Tensor) - the tensor to split. Must have at least one dimension.
  • indices_or_sections (integer() | [integer()] | tuple()) - if an integer, the number of equal horizontal sections to produce; if a list or tuple of integers, the indices at which to split input along the split dimension.

Examples

iex> a = ExTorch.arange(16) |> ExTorch.reshape({4, 4})
#Tensor<
[[ 0.0000,  1.0000,  2.0000,  3.0000],
 [ 4.0000,  5.0000,  6.0000,  7.0000],
 [ 8.0000,  9.0000, 10.0000, 11.0000],
 [12.0000, 13.0000, 14.0000, 15.0000]]
[size: {4, 4}, dtype: :float, device: :cpu, requires_grad: false]>

iex> ExTorch.hsplit(a, 2)
[
  #Tensor<
  [[ 0.0000,  1.0000],
   [ 4.0000,  5.0000],
   [ 8.0000,  9.0000],
   [12.0000, 13.0000]]
  [size: {4, 2}, dtype: :float, device: :cpu, requires_grad: false]>,
  #Tensor<
  [[ 2.,  3.],
   [ 6.,  7.],
   [10., 11.],
   [14., 15.]]
  [size: {4, 2}, dtype: :float, device: :cpu, requires_grad: false]>
]

iex> ExTorch.hsplit(a, [3, 6])
[
  #Tensor<
  [[ 0.0000,  1.0000,  2.0000],
   [ 4.0000,  5.0000,  6.0000],
   [ 8.0000,  9.0000, 10.0000],
   [12.0000, 13.0000, 14.0000]]
  [size: {4, 3}, dtype: :float, device: :cpu, requires_grad: false]>,
  #Tensor<
  [[ 3.],
   [ 7.],
   [11.],
   [15.]]
  [size: {4, 1}, dtype: :float, device: :cpu, requires_grad: false]>,
  #Tensor<
  []
  [size: {4, 0}, dtype: :float, device: :cpu, requires_grad: false]>
]

hstack(tensors)

@spec hstack([ExTorch.Tensor.t()] | tuple()) :: ExTorch.Tensor.t()

See ExTorch.hstack/2

Available signature calls:

  • hstack(tensors)

hstack(tensors, kwargs)

@spec hstack(
  [ExTorch.Tensor.t()] | tuple(),
  [{:out, ExTorch.Tensor.t() | nil}]
) :: ExTorch.Tensor.t()
@spec hstack([ExTorch.Tensor.t()] | tuple(), ExTorch.Tensor.t() | nil) ::
  ExTorch.Tensor.t()

Stack tensors in sequence horizontally (column wise).

This is equivalent to concatenation along the first axis for 1-D tensors, and along the second axis for all other tensors.
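The axis rule can be sketched in plain Python over nested lists (an illustration of the semantics only, not the ExTorch implementation):

```python
def hstack(tensors):
    first = tensors[0]
    if first and not isinstance(first[0], list):
        # 1-D inputs: concatenate along the first (only) axis.
        return [v for t in tensors for v in t]
    # 2-D inputs: concatenate along the second axis, row by row.
    return [sum((t[i] for t in tensors), []) for i in range(len(first))]

print(hstack([[1, 2, 3], [4, 5, 6]]))               # [1, 2, 3, 4, 5, 6]
print(hstack([[[1], [2], [3]], [[4], [5], [6]]]))   # [[1, 4], [2, 5], [3, 6]]
```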

Arguments

  • tensors ([ExTorch.Tensor] | tuple()) - sequence of tensors to concatenate.

Optional arguments

  • out (ExTorch.Tensor | nil) - an optional pre-allocated tensor used to store the output result. Default: nil

Examples

iex> a = ExTorch.tensor([1, 2, 3])
iex> b = ExTorch.tensor([4, 5, 6])
iex> ExTorch.hstack({a, b})
#Tensor<
[1, 2, 3, 4, 5, 6]
[size: {6}, dtype: :byte, device: :cpu, requires_grad: false]>

iex> a = ExTorch.tensor([[1],[2],[3]])
#Tensor<
[[1],
 [2],
 [3]]
[size: {3, 1}, dtype: :byte, device: :cpu, requires_grad: false]>
iex> b = ExTorch.tensor([[4],[5],[6]])
#Tensor<
[[4],
 [5],
 [6]]
[size: {3, 1}, dtype: :byte, device: :cpu, requires_grad: false]>

iex> ExTorch.hstack([a, b])
#Tensor<
[[1, 4],
 [2, 5],
 [3, 6]]
[size: {3, 2}, dtype: :byte, device: :cpu, requires_grad: false]>

moveaxis(input, source, destination)

@spec moveaxis(ExTorch.Tensor.t(), tuple() | integer(), tuple() | integer()) ::
  ExTorch.Tensor.t()

Alias for movedim/3

movedim(input, source, destination)

@spec movedim(ExTorch.Tensor.t(), tuple() | integer(), tuple() | integer()) ::
  ExTorch.Tensor.t()

Moves the dimension(s) of input at the position(s) in source to the position(s) in destination.

Other dimensions of input that are not explicitly moved remain in their original order and appear at the positions not specified in destination.
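The resulting axis order can be sketched in plain Python (a hypothetical helper computing the permutation, assuming non-negative dims; not part of ExTorch):

```python
def movedim_perm(ndim, source, destination):
    # Moved dims land at their destination slots; the remaining dims fill
    # the leftover slots in their original relative order.
    src = [source] if isinstance(source, int) else list(source)
    dst = [destination] if isinstance(destination, int) else list(destination)
    perm = [None] * ndim
    for s, d in zip(src, dst):
        perm[d] = s
    rest = iter(i for i in range(ndim) if i not in src)
    return [p if p is not None else next(rest) for p in perm]

print(movedim_perm(3, 1, 0))            # [1, 0, 2]: shape {3,2,1} -> {2,3,1}
print(movedim_perm(3, (1, 2), (0, 1)))  # [1, 2, 0]: shape {3,2,1} -> {2,1,3}
```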

Arguments

  • input (ExTorch.Tensor) - the input tensor.
  • source (integer() | tuple()) - original positions of the dims to move. These must be unique.

  • destination (integer() | tuple()) - destination positions of the dims to move. These must be unique.

Examples

iex> a = ExTorch.randn({3, 2, 1})
#Tensor<
[[[-0.0404],
  [ 0.5073]],

 [[ 0.3008],
  [-0.6428]],

 [[-0.8649],
  [ 0.3615]]]
[size: {3, 2, 1}, dtype: :float, device: :cpu, requires_grad: false]>

# Swap two singular dimensions
iex> ExTorch.movedim(a, 1, 0)
#Tensor<
[[[-0.0404],
  [ 0.3008],
  [-0.8649]],

 [[ 0.5073],
  [-0.6428],
  [ 0.3615]]]
[size: {2, 3, 1}, dtype: :float, device: :cpu, requires_grad: false]>

# Swap multiple dimensions
iex> ExTorch.movedim(a, {1, 2}, {0, 1})
#Tensor<
[[[-0.0404,  0.3008, -0.8649]],

 [[ 0.5073, -0.6428,  0.3615]]]
[size: {2, 1, 3}, dtype: :float, device: :cpu, requires_grad: false]>

narrow(input, dim, start, length)

Returns a new tensor that is a narrowed version of the input tensor.

The returned tensor spans the elements of dimension dim from start to start + length (exclusive). The returned tensor and the input tensor share the same underlying storage.

Arguments

  • input (ExTorch.Tensor) - the tensor to narrow.
  • dim (integer) - the dimension along which to narrow.
  • start (integer | ExTorch.Tensor) - index of the element to start the narrowed dimension from. Can be negative, which means indexing from the end of dim. If an ExTorch.Tensor, it must be a 0-dim integral tensor (booleans not allowed).

  • length (integer) - length of the narrowed dimension, must be weakly positive.
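For a 2-D input, narrowing is ordinary slicing along one axis; a plain-Python sketch of the semantics (not the ExTorch implementation, and treating any dim other than 0 as the column axis of a 2-D input):

```python
def narrow(matrix, dim, start, length):
    # Negative start counts from the end of the narrowed dimension.
    n = len(matrix) if dim == 0 else len(matrix[0])
    if start < 0:
        start += n
    if dim == 0:
        return matrix[start:start + length]
    return [row[start:start + length] for row in matrix]

a = [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10, 11]]
print(narrow(a, 0, 0, 2))    # first two rows
print(narrow(a, 1, 1, 2))    # columns 1..2 of every row
print(narrow(a, -1, -1, 1))  # the last column
```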

Examples

iex> a = ExTorch.arange(12) |> ExTorch.reshape({4, 3})
#Tensor<
[[ 0.0000,  1.0000,  2.0000],
 [ 3.0000,  4.0000,  5.0000],
 [ 6.0000,  7.0000,  8.0000],
 [ 9.0000, 10.0000, 11.0000]]
[size: {4, 3}, dtype: :float, device: :cpu, requires_grad: false]>

# Narrow tensor from 0 to 2 in the first dimension
iex> ExTorch.narrow(a, 0, 0, 2)
#Tensor<
[[0.0000, 1.0000, 2.0000],
 [3.0000, 4.0000, 5.0000]]
[size: {2, 3}, dtype: :float, device: :cpu, requires_grad: false]>

# Narrow tensor from 1 to 3 in the second dimension
iex> ExTorch.narrow(a, 1, 1, 2)
#Tensor<
[[ 1.,  2.],
 [ 4.,  5.],
 [ 7.,  8.],
 [10., 11.]]
[size: {4, 2}, dtype: :float, device: :cpu, requires_grad: false]>

# Narrow tensor using a `start` tensor
iex> ExTorch.narrow(a, -1, ExTorch.tensor(-1), 1)
#Tensor<
[[ 2.],
 [ 5.],
 [ 8.],
 [11.]]
[size: {4, 1}, dtype: :float, device: :cpu, requires_grad: false]>

narrow_copy(input, dim, start, length)

@spec narrow_copy(
  ExTorch.Tensor.t(),
  integer(),
  integer(),
  integer()
) :: ExTorch.Tensor.t()

See ExTorch.narrow_copy/5

Available signature calls:

  • narrow_copy(input, dim, start, length)

narrow_copy(input, dim, start, length, kwargs)

@spec narrow_copy(
  ExTorch.Tensor.t(),
  integer(),
  integer(),
  integer(),
  [{:out, ExTorch.Tensor.t() | nil}]
) :: ExTorch.Tensor.t()
@spec narrow_copy(
  ExTorch.Tensor.t(),
  integer(),
  integer(),
  integer(),
  ExTorch.Tensor.t() | nil
) :: ExTorch.Tensor.t()

Same as ExTorch.narrow/4 except this returns a copy rather than shared storage. This is primarily for sparse tensors, which do not have a shared-storage narrow method.

Arguments

  • input (ExTorch.Tensor) - the tensor to narrow.
  • dim (integer) - the dimension along which to narrow.
  • start (integer) - index of the element to start the narrowed dimension from. Can be negative, which means indexing from the end of dim.
  • length (integer) - length of the narrowed dimension, must be weakly positive.

Optional arguments

  • out (ExTorch.Tensor | nil) - an optional pre-allocated tensor used to store the output result. Default: nil

Examples

iex> a = ExTorch.arange(12) |> ExTorch.reshape({4, 3})
#Tensor<
[[ 0.0000,  1.0000,  2.0000],
 [ 3.0000,  4.0000,  5.0000],
 [ 6.0000,  7.0000,  8.0000],
 [ 9.0000, 10.0000, 11.0000]]
[size: {4, 3}, dtype: :float, device: :cpu, requires_grad: false]>

# Narrow tensor from 0 to 2 in the first dimension
iex> ExTorch.narrow_copy(a, 0, 0, 2)
#Tensor<
[[0.0000, 1.0000, 2.0000],
 [3.0000, 4.0000, 5.0000]]
[size: {2, 3}, dtype: :float, device: :cpu, requires_grad: false]>

# Narrow tensor from 1 to 3 in the second dimension
iex> ExTorch.narrow_copy(a, 1, 1, 2)
#Tensor<
[[ 1.,  2.],
 [ 4.,  5.],
 [ 7.,  8.],
 [10., 11.]]
[size: {4, 2}, dtype: :float, device: :cpu, requires_grad: false]>

nonzero(input)

@spec nonzero(ExTorch.Tensor.t()) :: ExTorch.Tensor.t() | tuple()

See ExTorch.nonzero/3

Available signature calls:

  • nonzero(input)

nonzero(input, kwargs)

@spec nonzero(ExTorch.Tensor.t(), out: ExTorch.Tensor.t() | nil, as_tuple: boolean()) ::
  ExTorch.Tensor.t() | tuple()
@spec nonzero(ExTorch.Tensor.t(), ExTorch.Tensor.t() | nil) ::
  ExTorch.Tensor.t() | tuple()

See ExTorch.nonzero/3

Available signature calls:

  • nonzero(input, out)
  • nonzero(input, kwargs)

nonzero(input, out, as_tuple)

@spec nonzero(ExTorch.Tensor.t(), ExTorch.Tensor.t() | nil, boolean()) ::
  ExTorch.Tensor.t() | tuple()
@spec nonzero(ExTorch.Tensor.t(), ExTorch.Tensor.t() | nil, [{:as_tuple, boolean()}]) ::
  ExTorch.Tensor.t() | tuple()

Retrieve the indices of all non-zero elements in a tensor.

This function behaves differently depending on the value of the as_tuple parameter:

When as_tuple is false (default):

Returns a tensor containing the indices of all non-zero elements of input. Each row in the result contains the indices of a non-zero element in input. The result is sorted lexicographically, with the last index changing the fastest (C-style).

If input has $n$ dimensions, then the resulting indices tensor out is of size $(z \times n)$, where $z$ is the total number of non-zero elements in the input tensor.

When as_tuple is true:

Returns a tuple of 1-D tensors, one for each dimension in input, each containing the indices (in that dimension) of all non-zero elements of input.

If input has $n$ dimensions, then the resulting tuple contains $n$ tensors of size $z$, where $z$ is the total number of non-zero elements in the input tensor.

As a special case, when input has zero dimensions and a nonzero scalar value, it is treated as a one-dimensional tensor with one element.
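
The two output modes can be sketched in plain Python (an illustration of the semantics only, not the ExTorch API; the helper names are hypothetical):

```python
# Sketch: nonzero indices of a 2-D nested-list "tensor" in both modes.
def nonzero_indices(rows):
    """Return [[i, j], ...] for every non-zero entry, sorted C-style
    (last index changing fastest), like as_tuple: false."""
    return [[i, j] for i, row in enumerate(rows)
            for j, v in enumerate(row) if v != 0]

def nonzero_as_tuple(rows):
    """Return one index list per dimension, like as_tuple: true."""
    pairs = nonzero_indices(rows)
    return tuple(list(dim) for dim in zip(*pairs))

m = [[0.6, 0.0],
     [0.0, 0.4]]
print(nonzero_indices(m))   # [[0, 0], [1, 1]]
print(nonzero_as_tuple(m))  # ([0, 1], [0, 1])
```

For an input with $n$ dimensions and $z$ non-zero elements, the first form has shape $(z \times n)$ and the second form is an $n$-tuple of length-$z$ index lists, matching the description above.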

Arguments

  • input (ExTorch.Tensor) - the input tensor.

Optional arguments

  • out (ExTorch.Tensor | nil) - an optional pre-allocated tensor used to store the output result. This will only take effect when as_tuple = false. Default: nil

  • as_tuple (boolean) - if false, the function will return the output tensor containing indices. Else, it returns one 1-D tensor for each dimension, containing the indices of each nonzero element along that dimension. Default: false

Examples

iex> input1 = ExTorch.tensor([1, 1, 1, 0, 1])
iex> input2 = ExTorch.tensor([[0.6, 0.0, 0.0, 0.0],
...>                          [0.0, 0.4, 0.0, 0.0],
...>                          [0.0, 0.0, 1.2, 0.0],
...>                          [0.0, 0.0, 0.0,-0.4]])

# Return tensor indices
iex> ExTorch.nonzero(input1)
#Tensor<
[[0],
 [1],
 [2],
 [4]]
[size: {4, 1}, dtype: :long, device: :cpu, requires_grad: false]>

iex> ExTorch.nonzero(input2)
#Tensor<
[[0, 0],
 [1, 1],
 [2, 2],
 [3, 3]]
[size: {4, 2}, dtype: :long, device: :cpu, requires_grad: false]>

# Return tuple indices
iex> ExTorch.nonzero(input1, as_tuple: true)
#Tensor<
[0, 1, 2, 4]
[size: {4}, dtype: :long, device: :cpu, requires_grad: false]>

iex> ExTorch.nonzero(input2, as_tuple: true)
{#Tensor<
 [0, 1, 2, 3]
 [size: {4}, dtype: :long, device: :cpu, requires_grad: false]>,
 #Tensor<
 [0, 1, 2, 3]
 [size: {4}, dtype: :long, device: :cpu, requires_grad: false]>}

permute(input, dims)

@spec permute(ExTorch.Tensor.t(), tuple() | [integer()]) :: ExTorch.Tensor.t()

Returns a view of the original tensor input with its dimensions permuted.

Arguments

  • input (ExTorch.Tensor) - the input tensor.
  • dims (tuple() | [integer()]) - The desired ordering of dimensions.

Examples

iex> a = ExTorch.rand({3, 2, 4, 5})
iex> out = ExTorch.permute(a, {2, -1, 0, 1})
iex> out.size
{4, 5, 3, 2}

reshape(tensor, shape)

@spec reshape(ExTorch.Tensor.t(), tuple() | [integer()]) :: ExTorch.Tensor.t()

Returns a tensor with the same data and number of elements as input, but with the specified shape.

When possible, the returned tensor will be a view of input. Otherwise, it will be a copy. Contiguous inputs and inputs with compatible strides can be reshaped without copying, but you should not depend on the copying vs. viewing behavior.

A single dimension may be -1, in which case it's inferred from the remaining dimensions and the number of elements in input.
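
The -1 inference rule can be sketched in plain Python (an illustration only; `infer_shape` is a hypothetical helper, not part of ExTorch):

```python
# Sketch: infer the -1 dimension from the element count and the known dims.
def infer_shape(numel, shape):
    """Replace a single -1 in `shape` so the total element count is `numel`."""
    if -1 in shape:
        known = 1
        for d in shape:
            if d != -1:
                known *= d
        shape = tuple(numel // known if d == -1 else d for d in shape)
    return shape

print(infer_shape(20, (5, -1)))  # (5, 4)
print(infer_shape(4, (-1,)))     # (4,)
```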

Arguments

  • tensor (ExTorch.Tensor) - the input tensor.
  • shape (tuple() | [integer()]) - the new shape.

Examples

iex> a = ExTorch.arange(0, 20) |> ExTorch.reshape({5, 4})
#Tensor<
[[ 0.,  1.,  2.,  3.],
 [ 4.,  5.,  6.,  7.],
 [ 8.,  9., 10., 11.],
 [12., 13., 14., 15.],
 [16., 17., 18., 19.]]
[
  size: {5, 4},
  dtype: :double,
  device: :cpu,
  requires_grad: false
]>

iex> b = ExTorch.tensor([[0, 1], [2, 3]]) |> ExTorch.reshape({-1})
#Tensor<
[0, 1, 2, 3]
[
  size: {4},
  dtype: :byte,
  device: :cpu,
  requires_grad: false
]>

row_stack(tensors)

@spec row_stack([ExTorch.Tensor.t()] | tuple()) :: ExTorch.Tensor.t()

Alias for vstack/1

row_stack(tensors, kwargs)

@spec row_stack(
  [ExTorch.Tensor.t()] | tuple(),
  [{:out, ExTorch.Tensor.t() | nil}]
) :: ExTorch.Tensor.t()
@spec row_stack([ExTorch.Tensor.t()] | tuple(), ExTorch.Tensor.t() | nil) ::
  ExTorch.Tensor.t()

Alias for vstack/2

scatter(input, dim, index, src)

See ExTorch.scatter/6

Available signature calls:

  • scatter(input, dim, index, src)

scatter(input, dim, index, src, kwargs)

See ExTorch.scatter/6

Available signature calls:

  • scatter(input, dim, index, src, out)
  • scatter(input, dim, index, src, kwargs)

scatter(input, dim, index, src, out, inplace)

Writes all values from the tensor src into input at the indices specified in the index tensor. For each value in src, its output index is specified by its index in src for dimension != dim and by the corresponding value in index for dimension = dim.

For a 3-D tensor, input is updated as:

input[index[i][j][k]][j][k] = src[i][j][k]  # if dim == 0
input[i][index[i][j][k]][k] = src[i][j][k]  # if dim == 1
input[i][j][index[i][j][k]] = src[i][j][k]  # if dim == 2

This is the reverse of the operation described in ExTorch.gather/5.

input, index and src (if it is an ExTorch.Tensor) should all have the same number of dimensions. It is also required that index.size(d) <= src.size(d) for all dimensions d, and that index.size(d) <= input.size(d) for all dimensions d != dim. Note that index and src do not broadcast.

Moreover, as for ExTorch.gather/5, the values of index must be between 0 and input.size(dim) - 1 inclusive.
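
For the 2-D case along dim = 0, the update rule above can be sketched in plain Python (an illustration of the semantics only, not the ExTorch API), reproducing the first example below:

```python
# Sketch: out[index[i][j]][j] = src[i][j] for a 2-D scatter along dim 0.
def scatter_dim0(input_rows, index_rows, src_rows):
    out = [row[:] for row in input_rows]   # copy; this sketch is out-of-place
    for i, idx_row in enumerate(index_rows):
        for j, target in enumerate(idx_row):
            out[target][j] = src_rows[i][j]
    return out

inp = [[0.0] * 5 for _ in range(3)]        # zeros({3, 5})
index = [[0, 1, 2, 0]]                     # shape {1, 4}
src = [[1.0, 2.0, 3.0, 4.0, 5.0],
       [6.0, 7.0, 8.0, 9.0, 10.0]]
print(scatter_dim0(inp, index, src))
# [[1.0, 0.0, 0.0, 4.0, 0.0], [0.0, 2.0, 0.0, 0.0, 0.0], [0.0, 0.0, 3.0, 0.0, 0.0]]
```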

Arguments

  • input (ExTorch.Tensor) - the input tensor.
  • dim (integer) - the axis along which to index.
  • index (ExTorch.Tensor) - the indices of elements to scatter; it can be either empty or of the same dimensionality as src. When empty, the operation returns input unchanged. Its dtype must be :long or :int64.
  • src (ExTorch.Tensor or float) - the source element(s) to scatter.

Optional arguments

  • out (ExTorch.Tensor or nil) - an optional pre-allocated tensor used to store the output result. It has no effect when inplace is true. Default: nil
  • inplace (boolean) - if true, the scatter operation is performed in place on input. Otherwise, a new tensor with the result is returned. Default: false

Warnings

When indices are not unique, the behavior is non-deterministic (one of the values from src will be picked arbitrarily) and the gradient will be incorrect (it will be propagated to all locations in the source that correspond to the same index)!

Notes

  1. The backward pass is implemented only for src.size == index.size.
  2. This function does not expose the reduce argument, since it is deprecated. It is recommended to use the ExTorch.scatter_reduce function instead.

Examples

iex> src = ExTorch.arange(1, 11) |> ExTorch.reshape({2, 5})
#Tensor<
[[ 1.,  2.,  3.,  4.,  5.],
 [ 6.,  7.,  8.,  9., 10.]]
[size: {2, 5}, dtype: :float, device: :cpu, requires_grad: false]>
iex> index = ExTorch.tensor([[0, 1, 2, 0]], dtype: :int64)
#Tensor<
[[0, 1, 2, 0]]
[size: {1, 4}, dtype: :long, device: :cpu, requires_grad: false]>
iex> input = ExTorch.zeros({3, 5}, dtype: src.dtype)
#Tensor<
[[   0.,    0.,    0.,    0.,    0.],
 [   0.,    0.,    0.,    0.,    0.],
 [   0.,    0.,    0.,    0.,    0.]]
[size: {3, 5}, dtype: :float, device: :cpu, requires_grad: false]>

iex> ExTorch.scatter(input, 0, index, src)
#Tensor<
[[1.0000, 0.0000, 0.0000, 4.0000, 0.0000],
 [0.0000, 2.0000, 0.0000, 0.0000, 0.0000],
 [0.0000, 0.0000, 3.0000, 0.0000, 0.0000]]
[size: {3, 5}, dtype: :float, device: :cpu, requires_grad: false]>

iex> index = ExTorch.tensor([[0, 1, 2], [0, 1, 4]], dtype: :int64)
#Tensor<
[[0, 1, 2],
 [0, 1, 4]]
[size: {2, 3}, dtype: :long, device: :cpu, requires_grad: false]>
iex> ExTorch.scatter(input, 1, index, src)
#Tensor<
[[1.0000, 2.0000, 3.0000, 0.0000, 0.0000],
 [6.0000, 7.0000, 0.0000, 0.0000, 8.0000],
 [0.0000, 0.0000, 0.0000, 0.0000, 0.0000]]
[size: {3, 5}, dtype: :float, device: :cpu, requires_grad: false]>

scatter_add(input, dim, index, src)

See ExTorch.scatter_add/6

Available signature calls:

  • scatter_add(input, dim, index, src)

scatter_add(input, dim, index, src, kwargs)

@spec scatter_add(
  ExTorch.Tensor.t(),
  integer(),
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t(),
  out: ExTorch.Tensor.t() | nil,
  inplace: boolean()
) :: ExTorch.Tensor.t()
@spec scatter_add(
  ExTorch.Tensor.t(),
  integer(),
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t() | nil
) :: ExTorch.Tensor.t()

See ExTorch.scatter_add/6

Available signature calls:

  • scatter_add(input, dim, index, src, out)
  • scatter_add(input, dim, index, src, kwargs)

scatter_add(input, dim, index, src, out, inplace)

@spec scatter_add(
  ExTorch.Tensor.t(),
  integer(),
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t() | nil,
  boolean()
) :: ExTorch.Tensor.t()
@spec scatter_add(
  ExTorch.Tensor.t(),
  integer(),
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t() | nil,
  [{:inplace, boolean()}]
) :: ExTorch.Tensor.t()

Adds all values from the tensor src into input at the indices specified in the index tensor in a similar fashion to ExTorch.scatter/6.

For each value in src, its output index is specified by its index in src for dimension != dim and by the corresponding value in index for dimension = dim.

For a 3-D tensor, input is updated as:

input[index[i][j][k]][j][k] += src[i][j][k]  # if dim == 0
input[i][index[i][j][k]][k] += src[i][j][k]  # if dim == 1
input[i][j][index[i][j][k]] += src[i][j][k]  # if dim == 2

This is the reverse of the operation described in ExTorch.gather/5.

input, index and src (if it is an ExTorch.Tensor) should all have the same number of dimensions. It is also required that index.size(d) <= src.size(d) for all dimensions d, and that index.size(d) <= input.size(d) for all dimensions d != dim. Note that index and src do not broadcast.

Moreover, as for ExTorch.gather/5, the values of index must be between 0 and input.size(dim) - 1 inclusive.
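
The accumulating update rule can be sketched in plain Python for the 2-D case along dim = 0 (an illustration only, not the ExTorch API), reproducing the second example below:

```python
# Sketch: out[index[i][j]][j] += src[i][j] for a 2-D scatter_add along dim 0.
def scatter_add_dim0(input_rows, index_rows, src_rows):
    out = [row[:] for row in input_rows]   # copy; this sketch is out-of-place
    for i, idx_row in enumerate(index_rows):
        for j, target in enumerate(idx_row):
            out[target][j] += src_rows[i][j]
    return out

inp = [[0.0] * 5 for _ in range(3)]        # zeros({3, 5})
index = [[0, 1, 2, 0, 0],
         [0, 1, 2, 2, 2]]
src = [[1.0] * 5, [1.0] * 5]               # ones({2, 5})
print(scatter_add_dim0(inp, index, src))
# [[2.0, 0.0, 0.0, 1.0, 1.0], [0.0, 2.0, 0.0, 0.0, 0.0], [0.0, 0.0, 2.0, 1.0, 1.0]]
```

Unlike ExTorch.scatter/6, repeated indices accumulate here instead of picking one value arbitrarily.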

Arguments

  • input (ExTorch.Tensor) - the input tensor.
  • dim (integer) - the axis along which to index.
  • index (ExTorch.Tensor) - the indices of elements to scatter; it can be either empty or of the same dimensionality as src. When empty, the operation returns input unchanged. Its dtype must be :long or :int64.
  • src (ExTorch.Tensor) - the source elements to scatter and add.

Optional arguments

  • out (ExTorch.Tensor or nil) - an optional pre-allocated tensor used to store the output result. It has no effect when inplace is true. Default: nil
  • inplace (boolean) - if true, the scatter operation is performed in place on input. Otherwise, a new tensor with the result is returned. Default: false

Notes

  1. The backward pass is implemented only for src.size == index.size.
  2. This operation may behave nondeterministically when given tensors on a CUDA device. See Reproducibility for more information.

Examples

iex> src = ExTorch.ones({2, 5})
#Tensor<
[[1., 1., 1., 1., 1.],
 [1., 1., 1., 1., 1.]]
[size: {2, 5}, dtype: :float, device: :cpu, requires_grad: false]>
iex> index = ExTorch.tensor([[0, 1, 2, 0, 0]], dtype: :int64)
#Tensor<
[[0, 1, 2, 0, 0]]
[size: {1, 5}, dtype: :long, device: :cpu, requires_grad: false]>
iex> input = ExTorch.zeros({3, 5}, dtype: src.dtype)
#Tensor<
[[   0.,    0.,    0.,    0.,    0.],
 [   0.,    0.,    0.,    0.,    0.],
 [   0.,    0.,    0.,    0.,    0.]]
[size: {3, 5}, dtype: :float, device: :cpu, requires_grad: false]>
iex> ExTorch.scatter_add(input, 0, index, src)
#Tensor<
[[1.0000, 0.0000, 0.0000, 1.0000, 1.0000],
 [0.0000, 1.0000, 0.0000, 0.0000, 0.0000],
 [0.0000, 0.0000, 1.0000, 0.0000, 0.0000]]
[size: {3, 5}, dtype: :float, device: :cpu, requires_grad: false]>

iex> index = ExTorch.tensor([[0, 1, 2, 0, 0], [0, 1, 2, 2, 2]], dtype: :int64)
#Tensor<
[[0, 1, 2, 0, 0],
 [0, 1, 2, 2, 2]]
[size: {2, 5}, dtype: :long, device: :cpu, requires_grad: false]>
iex> ExTorch.scatter_add(input, 0, index, src)
#Tensor<
[[2.0000, 0.0000, 0.0000, 1.0000, 1.0000],
 [0.0000, 2.0000, 0.0000, 0.0000, 0.0000],
 [0.0000, 0.0000, 2.0000, 1.0000, 1.0000]]
[size: {3, 5}, dtype: :float, device: :cpu, requires_grad: false]>

scatter_reduce(input, dim, index, src, reduce)

@spec scatter_reduce(
  ExTorch.Tensor.t(),
  integer(),
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t(),
  :sum | :prod | :mean | :amax | :amin
) :: ExTorch.Tensor.t()

See ExTorch.scatter_reduce/8

Available signature calls:

  • scatter_reduce(input, dim, index, src, reduce)

scatter_reduce(input, dim, index, src, reduce, include_self)

@spec scatter_reduce(
  ExTorch.Tensor.t(),
  integer(),
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t(),
  :sum | :prod | :mean | :amax | :amin,
  boolean()
) :: ExTorch.Tensor.t()
@spec scatter_reduce(
  ExTorch.Tensor.t(),
  integer(),
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t(),
  :sum | :prod | :mean | :amax | :amin,
  include_self: boolean(),
  out: ExTorch.Tensor.t() | nil,
  inplace: boolean()
) :: ExTorch.Tensor.t()

See ExTorch.scatter_reduce/8

Available signature calls:

  • scatter_reduce(input, dim, index, src, reduce, kwargs)
  • scatter_reduce(input, dim, index, src, reduce, include_self)

scatter_reduce(input, dim, index, src, reduce, include_self, kwargs)

@spec scatter_reduce(
  ExTorch.Tensor.t(),
  integer(),
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t(),
  :sum | :prod | :mean | :amax | :amin,
  boolean(),
  out: ExTorch.Tensor.t() | nil,
  inplace: boolean()
) :: ExTorch.Tensor.t()
@spec scatter_reduce(
  ExTorch.Tensor.t(),
  integer(),
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t(),
  :sum | :prod | :mean | :amax | :amin,
  boolean(),
  ExTorch.Tensor.t() | nil
) :: ExTorch.Tensor.t()

See ExTorch.scatter_reduce/8

Available signature calls:

  • scatter_reduce(input, dim, index, src, reduce, include_self, out)
  • scatter_reduce(input, dim, index, src, reduce, include_self, kwargs)

scatter_reduce(input, dim, index, src, reduce, include_self, out, inplace)

@spec scatter_reduce(
  ExTorch.Tensor.t(),
  integer(),
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t(),
  :sum | :prod | :mean | :amax | :amin,
  boolean(),
  ExTorch.Tensor.t() | nil,
  boolean()
) :: ExTorch.Tensor.t()
@spec scatter_reduce(
  ExTorch.Tensor.t(),
  integer(),
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t(),
  :sum | :prod | :mean | :amax | :amin,
  boolean(),
  ExTorch.Tensor.t() | nil,
  [{:inplace, boolean()}]
) :: ExTorch.Tensor.t()

Reduces all values from the src tensor into input, at the indices specified in the index tensor, using the reduction defined by the reduce argument (:sum, :prod, :mean, :amax or :amin). For each value in src, its output index in input is specified by its index in src for dimension != dim and by the corresponding value in index for dimension = dim. If include_self: true, the values in the input tensor are included in the reduction.

For a 3-D tensor with reduce: :sum and include_self: true, input is updated as:

input[index[i][j][k]][j][k] += src[i][j][k]  # if dim == 0
input[i][index[i][j][k]][k] += src[i][j][k]  # if dim == 1
input[i][j][index[i][j][k]] += src[i][j][k]  # if dim == 2

This is the reverse of the operation described in ExTorch.gather/5.

input, index and src (if it is an ExTorch.Tensor) should all have the same number of dimensions. It is also required that index.size(d) <= src.size(d) for all dimensions d, and that index.size(d) <= input.size(d) for all dimensions d != dim. Note that index and src do not broadcast.

Moreover, as for ExTorch.gather/5, the values of index must be between 0 and input.size(dim) - 1 inclusive.
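
The interaction between the reduction and include_self can be sketched in plain Python for a 1-D :sum reduction (an illustration of the semantics only, not the ExTorch API), reproducing the first two example results below:

```python
# Sketch: 1-D scatter_reduce with reduce: :sum.
def scatter_reduce_sum(input_vals, index, src, include_self=True):
    out = list(input_vals)                 # copy; this sketch is out-of-place
    touched = set()
    for i, target in enumerate(index):
        if not include_self and target not in touched:
            out[target] = src[i]           # first write replaces the self value
            touched.add(target)
        else:
            out[target] += src[i]
    return out

inp = [1.0, 2.0, 3.0, 4.0]
index = [0, 1, 0, 1, 2, 1]
src = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
print(scatter_reduce_sum(inp, index, src))                      # [5.0, 14.0, 8.0, 4.0]
print(scatter_reduce_sum(inp, index, src, include_self=False))  # [4.0, 12.0, 5.0, 4.0]
```

With include_self: false, positions never written by src (index 3 here) keep their original input value.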

Arguments

  • input (ExTorch.Tensor) - the input tensor.
  • dim (integer) - the axis along which to index.
  • index (ExTorch.Tensor) - the indices of elements to scatter; it can be either empty or of the same dimensionality as src. When empty, the operation returns input unchanged. Its dtype must be :long or :int64.
  • src (ExTorch.Tensor) - the source elements to scatter and reduce.
  • reduce (:sum or :prod or :mean or :amax or :amin) - the reduction operation to apply for non-unique indices.

Optional arguments

  • include_self (boolean) - whether elements from the input tensor are included in the reduction. Default: true
  • out (ExTorch.Tensor or nil) - an optional pre-allocated tensor used to store the output result. It has no effect when inplace is true. Default: nil
  • inplace (boolean) - if true, the scatter operation is performed in place on input. Otherwise, a new tensor with the result is returned. Default: false

Notes

  1. The backward pass is implemented only for src.size == index.size.
  2. This operation may behave nondeterministically when given tensors on a CUDA device. See Reproducibility for more information.
  3. This function is in beta and may change in the near future.

Examples

iex> src = ExTorch.tensor([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
#Tensor<
[1., 2., 3., 4., 5., 6.]
[size: {6}, dtype: :float, device: :cpu, requires_grad: false]>
iex> index = ExTorch.tensor([0, 1, 0, 1, 2, 1], dtype: :long)
#Tensor<
[0, 1, 0, 1, 2, 1]
[size: {6}, dtype: :long, device: :cpu, requires_grad: false]>
iex> input = ExTorch.tensor([1.0, 2.0, 3.0, 4.0])
#Tensor<
[1., 2., 3., 4.]
[size: {4}, dtype: :float, device: :cpu, requires_grad: false]>

iex> ExTorch.scatter_reduce(input, 0, index, src, :sum)
#Tensor<
[ 5., 14.,  8.,  4.]
[size: {4}, dtype: :float, device: :cpu, requires_grad: false]>
iex> ExTorch.scatter_reduce(input, 0, index, src, :sum, include_self: false)
#Tensor<
[ 4., 12.,  5.,  4.]
[size: {4}, dtype: :float, device: :cpu, requires_grad: false]>

iex> input2 = ExTorch.tensor([5.0, 4.0, 3.0, 2.0])
#Tensor<
[5., 4., 3., 2.]
[size: {4}, dtype: :float, device: :cpu, requires_grad: false]>
iex> ExTorch.scatter_reduce(input2, 0, index, src, :amax)
#Tensor<
[5., 6., 5., 2.]
[size: {4}, dtype: :float, device: :cpu, requires_grad: false]>
iex> ExTorch.scatter_reduce(input2, 0, index, src, :amax, include_self: false)
#Tensor<
[3., 6., 5., 2.]
[size: {4}, dtype: :float, device: :cpu, requires_grad: false]>

select_scatter(input, src, dim, index)

@spec select_scatter(
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t(),
  integer(),
  integer()
) :: ExTorch.Tensor.t()

See ExTorch.select_scatter/5

Available signature calls:

  • select_scatter(input, src, dim, index)

select_scatter(input, src, dim, index, kwargs)

@spec select_scatter(
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t(),
  integer(),
  integer(),
  [{:out, ExTorch.Tensor.t() | nil}]
) :: ExTorch.Tensor.t()
@spec select_scatter(
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t(),
  integer(),
  integer(),
  ExTorch.Tensor.t() | nil
) :: ExTorch.Tensor.t()

Embeds the values of the src tensor into input at the given index. This function returns a tensor with fresh storage; it does not create a view.

Arguments

  • input (ExTorch.Tensor) - the input tensor.
  • src (ExTorch.Tensor) - the tensor to embed into input.
  • dim (integer) - the dimension to insert the slice into.
  • index (integer) - the index to select with.

Optional arguments

  • out (ExTorch.Tensor or nil) - an optional pre-allocated tensor used to store the output result. Default: nil

Note

src must be of the proper size in order to be embedded into input. Specifically, it should have the same shape as ExTorch.select(input, dim, index).

Examples

iex> a = ExTorch.zeros({2, 2})
#Tensor<
[[   0.,    0.],
 [   0.,    0.]]
[size: {2, 2}, dtype: :float, device: :cpu, requires_grad: false]>
iex> b = ExTorch.ones(2)
#Tensor<
[1., 1.]
[size: {2}, dtype: :float, device: :cpu, requires_grad: false]>

iex> ExTorch.select_scatter(a, b, 0, 0)
#Tensor<
[[1.0000, 1.0000],
 [0.0000, 0.0000]]
[size: {2, 2}, dtype: :float, device: :cpu, requires_grad: false]>

slice_scatter(input, src)

@spec slice_scatter(
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t()
) :: ExTorch.Tensor.t()

See ExTorch.slice_scatter/7

Available signature calls:

  • slice_scatter(input, src)

slice_scatter(input, src, dim)

@spec slice_scatter(
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t(),
  integer()
) :: ExTorch.Tensor.t()
@spec slice_scatter(
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t(),
  dim: integer(),
  start: integer() | nil,
  stop: integer() | nil,
  step: integer() | nil,
  out: ExTorch.Tensor.t() | nil
) :: ExTorch.Tensor.t()

See ExTorch.slice_scatter/7

Available signature calls:

  • slice_scatter(input, src, kwargs)
  • slice_scatter(input, src, dim)

slice_scatter(input, src, dim, kwargs)

@spec slice_scatter(
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t(),
  integer(),
  start: integer() | nil,
  stop: integer() | nil,
  step: integer() | nil,
  out: ExTorch.Tensor.t() | nil
) :: ExTorch.Tensor.t()
@spec slice_scatter(
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t(),
  integer(),
  integer() | nil
) :: ExTorch.Tensor.t()

See ExTorch.slice_scatter/7

Available signature calls:

  • slice_scatter(input, src, dim, start)
  • slice_scatter(input, src, dim, kwargs)

slice_scatter(input, src, dim, start, kwargs)

@spec slice_scatter(
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t(),
  integer(),
  integer() | nil,
  stop: integer() | nil,
  step: integer() | nil,
  out: ExTorch.Tensor.t() | nil
) :: ExTorch.Tensor.t()
@spec slice_scatter(
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t(),
  integer(),
  integer() | nil,
  integer() | nil
) :: ExTorch.Tensor.t()

See ExTorch.slice_scatter/7

Available signature calls:

  • slice_scatter(input, src, dim, start, stop)
  • slice_scatter(input, src, dim, start, kwargs)

slice_scatter(input, src, dim, start, stop, kwargs)

@spec slice_scatter(
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t(),
  integer(),
  integer() | nil,
  integer() | nil,
  step: integer() | nil,
  out: ExTorch.Tensor.t() | nil
) :: ExTorch.Tensor.t()
@spec slice_scatter(
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t(),
  integer(),
  integer() | nil,
  integer() | nil,
  integer() | nil
) :: ExTorch.Tensor.t()

See ExTorch.slice_scatter/7

Available signature calls:

  • slice_scatter(input, src, dim, start, stop, step)
  • slice_scatter(input, src, dim, start, stop, kwargs)

slice_scatter(input, src, dim, start, stop, step, kwargs)

@spec slice_scatter(
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t(),
  integer(),
  integer() | nil,
  integer() | nil,
  integer() | nil,
  [{:out, ExTorch.Tensor.t() | nil}]
) :: ExTorch.Tensor.t()
@spec slice_scatter(
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t(),
  integer(),
  integer() | nil,
  integer() | nil,
  integer() | nil,
  ExTorch.Tensor.t() | nil
) :: ExTorch.Tensor.t()

Embeds the values of the src tensor into input at the given dimension. This function returns a tensor with fresh storage; it does not create a view.

Arguments

  • input (ExTorch.Tensor) - the input tensor.
  • src (ExTorch.Tensor) - the tensor to embed into input.

Optional arguments

  • dim (integer) - the dimension to insert the slice into. Default: 0
  • start (integer or nil) - the index at which to start inserting the slice. Default: nil
  • stop (integer or nil) - the index at which to stop inserting the slice. Default: nil
  • step (integer or nil) - the number of elements to skip between insertions. Default: 1
  • out (ExTorch.Tensor or nil) - an optional pre-allocated tensor used to store the output result. Default: nil
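
Along dim = 0, the operation amounts to copying input and overwriting the rows selected by start/stop/step with the rows of src. A plain-Python sketch (an illustration only, not the ExTorch API):

```python
# Sketch: slice_scatter along dim 0 as a sliced row assignment on a copy.
def slice_scatter_rows(input_rows, src_rows, start=None, stop=None, step=1):
    out = [row[:] for row in input_rows]            # fresh storage, like the API
    targets = range(len(out))[slice(start, stop, step)]
    for src_row, t in zip(src_rows, targets):
        out[t] = src_row[:]
    return out

a = [[0.0] * 4 for _ in range(4)]                   # zeros({4, 4})
b = [[1.0] * 4 for _ in range(2)]                   # ones({2, 4})
res = slice_scatter_rows(a, b, start=2)
print([row[0] for row in res])  # [0.0, 0.0, 1.0, 1.0]
```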

Examples

iex> a = ExTorch.zeros({8, 8})
iex> b = ExTorch.ones({2, 8})
iex> ExTorch.slice_scatter(a, b, start: 6)
#Tensor<
[[0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
 [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
 [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
 [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
 [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
 [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
 [1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000],
 [1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000]]
[size: {8, 8}, dtype: :float, device: :cpu, requires_grad: false]>

iex> b = ExTorch.ones({8, 2})
iex> ExTorch.slice_scatter(a, b, dim: 1, start: 2, stop: 6, step: 2)
#Tensor<
[[0.0000, 0.0000, 1.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000],
 [0.0000, 0.0000, 1.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000],
 [0.0000, 0.0000, 1.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000],
 [0.0000, 0.0000, 1.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000],
 [0.0000, 0.0000, 1.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000],
 [0.0000, 0.0000, 1.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000],
 [0.0000, 0.0000, 1.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000],
 [0.0000, 0.0000, 1.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000]]
[size: {8, 8}, dtype: :float, device: :cpu, requires_grad: false]>

split(tensor, split_size_or_sections)

@spec split(ExTorch.Tensor.t(), integer() | [integer()] | tuple()) :: [
  ExTorch.Tensor.t()
]

See ExTorch.split/3

Available signature calls:

  • split(tensor, split_size_or_sections)

split(tensor, split_size_or_sections, dim)

@spec split(ExTorch.Tensor.t(), integer() | [integer()] | tuple(), integer()) :: [
  ExTorch.Tensor.t()
]
@spec split(ExTorch.Tensor.t(), integer() | [integer()] | tuple(), [{:dim, integer()}]) ::
  [
    ExTorch.Tensor.t()
  ]

Splits the tensor into chunks. Each chunk is a view of the original tensor.

If split_size_or_sections is an integer, then tensor will be split into equally sized chunks (if possible). The last chunk will be smaller if the tensor size along the given dimension dim is not divisible by split_size_or_sections.

If split_size_or_sections is a list, then tensor will be split into length(split_size_or_sections) chunks with sizes in dim according to split_size_or_sections.
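
The resulting chunk sizes can be sketched in plain Python (an illustration only; `split_sizes` is a hypothetical helper, not part of ExTorch):

```python
# Sketch: chunk sizes along `dim` for both forms of split_size_or_sections.
def split_sizes(n, split_size_or_sections):
    """n is the tensor size along the split dimension."""
    if isinstance(split_size_or_sections, int):
        s = split_size_or_sections
        sizes = [s] * (n // s)
        if n % s:                 # last chunk is smaller when n is not divisible
            sizes.append(n % s)
        return sizes
    return list(split_size_or_sections)

print(split_sizes(5, 2))       # [2, 2, 1]
print(split_sizes(5, [2, 3]))  # [2, 3]
```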

Arguments

  • tensor (ExTorch.Tensor) - tensor to split.
  • split_size_or_sections (integer or [integer]) - size of a single chunk or list of sizes for each chunk.

Optional arguments

  • dim (integer) - dimension along which to split the tensor.

Examples

iex> a = ExTorch.arange(10) |> ExTorch.reshape({5, 2})
#Tensor<
[[0.0000, 1.0000],
 [2.0000, 3.0000],
 [4.0000, 5.0000],
 [6.0000, 7.0000],
 [8.0000, 9.0000]]
[size: {5, 2}, dtype: :float, device: :cpu, requires_grad: false]>

iex> ExTorch.split(a, 2)
[
  #Tensor<
  [[0.0000, 1.0000],
   [2.0000, 3.0000]]
  [size: {2, 2}, dtype: :float, device: :cpu, requires_grad: false]>,
  #Tensor<
  [[4., 5.],
   [6., 7.]]
  [size: {2, 2}, dtype: :float, device: :cpu, requires_grad: false]>,
  #Tensor<
  [[8., 9.]]
  [size: {1, 2}, dtype: :float, device: :cpu, requires_grad: false]>
]
iex> ExTorch.split(a, [2, 3])
[
  #Tensor<
  [[0.0000, 1.0000],
   [2.0000, 3.0000]]
  [size: {2, 2}, dtype: :float, device: :cpu, requires_grad: false]>,
  #Tensor<
  [[4., 5.],
   [6., 7.],
   [8., 9.]]
  [size: {3, 2}, dtype: :float, device: :cpu, requires_grad: false]>
]

squeeze(input)

@spec squeeze(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()

See ExTorch.squeeze/2

Available signature calls:

  • squeeze(input)

squeeze(input, dim)

@spec squeeze(ExTorch.Tensor.t(), integer() | tuple() | [integer()] | nil) ::
  ExTorch.Tensor.t()
@spec squeeze(
  ExTorch.Tensor.t(),
  [{:dim, integer() | tuple() | [integer()] | nil}]
) :: ExTorch.Tensor.t()

Returns a tensor with all specified dimensions of input of size 1 removed.

For example, if input is of shape $A \times 1 \times B \times C \times 1 \times D$, then ExTorch.squeeze(input) will be of shape $A \times B \times C \times D$.

When dim is given, a squeeze operation is done only in the given dimension(s). If input is of shape $A \times 1 \times B$, squeeze(input, 0) leaves the tensor unchanged, but squeeze(input, 1) will squeeze the tensor to the shape $A \times B$.
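
The resulting shape can be sketched in plain Python (an illustration only; `squeeze_shape` is a hypothetical helper, not part of ExTorch):

```python
# Sketch: the shape produced by squeeze, with and without explicit dims.
def squeeze_shape(shape, dims=None):
    if dims is None:                            # drop every singleton dimension
        return tuple(d for d in shape if d != 1)
    dims = {d % len(shape) for d in dims}       # normalize negative dims
    # drop a dimension only if it was requested AND has size 1
    return tuple(d for i, d in enumerate(shape) if not (i in dims and d == 1))

shape = (1, 3, 1, 4, 1, 5)
print(squeeze_shape(shape))          # (3, 4, 5)
print(squeeze_shape(shape, {-2}))    # (1, 3, 1, 4, 5)
print(squeeze_shape(shape, {2, 4}))  # (1, 3, 4, 5)
```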

Arguments

  • input (ExTorch.Tensor) - the input tensor.

Optional arguments

  • dim (integer or tuple or [integer] or nil) - the dimension(s) to squeeze from input. If nil, then all singleton dimensions will be squeezed.

Notes

  1. The returned tensor shares the storage with the input tensor, so changing the contents of one will change the contents of the other.
  2. If the tensor has a batch dimension of size 1, then squeeze(input) will also remove the batch dimension, which can lead to unexpected errors. Consider specifying only the dims you wish to be squeezed.

Examples

iex> a = ExTorch.empty({1, 3, 1, 4, 1, 5})
iex> a.size
{1, 3, 1, 4, 1, 5}

# Squeeze all singleton dimensions
iex> b = ExTorch.squeeze(a)
iex> b.size
{3, 4, 5}

# Squeeze a particular dimension
iex> b = ExTorch.squeeze(a, -2)
iex> b.size
{1, 3, 1, 4, 5}

# Squeeze particular dimensions
iex> b = ExTorch.squeeze(a, {2, 4})
iex> b.size
{1, 3, 4, 5}

stack(input)

@spec stack([ExTorch.Tensor.t()] | tuple()) :: ExTorch.Tensor.t()

See ExTorch.stack/3

Available signature calls:

  • stack(input)

stack(input, dim)

@spec stack([ExTorch.Tensor.t()] | tuple(), integer()) :: ExTorch.Tensor.t()
@spec stack([ExTorch.Tensor.t()] | tuple(),
  dim: integer(),
  out: ExTorch.Tensor.t() | nil
) ::
  ExTorch.Tensor.t()

See ExTorch.stack/3

Available signature calls:

  • stack(input, kwargs)
  • stack(input, dim)

stack(input, dim, kwargs)

@spec stack([ExTorch.Tensor.t()] | tuple(), integer(), [
  {:out, ExTorch.Tensor.t() | nil}
]) ::
  ExTorch.Tensor.t()
@spec stack([ExTorch.Tensor.t()] | tuple(), integer(), ExTorch.Tensor.t() | nil) ::
  ExTorch.Tensor.t()

Concatenates a sequence of tensors along a new dimension.

All tensors need to be of the same size. This function is analogous to ExTorch.cat/3.

Arguments

  • tensors ([ExTorch.Tensor] | tuple()) - A sequence of tensors of the same type. Non-empty tensors provided must have the same shape.

Optional arguments

  • dim (integer()) - the dimension over which the tensors are concatenated. Default: 0
  • out (ExTorch.Tensor | nil) - an optional pre-allocated tensor used to store the concatenation output. Default: nil

Examples

iex> a = ExTorch.rand({3, 4})
#Tensor<
[[0.7419, 0.4063, 0.0514, 0.4281],
 [0.7350, 0.1977, 0.5593, 0.1701],
 [0.4135, 0.7213, 0.9591, 0.2798]]
[size: {3, 4}, dtype: :float, device: :cpu, requires_grad: false]>

# Concatenate tensors into a new dimension at the beginning
iex> ExTorch.stack([a, a, a])
#Tensor<
[[[0.7419, 0.4063, 0.0514, 0.4281],
  [0.7350, 0.1977, 0.5593, 0.1701],
  [0.4135, 0.7213, 0.9591, 0.2798]],

 [[0.7419, 0.4063, 0.0514, 0.4281],
  [0.7350, 0.1977, 0.5593, 0.1701],
  [0.4135, 0.7213, 0.9591, 0.2798]],

 [[0.7419, 0.4063, 0.0514, 0.4281],
  [0.7350, 0.1977, 0.5593, 0.1701],
  [0.4135, 0.7213, 0.9591, 0.2798]]]
[size: {3, 3, 4}, dtype: :float, device: :cpu, requires_grad: false]>

# Concatenate tensors into a new dimension at position 1
iex> ExTorch.stack([a, a, a], 1)
#Tensor<
[[[0.7419, 0.4063, 0.0514, 0.4281],
  [0.7419, 0.4063, 0.0514, 0.4281],
  [0.7419, 0.4063, 0.0514, 0.4281]],

 [[0.7350, 0.1977, 0.5593, 0.1701],
  [0.7350, 0.1977, 0.5593, 0.1701],
  [0.7350, 0.1977, 0.5593, 0.1701]],

 [[0.4135, 0.7213, 0.9591, 0.2798],
  [0.4135, 0.7213, 0.9591, 0.2798],
  [0.4135, 0.7213, 0.9591, 0.2798]]]
[size: {3, 3, 4}, dtype: :float, device: :cpu, requires_grad: false]>
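The placement of the new dimension can be illustrated with a pure-Python sketch over nested lists (a hypothetical `stack` helper, not the ExTorch function):

```python
def stack(tensors, dim=0):
    """Stack equally-shaped nested lists along a new dimension.

    dim == 0 groups the whole tensors; for dim > 0 the new axis is created
    by recursing into the matching slices of every input.
    """
    if dim == 0:
        return list(tensors)
    return [stack(slices, dim - 1) for slices in zip(*tensors)]

a = [[1, 2, 3], [4, 5, 6]]      # shape {2, 3}
print(stack([a, a, a]))         # shape {3, 2, 3}
print(stack([a, a, a], dim=1))  # shape {2, 3, 3}
```

Stacking three `{2, 3}` inputs at `dim: 1` groups the matching rows of every input, which mirrors the second ExTorch example above.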

swapaxes(input, dim0, dim1)

@spec swapaxes(ExTorch.Tensor.t(), integer(), integer()) :: ExTorch.Tensor.t()

Alias to transpose/3

swapdims(input, dim0, dim1)

@spec swapdims(ExTorch.Tensor.t(), integer(), integer()) :: ExTorch.Tensor.t()

Alias to transpose/3

t(input)

Expects input to be <= 2-D tensor and transposes dimensions 0 and 1.

0-D and 1-D tensors are returned as is. When input is a 2-D tensor this is equivalent to ExTorch.transpose(input, 0, 1).

Arguments

Examples

iex> a = ExTorch.rand({4, 5})
#Tensor<
[[0.3812, 0.6590, 0.8400, 0.4826, 0.3654],
 [0.4542, 0.4252, 0.5376, 0.8787, 0.6286],
 [0.3727, 0.4394, 0.0584, 0.2185, 0.7270],
 [0.8123, 0.2479, 0.2493, 0.2429, 0.3871]]
[size: {4, 5}, dtype: :float, device: :cpu, requires_grad: false]>

iex> ExTorch.t(a)
#Tensor<
[[0.3812, 0.4542, 0.3727, 0.8123],
 [0.6590, 0.4252, 0.4394, 0.2479],
 [0.8400, 0.5376, 0.0584, 0.2493],
 [0.4826, 0.8787, 0.2185, 0.2429],
 [0.3654, 0.6286, 0.7270, 0.3871]]
[size: {5, 4}, dtype: :float, device: :cpu, requires_grad: false]>

take(input, indices)

Returns a new tensor with the elements of input at the given indices.

The input tensor is treated as if it were viewed as a 1-D tensor. The result takes the same shape as the indices.

Arguments

Examples

iex> a = ExTorch.rand({3, 3})
#Tensor<
[[0.0860, 0.9378, 0.3475],
 [0.3576, 0.7145, 0.1036],
 [0.7352, 0.4285, 0.2933]]
[size: {3, 3}, dtype: :float, device: :cpu, requires_grad: false]>

iex> ExTorch.take(a, ExTorch.tensor([1, 5, 6], dtype: :int64))
#Tensor<
[0.9378, 0.1036, 0.7352]
[size: {3}, dtype: :float, device: :cpu, requires_grad: false]>
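The flatten-then-gather behaviour can be sketched in pure Python over nested lists (hypothetical helper names, not ExTorch code):

```python
def take(tensor, indices):
    """Treat a nested-list tensor as flat (row-major) and gather elements."""
    flat = []

    def flatten(x):
        if isinstance(x, list):
            for item in x:
                flatten(item)
        else:
            flat.append(x)

    flatten(tensor)
    return [flat[i] for i in indices]

a = [[10, 20, 30], [40, 50, 60], [70, 80, 90]]
print(take(a, [1, 5, 6]))  # -> [20, 60, 70]
```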

take_along_dim(input, indices)

@spec take_along_dim(
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t()
) :: ExTorch.Tensor.t()

See ExTorch.take_along_dim/4

Available signature calls:

  • take_along_dim(input, indices)

take_along_dim(input, indices, dim)

@spec take_along_dim(
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t(),
  integer() | nil
) :: ExTorch.Tensor.t()
@spec take_along_dim(
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t(),
  dim: integer() | nil,
  out: ExTorch.Tensor.t() | nil
) :: ExTorch.Tensor.t()

See ExTorch.take_along_dim/4

Available signature calls:

  • take_along_dim(input, indices, kwargs)
  • take_along_dim(input, indices, dim)

take_along_dim(input, indices, dim, kwargs)

@spec take_along_dim(
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t(),
  integer() | nil,
  [{:out, ExTorch.Tensor.t() | nil}]
) :: ExTorch.Tensor.t()
@spec take_along_dim(
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t(),
  integer() | nil,
  ExTorch.Tensor.t() | nil
) :: ExTorch.Tensor.t()

Selects values from input at the 1-dimensional indices from indices along the given dim.

Functions that return indices along a dimension, like ExTorch.argmax/3 and ExTorch.argmin/3, are designed to work with this function. See the examples below.

Arguments

Optional arguments

  • dim (integer) - dimension to select along. If nil, the input is treated as if it were flattened into a 1-D tensor. Default: nil.
  • out (ExTorch.Tensor or nil) - an optional pre-allocated tensor used to store the output. Default: nil

Notes

This function is similar to NumPy’s take_along_axis. See also ExTorch.gather/5.

Examples

iex> t = ExTorch.tensor([[10, 30, 20], [60, 40, 50]], dtype: :long)
#Tensor<
[[10, 30, 20],
 [60, 40, 50]]
[size: {2, 3}, dtype: :long, device: :cpu, requires_grad: false]>
iex> max_idx = ExTorch.argmax(t)
#Tensor< 3 [size: {}, dtype: :long, device: :cpu, requires_grad: false]>
iex> ExTorch.take_along_dim(t, max_idx)
#Tensor< [60] [size: {1}, dtype: :long, device: :cpu, requires_grad: false]>

iex> sorted_idx = ExTorch.argsort(t, dim: 1)
#Tensor<
[[0, 2, 1],
 [1, 2, 0]]
[size: {2, 3}, dtype: :long, device: :cpu, requires_grad: false]>
iex> ExTorch.take_along_dim(t, sorted_idx, dim: 1)
#Tensor<
[[10, 20, 30],
 [40, 50, 60]]
[size: {2, 3}, dtype: :long, device: :cpu, requires_grad: false]>
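The gather rule can be written out for the 2-D case in pure Python (a hypothetical `take_along_dim` sketch over nested lists, not the ExTorch function):

```python
def take_along_dim(t, idx, dim):
    """2-D sketch of the gather: out[i][j] = t[i][idx[i][j]] when dim == 1,
    and out[i][j] = t[idx[i][j]][j] when dim == 0."""
    if dim == 1:
        return [[t[i][idx[i][j]] for j in range(len(idx[i]))]
                for i in range(len(idx))]
    return [[t[idx[i][j]][j] for j in range(len(idx[i]))]
            for i in range(len(idx))]

t = [[10, 30, 20], [60, 40, 50]]
sorted_idx = [[0, 2, 1], [1, 2, 0]]
print(take_along_dim(t, sorted_idx, 1))  # -> [[10, 20, 30], [40, 50, 60]]
```

Feeding it the argsort indices from the example above sorts each row, which is exactly how ExTorch.argsort/3 output is meant to be consumed.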

tensor_split(input, indices_or_sections)

@spec tensor_split(
  ExTorch.Tensor.t(),
  integer() | [integer()] | tuple() | ExTorch.Tensor.t()
) :: [ExTorch.Tensor.t()]

See ExTorch.tensor_split/3

Available signature calls:

  • tensor_split(input, indices_or_sections)

tensor_split(input, indices_or_sections, dim)

@spec tensor_split(
  ExTorch.Tensor.t(),
  integer() | [integer()] | tuple() | ExTorch.Tensor.t(),
  integer()
) :: [ExTorch.Tensor.t()]
@spec tensor_split(
  ExTorch.Tensor.t(),
  integer() | [integer()] | tuple() | ExTorch.Tensor.t(),
  [{:dim, integer()}]
) :: [ExTorch.Tensor.t()]

Splits a tensor into multiple sub-tensors, all of which are views of input, along dimension dim according to the indices or number of sections specified by indices_or_sections.

Arguments

  • input (ExTorch.Tensor) - the tensor to split.
  • indices_or_sections (integer | ExTorch.Tensor | [integer()] | tuple) -

    • If indices_or_sections is an integer n or a zero dimensional long tensor with value n, input is split into n sections along dimension dim. If input is divisible by n along dimension dim, each section will be of equal size, input.size[dim] / n.
    • If input is not divisible by n, the first input.size[dim] % n sections will have size input.size[dim] / n + 1, and the rest will have size input.size[dim] / n.
    • If indices_or_sections is a list or tuple of ints, or a one-dimensional long tensor, then input is split along dimension dim at each of the indices in the list, tuple or tensor. For instance, indices_or_sections = [2, 3] and dim = 0 would result in the tensors input[:2], input[2:3], and input[3:].
    • If indices_or_sections is a tensor, it must be a zero-dimensional or one-dimensional long tensor on the CPU.

Optional arguments

  • dim (integer) - dimension along which to split the tensor. Default: 0

Examples

# Split a tensor in a given number of chunks
iex> a = ExTorch.arange(10)
iex> ExTorch.tensor_split(a, 2)
[
  #Tensor<
[0.0000, 1.0000, 2.0000, 3.0000, 4.0000]
[
    size: {5},
    dtype: :float,
    device: :cpu,
    requires_grad: false
  ]>,
  #Tensor<
[5., 6., 7., 8., 9.]
[
    size: {5},
    dtype: :float,
    device: :cpu,
    requires_grad: false
  ]>
]

# Split a tensor into the given sections
iex> ExTorch.tensor_split(a, [2, 5])
[
  #Tensor<
  [0.0000, 1.0000]
  [size: {2}, dtype: :float, device: :cpu, requires_grad: false]>,
  #Tensor<
  [2., 3., 4.]
  [size: {3}, dtype: :float, device: :cpu, requires_grad: false]>,
  #Tensor<
  [5., 6., 7., 8., 9.]
  [size: {5}, dtype: :float, device: :cpu, requires_grad: false]>
]
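The chunk-size rule above (the first `size % n` sections get one extra element) can be sketched in pure Python over plain lists (hypothetical helpers, not the ExTorch functions):

```python
def split_sizes(length, n):
    """Section sizes when splitting `length` items into n chunks: the first
    length % n chunks get length // n + 1 items, the rest length // n."""
    base, extra = divmod(length, n)
    return [base + 1] * extra + [base] * (n - extra)

def tensor_split(seq, indices_or_sections):
    if isinstance(indices_or_sections, int):
        out, pos = [], 0
        for size in split_sizes(len(seq), indices_or_sections):
            out.append(seq[pos:pos + size])
            pos += size
        return out
    # split indices: seq[:i0], seq[i0:i1], ..., seq[iN:]
    bounds = [0] + list(indices_or_sections) + [len(seq)]
    return [seq[a:b] for a, b in zip(bounds, bounds[1:])]

a = list(range(10))
print(tensor_split(a, 2))       # two chunks of five
print(tensor_split(a, 3))       # chunk sizes [4, 3, 3]
print(tensor_split(a, [2, 5]))  # -> [[0, 1], [2, 3, 4], [5, 6, 7, 8, 9]]
```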

transpose(input, dim0, dim1)

@spec transpose(ExTorch.Tensor.t(), integer(), integer()) :: ExTorch.Tensor.t()

Returns a tensor that is a transposed version of input. The given dimensions dim0 and dim1 are swapped.

Arguments

  • input (ExTorch.Tensor) - the input tensor.
  • dim0 (integer()) - the first dimension to transpose.
  • dim1 (integer()) - the second dimension to transpose.

Notes

  • If input is a strided tensor then the resulting out tensor shares its underlying storage with the input tensor, so changing the content of one would change the content of the other.
  • If input is a sparse tensor then the resulting out tensor does not share the underlying storage with the input tensor.
  • If input is a sparse tensor with compressed layout (SparseCSR, SparseBSR, SparseCSC or SparseBSC) the arguments dim0 and dim1 must be both batch dimensions, or must both be sparse dimensions. The batch dimensions of a sparse tensor are the dimensions preceding the sparse dimensions.

Examples

iex> a = ExTorch.arange(6) |> ExTorch.reshape({2, 3})
#Tensor<
[[0.0000, 1.0000, 2.0000],
 [3.0000, 4.0000, 5.0000]]
[
  size: {2, 3},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

iex> ExTorch.transpose(a, 0, 1)
#Tensor<
[[0.0000, 3.0000],
 [1.0000, 4.0000],
 [2.0000, 5.0000]]
[
  size: {3, 2},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

unsqueeze(tensor, dim)

@spec unsqueeze(
  ExTorch.Tensor.t(),
  integer()
) :: ExTorch.Tensor.t()

Insert a singleton dimension into a tensor at the given position.

Arguments

Examples

iex> x = ExTorch.full({2, 2}, -2)
#Tensor<
[[-2., -2.],
 [-2., -2.]]
[
  size: {2, 2},
  dtype: :double,
  device: :cpu,
  requires_grad: false
]>

iex> ExTorch.unsqueeze(x, -1)
#Tensor<
[[[-2.],
  [-2.]],

 [[-2.],
  [-2.]]]
[
  size: {2, 2, 1},
  dtype: :double,
  device: :cpu,
  requires_grad: false
]>

iex> ExTorch.unsqueeze(x, 1)
#Tensor<
[[[-2., -2.]],

 [[-2., -2.]]]
[
  size: {2, 1, 2},
  dtype: :double,
  device: :cpu,
  requires_grad: false
]>

iex> ExTorch.unsqueeze(x, 0)
#Tensor<
[[[-2., -2.],
  [-2., -2.]]]
[
  size: {1, 2, 2},
  dtype: :double,
  device: :cpu,
  requires_grad: false
]>

vstack(tensors)

@spec vstack([ExTorch.Tensor.t()] | tuple()) :: ExTorch.Tensor.t()

See ExTorch.vstack/2

Available signature calls:

  • vstack(tensors)

vstack(tensors, kwargs)

@spec vstack(
  [ExTorch.Tensor.t()] | tuple(),
  [{:out, ExTorch.Tensor.t() | nil}]
) :: ExTorch.Tensor.t()
@spec vstack([ExTorch.Tensor.t()] | tuple(), ExTorch.Tensor.t() | nil) ::
  ExTorch.Tensor.t()

Stack tensors in sequence vertically (row wise).

This is equivalent to concatenation along the first axis after all 1-D tensors have been reshaped to at least 2 dimensions.

Arguments

  • tensors ([ExTorch.Tensor.t()] | tuple()) - sequence of tensors to concatenate.

Optional arguments

  • out (ExTorch.Tensor | nil) - an optional pre-allocated tensor used to store the output result. Default: nil

Examples

iex> a = ExTorch.tensor([1, 2, 3])
iex> b = ExTorch.tensor([4, 5, 6])
iex> ExTorch.vstack({a, b})
#Tensor<
[[1, 2, 3],
 [4, 5, 6]]
[size: {2, 3}, dtype: :byte, device: :cpu, requires_grad: false]>

iex> a = ExTorch.tensor([[1],[2],[3]])
#Tensor<
[[1],
 [2],
 [3]]
[size: {3, 1}, dtype: :byte, device: :cpu, requires_grad: false]>
iex> b = ExTorch.tensor([[4],[5],[6]])
#Tensor<
[[4],
 [5],
 [6]]
[size: {3, 1}, dtype: :byte, device: :cpu, requires_grad: false]>

iex> ExTorch.vstack([a, b])
#Tensor<
[[1],
 [2],
 [3],
 [4],
 [5],
 [6]]
[size: {6, 1}, dtype: :byte, device: :cpu, requires_grad: false]>
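The "reshape 1-D inputs to 2-D, then concatenate along rows" rule can be sketched in pure Python over nested lists (a hypothetical `vstack` helper, not the ExTorch function):

```python
def vstack(tensors):
    """Promote 1-D inputs to 2-D row vectors, then concatenate their rows."""
    rows = []
    for t in tensors:
        t2d = [t] if not isinstance(t[0], list) else t  # atleast-2d promotion
        rows.extend(t2d)
    return rows

print(vstack([[1, 2, 3], [4, 5, 6]]))              # -> [[1, 2, 3], [4, 5, 6]]
print(vstack([[[1], [2], [3]], [[4], [5], [6]]]))  # six rows of one element
```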

Tensor indexing

index(tensor, indices)

Index a tensor given a list of integers, ranges, tensors, nil or :ellipsis.

Arguments

Examples

iex> a = ExTorch.arange(3 * 4 * 4) |> ExTorch.reshape({3, 4, 4})
#Tensor<
[[[ 0.,  1.,  2.,  3.],
  [ 4.,  5.,  6.,  7.],
  [ 8.,  9., 10., 11.],
  [12., 13., 14., 15.]],

 [[16., 17., 18., 19.],
  [20., 21., 22., 23.],
  [24., 25., 26., 27.],
  [28., 29., 30., 31.]],

 [[32., 33., 34., 35.],
  [36., 37., 38., 39.],
  [40., 41., 42., 43.],
  [44., 45., 46., 47.]]]
[
  size: {3, 4, 4},
  dtype: :double,
  device: :cpu,
  requires_grad: false
]>

# Use an integer index
iex> ExTorch.index(a, 0)
#Tensor<
[[ 0.,  1.,  2.,  3.],
 [ 4.,  5.,  6.,  7.],
 [ 8.,  9., 10., 11.],
 [12., 13., 14., 15.]]
[
  size: {4, 4},
  dtype: :double,
  device: :cpu,
  requires_grad: false
]>

# Use a slice index
iex> ExTorch.index(a, 0..2)
#Tensor<
[[[ 0.,  1.,  2.,  3.],
  [ 4.,  5.,  6.,  7.],
  [ 8.,  9., 10., 11.],
  [12., 13., 14., 15.]],

 [[16., 17., 18., 19.],
  [20., 21., 22., 23.],
  [24., 25., 26., 27.],
  [28., 29., 30., 31.]]]
[
  size: {2, 4, 4},
  dtype: :double,
  device: :cpu,
  requires_grad: false
]>

iex> ExTorch.index(a, ExTorch.slice(0, 1))
#Tensor<
[[[ 0.,  1.,  2.,  3.],
  [ 4.,  5.,  6.,  7.],
  [ 8.,  9., 10., 11.],
  [12., 13., 14., 15.]]]
[
  size: {1, 4, 4},
  dtype: :double,
  device: :cpu,
  requires_grad: false
]>

# Index multiple dimensions
iex> ExTorch.index(a, [:::, ExTorch.slice(0, 2), 0])
#Tensor<
[[ 0.,  4.],
 [16., 20.],
 [32., 36.]]
[
  size: {3, 2},
  dtype: :double,
  device: :cpu,
  requires_grad: false
]>

Notes

For more information regarding the kinds of accepted indices and their corresponding behaviour, please see the ExTorch.Index documentation.

index_add(input, dim, index, source)

See ExTorch.index_add/7

Available signature calls:

  • index_add(input, dim, index, source)

index_add(input, dim, index, source, alpha)

See ExTorch.index_add/7

Available signature calls:

  • index_add(input, dim, index, source, kwargs)
  • index_add(input, dim, index, source, alpha)

index_add(input, dim, index, source, alpha, kwargs)

See ExTorch.index_add/7

Available signature calls:

  • index_add(input, dim, index, source, alpha, out)
  • index_add(input, dim, index, source, alpha, kwargs)

index_add(input, dim, index, source, alpha, out, inplace)

Accumulate the elements of alpha times source into the input tensor by adding to the indices in the order given in index.

For example, if dim == 0, index[i] == j, and alpha=-1, then the ith row of source is subtracted from the jth row of input.

The dim-th dimension of source must have the same size as the length of index (which must be a 1D tensor), and all other dimensions must match input, or an error will be raised.

For a 3-D tensor the output is given as:

out[index[i], :, :] = input[index[i], :, :] + alpha * src[i, :, :]  # if dim == 0
out[:, index[i], :] = input[:, index[i], :] + alpha * src[:, i, :]  # if dim == 1
out[:, :, index[i]] = input[:, :, index[i]] + alpha * src[:, :, i]  # if dim == 2

Arguments

  • input (ExTorch.Tensor) - input tensor.
  • dim (integer()) - dimension along which to index.
  • index (ExTorch.Tensor) - indices of input to select from, its dtype must be :long.
  • source (ExTorch.Tensor) - the tensor containing values to add.

Optional arguments

  • alpha (ExTorch.Scalar) - the scalar multiplier for source. Default: 1
  • out (ExTorch.Tensor | nil) - an optional pre-allocated tensor used to store the output result. Default: nil

  • inplace (boolean) - if true, the operation is applied directly to the input tensor and the out argument is ignored. Else, it returns a new tensor, or writes to the out argument (if not nil). Default: false

Examples

iex> x = ExTorch.ones({5, 3})
iex> t = ExTorch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype: :float)
iex> index = ExTorch.tensor([0, 4, 2], dtype: :long)
iex> ExTorch.index_add(x, 0, index, t)
#Tensor<
[[ 2.,  3.,  4.],
 [ 1.,  1.,  1.],
 [ 8.,  9., 10.],
 [ 1.,  1.,  1.],
 [ 5.,  6.,  7.]]
[size: {5, 3}, dtype: :float, device: :cpu, requires_grad: false]>
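The accumulation rule for `dim == 0` can be written out in pure Python over nested lists (a hypothetical `index_add` sketch, not the ExTorch function):

```python
def index_add(input, dim, index, source, alpha=1):
    """dim == 0 case for 2-D nested lists: out[index[i]] += alpha * source[i]."""
    assert dim == 0
    out = [row[:] for row in input]  # copy; the result is a new tensor
    for i, j in enumerate(index):
        out[j] = [x + alpha * s for x, s in zip(out[j], source[i])]
    return out

x = [[1.0] * 3 for _ in range(5)]
t = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(index_add(x, 0, [0, 4, 2], t))            # rows 0, 4, 2 accumulate t rows
print(index_add(x, 0, [0, 4, 2], t, alpha=-1))  # alpha = -1 subtracts instead
```

With `index = [0, 4, 2]`, rows 0, 4 and 2 of the output receive `t[0]`, `t[1]` and `t[2]` respectively, matching the ExTorch example above.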

index_copy(input, dim, index, source)

See ExTorch.index_copy/6

Available signature calls:

  • index_copy(input, dim, index, source)

index_copy(input, dim, index, source, kwargs)

@spec index_copy(
  ExTorch.Tensor.t(),
  integer(),
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t(),
  out: ExTorch.Tensor.t() | nil,
  inplace: boolean()
) :: ExTorch.Tensor.t()
@spec index_copy(
  ExTorch.Tensor.t(),
  integer(),
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t() | nil
) :: ExTorch.Tensor.t()

See ExTorch.index_copy/6

Available signature calls:

  • index_copy(input, dim, index, source, out)
  • index_copy(input, dim, index, source, kwargs)

index_copy(input, dim, index, source, out, inplace)

Copies the elements of source into the input tensor by selecting the indices in the order given in index.

For example, if dim == 0 and index[i] == j, then the ith row of source is copied to the jth row of input.

The dim-th dimension of source must have the same size as the length of index (which must be a 1D tensor), and all other dimensions must match input, or an error will be raised.

Arguments

  • input (ExTorch.Tensor) - input tensor.
  • dim (integer()) - dimension along which to index.
  • index (ExTorch.Tensor) - indices of input to select from, its dtype must be :long.
  • source (ExTorch.Tensor) - the tensor containing values to copy.

Optional arguments

  • out (ExTorch.Tensor | nil) - an optional pre-allocated tensor used to store the output result. Default: nil

  • inplace (boolean) - if true, the modifications are applied directly to the input tensor. Else, the function returns a new tensor with the changes, or applies them to out (if not nil). Default: false

Notes

If index contains duplicate entries, multiple elements from source will be copied to the same index of input. The result is nondeterministic since it depends on which copy occurs last.

Examples

iex> x = ExTorch.zeros({5, 3})
iex> t = ExTorch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype: :float)
iex> index = ExTorch.tensor([0, 4, 2], dtype: :long)
iex> ExTorch.index_copy(x, 0, index, t)
#Tensor<
[[1.0000, 2.0000, 3.0000],
 [0.0000, 0.0000, 0.0000],
 [7.0000, 8.0000, 9.0000],
 [0.0000, 0.0000, 0.0000],
 [4.0000, 5.0000, 6.0000]]
[size: {5, 3}, dtype: :float, device: :cpu, requires_grad: false]>
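The copy rule for `dim == 0` can be sketched in pure Python over nested lists (a hypothetical `index_copy` helper, not the ExTorch function):

```python
def index_copy(input, dim, index, source):
    """dim == 0 case: out[index[i]] = source[i]; with duplicate indices the
    last copy wins, which is why duplicates are nondeterministic on real
    (parallel) backends."""
    assert dim == 0
    out = [row[:] for row in input]
    for i, j in enumerate(index):
        out[j] = list(source[i])
    return out

x = [[0.0] * 3 for _ in range(5)]
t = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(index_copy(x, 0, [0, 4, 2], t))  # rows 0, 4, 2 receive t[0], t[1], t[2]
```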

index_put(tensor, indices, value)

See ExTorch.index_put/4

Available signature calls:

  • index_put(tensor, indices, value)

index_put(tensor, indices, value, inplace)

Assign a value into a tensor given a single or a sequence of indices.

Arguments

Optional arguments

  • inplace (boolean()) - If true, then the values will be replaced on the original tensor argument. Else, it will return a copy of tensor with the values replaced. Default: false

Examples

# Assign a particular value
iex> x = ExTorch.zeros({2, 3, 3})
#Tensor<
[[[0., 0., 0.],
  [0., 0., 0.],
  [0., 0., 0.]],

 [[0., 0., 0.],
  [0., 0., 0.],
  [0., 0., 0.]]]
[
  size: {2, 3, 3},
  dtype: :double,
  device: :cpu,
  requires_grad: false
]>

iex> x = ExTorch.index_put(x, 0, -1)
#Tensor<
[[[-1., -1., -1.],
  [-1., -1., -1.],
  [-1., -1., -1.]],

 [[ 0.,  0.,  0.],
  [ 0.,  0.,  0.],
  [ 0.,  0.,  0.]]]
[
  size: {2, 3, 3},
  dtype: :double,
  device: :cpu,
  requires_grad: false
]>

# Assign a value into a slice
iex> x = ExTorch.index_put(x, [0, ExTorch.slice(1), ExTorch.slice(1)], 0.3)
#Tensor<
[[[-1.0000, -1.0000, -1.0000],
  [-1.0000,  0.3000,  0.3000],
  [-1.0000,  0.3000,  0.3000]],

 [[ 0.0000,  0.0000,  0.0000],
  [ 0.0000,  0.0000,  0.0000],
  [ 0.0000,  0.0000,  0.0000]]]
[
  size: {2, 3, 3},
  dtype: :double,
  device: :cpu,
  requires_grad: false
]>

# Assign a tensor into an index
iex> value = ExTorch.eye(3)
iex> x = ExTorch.index_put(x, 1, value)
#Tensor<
[[[-1.0000, -1.0000, -1.0000],
  [-1.0000,  0.3000,  0.3000],
  [-1.0000,  0.3000,  0.3000]],

 [[ 1.0000,  0.0000,  0.0000],
  [ 0.0000,  1.0000,  0.0000],
  [ 0.0000,  0.0000,  1.0000]]]
[
  size: {2, 3, 3},
  dtype: :double,
  device: :cpu,
  requires_grad: false
]>

# Assign a list of numbers into an index (broadcastable)
iex> x = ExTorch.index_put(x, [:::, 1], [1, 2, 3])
#Tensor<
[[[-1.0000, -1.0000, -1.0000],
  [ 1.0000,  2.0000,  3.0000],
  [-1.0000,  0.3000,  0.3000]],

 [[ 1.0000,  0.0000,  0.0000],
  [ 1.0000,  2.0000,  3.0000],
  [ 0.0000,  0.0000,  1.0000]]]
[
  size: {2, 3, 3},
  dtype: :double,
  device: :cpu,
  requires_grad: false
]>

index_reduce(self, dim, index, source, reduce)

@spec index_reduce(
  ExTorch.Tensor.t(),
  integer(),
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t(),
  :prod | :mean | :amax | :amin
) :: ExTorch.Tensor.t()

See ExTorch.index_reduce/8

Available signature calls:

  • index_reduce(self, dim, index, source, reduce)

index_reduce(self, dim, index, source, reduce, include_self)

@spec index_reduce(
  ExTorch.Tensor.t(),
  integer(),
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t(),
  :prod | :mean | :amax | :amin,
  boolean()
) :: ExTorch.Tensor.t()
@spec index_reduce(
  ExTorch.Tensor.t(),
  integer(),
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t(),
  :prod | :mean | :amax | :amin,
  include_self: boolean(),
  out: ExTorch.Tensor.t() | nil,
  inplace: boolean()
) :: ExTorch.Tensor.t()

See ExTorch.index_reduce/8

Available signature calls:

  • index_reduce(self, dim, index, source, reduce, kwargs)
  • index_reduce(self, dim, index, source, reduce, include_self)

index_reduce(self, dim, index, source, reduce, include_self, kwargs)

@spec index_reduce(
  ExTorch.Tensor.t(),
  integer(),
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t(),
  :prod | :mean | :amax | :amin,
  boolean(),
  out: ExTorch.Tensor.t() | nil,
  inplace: boolean()
) :: ExTorch.Tensor.t()
@spec index_reduce(
  ExTorch.Tensor.t(),
  integer(),
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t(),
  :prod | :mean | :amax | :amin,
  boolean(),
  ExTorch.Tensor.t() | nil
) :: ExTorch.Tensor.t()

See ExTorch.index_reduce/8

Available signature calls:

  • index_reduce(self, dim, index, source, reduce, include_self, out)
  • index_reduce(self, dim, index, source, reduce, include_self, kwargs)

index_reduce(self, dim, index, source, reduce, include_self, out, inplace)

@spec index_reduce(
  ExTorch.Tensor.t(),
  integer(),
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t(),
  :prod | :mean | :amax | :amin,
  boolean(),
  ExTorch.Tensor.t() | nil,
  boolean()
) :: ExTorch.Tensor.t()
@spec index_reduce(
  ExTorch.Tensor.t(),
  integer(),
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t(),
  :prod | :mean | :amax | :amin,
  boolean(),
  ExTorch.Tensor.t() | nil,
  [{:inplace, boolean()}]
) :: ExTorch.Tensor.t()

Accumulate the elements of source into the self tensor by accumulating to the indices in the order given in index using the reduction given by the reduce argument.

For example, if dim == 0, index[i] == j, reduce == :prod and include_self == true, then the ith row of source is multiplied by the jth row of self. If include_self == true, the values in the self tensor are included in the reduction; otherwise, rows in the self tensor that are accumulated to are treated as if they were filled with the reduction identities.

The dim-th dimension of source must have the same size as the length of index (which must be a 1D tensor), and all other dimensions must match self, or an error will be raised.

For a 3-D tensor with reduce = :prod and include_self = true the output is given as:

self[index[i], :, :] *= src[i, :, :]  # if dim == 0
self[:, index[i], :] *= src[:, i, :]  # if dim == 1
self[:, :, index[i]] *= src[:, :, i]  # if dim == 2

Arguments

  • self (ExTorch.Tensor) - input tensor.
  • dim (integer()) - dimension along which to index.
  • index (ExTorch.Tensor) - indices of source to select from, its dtype must be :long.
  • source (ExTorch.Tensor) - the tensor containing values to copy.
  • reduce (:prod | :mean | :amax | :amin) - the reduction operation to apply.

Optional arguments

  • include_self (boolean) - whether the elements from the self tensor are included in the reduction. Default: true
  • out (ExTorch.Tensor | nil) - an optional pre-allocated tensor used to store the output result. Default: nil

  • inplace (boolean) - if true, the modifications are applied directly to the self tensor. Else, the function returns a new tensor with the changes, or applies them to out (if not nil). Default: false

Notes

  • This operation may behave nondeterministically when given tensors on a CUDA device. See Reproducibility for more information.
  • This function only supports floating point tensors.
  • This function is in beta and may change in the near future.

Examples

iex> x = ExTorch.full({5, 3}, 2)
iex> t = ExTorch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]], dtype: :float)
iex> index = ExTorch.tensor([0, 4, 2, 0], dtype: :long)
#Tensor<
[0, 4, 2, 0]
[size: {4}, dtype: :long, device: :cpu, requires_grad: false]>

iex> ExTorch.index_reduce(x, 0, index, t, :prod)
#Tensor<
[[20., 44., 72.],
 [ 2.,  2.,  2.],
 [14., 16., 18.],
 [ 2.,  2.,  2.],
 [ 8., 10., 12.]]
[size: {5, 3}, dtype: :float, device: :cpu, requires_grad: false]>

iex> ExTorch.index_reduce(x, 0, index, t, :prod, include_self: false)
#Tensor<
[[10., 22., 36.],
 [ 2.,  2.,  2.],
 [ 7.,  8.,  9.],
 [ 2.,  2.,  2.],
 [ 4.,  5.,  6.]]
[size: {5, 3}, dtype: :float, device: :cpu, requires_grad: false]>
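The :prod reduction with and without include_self can be sketched in pure Python for the `dim == 0` case (a hypothetical `index_reduce_prod` helper over nested lists, not the ExTorch function):

```python
def index_reduce_prod(input, dim, index, source, include_self=True):
    """reduce == :prod, dim == 0 case for 2-D nested lists."""
    assert dim == 0
    targets = set(index)
    # Rows that receive updates start from input (include_self) or from the
    # reduction identity, which is 1 for :prod; untouched rows keep input.
    out = [row[:] if include_self or i not in targets else [1.0] * len(row)
           for i, row in enumerate(input)]
    for i, j in enumerate(index):
        out[j] = [x * s for x, s in zip(out[j], source[i])]
    return out

x = [[2.0] * 3 for _ in range(5)]
t = [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]
index = [0, 4, 2, 0]
print(index_reduce_prod(x, 0, index, t))                      # row 0: 2*1*10, ...
print(index_reduce_prod(x, 0, index, t, include_self=False))  # row 0: 1*10, ...
```

The two printed results reproduce the two ExTorch examples above: with `include_self: false`, the initial 2s of the accumulated rows drop out of the products.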

index_select(input, dim, index)

@spec index_select(
  ExTorch.Tensor.t(),
  integer(),
  ExTorch.Tensor.t()
) :: ExTorch.Tensor.t()

See ExTorch.index_select/4

Available signature calls:

  • index_select(input, dim, index)

index_select(input, dim, index, kwargs)

@spec index_select(
  ExTorch.Tensor.t(),
  integer(),
  ExTorch.Tensor.t(),
  [{:out, ExTorch.Tensor.t() | nil}]
) :: ExTorch.Tensor.t()
@spec index_select(
  ExTorch.Tensor.t(),
  integer(),
  ExTorch.Tensor.t(),
  ExTorch.Tensor.t() | nil
) :: ExTorch.Tensor.t()

Returns a new tensor which indexes the input tensor along dimension dim using the entries in index (whose dtype is :long).

The returned tensor has the same number of dimensions as the original tensor (input). The dim-th dimension has the same size as the length of index; other dimensions have the same size as in the original tensor.

Arguments

  • input (ExTorch.Tensor) - input tensor.
  • dim (integer()) - dimension along which to index.
  • index (ExTorch.Tensor) - indices of input to select from, its dtype must be :long.

Optional arguments

  • out (ExTorch.Tensor | nil) - an optional pre-allocated tensor used to store the output result. Default: nil

Notes

  • The returned tensor does not use the same storage as the original tensor.
  • If out has a different shape than expected, we silently change it to the correct shape, reallocating the underlying storage if necessary.

Examples

iex> x = ExTorch.randn({3, 4})
#Tensor<
[[ 2.3564,  1.1268, -0.3407, -0.0561],
 [ 0.6479, -2.3011, -1.6695,  0.5547],
 [ 1.3554,  3.6460,  2.5569, -0.1892]]
[size: {3, 4}, dtype: :float, device: :cpu, requires_grad: false]>

iex> indices = ExTorch.tensor([0, 2], dtype: :long)

iex> ExTorch.index_select(x, 0, indices)
#Tensor<
[[ 2.3564,  1.1268, -0.3407, -0.0561],
 [ 1.3554,  3.6460,  2.5569, -0.1892]]
[size: {2, 4}, dtype: :float, device: :cpu, requires_grad: false]>

iex> ExTorch.index_select(x, 1, indices)
#Tensor<
[[ 2.3564, -0.3407],
 [ 0.6479, -1.6695],
 [ 1.3554,  2.5569]]
[size: {3, 2}, dtype: :float, device: :cpu, requires_grad: false]>
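For the 2-D case the selection rule reduces to picking rows or columns, which can be sketched in pure Python (a hypothetical `index_select` helper over nested lists, not the ExTorch function):

```python
def index_select(t, dim, index):
    """2-D case: dim == 0 picks rows, dim == 1 picks columns."""
    if dim == 0:
        return [t[i][:] for i in index]
    return [[row[i] for i in index] for row in t]

x = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]
print(index_select(x, 0, [0, 2]))  # -> [[1, 2, 3, 4], [9, 10, 11, 12]]
print(index_select(x, 1, [0, 2]))  # -> [[1, 3], [5, 7], [9, 11]]
```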

masked_select(input, mask)

@spec masked_select(ExTorch.Tensor.t(), ExTorch.Tensor.t()) :: ExTorch.Tensor.t()

See ExTorch.masked_select/3

Available signature calls:

  • masked_select(input, mask)

masked_select(input, mask, kwargs)

@spec masked_select(ExTorch.Tensor.t(), ExTorch.Tensor.t(), [
  {:out, ExTorch.Tensor.t() | nil}
]) ::
  ExTorch.Tensor.t()
@spec masked_select(ExTorch.Tensor.t(), ExTorch.Tensor.t(), ExTorch.Tensor.t() | nil) ::
  ExTorch.Tensor.t()

Returns a new 1-D tensor which indexes the input tensor according to the boolean mask mask which has dtype :bool.

The shapes of the mask tensor and the input tensor don’t need to match, but they must be broadcastable.

Arguments

Optional arguments

  • out (ExTorch.Tensor | nil) - an optional pre-allocated tensor used to store the output result. Default: nil

Notes

The returned tensor does not use the same storage as the original tensor.

Examples

iex> x = ExTorch.randn({4, 5})
#Tensor<
[[ 1.6055, -0.1662, -0.6764, -0.8615, -2.1960],
 [ 0.8188, -1.1111, -0.2659,  1.4720, -0.0226],
 [-0.7065, -1.0628, -0.7172, -1.0006,  0.3091],
 [-0.8901, -0.6624, -0.4590,  0.0821, -0.9716]]
[size: {4, 5}, dtype: :float, device: :cpu, requires_grad: false]>
iex> mask = ExTorch.ge(x, 0)
#Tensor<
[[ true, false, false, false, false],
 [ true, false, false,  true, false],
 [false, false, false, false,  true],
 [false, false, false,  true, false]]
[size: {4, 5}, dtype: :bool, device: :cpu, requires_grad: false]>

iex> ExTorch.masked_select(x, mask)
#Tensor<
[1.6055, 0.8188, 1.4720, 0.3091, 0.0821]
[size: {5}, dtype: :float, device: :cpu, requires_grad: false]>
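The masking rule (flatten in row-major order, keep elements where the mask is true) can be sketched in pure Python for the 2-D case (a hypothetical `masked_select` helper, not the ExTorch function):

```python
def masked_select(t, mask):
    """Walk both 2-D nested lists in row-major order and keep masked values."""
    return [x for row_t, row_m in zip(t, mask)
              for x, keep in zip(row_t, row_m) if keep]

x = [[1, -2, 3], [-4, 5, -6]]
mask = [[v >= 0 for v in row] for row in x]  # like ExTorch.ge(x, 0)
print(masked_select(x, mask))  # -> [1, 3, 5]
```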

select(input, dim, index)

@spec select(ExTorch.Tensor.t(), integer(), integer()) :: ExTorch.Tensor.t()

Slices the input tensor along the selected dimension at the given index. This function returns a view of the original tensor with the given dimension removed.

Arguments

  • input (ExTorch.Tensor) - the input tensor.
  • dim (integer) - the dimension to slice.
  • index (integer) - the index to select.

Notes

ExTorch.select/3 is equivalent to slicing. For example, ExTorch.select(tensor, 0, index) is equivalent to tensor[index] and ExTorch.select(tensor, 2, index) is equivalent to tensor[{:::, :::, index}].

Examples

iex> a = ExTorch.arange(2 * 3 * 4) |> ExTorch.reshape({2, 3, 4})
#Tensor<
[[[ 0.0000,  1.0000,  2.0000,  3.0000],
  [ 4.0000,  5.0000,  6.0000,  7.0000],
  [ 8.0000,  9.0000, 10.0000, 11.0000]],

 [[12.0000, 13.0000, 14.0000, 15.0000],
  [16.0000, 17.0000, 18.0000, 19.0000],
  [20.0000, 21.0000, 22.0000, 23.0000]]]
[size: {2, 3, 4}, dtype: :float, device: :cpu, requires_grad: false]>

iex> ExTorch.select(a, 0, 1)
#Tensor<
[[12., 13., 14., 15.],
 [16., 17., 18., 19.],
 [20., 21., 22., 23.]]
[size: {3, 4}, dtype: :float, device: :cpu, requires_grad: false]>

iex> ExTorch.select(a, 1, 0)
#Tensor<
[[ 0.0000,  1.0000,  2.0000,  3.0000],
 [12.0000, 13.0000, 14.0000, 15.0000]]
[size: {2, 4}, dtype: :float, device: :cpu, requires_grad: false]>

iex> ExTorch.select(a, 2, 2)
#Tensor<
[[ 2.,  6., 10.],
 [14., 18., 22.]]
[size: {2, 3}, dtype: :float, device: :cpu, requires_grad: false]>
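The dimension-removal behaviour can be sketched in pure Python for a 3-D nested list (a hypothetical `select` helper, not the ExTorch function):

```python
def select(t, dim, index):
    """3-D case: index one dimension away, returning a 2-D slice."""
    if dim == 0:
        return t[index]
    if dim == 1:
        return [plane[index] for plane in t]
    return [[row[index] for row in plane] for plane in t]

# same values as ExTorch.arange(2 * 3 * 4) reshaped to {2, 3, 4}
a = [[[d2 + 4 * d1 + 12 * d0 for d2 in range(4)]
      for d1 in range(3)] for d0 in range(2)]
print(select(a, 0, 1))  # equals a[1]
print(select(a, 2, 2))  # -> [[2, 6, 10], [14, 18, 22]]
```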

slice(start \\ nil, stop \\ nil, step \\ nil)

@spec slice(integer() | nil, integer() | nil, integer() | nil) ::
  ExTorch.Index.Slice.t()

Create a slice to index a tensor.

Arguments

  • start: The starting slice value. Default: nil
  • stop: The non-inclusive end of the slice. Default: nil
  • step: The step between values. Default: nil

Returns

Notes

An empty slice represents the "take-all axis", written ":" in Python.

Pointwise math operations

add(input, other)

See ExTorch.add/3

Available signature calls:

  • add(input, other)

add(input, other, alpha)

Adds other to input, scaled by alpha: $out = input + alpha \times other$.

Args

  • input (ExTorch.Tensor) - the first input tensor.
  • other (ExTorch.Tensor) - the second input tensor.
  • alpha (number) - the multiplier for other. Default: 1.

Shape

  • Input: {*} (any shape); input and other must be broadcastable with each other.
  • Output: the broadcasted shape of input and other.

bmm(input, other)

Performs a batch matrix-matrix product of matrices stored in input and other.

input and other must be 3-D tensors each containing the same number of matrices.

Args

Shape

  • Input: {b, n, m} and {b, m, p}.
  • Output: {b, n, p}.
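The shape contract above can be illustrated with a naive pure-Python batch matrix multiply over nested lists (a hypothetical `bmm` sketch, not the ExTorch function):

```python
def bmm(a, b):
    """Naive batch matmul: {b, n, m} x {b, m, p} -> {b, n, p}."""
    return [[[sum(ai[i][k] * bi[k][j] for k in range(len(bi)))
              for j in range(len(bi[0]))]
             for i in range(len(ai))]
            for ai, bi in zip(a, b)]

a = [[[1, 2], [3, 4]]]  # shape {1, 2, 2}
b = [[[5, 6], [7, 8]]]  # shape {1, 2, 2}
print(bmm(a, b))        # -> [[[19, 22], [43, 50]]]
```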

clamp(input, min_val, max_val)

@spec clamp(ExTorch.Tensor.t(), number(), number()) :: ExTorch.Tensor.t()

Clamps all elements in input into the range [min, max].

$out_i = \min(\max(input_i, min), max)$

Args

  • input (ExTorch.Tensor) - the input tensor.
  • min (number) - lower bound of the range.
  • max (number) - upper bound of the range.
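The clamp formula $out_i = \min(\max(input_i, min), max)$ is a one-liner in pure Python (a hypothetical sketch over plain lists, not the ExTorch function):

```python
def clamp(xs, lo, hi):
    """Element-wise min(max(x, lo), hi) over a flat list."""
    return [min(max(x, lo), hi) for x in xs]

print(clamp([-2.0, 0.5, 3.0], -1.0, 1.0))  # -> [-1.0, 0.5, 1.0]
```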

clone(input)

@spec clone(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()

Returns a deep copy of input. The returned tensor has the same data and type but does not share storage with the original.

contiguous(input)

@spec contiguous(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()

Returns a tensor that is contiguous in memory and contains the same data as input. If input is already contiguous, it is returned as is.

detach(input)

@spec detach(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()

Returns a new tensor detached from the current computation graph. The result will never require gradient.

einsum(equation, a, b)

Sums the product of the elements of the input tensors along dimensions specified using a notation based on the Einstein summation convention.

Args

Examples

# Matrix multiply
ExTorch.einsum("ij,jk->ik", a, b)

# Batch matrix multiply
ExTorch.einsum("bij,bjk->bik", a, b)

# Dot product
ExTorch.einsum("i,i->", a, b)

expand(input, shape)

@spec expand(ExTorch.Tensor.t(), tuple()) :: ExTorch.Tensor.t()

Returns a new view of the tensor with singleton dimensions expanded to a larger size. Pass -1 for dimensions you don't want to change.

Args

  • input (ExTorch.Tensor) - the input tensor.
  • shape (tuple) - the desired expanded size.
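
Examples

# A brief sketch of expanding a singleton dimension; only shapes are noted.
iex> a = ExTorch.tensor([[1.0], [2.0], [3.0]])  # size {3, 1}
iex> ExTorch.expand(a, {3, 4})   # size {3, 4}; each row repeated across 4 columns
iex> ExTorch.expand(a, {-1, 4})  # same result; -1 keeps that dimension's original size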

functional_log_softmax(input, dim)

@spec functional_log_softmax(ExTorch.Tensor.t(), integer()) :: ExTorch.Tensor.t()

Applies LogSoftmax along dim: $LogSoftmax(x_i) = \log\left(\frac{e^{x_i}}{\sum_j e^{x_j}}\right)$.

Args

  • input (ExTorch.Tensor) - the input tensor.
  • dim (integer) - the dimension along which LogSoftmax is computed.

functional_relu(input)

@spec functional_relu(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()

Applies the rectified linear unit function element-wise: $ReLU(x) = \max(0, x)$.

functional_softmax(input, dim)

@spec functional_softmax(ExTorch.Tensor.t(), integer()) :: ExTorch.Tensor.t()

Applies the Softmax function along dim: $Softmax(x_i) = \frac{e^{x_i}}{\sum_j e^{x_j}}$.

Args

  • input (ExTorch.Tensor) - the input tensor.
  • dim (integer) - the dimension along which Softmax is computed.
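
Examples

# A minimal sketch; exact output values are omitted, but every slice along dim
# sums to 1.
iex> a = ExTorch.tensor([[1.0, 2.0, 3.0], [1.0, 1.0, 1.0]])
iex> ExTorch.functional_softmax(a, -1)  # second row becomes [0.3333, 0.3333, 0.3333]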

imag(input)

@spec imag(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()

Returns a new tensor containing the imaginary values of the input tensor. The returned tensor and input share the same underlying storage.

Arguments

  • input: The input tensor.

Examples

iex> x = ExTorch.rand({3}, dtype: :complex64)
#Tensor<
[0.8235+0.9395j, 0.9912+0.4506j, 0.5164+0.3070j]
[
  size: {3},
  dtype: :complex_float,
  device: :cpu,
  requires_grad: false
]>

iex> ExTorch.imag(x)
#Tensor<
[0.9395, 0.4506, 0.3070]
[
  size: {3},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

masked_fill(input, mask, value)

@spec masked_fill(ExTorch.Tensor.t(), ExTorch.Tensor.t(), number()) ::
  ExTorch.Tensor.t()

Fills elements of the input tensor with value where mask is true.

Args

  • input (ExTorch.Tensor) - the input tensor.
  • mask (ExTorch.Tensor) - a boolean mask tensor.
  • value (number) - the value to fill in where mask is true.
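
Examples

# A small sketch combining masked_fill with a literal boolean mask.
iex> a = ExTorch.zeros({2, 2})
iex> mask = ExTorch.tensor([[true, false], [false, true]])
iex> ExTorch.masked_fill(a, mask, 3.0)  # 3.0 where mask is true, 0.0 elsewhere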

matmul(input, other)

Matrix product of two tensors.

The behavior depends on the dimensionality of the tensors:

  • If both are 1-D, computes the dot product.
  • If both are 2-D, computes matrix-matrix product.
  • If the first is 1-D and the second is 2-D, a 1 is prepended to the first tensor's dimensions for the matrix multiply; the prepended dimension is removed afterwards.
  • If the first is 2-D and second is 1-D, computes matrix-vector product.
  • If both are at least 1-D and at least one is N-D (N > 2), computes a batched matrix multiply.

Args

  • input (ExTorch.Tensor) - the first tensor to be multiplied.
  • other (ExTorch.Tensor) - the second tensor to be multiplied.

mm(input, other)

Performs a matrix multiplication of the matrices input and other.

If input is a {n, m} tensor, other is a {m, p} tensor, output will be a {n, p} tensor. For batched matrix multiply, see bmm/2.

Args

  • input (ExTorch.Tensor) - the first matrix to be multiplied.
  • other (ExTorch.Tensor) - the second matrix to be multiplied.

Shape

  • Input: {n, m} and {m, p}.
  • Output: {n, p}.
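
Examples

# A shape-focused sketch with random inputs; only the resulting size is noted.
iex> a = ExTorch.randn({2, 3})
iex> b = ExTorch.randn({3, 4})
iex> ExTorch.mm(a, b)  # size {2, 4}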

mul(input, other)

Multiplies input by other element-wise: $out_i = input_i \times other_i$.

Args

  • input (ExTorch.Tensor) - the first input tensor.
  • other (ExTorch.Tensor) - the second input tensor.

neg(input)

Returns the negative of input element-wise: $out = -input$.

pow_tensor(input, exponent)

@spec pow_tensor(ExTorch.Tensor.t(), number()) :: ExTorch.Tensor.t()

Takes the power of each element by exponent: $out_i = input_i^{exponent}$.

Args

  • input (ExTorch.Tensor) - the input tensor.
  • exponent (number) - the exponent value.

real(input)

@spec real(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()

Returns a new tensor containing the real values of the input tensor. The returned tensor and input share the same underlying storage.

Arguments

  • input: The input tensor.

Examples

iex> x = ExTorch.rand({3}, dtype: :complex64)
#Tensor<
[0.8235+0.9395j, 0.9912+0.4506j, 0.5164+0.3070j]
[
  size: {3},
  dtype: :complex_float,
  device: :cpu,
  requires_grad: false
]>

iex> ExTorch.real(x)
#Tensor<
[0.8235, 0.9912, 0.5164]
[
  size: {3},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

sub(input, other)

See ExTorch.sub/3

Available signature calls:

  • sub(input, other)

sub(input, other, alpha)

Subtracts other from input, scaled by alpha: $out = input - alpha \times other$.

Args

  • input (ExTorch.Tensor) - the first input tensor.
  • other (ExTorch.Tensor) - the second input tensor.
  • alpha (number) - the multiplier for other. Default: 1.
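
Examples

# A small sketch of both call forms; per the formula above, the outputs follow
# from input - alpha * other.
iex> a = ExTorch.tensor([2.0, 3.0])
iex> b = ExTorch.tensor([1.0, 1.0])
iex> ExTorch.sub(a, b)     # [1.0, 2.0]
iex> ExTorch.sub(a, b, 2)  # [0.0, 1.0]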

tensor_abs(input)

@spec tensor_abs(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()

Computes the absolute value of each element: $out_i = |input_i|$.

tensor_cos(input)

@spec tensor_cos(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()

Returns a new tensor with the cosine: $out_i = \cos(input_i)$.

tensor_div(input, other)

@spec tensor_div(ExTorch.Tensor.t(), ExTorch.Tensor.t()) :: ExTorch.Tensor.t()

Divides input by other element-wise: $out_i = \frac{input_i}{other_i}$.

Args

  • input (ExTorch.Tensor) - the dividend tensor.
  • other (ExTorch.Tensor) - the divisor tensor.

tensor_exp(input)

@spec tensor_exp(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()

Returns a new tensor with the exponential: $out_i = e^{input_i}$.

tensor_log(input)

@spec tensor_log(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()

Returns a new tensor with the natural logarithm: $out_i = \ln(input_i)$.

tensor_sin(input)

@spec tensor_sin(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()

Returns a new tensor with the sine: $out_i = \sin(input_i)$.

tensor_sqrt(input)

@spec tensor_sqrt(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()

Returns a new tensor with the square root: $out_i = \sqrt{input_i}$.

tensor_where(condition, x, y)

Returns a tensor of elements selected from either x or y, depending on condition.

$out_i = \begin{cases} x_i & \text{if } condition_i \\ y_i & \text{otherwise} \end{cases}$

Args

  • condition (ExTorch.Tensor) - a boolean tensor; where true, yield x, otherwise yield y.
  • x (ExTorch.Tensor) - values selected at indices where condition is true.
  • y (ExTorch.Tensor) - values selected at indices where condition is false.
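
Examples

# A minimal sketch selecting between a tensor of ones and a tensor of zeros.
iex> condition = ExTorch.tensor([[true, false], [false, true]])
iex> x = ExTorch.ones({2, 2})
iex> y = ExTorch.zeros({2, 2})
iex> ExTorch.tensor_where(condition, x, y)  # 1.0 where condition is true, 0.0 elsewhere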

view(input, shape)

@spec view(ExTorch.Tensor.t(), tuple()) :: ExTorch.Tensor.t()

Returns a new tensor with the same data but of a different shape.

The returned tensor shares the same data and must have the same number of elements. A single dimension may be -1, in which case it's inferred from the remaining dimensions.

Args

  • input (ExTorch.Tensor) - the input tensor.
  • shape (tuple) - the desired shape.
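
Examples

# A shape-focused sketch; -1 lets one dimension be inferred.
iex> a = ExTorch.rand({4, 4})
iex> ExTorch.view(a, {16})     # size {16}
iex> ExTorch.view(a, {-1, 8})  # size {2, 8}; the -1 is inferred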

Reduction operations

all(input)

See ExTorch.all/4

Available signature calls:

  • all(input)

all(input, dim)

@spec all(ExTorch.Tensor.t(), nil | integer()) :: ExTorch.Tensor.t()
@spec all(ExTorch.Tensor.t(),
  dim: nil | integer(),
  keepdim: boolean(),
  out: nil | ExTorch.Tensor.t()
) ::
  ExTorch.Tensor.t()

See ExTorch.all/4

Available signature calls:

  • all(input, kwargs)
  • all(input, dim)

all(input, dim, keepdim)

@spec all(ExTorch.Tensor.t(), nil | integer(), boolean()) :: ExTorch.Tensor.t()
@spec all(ExTorch.Tensor.t(), nil | integer(),
  keepdim: boolean(),
  out: nil | ExTorch.Tensor.t()
) ::
  ExTorch.Tensor.t()

See ExTorch.all/4

Available signature calls:

  • all(input, dim, kwargs)
  • all(input, dim, keepdim)

all(input, dim, keepdim, kwargs)

@spec all(ExTorch.Tensor.t(), nil | integer(), boolean(), [
  {:out, nil | ExTorch.Tensor.t()}
]) ::
  ExTorch.Tensor.t()
@spec all(ExTorch.Tensor.t(), nil | integer(), boolean(), nil | ExTorch.Tensor.t()) ::
  ExTorch.Tensor.t()

Checks whether all elements (or all elements along a dimension) in input evaluate to true.

If dim is nil, tests whether all elements in input evaluate to true. Otherwise, for each row of input in the given dimension dim, returns true if all elements in the row evaluate to true and false otherwise.

  • The keepdim and out options only work if dim is not nil.
  • If keepdim is true, the output tensor is of the same size as input, except in the dimension dim where it is of size 1. Otherwise, dim is squeezed (see ExTorch.squeeze), resulting in the output tensor having 1 fewer dimension than input.

Arguments

  • input (ExTorch.Tensor) - the input tensor.

Optional arguments

  • dim - the dimension to reduce. (nil | integer()). Default: nil

  • keepdim - whether the output tensor has dim retained or not. (boolean()). Default: false
  • out - the optional output pre-allocated tensor. (ExTorch.Tensor | nil). Default: nil

Examples

# Find if all elements in a tensor are true
iex> a = ExTorch.tensor([[true, true, true], [true, true, true]])
iex> ExTorch.all(a)
#Tensor<
true
[size: {}, dtype: :bool, device: :cpu, requires_grad: false]>

# Find if all elements (per dimension) are true
iex> b = ExTorch.empty({3, 3}, dtype: :bool)
#Tensor<
[[false,  true, false],
 [ true,  true,  true],
 [false, false, false]]
[
  size: {3, 3},
  dtype: :bool,
  device: :cpu,
  requires_grad: false
]>
iex> ExTorch.all(b, -1)
#Tensor<
[false,  true, false]
[
  size: {3},
  dtype: :bool,
  device: :cpu,
  requires_grad: false
]>

# Preserve tensor dimensions
iex> ExTorch.all(b, -1, keepdim: true)
#Tensor<
[[false],
 [ true],
 [false]]
[
  size: {3, 1},
  dtype: :bool,
  device: :cpu,
  requires_grad: false
]>

amax(input, dim)

@spec amax(ExTorch.Tensor.t(), nil | integer() | tuple()) :: ExTorch.Tensor.t()

See ExTorch.amax/4

Available signature calls:

  • amax(input, dim)

amax(input, dim, keepdim)

@spec amax(ExTorch.Tensor.t(), nil | integer() | tuple(), boolean()) ::
  ExTorch.Tensor.t()
@spec amax(ExTorch.Tensor.t(), nil | integer() | tuple(),
  keepdim: boolean(),
  out: ExTorch.Tensor.t() | nil
) :: ExTorch.Tensor.t()

See ExTorch.amax/4

Available signature calls:

  • amax(input, dim, kwargs)
  • amax(input, dim, keepdim)

amax(input, dim, keepdim, kwargs)

@spec amax(ExTorch.Tensor.t(), nil | integer() | tuple(), boolean(), [
  {:out, ExTorch.Tensor.t() | nil}
]) ::
  ExTorch.Tensor.t()
@spec amax(
  ExTorch.Tensor.t(),
  nil | integer() | tuple(),
  boolean(),
  ExTorch.Tensor.t() | nil
) ::
  ExTorch.Tensor.t()

Returns the maximum value of each slice of the input tensor in the given dimension(s) dim.

If keepdim is true, the output tensor is of the same size as input except in the dimension(s) dim where it is of size 1. Otherwise, dim is squeezed (see ExTorch.squeeze), resulting in the output tensor having 1 (or length(dim)) fewer dimension(s).

Arguments

  • input (ExTorch.Tensor) - the input tensor.
  • dim (nil | integer() | tuple()) - the dimension(s) to reduce. If nil, then it reduces all the dimensions.

Optional arguments

  • keepdim (boolean()) - whether the output tensor has dim retained or not. Default: false
  • out (ExTorch.Tensor | nil) - the optional output pre-allocated tensor. Default: nil

Notes

The difference between ExTorch.max/ExTorch.min and ExTorch.amax/ExTorch.amin is:

  • ExTorch.amax/ExTorch.amin supports reducing on multiple dimensions
  • ExTorch.amax/ExTorch.amin does not return indices
  • ExTorch.amax/ExTorch.amin evenly distributes gradient between equal values, while max(dim)/min(dim) propagates gradient only to a single index in the source tensor.

Examples

iex> a = ExTorch.randn({4, 4})
#Tensor<
[[ 1.4814,  0.1511, -1.9243,  0.6649],
 [ 0.1308,  0.5038, -0.0844, -0.8609],
 [-0.9535,  0.1651, -0.5081,  0.7449],
 [-1.5848,  0.1389, -0.5299, -0.0702]]
[
  size: {4, 4},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

# Get the maximum values along the last dimension
iex> ExTorch.amax(a, -1)
#Tensor<
[1.4814, 0.5038, 0.7449, 0.1389]
[
  size: {4},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

# Get the maximum value on all dimensions
iex> ExTorch.amax(a, {0, 1})
#Tensor<
1.4814
[size: {}, dtype: :float, device: :cpu, requires_grad: false]>

amin(input, dim)

@spec amin(ExTorch.Tensor.t(), nil | integer() | tuple()) :: ExTorch.Tensor.t()

See ExTorch.amin/4

Available signature calls:

  • amin(input, dim)

amin(input, dim, keepdim)

@spec amin(ExTorch.Tensor.t(), nil | integer() | tuple(), boolean()) ::
  ExTorch.Tensor.t()
@spec amin(ExTorch.Tensor.t(), nil | integer() | tuple(),
  keepdim: boolean(),
  out: ExTorch.Tensor.t() | nil
) :: ExTorch.Tensor.t()

See ExTorch.amin/4

Available signature calls:

  • amin(input, dim, kwargs)
  • amin(input, dim, keepdim)

amin(input, dim, keepdim, kwargs)

@spec amin(ExTorch.Tensor.t(), nil | integer() | tuple(), boolean(), [
  {:out, ExTorch.Tensor.t() | nil}
]) ::
  ExTorch.Tensor.t()
@spec amin(
  ExTorch.Tensor.t(),
  nil | integer() | tuple(),
  boolean(),
  ExTorch.Tensor.t() | nil
) ::
  ExTorch.Tensor.t()

Returns the minimum value of each slice of the input tensor in the given dimension(s) dim.

If keepdim is true, the output tensor is of the same size as input except in the dimension(s) dim where it is of size 1. Otherwise, dim is squeezed (see ExTorch.squeeze), resulting in the output tensor having 1 (or length(dim)) fewer dimension(s).

Arguments

  • input (ExTorch.Tensor) - the input tensor.
  • dim (nil | integer() | tuple()) - the dimension(s) to reduce. If nil, then it reduces all the dimensions.

Optional arguments

  • keepdim (boolean()) - whether the output tensor has dim retained or not. Default: false
  • out (ExTorch.Tensor | nil) - the optional output pre-allocated tensor. Default: nil

Notes

The difference between ExTorch.max/ExTorch.min and ExTorch.amax/ExTorch.amin is:

  • ExTorch.amax/ExTorch.amin supports reducing on multiple dimensions
  • ExTorch.amax/ExTorch.amin does not return indices
  • ExTorch.amax/ExTorch.amin evenly distributes gradient between equal values, while max(dim)/min(dim) propagates gradient only to a single index in the source tensor.

Examples

iex> a = ExTorch.randn({4, 4})
#Tensor<
[[ 0.4138,  0.9993,  0.1177, -0.0021],
 [ 0.0340, -0.7703,  1.5916, -0.2477],
 [ 0.4927,  0.7762, -0.9214, -0.3303],
 [-0.4098, -0.1762, -1.4085, -1.4918]]
[
  size: {4, 4},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

# Get the minimum values along the last dimension
iex> ExTorch.amin(a, -1)
#Tensor<
[-0.0021, -0.7703, -0.9214, -1.4918]
[
  size: {4},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

# Get the minimum value on all dimensions
iex> ExTorch.amin(a, {0, 1})
#Tensor<
-1.4918
[size: {}, dtype: :float, device: :cpu, requires_grad: false]>

aminmax(input)

See ExTorch.aminmax/4

Available signature calls:

  • aminmax(input)

aminmax(input, dim)

@spec aminmax(
  ExTorch.Tensor.t(),
  integer() | nil
) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec aminmax(ExTorch.Tensor.t(),
  dim: integer() | nil,
  keepdim: boolean(),
  out: {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil
) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}

See ExTorch.aminmax/4

Available signature calls:

  • aminmax(input, kwargs)
  • aminmax(input, dim)

aminmax(input, dim, keepdim)

@spec aminmax(
  ExTorch.Tensor.t(),
  integer() | nil,
  boolean()
) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec aminmax(
  ExTorch.Tensor.t(),
  integer() | nil,
  keepdim: boolean(),
  out: {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil
) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}

See ExTorch.aminmax/4

Available signature calls:

  • aminmax(input, dim, kwargs)
  • aminmax(input, dim, keepdim)

aminmax(input, dim, keepdim, kwargs)

@spec aminmax(
  ExTorch.Tensor.t(),
  integer() | nil,
  boolean(),
  [{:out, {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil}]
) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec aminmax(
  ExTorch.Tensor.t(),
  integer() | nil,
  boolean(),
  {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil
) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}

Computes the minimum and maximum values of the input tensor.

It will return a tuple {min, max} containing the minimum and maximum values, respectively.

Arguments

  • input (ExTorch.Tensor) - the input tensor.

Optional arguments

  • dim (nil | integer()) - the dimension to reduce. If nil, then it computes the values over the entire input tensor. Default: nil

  • keepdim (boolean()) - whether the output tensors have dim retained or not. Default: false
  • out ({ExTorch.Tensor, ExTorch.Tensor} | nil) - the optional output pre-allocated tensors in a tuple. Default: nil

Examples

iex> a = ExTorch.randn({4, 4})
#Tensor<
[[-0.7684,  0.8360,  0.1960, -0.7748],
 [ 0.9795, -0.3725,  0.1304, -0.3627],
 [-0.6206,  0.1624,  0.8514, -1.2361],
 [-1.5297, -0.6418, -0.8179,  1.7531]]
[
  size: {4, 4},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

# Find the minimum and maximum values on the entire input
iex> {min, max} = ExTorch.aminmax(a)
iex> min
#Tensor<
-1.5297
[size: {}, dtype: :float, device: :cpu, requires_grad: false]>
iex> max
#Tensor<
1.7531
[size: {}, dtype: :float, device: :cpu, requires_grad: false]>

# Find the minimum and maximum values along the last dimension
iex> {min, max} = ExTorch.aminmax(a, -1, keepdim: true)
iex> min
#Tensor<
[[-0.7748],
 [-0.3725],
 [-1.2361],
 [-1.5297]]
[
  size: {4, 1},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>
iex> max
#Tensor<
[[0.8360],
 [0.9795],
 [0.8514],
 [1.7531]]
[
  size: {4, 1},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

any(input)

See ExTorch.any/4

Available signature calls:

  • any(input)

any(input, dim)

@spec any(ExTorch.Tensor.t(), nil | integer()) :: ExTorch.Tensor.t()
@spec any(ExTorch.Tensor.t(),
  dim: nil | integer(),
  keepdim: boolean(),
  out: nil | ExTorch.Tensor.t()
) ::
  ExTorch.Tensor.t()

See ExTorch.any/4

Available signature calls:

  • any(input, kwargs)
  • any(input, dim)

any(input, dim, keepdim)

@spec any(ExTorch.Tensor.t(), nil | integer(), boolean()) :: ExTorch.Tensor.t()
@spec any(ExTorch.Tensor.t(), nil | integer(),
  keepdim: boolean(),
  out: nil | ExTorch.Tensor.t()
) ::
  ExTorch.Tensor.t()

See ExTorch.any/4

Available signature calls:

  • any(input, dim, kwargs)
  • any(input, dim, keepdim)

any(input, dim, keepdim, kwargs)

@spec any(ExTorch.Tensor.t(), nil | integer(), boolean(), [
  {:out, nil | ExTorch.Tensor.t()}
]) ::
  ExTorch.Tensor.t()
@spec any(ExTorch.Tensor.t(), nil | integer(), boolean(), nil | ExTorch.Tensor.t()) ::
  ExTorch.Tensor.t()

Checks whether at least one element (or element along a dimension) in input evaluates to true.

If dim is nil, tests whether at least one element in input evaluates to true. Otherwise, for each row of input in the given dimension dim, returns true if at least one element in the row evaluates to true and false otherwise.

  • The keepdim and out options only work if dim is not nil.
  • If keepdim is true, the output tensor is of the same size as input, except in the dimension dim where it is of size 1. Otherwise, dim is squeezed (see ExTorch.squeeze), resulting in the output tensor having 1 fewer dimension than input.

Arguments

  • input (ExTorch.Tensor) - the input tensor.

Optional arguments

  • dim - the dimension to reduce. (nil | integer()). Default: nil

  • keepdim - whether the output tensor has dim retained or not. (boolean()). Default: false
  • out - the optional output pre-allocated tensor. (ExTorch.Tensor | nil). Default: nil

Examples

# Find if any element in a tensor is true
iex> a = ExTorch.tensor([[true, false, true], [false, true, true]])
iex> ExTorch.any(a)
#Tensor<
true
[size: {}, dtype: :bool, device: :cpu, requires_grad: false]>

# Find if any element (per dimension) is true
iex> b = ExTorch.empty({3, 3}, dtype: :bool)
#Tensor<
[[false,  true, false],
 [ true,  true,  true],
 [false, false, false]]
[
  size: {3, 3},
  dtype: :bool,
  device: :cpu,
  requires_grad: false
]>
iex> ExTorch.any(b, -1)
#Tensor<
[ true,  true, false]
[
  size: {3},
  dtype: :bool,
  device: :cpu,
  requires_grad: false
]>

# Preserve tensor dimensions
iex> ExTorch.any(b, -1, keepdim: true)
#Tensor<
[[ true],
 [ true],
 [false]]
[
  size: {3, 1},
  dtype: :bool,
  device: :cpu,
  requires_grad: false
]>

argmax(input)

@spec argmax(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()

See ExTorch.argmax/3

Available signature calls:

  • argmax(input)

argmax(input, dim)

@spec argmax(ExTorch.Tensor.t(), integer() | nil) :: ExTorch.Tensor.t()
@spec argmax(ExTorch.Tensor.t(), dim: integer() | nil, keepdim: boolean()) ::
  ExTorch.Tensor.t()

See ExTorch.argmax/3

Available signature calls:

  • argmax(input, kwargs)
  • argmax(input, dim)

argmax(input, dim, keepdim)

@spec argmax(ExTorch.Tensor.t(), integer() | nil, boolean()) :: ExTorch.Tensor.t()
@spec argmax(ExTorch.Tensor.t(), integer() | nil, [{:keepdim, boolean()}]) ::
  ExTorch.Tensor.t()

Returns the indices of the maximum value of all elements (or elements in a dimension) in the input tensor.

  • If dim is nil, it will return the index of the maximum element over the entire input tensor; otherwise, it will return the indices of the maximum values along the specified dimension.

Arguments

  • input (ExTorch.Tensor) - the input tensor.

Optional arguments

  • dim (nil | integer()) - the dimension to reduce. Default: nil

  • keepdim (boolean()) - whether the output tensor has dim retained or not. Default: false

Notes

If there are multiple maximal values then the indices of the first maximal value are returned.

Examples

iex> a = ExTorch.randn({4, 4})
#Tensor<
[[ 1.2023, -0.1142,  0.5077,  1.2127],
 [ 0.5873, -0.7416, -0.0758,  0.0578],
 [-0.8066,  1.7030,  0.2894,  0.0539],
 [ 0.2353,  0.4396, -0.1846, -0.7395]]
[
  size: {4, 4},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

# Get the overall maximum index.
iex> ExTorch.argmax(a)
#Tensor<
9
[size: {}, dtype: :long, device: :cpu, requires_grad: false]>

# Get the maximum index on the last dimension.
iex> ExTorch.argmax(a, -1)
#Tensor<
[3, 0, 1, 1]
[
  size: {4},
  dtype: :long,
  device: :cpu,
  requires_grad: false
]>

# Keep the reduced dimension on the output
iex> ExTorch.argmax(a, 0, keepdim: true)
#Tensor<
[[0, 2, 0, 0]]
[
  size: {1, 4},
  dtype: :long,
  device: :cpu,
  requires_grad: false
]>

argmin(input)

@spec argmin(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()

See ExTorch.argmin/3

Available signature calls:

  • argmin(input)

argmin(input, dim)

@spec argmin(ExTorch.Tensor.t(), integer() | nil) :: ExTorch.Tensor.t()
@spec argmin(ExTorch.Tensor.t(), dim: integer() | nil, keepdim: boolean()) ::
  ExTorch.Tensor.t()

See ExTorch.argmin/3

Available signature calls:

  • argmin(input, kwargs)
  • argmin(input, dim)

argmin(input, dim, keepdim)

@spec argmin(ExTorch.Tensor.t(), integer() | nil, boolean()) :: ExTorch.Tensor.t()
@spec argmin(ExTorch.Tensor.t(), integer() | nil, [{:keepdim, boolean()}]) ::
  ExTorch.Tensor.t()

Returns the indices of the minimum value of all elements (or elements in a dimension) in the input tensor.

  • If dim is nil, it will return the index of the minimum element over the entire input tensor; otherwise, it will return the indices of the minimum values along the specified dimension.

Arguments

  • input (ExTorch.Tensor) - the input tensor.

Optional arguments

  • dim (nil | integer()) - the dimension to reduce. Default: nil

  • keepdim (boolean()) - whether the output tensor has dim retained or not. Default: false

Notes

If there are multiple minimal values then the indices of the first minimal value are returned.

Examples

iex> a = ExTorch.randn({4, 4})
#Tensor<
[[-0.6192, -0.4204,  0.1524, -0.1544],
 [ 1.4040,  1.0165,  1.6355,  0.6480],
 [-0.6566,  1.0730, -0.1548, -0.2488],
 [-1.0406,  0.0883,  1.0485, -0.3025]]
[
  size: {4, 4},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

# Get the overall minimum index.
iex> ExTorch.argmin(a)
#Tensor<
12
[size: {}, dtype: :long, device: :cpu, requires_grad: false]>

# Get the minimum index on the last dimension.
iex> ExTorch.argmin(a, -1)
#Tensor<
[0, 3, 0, 0]
[
  size: {4},
  dtype: :long,
  device: :cpu,
  requires_grad: false
]>

# Keep the reduced dimension on the output
iex> ExTorch.argmin(a, 0, keepdim: true)
#Tensor<
[[3, 0, 2, 3]]
[
  size: {1, 4},
  dtype: :long,
  device: :cpu,
  requires_grad: false
]>

count_nonzero(input)

@spec count_nonzero(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()

See ExTorch.count_nonzero/2

Available signature calls:

  • count_nonzero(input)

count_nonzero(input, dim)

@spec count_nonzero(ExTorch.Tensor.t(), integer() | tuple() | nil) ::
  ExTorch.Tensor.t()
@spec count_nonzero(
  ExTorch.Tensor.t(),
  [{:dim, integer() | tuple() | nil}]
) :: ExTorch.Tensor.t()

Counts the number of non-zero values in the tensor input along the given dim. If no dim is specified then all non-zeros in the tensor are counted.

Arguments

  • input (ExTorch.Tensor) - the input tensor.

Optional arguments

  • dim (integer | tuple | nil) - the dimension or dimensions to reduce. If nil, all dimensions are reduced. Default: nil

Examples

iex> x = ExTorch.zeros({3, 3})
iex> x = ExTorch.index_put(x, ExTorch.gt(ExTorch.randn({3, 3}), 0.5), 1)
#Tensor<
[[0.0000, 0.0000, 1.0000],
 [0.0000, 0.0000, 0.0000],
 [1.0000, 0.0000, 0.0000]]
[
  size: {3, 3},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

# Count overall nonzero elements
iex> ExTorch.count_nonzero(x)
#Tensor<
2
[size: {}, dtype: :long, device: :cpu, requires_grad: false]>

# Count nonzero elements in the first dimension
iex> ExTorch.count_nonzero(x, 0)
#Tensor<
[1, 0, 1]
[size: {3}, dtype: :long, device: :cpu, requires_grad: false]>

dist(input, other)

See ExTorch.dist/3

Available signature calls:

  • dist(input, other)

dist(input, other, kwargs)

Returns the p-norm of (input - other).

The shapes of input and other must be broadcastable.

Arguments

  • input (ExTorch.Tensor) - the left-hand input tensor.
  • other (ExTorch.Tensor) - the right-hand input tensor.

Optional arguments

  • p (number) - the norm to be computed. Default: 2.0

Examples

iex> a = ExTorch.randn(3)
#Tensor<
[ 0.3934,  0.6799, -0.1292]
[
  size: {3},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>
iex> b = ExTorch.randn(3)
#Tensor<
[-0.3785, -1.5249,  0.2093]
[
  size: {3},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

# Compute the Euclidean norm
iex> ExTorch.dist(a, b)
#Tensor<
2.3605
[size: {}, dtype: :float, device: :cpu, requires_grad: false]>

# Compute the L1 distance
iex> ExTorch.dist(a, b, 1)
#Tensor<
3.3152
[size: {}, dtype: :float, device: :cpu, requires_grad: false]>

logsumexp(input)

@spec logsumexp(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()

See ExTorch.logsumexp/4

Available signature calls:

  • logsumexp(input)

logsumexp(input, dim)

@spec logsumexp(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil
) :: ExTorch.Tensor.t()
@spec logsumexp(ExTorch.Tensor.t(),
  dim: integer() | tuple() | nil,
  keepdim: boolean(),
  out: ExTorch.Tensor.t() | nil
) :: ExTorch.Tensor.t()

See ExTorch.logsumexp/4

Available signature calls:

  • logsumexp(input, kwargs)
  • logsumexp(input, dim)

logsumexp(input, dim, keepdim)

@spec logsumexp(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil,
  boolean()
) :: ExTorch.Tensor.t()
@spec logsumexp(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil,
  keepdim: boolean(),
  out: ExTorch.Tensor.t() | nil
) :: ExTorch.Tensor.t()

See ExTorch.logsumexp/4

Available signature calls:

  • logsumexp(input, dim, kwargs)
  • logsumexp(input, dim, keepdim)

logsumexp(input, dim, keepdim, kwargs)

@spec logsumexp(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil,
  boolean(),
  [{:out, ExTorch.Tensor.t() | nil}]
) :: ExTorch.Tensor.t()
@spec logsumexp(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil,
  boolean(),
  ExTorch.Tensor.t() | nil
) :: ExTorch.Tensor.t()

Returns the log of summed exponentials of each row of the input tensor in the given dimension dim. The computation is numerically stabilized.

For summation index $j$ given by dim and other indices $i$, the result is:

$$ \text{logsumexp}(x)_i = \log \sum_j \exp\left(x_{ij}\right) $$

If keepdim is true, the output tensor is of the same size as input except in the dimension(s) dim where it is of size 1. Otherwise, dim is squeezed (see ExTorch.squeeze), resulting in the output tensor having 1 (or length(dim)) fewer dimension(s).

Arguments

  • input (ExTorch.Tensor) - the input tensor.
  • dim (nil | integer() | tuple()) - the dimension or dimensions to reduce. If nil, all dimensions are reduced.

Optional arguments

  • keepdim (boolean) - whether the output tensor has dim retained or not. Default: false
  • out (ExTorch.Tensor | nil) - the optional output pre-allocated tensor. Default: nil

Examples

iex> a = ExTorch.randn({3, 3})
#Tensor<
[[ 0.2292, -1.0899,  0.0889],
 [-2.0117,  0.4716, -0.3893],
 [-0.9382,  1.0590, -0.0838]]
[
  size: {3, 3},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

# Compute logsumexp in all dimensions
iex> ExTorch.logsumexp(a)
#Tensor<
2.2295
[size: {}, dtype: :float, device: :cpu, requires_grad: false]>

# Compute logsumexp in the last dimension, preserve dimensions
iex> ExTorch.logsumexp(a, -1, keepdim: true)
#Tensor<
[[0.9883],
 [0.8812],
 [1.4338]]
[
  size: {3, 1},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

max(input)

See ExTorch.max/4

Available signature calls:

  • max(input)

max(input, dim)

@spec max(
  ExTorch.Tensor.t(),
  integer() | nil
) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec max(ExTorch.Tensor.t(),
  dim: integer() | nil,
  keepdim: boolean(),
  out: {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil
) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()}

See ExTorch.max/4

Available signature calls:

  • max(input, kwargs)
  • max(input, dim)

max(input, dim, keepdim)

See ExTorch.max/4

Available signature calls:

  • max(input, dim, kwargs)
  • max(input, dim, keepdim)

max(input, dim, keepdim, kwargs)

Returns the maximum value of all elements (or elements in a dimension) in the input tensor.

If dim is nil, max returns the maximum element in the tensor. Otherwise, it returns a tuple {max, argmax}, where max contains the maximum values of each row of the input tensor in the given dimension dim, and argmax is the index location of each maximum value found (see ExTorch.argmax/3).

Arguments

  • input (ExTorch.Tensor) - the input tensor.

Optional arguments

  • dim (nil | integer()) - the dimension to reduce. Default: nil

  • keepdim (boolean()) - whether the output tensor has dim retained or not. Default: false
  • out (ExTorch.Tensor | {ExTorch.Tensor, ExTorch.Tensor} | nil) - the optional output pre-allocated tensor if computing the overall maximum of a tensor, else it must be a pre-allocated tensor tuple {max, argmax}. Default: nil

Notes

  • keepdim and out won't have any effect if dim is nil.
  • Unlike PyTorch, max will not take two tensors as input as an alias for ExTorch.maximum/3; please use that function explicitly instead.

Examples

iex> a = ExTorch.randn({4, 4})
#Tensor<
[[-0.6730,  0.9223, -0.3803,  0.2369],
 [ 0.5956, -0.2750,  1.3838, -2.1479],
 [ 0.4648, -1.8987,  0.8329, -0.5854],
 [ 1.1679, -0.4866, -0.5227, -0.4399]]
[
  size: {4, 4},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

# Find the overall maximum in the tensor
iex> ExTorch.max(a)
#Tensor<
1.3838
[size: {}, dtype: :float, device: :cpu, requires_grad: false]>

# Find the maximum in the last dimension
iex> {max, argmax} = ExTorch.max(a, -1)
iex> max
#Tensor<
[0.9223, 1.3838, 0.8329, 1.1679]
[
  size: {4},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>
iex> argmax
#Tensor<
[1, 2, 2, 0]
[
  size: {4},
  dtype: :long,
  device: :cpu,
  requires_grad: false
]>

# Preserve the original number of dimensions
iex> {max, argmax} = ExTorch.max(a, -1, keepdim: true)
iex> max
#Tensor<
[[0.9223],
 [1.3838],
 [0.8329],
 [1.1679]]
[
  size: {4, 1},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>
iex> argmax
#Tensor<
[[1],
 [2],
 [2],
 [0]]
[
  size: {4, 1},
  dtype: :long,
  device: :cpu,
  requires_grad: false
]>

mean(input)

@spec mean(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()

See ExTorch.mean/5

Available signature calls:

  • mean(input)

mean(input, dim)

@spec mean(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil
) :: ExTorch.Tensor.t()
@spec mean(ExTorch.Tensor.t(),
  dim: integer() | tuple() | nil,
  keepdim: boolean(),
  dtype: ExTorch.DType.dtype(),
  out: ExTorch.Tensor.t()
) :: ExTorch.Tensor.t()

See ExTorch.mean/5

Available signature calls:

  • mean(input, kwargs)
  • mean(input, dim)

mean(input, dim, keepdim)

@spec mean(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil,
  boolean()
) :: ExTorch.Tensor.t()
@spec mean(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil,
  keepdim: boolean(),
  dtype: ExTorch.DType.dtype(),
  out: ExTorch.Tensor.t()
) :: ExTorch.Tensor.t()

See ExTorch.mean/5

Available signature calls:

  • mean(input, dim, kwargs)
  • mean(input, dim, keepdim)

mean(input, dim, keepdim, dtype)

@spec mean(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil,
  boolean(),
  ExTorch.DType.dtype()
) :: ExTorch.Tensor.t()
@spec mean(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil,
  boolean(),
  dtype: ExTorch.DType.dtype(),
  out: ExTorch.Tensor.t()
) :: ExTorch.Tensor.t()

See ExTorch.mean/5

Available signature calls:

  • mean(input, dim, keepdim, kwargs)
  • mean(input, dim, keepdim, dtype)

mean(input, dim, keepdim, dtype, kwargs)

Returns the mean value of all elements (or alongside an axis) in the input tensor.

Arguments

Optional arguments

  • dim (integer | tuple | nil) - the dimension or dimensions to reduce. If nil, all dimensions are reduced. Default: nil

  • keepdim (boolean) - whether the output tensor has dim retained or not. Default: false
  • dtype (ExTorch.DType or nil) - the desired data type of returned tensor. If specified, the input tensor is casted to dtype before the operation is performed. This is useful for preventing data type overflows. Default: nil.
  • out (ExTorch.Tensor | nil) - the optional output pre-allocated tensor. Default: nil

Examples

iex> a = ExTorch.rand({3, 3})
#Tensor<
[[0.0945, 0.3992, 0.5090],
 [0.0142, 0.1471, 0.4568],
 [0.1428, 0.2121, 0.6163]]
[
  size: {3, 3},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

# Mean of all elements in a tensor.
iex> ExTorch.mean(a)
#Tensor<
0.2880
[size: {}, dtype: :float, device: :cpu, requires_grad: false]>

# Mean of elements in the last dimension, keeping dims and casting to double
iex> ExTorch.mean(a, -1, keepdim: true, dtype: :double)
#Tensor<
[[0.3343],
 [0.2060],
 [0.3237]]
[
  size: {3, 1},
  dtype: :double,
  device: :cpu,
  requires_grad: false
]>

median(input)

See ExTorch.median/4

Available signature calls:

  • median(input)

median(input, dim)

@spec median(
  ExTorch.Tensor.t(),
  integer() | nil
) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec median(ExTorch.Tensor.t(),
  dim: integer() | nil,
  keepdim: boolean(),
  out: {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil
) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()}

See ExTorch.median/4

Available signature calls:

  • median(input, kwargs)
  • median(input, dim)

median(input, dim, keepdim)

@spec median(
  ExTorch.Tensor.t(),
  integer() | nil,
  boolean()
) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec median(
  ExTorch.Tensor.t(),
  integer() | nil,
  keepdim: boolean(),
  out: {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil
) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()}

See ExTorch.median/4

Available signature calls:

  • median(input, dim, kwargs)
  • median(input, dim, keepdim)

median(input, dim, keepdim, kwargs)

Returns the median of the values in input.

If dim is nil, it returns the median of all values in the tensor. Else, it returns a tuple {values, indices} where values contains the median of each row of input in the dimension dim, and indices contains the index of the median values found in the dimension dim.

If keepdim is true, the output tensors are of the same size as input except in the dimension dim where they are of size 1. Otherwise, dim is squeezed (see ExTorch.squeeze), resulting in the output tensors having 1 fewer dimension than input.

Arguments

Optional arguments

  • dim (integer | nil) - the dimension to reduce. If nil, all dimensions are reduced. Default: nil

  • keepdim (boolean) - whether the output tensor has dim retained or not. Default: false
  • out ({ExTorch.Tensor, ExTorch.Tensor} | nil) - the optional output pre-allocated tuple tensor. Default: nil

Notes

  • The median is not unique for input tensors with an even number of elements. In this case the lower of the two medians is returned. To compute the mean of both medians, use ExTorch.quantile with q=0.5 instead.

  • indices does not necessarily contain the first occurrence of each median value found, unless it is unique. The exact implementation details are device-specific. Do not expect the same result when run on CPU and GPU in general. For the same reason do not expect the gradients to be deterministic.

  • keepdim and out will not take effect when dim = nil.
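
The lower-median convention from the first note can be sketched in plain Python (a language-neutral illustration of the selection rule, not the ExTorch API):

```python
def lower_median(values):
    # Sort and take the element at index (n - 1) // 2, i.e. the
    # lower of the two middle elements when n is even.
    ordered = sorted(values)
    return ordered[(len(ordered) - 1) // 2]

print(lower_median([1, 2, 3, 4]))  # -> 2, the lower of the two medians (2 and 3)
```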

Examples

iex> a = ExTorch.randn({3, 3})
#Tensor<
[[-0.7721, -2.0910, -0.4622],
 [ 0.1119,  2.4266,  1.3471],
 [-0.1450, -0.2876, -2.3025]]
[
  size: {3, 3},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

# Compute overall median of the input tensor.
iex> ExTorch.median(a)
#Tensor<
-0.2876
[size: {}, dtype: :float, device: :cpu, requires_grad: false]>

# Compute the median of the last dimension, keeping dimensions.
iex> {values, indices} = ExTorch.median(a, -1, keepdim: true)
iex> values
#Tensor<
[[-0.7721],
 [ 1.3471],
 [-0.2876]]
[
  size: {3, 1},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>
iex> indices
#Tensor<
[[0],
 [2],
 [1]]
[
  size: {3, 1},
  dtype: :long,
  device: :cpu,
  requires_grad: false
]>

min(input)

See ExTorch.min/4

Available signature calls:

  • min(input)

min(input, dim)

@spec min(
  ExTorch.Tensor.t(),
  integer() | nil
) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec min(ExTorch.Tensor.t(),
  dim: integer() | nil,
  keepdim: boolean(),
  out: {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil
) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()}

See ExTorch.min/4

Available signature calls:

  • min(input, kwargs)
  • min(input, dim)

min(input, dim, keepdim)

See ExTorch.min/4

Available signature calls:

  • min(input, dim, kwargs)
  • min(input, dim, keepdim)

min(input, dim, keepdim, kwargs)

Returns the minimum value of all elements (or elements in a dimension) in the input tensor.

If dim is nil, min returns the minimum element in the tensor. Else, it returns a tuple {min, argmin}, where min contains the minimum values of each row of the input tensor in the given dimension dim, and argmin is the index location of each minimum value found (see ExTorch.argmin/3).

Arguments

Optional arguments

  • dim (nil | integer()) - the dimension to reduce. Default: nil

  • keepdim (boolean()) - whether the output tensor has dim retained or not. Default: false
  • out (ExTorch.Tensor | {ExTorch.Tensor, ExTorch.Tensor} | nil) - the optional output pre-allocated tensor if computing the overall minimum of a tensor, else it must be a pre-allocated tensor tuple {min, argmin}. Default: nil

Notes

  • keepdim and out won't take any effect if dim is nil.
  • Unlike PyTorch, min will not take two tensors as input as an alias for ExTorch.minimum/3; use that function explicitly instead.

Examples

iex> a = ExTorch.randn({4, 4})
#Tensor<
[[-0.6730,  0.9223, -0.3803,  0.2369],
 [ 0.5956, -0.2750,  1.3838, -2.1479],
 [ 0.4648, -1.8987,  0.8329, -0.5854],
 [ 1.1679, -0.4866, -0.5227, -0.4399]]
[
  size: {4, 4},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

# Find the overall minimum in the tensor
iex> ExTorch.min(a)
#Tensor<
-2.1479
[size: {}, dtype: :float, device: :cpu, requires_grad: false]>

# Find the minimum in the last dimension
iex> {min, argmin} = ExTorch.min(a, -1)
iex> min
#Tensor<
[-0.6730, -2.1479, -1.8987, -0.5227]
[
  size: {4},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>
iex> argmin
#Tensor<
[0, 3, 1, 2]
[
  size: {4},
  dtype: :long,
  device: :cpu,
  requires_grad: false
]>

# Preserve the original number of dimensions
iex> {min, argmin} = ExTorch.min(a, -1, keepdim: true)
iex> min
#Tensor<
[[-0.6730],
 [-2.1479],
 [-1.8987],
 [-0.5227]]
[
  size: {4, 1},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>
iex> argmin
#Tensor<
[[0],
 [3],
 [1],
 [2]]
[
  size: {4, 1},
  dtype: :long,
  device: :cpu,
  requires_grad: false
]>

mode(input)

See ExTorch.mode/4

Available signature calls:

  • mode(input)

mode(input, dim)

@spec mode(
  ExTorch.Tensor.t(),
  integer() | nil
) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec mode(ExTorch.Tensor.t(),
  dim: integer() | nil,
  keepdim: boolean(),
  out: {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil
) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}

See ExTorch.mode/4

Available signature calls:

  • mode(input, kwargs)
  • mode(input, dim)

mode(input, dim, keepdim)

@spec mode(
  ExTorch.Tensor.t(),
  integer() | nil,
  boolean()
) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec mode(
  ExTorch.Tensor.t(),
  integer() | nil,
  keepdim: boolean(),
  out: {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil
) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}

See ExTorch.mode/4

Available signature calls:

  • mode(input, dim, kwargs)
  • mode(input, dim, keepdim)

mode(input, dim, keepdim, kwargs)

@spec mode(
  ExTorch.Tensor.t(),
  integer() | nil,
  boolean(),
  [{:out, {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil}]
) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec mode(
  ExTorch.Tensor.t(),
  integer() | nil,
  boolean(),
  {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil
) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}

Returns the mode of the values in input across dimension dim.

It returns a tuple {values, indices} where values contains the mode of each row of input in the dimension dim, and indices contains the index of the mode values found in the dimension dim.

If keepdim is true, the output tensors are of the same size as input except in the dimension dim where they are of size 1. Otherwise, dim is squeezed (see ExTorch.squeeze), resulting in the output tensors having 1 fewer dimension than input.

Arguments

Optional arguments

  • dim (integer | nil) - the dimension to reduce. Default: -1

  • keepdim (boolean) - whether the output tensor has dim retained or not. Default: false
  • out ({ExTorch.Tensor, ExTorch.Tensor} | nil) - the optional output pre-allocated tuple tensor. Default: nil

Notes

This function is not defined for CUDA tensors yet.

Examples

iex> a = ExTorch.randint(5, {3, 4}, dtype: :int32)
#Tensor<
[[4, 4, 4, 4],
 [3, 4, 1, 1],
 [3, 2, 2, 0]]
[
  size: {3, 4},
  dtype: :int,
  device: :cpu,
  requires_grad: false
]>

# Compute the mode in the last dimension.
iex> {values, indices} = ExTorch.mode(a)
iex> values
#Tensor<
[4, 1, 2]
[size: {3}, dtype: :int, device: :cpu, requires_grad: false]>
iex> indices
#Tensor<
[3, 3, 2]
[size: {3}, dtype: :long, device: :cpu, requires_grad: false]>

# Compute the mode in the first dimension, keeping output dimensions.
iex> {values, indices} = ExTorch.mode(a, 0, keepdim: true)
iex> values
#Tensor<
[[3, 4, 1, 0]]
[
  size: {1, 4},
  dtype: :int,
  device: :cpu,
  requires_grad: false
]>
iex> indices
#Tensor<
[[2, 1, 1, 2]]
[
  size: {1, 4},
  dtype: :long,
  device: :cpu,
  requires_grad: false
]>

nanmean(input)

@spec nanmean(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()

See ExTorch.nanmean/5

Available signature calls:

  • nanmean(input)

nanmean(input, dim)

@spec nanmean(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil
) :: ExTorch.Tensor.t()
@spec nanmean(ExTorch.Tensor.t(),
  dim: integer() | tuple() | nil,
  keepdim: boolean(),
  dtype: ExTorch.DType.dtype(),
  out: ExTorch.Tensor.t()
) :: ExTorch.Tensor.t()

See ExTorch.nanmean/5

Available signature calls:

  • nanmean(input, kwargs)
  • nanmean(input, dim)

nanmean(input, dim, keepdim)

@spec nanmean(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil,
  boolean()
) :: ExTorch.Tensor.t()
@spec nanmean(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil,
  keepdim: boolean(),
  dtype: ExTorch.DType.dtype(),
  out: ExTorch.Tensor.t()
) :: ExTorch.Tensor.t()

See ExTorch.nanmean/5

Available signature calls:

  • nanmean(input, dim, kwargs)
  • nanmean(input, dim, keepdim)

nanmean(input, dim, keepdim, dtype)

@spec nanmean(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil,
  boolean(),
  ExTorch.DType.dtype()
) :: ExTorch.Tensor.t()
@spec nanmean(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil,
  boolean(),
  dtype: ExTorch.DType.dtype(),
  out: ExTorch.Tensor.t()
) :: ExTorch.Tensor.t()

See ExTorch.nanmean/5

Available signature calls:

  • nanmean(input, dim, keepdim, kwargs)
  • nanmean(input, dim, keepdim, dtype)

nanmean(input, dim, keepdim, dtype, kwargs)

@spec nanmean(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil,
  boolean(),
  ExTorch.DType.dtype(),
  [{:out, ExTorch.Tensor.t()}]
) :: ExTorch.Tensor.t()
@spec nanmean(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil,
  boolean(),
  ExTorch.DType.dtype(),
  ExTorch.Tensor.t()
) :: ExTorch.Tensor.t()

Computes the mean of all non-NaN elements along the specified dimensions.

This function is identical to ExTorch.mean/5 when there are no :nan values in the input tensor. In the presence of :nan, ExTorch.mean will propagate the :nan to the output whereas ExTorch.nanmean will ignore the NaN values.

If keepdim is true, the output tensor is of the same size as input except in the dimension(s) dim where it is of size 1. Otherwise, dim is squeezed (see ExTorch.squeeze), resulting in the output tensor having 1 (or length(dim)) fewer dimension(s).
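
The NaN-skipping behaviour can be sketched in plain Python (an illustration of the semantics, not the ExTorch implementation):

```python
import math

def nanmean(values):
    # Keep only the non-NaN entries, then average them.
    kept = [v for v in values if not math.isnan(v)]
    return sum(kept) / len(kept)

print(nanmean([float("nan"), 1.0, 2.0]))  # -> 1.5, averaging only 1.0 and 2.0
```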

Arguments

Optional arguments

  • dim (integer | tuple | nil) - the dimension or dimensions to reduce. If nil, all dimensions are reduced. Default: nil

  • keepdim (boolean) - whether the output tensor has dim retained or not. Default: false
  • dtype (ExTorch.DType or nil) - the desired data type of returned tensor. If specified, the input tensor is casted to dtype before the operation is performed. This is useful for preventing data type overflows. Default: nil.
  • out (ExTorch.Tensor | nil) - the optional output pre-allocated tensor. Default: nil

Examples

iex> a = ExTorch.tensor([[:nan, 1.0, 2.0], [-1.0, :nan, 2.0], [1.0, -1.0, :nan]])
#Tensor<
[[    nan,  1.0000,  2.0000],
 [-1.0000,     nan,  2.0000],
 [ 1.0000, -1.0000,     nan]]
[
  size: {3, 3},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

# Compute mean of all array elements without :nan
iex> ExTorch.nanmean(a)
#Tensor<
0.6667
[size: {}, dtype: :float, device: :cpu, requires_grad: false]>

# Compute mean of all array elements on the last dimension, keep all dims
iex> ExTorch.nanmean(a, -1, keepdim: true)
#Tensor<
[[1.5000],
 [0.5000],
 [0.0000]]
[
  size: {3, 1},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

nanmedian(input)

See ExTorch.nanmedian/4

Available signature calls:

  • nanmedian(input)

nanmedian(input, dim)

@spec nanmedian(
  ExTorch.Tensor.t(),
  integer() | nil
) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec nanmedian(ExTorch.Tensor.t(),
  dim: integer() | nil,
  keepdim: boolean(),
  out: {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil
) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()}

See ExTorch.nanmedian/4

Available signature calls:

  • nanmedian(input, kwargs)
  • nanmedian(input, dim)

nanmedian(input, dim, keepdim)

@spec nanmedian(
  ExTorch.Tensor.t(),
  integer() | nil,
  boolean()
) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec nanmedian(
  ExTorch.Tensor.t(),
  integer() | nil,
  keepdim: boolean(),
  out: {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil
) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()}

See ExTorch.nanmedian/4

Available signature calls:

  • nanmedian(input, dim, kwargs)
  • nanmedian(input, dim, keepdim)

nanmedian(input, dim, keepdim, kwargs)

@spec nanmedian(
  ExTorch.Tensor.t(),
  integer() | nil,
  boolean(),
  [{:out, {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil}]
) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec nanmedian(
  ExTorch.Tensor.t(),
  integer() | nil,
  boolean(),
  {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil
) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()}

Returns the median of the values in input, ignoring NaN values.

This function is identical to ExTorch.median/4 when there are no :nan values in input. When input has one or more :nan values, ExTorch.median will always return :nan, while this function will return the median of the non-NaN elements in input. If all the elements in input are NaN it will also return :nan.

If dim is nil, it returns the median of all values in the tensor. Else, it returns a tuple {values, indices} where values contains the median of each row of input in the dimension dim, and indices contains the index of the median values found in the dimension dim.

Arguments

Optional arguments

  • dim (integer | nil) - the dimension to reduce. If nil, all dimensions are reduced. Default: nil

  • keepdim (boolean) - whether the output tensor has dim retained or not. Default: false
  • out ({ExTorch.Tensor, ExTorch.Tensor} | nil) - the optional output pre-allocated tuple tensor. Default: nil

Examples

iex> input =
...>   ExTorch.tensor([
...>     [:nan, 1.0, 2.0],
...>     [-1.0, :nan, 2.0],
...>     [1.0, -1.0, :nan]
...>   ])

# Compute median of the tensor without :nan
iex> ExTorch.nanmedian(input)
#Tensor<
1.
[size: {}, dtype: :float, device: :cpu, requires_grad: false]>

# Compute median of the tensor in the last dimension, keeping all dimensions
iex> {values, indices} = ExTorch.nanmedian(input, -1, keepdim: true)
iex> values
#Tensor<
[[ 1.],
 [-1.],
 [-1.]]
[
  size: {3, 1},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>
iex> indices
#Tensor<
[[1],
 [0],
 [1]]
[
  size: {3, 1},
  dtype: :long,
  device: :cpu,
  requires_grad: false
]>

nanquantile(input, q)

@spec nanquantile(
  ExTorch.Tensor.t(),
  float() | ExTorch.Tensor.t()
) :: ExTorch.Tensor.t()

See ExTorch.nanquantile/6

Available signature calls:

  • nanquantile(input, q)

nanquantile(input, q, dim)

@spec nanquantile(
  ExTorch.Tensor.t(),
  float() | ExTorch.Tensor.t(),
  integer() | nil
) :: ExTorch.Tensor.t()
@spec nanquantile(
  ExTorch.Tensor.t(),
  float() | ExTorch.Tensor.t(),
  dim: integer() | nil,
  keepdim: boolean(),
  interpolation: :linear | :lower | :higher | :midpoint | :nearest,
  out: ExTorch.Tensor.t() | nil
) :: ExTorch.Tensor.t()

See ExTorch.nanquantile/6

Available signature calls:

  • nanquantile(input, q, kwargs)
  • nanquantile(input, q, dim)

nanquantile(input, q, dim, keepdim)

@spec nanquantile(
  ExTorch.Tensor.t(),
  float() | ExTorch.Tensor.t(),
  integer() | nil,
  boolean()
) :: ExTorch.Tensor.t()
@spec nanquantile(
  ExTorch.Tensor.t(),
  float() | ExTorch.Tensor.t(),
  integer() | nil,
  keepdim: boolean(),
  interpolation: :linear | :lower | :higher | :midpoint | :nearest,
  out: ExTorch.Tensor.t() | nil
) :: ExTorch.Tensor.t()

See ExTorch.nanquantile/6

Available signature calls:

  • nanquantile(input, q, dim, kwargs)
  • nanquantile(input, q, dim, keepdim)

nanquantile(input, q, dim, keepdim, interpolation)

@spec nanquantile(
  ExTorch.Tensor.t(),
  float() | ExTorch.Tensor.t(),
  integer() | nil,
  boolean(),
  :linear | :lower | :higher | :midpoint | :nearest
) :: ExTorch.Tensor.t()
@spec nanquantile(
  ExTorch.Tensor.t(),
  float() | ExTorch.Tensor.t(),
  integer() | nil,
  boolean(),
  interpolation: :linear | :lower | :higher | :midpoint | :nearest,
  out: ExTorch.Tensor.t() | nil
) :: ExTorch.Tensor.t()

See ExTorch.nanquantile/6

Available signature calls:

  • nanquantile(input, q, dim, keepdim, kwargs)
  • nanquantile(input, q, dim, keepdim, interpolation)

nanquantile(input, q, dim, keepdim, interpolation, kwargs)

@spec nanquantile(
  ExTorch.Tensor.t(),
  float() | ExTorch.Tensor.t(),
  integer() | nil,
  boolean(),
  :linear | :lower | :higher | :midpoint | :nearest,
  [{:out, ExTorch.Tensor.t() | nil}]
) :: ExTorch.Tensor.t()
@spec nanquantile(
  ExTorch.Tensor.t(),
  float() | ExTorch.Tensor.t(),
  integer() | nil,
  boolean(),
  :linear | :lower | :higher | :midpoint | :nearest,
  ExTorch.Tensor.t() | nil
) :: ExTorch.Tensor.t()

This is a variant of ExTorch.quantile/6 that “ignores” NaN values, computing the quantiles q as if NaN values in input did not exist. If all values in a reduced row are NaN then the quantiles for that reduction will be NaN. See the documentation for ExTorch.quantile/6.

Arguments

Optional arguments

  • dim (integer | nil) - the dimension to reduce. If nil, input will be flattened before computation. Default: nil

  • keepdim (boolean) - whether the output has dim retained or not. Default: false
  • interpolation (atom) - the interpolation method to use when the desired quantile lies between two data points. Can be :linear, :lower, :higher, :midpoint, or :nearest. Default: :linear.
  • out (ExTorch.Tensor | nil) - the optional output pre-allocated tensor. Default: nil

Examples

# Compute quantiles throughout all tensor elements, ignoring :nan values
iex> a = ExTorch.tensor([:nan, 1, 2])
iex> ExTorch.nanquantile(a, 0.5)
#Tensor<
[1.5000]
[size: {1}, dtype: :float, device: :cpu, requires_grad: false]>

# Compute quantiles across specific dimensions, ignoring :nan values
iex> a = ExTorch.tensor([[:nan, :nan], [1, 2]])
iex> ExTorch.nanquantile(a, 0.5, dim: 0)
#Tensor<
[[1., 2.]]
[
  size: {1, 2},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>
iex> ExTorch.nanquantile(a, 0.5, dim: 1)
#Tensor<
[[   nan, 1.5000]]
[
  size: {1, 2},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

nansum(input)

@spec nansum(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()

See ExTorch.nansum/4

Available signature calls:

  • nansum(input)

nansum(input, dim)

@spec nansum(ExTorch.Tensor.t(), integer() | tuple() | nil) :: ExTorch.Tensor.t()
@spec nansum(ExTorch.Tensor.t(),
  dim: integer() | tuple() | nil,
  keepdim: boolean(),
  dtype: ExTorch.DType.dtype()
) :: ExTorch.Tensor.t()

See ExTorch.nansum/4

Available signature calls:

  • nansum(input, kwargs)
  • nansum(input, dim)

nansum(input, dim, keepdim)

@spec nansum(ExTorch.Tensor.t(), integer() | tuple() | nil, boolean()) ::
  ExTorch.Tensor.t()
@spec nansum(ExTorch.Tensor.t(), integer() | tuple() | nil,
  keepdim: boolean(),
  dtype: ExTorch.DType.dtype()
) :: ExTorch.Tensor.t()

See ExTorch.nansum/4

Available signature calls:

  • nansum(input, dim, kwargs)
  • nansum(input, dim, keepdim)

nansum(input, dim, keepdim, dtype)

@spec nansum(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil,
  boolean(),
  ExTorch.DType.dtype()
) ::
  ExTorch.Tensor.t()
@spec nansum(ExTorch.Tensor.t(), integer() | tuple() | nil, boolean(), [
  {:dtype, ExTorch.DType.dtype()}
]) ::
  ExTorch.Tensor.t()

Returns the sum of all elements (or alongside an axis) in the input tensor, treating NaNs as zeros.

Arguments

Optional arguments

  • dim (integer | tuple | nil) - the dimension or dimensions to reduce. If nil, all dimensions are reduced. Default: nil

  • keepdim (boolean) - whether the output tensor has dim retained or not. Default: false
  • dtype (ExTorch.DType or nil) - the desired data type of returned tensor. If specified, the input tensor is casted to dtype before the operation is performed. This is useful for preventing data type overflows. Default: nil.

Examples

iex> input =
...>   ExTorch.tensor(
...>     [
...>       [4, 4, 4, :nan],
...>       [3, :nan, 1, 1],
...>       [3, 2, :nan, 0]
...>     ]
...>   )

# Sum all elements in the tensor, ignoring NaNs
iex> ExTorch.nansum(input)
#Tensor<
22.
[size: {}, dtype: :float, device: :cpu, requires_grad: false]>

# Sum all elements in the last dimension, keeping dims and casting to double.
iex> ExTorch.nansum(input, -1, keepdim: true, dtype: :double)
#Tensor<
[[12.],
 [ 5.],
 [ 5.]]
[
  size: {3, 1},
  dtype: :double,
  device: :cpu,
  requires_grad: false
]>

prod(input)

@spec prod(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()

See ExTorch.prod/4

Available signature calls:

  • prod(input)

prod(input, dim)

@spec prod(ExTorch.Tensor.t(), integer() | tuple() | nil) :: ExTorch.Tensor.t()
@spec prod(ExTorch.Tensor.t(),
  dim: integer() | tuple() | nil,
  keepdim: boolean(),
  dtype: ExTorch.DType.dtype()
) :: ExTorch.Tensor.t()

See ExTorch.prod/4

Available signature calls:

  • prod(input, kwargs)
  • prod(input, dim)

prod(input, dim, keepdim)

@spec prod(ExTorch.Tensor.t(), integer() | tuple() | nil, boolean()) ::
  ExTorch.Tensor.t()
@spec prod(ExTorch.Tensor.t(), integer() | tuple() | nil,
  keepdim: boolean(),
  dtype: ExTorch.DType.dtype()
) :: ExTorch.Tensor.t()

See ExTorch.prod/4

Available signature calls:

  • prod(input, dim, kwargs)
  • prod(input, dim, keepdim)

prod(input, dim, keepdim, dtype)

@spec prod(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil,
  boolean(),
  ExTorch.DType.dtype()
) ::
  ExTorch.Tensor.t()
@spec prod(ExTorch.Tensor.t(), integer() | tuple() | nil, boolean(), [
  {:dtype, ExTorch.DType.dtype()}
]) ::
  ExTorch.Tensor.t()

Returns the product of all elements (or alongside an axis) in the input tensor.

Arguments

Optional arguments

  • dim (integer | tuple | nil) - the dimension or dimensions to reduce. If nil, all dimensions are reduced. Default: nil

  • keepdim (boolean) - whether the output tensor has dim retained or not. Default: false
  • dtype (ExTorch.DType or nil) - the desired data type of returned tensor. If specified, the input tensor is casted to dtype before the operation is performed. This is useful for preventing data type overflows. Default: nil.

Notes

  • keepdim does not apply when dim = nil.

Examples

iex> a = ExTorch.randint(1, 3, {3, 4})
#Tensor<
[[1., 1., 1., 2.],
 [1., 2., 1., 1.],
 [1., 2., 1., 2.]]
[
  size: {3, 4},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

# Multiply all elements in the tensor
iex> ExTorch.prod(a)
#Tensor<
16.
[size: {}, dtype: :float, device: :cpu, requires_grad: false]>

# Multiply all elements in the last dimension, keep all dimensions
iex> ExTorch.prod(a, -1, keepdim: true)
#Tensor<
[[2.],
 [2.],
 [4.]]
[
  size: {3, 1},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

quantile(input, q)

@spec quantile(
  ExTorch.Tensor.t(),
  float() | ExTorch.Tensor.t()
) :: ExTorch.Tensor.t()

See ExTorch.quantile/6

Available signature calls:

  • quantile(input, q)

quantile(input, q, dim)

@spec quantile(
  ExTorch.Tensor.t(),
  float() | ExTorch.Tensor.t(),
  integer() | nil
) :: ExTorch.Tensor.t()
@spec quantile(
  ExTorch.Tensor.t(),
  float() | ExTorch.Tensor.t(),
  dim: integer() | nil,
  keepdim: boolean(),
  interpolation: :linear | :lower | :higher | :midpoint | :nearest,
  out: ExTorch.Tensor.t() | nil
) :: ExTorch.Tensor.t()

See ExTorch.quantile/6

Available signature calls:

  • quantile(input, q, kwargs)
  • quantile(input, q, dim)

quantile(input, q, dim, keepdim)

@spec quantile(
  ExTorch.Tensor.t(),
  float() | ExTorch.Tensor.t(),
  integer() | nil,
  boolean()
) :: ExTorch.Tensor.t()
@spec quantile(
  ExTorch.Tensor.t(),
  float() | ExTorch.Tensor.t(),
  integer() | nil,
  keepdim: boolean(),
  interpolation: :linear | :lower | :higher | :midpoint | :nearest,
  out: ExTorch.Tensor.t() | nil
) :: ExTorch.Tensor.t()

See ExTorch.quantile/6

Available signature calls:

  • quantile(input, q, dim, kwargs)
  • quantile(input, q, dim, keepdim)

quantile(input, q, dim, keepdim, interpolation)

@spec quantile(
  ExTorch.Tensor.t(),
  float() | ExTorch.Tensor.t(),
  integer() | nil,
  boolean(),
  :linear | :lower | :higher | :midpoint | :nearest
) :: ExTorch.Tensor.t()
@spec quantile(
  ExTorch.Tensor.t(),
  float() | ExTorch.Tensor.t(),
  integer() | nil,
  boolean(),
  interpolation: :linear | :lower | :higher | :midpoint | :nearest,
  out: ExTorch.Tensor.t() | nil
) :: ExTorch.Tensor.t()

See ExTorch.quantile/6

Available signature calls:

  • quantile(input, q, dim, keepdim, kwargs)
  • quantile(input, q, dim, keepdim, interpolation)

quantile(input, q, dim, keepdim, interpolation, kwargs)

@spec quantile(
  ExTorch.Tensor.t(),
  float() | ExTorch.Tensor.t(),
  integer() | nil,
  boolean(),
  :linear | :lower | :higher | :midpoint | :nearest,
  [{:out, ExTorch.Tensor.t() | nil}]
) :: ExTorch.Tensor.t()
@spec quantile(
  ExTorch.Tensor.t(),
  float() | ExTorch.Tensor.t(),
  integer() | nil,
  boolean(),
  :linear | :lower | :higher | :midpoint | :nearest,
  ExTorch.Tensor.t() | nil
) :: ExTorch.Tensor.t()

Computes the q-th quantiles of each row of the input tensor along the dimension dim.

To compute the quantile, we map q in [0, 1] to the range of indices [0, n] to find the location of the quantile in the sorted input. If the quantile lies between two data points a < b with indices i and j in the sorted order, the result is computed according to the given interpolation method as follows:

  • :linear: a + (b - a) * fraction, where fraction is the fractional part of the computed quantile index.
  • :lower: a.
  • :higher: b.
  • :nearest: a or b, whichever's index is closer to the computed quantile index (rounding down for .5 fractions).
  • :midpoint: (a + b) / 2.

If q is a 1D tensor, the first dimension of the output represents the quantiles and has size equal to the size of q, the remaining dimensions are what remains from the reduction.
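
The index mapping above can be sketched in plain Python for the :linear method (an illustration of the computation, not the ExTorch implementation):

```python
def quantile_linear(values, q):
    # Map q in [0, 1] to a (possibly fractional) index into the sorted data,
    # then interpolate linearly between the two neighbouring data points.
    ordered = sorted(values)
    index = q * (len(ordered) - 1)
    lower = int(index)
    fraction = index - lower
    a = ordered[lower]
    b = ordered[min(lower + 1, len(ordered) - 1)]
    return a + (b - a) * fraction

print(quantile_linear([0.0, 1.0, 2.0, 3.0], 0.6))  # index 1.8 -> between a=1.0 and b=2.0
```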

Arguments

Optional arguments

  • dim (integer | nil) - the dimension to reduce. If nil, input will be flattened before computation. Default: nil

  • keepdim (boolean) - whether the output has dim retained or not. Default: false
  • interpolation (atom) - the interpolation method to use when the desired quantile lies between two data points. Can be :linear, :lower, :higher, :midpoint, or :nearest. Default: :linear.
  • out (ExTorch.Tensor | nil) - the optional output pre-allocated tensor. Default: nil

Examples

iex> a = ExTorch.randn({2, 3})
#Tensor<
[[ 2.1818, -1.5810,  0.6152],
 [ 0.2525, -0.7425,  0.3769]]
[
  size: {2, 3},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>
iex> q = ExTorch.tensor([0.25, 0.5, 0.75])

# Quantiles in the last dimension, keep output dimensions
iex> ExTorch.quantile(a, q, dim: 1, keepdim: true)
#Tensor<
[[[-0.4829],
  [-0.2450]],

 [[ 0.6152],
  [ 0.2525]],

 [[ 1.3985],
  [ 0.3147]]]
[
  size: {3, 2, 1},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

iex> a = ExTorch.arange(4)
#Tensor<
[0.0000, 1.0000, 2.0000, 3.0000]
[
  size: {4},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

# Interpolation modes
iex> ExTorch.quantile(a, 0.6, interpolation: :linear)
#Tensor<
[1.8000]
[size: {1}, dtype: :float, device: :cpu, requires_grad: false]>
iex> ExTorch.quantile(a, 0.6, interpolation: :lower)
#Tensor<
[1.]
[size: {1}, dtype: :float, device: :cpu, requires_grad: false]>
iex> ExTorch.quantile(a, 0.6, interpolation: :higher)
#Tensor<
[2.]
[size: {1}, dtype: :float, device: :cpu, requires_grad: false]>
iex> ExTorch.quantile(a, 0.6, interpolation: :midpoint)
#Tensor<
[1.5000]
[size: {1}, dtype: :float, device: :cpu, requires_grad: false]>
iex> ExTorch.quantile(a, 0.6, interpolation: :nearest)
#Tensor<
[2.]
[size: {1}, dtype: :float, device: :cpu, requires_grad: false]>

std(input)

Alias to std_dev/1

std(input, dim)

@spec std(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil
) :: ExTorch.Tensor.t()
@spec std(ExTorch.Tensor.t(),
  dim: integer() | tuple() | nil,
  correction: integer(),
  keepdim: boolean(),
  out: ExTorch.Tensor.t() | nil
) :: ExTorch.Tensor.t()

Alias to std_dev/2

std(input, dim, correction)

@spec std(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil,
  integer()
) :: ExTorch.Tensor.t()
@spec std(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil,
  correction: integer(),
  keepdim: boolean(),
  out: ExTorch.Tensor.t() | nil
) :: ExTorch.Tensor.t()

Alias to std_dev/3

std(input, dim, correction, keepdim)

@spec std(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil,
  integer(),
  boolean()
) :: ExTorch.Tensor.t()
@spec std(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil,
  integer(),
  keepdim: boolean(),
  out: ExTorch.Tensor.t() | nil
) :: ExTorch.Tensor.t()

Alias to std_dev/4

std(input, dim, correction, keepdim, kwargs)

@spec std(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil,
  integer(),
  boolean(),
  [{:out, ExTorch.Tensor.t() | nil}]
) :: ExTorch.Tensor.t()
@spec std(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil,
  integer(),
  boolean(),
  ExTorch.Tensor.t() | nil
) :: ExTorch.Tensor.t()

Alias to std_dev/5

std_dev(input)

@spec std_dev(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()

See ExTorch.std_dev/5

Available signature calls:

  • std_dev(input)

std_dev(input, dim)

@spec std_dev(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil
) :: ExTorch.Tensor.t()
@spec std_dev(ExTorch.Tensor.t(),
  dim: integer() | tuple() | nil,
  correction: integer(),
  keepdim: boolean(),
  out: ExTorch.Tensor.t() | nil
) :: ExTorch.Tensor.t()

See ExTorch.std_dev/5

Available signature calls:

  • std_dev(input, kwargs)
  • std_dev(input, dim)

std_dev(input, dim, correction)

@spec std_dev(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil,
  integer()
) :: ExTorch.Tensor.t()
@spec std_dev(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil,
  correction: integer(),
  keepdim: boolean(),
  out: ExTorch.Tensor.t() | nil
) :: ExTorch.Tensor.t()

See ExTorch.std_dev/5

Available signature calls:

  • std_dev(input, dim, kwargs)
  • std_dev(input, dim, correction)

std_dev(input, dim, correction, keepdim)

@spec std_dev(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil,
  integer(),
  boolean()
) :: ExTorch.Tensor.t()
@spec std_dev(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil,
  integer(),
  keepdim: boolean(),
  out: ExTorch.Tensor.t() | nil
) :: ExTorch.Tensor.t()

See ExTorch.std_dev/5

Available signature calls:

  • std_dev(input, dim, correction, kwargs)
  • std_dev(input, dim, correction, keepdim)

std_dev(input, dim, correction, keepdim, kwargs)

@spec std_dev(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil,
  integer(),
  boolean(),
  [{:out, ExTorch.Tensor.t() | nil}]
) :: ExTorch.Tensor.t()
@spec std_dev(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil,
  integer(),
  boolean(),
  ExTorch.Tensor.t() | nil
) :: ExTorch.Tensor.t()

Calculates the standard deviation over the dimensions specified by dim.

dim can be a single dimension, list of dimensions, or nil to reduce over all dimensions.

The standard deviation ($\sigma$) is calculated as

$$ \sigma = \sqrt{\frac{1}{N - \delta N} \sum\_{i=0}^{N - 1} (x\_i - \bar{x})^2} $$

where $x$ is the sample set of elements, $\bar{x}$ is the sample mean, $N$ is the number of samples and $\delta N$ is the correction.
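As a plain-Python sketch of this formula (the `std_dev` helper below is illustrative, not ExTorch's implementation), the correction term only changes the divisor:

```python
import math

def std_dev(samples, correction=1):
    # sigma = sqrt( 1/(N - correction) * sum((x_i - mean)^2) )
    n = len(samples)
    mean = sum(samples) / n
    return math.sqrt(sum((x - mean) ** 2 for x in samples) / (n - correction))

data = [1.0, 2.0, 3.0, 4.0]
print(std_dev(data, correction=1))  # Bessel-corrected (sample) standard deviation
print(std_dev(data, correction=0))  # population standard deviation
```

With correction 1 (the default) this matches the usual sample standard deviation; with correction 0 it matches the population standard deviation.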

If keepdim is true, the output tensor is of the same size as input except in the dimension dim where it is of size 1. Otherwise, dim is squeezed (see ExTorch.squeeze), resulting in the output tensor having 1 fewer dimension than input.

Arguments

Optional arguments

  • dim (integer | tuple | nil) - the dimension or dimensions to reduce. If nil, all dimensions are reduced. Default: nil

  • correction (integer) - difference between the sample size and sample degrees of freedom. Defaults to Bessel's correction. Default: 1
  • keepdim (boolean) - whether the output tensor has dim retained or not. Default: false
  • out (ExTorch.Tensor | nil) - the optional output pre-allocated tensor. Default: nil

Examples

iex> a = ExTorch.randn({4, 4})
#Tensor<
[[ 0.0686,  0.7169,  0.2143,  1.5755],
 [-1.6080,  0.9169, -0.0937,  1.2906],
 [ 0.5432,  2.4151, -0.3814,  0.2830],
 [-0.0724,  0.7037, -0.1951, -0.1191]]
[
  size: {4, 4},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

# Compute the standard deviation of all tensor elements
iex> ExTorch.std(a)
#Tensor<
0.9167
[size: {}, dtype: :float, device: :cpu, requires_grad: false]>

# Compute the standard deviation of elements in the last dimension, keeping total dimensions
iex> ExTorch.std(a, -1, keepdim: true)
#Tensor<
[[0.6804],
 [1.2957],
 [1.1984],
 [0.4194]]
[
  size: {4, 1},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

std_mean(input)

@spec std_mean(ExTorch.Tensor.t()) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}

See ExTorch.std_mean/5

Available signature calls:

  • std_mean(input)

std_mean(input, dim)

@spec std_mean(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil
) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec std_mean(ExTorch.Tensor.t(),
  dim: integer() | tuple() | nil,
  correction: integer(),
  keepdim: boolean(),
  out: {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil
) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}

See ExTorch.std_mean/5

Available signature calls:

  • std_mean(input, kwargs)
  • std_mean(input, dim)

std_mean(input, dim, correction)

@spec std_mean(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil,
  integer()
) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec std_mean(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil,
  correction: integer(),
  keepdim: boolean(),
  out: {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil
) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}

See ExTorch.std_mean/5

Available signature calls:

  • std_mean(input, dim, kwargs)
  • std_mean(input, dim, correction)

std_mean(input, dim, correction, keepdim)

@spec std_mean(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil,
  integer(),
  boolean()
) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec std_mean(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil,
  integer(),
  keepdim: boolean(),
  out: {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil
) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}

See ExTorch.std_mean/5

Available signature calls:

  • std_mean(input, dim, correction, kwargs)
  • std_mean(input, dim, correction, keepdim)

std_mean(input, dim, correction, keepdim, kwargs)

@spec std_mean(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil,
  integer(),
  boolean(),
  [{:out, {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil}]
) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec std_mean(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil,
  integer(),
  boolean(),
  {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil
) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}

Calculates the standard deviation and mean over the dimensions specified by dim.

dim can be a single dimension, list of dimensions, or nil to reduce over all dimensions. It returns a tuple {std, mean} containing the standard deviation and mean, respectively.

The standard deviation ($\sigma$) is calculated as

$$ \sigma = \sqrt{\frac{1}{N - \delta N} \sum\_{i=0}^{N - 1} (x\_i - \bar{x})^2} $$

where $x$ is the sample set of elements, $\bar{x}$ is the sample mean, $N$ is the number of samples and $\delta N$ is the correction.

If keepdim is true, the output tensors are of the same size as input except in the dimension dim where they are of size 1. Otherwise, dim is squeezed (see ExTorch.squeeze), resulting in the output tensors having 1 fewer dimension than input.

Arguments

Optional arguments

  • dim (integer | tuple | nil) - the dimension or dimensions to reduce. If nil, all dimensions are reduced. Default: nil

  • correction (integer) - difference between the sample size and sample degrees of freedom. Defaults to Bessel's correction. Default: 1
  • keepdim (boolean) - whether the output tensor has dim retained or not. Default: false
  • out ({ExTorch.Tensor, ExTorch.Tensor} | nil) - a tuple containing the optional output pre-allocated tensors. Default: nil

Examples

iex> a = ExTorch.randn({4, 4})
#Tensor<
[[-0.8204,  0.0761, -0.5242,  0.7905],
 [ 0.4202,  0.5431, -0.9726,  0.7407],
 [-1.5224,  1.1669, -1.4509,  0.0034],
 [-0.8064,  1.2111,  1.3384, -1.2709]]
[
  size: {4, 4},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

# Compute standard deviation and mean of all tensor elements
iex> {std, mean} = ExTorch.std_mean(a)
iex> std
#Tensor<
0.9926
[size: {}, dtype: :float, device: :cpu, requires_grad: false]>
iex> mean
#Tensor<
-0.0673
[size: {}, dtype: :float, device: :cpu, requires_grad: false]>

# Compute standard deviation and mean of all tensor elements in the last dimension
iex> {std, mean} = ExTorch.std_mean(a, -1, keepdim: true)
iex> std
#Tensor<
[[0.7121],
 [0.7815],
 [1.2874],
 [1.3500]]
[
  size: {4, 1},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>
iex> mean
#Tensor<
[[-0.1195],
 [ 0.1829],
 [-0.4507],
 [ 0.1180]]
[
  size: {4, 1},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

sum(input)

See ExTorch.sum/4

Available signature calls:

  • sum(input)

sum(input, dim)

@spec sum(ExTorch.Tensor.t(), integer() | tuple() | nil) :: ExTorch.Tensor.t()
@spec sum(ExTorch.Tensor.t(),
  dim: integer() | tuple() | nil,
  keepdim: boolean(),
  dtype: ExTorch.DType.dtype()
) :: ExTorch.Tensor.t()

See ExTorch.sum/4

Available signature calls:

  • sum(input, kwargs)
  • sum(input, dim)

sum(input, dim, keepdim)

@spec sum(ExTorch.Tensor.t(), integer() | tuple() | nil, boolean()) ::
  ExTorch.Tensor.t()
@spec sum(ExTorch.Tensor.t(), integer() | tuple() | nil,
  keepdim: boolean(),
  dtype: ExTorch.DType.dtype()
) :: ExTorch.Tensor.t()

See ExTorch.sum/4

Available signature calls:

  • sum(input, dim, kwargs)
  • sum(input, dim, keepdim)

sum(input, dim, keepdim, dtype)

@spec sum(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil,
  boolean(),
  ExTorch.DType.dtype()
) ::
  ExTorch.Tensor.t()
@spec sum(ExTorch.Tensor.t(), integer() | tuple() | nil, boolean(), [
  {:dtype, ExTorch.DType.dtype()}
]) ::
  ExTorch.Tensor.t()

Returns the sum of all elements (or along a dimension) in the input tensor.

Arguments

Optional arguments

  • dim (integer | tuple | nil) - the dimension or dimensions to reduce. If nil, all dimensions are reduced. Default: nil

  • keepdim (boolean) - whether the output tensor has dim retained or not. Default: false
  • dtype (ExTorch.DType or nil) - the desired data type of returned tensor. If specified, the input tensor is cast to dtype before the operation is performed. This is useful for preventing data type overflows. Default: nil.
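To see why casting matters, a small Python sketch (the `wrap_int8` helper is purely illustrative, not part of ExTorch) emulates what happens when the accumulator has only 8 bits:

```python
def wrap_int8(x):
    # emulate 8-bit two's-complement wraparound
    return (x + 128) % 256 - 128

def sum_int8(values):
    # accumulate in a (simulated) 8-bit accumulator: the sum may overflow
    acc = 0
    for v in values:
        acc = wrap_int8(acc + v)
    return acc

values = [100] * 3          # the true sum is 300, which does not fit in int8
print(sum_int8(values))     # wrapped (overflowed) result: 44
print(sum(values))          # summing in a wider dtype gives the correct 300
```

Casting to a wider dtype before reduction, as the dtype option does, avoids this wraparound.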

Examples

iex> a = ExTorch.rand({3, 3})
#Tensor<
[[0.7281, 0.9280, 0.5829],
 [0.4569, 0.4785, 0.1352],
 [0.9905, 0.0698, 0.1905]]
[
  size: {3, 3},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

# Sum all elements in a tensor.
iex> ExTorch.sum(a)
#Tensor<
4.5604
[size: {}, dtype: :float, device: :cpu, requires_grad: false]>

# Sum all elements in the last dimension, keeping dims and casting to double
iex> ExTorch.sum(a, 1, keepdim: true, dtype: :double)
#Tensor<
[[2.2390],
 [1.0707],
 [1.2507]]
[
  size: {3, 1},
  dtype: :double,
  device: :cpu,
  requires_grad: false
]>

unique(input)

See ExTorch.unique/5

Available signature calls:

  • unique(input)

unique(input, kwargs)

See ExTorch.unique/5

Available signature calls:

  • unique(input, sorted)
  • unique(input, kwargs)

unique(input, sorted, kwargs)

See ExTorch.unique/5

Available signature calls:

  • unique(input, sorted, return_inverse)
  • unique(input, sorted, kwargs)

unique(input, sorted, return_inverse, kwargs)

See ExTorch.unique/5

Available signature calls:

  • unique(input, sorted, return_inverse, return_counts)
  • unique(input, sorted, return_inverse, kwargs)

unique(input, sorted, return_inverse, return_counts, dim)

Returns the unique elements of the input tensor.

Depending on the value of return_inverse and return_counts, this function can return either a single tensor, a tuple of two tensors, or a tuple of three tensors.
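The relationship between the unique values and the inverse indices can be sketched in plain Python (the helper below is illustrative and only covers the flattened, sorted case, not ExTorch's implementation):

```python
def unique_with_inverse(values):
    # sorted unique elements plus, for each input element,
    # its index into the unique list
    uniq = sorted(set(values))
    index = {v: i for i, v in enumerate(uniq)}
    inverse = [index[v] for v in values]
    return uniq, inverse

uniq, inverse = unique_with_inverse([2, 2, -1, 0, 1, 3])
print(uniq)      # [-1, 0, 1, 2, 3]
print(inverse)   # [3, 3, 0, 1, 2, 4]
# the inverse indices reconstruct the original input
assert [uniq[i] for i in inverse] == [2, 2, -1, 0, 1, 3]
```

This is the invariant behind return_inverse: indexing the unique tensor with the inverse tensor recovers the input.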

Arguments

Optional arguments

  • sorted (boolean) - whether to sort the unique elements in ascending order before returning. Default: true
  • return_inverse (boolean) - whether to also return the indices for where elements in the original input ended up in the returned unique list. Default: false
  • return_counts (boolean) - whether to also return the counts for each unique element. Default: false
  • dim (integer | nil) - the dimension to operate upon. If nil, the unique of the flattened input is returned. Otherwise, each of the tensors indexed by the given dimension is treated as one of the elements to apply the unique operation upon. See examples for more details. Default: nil

Examples

iex> a = ExTorch.randint(-1, 4, {4, 4}, dtype: :int64)
#Tensor<
[[ 2,  2, -1,  0],
 [ 1,  3,  1, -1],
 [-1,  2,  0,  1],
 [ 3,  2, -1,  3]]
[
  size: {4, 4},
  dtype: :long,
  device: :cpu,
  requires_grad: false
]>

# Compute a tensor's unique values
iex> ExTorch.unique(a)
#Tensor<
[-1,  0,  1,  2,  3]
[
  size: {5},
  dtype: :long,
  device: :cpu,
  requires_grad: false
]>

# Compute unique values and inverse tensor
iex> {unique, inverse} = ExTorch.unique(a, return_inverse: true)
iex> inverse
#Tensor<
[[3, 3, 0, 1],
 [2, 4, 2, 0],
 [0, 3, 1, 2],
 [4, 3, 0, 4]]
[
  size: {4, 4},
  dtype: :long,
  device: :cpu,
  requires_grad: false
]>

# Compute unique values and count tensor
iex> {unique, count} = ExTorch.unique(a, return_counts: true)
iex> count
#Tensor<
[4, 2, 3, 4, 3]
[
  size: {5},
  dtype: :long,
  device: :cpu,
  requires_grad: false
]>

# Compute unique values, inverse and count tensors
iex> {unique, inverse, count} = ExTorch.unique(a, return_inverse: true, return_counts: true)

# Compute unique values across a dimension
iex> a = ExTorch.tensor([[0, 1, 1], [-1, -1, -1], [0, 1, 1]], dtype: :int64)
iex> ExTorch.unique(a, dim: 0)
#Tensor<
[[-1, -1, -1],
 [ 0,  1,  1]]
[
  size: {2, 3},
  dtype: :long,
  device: :cpu,
  requires_grad: false
]>

unique_consecutive(input)

See ExTorch.unique_consecutive/4

Available signature calls:

  • unique_consecutive(input)

unique_consecutive(input, kwargs)

@spec unique_consecutive(ExTorch.Tensor.t(),
  return_inverse: boolean(),
  return_counts: boolean(),
  dim: integer() | nil
) ::
  ExTorch.Tensor.t()
  | {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
  | {ExTorch.Tensor.t(), ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec unique_consecutive(ExTorch.Tensor.t(), boolean()) ::
  ExTorch.Tensor.t()
  | {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
  | {ExTorch.Tensor.t(), ExTorch.Tensor.t(), ExTorch.Tensor.t()}

See ExTorch.unique_consecutive/4

Available signature calls:

  • unique_consecutive(input, return_inverse)
  • unique_consecutive(input, kwargs)

unique_consecutive(input, return_inverse, kwargs)

See ExTorch.unique_consecutive/4

Available signature calls:

  • unique_consecutive(input, return_inverse, return_counts)
  • unique_consecutive(input, return_inverse, kwargs)

unique_consecutive(input, return_inverse, return_counts, dim)

Eliminates all but the first element from every consecutive group of equivalent elements.

Depending on the value of return_inverse and return_counts, this function can return either a single tensor, a tuple of two tensors, or a tuple of three tensors.

This function differs from ExTorch.unique/5 in that it only eliminates consecutive duplicate values; its semantics are similar to those of std::unique in C++.
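The consecutive-only semantics can be sketched with Python's itertools.groupby (an illustrative stand-in, not ExTorch's implementation):

```python
from itertools import groupby

def unique_consecutive(values):
    # keep only the first element of each run of equal values,
    # together with the run lengths (counts)
    uniq, counts = [], []
    for key, group in groupby(values):
        uniq.append(key)
        counts.append(sum(1 for _ in group))
    return uniq, counts

uniq, counts = unique_consecutive([1, 1, 2, 2, 3, 1, 1, 2])
print(uniq)    # [1, 2, 3, 1, 2]
print(counts)  # [2, 2, 1, 2, 1]
```

Note that the value 1 appears twice in the output because its two runs are not adjacent, which is exactly where this function diverges from a global unique.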

Arguments

Optional arguments

  • return_inverse (boolean) - whether to also return the indices for where elements in the original input ended up in the returned unique list. Default: false
  • return_counts (boolean) - whether to also return the counts for each unique element. Default: false
  • dim (integer | nil) - the dimension to operate upon. If nil, the unique of the flattened input is returned. Otherwise, each of the tensors indexed by the given dimension is treated as one of the elements to apply the unique operation upon. See examples for more details. Default: nil

Examples

iex> a = ExTorch.tensor([1, 1, 2, 2, 3, 1, 1, 2], dtype: :int32)

# Find unique consecutive elements in a tensor
iex> ExTorch.unique_consecutive(a)
#Tensor<
[1, 2, 3, 1, 2]
[
  size: {5},
  dtype: :int,
  device: :cpu,
  requires_grad: false
]>

# Compute unique values and inverse tensor
iex> {unique, inverse} = ExTorch.unique_consecutive(a, return_inverse: true)
iex> inverse
#Tensor<
[0, 0, 1, 1, 2, 3, 3, 4]
[
  size: {8},
  dtype: :long,
  device: :cpu,
  requires_grad: false
]>

# Compute unique values and count tensor
iex> {unique, count} = ExTorch.unique_consecutive(a, return_counts: true)
iex> count
#Tensor<
[2, 2, 1, 2, 1]
[
  size: {5},
  dtype: :long,
  device: :cpu,
  requires_grad: false
]>


# Compute unique values, inverse and count tensors
iex> {unique, inverse, count} = ExTorch.unique_consecutive(a, return_inverse: true, return_counts: true)

# Compute unique consecutive values across a dimension
iex> a = ExTorch.tensor([[-1, -1, -1], [0, 1, 1], [0, 1, 1], [-1, -1, -1]], dtype: :int64)
iex> ExTorch.unique_consecutive(a, dim: 0)
#Tensor<
[[-1, -1, -1],
 [ 0,  1,  1],
 [-1, -1, -1]]
[
  size: {3, 3},
  dtype: :long,
  device: :cpu,
  requires_grad: false
]>

var(input)

See ExTorch.var/5

Available signature calls:

  • var(input)

var(input, dim)

@spec var(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil
) :: ExTorch.Tensor.t()
@spec var(ExTorch.Tensor.t(),
  dim: integer() | tuple() | nil,
  correction: integer(),
  keepdim: boolean(),
  out: ExTorch.Tensor.t() | nil
) :: ExTorch.Tensor.t()

See ExTorch.var/5

Available signature calls:

  • var(input, kwargs)
  • var(input, dim)

var(input, dim, correction)

@spec var(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil,
  integer()
) :: ExTorch.Tensor.t()
@spec var(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil,
  correction: integer(),
  keepdim: boolean(),
  out: ExTorch.Tensor.t() | nil
) :: ExTorch.Tensor.t()

See ExTorch.var/5

Available signature calls:

  • var(input, dim, kwargs)
  • var(input, dim, correction)

var(input, dim, correction, keepdim)

@spec var(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil,
  integer(),
  boolean()
) :: ExTorch.Tensor.t()
@spec var(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil,
  integer(),
  keepdim: boolean(),
  out: ExTorch.Tensor.t() | nil
) :: ExTorch.Tensor.t()

See ExTorch.var/5

Available signature calls:

  • var(input, dim, correction, kwargs)
  • var(input, dim, correction, keepdim)

var(input, dim, correction, keepdim, kwargs)

@spec var(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil,
  integer(),
  boolean(),
  [{:out, ExTorch.Tensor.t() | nil}]
) :: ExTorch.Tensor.t()
@spec var(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil,
  integer(),
  boolean(),
  ExTorch.Tensor.t() | nil
) :: ExTorch.Tensor.t()

Calculates the variance over the dimensions specified by dim.

dim can be a single dimension, list of dimensions, or nil to reduce over all dimensions.

The variance ($\sigma^2$) is calculated as

$$ \sigma^2 = \frac{1}{N - \delta N} \sum\_{i=0}^{N - 1} (x\_i - \bar{x})^2 $$

where $x$ is the sample set of elements, $\bar{x}$ is the sample mean, $N$ is the number of samples and $\delta N$ is the correction.
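The effect of correction can be seen with Python's statistics module, whose variance/pvariance correspond to correction values of 1 and 0 respectively (a stdlib illustration, unrelated to ExTorch's implementation):

```python
import statistics

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

# correction = 1 (Bessel's correction): divide by N - 1
sample_var = statistics.variance(data)
# correction = 0: divide by N (population variance)
population_var = statistics.pvariance(data)

print(sample_var)       # 32 / 7
print(population_var)   # 32 / 8 = 4.0
```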

If keepdim is true, the output tensor is of the same size as input except in the dimension dim where it is of size 1. Otherwise, dim is squeezed (see ExTorch.squeeze), resulting in the output tensor having 1 fewer dimension than input.

Arguments

Optional arguments

  • dim (integer | tuple | nil) - the dimension or dimensions to reduce. If nil, all dimensions are reduced. Default: nil

  • correction (integer) - difference between the sample size and sample degrees of freedom. Defaults to Bessel's correction. Default: 1
  • keepdim (boolean) - whether the output tensor has dim retained or not. Default: false
  • out (ExTorch.Tensor | nil) - the optional output pre-allocated tensor. Default: nil

Examples

iex> a = ExTorch.randn({4, 4})
#Tensor<
[[-0.9319,  0.1259,  0.0744,  0.3516],
 [-0.1965,  0.8596, -1.2986, -0.6350],
 [-0.0211,  0.2856, -1.3375, -1.4459],
 [-0.0489,  0.4821, -0.5326, -2.3099]]
[
  size: {4, 4},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

# Compute the variance of all tensor elements
iex> ExTorch.var(a)
#Tensor<
0.7327
[size: {}, dtype: :float, device: :cpu, requires_grad: false]>

# Compute the variance of elements in the last dimension, keeping total dimensions
iex> ExTorch.var(a, -1, keepdim: true)
#Tensor<
[[0.3258],
 [0.8211],
 [0.7917],
 [1.4677]]
[
  size: {4, 1},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

var_mean(input)

@spec var_mean(ExTorch.Tensor.t()) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}

See ExTorch.var_mean/5

Available signature calls:

  • var_mean(input)

var_mean(input, dim)

@spec var_mean(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil
) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec var_mean(ExTorch.Tensor.t(),
  dim: integer() | tuple() | nil,
  correction: integer(),
  keepdim: boolean(),
  out: {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil
) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}

See ExTorch.var_mean/5

Available signature calls:

  • var_mean(input, kwargs)
  • var_mean(input, dim)

var_mean(input, dim, correction)

@spec var_mean(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil,
  integer()
) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec var_mean(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil,
  correction: integer(),
  keepdim: boolean(),
  out: {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil
) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}

See ExTorch.var_mean/5

Available signature calls:

  • var_mean(input, dim, kwargs)
  • var_mean(input, dim, correction)

var_mean(input, dim, correction, keepdim)

@spec var_mean(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil,
  integer(),
  boolean()
) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec var_mean(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil,
  integer(),
  keepdim: boolean(),
  out: {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil
) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}

See ExTorch.var_mean/5

Available signature calls:

  • var_mean(input, dim, correction, kwargs)
  • var_mean(input, dim, correction, keepdim)

var_mean(input, dim, correction, keepdim, kwargs)

@spec var_mean(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil,
  integer(),
  boolean(),
  [{:out, {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil}]
) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec var_mean(
  ExTorch.Tensor.t(),
  integer() | tuple() | nil,
  integer(),
  boolean(),
  {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil
) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}

Calculates the variance and mean over the dimensions specified by dim.

dim can be a single dimension, list of dimensions, or nil to reduce over all dimensions. It returns a tuple {var, mean} containing the variance and mean, respectively.

The variance ($\sigma^2$) is calculated as

$$ \sigma^2 = \frac{1}{N - \delta N} \sum\_{i=0}^{N - 1} (x\_i - \bar{x})^2 $$

where $x$ is the sample set of elements, $\bar{x}$ is the sample mean, $N$ is the number of samples and $\delta N$ is the correction.

If keepdim is true, the output tensors are of the same size as input except in the dimension dim where they are of size 1. Otherwise, dim is squeezed (see ExTorch.squeeze), resulting in the output tensors having 1 fewer dimension than input.

Arguments

Optional arguments

  • dim (integer | tuple | nil) - the dimension or dimensions to reduce. If nil, all dimensions are reduced. Default: nil

  • correction (integer) - difference between the sample size and sample degrees of freedom. Defaults to Bessel's correction. Default: 1
  • keepdim (boolean) - whether the output tensor has dim retained or not. Default: false
  • out ({ExTorch.Tensor, ExTorch.Tensor} | nil) - a tuple containing the optional output pre-allocated tensors. Default: nil

Examples

iex> a = ExTorch.randn({4, 4})
#Tensor<
[[-0.9319,  0.1259,  0.0744,  0.3516],
 [-0.1965,  0.8596, -1.2986, -0.6350],
 [-0.0211,  0.2856, -1.3375, -1.4459],
 [-0.0489,  0.4821, -0.5326, -2.3099]]
[
  size: {4, 4},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

# Compute variance and mean of all tensor elements
iex> {var, mean} = ExTorch.var_mean(a)
iex> var
#Tensor<
0.7327
[size: {}, dtype: :float, device: :cpu, requires_grad: false]>
iex> mean
#Tensor<
-0.4112
[size: {}, dtype: :float, device: :cpu, requires_grad: false]>

# Compute variance and mean of all tensor elements in the last dimension
iex> {var, mean} = ExTorch.var_mean(a, -1, keepdim: true)
iex> var
#Tensor<
[[0.3258],
 [0.8211],
 [0.7917],
 [1.4677]]
[
  size: {4, 1},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>
iex> mean
#Tensor<
[[-0.0950],
 [-0.3176],
 [-0.6297],
 [-0.6023]]
[
  size: {4, 1},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

Comparison operations

allclose(input, other)

@spec allclose(ExTorch.Tensor.t(), ExTorch.Tensor.t()) :: boolean()

See ExTorch.allclose/5

Available signature calls:

  • allclose(input, other)

allclose(input, other, kwargs)

@spec allclose(ExTorch.Tensor.t(), ExTorch.Tensor.t(),
  rtol: float(),
  atol: float(),
  equal_nan: boolean()
) :: boolean()
@spec allclose(ExTorch.Tensor.t(), ExTorch.Tensor.t(), float()) :: boolean()

See ExTorch.allclose/5

Available signature calls:

  • allclose(input, other, rtol)
  • allclose(input, other, kwargs)

allclose(input, other, rtol, atol)

@spec allclose(ExTorch.Tensor.t(), ExTorch.Tensor.t(), float(), float()) :: boolean()
@spec allclose(ExTorch.Tensor.t(), ExTorch.Tensor.t(), float(),
  atol: float(),
  equal_nan: boolean()
) ::
  boolean()

See ExTorch.allclose/5

Available signature calls:

  • allclose(input, other, rtol, kwargs)
  • allclose(input, other, rtol, atol)

allclose(input, other, rtol, atol, equal_nan)

@spec allclose(ExTorch.Tensor.t(), ExTorch.Tensor.t(), float(), float(), boolean()) ::
  boolean()
@spec allclose(ExTorch.Tensor.t(), ExTorch.Tensor.t(), float(), float(), [
  {:equal_nan, boolean()}
]) ::
  boolean()

This function checks if input and other satisfy the condition: $$ |\text{input} - \text{other}| \leq \texttt{atol} + \texttt{rtol} \times |\text{other}| $$ elementwise, for all elements of input and other.
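The condition above can be checked directly in plain Python (an illustrative sketch over flat lists, not ExTorch's implementation):

```python
def allclose(xs, ys, rtol=1e-5, atol=1e-8, equal_nan=False):
    # |x - y| <= atol + rtol * |y|, checked elementwise
    ok = []
    for x, y in zip(xs, ys):
        if x != x and y != y:          # both NaN (NaN != NaN)
            ok.append(equal_nan)
        else:
            ok.append(abs(x - y) <= atol + rtol * abs(y))
    return all(ok)

print(allclose([10000.0, 1e-07], [10000.1, 1e-08]))  # False
print(allclose([10000.0, 1e-08], [10000.1, 1e-09]))  # True
```

In the first call, the small elements fail the test because atol dominates at that magnitude and 9e-8 exceeds it; in the second, 9e-9 falls within atol.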

Arguments

Optional arguments

  • rtol - Relative tolerance (float). Default: 1.0e-5
  • atol - Absolute tolerance (float). Default: 1.0e-8
  • equal_nan - If true, then two NaNs will be considered equal. Default: false.

Examples

iex> ExTorch.allclose(ExTorch.tensor([10000.0, 1.0e-07]), ExTorch.tensor([10000.1, 1.0e-08]))
false
iex> ExTorch.allclose(ExTorch.tensor([10000.0, 1.0e-08]), ExTorch.tensor([10000.1, 1.0e-09]))
true
iex> ExTorch.allclose(ExTorch.tensor([1.0, :nan]), ExTorch.tensor([1.0, :nan]))
false
iex> ExTorch.allclose(ExTorch.tensor([1.0, :nan]), ExTorch.tensor([1.0, :nan]), equal_nan: true)
true

argsort(input)

@spec argsort(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()

See ExTorch.argsort/4

Available signature calls:

  • argsort(input)

argsort(input, dim)

@spec argsort(ExTorch.Tensor.t(), integer()) :: ExTorch.Tensor.t()
@spec argsort(ExTorch.Tensor.t(),
  dim: integer(),
  descending: boolean(),
  stable: boolean()
) ::
  ExTorch.Tensor.t()

See ExTorch.argsort/4

Available signature calls:

  • argsort(input, kwargs)
  • argsort(input, dim)

argsort(input, dim, descending)

@spec argsort(ExTorch.Tensor.t(), integer(), boolean()) :: ExTorch.Tensor.t()
@spec argsort(ExTorch.Tensor.t(), integer(), descending: boolean(), stable: boolean()) ::
  ExTorch.Tensor.t()

See ExTorch.argsort/4

Available signature calls:

  • argsort(input, dim, kwargs)
  • argsort(input, dim, descending)

argsort(input, dim, descending, kwargs)

@spec argsort(ExTorch.Tensor.t(), integer(), boolean(), [{:stable, boolean()}]) ::
  ExTorch.Tensor.t()
@spec argsort(ExTorch.Tensor.t(), integer(), boolean(), boolean()) ::
  ExTorch.Tensor.t()

Returns the indices that sort a tensor along a given dimension in ascending order by value.

This is the second value returned by ExTorch.sort/5. See its documentation for the exact semantics of this method. If stable is true, the sorting routine becomes stable, preserving the order of equivalent elements; if false, the relative order of values which compare equal is not guaranteed. The stable routine is slower.
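A stable argsort can be sketched in plain Python, whose built-in sort is always stable (illustrative only, not ExTorch's implementation):

```python
def argsort(values, descending=False):
    # indices that sort `values`; Python's sort is stable, so equal
    # values keep their original relative order (like stable: true)
    return sorted(range(len(values)), key=lambda i: values[i], reverse=descending)

values = [3, 1, 2, 1]
print(argsort(values))                   # [1, 3, 2, 0] - the two 1s stay in input order
print(argsort(values, descending=True))  # [0, 2, 1, 3]
```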

Arguments

Optional arguments

  • dim - the dimension to sort along (integer()). Default: -1
  • descending - controls the sorting order (ascending or descending) (boolean()). Default: false
  • stable - controls the relative order of equivalent elements (boolean()). Default: false

Examples

iex> a = ExTorch.randn({4, 4})
#Tensor<
[[-1.2732,  0.8419, -0.0140,  0.4717],
 [-1.1627, -0.2813, -0.5655, -0.1348],
 [ 1.5269, -0.2712,  0.5134, -1.5580],
 [ 0.6169, -1.0332,  0.4478, -0.9864]]
[
  size: {4, 4},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

# Sort along a specific dimension
iex> ExTorch.argsort(a, dim: 1)
#Tensor<
[[0, 2, 3, 1],
 [0, 2, 1, 3],
 [3, 1, 2, 0],
 [1, 3, 2, 0]]
[
  size: {4, 4},
  dtype: :long,
  device: :cpu,
  requires_grad: false
]>

eq(input, other)

See ExTorch.eq/3

Available signature calls:

  • eq(input, other)

eq(input, other, kwargs)

Computes element-wise equality.

The second argument can be a number or a tensor whose shape is broadcastable with the first argument. It will return a boolean tensor of the same shape as input, where a true entry indicates that the corresponding values of input and other are equal, and false otherwise.

Arguments

Optional arguments

  • out - an optional pre-allocated tensor used to store the comparison result. (ExTorch.Tensor)

Examples

# Compare against a scalar value.
iex> a = ExTorch.tensor([[1, 2], [3, 4]])
iex> ExTorch.eq(a, 1)
#Tensor<
[[ true, false],
 [false, false]]
[
  size: {2, 2},
  dtype: :bool,
  device: :cpu,
  requires_grad: false
]>

# Compare against a broadcastable value.
iex> ExTorch.eq(a, [1, 2])
#Tensor<
[[ true,  true],
 [false, false]]
[
  size: {2, 2},
  dtype: :bool,
  device: :cpu,
  requires_grad: false
]>

# Compare against another tensor.
iex> ExTorch.eq(a, ExTorch.tensor([[1, 1], [4, 4]]))
#Tensor<
[[ true, false],
 [false,  true]]
[
  size: {2, 2},
  dtype: :bool,
  device: :cpu,
  requires_grad: false
]>

equal(input, other)

@spec equal(ExTorch.Tensor.t(), ExTorch.Tensor.t()) :: boolean()

Strict element-wise equality for two tensors.

This function returns true if both inputs have the same size and elements, and false otherwise.

Arguments

Examples

iex> ExTorch.equal(ExTorch.tensor([1, 2]), ExTorch.tensor([1, 2]))
true

iex> ExTorch.equal(ExTorch.tensor([1, 2]), ExTorch.tensor([1]))
false

fmax(input, other)

See ExTorch.fmax/3

Available signature calls:

  • fmax(input, other)

fmax(input, other, kwargs)

Computes the element-wise maximum of input and other.

This is like ExTorch.maximum/3 except it handles NaNs differently: if exactly one of the two elements being compared is a NaN then the non-NaN element is taken as the maximum. Only if both elements are NaN is NaN propagated.

This function is a wrapper around C++’s std::fmax and is similar to NumPy’s fmax function.

Supports broadcasting to a common shape, type promotion, and integer and floating-point inputs.
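The NaN-handling rule can be sketched in plain Python (math.nan stands in for :nan; the fmax helper is illustrative, not ExTorch's implementation):

```python
import math

def fmax(x, y):
    # std::fmax semantics: NaN is returned only when both inputs are NaN
    if math.isnan(x):
        return y
    if math.isnan(y):
        return x
    return max(x, y)

a = [9.7, math.nan, 3.1, math.nan]
b = [-2.2, 0.5, math.nan, math.nan]
print([fmax(x, y) for x, y in zip(a, b)])  # [9.7, 0.5, 3.1, nan]
```

This mirrors the tensor example below: the lone NaNs are ignored, and only the position where both inputs are NaN stays NaN.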

Arguments

Optional arguments

  • out - an optional pre-allocated tensor used to store the comparison result. (ExTorch.Tensor)

Examples

iex> a = ExTorch.tensor([9.7, :nan, 3.1, :nan])
iex> b = ExTorch.tensor([-2.2, 0.5, :nan, :nan])
iex> ExTorch.fmax(a, b)
#Tensor<
[9.7000, 0.5000, 3.1000,    nan]
[
  size: {4},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

fmin(input, other)

See ExTorch.fmin/3

Available signature calls:

  • fmin(input, other)

fmin(input, other, kwargs)

Computes the element-wise minimum of input and other.

This is like ExTorch.minimum/3 except it handles NaNs differently: if exactly one of the two elements being compared is a NaN then the non-NaN element is taken as the minimum. Only if both elements are NaN is NaN propagated.

This function is a wrapper around C++’s std::fmin and is similar to NumPy’s fmin function.

Supports broadcasting to a common shape, type promotion, and integer and floating-point inputs.

Arguments

Optional arguments

  • out - an optional pre-allocated tensor used to store the comparison result. (ExTorch.Tensor)

Examples

iex> a = ExTorch.tensor([9.7, :nan, 3.1, :nan])
iex> b = ExTorch.tensor([-2.2, 0.5, :nan, :nan])
iex> ExTorch.fmin(a, b)
#Tensor<
[-2.2000,  0.5000,  3.1000,     nan]
[
  size: {4},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

ge(input, other)

See ExTorch.ge/3

Available signature calls:

  • ge(input, other)

ge(input, other, kwargs)

Computes input >= other element-wise.

The second argument can be a number or a tensor whose shape is broadcastable with the first argument. It will return a boolean tensor of the same shape as input, where a true entry indicates that the corresponding value of input is greater than or equal to that of other, and false otherwise.

Arguments

Optional arguments

  • out - an optional pre-allocated tensor used to store the comparison result. (ExTorch.Tensor)

Examples

# Compare against a scalar value.
iex> a = ExTorch.tensor([[1, 2], [3, 4]])
iex> ExTorch.ge(a, 2)
#Tensor<
[[false,  true],
 [ true,  true]]
[
  size: {2, 2},
  dtype: :bool,
  device: :cpu,
  requires_grad: false
]>

# Compare against a broadcastable value.
iex> ExTorch.ge(a, [2, 5])
#Tensor<
[[false, false],
 [ true, false]]
[
  size: {2, 2},
  dtype: :bool,
  device: :cpu,
  requires_grad: false
]>

# Compare against another tensor.
iex> ExTorch.ge(a, ExTorch.tensor([[3, 1], [2, 5]]))
#Tensor<
[[false,  true],
 [ true, false]]
[
  size: {2, 2},
  dtype: :bool,
  device: :cpu,
  requires_grad: false
]>

greater(input, other)

Alias to gt/2

greater(input, other, kwargs)

Alias to gt/3

greater_equal(input, other)

Alias to ge/2

greater_equal(input, other, kwargs)

Alias to ge/3

gt(input, other)

See ExTorch.gt/3

Available signature calls:

  • gt(input, other)

gt(input, other, kwargs)

Computes input > other element-wise.

The second argument can be a number or a tensor whose shape is broadcastable with the first argument. It returns a boolean tensor of the same shape as input, where a true entry indicates that the corresponding input value is strictly greater than the other value, and false otherwise.

Arguments

Optional arguments

  • out - an optional pre-allocated tensor used to store the comparison result. (ExTorch.Tensor)

Examples

# Compare against a scalar value.
iex> a = ExTorch.tensor([[1, 2], [3, 4]])
iex> ExTorch.gt(a, 2)
#Tensor<
[[false, false],
 [ true,  true]]
[
  size: {2, 2},
  dtype: :bool,
  device: :cpu,
  requires_grad: false
]>

# Compare against a broadcastable value.
iex> ExTorch.gt(a, [0, 5])
#Tensor<
[[ true, false],
 [ true, false]]
[
  size: {2, 2},
  dtype: :bool,
  device: :cpu,
  requires_grad: false
]>

# Compare against another tensor.
iex> ExTorch.gt(a, ExTorch.tensor([[0, 1], [2, 5]]))
#Tensor<
[[ true,  true],
 [ true, false]]
[
  size: {2, 2},
  dtype: :bool,
  device: :cpu,
  requires_grad: false
]>

isclose(input, other)

See ExTorch.isclose/5

Available signature calls:

  • isclose(input, other)

isclose(input, other, kwargs)

@spec isclose(ExTorch.Tensor.t(), ExTorch.Tensor.t(),
  rtol: float(),
  atol: float(),
  equal_nan: boolean()
) :: ExTorch.Tensor.t()
@spec isclose(ExTorch.Tensor.t(), ExTorch.Tensor.t(), float()) :: ExTorch.Tensor.t()

See ExTorch.isclose/5

Available signature calls:

  • isclose(input, other, rtol)
  • isclose(input, other, kwargs)

isclose(input, other, rtol, atol)

@spec isclose(ExTorch.Tensor.t(), ExTorch.Tensor.t(), float(), float()) ::
  ExTorch.Tensor.t()
@spec isclose(ExTorch.Tensor.t(), ExTorch.Tensor.t(), float(),
  atol: float(),
  equal_nan: boolean()
) ::
  ExTorch.Tensor.t()

See ExTorch.isclose/5

Available signature calls:

  • isclose(input, other, rtol, kwargs)
  • isclose(input, other, rtol, atol)

isclose(input, other, rtol, atol, equal_nan)

@spec isclose(ExTorch.Tensor.t(), ExTorch.Tensor.t(), float(), float(), boolean()) ::
  ExTorch.Tensor.t()
@spec isclose(ExTorch.Tensor.t(), ExTorch.Tensor.t(), float(), float(), [
  {:equal_nan, boolean()}
]) ::
  ExTorch.Tensor.t()

Returns a new tensor with boolean elements representing if each element of input is “close” to the corresponding element of other. Closeness is defined as:

$$ |\text{input} - \text{other}| \leq \texttt{atol} + \texttt{rtol} \times |\text{other}| $$

Where input and/or other are nonfinite, they are close if and only if they are equal, with :nan values considered equal to each other when equal_nan is true.
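
For finite inputs, the closeness test above reduces to a plain scalar predicate. The sketch below is illustrative (CloseSketch is not part of the library); its defaults match the documented rtol and atol defaults:

```elixir
defmodule CloseSketch do
  # |input - other| <= atol + rtol * |other| — the finite-value case of isclose/5.
  def close?(input, other, rtol \\ 1.0e-5, atol \\ 1.0e-8) do
    abs(input - other) <= atol + rtol * abs(other)
  end
end

CloseSketch.close?(10000.0, 10000.1)
# true
```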

Arguments

Optional arguments

  • rtol - Relative tolerance (float). Default: 1.0e-5
  • atol - Absolute tolerance (float). Default: 1.0e-8
  • equal_nan - If true, then two NaNs will be considered equal. Default: false.

Examples

iex> ExTorch.isclose(ExTorch.tensor([10000.0, 1.0e-07]), ExTorch.tensor([10000.1, 1.0e-08]))
#Tensor<
[ true, false]
[
  size: {2},
  dtype: :bool,
  device: :cpu,
  requires_grad: false
]>

iex> ExTorch.isclose(ExTorch.tensor([10000.0, 1.0e-08]), ExTorch.tensor([10000.1, 1.0e-09]))
#Tensor<
[true, true]
[
  size: {2},
  dtype: :bool,
  device: :cpu,
  requires_grad: false
]>

iex> ExTorch.isclose(ExTorch.tensor([1.0, :nan]), ExTorch.tensor([1.0, :nan]))
#Tensor<
[ true, false]
[
  size: {2},
  dtype: :bool,
  device: :cpu,
  requires_grad: false
]>

iex> ExTorch.isclose(ExTorch.tensor([1.0, :nan]), ExTorch.tensor([1.0, :nan]), equal_nan: true)
#Tensor<
[true, true]
[
  size: {2},
  dtype: :bool,
  device: :cpu,
  requires_grad: false
]>

isfinite(input)

@spec isfinite(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()

Returns a new tensor with boolean elements representing if each element is finite or not.

Real values are finite when they are not NaN (:nan), negative infinity (:ninf), or infinity (:inf). ExTorch.Complex values are finite when both their real and imaginary parts are finite.

Arguments

Examples

iex> input = ExTorch.tensor([1, :inf, 2, :ninf, :nan])
iex> ExTorch.isfinite(input)
#Tensor<
[ true, false,  true, false, false]
[
  size: {5},
  dtype: :bool,
  device: :cpu,
  requires_grad: false
]>

isin(elements, test_elements)

See ExTorch.isin/4

Available signature calls:

  • isin(elements, test_elements)

isin(elements, test_elements, assume_unique)

See ExTorch.isin/4

Available signature calls:

  • isin(elements, test_elements, kwargs)
  • isin(elements, test_elements, assume_unique)

isin(elements, test_elements, assume_unique, invert)

Tests if each element of elements is in test_elements. Returns a boolean tensor of the same shape as elements that is true for elements in test_elements and false otherwise.
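
A list-level sketch of the same membership test, assuming plain Elixir values rather than tensors (the isin binding below is a hypothetical helper):

```elixir
# Build a set from test_elements, then test each element for membership.
isin = fn elements, test_elements ->
  set = MapSet.new(test_elements)
  Enum.map(elements, &MapSet.member?(set, &1))
end

isin.([1, 2, 3, 4], [1, 3, 5])
# [true, false, true, false]
```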

Arguments

  • elements - input elements (ExTorch.Tensor | ExTorch.Scalar)

  • test_elements - values to compare against for each input element. (ExTorch.Tensor | ExTorch.Scalar)

Optional arguments

  • assume_unique - If true, assumes both elements and test_elements contain unique elements, which can speed up the calculation. Default: false. (boolean)
  • invert - If true, inverts the boolean return tensor, resulting in true values for elements not in test_elements. Default: false. (boolean)

Examples

# Check if any of the values is 2
iex> x = ExTorch.tensor([[1, 2], [3, 4]])
iex> ExTorch.isin(x, 2)
#Tensor<
[[false,  true],
 [false, false]]
[
  size: {2, 2},
  dtype: :bool,
  device: :cpu,
  requires_grad: false
]>

# Check if any of the values in x is in [1, 3, 5]
iex> ExTorch.isin(x, [1, 3, 5])
#Tensor<
[[ true, false],
 [ true, false]]
[
  size: {2, 2},
  dtype: :bool,
  device: :cpu,
  requires_grad: false
]>

# Invert result
iex> ExTorch.isin(x, ExTorch.tensor([[1, 3], [5, 4]]), invert: true)
#Tensor<
[[false,  true],
 [false, false]]
[
  size: {2, 2},
  dtype: :bool,
  device: :cpu,
  requires_grad: false
]>

isinf(input)

@spec isinf(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()

Returns a new tensor with boolean elements representing if each element is infinity (both positive and negative).

ExTorch.Complex values are infinite when either their real or imaginary part is infinite.

Arguments

Examples

iex> input = ExTorch.tensor([1, :inf, 2, :ninf, :nan])
iex> ExTorch.isinf(input)
#Tensor<
[false,  true, false,  true, false]
[
  size: {5},
  dtype: :bool,
  device: :cpu,
  requires_grad: false
]>

isnan(input)

@spec isnan(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()

Returns a new tensor with boolean elements representing if each element is :nan.

ExTorch.Complex values are NaN when either their real or imaginary part is :nan.

Arguments

Examples

iex> input = ExTorch.tensor([1, :inf, 2, :ninf, :nan])
iex> ExTorch.isnan(input)
#Tensor<
[false, false, false, false,  true]
[
  size: {5},
  dtype: :bool,
  device: :cpu,
  requires_grad: false
]>

isneginf(input)

@spec isneginf(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()

Returns a new tensor with boolean elements representing if each element is negative infinity.

ExTorch.Complex values are negative infinity when either their real or imaginary part is negative infinity.

Arguments

Examples

iex> input = ExTorch.tensor([1, :inf, 2, :ninf, :nan])
iex> ExTorch.isneginf(input)
#Tensor<
[false, false, false,  true, false]
[
  size: {5},
  dtype: :bool,
  device: :cpu,
  requires_grad: false
]>

isposinf(input)

@spec isposinf(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()

Returns a new tensor with boolean elements representing if each element is positive infinity.

ExTorch.Complex values are positive infinity when either their real or imaginary part is positive infinity.

Arguments

Examples

iex> input = ExTorch.tensor([1, :inf, 2, :ninf, :nan])
iex> ExTorch.isposinf(input)
#Tensor<
[false,  true, false, false, false]
[
  size: {5},
  dtype: :bool,
  device: :cpu,
  requires_grad: false
]>

isreal(input)

@spec isreal(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()

Returns a new tensor with boolean elements representing if each element is real valued.

All real-valued types are considered real. ExTorch.Complex values are real when their imaginary parts are zero.

Arguments

Examples

iex> input = ExTorch.tensor([1, ExTorch.Complex.complex(-2, 0), ExTorch.Complex.complex(0, 1)])
iex> ExTorch.isreal(input)
#Tensor<
[ true,  true, false]
[
  size: {3},
  dtype: :bool,
  device: :cpu,
  requires_grad: false
]>

kthvalue(input, k)

@spec kthvalue(
  ExTorch.Tensor.t(),
  integer()
) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}

See ExTorch.kthvalue/5

Available signature calls:

  • kthvalue(input, k)

kthvalue(input, k, dim)

@spec kthvalue(
  ExTorch.Tensor.t(),
  integer(),
  integer()
) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec kthvalue(
  ExTorch.Tensor.t(),
  integer(),
  dim: integer(),
  keepdim: boolean(),
  out: {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil
) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}

See ExTorch.kthvalue/5

Available signature calls:

  • kthvalue(input, k, kwargs)
  • kthvalue(input, k, dim)

kthvalue(input, k, dim, keepdim)

@spec kthvalue(
  ExTorch.Tensor.t(),
  integer(),
  integer(),
  boolean()
) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec kthvalue(
  ExTorch.Tensor.t(),
  integer(),
  integer(),
  keepdim: boolean(),
  out: {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil
) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}

See ExTorch.kthvalue/5

Available signature calls:

  • kthvalue(input, k, dim, kwargs)
  • kthvalue(input, k, dim, keepdim)

kthvalue(input, k, dim, keepdim, kwargs)

Returns a tuple {values, indices}, where values holds the kth smallest element of each row of the input tensor in the given dimension dim, and indices is the index location of each element found.

  • If dim is not given, the last dimension of the input is chosen.
  • If keepdim is true, both the values and indices tensors are the same size as input, except in the dimension dim where they are of size 1. Otherwise, dim is squeezed (see ExTorch.squeeze), resulting in both the values and indices tensors having 1 fewer dimension than the input tensor.
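
On a flat list (the dim: -1 case of a 1-D tensor), the {values, indices} contract can be sketched as follows (kth_smallest is a hypothetical helper, not part of ExTorch):

```elixir
# Pair each value with its original index, sort by value,
# then pick the (k - 1)-th entry: {kth smallest value, its original index}.
kth_smallest = fn list, k ->
  list
  |> Enum.with_index()
  |> Enum.sort_by(fn {v, _i} -> v end)
  |> Enum.at(k - 1)
end

kth_smallest.([0.0, 1.0, 2.0, 3.0, 4.0, 5.0], 4)
# {3.0, 3}
```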

Arguments

  • input - the input tensor. (ExTorch.Tensor)
  • k - k for the kth smallest value. (integer)

Optional arguments

  • dim - the dimension to find the kth smallest value along. Default: -1. (integer)
  • keepdim - whether the output tensor has dim retained or not. Default: false (boolean)
  • out - the output tuple of {values, indices} that can be optionally given as output buffers. ({ExTorch.Tensor, ExTorch.Tensor}). Default: nil

Notes

When input is a CUDA tensor and there are multiple valid kth values, this function may nondeterministically return indices for any of them.

Examples

# Retrieve the fourth smallest value.
iex> x = ExTorch.arange(6)
#Tensor<
[0.0000, 1.0000, 2.0000, 3.0000, 4.0000, 5.0000]
[
  size: {6},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>
iex> {values, indices} = ExTorch.kthvalue(x, 4)
iex> values
#Tensor<
3.
[size: {}, dtype: :float, device: :cpu, requires_grad: false]>
iex> indices
#Tensor<
3
[size: {}, dtype: :long, device: :cpu, requires_grad: false]>

# Retrieve the second smallest value in the first dimension
iex> x = ExTorch.rand({3, 4})
#Tensor<
[[0.7375, 0.2798, 0.7146, 0.0654],
 [0.0163, 0.8829, 0.8946, 0.3852],
 [0.2225, 0.3258, 0.6905, 0.1512]]
[
  size: {3, 4},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>
iex> {values, indices} = ExTorch.kthvalue(x, 2, 0, keepdim: true)
iex> values
#Tensor<
[[0.2225, 0.3258, 0.7146, 0.1512]]
[
  size: {1, 4},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>
iex> indices
#Tensor<
[[2, 2, 0, 2]]
[
  size: {1, 4},
  dtype: :long,
  device: :cpu,
  requires_grad: false
]>

le(input, other)

See ExTorch.le/3

Available signature calls:

  • le(input, other)

le(input, other, kwargs)

Computes input <= other element-wise.

The second argument can be a number or a tensor whose shape is broadcastable with the first argument. It returns a boolean tensor of the same shape as input, where a true entry indicates that the corresponding input value is less than or equal to the other value, and false otherwise.

Arguments

Optional arguments

  • out - an optional pre-allocated tensor used to store the comparison result. (ExTorch.Tensor)

Examples

# Compare against a scalar value.
iex> a = ExTorch.tensor([[1, 2], [3, 4]])
iex> ExTorch.le(a, 2)
#Tensor<
[[ true,  true],
 [false, false]]
[
  size: {2, 2},
  dtype: :bool,
  device: :cpu,
  requires_grad: false
]>

# Compare against a broadcastable value.
iex> ExTorch.le(a, [2, 4])
#Tensor<
[[ true,  true],
 [false,  true]]
[
  size: {2, 2},
  dtype: :bool,
  device: :cpu,
  requires_grad: false
]>

# Compare against another tensor.
iex> ExTorch.le(a, ExTorch.tensor([[3, 1], [2, 5]]))
#Tensor<
[[ true, false],
 [false,  true]]
[
  size: {2, 2},
  dtype: :bool,
  device: :cpu,
  requires_grad: false
]>

less(input, other)

Alias to lt/2

less(input, other, kwargs)

Alias to lt/3

less_equal(input, other)

Alias to le/2

less_equal(input, other, kwargs)

Alias to le/3

lt(input, other)

See ExTorch.lt/3

Available signature calls:

  • lt(input, other)

lt(input, other, kwargs)

Computes input < other element-wise.

The second argument can be a number or a tensor whose shape is broadcastable with the first argument. It returns a boolean tensor of the same shape as input, where a true entry indicates that the corresponding input value is strictly less than the other value, and false otherwise.

Arguments

Optional arguments

  • out - an optional pre-allocated tensor used to store the comparison result. (ExTorch.Tensor)

Examples

# Compare against a scalar value.
iex> a = ExTorch.tensor([[1, 2], [3, 4]])
iex> ExTorch.lt(a, 2)
#Tensor<
[[ true, false],
 [false, false]]
[
  size: {2, 2},
  dtype: :bool,
  device: :cpu,
  requires_grad: false
]>

# Compare against a broadcastable value.
iex> ExTorch.lt(a, [1, 5])
#Tensor<
[[false,  true],
 [false,  true]]
[
  size: {2, 2},
  dtype: :bool,
  device: :cpu,
  requires_grad: false
]>

# Compare against another tensor.
iex> ExTorch.lt(a, ExTorch.tensor([[0, 1], [2, 5]]))
#Tensor<
[[false, false],
 [false,  true]]
[
  size: {2, 2},
  dtype: :bool,
  device: :cpu,
  requires_grad: false
]>

maximum(input, other)

See ExTorch.maximum/3

Available signature calls:

  • maximum(input, other)

maximum(input, other, kwargs)

@spec maximum(ExTorch.Tensor.t(), ExTorch.Tensor.t(), [
  {:out, ExTorch.Tensor.t() | nil}
]) ::
  ExTorch.Tensor.t()
@spec maximum(ExTorch.Tensor.t(), ExTorch.Tensor.t(), ExTorch.Tensor.t() | nil) ::
  ExTorch.Tensor.t()

Computes the element-wise maximum of input and other.

Arguments

Optional arguments

  • out - an optional pre-allocated tensor used to store the comparison result. (ExTorch.Tensor)

Notes

If one of the elements being compared is a NaN, then that element is returned. ExTorch.maximum/3 is not supported for tensors with complex dtypes.
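
Unlike fmax/3, a NaN operand wins here. With the :nan atom standing in for NaN, the scalar rule can be sketched as (propagating_max is an illustrative helper, not an ExTorch function):

```elixir
# maximum/3 propagates NaN: if either operand is NaN, the result is NaN.
propagating_max = fn
  :nan, _ -> :nan
  _, :nan -> :nan
  a, b -> max(a, b)
end

propagating_max.(:nan, 3.1)
# :nan — fmax/3 would instead return 3.1
```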

Examples

iex> a = ExTorch.tensor([1, 2, -1])
iex> b = ExTorch.tensor([3, 0, 4])
iex> ExTorch.maximum(a, b)
#Tensor<
[3, 2, 4]
[size: {3}, dtype: :int, device: :cpu, requires_grad: false]>

minimum(input, other)

See ExTorch.minimum/3

Available signature calls:

  • minimum(input, other)

minimum(input, other, kwargs)

@spec minimum(ExTorch.Tensor.t(), ExTorch.Tensor.t(), [
  {:out, ExTorch.Tensor.t() | nil}
]) ::
  ExTorch.Tensor.t()
@spec minimum(ExTorch.Tensor.t(), ExTorch.Tensor.t(), ExTorch.Tensor.t() | nil) ::
  ExTorch.Tensor.t()

Computes the element-wise minimum of input and other.

Arguments

Optional arguments

  • out - an optional pre-allocated tensor used to store the comparison result. (ExTorch.Tensor)

Notes

If one of the elements being compared is a NaN, then that element is returned. ExTorch.minimum/3 is not supported for tensors with complex dtypes.

Examples

iex> a = ExTorch.tensor([1, 2, -1])
iex> b = ExTorch.tensor([3, 0, 4])
iex> ExTorch.minimum(a, b)
#Tensor<
[ 1,  0, -1]
[
  size: {3},
  dtype: :int,
  device: :cpu,
  requires_grad: false
]>

msort(input)

@spec msort(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()

See ExTorch.msort/2

Available signature calls:

  • msort(input)

msort(input, kwargs)

@spec msort(
  ExTorch.Tensor.t(),
  [{:out, ExTorch.Tensor.t() | nil}]
) :: ExTorch.Tensor.t()
@spec msort(ExTorch.Tensor.t(), ExTorch.Tensor.t() | nil) :: ExTorch.Tensor.t()

Sorts the elements of the input tensor along its first dimension in ascending order by value.

Arguments

Optional arguments

  • out (ExTorch.Tensor) - an optional pre-allocated tensor used to store the sorted result. Default: nil.

Examples

iex> t = ExTorch.randn({3, 4})
#Tensor<
[[-1.5470, -1.5603, -0.9216,  3.0246],
 [ 0.3064,  1.1371,  0.3475,  1.3003],
 [-2.0710, -0.0693, -1.5537, -0.3430]]
[
  size: {3, 4},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

iex> ExTorch.msort(t)
#Tensor<
[[-2.0710, -1.5603, -1.5537, -0.3430],
 [-1.5470, -0.0693, -0.9216,  1.3003],
 [ 0.3064,  1.1371,  0.3475,  3.0246]]
[
  size: {3, 4},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

ne(input, other)

See ExTorch.ne/3

Available signature calls:

  • ne(input, other)

ne(input, other, kwargs)

Computes input != other element-wise.

The second argument can be a number or a tensor whose shape is broadcastable with the first argument. It returns a boolean tensor of the same shape as input, where a true entry indicates that the corresponding input and other values are not equal, and false otherwise.

Arguments

Optional arguments

  • out - an optional pre-allocated tensor used to store the comparison result. (ExTorch.Tensor)

Examples

# Compare against a scalar value.
iex> a = ExTorch.tensor([[1, 2], [3, 4]])
iex> ExTorch.ne(a, 2)
#Tensor<
[[ true, false],
 [ true,  true]]
[
  size: {2, 2},
  dtype: :bool,
  device: :cpu,
  requires_grad: false
]>

# Compare against a broadcastable value.
iex> ExTorch.ne(a, [1, 5])
#Tensor<
[[false,  true],
 [ true,  true]]
[
  size: {2, 2},
  dtype: :bool,
  device: :cpu,
  requires_grad: false
]>

# Compare against another tensor.
iex> ExTorch.ne(a, ExTorch.tensor([[0, 2], [2, 5]]))
#Tensor<
[[ true, false],
 [ true,  true]]
[
  size: {2, 2},
  dtype: :bool,
  device: :cpu,
  requires_grad: false
]>

not_equal(input, other)

Alias to ne/2

not_equal(input, other, kwargs)

Alias to ne/3

sort(input)

See ExTorch.sort/5

Available signature calls:

  • sort(input)

sort(input, dim)

@spec sort(
  ExTorch.Tensor.t(),
  integer()
) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec sort(ExTorch.Tensor.t(),
  dim: integer(),
  descending: boolean(),
  stable: boolean(),
  out: {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil
) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}

See ExTorch.sort/5

Available signature calls:

  • sort(input, kwargs)
  • sort(input, dim)

sort(input, dim, descending)

@spec sort(
  ExTorch.Tensor.t(),
  integer(),
  boolean()
) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec sort(
  ExTorch.Tensor.t(),
  integer(),
  descending: boolean(),
  stable: boolean(),
  out: {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil
) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}

See ExTorch.sort/5

Available signature calls:

  • sort(input, dim, kwargs)
  • sort(input, dim, descending)

sort(input, dim, descending, kwargs)

See ExTorch.sort/5

Available signature calls:

  • sort(input, dim, descending, stable)
  • sort(input, dim, descending, kwargs)

sort(input, dim, descending, stable, kwargs)

Sorts the elements of the input tensor along a given dimension in ascending order by value.

  • If dim is not given, the last dimension of the input is chosen.
  • If descending is true then the elements are sorted in descending order by value.
  • If stable is true then the sorting routine becomes stable, preserving the order of equivalent elements.

A tuple of {values, indices} is returned, where the values are the sorted values and indices are the indices of the elements in the original input tensor.
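
The {values, indices} contract can be sketched on a flat list (sort_with_indices is a hypothetical helper, not part of ExTorch):

```elixir
# Tag each value with its original position, sort by value,
# and split back into the sorted values and their source indices.
sort_with_indices = fn list ->
  list
  |> Enum.with_index()
  |> Enum.sort_by(fn {v, _i} -> v end)
  |> Enum.unzip()
end

sort_with_indices.([0.3, -0.1, 0.2])
# {[-0.1, 0.2, 0.3], [1, 2, 0]}
```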

Arguments

Optional arguments

  • dim - the dimension to sort along. (integer()). Default: -1
  • descending - controls the sorting order (ascending or descending). (boolean()). Default: false
  • stable - controls the relative order of equivalent elements. (boolean()). Default: false
  • out - the output tuple of {values, indices} that can be optionally given as output buffers. ({ExTorch.Tensor, ExTorch.Tensor}). Default: nil

Examples

iex> a = ExTorch.randn({3, 4})
#Tensor<
[[ 0.7517,  0.5590, -0.1417, -0.1662],
 [-0.1247,  0.5669,  0.0484,  0.4289],
 [ 0.0876, -0.5951, -1.0296,  0.0093]]
[
  size: {3, 4},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>

# Sort tensor on the last dimension
iex> {values, indices} = ExTorch.sort(a)
iex> values
#Tensor<
[[-0.1662, -0.1417,  0.5590,  0.7517],
 [-0.1247,  0.0484,  0.4289,  0.5669],
 [-1.0296, -0.5951,  0.0093,  0.0876]]
[
  size: {3, 4},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>
iex> indices
#Tensor<
[[3, 2, 1, 0],
 [0, 2, 3, 1],
 [2, 1, 3, 0]]
[
  size: {3, 4},
  dtype: :long,
  device: :cpu,
  requires_grad: false
]>

# Sort tensor on the first dimension while reusing values and indices
iex> ExTorch.sort(a, 0, out: {values, indices})
iex> values
#Tensor<
[[-0.1247, -0.5951, -1.0296, -0.1662],
 [ 0.0876,  0.5590, -0.1417,  0.0093],
 [ 0.7517,  0.5669,  0.0484,  0.4289]]
[
  size: {3, 4},
  dtype: :float,
  device: :cpu,
  requires_grad: false
]>
iex> indices
#Tensor<
[[1, 2, 2, 0],
 [2, 0, 0, 2],
 [0, 1, 1, 1]]
[
  size: {3, 4},
  dtype: :long,
  device: :cpu,
  requires_grad: false
]>

topk(input, k)

See ExTorch.topk/6

Available signature calls:

  • topk(input, k)

topk(input, k, dim)

@spec topk(
  ExTorch.Tensor.t(),
  integer(),
  integer()
) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec topk(
  ExTorch.Tensor.t(),
  integer(),
  dim: integer(),
  largest: boolean(),
  sorted: boolean(),
  out: {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil
) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}

See ExTorch.topk/6

Available signature calls:

  • topk(input, k, kwargs)
  • topk(input, k, dim)

topk(input, k, dim, kwargs)

@spec topk(
  ExTorch.Tensor.t(),
  integer(),
  integer(),
  largest: boolean(),
  sorted: boolean(),
  out: {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil
) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec topk(
  ExTorch.Tensor.t(),
  integer(),
  integer(),
  boolean()
) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}

See ExTorch.topk/6

Available signature calls:

  • topk(input, k, dim, largest)
  • topk(input, k, dim, kwargs)

topk(input, k, dim, largest, kwargs)

See ExTorch.topk/6

Available signature calls:

  • topk(input, k, dim, largest, sorted)
  • topk(input, k, dim, largest, kwargs)

topk(input, k, dim, largest, sorted, kwargs)

Returns the k largest elements of the given input tensor along a given dimension.

  • If dim is not given, the last dimension of the input is chosen.
  • If largest is false then the k smallest elements are returned.
  • A tuple of {values, indices} is returned with the values and indices of the largest k elements of each row of the input tensor in the given dimension dim.
  • If sorted is true, the returned k elements are themselves sorted.
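
On a single row, these rules can be sketched in plain Elixir (topk_row is a hypothetical helper; it reproduces the first row of the first example below):

```elixir
# Sort value/index pairs (descending when largest is true), keep the
# first k, and split into {values, indices}.
topk_row = fn list, k, largest ->
  list
  |> Enum.with_index()
  |> Enum.sort_by(fn {v, _i} -> v end, if(largest, do: :desc, else: :asc))
  |> Enum.take(k)
  |> Enum.unzip()
end

topk_row.([-1, 3, 10, -2, 0, 4, 5], 3, true)
# {[10, 5, 4], [2, 6, 5]}
```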

Arguments

Optional arguments

  • dim (integer) - the dimension to sort along. Default: -1
  • largest (boolean) - controls whether to return largest or smallest elements. Default: true
  • sorted (boolean) - controls whether to return the elements in sorted order. Default: true
  • out ({ExTorch.Tensor, ExTorch.Tensor} | nil) - the output tuple of {values, indices} that can be optionally given as output buffers. Default: nil

Examples

# Retrieve the top-3 elements in the last dimension.
iex> input = ExTorch.tensor([
...>   [-1, 3, 10, -2, 0, 4, 5],
...>   [5, -5, 2, 3, 7, 20, 1],
...>   [0, 1, 2, 3, 4, 5, 6]
...> ])
iex> {values, indices} = ExTorch.topk(input, 3)
iex> values
#Tensor<
[[10,  5,  4],
 [20,  7,  5],
 [ 6,  5,  4]]
[
  size: {3, 3},
  dtype: :int,
  device: :cpu,
  requires_grad: false
]>
iex> indices
#Tensor<
[[2, 6, 5],
 [5, 4, 0],
 [6, 5, 4]]
[
  size: {3, 3},
  dtype: :long,
  device: :cpu,
  requires_grad: false
]>

# Retrieve the top-2 smallest elements in the first dimension.
iex> {values, indices} = ExTorch.topk(input, 2, dim: 0, largest: false)
iex> values
#Tensor<
[[-1, -5,  2, -2,  0,  4,  1],
 [ 0,  1,  2,  3,  4,  5,  5]]
[
  size: {2, 7},
  dtype: :int,
  device: :cpu,
  requires_grad: false
]>
iex> indices
#Tensor<
[[0, 1, 1, 0, 0, 0, 1],
 [2, 2, 2, 1, 2, 2, 0]]
[
  size: {2, 7},
  dtype: :long,
  device: :cpu,
  requires_grad: false
]>

Other operations

resolve_conj(input)

@spec resolve_conj(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()

Returns a new tensor with materialized conjugation if input’s conjugate bit is set to true, else returns input. The output tensor will always have its conjugate bit set to false.

Arguments

Examples

# Create a conjugated view.
iex> a = ExTorch.rand({3, 3}, dtype: :complex128)
iex> b = ExTorch.conj(a)
iex> ExTorch.Tensor.is_conj(b)
true

# Materialize the view.
iex> c = ExTorch.resolve_conj(b)
iex> ExTorch.Tensor.is_conj(c)
false

view_as_complex(input)

@spec view_as_complex(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()

Returns a view of input as a complex tensor.

For an input real tensor of size $$ (\text{m1}, \text{m2}, \cdots, \text{mi}, 2) $$, this function returns a new complex tensor of size $$ (\text{m1}, \text{m2}, \cdots, \text{mi}) $$, where the last dimension of the input tensor is expected to represent the real and imaginary components of complex numbers.

Arguments

Notes

view_as_complex/1 is only supported for tensors with ExTorch.DType :float64 and :float32. The input is expected to have the last dimension of size 2. In addition, the tensor must have a stride of 1 for its last dimension. The strides of all other dimensions must be even numbers.
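
At the list level, the reinterpretation amounts to pairing consecutive real and imaginary components, as in this illustrative sketch ({re, im} tuples stand in for complex values):

```elixir
# Group the flat [re, im, re, im, ...] data into {re, im} pairs, mirroring
# how the trailing dimension of size 2 becomes a single complex value.
flat = [2.0, -1.0, 0.0, -1.0, -2.0, -2.0]

flat
|> Enum.chunk_every(2)
|> Enum.map(fn [re, im] -> {re, im} end)
# [{2.0, -1.0}, {0.0, -1.0}, {-2.0, -2.0}]
```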

Examples

iex> x = ExTorch.randint(-3, 3, {5, 2})
#Tensor<
[[ 2., -1.],
 [ 0., -1.],
 [-2., -2.],
 [ 2.,  0.],
 [ 1., -1.]]
[
  size: {5, 2},
  dtype: :double,
  device: :cpu,
  requires_grad: false
]>

iex> ExTorch.view_as_complex(x)
#Tensor<
[ 2.-1.j,  0.-1.j, -2.-2.j,  2.+0.j,  1.-1.j]
[
  size: {5},
  dtype: :complex_double,
  device: :cpu,
  requires_grad: false
]>