ExTorch (extorch v0.2.0)
The ExTorch namespace contains data structures for multi-dimensional tensors and defines mathematical operations over them.
Additionally, it provides utilities for efficient serialization of tensors and arbitrary types, among other helpers.
It has a CUDA counterpart that enables you to run your tensor computations on an NVIDIA GPU with compute capability >= 3.0.
Summary
Per-process settings
Get the default device on which ExTorch.Tensor structs are allocated.
Get the current default floating point dtype for the current process.
Sets the default device on which ExTorch.Tensor structs are allocated.
Sets the default floating point dtype of the current process to dtype.
Tensor creation
Returns a 1-D tensor of size $\left\lceil \frac{\text{end} - \text{start}}{\text{step}} \right\rceil$
with values from the interval [start, end) taken with common difference
step beginning from start.
Constructs a complex tensor with its real part equal to real and its
imaginary part equal to imag.
Returns a tensor filled with uninitialized data. The shape of the tensor is
defined by the tuple argument size.
Returns an uninitialized tensor, with the same size as input.
See ExTorch.eye/3
Returns a 2-D tensor with ones on the diagonal and zeros elsewhere.
Returns a tensor filled with the scalar value scalar, with the shape defined
by the variable argument size.
Returns a tensor filled with the scalar value fill_value, with the same size as input.
Creates a one-dimensional tensor of size steps whose values are evenly
spaced from start to end, inclusive. That is, the values are
Creates a one-dimensional tensor of size steps whose values are evenly
spaced from ${{\text{{base}}}}^{{\text{{start}}}}$ to
${{\text{{base}}}}^{{\text{{end}}}}$, inclusive, on a logarithmic scale
with base base. That is, the values are
Returns a tensor filled with the scalar value 1, with the shape defined
by the variable argument size.
Returns a tensor filled with the scalar value 1, with the same size as input.
Constructs a complex tensor whose elements are Cartesian coordinates
corresponding to the polar coordinates with absolute value abs and
angle angle.
Returns a tensor filled with random numbers from a uniform distribution on the interval $[0, 1)$
Returns a tensor filled with random numbers from a uniform distribution
on the interval $[0, 1)$, with the same size as input.
Returns a tensor filled with random integers generated uniformly
between low (inclusive) and high (exclusive).
Returns a tensor filled with random integers generated uniformly
between low (inclusive) and high (exclusive),
with the same size as input.
Returns a tensor filled with random numbers from a normal distribution
with mean 0 and variance 1 (also called the standard normal
distribution).
Returns a tensor filled with random numbers from a normal distribution
with mean 0 and variance 1 (also called the standard normal
distribution), with the same size as input.
Constructs a tensor with data.
Returns a tensor filled with the scalar value 0, with the shape defined
by the variable argument size.
Returns a tensor filled with the scalar value 0, with the same size as input.
Tensor manipulation
Returns a view of the tensor conjugated and with the last two dimensions transposed.
Concatenates the given sequence of tensors in the given dimension.
All tensors must either have the same shape (except in the concatenating dimension) or be empty.
Attempts to split a tensor into the specified number of chunks. Each chunk is a view of the input tensor.
Creates a new tensor by horizontally stacking the tensors in tensors.
Alias to cat/1
Alias to cat/2
Alias to cat/3
Alias to cat/1
Alias to cat/2
Alias to cat/3
Returns a view of input with a flipped conjugate bit.
If input has a non-complex dtype, this function just returns input.
Embeds the values of the src tensor into input along the diagonal elements of input,
with respect to dim1 and dim2.
Splits input, a tensor with three or more dimensions, into multiple tensors
depthwise according to indices_or_sections. Each split is a view of input.
Stack tensors in sequence depthwise (along third axis).
Gathers values along an axis specified by dim.
Splits input, a tensor with one or more dimensions, into multiple tensors horizontally according to indices_or_sections.
Each split is a view of input.
Stack tensors in sequence horizontally (column wise).
Moves the dimension(s) of input at the position(s) in source to the position(s) in destination.
Returns a new tensor that is a narrowed version of input tensor.
Same as ExTorch.narrow/4 except this returns a copy rather than shared storage.
This is primarily for sparse tensors, which do not have a shared-storage narrow method.
Retrieve the indices of all non-zero elements in a tensor.
Returns a view of the original tensor input with its dimensions permuted.
Returns a tensor with the same data and number of elements as input, but
with the specified shape.
Alias to vstack/1
Alias to vstack/2
Writes all values from the tensor src into input at the indices specified in the index tensor.
For each value in src, its output index is specified by its index in src for dimension != dim and by
the corresponding value in index for dimension = dim.
Adds all values from the tensor src into input at the indices specified in the index tensor in a similar
fashion to ExTorch.scatter/6.
Reduces all values from the src tensor to the indices specified in the index tensor in the input tensor
using the applied reduction defined via the reduce argument (:sum, :prod, :mean, :amax, :amin).
For each value in src, it is reduced to an index in input which is specified by its index in src for
dimension != dim and by the corresponding value in index for dimension = dim. If include_self: true,
the values in the input tensor are included in the reduction.
Embeds the values of the src tensor into input at the given index.
This function returns a tensor with fresh storage; it does not create a view.
Embeds the values of the src tensor into input at the given dimension.
This function returns a tensor with fresh storage; it does not create a view.
Splits the tensor into chunks. Each chunk is a view of the original tensor.
Returns a tensor with all specified dimensions of input of size 1 removed.
Concatenates a sequence of tensors along a new dimension.
Alias to transpose/3
Alias to transpose/3
Expects input to be <= 2-D tensor and transposes dimensions 0 and 1.
Returns a new tensor with the elements of input at the given indices.
Selects values from input at the 1-dimensional indices from indices along the given dim.
Splits a tensor into multiple sub-tensors, all of which are views of input,
along dimension dim according to the indices or number of sections specified
by indices_or_sections.
Returns a tensor that is a transposed version of input. The given dimensions dim0 and dim1 are swapped.
Inserts a dimension of size one into a tensor at the given position.
Stack tensors in sequence vertically (row wise).
Tensor indexing
Index a tensor given a list of integers, ranges, tensors, nil or
:ellipsis.
Accumulate the elements of alpha times source into the input tensor by adding
to the indices in the order given in index.
Copies the elements of source into the input tensor by selecting the indices
in the order given in index.
Assign a value into a tensor given a single or a sequence of indices.
Accumulate the elements of source into the self tensor by accumulating to the indices in
the order given in index using the reduction given by the reduce argument.
Returns a new tensor which indexes the input tensor along dimension dim
using the entries in index (whose dtype is :long).
Returns a new 1-D tensor which indexes the input tensor according to the
boolean mask mask which has dtype :bool.
Slices the input tensor along the selected dimension at the given index.
This function returns a view of the original tensor with the given dimension removed.
Create a slice to index a tensor.
Pointwise math operations
Returns a new tensor containing imaginary values of the input tensor.
The returned tensor and input share the same underlying storage.
Returns a new tensor containing real values of the input tensor.
The returned tensor and input share the same underlying storage.
Reduction operations
Check if all elements (or all elements along a dimension) in input evaluate to true.
Returns the maximum value of each slice of the input tensor in the given dimension(s) dim.
Returns the minimum value of each slice of the input tensor in the given dimension(s) dim.
Computes the minimum and maximum values of the input tensor.
Check if at least one element (or element in a dimension) in input evaluates to true.
Returns the indices of the maximum value of all elements (or elements in a dimension) in the input tensor.
Returns the indices of the minimum value of all elements (or elements in a dimension) in the input tensor.
Counts the number of non-zero values in the tensor input along the given dim.
If no dim is specified then all non-zeros in the tensor are counted.
Returns the p-norm of (input - other)
Returns the log of summed exponentials of each row of the input tensor in the given dimension dim.
The computation is numerically stabilized.
Returns the maximum value of all elements (or elements in a dimension) in the
input tensor.
Returns the mean value of all elements (or alongside an axis) in the input tensor.
Returns the median of the values in input.
Returns the minimum value of all elements (or elements in a dimension) in the
input tensor.
Returns the mode of the values in input across dimension dim.
Computes the mean of all non-NaN elements along the specified dimensions.
Returns the median of the values in input, ignoring NaN values.
This is a variant of ExTorch.quantile/6 that “ignores” NaN values, computing the quantiles q as
if NaN values in input did not exist. If all values in a reduced row are NaN then the quantiles
for that reduction will be NaN. See the documentation for ExTorch.quantile/6.
Returns the sum of all elements (or alongside an axis) in the input tensor, treating NaNs as zeros.
Returns the product of all elements (or alongside an axis) in the input tensor.
Computes the q-th quantiles of each row of the input tensor along the dimension dim.
Alias to std_dev/1
Alias to std_dev/2
Alias to std_dev/3
Calculates the standard deviation over the dimensions specified by dim.
Calculates the standard deviation and mean over the dimensions specified by dim.
Returns the sum of all elements (or alongside an axis) in the input tensor.
Returns the unique elements of the input tensor.
Eliminates all but the first element from every consecutive group of equivalent elements.
Calculates the variance over the dimensions specified by dim.
Calculates the variance and mean over the dimensions specified by dim.
Comparison operations
This function checks if input and other satisfy the condition:
$$
|\text{input} - \text{other}| \leq \texttt{atol} + \texttt{rtol} \times |\text{other}|
$$
elementwise, for all elements of input and other.
Returns the indices that sort a tensor along a given dimension in ascending order by value.
Computes element-wise equality.
Strict element-wise equality for two tensors.
Computes the element-wise maximum of input and other.
Computes the element-wise minimum of input and other.
Computes input >= other element-wise.
Alias to gt/2
Alias to gt/3
Alias to ge/2
Alias to ge/3
Computes input > other element-wise.
Returns a new tensor with boolean elements representing if each element of input is
“close” to the corresponding element of other. Closeness is defined as
Returns a new tensor with boolean elements representing if each element is finite or not.
Tests if each element of elements is in test_elements.
Returns a boolean tensor of the same shape as elements that is true for
elements in test_elements and false otherwise.
Returns a new tensor with boolean elements representing if each element is infinity (both positive and negative).
Returns a new tensor with boolean elements representing if each element is
:nan.
Returns a new tensor with boolean elements representing if each element is negative infinity.
Returns a new tensor with boolean elements representing if each element is positive infinity.
Returns a new tensor with boolean elements representing if each element is real valued.
Returns a tuple {values, indices}, where values contains the kth smallest element of
each row of the input tensor in the given dimension dim, and indices contains the
index location of each element found.
Computes input <= other element-wise.
Alias to lt/2
Alias to lt/3
Alias to le/2
Alias to le/3
Computes input < other element-wise.
Computes the element-wise maximum of input and other.
Computes the element-wise minimum of input and other.
Sorts the elements of the input tensor along its first dimension in ascending order by value.
Computes input != other element-wise.
Alias to ne/2
Alias to ne/3
Sorts the elements of the input tensor along a given dimension in ascending order by value.
Returns the k largest elements of the given input tensor along a given dimension.
Other operations
Returns a new tensor with materialized conjugation if input’s conjugate
bit is set to true, else returns input. The output tensor will always
have its conjugate bit set to false.
Returns a view of input as a complex tensor.
Per-process settings
@spec get_default_device() :: ExTorch.Device.device()
Get the default device on which ExTorch.Tensor structs are allocated.
Notes
By default, ExTorch will set :cpu as the default device.
@spec get_default_dtype() :: ExTorch.DType.dtype()
Get the current default floating point dtype for the current process.
Notes
By default, ExTorch will set the default dtype to :float32.
@spec set_default_device(ExTorch.Device.device()) :: ExTorch.Device.device()
Sets the default device on which ExTorch.Tensor structs are allocated.
This function does not affect factory function calls which are called with an
explicit device argument. Factory calls will be performed as if they were
passed device as an argument.
The default device is initially :cpu. If you set the default tensor device to
another device (e.g., :cuda) without a device index, tensors will be
allocated on whatever the current device for that device type is.
Examples
# By default, the device will be :cpu
iex> a = ExTorch.tensor([1.2, 3])
iex> a.device
:cpu
# Change the default device to :cuda
iex> ExTorch.set_default_device(:cuda)
# Check that tensors are now being created on gpu
iex> a = ExTorch.tensor([1.2, 3])
iex> a.device
{:cuda, 0}
@spec set_default_dtype(ExTorch.DType.dtype()) :: ExTorch.DType.dtype()
Sets the default floating point dtype of the current process to dtype.
Supports :float32 and :float64 as inputs. Other dtypes may be accepted
without complaint but are not supported and are unlikely to work as expected.
When PyTorch is initialized its default floating point dtype is :float32,
and the intent of set_default_dtype(:float64) is to facilitate NumPy-like
type inference. The default floating point dtype is used to:
- Implicitly determine the default complex dtype. When the default floating point type is :float32, the default complex dtype is :complex64; when the default floating point type is :float64, the default complex dtype is :complex128.
- Infer the dtype for tensors constructed using Elixir floats or ExTorch.Complex numbers. See examples below.
- Determine the result of type promotion between bool and integer tensors and Elixir floats and ExTorch.Complex numbers.
Examples
# Initial default for floating point is :float32
iex> a = ExTorch.tensor([1.2, 3])
iex> a.dtype
:float
# Initial default for floating point complex numbers is :complex64
iex> b = ExTorch.tensor([1.2, ExTorch.Complex.complex(0, 3)])
iex> b.dtype
:complex_float
# Changing the default dtype to :float64
iex> ExTorch.set_default_dtype(:float64)
# Floats are now interpreted as float64
iex> a = ExTorch.tensor([1.2, 3])
iex> a.dtype
:double
# Complex numbers are now interpreted as :complex128
iex> b = ExTorch.tensor([1.2, ExTorch.Complex.complex(0, 3)])
iex> b.dtype
:complex_double
Tensor creation
@spec arange(number()) :: ExTorch.Tensor.t()
See ExTorch.arange/4
Available signature calls:
arange(end_bound)
@spec arange(number(), step: number(), device: ExTorch.Device.device(), dtype: ExTorch.DType.dtype(), layout: ExTorch.Layout.layout(), memory_format: ExTorch.MemoryFormat.memory_format(), pin_memory: boolean(), requires_grad: boolean() ) :: ExTorch.Tensor.t()
@spec arange( number(), number() ) :: ExTorch.Tensor.t()
See ExTorch.arange/4
Available signature calls:
arange(start, end_bound)
arange(end_bound, kwargs)
@spec arange( number(), number(), ExTorch.Tensor.Options.t() ) :: ExTorch.Tensor.t()
@spec arange( number(), number(), step: number(), device: ExTorch.Device.device(), dtype: ExTorch.DType.dtype(), layout: ExTorch.Layout.layout(), memory_format: ExTorch.MemoryFormat.memory_format(), pin_memory: boolean(), requires_grad: boolean() ) :: ExTorch.Tensor.t()
@spec arange( number(), number(), number() ) :: ExTorch.Tensor.t()
See ExTorch.arange/4
Available signature calls:
arange(start, end_bound, step)
arange(start, end_bound, kwargs)
arange(end_bound, step, opts)
@spec arange( number(), number(), number(), device: ExTorch.Device.device(), dtype: ExTorch.DType.dtype(), layout: ExTorch.Layout.layout(), memory_format: ExTorch.MemoryFormat.memory_format(), pin_memory: boolean(), requires_grad: boolean() ) :: ExTorch.Tensor.t()
@spec arange( number(), number(), number(), ExTorch.Tensor.Options.t() ) :: ExTorch.Tensor.t()
Returns a 1-D tensor of size $\left\lceil \frac{\text{end} - \text{start}}{\text{step}} \right\rceil$
with values from the interval [start, end) taken with common difference
step beginning from start.
Note that non-integer step is subject to floating point rounding errors when
comparing against end; to avoid inconsistency, we advise adding a small epsilon
to end in such cases.
$$ out_{i + 1} = out_i + step $$
Arguments
- start: the starting value for the set of points. Default: 0.
- end: the ending value for the set of points.
- step: the gap between each pair of adjacent points. Default: 1.
Keyword args
- dtype (ExTorch.DType, optional): the desired data type of returned tensor. Default: if nil, uses a global default (see ExTorch.set_default_dtype).
- layout (ExTorch.Layout, optional): the desired layout of returned Tensor. Default: :strided.
- device (ExTorch.Device, optional): the desired device of returned tensor. Default: if nil, uses the current device for the default tensor type (see ExTorch.set_default_device). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
- requires_grad (boolean(), optional): If autograd should record operations on the returned tensor. Default: false.
- pin_memory (bool, optional): If set, returned tensor would be allocated in the pinned memory. Works only for CPU tensors. Default: false.
- memory_format (ExTorch.MemoryFormat, optional): the desired memory format of returned Tensor. Default: :contiguous.
Examples
# Single argument, end only
iex> ExTorch.arange(5)
#Tensor<
[0., 1., 2., 3., 4.]
[
size: {5},
dtype: :float,
device: :cpu,
requires_grad: false
]>
# End only with options
iex> ExTorch.arange(5, dtype: :uint8)
#Tensor<
[0, 1, 2, 3, 4]
[
size: {5},
dtype: :byte,
device: :cpu,
requires_grad: false
]>
# Start to end
iex> ExTorch.arange(1, 7)
#Tensor<
[1., 2., 3., 4., 5., 6.]
[
size: {6},
dtype: :float,
device: :cpu,
requires_grad: false
]>
# Start to end with options
iex> ExTorch.arange(1, 7, device: :cuda, dtype: :float16)
#Tensor<
[1., 2., 3., 4., 5., 6.]
[
size: {6},
dtype: :half,
device: {:cuda, 0},
requires_grad: false
]>
# Start to end with step
iex> ExTorch.arange(-1.3, 2.4, 0.5)
#Tensor<
[-1.3000, -0.8000, -0.3000, 0.2000, 0.7000, 1.2000, 1.7000, 2.2000]
[
size: {8},
dtype: :float,
device: :cpu,
requires_grad: false
]>
# Start to end with step and options
iex> ExTorch.arange(-1.3, 2.4, 0.5, dtype: :float64)
#Tensor<
[-1.3000, -0.8000, -0.3000, 0.2000, 0.7000, 1.2000, 1.7000, 2.2000]
[
size: {8},
dtype: :double,
device: :cpu,
requires_grad: false
]>
@spec complex(ExTorch.Tensor.t(), ExTorch.Tensor.t()) :: ExTorch.Tensor.t()
Constructs a complex tensor with its real part equal to real and its
imaginary part equal to imag.
Arguments
- real: An ExTorch.Tensor containing the real parts.
- imag: An ExTorch.Tensor containing the imaginary parts.
Notes
If both the inputs are :float32, the output will be :complex64.
Comparatively, if both the inputs are :float64, the output will be
:complex128.
Examples
iex> real = ExTorch.arange(5)
iex> imag = ExTorch.arange(-5, 0)
iex> ExTorch.complex(real, imag)
#Tensor<
[0.-5.j, 1.-4.j, 2.-3.j, 3.-2.j, 4.-1.j]
[
size: {5},
dtype: :complex_float,
device: :cpu,
requires_grad: false
]>
@spec empty(tuple() | [integer()]) :: ExTorch.Tensor.t()
See ExTorch.empty/2
Available signature calls:
empty(size)
@spec empty(tuple() | [integer()], device: ExTorch.Device.device(), dtype: ExTorch.DType.dtype(), layout: ExTorch.Layout.layout(), memory_format: ExTorch.MemoryFormat.memory_format(), pin_memory: boolean(), requires_grad: boolean() ) :: ExTorch.Tensor.t()
@spec empty( tuple() | [integer()], ExTorch.Tensor.Options.t() ) :: ExTorch.Tensor.t()
Returns a tensor filled with uninitialized data. The shape of the tensor is
defined by the tuple argument size.
Arguments
size: a tuple/list of integers defining the shape of the output tensor.
Keyword args
- dtype (ExTorch.DType, optional): the desired data type of returned tensor. Default: if nil, uses a global default (see ExTorch.set_default_dtype).
- layout (ExTorch.Layout, optional): the desired layout of returned Tensor. Default: :strided.
- device (ExTorch.Device, optional): the desired device of returned tensor. Default: if nil, uses the current device for the default tensor type (see ExTorch.set_default_device). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
- requires_grad (boolean(), optional): If autograd should record operations on the returned tensor. Default: false.
- pin_memory (bool, optional): If set, returned tensor would be allocated in the pinned memory. Works only for CPU tensors. Default: false.
- memory_format (ExTorch.MemoryFormat, optional): the desired memory format of returned Tensor. Default: :contiguous.
Examples
iex> ExTorch.empty({2, 3})
#Tensor<
[[ 6.7262e-44, 0.0000e+00, 7.2868e-44],
[ 0.0000e+00, -2.7524e+24, 4.5880e-41]]
[
size: {2, 3},
dtype: :float,
device: :cpu,
requires_grad: false
]>
iex> ExTorch.empty({2, 3}, dtype: :int64, device: :cuda)
#Tensor<
[[0, 0, 0],
[0, 0, 0]]
[
size: {2, 3},
dtype: :long,
device: {:cuda, 0},
requires_grad: false
]>
@spec empty_like(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()
Available signature calls:
empty_like(input)
@spec empty_like(ExTorch.Tensor.t(), device: ExTorch.Device.device(), dtype: ExTorch.DType.dtype(), layout: ExTorch.Layout.layout(), memory_format: ExTorch.MemoryFormat.memory_format(), pin_memory: boolean(), requires_grad: boolean() ) :: ExTorch.Tensor.t()
@spec empty_like(ExTorch.Tensor.t(), ExTorch.Tensor.Options.t()) :: ExTorch.Tensor.t()
Returns an uninitialized tensor, with the same size as input.
ExTorch.empty_like(input) is equivalent to
ExTorch.empty(input.size, dtype: input.dtype, layout: input.layout, device: input.device)
Arguments
input: The input tensor (ExTorch.Tensor)
Keyword args
- dtype (ExTorch.DType, optional): the desired data type of returned tensor. Default: :auto. If :auto, it will use the same data type as the input. If nil, it will use a global default (see ExTorch.set_default_dtype).
- layout (ExTorch.Layout, optional): the desired layout of returned Tensor. Default: nil. If nil, it will use the same layout as the input.
- device (ExTorch.Device, optional): the desired device of returned tensor. Default: :auto. If :auto, it will use the same device as the input. If nil, it will use the current device for the default tensor type (see ExTorch.set_default_device). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
- requires_grad (boolean(), optional): If autograd should record operations on the returned tensor. Default: false.
- pin_memory (bool, optional): If set, returned tensor would be allocated in the pinned memory. Works only for CPU tensors. Default: false.
- memory_format (ExTorch.MemoryFormat, optional): the desired memory format of returned Tensor. Default: :preserve. If :preserve, it will use the same memory format as the input.
Examples
# Create an empty tensor from another
iex> a = ExTorch.empty({4, 5})
iex> ExTorch.empty_like(a)
#Tensor<
[[ 8.3624e+06, 4.5880e-41, -2.8874e+24, 4.5880e-41, 2.5223e-44],
[ 0.0000e+00, 2.5223e-44, 0.0000e+00, 5.1482e+22, 1.6816e-43],
[ 9.8511e-43, 0.0000e+00, 8.3624e+06, 4.5880e-41, -3.1780e+24],
[ 4.5880e-41, 2.5223e-44, 0.0000e+00, 2.5223e-44, 0.0000e+00]]
[
size: {4, 5},
dtype: :float,
device: :cpu,
requires_grad: false
]>
# Create an empty tensor in GPU from a CPU one
iex> a = ExTorch.empty({3, 3})
iex> ExTorch.empty_like(a, device: :cuda)
#Tensor<
[[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.]]
[
size: {3, 3},
dtype: :float,
device: {:cuda, 0},
requires_grad: false
]>
@spec eye(integer()) :: ExTorch.Tensor.t()
See ExTorch.eye/3
Available signature calls:
eye(n)
@spec eye(integer(), m: integer(), device: ExTorch.Device.device(), dtype: ExTorch.DType.dtype(), layout: ExTorch.Layout.layout(), memory_format: ExTorch.MemoryFormat.memory_format(), pin_memory: boolean(), requires_grad: boolean() ) :: ExTorch.Tensor.t()
@spec eye( integer(), integer() ) :: ExTorch.Tensor.t()
See ExTorch.eye/3
Available signature calls:
eye(n, m)
eye(n, kwargs)
@spec eye( integer(), integer(), device: ExTorch.Device.device(), dtype: ExTorch.DType.dtype(), layout: ExTorch.Layout.layout(), memory_format: ExTorch.MemoryFormat.memory_format(), pin_memory: boolean(), requires_grad: boolean() ) :: ExTorch.Tensor.t()
@spec eye( integer(), integer(), ExTorch.Tensor.Options.t() ) :: ExTorch.Tensor.t()
Returns a 2-D tensor with ones on the diagonal and zeros elsewhere.
Arguments
- n: the number of rows
- m: the number of columns
Keyword args
- dtype (ExTorch.DType, optional): the desired data type of returned tensor. Default: if nil, uses a global default (see ExTorch.set_default_dtype).
- layout (ExTorch.Layout, optional): the desired layout of returned Tensor. Default: :strided.
- device (ExTorch.Device, optional): the desired device of returned tensor. Default: if nil, uses the current device for the default tensor type (see ExTorch.set_default_device). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
- requires_grad (boolean(), optional): If autograd should record operations on the returned tensor. Default: false.
- pin_memory (bool, optional): If set, returned tensor would be allocated in the pinned memory. Works only for CPU tensors. Default: false.
- memory_format (ExTorch.MemoryFormat, optional): the desired memory format of returned Tensor. Default: :contiguous.
Examples
iex> ExTorch.eye(3)
#Tensor<
[[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]]
[
size: {3, 3},
dtype: :float,
device: :cpu,
requires_grad: false
]>
iex> ExTorch.eye(3, 3)
#Tensor<
[[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]]
[
size: {3, 3},
dtype: :float,
device: :cpu,
requires_grad: false
]>
iex> ExTorch.eye(4, 6, dtype: :uint8, device: :cuda)
#Tensor<
[[1, 0, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0],
[0, 0, 1, 0, 0, 0],
[0, 0, 0, 1, 0, 0]]
[
size: {4, 6},
dtype: :byte,
device: {:cuda, 0},
requires_grad: false
]>
@spec full( tuple() | [integer()], ExTorch.Scalar.t() ) :: ExTorch.Tensor.t()
See ExTorch.full/3
Available signature calls:
full(size, scalar)
@spec full( tuple() | [integer()], ExTorch.Scalar.t(), device: ExTorch.Device.device(), dtype: ExTorch.DType.dtype(), layout: ExTorch.Layout.layout(), memory_format: ExTorch.MemoryFormat.memory_format(), pin_memory: boolean(), requires_grad: boolean() ) :: ExTorch.Tensor.t()
@spec full( tuple() | [integer()], ExTorch.Scalar.t(), ExTorch.Tensor.Options.t() ) :: ExTorch.Tensor.t()
Returns a tensor filled with the scalar value scalar, with the shape defined
by the variable argument size.
Arguments
- size: a tuple/list of integers defining the shape of the output tensor.
- scalar: the value to fill the output tensor with.
Keyword args
- dtype (ExTorch.DType, optional): the desired data type of returned tensor. Default: if nil, uses a global default (see ExTorch.set_default_dtype).
- layout (ExTorch.Layout, optional): the desired layout of returned Tensor. Default: :strided.
- device (ExTorch.Device, optional): the desired device of returned tensor. Default: if nil, uses the current device for the default tensor type (see ExTorch.set_default_device). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
- requires_grad (boolean(), optional): If autograd should record operations on the returned tensor. Default: false.
- pin_memory (bool, optional): If set, returned tensor would be allocated in the pinned memory. Works only for CPU tensors. Default: false.
- memory_format (ExTorch.MemoryFormat, optional): the desired memory format of returned Tensor. Default: :contiguous.
Examples
iex> ExTorch.full({2, 3}, 2)
#Tensor<
[[2., 2., 2.],
[2., 2., 2.]]
[
size: {2, 3},
dtype: :float,
device: :cpu,
requires_grad: false
]>
iex> ExTorch.full({2, 3}, 23, dtype: :uint8, device: :cuda)
#Tensor<
[[23, 23, 23],
 [23, 23, 23]]
[
size: {2, 3},
dtype: :byte,
device: {:cuda, 0},
requires_grad: false
]>
@spec full_like( ExTorch.Tensor.t(), ExTorch.Scalar.t() ) :: ExTorch.Tensor.t()
Available signature calls:
full_like(input, fill_value)
@spec full_like( ExTorch.Tensor.t(), ExTorch.Scalar.t(), device: ExTorch.Device.device(), dtype: ExTorch.DType.dtype(), layout: ExTorch.Layout.layout(), memory_format: ExTorch.MemoryFormat.memory_format(), pin_memory: boolean(), requires_grad: boolean() ) :: ExTorch.Tensor.t()
@spec full_like( ExTorch.Tensor.t(), ExTorch.Scalar.t(), ExTorch.Tensor.Options.t() ) :: ExTorch.Tensor.t()
Returns a tensor filled with the scalar value fill_value, with the same size as input.
ExTorch.full_like(input, fill_value) is equivalent to
ExTorch.full(input.size, fill_value, dtype: input.dtype, layout: input.layout, device: input.device)
Arguments
- input: The input tensor (ExTorch.Tensor)
- fill_value: the value to fill the output tensor with.
Keyword args
- dtype (ExTorch.DType, optional): the desired data type of returned tensor. Default: :auto. If :auto, it will use the same data type as the input. If nil, it will use a global default (see ExTorch.set_default_dtype).
- layout (ExTorch.Layout, optional): the desired layout of returned Tensor. Default: nil. If nil, it will use the same layout as the input.
- device (ExTorch.Device, optional): the desired device of returned tensor. Default: :auto. If :auto, it will use the same device as the input. If nil, it will use the current device for the default tensor type (see ExTorch.set_default_device). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
- requires_grad (boolean(), optional): If autograd should record operations on the returned tensor. Default: false.
- pin_memory (bool, optional): If set, returned tensor would be allocated in the pinned memory. Works only for CPU tensors. Default: false.
- memory_format (ExTorch.MemoryFormat, optional): the desired memory format of returned Tensor. Default: :preserve. If :preserve, it will use the same memory format as the input.
Examples
# Create a tensor filled with -1 from an int64 input.
iex> a = ExTorch.empty({1, 2, 2}, dtype: :int64)
iex> ExTorch.full_like(a, -1)
#Tensor<
[[[-1, -1],
[-1, -1]]]
[
size: {1, 2, 2},
dtype: :long,
device: :cpu,
requires_grad: false
]>
# Create a CUDA complex tensor filled with a given value from a CPU input.
iex> b = ExTorch.ones({3, 3}, dtype: :complex128)
iex> ExTorch.full_like(b, ExTorch.Complex.complex(0.8, -0.5), device: :cuda)
#Tensor<
[[0.8000-0.5000j, 0.8000-0.5000j, 0.8000-0.5000j],
[0.8000-0.5000j, 0.8000-0.5000j, 0.8000-0.5000j],
[0.8000-0.5000j, 0.8000-0.5000j, 0.8000-0.5000j]]
[
size: {3, 3},
dtype: :complex_double,
device: {:cuda, 0},
requires_grad: false
]>
@spec linspace( number(), number(), integer() ) :: ExTorch.Tensor.t()
Available signature calls:
linspace(start, end_bound, steps)
@spec linspace( number(), number(), integer(), device: ExTorch.Device.device(), dtype: ExTorch.DType.dtype(), layout: ExTorch.Layout.layout(), memory_format: ExTorch.MemoryFormat.memory_format(), pin_memory: boolean(), requires_grad: boolean() ) :: ExTorch.Tensor.t()
@spec linspace( number(), number(), integer(), ExTorch.Tensor.Options.t() ) :: ExTorch.Tensor.t()
Creates a one-dimensional tensor of size steps whose values are evenly
spaced from start to end, inclusive. That is, the values are:
$$ (\text{start}, \text{start} + \frac{\text{end} - \text{start}}{\text{steps} - 1}, \ldots, \text{start} + (\text{steps} - 2) * \frac{\text{end} - \text{start}}{\text{steps} - 1}, \text{end}) $$
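The spacing formula above can be exercised outside ExTorch. A plain-Python sketch of the same arithmetic (illustrative only, not the library's implementation):

```python
def linspace(start, end, steps):
    # Evenly spaced values: start + i * (end - start) / (steps - 1),
    # so the first element is exactly start and the last is exactly end.
    step = (end - start) / (steps - 1)
    return [start + i * step for i in range(steps)]

# linspace(-2, 10, 10): first element -2.0, last element 10.0,
# common difference 12 / 9, matching the example below.
vals = linspace(-2, 10, 10)
```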
Arguments
- start: the starting value for the set of points.
- end: the ending value for the set of points.
- steps: size of the constructed tensor.
Keyword args
- dtype (ExTorch.DType, optional): the desired data type of the returned tensor. Default: if nil, uses the global default (see ExTorch.set_default_dtype).
- layout (ExTorch.Layout, optional): the desired layout of the returned tensor. Default: :strided.
- device (ExTorch.Device, optional): the desired device of the returned tensor. Default: if nil, uses the current device for the default tensor type (see ExTorch.set_default_device). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
- requires_grad (boolean(), optional): whether autograd should record operations on the returned tensor. Default: false.
- pin_memory (boolean(), optional): if set, the returned tensor is allocated in pinned memory. Works only for CPU tensors. Default: false.
- memory_format (ExTorch.MemoryFormat, optional): the desired memory format of the returned tensor. Default: :contiguous.
Examples
# Returns a tensor with 10 evenly spaced values between -2 and 10
iex> ExTorch.linspace(-2, 10, 10)
#Tensor<
[-2.0000, -0.6667, 0.6667, 2.0000, 3.3333, 4.6667, 6.0000, 7.3333,
8.6667, 10.0000]
[
size: {10},
dtype: :float,
device: :cpu,
requires_grad: false
]>
# Returns a tensor with 10 evenly spaced int32 values between -2 and 10
iex> ExTorch.linspace(-2, 10, 10, dtype: :int32)
#Tensor<
[-2, 0, 0, 1, 3, 4, 6, 7, 8, 10]
[
size: {10},
dtype: :int,
device: :cpu,
requires_grad: false
]>
@spec logspace( number(), number(), integer() ) :: ExTorch.Tensor.t()
Available signature calls:
logspace(start, end_bound, steps)
@spec logspace( number(), number(), integer(), number() ) :: ExTorch.Tensor.t()
@spec logspace( number(), number(), integer(), base: number(), device: ExTorch.Device.device(), dtype: ExTorch.DType.dtype(), layout: ExTorch.Layout.layout(), memory_format: ExTorch.MemoryFormat.memory_format(), pin_memory: boolean(), requires_grad: boolean() ) :: ExTorch.Tensor.t()
Available signature calls:
logspace(start, end_bound, steps, kwargs)
logspace(start, end_bound, steps, base)
@spec logspace( number(), number(), integer(), number(), device: ExTorch.Device.device(), dtype: ExTorch.DType.dtype(), layout: ExTorch.Layout.layout(), memory_format: ExTorch.MemoryFormat.memory_format(), pin_memory: boolean(), requires_grad: boolean() ) :: ExTorch.Tensor.t()
@spec logspace( number(), number(), integer(), number(), ExTorch.Tensor.Options.t() ) :: ExTorch.Tensor.t()
Creates a one-dimensional tensor of size steps whose values are evenly
spaced from $\text{base}^{\text{start}}$ to $\text{base}^{\text{end}}$,
inclusive, on a logarithmic scale with base base. That is, the values are:
$$ (\text{base}^{\text{start}}, \text{base}^{(\text{start} + \frac{\text{end} - \text{start}}{ \text{steps} - 1})}, \ldots, \text{base}^{(\text{start} + (\text{steps} - 2) * \frac{\text{end} - \text{start}}{ \text{steps} - 1})}, \text{base}^{\text{end}}) $$
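The same formula in plain Python (an illustrative sketch of the arithmetic, not the library's implementation): the exponents are linearly spaced and the result is base raised to each exponent.

```python
def logspace(start, end, steps, base=10.0):
    # base ** x for exponents x evenly spaced between start and end (inclusive).
    step = (end - start) / (steps - 1)
    return [base ** (start + i * step) for i in range(steps)]

# logspace(-10, 10, 5) yields 1e-10, 1e-05, 1.0, 1e+05, 1e+10,
# matching the first example below.
vals = logspace(-10, 10, 5)
```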
Arguments
- start: the starting value for the set of points.
- end: the ending value for the set of points.
- steps: size of the constructed tensor.
- base: base of the logarithm function. Default: 10.0.
Keyword args
- dtype (ExTorch.DType, optional): the desired data type of the returned tensor. Default: if nil, uses the global default (see ExTorch.set_default_dtype).
- layout (ExTorch.Layout, optional): the desired layout of the returned tensor. Default: :strided.
- device (ExTorch.Device, optional): the desired device of the returned tensor. Default: if nil, uses the current device for the default tensor type (see ExTorch.set_default_device). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
- requires_grad (boolean(), optional): whether autograd should record operations on the returned tensor. Default: false.
- pin_memory (boolean(), optional): if set, the returned tensor is allocated in pinned memory. Works only for CPU tensors. Default: false.
- memory_format (ExTorch.MemoryFormat, optional): the desired memory format of the returned tensor. Default: :contiguous.
Examples
# Returns a tensor containing five logarithmically spaced values between 10^-10 and 10^10
iex> ExTorch.logspace(-10, 10, 5)
#Tensor<
[1.0000e-10, 1.0000e-05, 1.0000e+00, 1.0000e+05, 1.0000e+10]
[
size: {5},
dtype: :float,
device: :cpu,
requires_grad: false
]>
# Returns a tensor containing five logarithmically spaced values with exponents between 0.1 and 1.0
iex> ExTorch.logspace(0.1, 1.0, 5)
#Tensor<
[ 1.2589, 2.1135, 3.5481, 5.9566, 10.0000]
[
size: {5},
dtype: :float,
device: :cpu,
requires_grad: false
]>
# Returns a tensor containing three logarithmically spaced (base 2) values with exponents between 0.1 and 1.0
iex> ExTorch.logspace(0.1, 1.0, 3, base: 2)
#Tensor<
[1.0718, 1.4641, 2.0000]
[
size: {3},
dtype: :float,
device: :cpu,
requires_grad: false
]>
# Returns a float64 tensor containing three logarithmically spaced (base 2) values with exponents between 0.1 and 1.0
iex> ExTorch.logspace(0.1, 1.0, 3, base: 2, dtype: :float64)
#Tensor<
[1.0718, 1.4641, 2.0000]
[
size: {3},
dtype: :double,
device: :cpu,
requires_grad: false
]>
@spec ones(tuple() | [integer()]) :: ExTorch.Tensor.t()
See ExTorch.ones/2
Available signature calls:
ones(size)
@spec ones(tuple() | [integer()], device: ExTorch.Device.device(), dtype: ExTorch.DType.dtype(), layout: ExTorch.Layout.layout(), memory_format: ExTorch.MemoryFormat.memory_format(), pin_memory: boolean(), requires_grad: boolean() ) :: ExTorch.Tensor.t()
@spec ones( tuple() | [integer()], ExTorch.Tensor.Options.t() ) :: ExTorch.Tensor.t()
Returns a tensor filled with the scalar value 1, with the shape defined
by the variable argument size.
Arguments
size: a tuple/list of integers defining the shape of the output tensor.
Keyword args
- dtype (ExTorch.DType, optional): the desired data type of the returned tensor. Default: if nil, uses the global default (see ExTorch.set_default_dtype).
- layout (ExTorch.Layout, optional): the desired layout of the returned tensor. Default: :strided.
- device (ExTorch.Device, optional): the desired device of the returned tensor. Default: if nil, uses the current device for the default tensor type (see ExTorch.set_default_device). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
- requires_grad (boolean(), optional): whether autograd should record operations on the returned tensor. Default: false.
- pin_memory (boolean(), optional): if set, the returned tensor is allocated in pinned memory. Works only for CPU tensors. Default: false.
- memory_format (ExTorch.MemoryFormat, optional): the desired memory format of the returned tensor. Default: :contiguous.
Examples
iex> ExTorch.ones({2, 3})
#Tensor<
[[1., 1., 1.],
[1., 1., 1.]]
[
size: {2, 3},
dtype: :float,
device: :cpu,
requires_grad: false
]>
iex> ExTorch.ones({2, 3}, dtype: :uint8, device: :cuda)
#Tensor<
[[1, 1, 1],
[1, 1, 1]]
[
size: {2, 3},
dtype: :byte,
device: {:cuda, 0},
requires_grad: false
]>
@spec ones_like(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()
Available signature calls:
ones_like(input)
@spec ones_like(ExTorch.Tensor.t(), device: ExTorch.Device.device(), dtype: ExTorch.DType.dtype(), layout: ExTorch.Layout.layout(), memory_format: ExTorch.MemoryFormat.memory_format(), pin_memory: boolean(), requires_grad: boolean() ) :: ExTorch.Tensor.t()
@spec ones_like(ExTorch.Tensor.t(), ExTorch.Tensor.Options.t()) :: ExTorch.Tensor.t()
Returns a tensor filled with the scalar value 1, with the same size as input.
ExTorch.ones_like(input) is equivalent to
ExTorch.ones(input.size, dtype: input.dtype, layout: input.layout, device: input.device)
Arguments
input: The input tensor (ExTorch.Tensor)
Keyword args
- dtype (ExTorch.DType, optional): the desired data type of the returned tensor. Default: :auto. If :auto, it will use the same data type as the input. If nil, it will use the global default (see ExTorch.set_default_dtype).
- layout (ExTorch.Layout, optional): the desired layout of the returned tensor. Default: nil. If nil, it will use the same layout as the input.
- device (ExTorch.Device, optional): the desired device of the returned tensor. Default: :auto. If :auto, it will use the same device as the input. If nil, it will use the current device for the default tensor type (see ExTorch.set_default_device). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
- requires_grad (boolean(), optional): whether autograd should record operations on the returned tensor. Default: false.
- pin_memory (boolean(), optional): if set, the returned tensor is allocated in pinned memory. Works only for CPU tensors. Default: false.
- memory_format (ExTorch.MemoryFormat, optional): the desired memory format of the returned tensor. Default: :preserve. If :preserve, it will use the same memory format as the input.
Examples
# Create a tensor filled with ones from a float64 tensor.
iex> a = ExTorch.rand({3, 4}, dtype: :float64)
iex> ExTorch.ones_like(a)
#Tensor<
[[1., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.]]
[
size: {3, 4},
dtype: :double,
device: :cpu,
requires_grad: false
]>
# Create a complex tensor with its real part equal to one on the GPU from a CPU tensor.
iex> a = ExTorch.rand({3, 4}, dtype: :complex64)
iex> ExTorch.ones_like(a, device: :cuda)
#Tensor<
[[1.+0.j, 1.+0.j, 1.+0.j, 1.+0.j],
[1.+0.j, 1.+0.j, 1.+0.j, 1.+0.j],
[1.+0.j, 1.+0.j, 1.+0.j, 1.+0.j]]
[
size: {3, 4},
dtype: :complex_float,
device: {:cuda, 0},
requires_grad: false
]>
@spec polar(ExTorch.Tensor.t(), ExTorch.Tensor.t()) :: ExTorch.Tensor.t()
Constructs a complex tensor whose elements are Cartesian coordinates
corresponding to the polar coordinates with absolute value abs and
angle angle.
$$ \text{out} = \text{abs} \cdot \cos(\text{angle}) + \text{abs} \cdot \sin(\text{angle}) \cdot j $$
Arguments
- abs: An ExTorch.Tensor containing the absolute (modulus) values.
- angle: An ExTorch.Tensor containing the angle values, in radians.
Notes
If both the inputs are :float32, the output will be :complex64.
Comparatively, if both the inputs are :float64, the output will be
:complex128.
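The polar-to-Cartesian conversion above can be checked scalar-by-scalar in plain Python (an illustrative sketch, not ExTorch code):

```python
import math

def polar(abs_val, angle):
    # out = abs * cos(angle) + abs * sin(angle) * j
    return complex(abs_val * math.cos(angle), abs_val * math.sin(angle))

# For abs = 1, angle = -4 (the second element of the example below):
# cos(-4) ≈ -0.6536 and sin(-4) ≈ 0.7568, giving -0.6536 + 0.7568j.
out = polar(1.0, -4.0)
```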
Examples
iex> abs = ExTorch.arange(5)
iex> angle = ExTorch.arange(-5, 0)
iex> ExTorch.polar(abs, angle)
#Tensor<
[ 0.0000+0.0000j, -0.6536+0.7568j, -1.9800-0.2822j, -1.2484-2.7279j,
2.1612-3.3659j]
[
size: {5},
dtype: :complex_float,
device: :cpu,
requires_grad: false
]>
@spec rand(tuple() | [integer()]) :: ExTorch.Tensor.t()
See ExTorch.rand/2
Available signature calls:
rand(size)
@spec rand(tuple() | [integer()], device: ExTorch.Device.device(), dtype: ExTorch.DType.dtype(), layout: ExTorch.Layout.layout(), memory_format: ExTorch.MemoryFormat.memory_format(), pin_memory: boolean(), requires_grad: boolean() ) :: ExTorch.Tensor.t()
@spec rand( tuple() | [integer()], ExTorch.Tensor.Options.t() ) :: ExTorch.Tensor.t()
Returns a tensor filled with random numbers from a uniform distribution on the interval $[0, 1)$.
The shape of the tensor is defined by the variable argument size.
Arguments
size: a tuple/list of integers defining the shape of the output tensor.
Keyword args
- dtype (ExTorch.DType, optional): the desired data type of the returned tensor. Default: if nil, uses the global default (see ExTorch.set_default_dtype).
- layout (ExTorch.Layout, optional): the desired layout of the returned tensor. Default: :strided.
- device (ExTorch.Device, optional): the desired device of the returned tensor. Default: if nil, uses the current device for the default tensor type (see ExTorch.set_default_device). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
- requires_grad (boolean(), optional): whether autograd should record operations on the returned tensor. Default: false.
- pin_memory (boolean(), optional): if set, the returned tensor is allocated in pinned memory. Works only for CPU tensors. Default: false.
- memory_format (ExTorch.MemoryFormat, optional): the desired memory format of the returned tensor. Default: :contiguous.
Examples
iex> ExTorch.rand({3, 3, 3})
#Tensor<
[[[0.4099, 0.8473, 0.6221],
[0.9906, 0.3174, 0.9849],
[0.6988, 0.1157, 0.9424]],
[[0.0550, 0.9723, 0.4380],
[0.9304, 0.2973, 0.4920],
[0.1860, 0.9460, 0.2602]],
[[0.9208, 0.9713, 0.8194],
[0.8109, 0.1395, 0.1245],
[0.5742, 0.5222, 0.0937]]]
[
size: {3, 3, 3},
dtype: :float,
device: :cpu,
requires_grad: false
]>
iex> ExTorch.rand({2, 3}, dtype: :float32, device: :cuda)
#Tensor<
[[0.1583, 0.5184, 0.6711],
[0.3829, 0.3248, 0.3524]]
[
size: {2, 3},
dtype: :float,
device: {:cuda, 0},
requires_grad: false
]>
@spec rand_like(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()
Available signature calls:
rand_like(input)
@spec rand_like(ExTorch.Tensor.t(), device: ExTorch.Device.device(), dtype: ExTorch.DType.dtype(), layout: ExTorch.Layout.layout(), memory_format: ExTorch.MemoryFormat.memory_format(), pin_memory: boolean(), requires_grad: boolean() ) :: ExTorch.Tensor.t()
@spec rand_like(ExTorch.Tensor.t(), ExTorch.Tensor.Options.t()) :: ExTorch.Tensor.t()
Returns a tensor filled with random numbers from a uniform distribution
on the interval $[0, 1)$, with the same size as input.
ExTorch.rand_like(input) is equivalent to
ExTorch.rand(input.size, dtype: input.dtype, layout: input.layout, device: input.device)
Arguments
input: The input tensor (ExTorch.Tensor)
Keyword args
- dtype (ExTorch.DType, optional): the desired data type of the returned tensor. Default: :auto. If :auto, it will use the same data type as the input. If nil, it will use the global default (see ExTorch.set_default_dtype).
- layout (ExTorch.Layout, optional): the desired layout of the returned tensor. Default: nil. If nil, it will use the same layout as the input.
- device (ExTorch.Device, optional): the desired device of the returned tensor. Default: :auto. If :auto, it will use the same device as the input. If nil, it will use the current device for the default tensor type (see ExTorch.set_default_device). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
- requires_grad (boolean(), optional): whether autograd should record operations on the returned tensor. Default: false.
- pin_memory (boolean(), optional): if set, the returned tensor is allocated in pinned memory. Works only for CPU tensors. Default: false.
- memory_format (ExTorch.MemoryFormat, optional): the desired memory format of the returned tensor. Default: :preserve. If :preserve, it will use the same memory format as the input.
Examples
# Derive a new float64 tensor from another one
iex> a = ExTorch.empty({3, 2, 2}, dtype: :float64)
iex> ExTorch.rand_like(a)
#Tensor<
[[[0.6495, 0.9480],
[0.3083, 0.7135]],
[[0.5482, 0.3676],
[0.2825, 0.1806]],
[[0.4742, 0.8673],
[0.4542, 0.4239]]]
[
size: {3, 2, 2},
dtype: :double,
device: :cpu,
requires_grad: false
]>
# Derive a GPU tensor from a CPU one
iex> b = ExTorch.ones({2, 3}, dtype: :complex64)
iex> ExTorch.rand_like(b, device: :cuda)
#Tensor<
[[0.1554+0.6794j, 0.5356+0.2049j, 0.7555+0.3877j],
[0.0148+0.0772j, 0.8368+0.3802j, 0.6820+0.1727j]]
[
size: {2, 3},
dtype: :complex_float,
device: {:cuda, 0},
requires_grad: false
]>
@spec randint( integer(), tuple() | [integer()] ) :: ExTorch.Tensor.t()
Available signature calls:
randint(high, size)
@spec randint( integer(), tuple() | [integer()], device: ExTorch.Device.device(), dtype: ExTorch.DType.dtype(), layout: ExTorch.Layout.layout(), memory_format: ExTorch.MemoryFormat.memory_format(), pin_memory: boolean(), requires_grad: boolean() ) :: ExTorch.Tensor.t()
@spec randint( integer(), tuple() | [integer()], ExTorch.Tensor.Options.t() ) :: ExTorch.Tensor.t()
@spec randint( integer(), integer(), tuple() | [integer()] ) :: ExTorch.Tensor.t()
Available signature calls:
randint(low, high, size)
randint(high, size, opts)
randint(high, size, kwargs)
@spec randint( integer(), integer(), tuple() | [integer()], device: ExTorch.Device.device(), dtype: ExTorch.DType.dtype(), layout: ExTorch.Layout.layout(), memory_format: ExTorch.MemoryFormat.memory_format(), pin_memory: boolean(), requires_grad: boolean() ) :: ExTorch.Tensor.t()
@spec randint( integer(), integer(), tuple() | [integer()], ExTorch.Tensor.Options.t() ) :: ExTorch.Tensor.t()
Returns a tensor filled with random integers generated uniformly
between low (inclusive) and high (exclusive).
The shape of the tensor is defined by the variable argument size.
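The low-inclusive, high-exclusive convention can be illustrated with Python's standard library, whose random.randrange follows the same bounds (a sketch, unrelated to ExTorch's internals):

```python
import random

def randint_sample(low, high, n):
    # Uniform integers drawn from [low, high): low is attainable,
    # high itself is never produced.
    return [random.randrange(low, high) for _ in range(n)]

# With low = -2 and high = 3, every draw lies in {-2, -1, 0, 1, 2}.
draws = randint_sample(-2, 3, 10_000)
```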
Arguments
- low: Lowest integer to be drawn from the distribution. Default: 0.
- high: One above the highest integer to be drawn from the distribution.
- size: a tuple/list of integers defining the shape of the output tensor.
Keyword args
- dtype (ExTorch.DType, optional): the desired data type of the returned tensor. Default: if nil, uses the global default (see ExTorch.set_default_dtype).
- layout (ExTorch.Layout, optional): the desired layout of the returned tensor. Default: :strided.
- device (ExTorch.Device, optional): the desired device of the returned tensor. Default: if nil, uses the current device for the default tensor type (see ExTorch.set_default_device). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
- requires_grad (boolean(), optional): whether autograd should record operations on the returned tensor. Default: false.
- pin_memory (boolean(), optional): if set, the returned tensor is allocated in pinned memory. Works only for CPU tensors. Default: false.
- memory_format (ExTorch.MemoryFormat, optional): the desired memory format of the returned tensor. Default: :contiguous.
Examples
# Sample integers in [0, 3)
iex> ExTorch.randint(3, {3, 3, 4})
#Tensor<
[[[0., 2., 0., 0.],
[1., 2., 0., 2.],
[2., 2., 1., 2.]],
[[2., 1., 1., 1.],
[2., 1., 0., 1.],
[1., 1., 0., 0.]],
[[1., 1., 1., 1.],
[2., 2., 0., 1.],
[1., 2., 1., 1.]]]
[
size: {3, 3, 4},
dtype: :float,
device: :cpu,
requires_grad: false
]>
# Sample integers in [0, 3) with dtype :int64
iex> ExTorch.randint(3, {3, 3, 4}, dtype: :int64)
#Tensor<
[[[0, 0, 0, 2],
[1, 1, 0, 0],
[1, 0, 1, 1]],
[[0, 1, 1, 1],
[2, 0, 2, 0],
[2, 1, 0, 2]],
[[1, 0, 2, 1],
[2, 2, 1, 0],
[0, 0, 0, 2]]]
[
size: {3, 3, 4},
dtype: :long,
device: :cpu,
requires_grad: false
]>
# Sample integers in [-2, 3)
iex> ExTorch.randint(-2, 3, {2, 2, 4})
#Tensor<
[[[ 2., 1., 0., -1.],
[ 2., 2., -2., 2.]],
[[-1., -1., 1., -1.],
[ 2., -1., 1., -1.]]]
[
size: {2, 2, 4},
dtype: :float,
device: :cpu,
requires_grad: false
]>
# Sample integers in [-2, 3) on the GPU
iex> ExTorch.randint(-2, 3, {2, 2, 4}, device: :cuda)
#Tensor<
[[[-2., 2., 0., -2.],
[ 0., 0., 0., -2.]],
[[ 0., 0., -2., 0.],
[ 0., 1., -2., 1.]]]
[
size: {2, 2, 4},
dtype: :float,
device: {:cuda, 0},
requires_grad: false
]>
@spec randint_like(ExTorch.Tensor.t(), integer()) :: ExTorch.Tensor.t()
Available signature calls:
randint_like(input, high)
@spec randint_like(ExTorch.Tensor.t(), integer(), device: ExTorch.Device.device(), dtype: ExTorch.DType.dtype(), layout: ExTorch.Layout.layout(), memory_format: ExTorch.MemoryFormat.memory_format(), pin_memory: boolean(), requires_grad: boolean() ) :: ExTorch.Tensor.t()
@spec randint_like(ExTorch.Tensor.t(), integer(), ExTorch.Tensor.Options.t()) :: ExTorch.Tensor.t()
@spec randint_like(ExTorch.Tensor.t(), integer(), integer()) :: ExTorch.Tensor.t()
Available signature calls:
randint_like(input, low, high)
randint_like(input, high, opts)
randint_like(input, high, kwargs)
@spec randint_like(ExTorch.Tensor.t(), integer(), integer(), device: ExTorch.Device.device(), dtype: ExTorch.DType.dtype(), layout: ExTorch.Layout.layout(), memory_format: ExTorch.MemoryFormat.memory_format(), pin_memory: boolean(), requires_grad: boolean() ) :: ExTorch.Tensor.t()
@spec randint_like( ExTorch.Tensor.t(), integer(), integer(), ExTorch.Tensor.Options.t() ) :: ExTorch.Tensor.t()
Returns a tensor filled with random integers generated uniformly
between low (inclusive) and high (exclusive),
with the same size as input.
ExTorch.randint_like(input, low, high) is equivalent to
ExTorch.randint(low, high, input.size, dtype: input.dtype, layout: input.layout, device: input.device)
Arguments
- input: The input tensor (ExTorch.Tensor).
- low: Lowest integer to be drawn from the distribution. Default: 0.
- high: One above the highest integer to be drawn from the distribution.
Keyword args
- dtype (ExTorch.DType, optional): the desired data type of the returned tensor. Default: :auto. If :auto, it will use the same data type as the input. If nil, it will use the global default (see ExTorch.set_default_dtype).
- layout (ExTorch.Layout, optional): the desired layout of the returned tensor. Default: nil. If nil, it will use the same layout as the input.
- device (ExTorch.Device, optional): the desired device of the returned tensor. Default: :auto. If :auto, it will use the same device as the input. If nil, it will use the current device for the default tensor type (see ExTorch.set_default_device). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
- requires_grad (boolean(), optional): whether autograd should record operations on the returned tensor. Default: false.
- pin_memory (boolean(), optional): if set, the returned tensor is allocated in pinned memory. Works only for CPU tensors. Default: false.
- memory_format (ExTorch.MemoryFormat, optional): the desired memory format of the returned tensor. Default: :preserve. If :preserve, it will use the same memory format as the input.
Examples
# Create a random tensor with values in [0, 10) from a float32 one.
iex> a = ExTorch.zeros({3, 4, 5}, dtype: :float32)
iex> ExTorch.randint_like(a, 10)
#Tensor<
[[[2., 5., 0., 7., 5.],
[9., 0., 9., 1., 4.],
[6., 3., 6., 0., 2.],
[2., 6., 5., 9., 0.]],
[[9., 4., 7., 9., 8.],
[2., 8., 0., 8., 3.],
[6., 6., 1., 9., 0.],
[5., 2., 1., 7., 8.]],
[[8., 3., 6., 8., 9.],
[5., 7., 0., 7., 6.],
[5., 4., 0., 3., 3.],
[4., 3., 7., 3., 5.]]]
[
size: {3, 4, 5},
dtype: :float,
device: :cpu,
requires_grad: false
]>
# Create a CUDA random tensor with values in [0, 5) from a CPU one
iex> b = ExTorch.rand({3, 3})
iex> ExTorch.randint_like(b, 5, device: :cuda)
#Tensor<
[[4., 1., 4.],
[1., 1., 1.],
[2., 4., 3.]]
[
size: {3, 3},
dtype: :float,
device: {:cuda, 0},
requires_grad: false
]>
# Create a random tensor with values in [-1, 5) from an int32 one.
iex> c = ExTorch.ones({3, 3}, dtype: :int32)
iex> ExTorch.randint_like(c, -1, 5)
#Tensor<
[[ 2, 2, 4],
[-1, 4, -1],
[ 0, 3, 1]]
[
size: {3, 3},
dtype: :int,
device: :cpu,
requires_grad: false
]>
# Create a float32 CUDA random tensor with values in [-1, 5) from an int32 one.
iex> ExTorch.randint_like(c, -1, 5, dtype: :float32, device: :cuda)
#Tensor<
[[4., 0., 4.],
[0., 1., 2.],
[1., 2., 1.]]
[
size: {3, 3},
dtype: :float,
device: {:cuda, 0},
requires_grad: false
]>
@spec randn(tuple() | [integer()]) :: ExTorch.Tensor.t()
See ExTorch.randn/2
Available signature calls:
randn(size)
@spec randn(tuple() | [integer()], device: ExTorch.Device.device(), dtype: ExTorch.DType.dtype(), layout: ExTorch.Layout.layout(), memory_format: ExTorch.MemoryFormat.memory_format(), pin_memory: boolean(), requires_grad: boolean() ) :: ExTorch.Tensor.t()
@spec randn( tuple() | [integer()], ExTorch.Tensor.Options.t() ) :: ExTorch.Tensor.t()
Returns a tensor filled with random numbers from a normal distribution
with mean 0 and variance 1 (also called the standard normal
distribution).
$$ \text{out}_{i} \sim \mathcal{N}(0, 1) $$
The shape of the tensor is defined by the variable argument size.
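The standard-normal claim is easy to sanity-check numerically in plain Python (illustrative only, not ExTorch code): with enough draws, the empirical mean and variance land near 0 and 1.

```python
import random

# Draw standard-normal samples and estimate the first two moments.
samples = [random.gauss(0.0, 1.0) for _ in range(100_000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
# mean ≈ 0 and var ≈ 1, up to sampling noise of order 1/sqrt(100_000).
```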
Arguments
size: a tuple/list of integers defining the shape of the output tensor.
Keyword args
- dtype (ExTorch.DType, optional): the desired data type of the returned tensor. Default: if nil, uses the global default (see ExTorch.set_default_dtype).
- layout (ExTorch.Layout, optional): the desired layout of the returned tensor. Default: :strided.
- device (ExTorch.Device, optional): the desired device of the returned tensor. Default: if nil, uses the current device for the default tensor type (see ExTorch.set_default_device). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
- requires_grad (boolean(), optional): whether autograd should record operations on the returned tensor. Default: false.
- pin_memory (boolean(), optional): if set, the returned tensor is allocated in pinned memory. Works only for CPU tensors. Default: false.
- memory_format (ExTorch.MemoryFormat, optional): the desired memory format of the returned tensor. Default: :contiguous.
Examples
iex> ExTorch.randn({3, 3, 5})
#Tensor<
[[[ 0.6246, -0.4914, 1.1007, -0.0740, -1.6833],
[-0.3883, 1.2653, -0.7250, 0.4994, -0.0219],
[-1.3880, 1.8336, -1.7369, -0.2781, -0.0703]],
[[ 0.2841, 0.7564, -0.3294, 0.1375, 2.0717],
[-0.6085, -0.8361, 0.5009, 1.5529, 0.5856],
[-0.3905, -0.3704, 1.1392, 0.3159, -0.5587]],
[[ 0.8050, -0.0064, -0.6925, -0.0121, -1.2824],
[-1.7309, -1.4089, -1.0207, 0.2222, -0.5027],
[-0.4363, -0.1095, 1.3950, -0.4580, 0.2475]]]
[
size: {3, 3, 5},
dtype: :float,
device: :cpu,
requires_grad: false
]>
iex> ExTorch.randn({3, 3, 5}, device: :cuda)
#Tensor<
[[[ 3.5948e-01, 1.9308e-01, -1.0206e-01, -8.1509e-01, -1.6322e+00],
[ 5.3390e-02, -1.2340e-01, -4.0909e-01, 3.5126e-01, -1.4023e-01],
[ 6.5496e-01, 1.4283e+00, -1.2375e+00, 1.3729e+00, 4.2116e-01]],
[[ 1.4638e+00, 6.9129e-03, -1.4147e+00, -1.8253e+00, -1.9235e+00],
[-1.3941e-01, -7.3455e-01, 3.7658e-01, -1.0569e-01, 6.8978e-01],
[ 3.7640e-01, -3.5241e-01, -1.1376e-01, -5.2477e-01, -1.6157e-01]],
[[-2.8951e-01, -1.5665e+00, 3.4778e-01, -2.1329e+00, -1.0400e+00],
[ 4.7831e-04, 1.2714e+00, 1.6693e+00, -2.1787e+00, 4.4486e-01],
[-3.2052e-01, 2.3278e+00, 6.2929e-01, 2.5321e-01, -1.4433e+00]]]
[
size: {3, 3, 5},
dtype: :float,
device: {:cuda, 0},
requires_grad: false
]>
@spec randn_like(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()
Available signature calls:
randn_like(input)
@spec randn_like(ExTorch.Tensor.t(), device: ExTorch.Device.device(), dtype: ExTorch.DType.dtype(), layout: ExTorch.Layout.layout(), memory_format: ExTorch.MemoryFormat.memory_format(), pin_memory: boolean(), requires_grad: boolean() ) :: ExTorch.Tensor.t()
@spec randn_like(ExTorch.Tensor.t(), ExTorch.Tensor.Options.t()) :: ExTorch.Tensor.t()
Returns a tensor filled with random numbers from a normal distribution
with mean 0 and variance 1 (also called the standard normal
distribution), with the same size as input.
ExTorch.randn_like(input) is equivalent to
ExTorch.randn(input.size, dtype: input.dtype, layout: input.layout, device: input.device)
Arguments
input: The input tensor (ExTorch.Tensor)
Keyword args
- dtype (ExTorch.DType, optional): the desired data type of the returned tensor. Default: :auto. If :auto, it will use the same data type as the input. If nil, it will use the global default (see ExTorch.set_default_dtype).
- layout (ExTorch.Layout, optional): the desired layout of the returned tensor. Default: nil. If nil, it will use the same layout as the input.
- device (ExTorch.Device, optional): the desired device of the returned tensor. Default: :auto. If :auto, it will use the same device as the input. If nil, it will use the current device for the default tensor type (see ExTorch.set_default_device). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
- requires_grad (boolean(), optional): whether autograd should record operations on the returned tensor. Default: false.
- pin_memory (boolean(), optional): if set, the returned tensor is allocated in pinned memory. Works only for CPU tensors. Default: false.
- memory_format (ExTorch.MemoryFormat, optional): the desired memory format of the returned tensor. Default: :preserve. If :preserve, it will use the same memory format as the input.
Examples
# Derive a new float64 tensor from another one
iex> a = ExTorch.empty({3, 2, 2}, dtype: :float64)
iex> ExTorch.randn_like(a)
#Tensor<
[[[ 0.6394, -1.0540],
  [ 0.8050, -0.6426]],
 [[-0.7196,  0.6789],
  [ 0.2813, -1.4029]],
 [[ 0.0898, -0.4235],
  [ 1.3301,  0.2744]]]
[
size: {3, 2, 2},
dtype: :double,
device: :cpu,
requires_grad: false
]>
# Derive a new CUDA float64 tensor from another one
iex> b = ExTorch.empty({3, 2}, device: :cuda)
iex> ExTorch.randn_like(b, dtype: :float64)
#Tensor<
[[-0.2639,  0.7628],
 [ 0.5935, -1.4772],
 [ 0.0176, -0.2496]]
[
size: {3, 2},
dtype: :double,
device: {:cuda, 0},
requires_grad: false
]>
@spec tensor(ExTorch.Scalar.scalar_or_list()) :: ExTorch.Tensor.t()
See ExTorch.tensor/2
Available signature calls:
tensor(list)
@spec tensor(ExTorch.Scalar.scalar_or_list(), device: ExTorch.Device.device(), dtype: ExTorch.DType.dtype(), layout: ExTorch.Layout.layout(), memory_format: ExTorch.MemoryFormat.memory_format(), pin_memory: boolean(), requires_grad: boolean() ) :: ExTorch.Tensor.t()
@spec tensor( ExTorch.Scalar.scalar_or_list(), ExTorch.Tensor.Options.t() ) :: ExTorch.Tensor.t()
Constructs a tensor with data.
Arguments
list: Initial data for the tensor. Can be a list, tuple or number.
Keyword args
- dtype (ExTorch.DType, optional): the desired data type of the returned tensor. Default: if nil, uses the global default (see ExTorch.set_default_dtype).
- layout (ExTorch.Layout, optional): the desired layout of the returned tensor. Default: :strided.
- device (ExTorch.Device, optional): the desired device of the returned tensor. Default: if nil, uses the current device for the default tensor type (see ExTorch.set_default_device). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
- requires_grad (boolean(), optional): whether autograd should record operations on the returned tensor. Default: false.
- pin_memory (boolean(), optional): if set, the returned tensor is allocated in pinned memory. Works only for CPU tensors. Default: false.
- memory_format (ExTorch.MemoryFormat, optional): the desired memory format of the returned tensor. Default: :contiguous.
Examples
iex> ExTorch.tensor([[0.1, 1.2], [2.2, 3.1], [4.9, 5.2]])
#Tensor<
[[0.1000, 1.2000],
[2.2000, 3.1000],
[4.9000, 5.2000]]
[
size: {3, 2},
dtype: :float,
device: :cpu,
requires_grad: false
]>
# Type inference
iex> ExTorch.tensor([0, 1])
#Tensor<
[0, 1]
[size: {2}, dtype: :byte, device: :cpu, requires_grad: false]>
iex> ExTorch.tensor([[0.11111, 0.222222, 0.3333333]], dtype: :float64)
#Tensor<
[[0.1111, 0.2222, 0.3333]]
[
size: {1, 3},
dtype: :double,
device: :cpu,
requires_grad: false
]>
@spec zeros(tuple() | [integer()]) :: ExTorch.Tensor.t()
See ExTorch.zeros/2
Available signature calls:
zeros(size)
@spec zeros(tuple() | [integer()], device: ExTorch.Device.device(), dtype: ExTorch.DType.dtype(), layout: ExTorch.Layout.layout(), memory_format: ExTorch.MemoryFormat.memory_format(), pin_memory: boolean(), requires_grad: boolean() ) :: ExTorch.Tensor.t()
@spec zeros( tuple() | [integer()], ExTorch.Tensor.Options.t() ) :: ExTorch.Tensor.t()
Returns a tensor filled with the scalar value 0, with the shape defined
by the variable argument size.
Arguments
size: a tuple/list of integers defining the shape of the output tensor.
Keyword args
- dtype (ExTorch.DType, optional): the desired data type of the returned tensor. Default: if nil, uses the global default (see ExTorch.set_default_dtype).
- layout (ExTorch.Layout, optional): the desired layout of the returned tensor. Default: :strided.
- device (ExTorch.Device, optional): the desired device of the returned tensor. Default: if nil, uses the current device for the default tensor type (see ExTorch.set_default_device). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
- requires_grad (boolean(), optional): whether autograd should record operations on the returned tensor. Default: false.
- pin_memory (boolean(), optional): if set, the returned tensor is allocated in pinned memory. Works only for CPU tensors. Default: false.
- memory_format (ExTorch.MemoryFormat, optional): the desired memory format of the returned tensor. Default: :contiguous.
Examples
iex> ExTorch.zeros({2, 3})
#Tensor<
[[0., 0., 0.],
[0., 0., 0.]]
[
size: {2, 3},
dtype: :float,
device: :cpu,
requires_grad: false
]>
iex> ExTorch.zeros({2, 3}, dtype: :uint8, device: :cuda)
#Tensor<
[[0, 0, 0],
[0, 0, 0]]
[
size: {2, 3},
dtype: :byte,
device: {:cuda, 0},
requires_grad: false
]>
@spec zeros_like(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()
Available signature calls:
zeros_like(input)
@spec zeros_like(ExTorch.Tensor.t(), device: ExTorch.Device.device(), dtype: ExTorch.DType.dtype(), layout: ExTorch.Layout.layout(), memory_format: ExTorch.MemoryFormat.memory_format(), pin_memory: boolean(), requires_grad: boolean() ) :: ExTorch.Tensor.t()
@spec zeros_like(ExTorch.Tensor.t(), ExTorch.Tensor.Options.t()) :: ExTorch.Tensor.t()
Returns a tensor filled with the scalar value 0, with the same size as input.
ExTorch.zeros_like(input) is equivalent to
ExTorch.zeros(input.size, dtype: input.dtype, layout: input.layout, device: input.device)
Arguments
input: The input tensor (ExTorch.Tensor)
Keyword args
dtype (ExTorch.DType, optional): the desired data type of returned tensor. Default: auto. If auto, it will use the same data type as the input. If nil, it will use a global default (see ExTorch.set_default_dtype).
layout (ExTorch.Layout, optional): the desired layout of returned Tensor. Default: nil. If nil, it will use the same layout as the input.
device (ExTorch.Device, optional): the desired device of returned tensor. Default: auto. If auto, it will use the same device as the input. If nil, it will use the current device for the default tensor type (see ExTorch.set_default_device). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
requires_grad (boolean(), optional): If autograd should record operations on the returned tensor. Default: false.
pin_memory (bool, optional): If set, returned tensor would be allocated in the pinned memory. Works only for CPU tensors. Default: false.
memory_format (ExTorch.MemoryFormat, optional): the desired memory format of returned Tensor. Default: :preserve. If :preserve, it will use the same memory format as the input.
Examples
# Create a tensor filled with zeros from another float64 tensor.
iex> a = ExTorch.rand({3, 4}, dtype: :float64)
iex> ExTorch.zeros_like(a)
#Tensor<
[[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.]]
[
size: {3, 4},
dtype: :double,
device: :cpu,
requires_grad: false
]>
# Create a complex tensor filled with zeros on the GPU from another CPU tensor.
iex> a = ExTorch.rand({3, 4}, dtype: :complex64)
iex> ExTorch.zeros_like(a, device: :cuda)
#Tensor<
[[0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],
[0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],
[0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j]]
[
size: {3, 4},
dtype: :complex_float,
device: {:cuda, 0},
requires_grad: false
]>
Tensor manipulation
@spec adjoint(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()
Returns a view of the tensor conjugated and with the last two dimensions transposed.
ExTorch.adjoint(x) is equivalent to ExTorch.conj(ExTorch.transpose(x, -2, -1)) and to
ExTorch.transpose(x, -2, -1) for real tensors.
Arguments
input(ExTorch.Tensor) - the input tensor.
Examples
iex> x = ExTorch.arange(4)
iex> a = ExTorch.complex(x, x) |> ExTorch.reshape({2, 2})
#Tensor<
[[0.0000+0.0000j, 1.0000+1.0000j],
[2.0000+2.0000j, 3.0000+3.0000j]]
[
size: {2, 2},
dtype: :complex_float,
device: :cpu,
requires_grad: false
]>
iex> ExTorch.adjoint(a)
#Tensor<
[[0.0000-0.0000j, 2.0000-2.0000j],
[1.0000-1.0000j, 3.0000-3.0000j]]
[
size: {2, 2},
dtype: :complex_float,
device: :cpu,
requires_grad: false
]>
@spec cat([ExTorch.Tensor.t()] | tuple()) :: ExTorch.Tensor.t()
See ExTorch.cat/3
Available signature calls:
cat(input)
@spec cat([ExTorch.Tensor.t()] | tuple(), integer()) :: ExTorch.Tensor.t()
@spec cat([ExTorch.Tensor.t()] | tuple(), dim: integer(), out: ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
See ExTorch.cat/3
Available signature calls:
cat(input, kwargs)
cat(input, dim)
@spec cat([ExTorch.Tensor.t()] | tuple(), integer(), [ {:out, ExTorch.Tensor.t() | nil} ]) :: ExTorch.Tensor.t()
@spec cat([ExTorch.Tensor.t()] | tuple(), integer(), ExTorch.Tensor.t() | nil) :: ExTorch.Tensor.t()
Concatenates the given sequence of tensors along the given dimension.
All tensors must either have the same shape (except in the concatenating dimension) or be empty.
Arguments
tensors([ExTorch.Tensor] | tuple()) - A sequence of tensors of the same type. Non-empty tensors provided must have the same shape, except in the cat dimension.
Optional arguments
dim(integer()) - the dimension over which the tensors are concatenated. Default: 0
out(ExTorch.Tensor | nil) - an optional pre-allocated tensor used to store the concatenation output. Default: nil
Examples
iex> x = ExTorch.arange(5) |> ExTorch.unsqueeze(-1)
#Tensor<
[[0.0000],
[1.0000],
[2.0000],
[3.0000],
[4.0000]]
[
size: {5, 1},
dtype: :float,
device: :cpu,
requires_grad: false
]>
iex> ExTorch.cat([x, x], -1)
#Tensor<
[[0.0000, 0.0000],
[1.0000, 1.0000],
[2.0000, 2.0000],
[3.0000, 3.0000],
[4.0000, 4.0000]]
[
size: {5, 2},
dtype: :float,
device: :cpu,
requires_grad: false
]>
@spec chunk(ExTorch.Tensor.t(), integer()) :: [ExTorch.Tensor.t()]
See ExTorch.chunk/3
Available signature calls:
chunk(input, chunks)
@spec chunk(ExTorch.Tensor.t(), integer(), integer()) :: [ExTorch.Tensor.t()]
@spec chunk(ExTorch.Tensor.t(), integer(), [{:dim, integer()}]) :: [ ExTorch.Tensor.t() ]
Attempts to split a tensor into the specified number of chunks. Each chunk is a view of the input tensor.
If the tensor size along the given dimension dim is divisible by chunks, all returned chunks
will be the same size. If the tensor size along the given dimension dim is not divisible by chunks,
all returned chunks will be the same size, except the last one. If such division is not possible,
this function may return fewer than the specified number of chunks.
Arguments
input(ExTorch.Tensor) - the tensor to split
chunks(integer) - number of chunks to return
Optional arguments
dim(integer) - dimension along which to split the tensor. Default: 0
Notes
- This function may return fewer than the specified number of chunks!
- Use ExTorch.tensor_split/3 to ensure that the result will have the exact number of chunks.
Examples
iex> ExTorch.arange(11) |> ExTorch.chunk(6)
[
#Tensor<
[0.0000, 1.0000]
[
size: {2},
dtype: :float,
device: :cpu,
requires_grad: false
]>,
#Tensor<
[2., 3.]
[
size: {2},
dtype: :float,
device: :cpu,
requires_grad: false
]>,
#Tensor<
[4., 5.]
[
size: {2},
dtype: :float,
device: :cpu,
requires_grad: false
]>,
#Tensor<
[6., 7.]
[
size: {2},
dtype: :float,
device: :cpu,
requires_grad: false
]>,
#Tensor<
[8., 9.]
[
size: {2},
dtype: :float,
device: :cpu,
requires_grad: false
]>,
#Tensor<
[10.]
[size: {1}, dtype: :float, device: :cpu, requires_grad: false]>
]
@spec column_stack([ExTorch.Tensor.t()] | tuple()) :: ExTorch.Tensor.t()
Available signature calls:
column_stack(tensors)
@spec column_stack( [ExTorch.Tensor.t()] | tuple(), [{:out, ExTorch.Tensor.t() | nil}] ) :: ExTorch.Tensor.t()
@spec column_stack([ExTorch.Tensor.t()] | tuple(), ExTorch.Tensor.t() | nil) :: ExTorch.Tensor.t()
Creates a new tensor by horizontally stacking the tensors in tensors.
Equivalent to ExTorch.hstack(tensors), except each zero or one dimensional
tensor t in tensors is first reshaped into a (ExTorch.Tensor.numel(t), 1)
column before being stacked horizontally.
Arguments
tensors ([ExTorch.Tensor] | tuple()) - sequence of tensors to concatenate.
Optional arguments
out (ExTorch.Tensor | nil) - an optional pre-allocated tensor used to store the output result. Default: nil
Examples
# Stack two 1D tensors
iex> a = ExTorch.tensor([1, 2, 3])
iex> b = ExTorch.tensor([4, 5, 6])
iex> ExTorch.column_stack([a, b])
#Tensor<
[[1, 4],
[2, 5],
[3, 6]]
[size: {3, 2}, dtype: :byte, device: :cpu, requires_grad: false]>
# Stack 2D tensors
iex> a = ExTorch.arange(5)
iex> b = ExTorch.arange(10) |> ExTorch.reshape({5, 2})
iex> ExTorch.column_stack({a, b, b})
#Tensor<
[[0.0000, 0.0000, 1.0000, 0.0000, 1.0000],
[1.0000, 2.0000, 3.0000, 2.0000, 3.0000],
[2.0000, 4.0000, 5.0000, 4.0000, 5.0000],
[3.0000, 6.0000, 7.0000, 6.0000, 7.0000],
[4.0000, 8.0000, 9.0000, 8.0000, 9.0000]]
[size: {5, 5}, dtype: :float, device: :cpu, requires_grad: false]>
@spec concat([ExTorch.Tensor.t()] | tuple()) :: ExTorch.Tensor.t()
Alias to cat/1
@spec concat([ExTorch.Tensor.t()] | tuple(), integer()) :: ExTorch.Tensor.t()
@spec concat([ExTorch.Tensor.t()] | tuple(), dim: integer(), out: ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Alias to cat/2
@spec concat([ExTorch.Tensor.t()] | tuple(), integer(), [ {:out, ExTorch.Tensor.t() | nil} ]) :: ExTorch.Tensor.t()
@spec concat([ExTorch.Tensor.t()] | tuple(), integer(), ExTorch.Tensor.t() | nil) :: ExTorch.Tensor.t()
Alias to cat/3
@spec concatenate([ExTorch.Tensor.t()] | tuple()) :: ExTorch.Tensor.t()
Alias to cat/1
@spec concatenate([ExTorch.Tensor.t()] | tuple(), integer()) :: ExTorch.Tensor.t()
@spec concatenate([ExTorch.Tensor.t()] | tuple(), dim: integer(), out: ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Alias to cat/2
@spec concatenate([ExTorch.Tensor.t()] | tuple(), integer(), [ {:out, ExTorch.Tensor.t() | nil} ]) :: ExTorch.Tensor.t()
@spec concatenate([ExTorch.Tensor.t()] | tuple(), integer(), ExTorch.Tensor.t() | nil) :: ExTorch.Tensor.t()
Alias to cat/3
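Because concat and concatenate delegate to cat, the three spellings are interchangeable; a sketch reusing the tensor from the cat/3 example:

```elixir
# All three calls perform the same concatenation along the last dimension.
iex> x = ExTorch.arange(5) |> ExTorch.unsqueeze(-1)
iex> ExTorch.concat([x, x], -1)
iex> ExTorch.concatenate([x, x], -1)
# Both return the same {5, 2} tensor as ExTorch.cat([x, x], -1)
```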
@spec conj(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()
Returns a view of input with a flipped conjugate bit.
If input has a non-complex dtype, this function just returns input.
Arguments
tensor: Input tensor (ExTorch.Tensor)
Notes
ExTorch.conj/1 performs a lazy conjugation, but the actual conjugated
tensor can be materialized at any time using ExTorch.resolve_conj/1.
Examples
iex> a = ExTorch.rand({2, 2}, dtype: :complex64)
#Tensor<
[[0.5885+0.0263j, 0.8141+0.0605j],
[0.9169+0.3126j, 0.6344+0.2768j]]
[
size: {2, 2},
dtype: :complex_float,
device: :cpu,
requires_grad: false
]>
# Conjugate the input
iex> b = ExTorch.conj(a)
#Tensor<
[[0.5885-0.0263j, 0.8141-0.0605j],
[0.9169-0.3126j, 0.6344-0.2768j]]
[
size: {2, 2},
dtype: :complex_float,
device: :cpu,
requires_grad: false
]>
# Check that conj bit is set to true
iex> ExTorch.Tensor.is_conj(b)
true
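Following the note above, the lazy conjugation can be materialized with ExTorch.resolve_conj/1; a sketch continuing the session above, assuming resolve_conj/1 clears the conj bit as its PyTorch counterpart does:

```elixir
# Materialize the conjugated values into actual storage
iex> c = ExTorch.resolve_conj(b)
# The materialized tensor no longer carries the lazy-conj flag
iex> ExTorch.Tensor.is_conj(c)
false
```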
@spec diagonal_scatter( ExTorch.Tensor.t(), ExTorch.Tensor.t() ) :: ExTorch.Tensor.t()
See ExTorch.diagonal_scatter/6
Available signature calls:
diagonal_scatter(input, src)
@spec diagonal_scatter( ExTorch.Tensor.t(), ExTorch.Tensor.t(), offset: integer(), dim1: integer(), dim2: integer(), out: ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
@spec diagonal_scatter( ExTorch.Tensor.t(), ExTorch.Tensor.t(), integer() ) :: ExTorch.Tensor.t()
See ExTorch.diagonal_scatter/6
Available signature calls:
diagonal_scatter(input, src, offset)
diagonal_scatter(input, src, kwargs)
@spec diagonal_scatter( ExTorch.Tensor.t(), ExTorch.Tensor.t(), integer(), integer() ) :: ExTorch.Tensor.t()
@spec diagonal_scatter( ExTorch.Tensor.t(), ExTorch.Tensor.t(), integer(), dim1: integer(), dim2: integer(), out: ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
See ExTorch.diagonal_scatter/6
Available signature calls:
diagonal_scatter(input, src, offset, kwargs)
diagonal_scatter(input, src, offset, dim1)
@spec diagonal_scatter( ExTorch.Tensor.t(), ExTorch.Tensor.t(), integer(), integer(), integer() ) :: ExTorch.Tensor.t()
@spec diagonal_scatter( ExTorch.Tensor.t(), ExTorch.Tensor.t(), integer(), integer(), dim2: integer(), out: ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
See ExTorch.diagonal_scatter/6
Available signature calls:
diagonal_scatter(input, src, offset, dim1, kwargs)
diagonal_scatter(input, src, offset, dim1, dim2)
@spec diagonal_scatter( ExTorch.Tensor.t(), ExTorch.Tensor.t(), integer(), integer(), integer(), [{:out, ExTorch.Tensor.t() | nil}] ) :: ExTorch.Tensor.t()
@spec diagonal_scatter( ExTorch.Tensor.t(), ExTorch.Tensor.t(), integer(), integer(), integer(), ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Embeds the values of the src tensor into input along the diagonal elements of input,
with respect to dim1 and dim2.
This function returns a tensor with fresh storage; it does not return a view.
The argument offset controls which diagonal to consider:
- If offset = 0, it is the main diagonal.
- If offset > 0, it is above the main diagonal.
- If offset < 0, it is below the main diagonal.
Arguments
input(ExTorch.Tensor) - the input tensor. Must be at least 2-dimensional.
src(ExTorch.Tensor) - the tensor to embed into input.
offset(integer) - which diagonal to consider. Default: 0 (main diagonal).
dim1(integer) - first dimension with respect to which to take diagonal. Default: 0.
dim2(integer) - second dimension with respect to which to take diagonal. Default: 1.
Optional arguments
out(ExTorch.Tensor or nil) - an optional pre-allocated tensor used to store the output result. Default: nil
Notes
src must be of the proper size in order to be embedded into input. Specifically, it should have
the same shape as ExTorch.diagonal(input, offset, dim1, dim2)
Examples
iex> a = ExTorch.zeros({3, 3})
#Tensor<
[[ 0., 0., 0.],
[ 0., 0., 0.],
[ 0., 0., 0.]]
[size: {3, 3}, dtype: :float, device: :cpu, requires_grad: false]>
iex> ExTorch.diagonal_scatter(a, ExTorch.ones(3), 0)
#Tensor<
[[1.0000, 0.0000, 0.0000],
[0.0000, 1.0000, 0.0000],
[0.0000, 0.0000, 1.0000]]
[size: {3, 3}, dtype: :float, device: :cpu, requires_grad: false]>
iex> ExTorch.diagonal_scatter(a, ExTorch.ones(2), 1)
#Tensor<
[[0.0000, 1.0000, 0.0000],
[0.0000, 0.0000, 1.0000],
[0.0000, 0.0000, 0.0000]]
[size: {3, 3}, dtype: :float, device: :cpu, requires_grad: false]>
@spec dsplit(ExTorch.Tensor.t(), integer() | [integer()] | tuple()) :: [ ExTorch.Tensor.t() ]
Splits input, a tensor with three or more dimensions, into multiple tensors
depthwise according to indices_or_sections. Each split is a view of input.
This is equivalent to calling ExTorch.tensor_split(input, indices_or_sections, dim: 2)
(the split dimension is 2), except that if indices_or_sections is an integer it must
evenly divide the split dimension or a runtime error will be thrown.
Arguments
input(ExTorch.Tensor) - tensor to split.
indices_or_sections(integer() | [integer()] | tuple()) - See argument in ExTorch.tensor_split/3
Examples
iex> t = ExTorch.arange(16) |> ExTorch.reshape({2, 2, 4})
#Tensor<
[[[ 0.0000, 1.0000, 2.0000, 3.0000],
[ 4.0000, 5.0000, 6.0000, 7.0000]],
[[ 8.0000, 9.0000, 10.0000, 11.0000],
[12.0000, 13.0000, 14.0000, 15.0000]]]
[size: {2, 2, 4}, dtype: :float, device: :cpu, requires_grad: false]>
iex> ExTorch.dsplit(t, 2)
[
#Tensor<
[[[ 0.0000, 1.0000],
[ 4.0000, 5.0000]],
[[ 8.0000, 9.0000],
[12.0000, 13.0000]]]
[size: {2, 2, 2}, dtype: :float, device: :cpu, requires_grad: false]>,
#Tensor<
[[[ 2., 3.],
[ 6., 7.]],
[[10., 11.],
[14., 15.]]]
[size: {2, 2, 2}, dtype: :float, device: :cpu, requires_grad: false]>
]
@spec dstack([ExTorch.Tensor.t()] | tuple()) :: ExTorch.Tensor.t()
See ExTorch.dstack/2
Available signature calls:
dstack(tensors)
@spec dstack( [ExTorch.Tensor.t()] | tuple(), [{:out, ExTorch.Tensor.t() | nil}] ) :: ExTorch.Tensor.t()
@spec dstack([ExTorch.Tensor.t()] | tuple(), ExTorch.Tensor.t() | nil) :: ExTorch.Tensor.t()
Stack tensors in sequence depthwise (along third axis).
This is equivalent to concatenation along the third axis after 1-D and 2-D tensors have been reshaped by ensuring each tensor has at least 3 dimensions.
Arguments
tensors([ExTorch.Tensor.t()] | tuple()) - sequence of tensors to concatenate.
Optional arguments
out (ExTorch.Tensor | nil) - an optional pre-allocated tensor used to store the output result. Default: nil
Examples
iex> a = ExTorch.tensor([1, 2, 3])
iex> b = ExTorch.tensor([4, 5, 6])
iex> ExTorch.dstack({a, b})
#Tensor<
[[[1, 4],
[2, 5],
[3, 6]]]
[size: {1, 3, 2}, dtype: :byte, device: :cpu, requires_grad: false]>
iex> a = ExTorch.tensor([[1],[2],[3]])
#Tensor<
[[1],
[2],
[3]]
[size: {3, 1}, dtype: :byte, device: :cpu, requires_grad: false]>
iex> b = ExTorch.tensor([[4],[5],[6]])
#Tensor<
[[4],
[5],
[6]]
[size: {3, 1}, dtype: :byte, device: :cpu, requires_grad: false]>
iex> ExTorch.dstack([a, b])
#Tensor<
[[[1, 4]],
[[2, 5]],
[[3, 6]]]
[size: {3, 1, 2}, dtype: :byte, device: :cpu, requires_grad: false]>
@spec gather( ExTorch.Tensor.t(), integer(), ExTorch.Tensor.t() ) :: ExTorch.Tensor.t()
See ExTorch.gather/5
Available signature calls:
gather(input, dim, index)
@spec gather( ExTorch.Tensor.t(), integer(), ExTorch.Tensor.t(), sparse_grad: boolean(), out: ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
@spec gather( ExTorch.Tensor.t(), integer(), ExTorch.Tensor.t(), boolean() ) :: ExTorch.Tensor.t()
See ExTorch.gather/5
Available signature calls:
gather(input, dim, index, sparse_grad)
gather(input, dim, index, kwargs)
@spec gather( ExTorch.Tensor.t(), integer(), ExTorch.Tensor.t(), boolean(), [{:out, ExTorch.Tensor.t() | nil}] ) :: ExTorch.Tensor.t()
@spec gather( ExTorch.Tensor.t(), integer(), ExTorch.Tensor.t(), boolean(), ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Gathers values along an axis specified by dim.
For a 3-D tensor the output is specified by:
out[i][j][k] = input[index[i][j][k]][j][k] # if dim == 0
out[i][j][k] = input[i][index[i][j][k]][k] # if dim == 1
out[i][j][k] = input[i][j][index[i][j][k]] # if dim == 2
input and index must have the same number of dimensions. It is also required
that ExTorch.Tensor.size(index, d) <= ExTorch.Tensor.size(input, d) for all dimensions d != dim.
out will have the same shape as index. Note that input and index do not broadcast
against each other.
Arguments
input(ExTorch.Tensor) - the source tensor.
dim(integer()) - the axis along which to index.
index(ExTorch.Tensor) - the indices of elements to gather. Its dtype must be :int64 or :long
Optional arguments
sparse_grad(boolean()) - if true, then the gradient w.r.t. input will be a sparse tensor. Default: false
out(ExTorch.Tensor | nil) - an optional pre-allocated tensor used to store the output result. Default: nil
Examples
iex> t = ExTorch.tensor([[1, 2], [3, 4]])
iex> ExTorch.gather(t, 1, ExTorch.tensor([[0, 0], [1, 0]], dtype: :int64))
#Tensor<
[[1, 1],
[4, 3]]
[size: {2, 2}, dtype: :byte, device: :cpu, requires_grad: false]>
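Gathering along dim 0 instead follows the first formula above, out[i][j] = input[index[i][j]][j]; the values below are computed by hand from that rule:

```elixir
iex> t = ExTorch.tensor([[1, 2], [3, 4]])
iex> ExTorch.gather(t, 0, ExTorch.tensor([[0, 1], [1, 0]], dtype: :int64))
#Tensor<
[[1, 4],
[3, 2]]
[size: {2, 2}, dtype: :byte, device: :cpu, requires_grad: false]>
```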
@spec hsplit(ExTorch.Tensor.t(), integer() | [integer()] | tuple()) :: [ ExTorch.Tensor.t() ]
Splits input, a tensor with one or more dimensions, into multiple tensors horizontally according to indices_or_sections.
Each split is a view of input.
If input is one dimensional this is equivalent to calling ExTorch.tensor_split(input, indices_or_sections, dim: 0)
(the split dimension is zero), and if input has two or more dimensions it’s equivalent to calling
ExTorch.tensor_split(input, indices_or_sections, dim: 1) (the split dimension is 1),
except that if indices_or_sections is an integer it must evenly divide the split
dimension or a runtime error will be thrown.
Arguments
input(ExTorch.Tensor) - tensor to split.
indices_or_sections(integer() | [integer()] | tuple()) - See argument in ExTorch.tensor_split/3
Examples
iex> a = ExTorch.arange(16) |> ExTorch.reshape({4, 4})
#Tensor<
[[ 0.0000, 1.0000, 2.0000, 3.0000],
[ 4.0000, 5.0000, 6.0000, 7.0000],
[ 8.0000, 9.0000, 10.0000, 11.0000],
[12.0000, 13.0000, 14.0000, 15.0000]]
[size: {4, 4}, dtype: :float, device: :cpu, requires_grad: false]>
iex> ExTorch.hsplit(a, 2)
[
#Tensor<
[[ 0.0000, 1.0000],
[ 4.0000, 5.0000],
[ 8.0000, 9.0000],
[12.0000, 13.0000]]
[size: {4, 2}, dtype: :float, device: :cpu, requires_grad: false]>,
#Tensor<
[[ 2., 3.],
[ 6., 7.],
[10., 11.],
[14., 15.]]
[size: {4, 2}, dtype: :float, device: :cpu, requires_grad: false]>
]
iex> ExTorch.hsplit(a, [3, 6])
[
#Tensor<
[[ 0.0000, 1.0000, 2.0000],
[ 4.0000, 5.0000, 6.0000],
[ 8.0000, 9.0000, 10.0000],
[12.0000, 13.0000, 14.0000]]
[size: {4, 3}, dtype: :float, device: :cpu, requires_grad: false]>,
#Tensor<
[[ 3.],
[ 7.],
[11.],
[15.]]
[size: {4, 1}, dtype: :float, device: :cpu, requires_grad: false]>,
#Tensor<
[]
[size: {4, 0}, dtype: :float, device: :cpu, requires_grad: false]>
]
@spec hstack([ExTorch.Tensor.t()] | tuple()) :: ExTorch.Tensor.t()
See ExTorch.hstack/2
Available signature calls:
hstack(tensors)
@spec hstack( [ExTorch.Tensor.t()] | tuple(), [{:out, ExTorch.Tensor.t() | nil}] ) :: ExTorch.Tensor.t()
@spec hstack([ExTorch.Tensor.t()] | tuple(), ExTorch.Tensor.t() | nil) :: ExTorch.Tensor.t()
Stack tensors in sequence horizontally (column wise).
This is equivalent to concatenation along the first axis for 1-D tensors, and along the second axis for all other tensors.
Arguments
tensors([ExTorch.Tensor] | tuple()) - sequence of tensors to concatenate.
Optional arguments
out(ExTorch.Tensor | nil) - an optional pre-allocated tensor used to store the output result. Default:nil
Examples
iex> a = ExTorch.tensor([1, 2, 3])
iex> b = ExTorch.tensor([4, 5, 6])
iex> ExTorch.hstack({a, b})
#Tensor<
[1, 2, 3, 4, 5, 6]
[size: {6}, dtype: :byte, device: :cpu, requires_grad: false]>
iex> a = ExTorch.tensor([[1],[2],[3]])
#Tensor<
[[1],
[2],
[3]]
[size: {3, 1}, dtype: :byte, device: :cpu, requires_grad: false]>
iex> b = ExTorch.tensor([[4],[5],[6]])
#Tensor<
[[4],
[5],
[6]]
[size: {3, 1}, dtype: :byte, device: :cpu, requires_grad: false]>
iex> ExTorch.hstack([a, b])
#Tensor<
[[1, 4],
[2, 5],
[3, 6]]
[size: {3, 2}, dtype: :byte, device: :cpu, requires_grad: false]>
@spec moveaxis(ExTorch.Tensor.t(), tuple() | integer(), tuple() | integer()) :: ExTorch.Tensor.t()
Alias to movedim/3
@spec movedim(ExTorch.Tensor.t(), tuple() | integer(), tuple() | integer()) :: ExTorch.Tensor.t()
Moves the dimension(s) of input at the position(s) in source to the position(s) in destination.
Other dimensions of input that are not explicitly moved remain in their original order and appear
at the positions not specified in destination.
Arguments
input(ExTorch.Tensor) - the input tensor.
source(integer() | tuple()) - original positions of the dims to move. These must be unique.
destination(integer() | tuple()) - destination positions of the dims to move. These must be unique.
Examples
iex> a = ExTorch.randn({3, 2, 1})
#Tensor<
[[[-0.0404],
[ 0.5073]],
[[ 0.3008],
[-0.6428]],
[[-0.8649],
[ 0.3615]]]
[size: {3, 2, 1}, dtype: :float, device: :cpu, requires_grad: false]>
# Swap two singular dimensions
iex> ExTorch.movedim(a, 1, 0)
#Tensor<
[[[-0.0404],
[ 0.3008],
[-0.8649]],
[[ 0.5073],
[-0.6428],
[ 0.3615]]]
[size: {2, 3, 1}, dtype: :float, device: :cpu, requires_grad: false]>
# Swap multiple dimensions
iex> ExTorch.movedim(a, {1, 2}, {0, 1})
#Tensor<
[[[-0.0404, 0.3008, -0.8649]],
[[ 0.5073, -0.6428, 0.3615]]]
[size: {2, 1, 3}, dtype: :float, device: :cpu, requires_grad: false]>
@spec narrow(ExTorch.Tensor.t(), integer(), integer() | ExTorch.Tensor.t(), integer()) :: ExTorch.Tensor.t()
Returns a new tensor that is a narrowed version of input tensor.
The dimension dim is narrowed from start to start + length.
The returned tensor and input tensor share the same underlying storage.
Arguments
input(ExTorch.Tensor) - the tensor to narrow.
dim(integer) - the dimension along which to narrow.
start(integer | ExTorch.Tensor) - index of the element to start the narrowed dimension from. Can be negative, which means indexing from the end of dim. If an ExTorch.Tensor, it must be a 0-dim integral Tensor (bools not allowed).
length(integer) - length of the narrowed dimension, must be weakly positive.
Examples
iex> a = ExTorch.arange(12) |> ExTorch.reshape({4, 3})
#Tensor<
[[ 0.0000, 1.0000, 2.0000],
[ 3.0000, 4.0000, 5.0000],
[ 6.0000, 7.0000, 8.0000],
[ 9.0000, 10.0000, 11.0000]]
[size: {4, 3}, dtype: :float, device: :cpu, requires_grad: false]>
# Narrow tensor from 0 to 2 in the first dimension
iex> ExTorch.narrow(a, 0, 0, 2)
#Tensor<
[[0.0000, 1.0000, 2.0000],
[3.0000, 4.0000, 5.0000]]
[size: {2, 3}, dtype: :float, device: :cpu, requires_grad: false]>
# Narrow tensor from 1 to 3 in the second dimension
iex> ExTorch.narrow(a, 1, 1, 2)
#Tensor<
[[ 1., 2.],
[ 4., 5.],
[ 7., 8.],
[10., 11.]]
[size: {4, 2}, dtype: :float, device: :cpu, requires_grad: false]>
# Narrow tensor using a `start` tensor
iex> ExTorch.narrow(a, -1, ExTorch.tensor(-1), 1)
#Tensor<
[[ 2.],
[ 5.],
[ 8.],
[11.]]
[size: {4, 1}, dtype: :float, device: :cpu, requires_grad: false]>
@spec narrow_copy( ExTorch.Tensor.t(), integer(), integer(), integer() ) :: ExTorch.Tensor.t()
Available signature calls:
narrow_copy(input, dim, start, length)
@spec narrow_copy( ExTorch.Tensor.t(), integer(), integer(), integer(), [{:out, ExTorch.Tensor.t() | nil}] ) :: ExTorch.Tensor.t()
@spec narrow_copy( ExTorch.Tensor.t(), integer(), integer(), integer(), ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Same as ExTorch.narrow/4 except this returns a copy rather than shared storage.
This is primarily for sparse tensors, which do not have a shared-storage narrow method.
Arguments
input(ExTorch.Tensor) - the tensor to narrow.
dim(integer) - the dimension along which to narrow.
start(integer) - index of the element to start the narrowed dimension from. Can be negative, which means indexing from the end of dim.
length(integer) - length of the narrowed dimension, must be weakly positive.
Optional arguments
out(ExTorch.Tensor | nil) - an optional pre-allocated tensor used to store the output result. Default: nil
Examples
iex> a = ExTorch.arange(12) |> ExTorch.reshape({4, 3})
#Tensor<
[[ 0.0000, 1.0000, 2.0000],
[ 3.0000, 4.0000, 5.0000],
[ 6.0000, 7.0000, 8.0000],
[ 9.0000, 10.0000, 11.0000]]
[size: {4, 3}, dtype: :float, device: :cpu, requires_grad: false]>
# Narrow tensor from 0 to 2 in the first dimension
iex> ExTorch.narrow_copy(a, 0, 0, 2)
#Tensor<
[[0.0000, 1.0000, 2.0000],
[3.0000, 4.0000, 5.0000]]
[size: {2, 3}, dtype: :float, device: :cpu, requires_grad: false]>
# Narrow tensor from 1 to 3 in the second dimension
iex> ExTorch.narrow_copy(a, 1, 1, 2)
#Tensor<
[[ 1., 2.],
[ 4., 5.],
[ 7., 8.],
[10., 11.]]
[size: {4, 2}, dtype: :float, device: :cpu, requires_grad: false]>
@spec nonzero(ExTorch.Tensor.t()) :: ExTorch.Tensor.t() | tuple()
Available signature calls:
nonzero(input)
@spec nonzero(ExTorch.Tensor.t(), out: ExTorch.Tensor.t() | nil, as_tuple: boolean()) :: ExTorch.Tensor.t() | tuple()
@spec nonzero(ExTorch.Tensor.t(), ExTorch.Tensor.t() | nil) :: ExTorch.Tensor.t() | tuple()
Available signature calls:
nonzero(input, out)
nonzero(input, kwargs)
@spec nonzero(ExTorch.Tensor.t(), ExTorch.Tensor.t() | nil, boolean()) :: ExTorch.Tensor.t() | tuple()
@spec nonzero(ExTorch.Tensor.t(), ExTorch.Tensor.t() | nil, [{:as_tuple, boolean()}]) :: ExTorch.Tensor.t() | tuple()
Retrieve the indices of all non-zero elements in a tensor.
This function can behave differently depending on the value of the
as_tuple parameter:
When as_tuple is false (default):
Returns a tensor containing the indices of all non-zero elements of input.
Each row in the result contains the indices of a non-zero element in input.
The result is sorted lexicographically, with the last index changing the fastest (C-style).
If input has $n$ dimensions, then the resulting indices tensor out is of size $(z \times n)$,
where $z$ is the total number of non-zero elements in the input tensor.
When as_tuple is true
Returns a tuple of 1-D tensors, one for each dimension in input, each containing the indices
(in that dimension) of all non-zero elements of input.
If input has $n$ dimensions, then the resulting tuple contains $n$ tensors of size $z$, where $z$
is the total number of non-zero elements in the input tensor.
As a special case, when input has zero dimensions and a nonzero scalar value, it is treated as a
one-dimensional tensor with one element.
Arguments
input(ExTorch.Tensor) - the input tensor.
Optional arguments
out(ExTorch.Tensor | nil) - an optional pre-allocated tensor used to store the output result. This will only take effect when as_tuple = false. Default: nil
as_tuple(boolean) - if false, the function will return the output tensor containing indices. Else, it returns one 1-D tensor for each dimension, containing the indices of each nonzero element along that dimension. Default: false
Examples
iex> input1 = ExTorch.tensor([1, 1, 1, 0, 1])
iex> input2 = ExTorch.tensor([[0.6, 0.0, 0.0, 0.0],
...> [0.0, 0.4, 0.0, 0.0],
...> [0.0, 0.0, 1.2, 0.0],
...> [0.0, 0.0, 0.0,-0.4]])
# Return tensor indices
iex> ExTorch.nonzero(input1)
#Tensor<
[[0],
[1],
[2],
[4]]
[size: {4, 1}, dtype: :long, device: :cpu, requires_grad: false]>
iex> ExTorch.nonzero(input2)
#Tensor<
[[0, 0],
[1, 1],
[2, 2],
[3, 3]]
[size: {4, 2}, dtype: :long, device: :cpu, requires_grad: false]>
# Return tuple indices
iex> ExTorch.nonzero(input1, as_tuple: true)
#Tensor<
[0, 1, 2, 4]
[size: {4}, dtype: :long, device: :cpu, requires_grad: false]>
iex> ExTorch.nonzero(input2, as_tuple: true)
{#Tensor<
[0, 1, 2, 3]
[size: {4}, dtype: :long, device: :cpu, requires_grad: false]>,
#Tensor<
[0, 1, 2, 3]
[size: {4}, dtype: :long, device: :cpu, requires_grad: false]>}
@spec permute(ExTorch.Tensor.t(), tuple() | [integer()]) :: ExTorch.Tensor.t()
Returns a view of the original tensor input with its dimensions permuted.
Arguments
input(ExTorch.Tensor) - the input tensor.
dims(tuple() | [integer()]) - The desired ordering of dimensions.
Examples
iex> a = ExTorch.rand({3, 2, 4, 5})
iex> out = ExTorch.permute(a, {2, -1, 0, 1})
iex> out.size
{4, 5, 3, 2}
@spec reshape(ExTorch.Tensor.t(), tuple() | [integer()]) :: ExTorch.Tensor.t()
Returns a tensor with the same data and number of elements as input, but
with the specified shape.
When possible, the returned tensor will be a view of input. Otherwise, it
will be a copy. Contiguous inputs and inputs with compatible strides can be
reshaped without copying, but you should not depend on the copying vs.
viewing behavior.
A single dimension may be -1, in which case it’s inferred from the remaining dimensions and the number of elements in input.
Arguments
tensor: input tensor (ExTorch.Tensor)
shape: the new shape ([integer()] | tuple())
Examples
iex> a = ExTorch.arange(0, 20) |> ExTorch.reshape({5, 4})
#Tensor<
[[ 0., 1., 2., 3.],
[ 4., 5., 6., 7.],
[ 8., 9., 10., 11.],
[12., 13., 14., 15.],
[16., 17., 18., 19.]]
[
size: {5, 4},
dtype: :double,
device: :cpu,
requires_grad: false
]>
iex> b = ExTorch.tensor([[0, 1], [2, 3]]) |> ExTorch.reshape({-1})
#Tensor<
[0, 1, 2, 3]
[
size: {4},
dtype: :byte,
device: :cpu,
requires_grad: false
]>
@spec row_stack([ExTorch.Tensor.t()] | tuple()) :: ExTorch.Tensor.t()
Alias to vstack/1
@spec row_stack( [ExTorch.Tensor.t()] | tuple(), [{:out, ExTorch.Tensor.t() | nil}] ) :: ExTorch.Tensor.t()
@spec row_stack([ExTorch.Tensor.t()] | tuple(), ExTorch.Tensor.t() | nil) :: ExTorch.Tensor.t()
Alias to vstack/2
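Since row_stack forwards directly to vstack, 1-D tensors become rows of the result; a sketch reusing the tensors from the hstack example, assuming vstack follows the usual row-stacking semantics:

```elixir
iex> a = ExTorch.tensor([1, 2, 3])
iex> b = ExTorch.tensor([4, 5, 6])
iex> ExTorch.row_stack([a, b])
#Tensor<
[[1, 2, 3],
[4, 5, 6]]
[size: {2, 3}, dtype: :byte, device: :cpu, requires_grad: false]>
```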
@spec scatter( ExTorch.Tensor.t(), integer(), ExTorch.Tensor.t(), ExTorch.Tensor.t() | float() ) :: ExTorch.Tensor.t()
Available signature calls:
scatter(input, dim, index, src)
@spec scatter( ExTorch.Tensor.t(), integer(), ExTorch.Tensor.t(), ExTorch.Tensor.t() | float(), out: ExTorch.Tensor.t() | nil, inplace: boolean() ) :: ExTorch.Tensor.t()
@spec scatter( ExTorch.Tensor.t(), integer(), ExTorch.Tensor.t(), ExTorch.Tensor.t() | float(), ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Available signature calls:
scatter(input, dim, index, src, out)
scatter(input, dim, index, src, kwargs)
@spec scatter( ExTorch.Tensor.t(), integer(), ExTorch.Tensor.t(), ExTorch.Tensor.t() | float(), ExTorch.Tensor.t() | nil, boolean() ) :: ExTorch.Tensor.t()
@spec scatter( ExTorch.Tensor.t(), integer(), ExTorch.Tensor.t(), ExTorch.Tensor.t() | float(), ExTorch.Tensor.t() | nil, [{:inplace, boolean()}] ) :: ExTorch.Tensor.t()
Writes all values from the tensor src into input at the indices specified in the index tensor.
For each value in src, its output index is specified by its index in src for dimension != dim and by
the corresponding value in index for dimension = dim.
For a 3-D tensor, input is updated as:
input[index[i][j][k]][j][k] = src[i][j][k] # if dim == 0
input[i][index[i][j][k]][k] = src[i][j][k] # if dim == 1
input[i][j][index[i][j][k]] = src[i][j][k] # if dim == 2
This is the reverse operation of the manner described in ExTorch.gather/5.
input, index and src (if it is a ExTorch.Tensor) should all have the same number of dimensions.
It is also required that index.size(d) <= src.size(d) for all dimensions d, and that
index.size(d) <= input.size(d) for all dimensions d != dim. Note that index and src do not broadcast.
Moreover, as for ExTorch.gather/5, the values of index must be between 0 and input.size(dim) - 1
inclusive.
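As an illustration, the dim == 0 update rule above can be modeled in plain Python over nested lists. This is a hypothetical sketch of the semantics, not ExTorch code; it reproduces the first example below.

```python
# Pure-Python model of the scatter update rule for dim == 0 on 2-D data:
# out[index[i][j]][j] = src[i][j]. Illustrative only; not ExTorch code.
def scatter_dim0(input_, index, src):
    out = [row[:] for row in input_]  # copy, so the input is left untouched
    for i in range(len(index)):
        for j in range(len(index[i])):
            out[index[i][j]][j] = src[i][j]
    return out

zeros = [[0.0] * 5 for _ in range(3)]
src = [[1.0, 2.0, 3.0, 4.0, 5.0], [6.0, 7.0, 8.0, 9.0, 10.0]]
index = [[0, 1, 2, 0]]
result = scatter_dim0(zeros, index, src)
# result == [[1.0, 0.0, 0.0, 4.0, 0.0],
#            [0.0, 2.0, 0.0, 0.0, 0.0],
#            [0.0, 0.0, 3.0, 0.0, 0.0]]
```

Note that only the positions covered by index are written; all other entries of the input are carried through unchanged.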
Arguments
input (ExTorch.Tensor) - the input tensor.
dim (integer) - the axis along which to index.
index (ExTorch.Tensor) - the indices of elements to scatter; it can be either empty or of the same dimensionality as src. When empty, the operation returns input unchanged. Its dtype must be :long or :int64.
src (ExTorch.Tensor or float) - the source element(s) to scatter.
Optional arguments
out (ExTorch.Tensor or nil) - an optional pre-allocated tensor used to store the output result. It has no effect if inplace: true. Default: nil
inplace (bool) - if true, the scatter operation is performed in place on the input argument; otherwise a separate tensor with the result is returned. Default: false
Warnings
When indices are not unique, the behavior is non-deterministic (one of the values from src will be picked arbitrarily)
and the gradient will be incorrect (it will be propagated to all locations in the source that correspond to the same index)!
Notes
- The backward pass is implemented only for src.size == index.size.
- This function does not expose the reduce argument, since it is deprecated. It is recommended to use the ExTorch.scatter_reduce function instead.
Examples
iex> src = ExTorch.arange(1, 11) |> ExTorch.reshape({2, 5})
#Tensor<
[[ 1., 2., 3., 4., 5.],
[ 6., 7., 8., 9., 10.]]
[size: {2, 5}, dtype: :float, device: :cpu, requires_grad: false]>
iex> index = ExTorch.tensor([[0, 1, 2, 0]], dtype: :int64)
#Tensor<
[[0, 1, 2, 0]]
[size: {1, 4}, dtype: :long, device: :cpu, requires_grad: false]>
iex> input = ExTorch.zeros({3, 5}, dtype: src.dtype)
#Tensor<
[[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.]]
[size: {3, 5}, dtype: :float, device: :cpu, requires_grad: false]>
iex> ExTorch.scatter(input, 0, index, src)
#Tensor<
[[1.0000, 0.0000, 0.0000, 4.0000, 0.0000],
[0.0000, 2.0000, 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 3.0000, 0.0000, 0.0000]]
[size: {3, 5}, dtype: :float, device: :cpu, requires_grad: false]>
iex> index = ExTorch.tensor([[0, 1, 2], [0, 1, 4]], dtype: :int64)
#Tensor<
[[0, 1, 2],
[0, 1, 4]]
[size: {2, 3}, dtype: :long, device: :cpu, requires_grad: false]>
iex> ExTorch.scatter(input, 1, index, src)
#Tensor<
[[1.0000, 2.0000, 3.0000, 0.0000, 0.0000],
[6.0000, 7.0000, 0.0000, 0.0000, 8.0000],
[0.0000, 0.0000, 0.0000, 0.0000, 0.0000]]
[size: {3, 5}, dtype: :float, device: :cpu, requires_grad: false]>
@spec scatter_add( ExTorch.Tensor.t(), integer(), ExTorch.Tensor.t(), ExTorch.Tensor.t() ) :: ExTorch.Tensor.t()
Available signature calls:
scatter_add(input, dim, index, src)
@spec scatter_add( ExTorch.Tensor.t(), integer(), ExTorch.Tensor.t(), ExTorch.Tensor.t(), out: ExTorch.Tensor.t() | nil, inplace: boolean() ) :: ExTorch.Tensor.t()
@spec scatter_add( ExTorch.Tensor.t(), integer(), ExTorch.Tensor.t(), ExTorch.Tensor.t(), ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Available signature calls:
scatter_add(input, dim, index, src, out)
scatter_add(input, dim, index, src, kwargs)
@spec scatter_add( ExTorch.Tensor.t(), integer(), ExTorch.Tensor.t(), ExTorch.Tensor.t(), ExTorch.Tensor.t() | nil, boolean() ) :: ExTorch.Tensor.t()
@spec scatter_add( ExTorch.Tensor.t(), integer(), ExTorch.Tensor.t(), ExTorch.Tensor.t(), ExTorch.Tensor.t() | nil, [{:inplace, boolean()}] ) :: ExTorch.Tensor.t()
Adds all values from the tensor src into input at the indices specified in the index tensor in a similar
fashion to ExTorch.scatter/6.
For each value in src, its output index is specified by its index in src for dimension != dim and by
the corresponding value in index for dimension = dim.
For a 3-D tensor, input is updated as:
input[index[i][j][k]][j][k] += src[i][j][k] # if dim == 0
input[i][index[i][j][k]][k] += src[i][j][k] # if dim == 1
input[i][j][index[i][j][k]] += src[i][j][k] # if dim == 2
This is the reverse operation of the manner described in ExTorch.gather/5.
input, index and src (if it is a ExTorch.Tensor) should all have the same number of dimensions.
It is also required that index.size(d) <= src.size(d) for all dimensions d, and that
index.size(d) <= input.size(d) for all dimensions d != dim. Note that index and src do not broadcast.
Moreover, as for ExTorch.gather/5, the values of index must be between 0 and input.size(dim) - 1
inclusive.
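The only difference from the scatter rule is that repeated destinations accumulate instead of overwriting. A hypothetical pure-Python model for dim == 0 (not ExTorch code), matching the second example below:

```python
# Pure-Python model of scatter_add for dim == 0 on 2-D data:
# out[index[i][j]][j] += src[i][j]. Illustrative only; not ExTorch code.
def scatter_add_dim0(input_, index, src):
    out = [row[:] for row in input_]
    for i in range(len(index)):
        for j in range(len(index[i])):
            out[index[i][j]][j] += src[i][j]
    return out

zeros = [[0.0] * 5 for _ in range(3)]
ones = [[1.0] * 5 for _ in range(2)]
index = [[0, 1, 2, 0, 0], [0, 1, 2, 2, 2]]
result = scatter_add_dim0(zeros, index, ones)
# Repeated destinations accumulate: e.g. entry [0][0] receives two ones.
```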
Arguments
input (ExTorch.Tensor) - the input tensor.
dim (integer) - the axis along which to index.
index (ExTorch.Tensor) - the indices of elements to scatter; it can be either empty or of the same dimensionality as src. When empty, the operation returns input unchanged. Its dtype must be :long or :int64.
src (ExTorch.Tensor) - the source elements to scatter and add.
Optional arguments
out (ExTorch.Tensor or nil) - an optional pre-allocated tensor used to store the output result. It has no effect if inplace: true. Default: nil
inplace (bool) - if true, the scatter operation is performed in place on the input argument; otherwise a separate tensor with the result is returned. Default: false
Notes
- The backward pass is implemented only for src.size == index.size.
- This operation may behave nondeterministically when given tensors on a CUDA device. See Reproducibility for more information.
Examples
iex> src = ExTorch.ones({2, 5})
#Tensor<
[[1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1.]]
[size: {2, 5}, dtype: :float, device: :cpu, requires_grad: false]>
iex> index = ExTorch.tensor([[0, 1, 2, 0, 0]], dtype: :int64)
#Tensor<
[[0, 1, 2, 0, 0]]
[size: {1, 5}, dtype: :long, device: :cpu, requires_grad: false]>
iex> input = ExTorch.zeros({3, 5}, dtype: src.dtype)
#Tensor<
[[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.]]
[size: {3, 5}, dtype: :float, device: :cpu, requires_grad: false]>
iex> ExTorch.scatter_add(input, 0, index, src)
#Tensor<
[[1.0000, 0.0000, 0.0000, 1.0000, 1.0000],
[0.0000, 1.0000, 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 1.0000, 0.0000, 0.0000]]
[size: {3, 5}, dtype: :float, device: :cpu, requires_grad: false]>
iex> index = ExTorch.tensor([[0, 1, 2, 0, 0], [0, 1, 2, 2, 2]], dtype: :int64)
#Tensor<
[[0, 1, 2, 0, 0],
[0, 1, 2, 2, 2]]
[size: {2, 5}, dtype: :long, device: :cpu, requires_grad: false]>
iex> ExTorch.scatter_add(input, 0, index, src)
#Tensor<
[[2.0000, 0.0000, 0.0000, 1.0000, 1.0000],
[0.0000, 2.0000, 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 2.0000, 1.0000, 1.0000]]
[size: {3, 5}, dtype: :float, device: :cpu, requires_grad: false]>
@spec scatter_reduce( ExTorch.Tensor.t(), integer(), ExTorch.Tensor.t(), ExTorch.Tensor.t(), :sum | :prod | :mean | :amax | :amin ) :: ExTorch.Tensor.t()
Available signature calls:
scatter_reduce(input, dim, index, src, reduce)
@spec scatter_reduce( ExTorch.Tensor.t(), integer(), ExTorch.Tensor.t(), ExTorch.Tensor.t(), :sum | :prod | :mean | :amax | :amin, boolean() ) :: ExTorch.Tensor.t()
@spec scatter_reduce( ExTorch.Tensor.t(), integer(), ExTorch.Tensor.t(), ExTorch.Tensor.t(), :sum | :prod | :mean | :amax | :amin, include_self: boolean(), out: ExTorch.Tensor.t() | nil, inplace: boolean() ) :: ExTorch.Tensor.t()
Available signature calls:
scatter_reduce(input, dim, index, src, reduce, kwargs)
scatter_reduce(input, dim, index, src, reduce, include_self)
@spec scatter_reduce( ExTorch.Tensor.t(), integer(), ExTorch.Tensor.t(), ExTorch.Tensor.t(), :sum | :prod | :mean | :amax | :amin, boolean(), out: ExTorch.Tensor.t() | nil, inplace: boolean() ) :: ExTorch.Tensor.t()
@spec scatter_reduce( ExTorch.Tensor.t(), integer(), ExTorch.Tensor.t(), ExTorch.Tensor.t(), :sum | :prod | :mean | :amax | :amin, boolean(), ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Available signature calls:
scatter_reduce(input, dim, index, src, reduce, include_self, out)
scatter_reduce(input, dim, index, src, reduce, include_self, kwargs)
@spec scatter_reduce( ExTorch.Tensor.t(), integer(), ExTorch.Tensor.t(), ExTorch.Tensor.t(), :sum | :prod | :mean | :amax | :amin, boolean(), ExTorch.Tensor.t() | nil, boolean() ) :: ExTorch.Tensor.t()
@spec scatter_reduce( ExTorch.Tensor.t(), integer(), ExTorch.Tensor.t(), ExTorch.Tensor.t(), :sum | :prod | :mean | :amax | :amin, boolean(), ExTorch.Tensor.t() | nil, [{:inplace, boolean()}] ) :: ExTorch.Tensor.t()
Reduces all values from the src tensor to the indices specified in the index tensor in the input tensor
using the applied reduction defined via the reduce argument (:sum, :prod, :mean, :amax, :amin).
For each value in src, it is reduced to an index in input which is specified by its index in src for
dimension != dim and by the corresponding value in index for dimension = dim. If include_self: true,
the values in the input tensor are included in the reduction.
For a 3-D tensor with reduce: :sum and include_self: true, input is updated as:
input[index[i][j][k]][j][k] += src[i][j][k] # if dim == 0
input[i][index[i][j][k]][k] += src[i][j][k] # if dim == 1
input[i][j][index[i][j][k]] += src[i][j][k] # if dim == 2
This is the reverse operation of the manner described in ExTorch.gather/5.
input, index and src (if it is a ExTorch.Tensor) should all have the same number of dimensions.
It is also required that index.size(d) <= src.size(d) for all dimensions d, and that
index.size(d) <= input.size(d) for all dimensions d != dim. Note that index and src do not broadcast.
Moreover, as for ExTorch.gather/5, the values of index must be between 0 and input.size(dim) - 1
inclusive.
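The 1-D case, with the effect of include_self, can be sketched in plain Python. This is a hypothetical model covering only the :sum, :amax and :amin reductions (not ExTorch code); it reproduces the examples below.

```python
# Pure-Python model of a 1-D scatter_reduce. Values of src are grouped by
# their destination index; include_self seeds each group with the existing
# input value. Illustrative only; not ExTorch code.
def scatter_reduce_1d(input_, index, src, reduce, include_self=True):
    groups = {}
    for pos, dest in enumerate(index):
        groups.setdefault(dest, []).append(src[pos])
    out = list(input_)  # destinations never indexed stay unchanged
    folds = {"sum": sum, "amax": max, "amin": min}
    for dest, vals in groups.items():
        if include_self:
            vals = [input_[dest]] + vals
        out[dest] = folds[reduce](vals)
    return out

index = [0, 1, 0, 1, 2, 1]
src = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
summed = scatter_reduce_1d([1.0, 2.0, 3.0, 4.0], index, src, "sum")
maxed = scatter_reduce_1d([5.0, 4.0, 3.0, 2.0], index, src, "amax")
# summed == [5.0, 14.0, 8.0, 4.0]; maxed == [5.0, 6.0, 5.0, 2.0]
```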
Arguments
input (ExTorch.Tensor) - the input tensor.
dim (integer) - the axis along which to index.
index (ExTorch.Tensor) - the indices of elements to scatter; it can be either empty or of the same dimensionality as src. When empty, the operation returns input unchanged. Its dtype must be :long or :int64.
src (ExTorch.Tensor) - the source elements to scatter and reduce.
reduce (:sum, :prod, :mean, :amax or :amin) - the reduction operation to apply for non-unique indices.
Optional arguments
include_self (boolean) - whether elements from the input tensor are included in the reduction. Default: true
out (ExTorch.Tensor or nil) - an optional pre-allocated tensor used to store the output result. It has no effect if inplace: true. Default: nil
inplace (bool) - if true, the scatter operation is performed in place on the input argument; otherwise a separate tensor with the result is returned. Default: false
Notes
- The backward pass is implemented only for src.size == index.size.
- This operation may behave nondeterministically when given tensors on a CUDA device. See Reproducibility for more information.
- This function is in beta and may change in the near future.
Examples
iex> src = ExTorch.tensor([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
#Tensor<
[1., 2., 3., 4., 5., 6.]
[size: {6}, dtype: :float, device: :cpu, requires_grad: false]>
iex> index = ExTorch.tensor([0, 1, 0, 1, 2, 1], dtype: :long)
#Tensor<
[0, 1, 0, 1, 2, 1]
[size: {6}, dtype: :long, device: :cpu, requires_grad: false]>
iex> input = ExTorch.tensor([1.0, 2.0, 3.0, 4.0])
#Tensor<
[1., 2., 3., 4.]
[size: {4}, dtype: :float, device: :cpu, requires_grad: false]>
iex> ExTorch.scatter_reduce(input, 0, index, src, :sum)
#Tensor<
[ 5., 14., 8., 4.]
[size: {4}, dtype: :float, device: :cpu, requires_grad: false]>
iex> ExTorch.scatter_reduce(input, 0, index, src, :sum, include_self: false)
#Tensor<
[ 4., 12., 5., 4.]
[size: {4}, dtype: :float, device: :cpu, requires_grad: false]>
iex> input2 = ExTorch.tensor([5.0, 4.0, 3.0, 2.0])
#Tensor<
[5., 4., 3., 2.]
[size: {4}, dtype: :float, device: :cpu, requires_grad: false]>
iex> ExTorch.scatter_reduce(input2, 0, index, src, :amax)
#Tensor<
[5., 6., 5., 2.]
[size: {4}, dtype: :float, device: :cpu, requires_grad: false]>
iex> ExTorch.scatter_reduce(input2, 0, index, src, :amax, include_self: false)
#Tensor<
[3., 6., 5., 2.]
[size: {4}, dtype: :float, device: :cpu, requires_grad: false]>
@spec select_scatter( ExTorch.Tensor.t(), ExTorch.Tensor.t(), integer(), integer() ) :: ExTorch.Tensor.t()
Available signature calls:
select_scatter(input, src, dim, index)
@spec select_scatter( ExTorch.Tensor.t(), ExTorch.Tensor.t(), integer(), integer(), [{:out, ExTorch.Tensor.t() | nil}] ) :: ExTorch.Tensor.t()
@spec select_scatter( ExTorch.Tensor.t(), ExTorch.Tensor.t(), integer(), integer(), ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Embeds the values of the src tensor into input at the given index.
This function returns a tensor with fresh storage; it does not create a view.
Arguments
input (ExTorch.Tensor) - the input tensor.
src (ExTorch.Tensor) - the tensor to embed into input.
dim (integer) - the dimension to insert the slice into.
index (integer) - the index to select with.
Optional arguments
out (ExTorch.Tensor or nil) - an optional pre-allocated tensor used to store the output result. Default: nil
Note
src must be of the proper size in order to be embedded into input.
Specifically, it should have the same shape as ExTorch.select(input, dim, index).
Examples
iex> a = ExTorch.zeros({2, 2})
#Tensor<
[[ 0., 0.],
[ 0., 0.]]
[size: {2, 2}, dtype: :float, device: :cpu, requires_grad: false]>
iex> b = ExTorch.ones(2)
#Tensor<
[1., 1.]
[size: {2}, dtype: :float, device: :cpu, requires_grad: false]>
iex> ExTorch.select_scatter(a, b, 0, 0)
#Tensor<
[[1.0000, 1.0000],
[0.0000, 0.0000]]
[size: {2, 2}, dtype: :float, device: :cpu, requires_grad: false]>
@spec slice_scatter( ExTorch.Tensor.t(), ExTorch.Tensor.t() ) :: ExTorch.Tensor.t()
Available signature calls:
slice_scatter(input, src)
@spec slice_scatter( ExTorch.Tensor.t(), ExTorch.Tensor.t(), integer() ) :: ExTorch.Tensor.t()
@spec slice_scatter( ExTorch.Tensor.t(), ExTorch.Tensor.t(), dim: integer(), start: integer() | nil, stop: integer() | nil, step: integer() | nil, out: ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Available signature calls:
slice_scatter(input, src, kwargs)
slice_scatter(input, src, dim)
@spec slice_scatter( ExTorch.Tensor.t(), ExTorch.Tensor.t(), integer(), start: integer() | nil, stop: integer() | nil, step: integer() | nil, out: ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
@spec slice_scatter( ExTorch.Tensor.t(), ExTorch.Tensor.t(), integer(), integer() | nil ) :: ExTorch.Tensor.t()
Available signature calls:
slice_scatter(input, src, dim, start)
slice_scatter(input, src, dim, kwargs)
@spec slice_scatter( ExTorch.Tensor.t(), ExTorch.Tensor.t(), integer(), integer() | nil, stop: integer() | nil, step: integer() | nil, out: ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
@spec slice_scatter( ExTorch.Tensor.t(), ExTorch.Tensor.t(), integer(), integer() | nil, integer() | nil ) :: ExTorch.Tensor.t()
Available signature calls:
slice_scatter(input, src, dim, start, stop)
slice_scatter(input, src, dim, start, kwargs)
@spec slice_scatter( ExTorch.Tensor.t(), ExTorch.Tensor.t(), integer(), integer() | nil, integer() | nil, step: integer() | nil, out: ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
@spec slice_scatter( ExTorch.Tensor.t(), ExTorch.Tensor.t(), integer(), integer() | nil, integer() | nil, integer() | nil ) :: ExTorch.Tensor.t()
Available signature calls:
slice_scatter(input, src, dim, start, stop, step)
slice_scatter(input, src, dim, start, stop, kwargs)
@spec slice_scatter( ExTorch.Tensor.t(), ExTorch.Tensor.t(), integer(), integer() | nil, integer() | nil, integer() | nil, [{:out, ExTorch.Tensor.t() | nil}] ) :: ExTorch.Tensor.t()
@spec slice_scatter( ExTorch.Tensor.t(), ExTorch.Tensor.t(), integer(), integer() | nil, integer() | nil, integer() | nil, ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Embeds the values of the src tensor into input at the given dimension.
This function returns a tensor with fresh storage; it does not create a view.
Arguments
input (ExTorch.Tensor) - the input tensor.
src (ExTorch.Tensor) - the tensor to embed into input.
Optional arguments
dim (integer) - the dimension to insert the slice into. Default: 0
start (integer or nil) - the start index from which the slice should be inserted. Default: nil
stop (integer or nil) - the end index until which the slice is inserted. Default: nil
step (integer or nil) - how many elements are skipped between slice insertions. Default: 1
out (ExTorch.Tensor or nil) - an optional pre-allocated tensor used to store the output result. Default: nil
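The start/stop/step selection behaves like an ordinary slice over the chosen dimension. A hypothetical pure-Python model along dim 0 (not ExTorch code):

```python
# Pure-Python model of slice_scatter along dim 0: the rows of src are
# written at the row positions selected by slice(start, stop, step).
# Illustrative only; not ExTorch code.
def slice_scatter_dim0(input_, src, start=None, stop=None, step=1):
    out = [row[:] for row in input_]
    targets = range(len(out))[slice(start, stop, step)]
    for src_row, dest in zip(src, targets):
        out[dest] = list(src_row)
    return out

a = [[0.0] * 4 for _ in range(4)]
b = [[1.0] * 4 for _ in range(2)]
result = slice_scatter_dim0(a, b, start=2)  # rows 2 and 3 become ones
```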
Examples
iex> a = ExTorch.zeros({8, 8})
iex> b = ExTorch.ones({2, 8})
iex> ExTorch.slice_scatter(a, b, start: 6)
#Tensor<
[[0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000],
[1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000]]
[size: {8, 8}, dtype: :float, device: :cpu, requires_grad: false]>
iex> b = ExTorch.ones({8, 2})
iex> ExTorch.slice_scatter(a, b, dim: 1, start: 2, stop: 6, step: 2)
#Tensor<
[[0.0000, 0.0000, 1.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 1.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 1.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 1.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 1.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 1.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 1.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 1.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000]]
[size: {8, 8}, dtype: :float, device: :cpu, requires_grad: false]>
@spec split(ExTorch.Tensor.t(), integer() | [integer()] | tuple()) :: [ ExTorch.Tensor.t() ]
See ExTorch.split/3
Available signature calls:
split(tensor, split_size_or_sections)
@spec split(ExTorch.Tensor.t(), integer() | [integer()] | tuple(), integer()) :: [ ExTorch.Tensor.t() ]
@spec split(ExTorch.Tensor.t(), integer() | [integer()] | tuple(), [{:dim, integer()}]) :: [ ExTorch.Tensor.t() ]
Splits the tensor into chunks. Each chunk is a view of the original tensor.
If split_size_or_sections is an integer, then tensor will be split into equally
sized chunks (when possible). The last chunk will be smaller if the tensor size along the given
dimension dim is not divisible by split_size_or_sections.
If split_size_or_sections is a list, then tensor will be split into
length(split_size_or_sections) chunks with sizes in dim according to split_size_or_sections.
Arguments
tensor (ExTorch.Tensor) - tensor to split.
split_size_or_sections (integer or [integer]) - size of a single chunk or list of sizes for each chunk.
Optional arguments
dim (integer) - dimension along which to split the tensor.
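The chunk sizes produced by both calling conventions can be sketched with a hypothetical helper (not part of ExTorch):

```python
# Chunk sizes along `dim` for a split: an integer yields equal chunks of
# that size plus a smaller trailing remainder; a list is used verbatim.
# Illustrative only; not ExTorch code.
def split_sizes(total, split_size_or_sections):
    if isinstance(split_size_or_sections, int):
        size = split_size_or_sections
        full, rem = divmod(total, size)
        return [size] * full + ([rem] if rem else [])
    return list(split_size_or_sections)

by_size = split_sizes(5, 2)       # -> [2, 2, 1], as in the first example
by_sections = split_sizes(5, [2, 3])  # -> [2, 3], as in the second example
```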
Examples
iex> a = ExTorch.arange(10) |> ExTorch.reshape({5, 2})
#Tensor<
[[0.0000, 1.0000],
[2.0000, 3.0000],
[4.0000, 5.0000],
[6.0000, 7.0000],
[8.0000, 9.0000]]
[size: {5, 2}, dtype: :float, device: :cpu, requires_grad: false]>
iex> ExTorch.split(a, 2)
[
#Tensor<
[[0.0000, 1.0000],
[2.0000, 3.0000]]
[size: {2, 2}, dtype: :float, device: :cpu, requires_grad: false]>,
#Tensor<
[[4., 5.],
[6., 7.]]
[size: {2, 2}, dtype: :float, device: :cpu, requires_grad: false]>,
#Tensor<
[[8., 9.]]
[size: {1, 2}, dtype: :float, device: :cpu, requires_grad: false]>
]
iex> ExTorch.split(a, [2, 3])
[
#Tensor<
[[0.0000, 1.0000],
[2.0000, 3.0000]]
[size: {2, 2}, dtype: :float, device: :cpu, requires_grad: false]>,
#Tensor<
[[4., 5.],
[6., 7.],
[8., 9.]]
[size: {3, 2}, dtype: :float, device: :cpu, requires_grad: false]>
]
@spec squeeze(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()
Available signature calls:
squeeze(input)
@spec squeeze(ExTorch.Tensor.t(), integer() | tuple() | [integer()] | nil) :: ExTorch.Tensor.t()
@spec squeeze( ExTorch.Tensor.t(), [{:dim, integer() | tuple() | [integer()] | nil}] ) :: ExTorch.Tensor.t()
Returns a tensor with all specified dimensions of input of size 1 removed.
For example, if input is of shape $A \times 1 \times B \times C \times 1 \times D$, then
ExTorch.squeeze(input) will be of shape $A \times B \times C \times D$.
When dim is given, a squeeze operation is done only in the given dimension(s).
If input is of shape $A \times 1 \times B$, squeeze(input, 0) leaves the tensor
unchanged, but squeeze(input, 1) will squeeze the tensor to the shape $A \times B$.
Arguments
input (ExTorch.Tensor) - the input tensor.
Optional arguments
dim (integer, tuple, [integer] or nil) - the dimension(s) to squeeze from input. If nil, then all singleton dimensions will be squeezed.
Notes
- The returned tensor shares the storage with the input tensor, so changing the contents of one will change the contents of the other.
- If the tensor has a batch dimension of size 1, then squeeze(input) will also remove the batch dimension, which can lead to unexpected errors. Consider specifying only the dims you wish to be squeezed.
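The resulting shapes can be computed with plain shape arithmetic. A hypothetical pure-Python sketch (not ExTorch code), matching the examples below:

```python
# Shape arithmetic for squeeze: drop size-1 dimensions, either all of
# them or only those named in `dim`. Illustrative only; not ExTorch code.
def squeeze_shape(shape, dim=None):
    if dim is None:
        return tuple(s for s in shape if s != 1)
    dims = dim if isinstance(dim, (tuple, list)) else (dim,)
    dims = {d % len(shape) for d in dims}  # normalize negative dims
    return tuple(s for i, s in enumerate(shape)
                 if not (i in dims and s == 1))

shape = (1, 3, 1, 4, 1, 5)
all_sq = squeeze_shape(shape)          # every singleton removed
one_sq = squeeze_shape(shape, -2)      # only dimension -2 removed
two_sq = squeeze_shape(shape, (2, 4))  # only dimensions 2 and 4 removed
```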
Examples
iex> a = ExTorch.empty({1, 3, 1, 4, 1, 5})
iex> a.size
{1, 3, 1, 4, 1, 5}
# Squeeze all singleton dimensions
iex> b = ExTorch.squeeze(a)
iex> b.size
{3, 4, 5}
# Squeeze a particular dimension
iex> b = ExTorch.squeeze(a, -2)
iex> b.size
{1, 3, 1, 4, 5}
# Squeeze particular dimensions
iex> b = ExTorch.squeeze(a, {2, 4})
iex> b.size
{1, 3, 4, 5}
@spec stack([ExTorch.Tensor.t()] | tuple()) :: ExTorch.Tensor.t()
See ExTorch.stack/3
Available signature calls:
stack(input)
@spec stack([ExTorch.Tensor.t()] | tuple(), integer()) :: ExTorch.Tensor.t()
@spec stack([ExTorch.Tensor.t()] | tuple(), dim: integer(), out: ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
See ExTorch.stack/3
Available signature calls:
stack(input, kwargs)
stack(input, dim)
@spec stack([ExTorch.Tensor.t()] | tuple(), integer(), [ {:out, ExTorch.Tensor.t() | nil} ]) :: ExTorch.Tensor.t()
@spec stack([ExTorch.Tensor.t()] | tuple(), integer(), ExTorch.Tensor.t() | nil) :: ExTorch.Tensor.t()
Concatenates a sequence of tensors along a new dimension.
All tensors need to be of the same size. This function is analogous to ExTorch.cat/3.
Arguments
tensors ([ExTorch.Tensor] | tuple()) - a sequence of tensors of the same type. Non-empty tensors provided must have the same shape.
Optional arguments
dim (integer()) - the dimension over which the tensors are concatenated. Default: 0
out (ExTorch.Tensor | nil) - an optional pre-allocated tensor used to store the concatenation output. Default: nil
Examples
iex> a = ExTorch.rand({3, 4})
#Tensor<
[[0.7419, 0.4063, 0.0514, 0.4281],
[0.7350, 0.1977, 0.5593, 0.1701],
[0.4135, 0.7213, 0.9591, 0.2798]]
[size: {3, 4}, dtype: :float, device: :cpu, requires_grad: false]>
# Concatenate tensors into a new dimension at the beginning
iex> ExTorch.stack([a, a, a])
#Tensor<
[[[0.7419, 0.4063, 0.0514, 0.4281],
[0.7350, 0.1977, 0.5593, 0.1701],
[0.4135, 0.7213, 0.9591, 0.2798]],
[[0.7419, 0.4063, 0.0514, 0.4281],
[0.7350, 0.1977, 0.5593, 0.1701],
[0.4135, 0.7213, 0.9591, 0.2798]],
[[0.7419, 0.4063, 0.0514, 0.4281],
[0.7350, 0.1977, 0.5593, 0.1701],
[0.4135, 0.7213, 0.9591, 0.2798]]]
[size: {3, 3, 4}, dtype: :float, device: :cpu, requires_grad: false]>
# Concatenate tensors into a new dimension at position 1
iex> ExTorch.stack([a, a, a], 1)
#Tensor<
[[[0.7419, 0.4063, 0.0514, 0.4281],
[0.7419, 0.4063, 0.0514, 0.4281],
[0.7419, 0.4063, 0.0514, 0.4281]],
[[0.7350, 0.1977, 0.5593, 0.1701],
[0.7350, 0.1977, 0.5593, 0.1701],
[0.7350, 0.1977, 0.5593, 0.1701]],
[[0.4135, 0.7213, 0.9591, 0.2798],
[0.4135, 0.7213, 0.9591, 0.2798],
[0.4135, 0.7213, 0.9591, 0.2798]]]
[size: {3, 3, 4}, dtype: :float, device: :cpu, requires_grad: false]>
@spec swapaxes(ExTorch.Tensor.t(), integer(), integer()) :: ExTorch.Tensor.t()
Alias to transpose/3
@spec swapdims(ExTorch.Tensor.t(), integer(), integer()) :: ExTorch.Tensor.t()
Alias to transpose/3
@spec t(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()
Expects input to be <= 2-D tensor and transposes dimensions 0 and 1.
0-D and 1-D tensors are returned as is. When input is a 2-D tensor this
is equivalent to ExTorch.transpose(input, 0, 1).
Arguments
input (ExTorch.Tensor) - the input tensor.
Examples
iex> a = ExTorch.rand({4, 5})
#Tensor<
[[0.3812, 0.6590, 0.8400, 0.4826, 0.3654],
[0.4542, 0.4252, 0.5376, 0.8787, 0.6286],
[0.3727, 0.4394, 0.0584, 0.2185, 0.7270],
[0.8123, 0.2479, 0.2493, 0.2429, 0.3871]]
[size: {4, 5}, dtype: :float, device: :cpu, requires_grad: false]>
iex> ExTorch.t(a)
#Tensor<
[[0.3812, 0.4542, 0.3727, 0.8123],
[0.6590, 0.4252, 0.4394, 0.2479],
[0.8400, 0.5376, 0.0584, 0.2493],
[0.4826, 0.8787, 0.2185, 0.2429],
[0.3654, 0.6286, 0.7270, 0.3871]]
[size: {5, 4}, dtype: :float, device: :cpu, requires_grad: false]>
@spec take(ExTorch.Tensor.t(), ExTorch.Tensor.t()) :: ExTorch.Tensor.t()
Returns a new tensor with the elements of input at the given indices.
The input tensor is treated as if it were viewed as a 1-D tensor.
The result takes the same shape as the indices.
Arguments
input (ExTorch.Tensor) - the input tensor.
indices (ExTorch.Tensor) - the indices into the tensor. It must be of :int64 or :long dtype.
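The flattened-index rule is simple enough to model directly. A hypothetical pure-Python sketch (not ExTorch code):

```python
# take treats the input as a flattened 1-D sequence: element k of the
# result is the k-th flattened element. Illustrative only; not ExTorch code.
def take_flat(matrix, indices):
    flat = [x for row in matrix for x in row]
    return [flat[i] for i in indices]

m = [[10, 11, 12],
     [13, 14, 15],
     [16, 17, 18]]
result = take_flat(m, [1, 5, 6])  # flat positions 1, 5 and 6
```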
Examples
iex> a = ExTorch.rand({3, 3})
#Tensor<
[[0.0860, 0.9378, 0.3475],
[0.3576, 0.7145, 0.1036],
[0.7352, 0.4285, 0.2933]]
[size: {3, 3}, dtype: :float, device: :cpu, requires_grad: false]>
iex> ExTorch.take(a, ExTorch.tensor([1, 5, 6], dtype: :int64))
#Tensor<
[0.9378, 0.1036, 0.7352]
[size: {3}, dtype: :float, device: :cpu, requires_grad: false]>
@spec take_along_dim( ExTorch.Tensor.t(), ExTorch.Tensor.t() ) :: ExTorch.Tensor.t()
Available signature calls:
take_along_dim(input, indices)
@spec take_along_dim( ExTorch.Tensor.t(), ExTorch.Tensor.t(), integer() | nil ) :: ExTorch.Tensor.t()
@spec take_along_dim( ExTorch.Tensor.t(), ExTorch.Tensor.t(), dim: integer() | nil, out: ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Available signature calls:
take_along_dim(input, indices, kwargs)
take_along_dim(input, indices, dim)
@spec take_along_dim( ExTorch.Tensor.t(), ExTorch.Tensor.t(), integer() | nil, [{:out, ExTorch.Tensor.t() | nil}] ) :: ExTorch.Tensor.t()
@spec take_along_dim( ExTorch.Tensor.t(), ExTorch.Tensor.t(), integer() | nil, ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Selects values from input at the 1-dimensional indices from indices along the given dim.
Functions that return indices along a dimension, like ExTorch.argmax/3 and ExTorch.argmin/3,
are designed to work with this function. See the examples below.
Arguments
input (ExTorch.Tensor) - the input tensor.
indices (ExTorch.Tensor) - the indices into input. It must have either :long or :int64 dtype.
Optional arguments
dim (integer) - dimension to select along. If nil, then it will index all the dimensions as a single one. Default: nil
out (ExTorch.Tensor or nil) - an optional pre-allocated tensor used to store the output. Default: nil
Notes
This function is similar to NumPy’s take_along_axis. See also ExTorch.gather/5.
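For the 2-D, dim == 1 case, each output row gathers from the matching input row at that row's indices. A hypothetical pure-Python sketch (not ExTorch code), reproducing the argsort example below:

```python
# Pure-Python model of take_along_dim with dim == 1 on 2-D data: output
# row r, column c is input[r][indices[r][c]]. Illustrative only.
def take_along_dim1(input_, indices):
    return [[row[i] for i in idx_row]
            for row, idx_row in zip(input_, indices)]

t = [[10, 30, 20], [60, 40, 50]]
sorted_idx = [[0, 2, 1], [1, 2, 0]]  # per-row argsort of t
result = take_along_dim1(t, sorted_idx)  # each row comes back sorted
```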
Examples
iex> t = ExTorch.tensor([[10, 30, 20], [60, 40, 50]], dtype: :long)
#Tensor<
[[10, 30, 20],
[60, 40, 50]]
[size: {2, 3}, dtype: :long, device: :cpu, requires_grad: false]>
iex> max_idx = ExTorch.argmax(t)
#Tensor< 3 [size: {}, dtype: :long, device: :cpu, requires_grad: false]>
iex> ExTorch.take_along_dim(t, max_idx)
#Tensor< [60] [size: {1}, dtype: :long, device: :cpu, requires_grad: false]>
iex> sorted_idx = ExTorch.argsort(t, dim: 1)
#Tensor<
[[0, 2, 1],
[1, 2, 0]]
[size: {2, 3}, dtype: :long, device: :cpu, requires_grad: false]>
iex> ExTorch.take_along_dim(t, sorted_idx, dim: 1)
#Tensor<
[[10, 20, 30],
[40, 50, 60]]
[size: {2, 3}, dtype: :long, device: :cpu, requires_grad: false]>
@spec tensor_split( ExTorch.Tensor.t(), integer() | [integer()] | tuple() | ExTorch.Tensor.t() ) :: [ExTorch.Tensor.t()]
Available signature calls:
tensor_split(input, indices_or_sections)
@spec tensor_split( ExTorch.Tensor.t(), integer() | [integer()] | tuple() | ExTorch.Tensor.t(), integer() ) :: [ExTorch.Tensor.t()]
@spec tensor_split( ExTorch.Tensor.t(), integer() | [integer()] | tuple() | ExTorch.Tensor.t(), [{:dim, integer()}] ) :: [ExTorch.Tensor.t()]
Splits a tensor into multiple sub-tensors, all of which are views of input,
along dimension dim according to the indices or number of sections specified
by indices_or_sections.
Arguments
input (ExTorch.Tensor) - the tensor to split.
indices_or_sections (integer | ExTorch.Tensor | [integer()] | tuple) -
- If indices_or_sections is an integer n or a zero-dimensional long tensor with value n, input is split into n sections along dimension dim. If input is divisible by n along dimension dim, each section will be of equal size, input.size[dim] / n. If input is not divisible by n, the first input.size[dim] % n sections will have size input.size[dim] / n + 1, and the rest will have size input.size[dim] / n.
- If indices_or_sections is a list or tuple of integers, or a one-dimensional long tensor, then input is split along dimension dim at each of the indices in the list, tuple or tensor. For instance, indices_or_sections = [2, 3] and dim = 0 would result in the tensors input[:2], input[2:3], and input[3:].
- If indices_or_sections is a tensor, it must be a zero-dimensional or one-dimensional long tensor on the CPU.
Optional arguments
dim(integer) - dimension along which to split the tensor. Default: 0
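The section sizes for the integer form can be sketched with a hypothetical helper (not part of ExTorch): the first total % n sections each get one extra element.

```python
# Section sizes for tensor_split with an integer n: unlike split/3, every
# section differs in length by at most one. Illustrative only.
def tensor_split_sizes(total, n):
    base, rem = divmod(total, n)
    return [base + 1] * rem + [base] * (n - rem)

even = tensor_split_sizes(10, 2)   # divisible: equal sections
uneven = tensor_split_sizes(10, 3)  # first 10 % 3 sections get one extra
```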
Examples
# Split a tensor in a given number of chunks
iex> a = ExTorch.arange(10)
iex> ExTorch.tensor_split(a, 2)
[
#Tensor<
[0.0000, 1.0000, 2.0000, 3.0000, 4.0000]
[
size: {5},
dtype: :float,
device: :cpu,
requires_grad: false
]>,
#Tensor<
[5., 6., 7., 8., 9.]
[
size: {5},
dtype: :float,
device: :cpu,
requires_grad: false
]>
]
# Split a tensor into the given sections
iex> ExTorch.tensor_split(a, [2, 5])
[
#Tensor<
[0.0000, 1.0000]
[size: {2}, dtype: :float, device: :cpu, requires_grad: false]>,
#Tensor<
[2., 3., 4.]
[size: {3}, dtype: :float, device: :cpu, requires_grad: false]>,
#Tensor<
[5., 6., 7., 8., 9.]
[size: {5}, dtype: :float, device: :cpu, requires_grad: false]>
]
@spec transpose(ExTorch.Tensor.t(), integer(), integer()) :: ExTorch.Tensor.t()
Returns a tensor that is a transposed version of input. The given dimensions dim0 and dim1 are swapped.
Arguments
input (ExTorch.Tensor) - the input tensor.
dim0 (integer()) - the first dimension to transpose.
dim1 (integer()) - the second dimension to transpose.
Notes
- If input is a strided tensor, then the resulting out tensor shares its underlying storage with the input tensor, so changing the content of one would change the content of the other.
- If input is a sparse tensor, then the resulting out tensor does not share the underlying storage with the input tensor.
- If input is a sparse tensor with compressed layout (SparseCSR, SparseBSR, SparseCSC or SparseBSC), the arguments dim0 and dim1 must both be batch dimensions, or must both be sparse dimensions. The batch dimensions of a sparse tensor are the dimensions preceding the sparse dimensions.
Examples
iex> a = ExTorch.arange(6) |> ExTorch.reshape({2, 3})
#Tensor<
[[0.0000, 1.0000, 2.0000],
[3.0000, 4.0000, 5.0000]]
[
size: {2, 3},
dtype: :float,
device: :cpu,
requires_grad: false
]>
iex> ExTorch.transpose(a, 0, 1)
#Tensor<
[[0.0000, 3.0000],
[1.0000, 4.0000],
[2.0000, 5.0000]]
[
size: {3, 2},
dtype: :float,
device: :cpu,
requires_grad: false
]>
@spec unsqueeze( ExTorch.Tensor.t(), integer() ) :: ExTorch.Tensor.t()
Returns a new tensor with a dimension of size one inserted at the specified position.
Arguments
tensor: input tensor (ExTorch.Tensor)
dim: the index at which to insert the singleton dimension (integer())
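The resulting shape is plain arithmetic: a 1 is inserted at position dim, with negative dims counted from the end of the new shape. A hypothetical pure-Python sketch (not ExTorch code), matching the examples below:

```python
# Shape arithmetic for unsqueeze: insert a size-1 dimension at `dim` in a
# shape of length n, where valid dims range over [-(n+1), n].
# Illustrative only; not ExTorch code.
def unsqueeze_shape(shape, dim):
    dim = dim % (len(shape) + 1)  # normalize negative positions
    return shape[:dim] + (1,) + shape[dim:]

shape = (2, 2)
last = unsqueeze_shape(shape, -1)
mid = unsqueeze_shape(shape, 1)
front = unsqueeze_shape(shape, 0)
```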
Examples
iex> x = ExTorch.full({2, 2}, -2)
#Tensor<
[[-2., -2.],
[-2., -2.]]
[
size: {2, 2},
dtype: :double,
device: :cpu,
requires_grad: false
]>
iex> ExTorch.unsqueeze(x, -1)
#Tensor<
[[[-2.],
[-2.]],
[[-2.],
[-2.]]]
[
size: {2, 2, 1},
dtype: :double,
device: :cpu,
requires_grad: false
]>
iex> ExTorch.unsqueeze(x, 1)
#Tensor<
[[[-2., -2.]],
[[-2., -2.]]]
[
size: {2, 1, 2},
dtype: :double,
device: :cpu,
requires_grad: false
]>
iex> ExTorch.unsqueeze(x, 0)
#Tensor<
[[[-2., -2.],
[-2., -2.]]]
[
size: {1, 2, 2},
dtype: :double,
device: :cpu,
requires_grad: false
]>
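The shape arithmetic in the three examples above can be sketched as follows (a hypothetical pure-Python helper operating on shape tuples, not ExTorch code); a negative dim counts from the end, as shown with dim = -1:

```python
def unsqueeze_shape(shape, dim):
    """Return the shape produced by inserting a size-1 axis at position dim.
    Negative dims count from the end of the *new* shape."""
    ndim = len(shape)
    if dim < 0:
        dim += ndim + 1
    return shape[:dim] + (1,) + shape[dim:]

print(unsqueeze_shape((2, 2), -1))  # (2, 2, 1)
print(unsqueeze_shape((2, 2), 1))   # (2, 1, 2)
print(unsqueeze_shape((2, 2), 0))   # (1, 2, 2)
```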
@spec vstack([ExTorch.Tensor.t()] | tuple()) :: ExTorch.Tensor.t()
See ExTorch.vstack/2
Available signature calls:
vstack(tensors)
@spec vstack( [ExTorch.Tensor.t()] | tuple(), [{:out, ExTorch.Tensor.t() | nil}] ) :: ExTorch.Tensor.t()
@spec vstack([ExTorch.Tensor.t()] | tuple(), ExTorch.Tensor.t() | nil) :: ExTorch.Tensor.t()
Stack tensors in sequence vertically (row wise).
This is equivalent to concatenation along the first axis after all 1-D tensors have been reshaped to at least 2 dimensions.
Arguments
- tensors ([ExTorch.Tensor.t()] | tuple()) - sequence of tensors to concatenate.
Optional arguments
- out (ExTorch.Tensor | nil) - an optional pre-allocated tensor used to store the output result. Default: nil
Examples
iex> a = ExTorch.tensor([1, 2, 3])
iex> b = ExTorch.tensor([4, 5, 6])
iex> ExTorch.vstack({a, b})
#Tensor<
[[1, 2, 3],
[4, 5, 6]]
[size: {2, 3}, dtype: :byte, device: :cpu, requires_grad: false]>
iex> a = ExTorch.tensor([[1],[2],[3]])
#Tensor<
[[1],
[2],
[3]]
[size: {3, 1}, dtype: :byte, device: :cpu, requires_grad: false]>
iex> b = ExTorch.tensor([[4],[5],[6]])
#Tensor<
[[4],
[5],
[6]]
[size: {3, 1}, dtype: :byte, device: :cpu, requires_grad: false]>
iex> ExTorch.vstack([a, b])
#Tensor<
[[1],
[2],
[3],
[4],
[5],
[6]]
[size: {6, 1}, dtype: :byte, device: :cpu, requires_grad: false]>
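Both examples above follow the same rule: promote 1-D inputs to single-row 2-D tensors, then concatenate along the first axis. A rough pure-Python sketch over nested lists (illustrative only, not ExTorch code):

```python
def vstack(tensors):
    """Concatenate along the first axis; 1-D inputs become rows first."""
    rows = []
    for t in tensors:
        if t and not isinstance(t[0], list):  # 1-D -> promote to a 2-D row
            rows.append(list(t))
        else:
            rows.extend(t)
    return rows

print(vstack(([1, 2, 3], [4, 5, 6])))
# [[1, 2, 3], [4, 5, 6]]
print(vstack([[[1], [2], [3]], [[4], [5], [6]]]))
# [[1], [2], [3], [4], [5], [6]]
```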
Tensor indexing
@spec index( ExTorch.Tensor.t(), ExTorch.Index.t() ) :: ExTorch.Tensor.t()
Index a tensor given a list of integers, ranges, tensors, nil or
:ellipsis.
Arguments
- tensor: Input tensor (ExTorch.Tensor)
- indices: Indices to select (ExTorch.Index)
Examples
iex> a = ExTorch.arange(3 * 4 * 4) |> ExTorch.reshape({3, 4, 4})
#Tensor<
[[[ 0., 1., 2., 3.],
[ 4., 5., 6., 7.],
[ 8., 9., 10., 11.],
[12., 13., 14., 15.]],
[[16., 17., 18., 19.],
[20., 21., 22., 23.],
[24., 25., 26., 27.],
[28., 29., 30., 31.]],
[[32., 33., 34., 35.],
[36., 37., 38., 39.],
[40., 41., 42., 43.],
[44., 45., 46., 47.]]]
[
size: {3, 4, 4},
dtype: :double,
device: :cpu,
requires_grad: false
]>
# Use an integer index
iex> ExTorch.index(a, 0)
#Tensor<
[[ 0., 1., 2., 3.],
[ 4., 5., 6., 7.],
[ 8., 9., 10., 11.],
[12., 13., 14., 15.]]
[
size: {4, 4},
dtype: :double,
device: :cpu,
requires_grad: false
]>
# Use a slice index
iex> ExTorch.index(a, 0..2)
#Tensor<
[[[ 0., 1., 2., 3.],
[ 4., 5., 6., 7.],
[ 8., 9., 10., 11.],
[12., 13., 14., 15.]],
[[16., 17., 18., 19.],
[20., 21., 22., 23.],
[24., 25., 26., 27.],
[28., 29., 30., 31.]]]
[
size: {2, 4, 4},
dtype: :double,
device: :cpu,
requires_grad: false
]>
iex> ExTorch.index(a, ExTorch.slice(0, 1))
#Tensor<
[[[ 0., 1., 2., 3.],
[ 4., 5., 6., 7.],
[ 8., 9., 10., 11.],
[12., 13., 14., 15.]]]
[
size: {1, 4, 4},
dtype: :double,
device: :cpu,
requires_grad: false
]>
# Index multiple dimensions
iex> ExTorch.index(a, [:::, ExTorch.slice(0, 2), 0])
#Tensor<
[[ 0., 4.],
[16., 20.],
[32., 36.]]
[
size: {3, 2},
dtype: :double,
device: :cpu,
requires_grad: false
]>
Notes
For more information regarding the kind of accepted indices and their corresponding
behaviour, please see the ExTorch.Index documentation
@spec index_add( ExTorch.Tensor.t(), integer(), ExTorch.Tensor.t(), ExTorch.Tensor.t() ) :: ExTorch.Tensor.t()
Available signature calls:
index_add(input, dim, index, source)
@spec index_add( ExTorch.Tensor.t(), integer(), ExTorch.Tensor.t(), ExTorch.Tensor.t(), ExTorch.Scalar.t() ) :: ExTorch.Tensor.t()
@spec index_add( ExTorch.Tensor.t(), integer(), ExTorch.Tensor.t(), ExTorch.Tensor.t(), alpha: ExTorch.Scalar.t(), out: ExTorch.Tensor.t() | nil, inplace: boolean() ) :: ExTorch.Tensor.t()
Available signature calls:
index_add(input, dim, index, source, kwargs)index_add(input, dim, index, source, alpha)
@spec index_add( ExTorch.Tensor.t(), integer(), ExTorch.Tensor.t(), ExTorch.Tensor.t(), ExTorch.Scalar.t(), out: ExTorch.Tensor.t() | nil, inplace: boolean() ) :: ExTorch.Tensor.t()
@spec index_add( ExTorch.Tensor.t(), integer(), ExTorch.Tensor.t(), ExTorch.Tensor.t(), ExTorch.Scalar.t(), ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Available signature calls:
index_add(input, dim, index, source, alpha, out)index_add(input, dim, index, source, alpha, kwargs)
@spec index_add( ExTorch.Tensor.t(), integer(), ExTorch.Tensor.t(), ExTorch.Tensor.t(), ExTorch.Scalar.t(), ExTorch.Tensor.t() | nil, boolean() ) :: ExTorch.Tensor.t()
@spec index_add( ExTorch.Tensor.t(), integer(), ExTorch.Tensor.t(), ExTorch.Tensor.t(), ExTorch.Scalar.t(), ExTorch.Tensor.t() | nil, [{:inplace, boolean()}] ) :: ExTorch.Tensor.t()
Accumulate the elements of alpha times source into the input tensor by adding
to the indices in the order given in index.
For example, if dim == 0, index[i] == j, and alpha=-1, then the ith row of source is
subtracted from the jth row of input.
The dim-th dimension of source must have the same size as the length of index
(which must be a 1D tensor), and all other dimensions must match input, or an error
will be raised.
For a 3-D tensor the output is given as:
out[index[i], :, :] = input[index[i], :, :] + alpha * src[i, :, :] # if dim == 0
out[:, index[i], :] = input[:, index[i], :] + alpha * src[:, i, :] # if dim == 1
out[:, :, index[i]] = input[:, :, index[i]] + alpha * src[:, :, i] # if dim == 2
Arguments
- input (ExTorch.Tensor) - input tensor.
- dim (integer()) - dimension along which to index.
- index (ExTorch.Tensor) - indices of input to select from, its dtype must be :long.
- source (ExTorch.Tensor) - the tensor containing values to add.
Optional arguments
- alpha (ExTorch.Scalar) - the scalar multiplier for source. Default: 1
- out (ExTorch.Tensor | nil) - an optional pre-allocated tensor used to store the output result. Default: nil
- inplace (boolean) - if true, then the operation will be written to the input tensor (the out argument will be ignored). Else, it returns a new tensor or writes to the out argument (if not nil)
Examples
iex> x = ExTorch.ones({5, 3})
iex> t = ExTorch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype: :float)
iex> index = ExTorch.tensor([0, 4, 2], dtype: :long)
iex> ExTorch.index_add(x, 0, index, t)
#Tensor<
[[ 2., 3., 4.],
[ 1., 1., 1.],
[ 8., 9., 10.],
[ 1., 1., 1.],
[ 5., 6., 7.]]
[size: {5, 3}, dtype: :float, device: :cpu, requires_grad: false]>
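The dim == 0 case from the example can be sketched in pure Python (a hypothetical helper over nested lists, not the ExTorch implementation); note that duplicate indices accumulate, which is the key difference from index_copy:

```python
def index_add(input_rows, index, source_rows, alpha=1):
    """dim == 0 case: out[index[i]] += alpha * source[i],
    accumulating when index contains duplicates."""
    out = [row[:] for row in input_rows]
    for i, j in enumerate(index):
        out[j] = [a + alpha * b for a, b in zip(out[j], source_rows[i])]
    return out

x = [[1.0] * 3 for _ in range(5)]
t = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(index_add(x, [0, 4, 2], t))
# [[2.0, 3.0, 4.0], [1.0, 1.0, 1.0], [8.0, 9.0, 10.0],
#  [1.0, 1.0, 1.0], [5.0, 6.0, 7.0]]
```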
@spec index_copy( ExTorch.Tensor.t(), integer(), ExTorch.Tensor.t(), ExTorch.Tensor.t() ) :: ExTorch.Tensor.t()
Available signature calls:
index_copy(input, dim, index, source)
@spec index_copy( ExTorch.Tensor.t(), integer(), ExTorch.Tensor.t(), ExTorch.Tensor.t(), out: ExTorch.Tensor.t() | nil, inplace: boolean() ) :: ExTorch.Tensor.t()
@spec index_copy( ExTorch.Tensor.t(), integer(), ExTorch.Tensor.t(), ExTorch.Tensor.t(), ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Available signature calls:
index_copy(input, dim, index, source, out)index_copy(input, dim, index, source, kwargs)
@spec index_copy( ExTorch.Tensor.t(), integer(), ExTorch.Tensor.t(), ExTorch.Tensor.t(), ExTorch.Tensor.t() | nil, boolean() ) :: ExTorch.Tensor.t()
@spec index_copy( ExTorch.Tensor.t(), integer(), ExTorch.Tensor.t(), ExTorch.Tensor.t(), ExTorch.Tensor.t() | nil, [{:inplace, boolean()}] ) :: ExTorch.Tensor.t()
Copies the elements of source into the input tensor by selecting the indices
in the order given in index.
For example, if dim == 0 and index[i] == j, then the ith row of source is
copied to the jth row of input.
The dim-th dimension of source must have the same size as the length of index
(which must be a 1D tensor), and all other dimensions must match input, or an error will be raised.
Arguments
- input (ExTorch.Tensor) - input tensor.
- dim (integer()) - dimension along which to index.
- index (ExTorch.Tensor) - indices of input to select from, its dtype must be :long.
- source (ExTorch.Tensor) - the tensor containing values to copy.
Optional arguments
- out (ExTorch.Tensor | nil) - an optional pre-allocated tensor used to store the output result. Default: nil
- inplace (boolean) - if true, the input tensor is modified in place (the out argument is ignored). Else it will return a new tensor with the changes, or it will apply them to out. Default: false
Notes
If index contains duplicate entries, multiple elements from source will be copied to the same
index of input. The result is nondeterministic since it depends on which copy occurs last.
Examples
iex> x = ExTorch.zeros({5, 3})
iex> t = ExTorch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype: :float)
iex> index = ExTorch.tensor([0, 4, 2], dtype: :long)
iex> ExTorch.index_copy(x, 0, index, t)
#Tensor<
[[1.0000, 2.0000, 3.0000],
[0.0000, 0.0000, 0.0000],
[7.0000, 8.0000, 9.0000],
[0.0000, 0.0000, 0.0000],
[4.0000, 5.0000, 6.0000]]
[size: {5, 3}, dtype: :float, device: :cpu, requires_grad: false]>
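A pure-Python sketch of the dim == 0 case (hypothetical helper, not ExTorch code); unlike index_add, copying overwrites, so when index has duplicates the last copy wins nondeterministically in the real operation:

```python
def index_copy(input_rows, index, source_rows):
    """dim == 0 case: out[index[i]] = source[i]; later duplicates overwrite."""
    out = [row[:] for row in input_rows]
    for i, j in enumerate(index):
        out[j] = list(source_rows[i])
    return out

x = [[0.0] * 3 for _ in range(5)]
t = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(index_copy(x, [0, 4, 2], t))
# [[1, 2, 3], [0.0, 0.0, 0.0], [7, 8, 9], [0.0, 0.0, 0.0], [4, 5, 6]]
```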
@spec index_put( ExTorch.Tensor.t(), ExTorch.Index.t(), ExTorch.Tensor.t() | ExTorch.Scalar.scalar_or_list() ) :: ExTorch.Tensor.t()
Available signature calls:
index_put(tensor, indices, value)
@spec index_put( ExTorch.Tensor.t(), ExTorch.Index.t(), ExTorch.Tensor.t() | ExTorch.Scalar.scalar_or_list(), boolean() ) :: ExTorch.Tensor.t()
@spec index_put( ExTorch.Tensor.t(), ExTorch.Index.t(), ExTorch.Tensor.t() | ExTorch.Scalar.scalar_or_list(), [{:inplace, boolean()}] ) :: ExTorch.Tensor.t()
Assign a value into a tensor given a single or a sequence of indices.
Arguments
- tensor - Input tensor (ExTorch.Tensor)
- index - Indices to replace (ExTorch.Index)
- value - The value to assign into the tensor (ExTorch.Tensor | number() | list() | tuple())
Optional arguments
- inplace (boolean()) - If true, then the values will be replaced on the original tensor argument. Else, it will return a copy of tensor with the values replaced. Default: false
Examples
# Assign a particular value
iex> x = ExTorch.zeros({2, 3, 3})
#Tensor<
[[[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.]],
[[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.]]]
[
size: {2, 3, 3},
dtype: :double,
device: :cpu,
requires_grad: false
]>
iex> x = ExTorch.index_put(x, 0, -1)
#Tensor<
[[[-1., -1., -1.],
[-1., -1., -1.],
[-1., -1., -1.]],
[[ 0., 0., 0.],
[ 0., 0., 0.],
[ 0., 0., 0.]]]
[
size: {2, 3, 3},
dtype: :double,
device: :cpu,
requires_grad: false
]>
# Assign a value into a slice
iex> x = ExTorch.index_put(x, [0, ExTorch.slice(1), ExTorch.slice(1)], 0.3)
#Tensor<
[[[-1.0000, -1.0000, -1.0000],
[-1.0000, 0.3000, 0.3000],
[-1.0000, 0.3000, 0.3000]],
[[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000]]]
[
size: {2, 3, 3},
dtype: :double,
device: :cpu,
requires_grad: false
]>
# Assign a tensor into an index
iex> value = ExTorch.eye(3)
iex> x = ExTorch.index_put(x, 1, value)
#Tensor<
[[[-1.0000, -1.0000, -1.0000],
[-1.0000, 0.3000, 0.3000],
[-1.0000, 0.3000, 0.3000]],
[[ 1.0000, 0.0000, 0.0000],
[ 0.0000, 1.0000, 0.0000],
[ 0.0000, 0.0000, 1.0000]]]
[
size: {2, 3, 3},
dtype: :double,
device: :cpu,
requires_grad: false
]>
# Assign a list of numbers into an index (broadcastable)
iex> x = ExTorch.index_put(x, [:::, 1], [1, 2, 3])
#Tensor<
[[[-1.0000, -1.0000, -1.0000],
[ 1.0000, 2.0000, 3.0000],
[-1.0000, 0.3000, 0.3000]],
[[ 1.0000, 0.0000, 0.0000],
[ 1.0000, 2.0000, 3.0000],
[ 0.0000, 0.0000, 1.0000]]]
[
size: {2, 3, 3},
dtype: :double,
device: :cpu,
requires_grad: false
]>
@spec index_reduce( ExTorch.Tensor.t(), integer(), ExTorch.Tensor.t(), ExTorch.Tensor.t(), :prod | :mean | :amax | :amin ) :: ExTorch.Tensor.t()
Available signature calls:
index_reduce(self, dim, index, source, reduce)
@spec index_reduce( ExTorch.Tensor.t(), integer(), ExTorch.Tensor.t(), ExTorch.Tensor.t(), :prod | :mean | :amax | :amin, boolean() ) :: ExTorch.Tensor.t()
@spec index_reduce( ExTorch.Tensor.t(), integer(), ExTorch.Tensor.t(), ExTorch.Tensor.t(), :prod | :mean | :amax | :amin, include_self: boolean(), out: ExTorch.Tensor.t() | nil, inplace: boolean() ) :: ExTorch.Tensor.t()
Available signature calls:
index_reduce(self, dim, index, source, reduce, kwargs)index_reduce(self, dim, index, source, reduce, include_self)
@spec index_reduce( ExTorch.Tensor.t(), integer(), ExTorch.Tensor.t(), ExTorch.Tensor.t(), :prod | :mean | :amax | :amin, boolean(), out: ExTorch.Tensor.t() | nil, inplace: boolean() ) :: ExTorch.Tensor.t()
@spec index_reduce( ExTorch.Tensor.t(), integer(), ExTorch.Tensor.t(), ExTorch.Tensor.t(), :prod | :mean | :amax | :amin, boolean(), ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Available signature calls:
index_reduce(self, dim, index, source, reduce, include_self, out)index_reduce(self, dim, index, source, reduce, include_self, kwargs)
@spec index_reduce( ExTorch.Tensor.t(), integer(), ExTorch.Tensor.t(), ExTorch.Tensor.t(), :prod | :mean | :amax | :amin, boolean(), ExTorch.Tensor.t() | nil, boolean() ) :: ExTorch.Tensor.t()
@spec index_reduce( ExTorch.Tensor.t(), integer(), ExTorch.Tensor.t(), ExTorch.Tensor.t(), :prod | :mean | :amax | :amin, boolean(), ExTorch.Tensor.t() | nil, [{:inplace, boolean()}] ) :: ExTorch.Tensor.t()
Accumulate the elements of source into the self tensor by accumulating to the indices in
the order given in index using the reduction given by the reduce argument.
For example, if dim == 0, index[i] == j, reduce == :prod and include_self == true then
the ith row of source is multiplied by the jth row of self. If include_self = true,
the values in the self tensor are included in the reduction, otherwise, rows in the
self tensor that are accumulated to are treated as if they were filled with the
reduction identities.
The dim-th dimension of source must have the same size as the length of index
(which must be a 1D tensor), and all other dimensions must match self, or an error
will be raised.
For a 3-D tensor with reduce = :prod and include_self = true the output is given as:
self[index[i], :, :] *= src[i, :, :] # if dim == 0
self[:, index[i], :] *= src[:, i, :] # if dim == 1
self[:, :, index[i]] *= src[:, :, i] # if dim == 2
Arguments
- self (ExTorch.Tensor) - input tensor.
- dim (integer()) - dimension along which to index.
- index (ExTorch.Tensor) - indices of source to select from, its dtype must be :long.
- source (ExTorch.Tensor) - the tensor containing values to accumulate.
- reduce (:prod | :mean | :amax | :amin) - the reduction operation to apply.
Optional arguments
- include_self (boolean) - whether the elements from the self tensor are included in the reduction. Default: true
- out (ExTorch.Tensor | nil) - an optional pre-allocated tensor used to store the output result. Default: nil
- inplace (boolean) - if true, the self tensor is modified in place (the out argument is ignored). Else it will return a new tensor with the changes, or it will apply them to out. Default: false
Notes
- This operation may behave nondeterministically when given tensors on a CUDA device. See Reproducibility for more information.
- This function only supports floating point tensors.
- This function is in beta and may change in the near future.
Examples
iex> x = ExTorch.full({5, 3}, 2)
iex> t = ExTorch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]], dtype: :float)
iex> index = ExTorch.tensor([0, 4, 2, 0], dtype: :long)
#Tensor<
[0, 4, 2, 0]
[size: {4}, dtype: :long, device: :cpu, requires_grad: false]>
iex> ExTorch.index_reduce(x, 0, index, t, :prod)
#Tensor<
[[20., 44., 72.],
[ 2., 2., 2.],
[14., 16., 18.],
[ 2., 2., 2.],
[ 8., 10., 12.]]
[size: {5, 3}, dtype: :float, device: :cpu, requires_grad: false]>
iex> ExTorch.index_reduce(x, 0, index, t, :prod, include_self: false)
#Tensor<
[[10., 22., 36.],
[ 2., 2., 2.],
[ 7., 8., 9.],
[ 2., 2., 2.],
[ 4., 5., 6.]]
[size: {5, 3}, dtype: :float, device: :cpu, requires_grad: false]>
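The two example calls above differ only in whether the original row participates in the product. A pure-Python sketch of the dim == 0, reduce == :prod case (hypothetical helper, not ExTorch code); when include_self is false, indexed rows start from the reduction identity (1 for a product):

```python
def index_reduce_prod(self_rows, index, source_rows, include_self=True):
    """dim == 0, reduce == :prod. Rows touched by index start from the
    identity (1) when include_self is false, then accumulate products."""
    out = [row[:] for row in self_rows]
    if not include_self:
        for j in set(index):
            out[j] = [1.0] * len(out[j])
    for i, j in enumerate(index):
        out[j] = [a * b for a, b in zip(out[j], source_rows[i])]
    return out

x = [[2.0] * 3 for _ in range(5)]
t = [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]
idx = [0, 4, 2, 0]
print(index_reduce_prod(x, idx, t)[0])                      # [20.0, 44.0, 72.0]
print(index_reduce_prod(x, idx, t, include_self=False)[0])  # [10.0, 22.0, 36.0]
```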
@spec index_select( ExTorch.Tensor.t(), integer(), ExTorch.Tensor.t() ) :: ExTorch.Tensor.t()
Available signature calls:
index_select(input, dim, index)
@spec index_select( ExTorch.Tensor.t(), integer(), ExTorch.Tensor.t(), [{:out, ExTorch.Tensor.t() | nil}] ) :: ExTorch.Tensor.t()
@spec index_select( ExTorch.Tensor.t(), integer(), ExTorch.Tensor.t(), ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Returns a new tensor which indexes the input tensor along dimension dim
using the entries in index (whose dtype is :long).
The returned tensor has the same number of dimensions as the original tensor
(input). The dim-th dimension has the same size as the length of index;
other dimensions have the same size as in the original tensor.
Arguments
- input (ExTorch.Tensor) - input tensor.
- dim (integer()) - dimension along which to index.
- index (ExTorch.Tensor) - indices of input to select from, its dtype must be :long.
Optional arguments
- out (ExTorch.Tensor | nil) - an optional pre-allocated tensor used to store the output result. Default: nil
Notes
- The returned tensor does not use the same storage as the original tensor.
- If out has a different shape than expected, we silently change it to the correct shape, reallocating the underlying storage if necessary.
Examples
iex> x = ExTorch.randn({3, 4})
#Tensor<
[[ 2.3564, 1.1268, -0.3407, -0.0561],
[ 0.6479, -2.3011, -1.6695, 0.5547],
[ 1.3554, 3.6460, 2.5569, -0.1892]]
[size: {3, 4}, dtype: :float, device: :cpu, requires_grad: false]>
iex> indices = ExTorch.tensor([0, 2], dtype: :long)
iex> ExTorch.index_select(x, 0, indices)
#Tensor<
[[ 2.3564, 1.1268, -0.3407, -0.0561],
[ 1.3554, 3.6460, 2.5569, -0.1892]]
[size: {2, 4}, dtype: :float, device: :cpu, requires_grad: false]>
iex> ExTorch.index_select(x, 1, indices)
#Tensor<
[[ 2.3564, -0.3407],
[ 0.6479, -1.6695],
[ 1.3554, 2.5569]]
[size: {3, 2}, dtype: :float, device: :cpu, requires_grad: false]>
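Selecting whole rows (dim 0) versus whole columns (dim 1) can be sketched in pure Python for the 2-D case (a hypothetical helper, not ExTorch code):

```python
def index_select(rows, dim, index):
    """Pick whole slices of a 2-D nested list along dim 0 (rows) or 1 (columns)."""
    if dim == 0:
        return [list(rows[i]) for i in index]
    return [[row[i] for i in index] for row in rows]

x = [[1, 2, 3, 4],
     [5, 6, 7, 8],
     [9, 10, 11, 12]]
print(index_select(x, 0, [0, 2]))  # [[1, 2, 3, 4], [9, 10, 11, 12]]
print(index_select(x, 1, [0, 2]))  # [[1, 3], [5, 7], [9, 11]]
```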
@spec masked_select(ExTorch.Tensor.t(), ExTorch.Tensor.t()) :: ExTorch.Tensor.t()
Available signature calls:
masked_select(input, mask)
@spec masked_select(ExTorch.Tensor.t(), ExTorch.Tensor.t(), [ {:out, ExTorch.Tensor.t() | nil} ]) :: ExTorch.Tensor.t()
@spec masked_select(ExTorch.Tensor.t(), ExTorch.Tensor.t(), ExTorch.Tensor.t() | nil) :: ExTorch.Tensor.t()
Returns a new 1-D tensor which indexes the input tensor according to the
boolean mask mask which has dtype :bool.
The shapes of the mask tensor and the input tensor don’t need to match,
but they must be broadcastable.
Arguments
- input (ExTorch.Tensor) - input tensor.
- mask (ExTorch.Tensor) - the tensor containing the binary mask to index with. Its dtype must be :bool
Optional arguments
- out (ExTorch.Tensor | nil) - an optional pre-allocated tensor used to store the output result. Default: nil
Notes
The returned tensor does not use the same storage as the original tensor.
Examples
iex> x = ExTorch.randn({4, 5})
#Tensor<
[[ 1.6055, -0.1662, -0.6764, -0.8615, -2.1960],
[ 0.8188, -1.1111, -0.2659, 1.4720, -0.0226],
[-0.7065, -1.0628, -0.7172, -1.0006, 0.3091],
[-0.8901, -0.6624, -0.4590, 0.0821, -0.9716]]
[size: {4, 5}, dtype: :float, device: :cpu, requires_grad: false]>
iex> mask = ExTorch.ge(x, 0)
#Tensor<
[[ true, false, false, false, false],
[ true, false, false, true, false],
[false, false, false, false, true],
[false, false, false, true, false]]
[size: {4, 5}, dtype: :bool, device: :cpu, requires_grad: false]>
iex> ExTorch.masked_select(x, mask)
#Tensor<
[1.6055, 0.8188, 1.4720, 0.3091, 0.0821]
[size: {5}, dtype: :float, device: :cpu, requires_grad: false]>
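The result order in the example follows row-major traversal of the input, which a pure-Python sketch makes explicit for the same-shape 2-D case (hypothetical helper, not ExTorch code; the real operation also broadcasts the mask):

```python
def masked_select(rows, mask_rows):
    """Flatten input in row-major order, keeping entries where mask is true."""
    return [v for row, mrow in zip(rows, mask_rows)
              for v, m in zip(row, mrow) if m]

x = [[1.6, -0.2], [0.8, -1.1]]
mask = [[True, False], [True, False]]
print(masked_select(x, mask))  # [1.6, 0.8]
```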
@spec select(ExTorch.Tensor.t(), integer(), integer()) :: ExTorch.Tensor.t()
Slices the input tensor along the selected dimension at the given index.
This function returns a view of the original tensor with the given dimension removed.
Arguments
- input (ExTorch.Tensor) - the input tensor.
- dim (integer) - the dimension to slice.
- index (integer) - the index to select.
Notes
ExTorch.select/3 is equivalent to slicing. For example, ExTorch.select(tensor, 0, index)
is equivalent to tensor[index] and ExTorch.select(tensor, 2, index) is equivalent to
tensor[{:::, :::, index}].
Examples
iex> a = ExTorch.arange(2 * 3 * 4) |> ExTorch.reshape({2, 3, 4})
#Tensor<
[[[ 0.0000, 1.0000, 2.0000, 3.0000],
[ 4.0000, 5.0000, 6.0000, 7.0000],
[ 8.0000, 9.0000, 10.0000, 11.0000]],
[[12.0000, 13.0000, 14.0000, 15.0000],
[16.0000, 17.0000, 18.0000, 19.0000],
[20.0000, 21.0000, 22.0000, 23.0000]]]
[size: {2, 3, 4}, dtype: :float, device: :cpu, requires_grad: false]>
iex> ExTorch.select(a, 0, 1)
#Tensor<
[[12., 13., 14., 15.],
[16., 17., 18., 19.],
[20., 21., 22., 23.]]
[size: {3, 4}, dtype: :float, device: :cpu, requires_grad: false]>
iex> ExTorch.select(a, 1, 0)
#Tensor<
[[ 0.0000, 1.0000, 2.0000, 3.0000],
[12.0000, 13.0000, 14.0000, 15.0000]]
[size: {2, 4}, dtype: :float, device: :cpu, requires_grad: false]>
iex> ExTorch.select(a, 2, 2)
#Tensor<
[[ 2., 6., 10.],
[14., 18., 22.]]
[size: {2, 3}, dtype: :float, device: :cpu, requires_grad: false]>
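For a 3-D input, fixing one dimension at an index and dropping it can be sketched in pure Python (hypothetical helper, not ExTorch code; the real function returns a view, not a copy):

```python
def select(t, dim, index):
    """Drop one dimension of a 3-D nested list by fixing it at index."""
    if dim == 0:
        return t[index]
    if dim == 1:
        return [plane[index] for plane in t]
    return [[row[index] for row in plane] for plane in t]

a = [[[0, 1], [2, 3]],
     [[4, 5], [6, 7]]]
print(select(a, 0, 1))  # [[4, 5], [6, 7]]
print(select(a, 1, 0))  # [[0, 1], [4, 5]]
print(select(a, 2, 1))  # [[1, 3], [5, 7]]
```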
@spec slice(integer() | nil, integer() | nil, integer() | nil) :: ExTorch.Index.Slice.t()
Create a slice to index a tensor.
Arguments
- start: The starting slice value. Default: nil
- stop: The non-inclusive end of the slice. Default: nil
- step: The step between values. Default: nil
Returns
slice: An ExTorch.Index.Slice struct that represents the slice.
Notes
An empty slice will represent the "take-all axis", represented by ":" in Python.
Pointwise math operations
@spec imag(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()
Returns a new tensor containing imaginary values of the input tensor.
The returned tensor and input share the same underlying storage.
Arguments
input: The input tensor.
Examples
iex> x = ExTorch.rand({3}, dtype: :complex64)
#Tensor<
[0.8235+0.9395j, 0.9912+0.4506j, 0.5164+0.3070j]
[
size: {3},
dtype: :complex_float,
device: :cpu,
requires_grad: false
]>
iex> ExTorch.imag(x)
#Tensor<
[0.9395, 0.4506, 0.3070]
[
size: {3},
dtype: :float,
device: :cpu,
requires_grad: false
]>
@spec real(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()
Returns a new tensor containing real values of the input tensor.
The returned tensor and input share the same underlying storage.
Arguments
input: The input tensor.
Examples
iex> x = ExTorch.rand({3}, dtype: :complex64)
#Tensor<
[0.8235+0.9395j, 0.9912+0.4506j, 0.5164+0.3070j]
[
size: {3},
dtype: :complex_float,
device: :cpu,
requires_grad: false
]>
iex> ExTorch.real(x)
#Tensor<
[0.8235, 0.9912, 0.5164]
[
size: {3},
dtype: :float,
device: :cpu,
requires_grad: false
]>
Reduction operations
@spec all(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()
See ExTorch.all/4
Available signature calls:
all(input)
@spec all(ExTorch.Tensor.t(), nil | integer()) :: ExTorch.Tensor.t()
@spec all(ExTorch.Tensor.t(), dim: nil | integer(), keepdim: boolean(), out: nil | ExTorch.Tensor.t() ) :: ExTorch.Tensor.t()
See ExTorch.all/4
Available signature calls:
all(input, kwargs)all(input, dim)
@spec all(ExTorch.Tensor.t(), nil | integer(), boolean()) :: ExTorch.Tensor.t()
@spec all(ExTorch.Tensor.t(), nil | integer(), keepdim: boolean(), out: nil | ExTorch.Tensor.t() ) :: ExTorch.Tensor.t()
See ExTorch.all/4
Available signature calls:
all(input, dim, kwargs)all(input, dim, keepdim)
@spec all(ExTorch.Tensor.t(), nil | integer(), boolean(), [ {:out, nil | ExTorch.Tensor.t()} ]) :: ExTorch.Tensor.t()
@spec all(ExTorch.Tensor.t(), nil | integer(), boolean(), nil | ExTorch.Tensor.t()) :: ExTorch.Tensor.t()
Check if all elements in input (or all elements along a dimension) evaluate to true.
If dim is nil, tests if all elements in input evaluate to true.
Else, for each row of input in the given dimension dim, returns true
if all elements in the row evaluate to true and false otherwise.
- The keepdim and out options only work if dim is not nil.
- If keepdim is true, the output tensor is of the same size as input, except in the dimension dim where it is of size 1. Otherwise, dim is squeezed (see ExTorch.squeeze), resulting in the output tensor having 1 fewer dimension than input.
Arguments
input- the input tensor. (ExTorch.Tensor)
Optional arguments
- dim - the dimension to reduce. (nil | integer()). Default: nil
- keepdim - whether the output tensor has dim retained or not. (boolean()). Default: false
- out - the optional output pre-allocated tensor. (ExTorch.Tensor | nil). Default: nil
Examples
# Find if all elements in a tensor are true
iex> a = ExTorch.tensor([[true, true, true], [true, true, true]])
iex> ExTorch.all(a)
#Tensor<
true
[size: {}, dtype: :bool, device: :cpu, requires_grad: false]>
# Find if all elements (per dimension) are true
iex> b = ExTorch.empty({3, 3}, dtype: :bool)
#Tensor<
[[false, true, false],
[ true, true, true],
[false, false, false]]
[
size: {3, 3},
dtype: :bool,
device: :cpu,
requires_grad: false
]>
iex> ExTorch.all(b, -1)
#Tensor<
[false, true, false]
[
size: {3},
dtype: :bool,
device: :cpu,
requires_grad: false
]>
# Preserve tensor dimensions
iex> ExTorch.all(b, -1, keepdim: true)
#Tensor<
[[false],
[ true],
[false]]
[
size: {3, 1},
dtype: :bool,
device: :cpu,
requires_grad: false
]>
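Both modes shown above (full reduction and per-row reduction with dim = -1) can be sketched in pure Python for a 2-D input (hypothetical helper, not ExTorch code):

```python
def all_reduce(rows, dim=None):
    """all() over every element when dim is None (nil),
    or per row for dim == -1 on a 2-D input."""
    if dim is None:
        return all(v for row in rows for v in row)
    return [all(row) for row in rows]

b = [[False, True, False],
     [True, True, True],
     [False, False, False]]
print(all_reduce(b))      # False
print(all_reduce(b, -1))  # [False, True, False]
```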
@spec amax(ExTorch.Tensor.t(), nil | integer() | tuple()) :: ExTorch.Tensor.t()
See ExTorch.amax/4
Available signature calls:
amax(input, dim)
@spec amax(ExTorch.Tensor.t(), nil | integer() | tuple(), boolean()) :: ExTorch.Tensor.t()
@spec amax(ExTorch.Tensor.t(), nil | integer() | tuple(), keepdim: boolean(), out: ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
See ExTorch.amax/4
Available signature calls:
amax(input, dim, kwargs)amax(input, dim, keepdim)
@spec amax(ExTorch.Tensor.t(), nil | integer() | tuple(), boolean(), [ {:out, ExTorch.Tensor.t() | nil} ]) :: ExTorch.Tensor.t()
@spec amax( ExTorch.Tensor.t(), nil | integer() | tuple(), boolean(), ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Returns the maximum value of each slice of the input tensor in the given dimension(s) dim.
If keepdim is true, the output tensor is of the same size as input except in the dimension(s)
dim where it is of size 1. Otherwise, dim is squeezed (see ExTorch.squeeze), resulting in
the output tensor having 1 (or length(dim)) fewer dimension(s).
Arguments
- input (ExTorch.Tensor) - the input tensor.
- dim (nil | integer() | tuple()) - the dimension(s) to reduce. If nil, then it reduces all the dimensions.
Optional arguments
- keepdim (boolean()) - whether the output tensor has dim retained or not. Default: false
- out (ExTorch.Tensor | nil) - the optional output pre-allocated tensor. Default: nil
Notes
The difference between ExTorch.max/ExTorch.min and ExTorch.amax/ExTorch.amin is:
- ExTorch.amax/ExTorch.amin supports reducing on multiple dimensions
- ExTorch.amax/ExTorch.amin does not return indices
- ExTorch.amax/ExTorch.amin evenly distributes gradient between equal values, while max(dim)/min(dim) propagates gradient only to a single index in the source tensor.
Examples
iex> a = ExTorch.randn({4, 4})
#Tensor<
[[ 1.4814, 0.1511, -1.9243, 0.6649],
[ 0.1308, 0.5038, -0.0844, -0.8609],
[-0.9535, 0.1651, -0.5081, 0.7449],
[-1.5848, 0.1389, -0.5299, -0.0702]]
[
size: {4, 4},
dtype: :float,
device: :cpu,
requires_grad: false
]>
# Get the maximum values alongside the last dimension
iex> ExTorch.amax(a, -1)
#Tensor<
[1.4814, 0.5038, 0.7449, 0.1389]
[
size: {4},
dtype: :float,
device: :cpu,
requires_grad: false
]>
# Get the maximum value on all dimensions
iex> ExTorch.amax(a, {0, 1})
#Tensor<
1.4814
[size: {}, dtype: :float, device: :cpu, requires_grad: false]>
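For a 2-D input, the per-row reduction (dim = -1) and the full reduction over all dimensions can be sketched in pure Python (hypothetical helper, not ExTorch code):

```python
def amax_last_dim(rows):
    """Maximum over the last dimension of a 2-D nested list (dim = -1)."""
    return [max(row) for row in rows]

a = [[1.4, 0.1, -1.9], [0.1, 0.5, -0.1]]
print(amax_last_dim(a))  # [1.4, 0.5]
# Reducing over all dimensions, as with dim = {0, 1}:
print(max(max(row) for row in a))  # 1.4
```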
@spec amin(ExTorch.Tensor.t(), nil | integer() | tuple()) :: ExTorch.Tensor.t()
See ExTorch.amin/4
Available signature calls:
amin(input, dim)
@spec amin(ExTorch.Tensor.t(), nil | integer() | tuple(), boolean()) :: ExTorch.Tensor.t()
@spec amin(ExTorch.Tensor.t(), nil | integer() | tuple(), keepdim: boolean(), out: ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
See ExTorch.amin/4
Available signature calls:
amin(input, dim, kwargs)amin(input, dim, keepdim)
@spec amin(ExTorch.Tensor.t(), nil | integer() | tuple(), boolean(), [ {:out, ExTorch.Tensor.t() | nil} ]) :: ExTorch.Tensor.t()
@spec amin( ExTorch.Tensor.t(), nil | integer() | tuple(), boolean(), ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Returns the minimum value of each slice of the input tensor in the given dimension(s) dim.
If keepdim is true, the output tensor is of the same size as input except in the dimension(s)
dim where it is of size 1. Otherwise, dim is squeezed (see ExTorch.squeeze), resulting in
the output tensor having 1 (or length(dim)) fewer dimension(s).
Arguments
- input (ExTorch.Tensor) - the input tensor.
- dim (nil | integer() | tuple()) - the dimension(s) to reduce. If nil, then it reduces all the dimensions.
Optional arguments
- keepdim (boolean()) - whether the output tensor has dim retained or not. Default: false
- out (ExTorch.Tensor | nil) - the optional output pre-allocated tensor. Default: nil
Notes
The difference between ExTorch.max/ExTorch.min and ExTorch.amax/ExTorch.amin is:
- ExTorch.amax/ExTorch.amin supports reducing on multiple dimensions
- ExTorch.amax/ExTorch.amin does not return indices
- ExTorch.amax/ExTorch.amin evenly distributes gradient between equal values, while max(dim)/min(dim) propagates gradient only to a single index in the source tensor.
Examples
iex> a = ExTorch.randn({4, 4})
#Tensor<
[[ 0.4138, 0.9993, 0.1177, -0.0021],
[ 0.0340, -0.7703, 1.5916, -0.2477],
[ 0.4927, 0.7762, -0.9214, -0.3303],
[-0.4098, -0.1762, -1.4085, -1.4918]]
[
size: {4, 4},
dtype: :float,
device: :cpu,
requires_grad: false
]>
# Get the minimum values alongside the last dimension
iex> ExTorch.amin(a, -1)
#Tensor<
[-0.0021, -0.7703, -0.9214, -1.4918]
[
size: {4},
dtype: :float,
device: :cpu,
requires_grad: false
]>
# Get the minimum value on all dimensions
iex> ExTorch.amin(a, {0, 1})
#Tensor<
-1.4918
[size: {}, dtype: :float, device: :cpu, requires_grad: false]>
@spec aminmax(ExTorch.Tensor.t()) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
Available signature calls:
aminmax(input)
@spec aminmax( ExTorch.Tensor.t(), integer() | nil ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec aminmax(ExTorch.Tensor.t(), dim: integer() | nil, keepdim: boolean(), out: {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
Available signature calls:
aminmax(input, kwargs)aminmax(input, dim)
@spec aminmax( ExTorch.Tensor.t(), integer() | nil, boolean() ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec aminmax( ExTorch.Tensor.t(), integer() | nil, keepdim: boolean(), out: {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
Available signature calls:
aminmax(input, dim, kwargs)aminmax(input, dim, keepdim)
@spec aminmax( ExTorch.Tensor.t(), integer() | nil, boolean(), [{:out, {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil}] ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec aminmax( ExTorch.Tensor.t(), integer() | nil, boolean(), {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
Computes the minimum and maximum values of the input tensor.
It will return a tuple {min, max} containing the minimum and maximum values, respectively.
Arguments
input(ExTorch.Tensor) - the input tensor.
Optional arguments
- dim (nil | integer()) - the dimension to reduce. If nil, then it computes the values over the entire input tensor. Default: nil
- keepdim (boolean()) - whether the output tensors have dim retained or not. Default: false
- out ({ExTorch.Tensor, ExTorch.Tensor} | nil) - the optional output pre-allocated tensors in a tuple. Default: nil
Examples
iex> a = ExTorch.randn({4, 4})
#Tensor<
[[-0.7684, 0.8360, 0.1960, -0.7748],
[ 0.9795, -0.3725, 0.1304, -0.3627],
[-0.6206, 0.1624, 0.8514, -1.2361],
[-1.5297, -0.6418, -0.8179, 1.7531]]
[
size: {4, 4},
dtype: :float,
device: :cpu,
requires_grad: false
]>
# Find the minimum and maximum values on the entire input
iex> {min, max} = ExTorch.aminmax(a)
iex> min
#Tensor<
-1.5297
[size: {}, dtype: :float, device: :cpu, requires_grad: false]>
iex> max
#Tensor<
1.7531
[size: {}, dtype: :float, device: :cpu, requires_grad: false]>
# Find the minimum and maximum values alongside the last dimension
iex> {min, max} = ExTorch.aminmax(a, -1, keepdim: true)
iex> min
#Tensor<
[[-0.7748],
[-0.3725],
[-1.2361],
[-1.5297]]
[
size: {4, 1},
dtype: :float,
device: :cpu,
requires_grad: false
]>
iex> max
#Tensor<
[[0.8360],
[0.9795],
[0.8514],
[1.7531]]
[
size: {4, 1},
dtype: :float,
device: :cpu,
requires_grad: false
]>
@spec any(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()
See ExTorch.any/4
Available signature calls:
any(input)
@spec any(ExTorch.Tensor.t(), nil | integer()) :: ExTorch.Tensor.t()
@spec any(ExTorch.Tensor.t(), dim: nil | integer(), keepdim: boolean(), out: nil | ExTorch.Tensor.t() ) :: ExTorch.Tensor.t()
See ExTorch.any/4
Available signature calls:
any(input, kwargs)
any(input, dim)
@spec any(ExTorch.Tensor.t(), nil | integer(), boolean()) :: ExTorch.Tensor.t()
@spec any(ExTorch.Tensor.t(), nil | integer(), keepdim: boolean(), out: nil | ExTorch.Tensor.t() ) :: ExTorch.Tensor.t()
See ExTorch.any/4
Available signature calls:
any(input, dim, kwargs)
any(input, dim, keepdim)
@spec any(ExTorch.Tensor.t(), nil | integer(), boolean(), [ {:out, nil | ExTorch.Tensor.t()} ]) :: ExTorch.Tensor.t()
@spec any(ExTorch.Tensor.t(), nil | integer(), boolean(), nil | ExTorch.Tensor.t()) :: ExTorch.Tensor.t()
Check if at least one element (or element in a dimension) in input evaluates to true.
If dim is nil, tests if at least one element in input evaluates to true.
Else, for each row of input in the given dimension dim, returns true
if at least one element in the row evaluates to true and false otherwise.
- The keepdim and out options only work if dim is not nil.
- If keepdim is true, the output tensor is of the same size as input, except in the dimension dim where it is of size 1. Otherwise, dim is squeezed (see ExTorch.squeeze), resulting in the output tensor having 1 fewer dimension than input.
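The two reduction modes can be sketched in plain Python (illustrative only; not the actual ExTorch implementation, and the `any_reduce` helper name is ours):

```python
def any_reduce(rows, dim=None):
    # dim=None mirrors dim: nil — reduce the whole input to one
    # boolean; dim=-1 reduces each row to a single boolean.
    if dim is None:
        return any(x for row in rows for x in row)
    return [any(row) for row in rows]
```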
Arguments
input (ExTorch.Tensor) - the input tensor.
Optional arguments
dim (nil | integer()) - the dimension to reduce. Default: nil
keepdim (boolean()) - whether the output tensor has dim retained or not. Default: false
out (ExTorch.Tensor | nil) - the optional output pre-allocated tensor. Default: nil
Examples
# Find if any element in a tensor is true
iex> a = ExTorch.tensor([[true, false, true], [false, true, true]])
iex> ExTorch.any(a)
#Tensor<
true
[size: {}, dtype: :bool, device: :cpu, requires_grad: false]>
# Find if any element (per dimension) is true
iex> b = ExTorch.empty({3, 3}, dtype: :bool)
#Tensor<
[[false, true, false],
[ true, true, true],
[false, false, false]]
[
size: {3, 3},
dtype: :bool,
device: :cpu,
requires_grad: false
]>
iex> ExTorch.any(b, -1)
#Tensor<
[ true, true, false]
[
size: {3},
dtype: :bool,
device: :cpu,
requires_grad: false
]>
# Preserve tensor dimensions
iex> ExTorch.any(b, -1, keepdim: true)
#Tensor<
[[ true],
[ true],
[false]]
[
size: {3, 1},
dtype: :bool,
device: :cpu,
requires_grad: false
]>
@spec argmax(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()
See ExTorch.argmax/3
Available signature calls:
argmax(input)
@spec argmax(ExTorch.Tensor.t(), integer() | nil) :: ExTorch.Tensor.t()
@spec argmax(ExTorch.Tensor.t(), dim: integer() | nil, keepdim: boolean()) :: ExTorch.Tensor.t()
See ExTorch.argmax/3
Available signature calls:
argmax(input, kwargs)
argmax(input, dim)
@spec argmax(ExTorch.Tensor.t(), integer() | nil, boolean()) :: ExTorch.Tensor.t()
@spec argmax(ExTorch.Tensor.t(), integer() | nil, [{:keepdim, boolean()}]) :: ExTorch.Tensor.t()
Returns the indices of the maximum value of all elements (or elements in a dimension) in the input tensor.
- If dim is nil, it will return the index of the maximum element over the entire input tensor; else it will return the indices of the maximum values along the specified dimension.
Arguments
input (ExTorch.Tensor) - the input tensor.
Optional arguments
dim (nil | integer()) - the dimension to reduce. Default: nil
keepdim (boolean()) - whether the output tensor has dim retained or not. Default: false
Notes
If there are multiple maximal values then the indices of the first maximal value are returned.
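The first-occurrence tie-breaking described in the note can be sketched in plain Python (illustrative only; not the ExTorch implementation):

```python
def argmax(xs):
    # Index of the maximum value; ties resolve to the first
    # maximal value, matching the note above.
    best = 0
    for i, x in enumerate(xs):
        if x > xs[best]:
            best = i
    return best
```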
Examples
iex> a = ExTorch.randn({4, 4})
#Tensor<
[[ 1.2023, -0.1142, 0.5077, 1.2127],
[ 0.5873, -0.7416, -0.0758, 0.0578],
[-0.8066, 1.7030, 0.2894, 0.0539],
[ 0.2353, 0.4396, -0.1846, -0.7395]]
[
size: {4, 4},
dtype: :float,
device: :cpu,
requires_grad: false
]>
# Get the overall maximum index.
iex> ExTorch.argmax(a)
#Tensor<
9
[size: {}, dtype: :long, device: :cpu, requires_grad: false]>
# Get the maximum index on the last dimension.
iex> ExTorch.argmax(a, -1)
#Tensor<
[3, 0, 1, 1]
[
size: {4},
dtype: :long,
device: :cpu,
requires_grad: false
]>
# Keep the reduced dimension on the output
iex> ExTorch.argmax(a, 0, keepdim: true)
#Tensor<
[[0, 2, 0, 0]]
[
size: {1, 4},
dtype: :long,
device: :cpu,
requires_grad: false
]>
@spec argmin(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()
See ExTorch.argmin/3
Available signature calls:
argmin(input)
@spec argmin(ExTorch.Tensor.t(), integer() | nil) :: ExTorch.Tensor.t()
@spec argmin(ExTorch.Tensor.t(), dim: integer() | nil, keepdim: boolean()) :: ExTorch.Tensor.t()
See ExTorch.argmin/3
Available signature calls:
argmin(input, kwargs)
argmin(input, dim)
@spec argmin(ExTorch.Tensor.t(), integer() | nil, boolean()) :: ExTorch.Tensor.t()
@spec argmin(ExTorch.Tensor.t(), integer() | nil, [{:keepdim, boolean()}]) :: ExTorch.Tensor.t()
Returns the indices of the minimum value of all elements (or elements in a dimension) in the input tensor.
- If dim is nil, it will return the index of the minimum element over the entire input tensor; else it will return the indices of the minimum values along the specified dimension.
Arguments
input (ExTorch.Tensor) - the input tensor.
Optional arguments
dim (nil | integer()) - the dimension to reduce. Default: nil
keepdim (boolean()) - whether the output tensor has dim retained or not. Default: false
Notes
If there are multiple minimal values then the indices of the first minimal value are returned.
Examples
iex> a = ExTorch.randn({4, 4})
#Tensor<
[[-0.6192, -0.4204, 0.1524, -0.1544],
[ 1.4040, 1.0165, 1.6355, 0.6480],
[-0.6566, 1.0730, -0.1548, -0.2488],
[-1.0406, 0.0883, 1.0485, -0.3025]]
[
size: {4, 4},
dtype: :float,
device: :cpu,
requires_grad: false
]>
# Get the overall minimum index.
iex> ExTorch.argmin(a)
#Tensor<
12
[size: {}, dtype: :long, device: :cpu, requires_grad: false]>
# Get the minimum index on the last dimension.
iex> ExTorch.argmin(a, -1)
#Tensor<
[0, 3, 0, 0]
[
size: {4},
dtype: :long,
device: :cpu,
requires_grad: false
]>
# Keep the reduced dimension on the output
iex> ExTorch.argmin(a, 0, keepdim: true)
#Tensor<
[[3, 0, 2, 3]]
[
size: {1, 4},
dtype: :long,
device: :cpu,
requires_grad: false
]>
@spec count_nonzero(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()
Available signature calls:
count_nonzero(input)
@spec count_nonzero(ExTorch.Tensor.t(), integer() | tuple() | nil) :: ExTorch.Tensor.t()
@spec count_nonzero( ExTorch.Tensor.t(), [{:dim, integer() | tuple() | nil}] ) :: ExTorch.Tensor.t()
Counts the number of non-zero values in the tensor input along the given dim.
If no dim is specified then all non-zeros in the tensor are counted.
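Both counting modes can be sketched in plain Python for a 2-D input (illustrative only; not the ExTorch implementation, and the helper name is ours):

```python
def count_nonzero(rows, dim=None):
    # dim=None counts non-zeros over the whole input; dim=0
    # reduces over rows, yielding one count per column.
    if dim is None:
        return sum(1 for row in rows for x in row if x != 0)
    return [sum(1 for row in rows if row[c] != 0)
            for c in range(len(rows[0]))]
```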
Arguments
input (ExTorch.Tensor) - the input tensor.
Optional arguments
dim (integer | tuple | nil) - the dimension or dimensions to reduce. If nil, all dimensions are reduced. Default: nil
Examples
iex> x = ExTorch.zeros({3, 3})
iex> x = ExTorch.index_put(x, ExTorch.gt(ExTorch.randn({3, 3}), 0.5), 1)
#Tensor<
[[0.0000, 0.0000, 1.0000],
[0.0000, 0.0000, 0.0000],
[1.0000, 0.0000, 0.0000]]
[
size: {3, 3},
dtype: :float,
device: :cpu,
requires_grad: false
]>
# Count overall nonzero elements
iex> ExTorch.count_nonzero(x)
#Tensor<
2
[size: {}, dtype: :long, device: :cpu, requires_grad: false]>
# Count nonzero elements in the first dimension
iex> ExTorch.count_nonzero(x, 0)
#Tensor<
[1, 0, 1]
[size: {3}, dtype: :long, device: :cpu, requires_grad: false]>
@spec dist(ExTorch.Tensor.t(), ExTorch.Tensor.t()) :: ExTorch.Tensor.t()
See ExTorch.dist/3
Available signature calls:
dist(input, other)
@spec dist(ExTorch.Tensor.t(), ExTorch.Tensor.t(), [{:p, number()}]) :: ExTorch.Tensor.t()
@spec dist(ExTorch.Tensor.t(), ExTorch.Tensor.t(), number()) :: ExTorch.Tensor.t()
Returns the p-norm of (input - other).
The shapes of input and other must be broadcastable.
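For same-shaped 1-D inputs, the computation reduces to the familiar p-norm of the elementwise difference, sketched here in plain Python (illustrative only; the real function operates on tensors with broadcasting):

```python
def dist(a, b, p=2.0):
    # p-norm of the elementwise difference:
    # (sum_i |a_i - b_i|**p) ** (1/p)
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1.0 / p)
```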
Arguments
input (ExTorch.Tensor) - the input tensor.
other (ExTorch.Tensor) - the right-hand side input tensor.
Optional arguments
p (number) - the norm to be computed. Default: 2.0
Examples
iex> a = ExTorch.randn(3)
#Tensor<
[ 0.3934, 0.6799, -0.1292]
[
size: {3},
dtype: :float,
device: :cpu,
requires_grad: false
]>
iex> b = ExTorch.randn(3)
#Tensor<
[-0.3785, -1.5249, 0.2093]
[
size: {3},
dtype: :float,
device: :cpu,
requires_grad: false
]>
# Compute the Euclidean norm
iex> ExTorch.dist(a, b)
#Tensor<
2.3605
[size: {}, dtype: :float, device: :cpu, requires_grad: false]>
# Compute the L1 distance
iex> ExTorch.dist(a, b, 1)
#Tensor<
3.3152
[size: {}, dtype: :float, device: :cpu, requires_grad: false]>
@spec logsumexp(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()
Available signature calls:
logsumexp(input)
@spec logsumexp( ExTorch.Tensor.t(), integer() | tuple() | nil ) :: ExTorch.Tensor.t()
@spec logsumexp(ExTorch.Tensor.t(), dim: integer() | tuple() | nil, keepdim: boolean(), out: ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Available signature calls:
logsumexp(input, kwargs)
logsumexp(input, dim)
@spec logsumexp( ExTorch.Tensor.t(), integer() | tuple() | nil, boolean() ) :: ExTorch.Tensor.t()
@spec logsumexp( ExTorch.Tensor.t(), integer() | tuple() | nil, keepdim: boolean(), out: ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Available signature calls:
logsumexp(input, dim, kwargs)
logsumexp(input, dim, keepdim)
@spec logsumexp( ExTorch.Tensor.t(), integer() | tuple() | nil, boolean(), [{:out, ExTorch.Tensor.t() | nil}] ) :: ExTorch.Tensor.t()
@spec logsumexp( ExTorch.Tensor.t(), integer() | tuple() | nil, boolean(), ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Returns the log of summed exponentials of each row of the input tensor in the given dimension dim.
The computation is numerically stabilized.
For summation index $j$ given by dim and other indices $i$, the result is:
$$ \text{logsumexp}(x)_i = \log\sum_j \exp\left(x_{ij}\right) $$
If keepdim is true, the output tensor is of the same size as input except in the dimension(s) dim
where it is of size 1. Otherwise, dim is squeezed (see ExTorch.squeeze), resulting in the
output tensor having 1 (or length(dim)) fewer dimension(s).
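The numerical stabilization mentioned above relies on factoring out the row maximum, which a plain-Python sketch makes concrete (illustrative only; not the LibTorch kernel):

```python
import math

def logsumexp(xs):
    # Factoring out the maximum m gives
    # log(sum(exp(x))) = m + log(sum(exp(x - m))),
    # so exp never overflows even for large inputs.
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))
```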
Arguments
input (ExTorch.Tensor) - the input tensor.
dim (nil | integer() | tuple()) - the dimension or dimensions to reduce. If nil, all dimensions are reduced.
Optional arguments
keepdim (boolean) - whether the output tensor has dim retained or not. Default: false
out (ExTorch.Tensor | nil) - the optional output pre-allocated tensor. Default: nil
Examples
iex> a = ExTorch.randn({3, 3})
#Tensor<
[[ 0.2292, -1.0899, 0.0889],
[-2.0117, 0.4716, -0.3893],
[-0.9382, 1.0590, -0.0838]]
[
size: {3, 3},
dtype: :float,
device: :cpu,
requires_grad: false
]>
# Compute logsumexp in all dimensions
iex> ExTorch.logsumexp(a)
#Tensor<
2.2295
[size: {}, dtype: :float, device: :cpu, requires_grad: false]>
# Compute logsumexp in the last dimension, preserve dimensions
iex> ExTorch.logsumexp(a, -1, keepdim: true)
#Tensor<
[[0.9883],
[0.8812],
[1.4338]]
[
size: {3, 1},
dtype: :float,
device: :cpu,
requires_grad: false
]>
@spec max(ExTorch.Tensor.t()) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
See ExTorch.max/4
Available signature calls:
max(input)
@spec max( ExTorch.Tensor.t(), integer() | nil ) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec max(ExTorch.Tensor.t(), dim: integer() | nil, keepdim: boolean(), out: {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil ) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
See ExTorch.max/4
Available signature calls:
max(input, kwargs)
max(input, dim)
@spec max( ExTorch.Tensor.t(), integer() | nil, boolean() ) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec max( ExTorch.Tensor.t(), integer() | nil, keepdim: boolean(), out: {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil ) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
See ExTorch.max/4
Available signature calls:
max(input, dim, kwargs)
max(input, dim, keepdim)
@spec max( ExTorch.Tensor.t(), integer() | nil, boolean(), [{:out, {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil}] ) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec max( ExTorch.Tensor.t(), integer() | nil, boolean(), {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil ) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
Returns the maximum value of all elements (or elements in a dimension) in the
input tensor.
If dim is nil, max returns the maximum element in the tensor.
Else, it returns a tuple {max, argmax} where max contains the maximum values
of each row of the input tensor in the given dimension dim, and argmax
is the index location of each maximum value found (see ExTorch.argmax/3).
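The per-row {max, argmax} pairing can be sketched in plain Python (illustrative only; the `max_dim` helper name is ours):

```python
def max_dim(row):
    # Per-row reduction returning the (max, argmax) pair; ties
    # resolve to the first occurrence of the maximal value.
    arg = max(range(len(row)), key=lambda i: row[i])
    return row[arg], arg
```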
Arguments
input (ExTorch.Tensor) - the input tensor.
Optional arguments
dim (nil | integer()) - the dimension to reduce. Default: nil
keepdim (boolean()) - whether the output tensor has dim retained or not. Default: false
out (ExTorch.Tensor | {ExTorch.Tensor, ExTorch.Tensor} | nil) - the optional output pre-allocated tensor if computing the overall maximum of a tensor; else it must be a pre-allocated tensor tuple {max, argmax}. Default: nil
Notes
- keepdim and out won't take any effect if dim is nil.
- Unlike PyTorch, max will not take two tensors as input as an alias to ExTorch.maximum/3; please use that function explicitly instead.
Examples
iex> a = ExTorch.randn({4, 4})
#Tensor<
[[-0.6730, 0.9223, -0.3803, 0.2369],
[ 0.5956, -0.2750, 1.3838, -2.1479],
[ 0.4648, -1.8987, 0.8329, -0.5854],
[ 1.1679, -0.4866, -0.5227, -0.4399]]
[
size: {4, 4},
dtype: :float,
device: :cpu,
requires_grad: false
]>
# Find the overall maximum in the tensor
iex> ExTorch.max(a)
#Tensor<
1.3838
[size: {}, dtype: :float, device: :cpu, requires_grad: false]>
# Find the maximum in the last dimension
iex> {max, argmax} = ExTorch.max(a, -1)
iex> max
#Tensor<
[0.9223, 1.3838, 0.8329, 1.1679]
[
size: {4},
dtype: :float,
device: :cpu,
requires_grad: false
]>
iex> argmax
#Tensor<
[1, 2, 2, 0]
[
size: {4},
dtype: :long,
device: :cpu,
requires_grad: false
]>
# Preserve the original number of dimensions
iex> {max, argmax} = ExTorch.max(a, -1, keepdim: true)
iex> max
#Tensor<
[[0.9223],
[1.3838],
[0.8329],
[1.1679]]
[
size: {4, 1},
dtype: :float,
device: :cpu,
requires_grad: false
]>
iex> argmax
#Tensor<
[[1],
[2],
[2],
[0]]
[
size: {4, 1},
dtype: :long,
device: :cpu,
requires_grad: false
]>
@spec mean(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()
See ExTorch.mean/5
Available signature calls:
mean(input)
@spec mean( ExTorch.Tensor.t(), integer() | tuple() | nil ) :: ExTorch.Tensor.t()
@spec mean(ExTorch.Tensor.t(), dim: integer() | tuple() | nil, keepdim: boolean(), dtype: ExTorch.DType.dtype(), out: ExTorch.Tensor.t() ) :: ExTorch.Tensor.t()
See ExTorch.mean/5
Available signature calls:
mean(input, kwargs)
mean(input, dim)
@spec mean( ExTorch.Tensor.t(), integer() | tuple() | nil, boolean() ) :: ExTorch.Tensor.t()
@spec mean( ExTorch.Tensor.t(), integer() | tuple() | nil, keepdim: boolean(), dtype: ExTorch.DType.dtype(), out: ExTorch.Tensor.t() ) :: ExTorch.Tensor.t()
See ExTorch.mean/5
Available signature calls:
mean(input, dim, kwargs)
mean(input, dim, keepdim)
@spec mean( ExTorch.Tensor.t(), integer() | tuple() | nil, boolean(), ExTorch.DType.dtype() ) :: ExTorch.Tensor.t()
@spec mean( ExTorch.Tensor.t(), integer() | tuple() | nil, boolean(), dtype: ExTorch.DType.dtype(), out: ExTorch.Tensor.t() ) :: ExTorch.Tensor.t()
See ExTorch.mean/5
Available signature calls:
mean(input, dim, keepdim, kwargs)
mean(input, dim, keepdim, dtype)
@spec mean( ExTorch.Tensor.t(), integer() | tuple() | nil, boolean(), ExTorch.DType.dtype(), [{:out, ExTorch.Tensor.t()}] ) :: ExTorch.Tensor.t()
@spec mean( ExTorch.Tensor.t(), integer() | tuple() | nil, boolean(), ExTorch.DType.dtype(), ExTorch.Tensor.t() ) :: ExTorch.Tensor.t()
Returns the mean value of all elements (or along a dimension) in the input tensor.
Arguments
input (ExTorch.Tensor) - the input tensor.
Optional arguments
dim (integer | tuple | nil) - the dimension or dimensions to reduce. If nil, all dimensions are reduced. Default: nil
keepdim (boolean) - whether the output tensor has dim retained or not. Default: false
dtype (ExTorch.DType or nil) - the desired data type of returned tensor. If specified, the input tensor is cast to dtype before the operation is performed. This is useful for preventing data type overflows. Default: nil
out (ExTorch.Tensor | nil) - the optional output pre-allocated tensor. Default: nil
Examples
iex> a = ExTorch.rand({3, 3})
#Tensor<
[[0.0945, 0.3992, 0.5090],
[0.0142, 0.1471, 0.4568],
[0.1428, 0.2121, 0.6163]]
[
size: {3, 3},
dtype: :float,
device: :cpu,
requires_grad: false
]>
# Mean of all elements in a tensor.
iex> ExTorch.mean(a)
#Tensor<
0.2880
[size: {}, dtype: :float, device: :cpu, requires_grad: false]>
# Mean of elements in the last dimension, keeping dims and casting to double
iex> ExTorch.mean(a, -1, keepdim: true, dtype: :double)
#Tensor<
[[0.3343],
[0.2060],
[0.3237]]
[
size: {3, 1},
dtype: :double,
device: :cpu,
requires_grad: false
]>
@spec median(ExTorch.Tensor.t()) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
See ExTorch.median/4
Available signature calls:
median(input)
@spec median( ExTorch.Tensor.t(), integer() | nil ) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec median(ExTorch.Tensor.t(), dim: integer() | nil, keepdim: boolean(), out: {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil ) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
See ExTorch.median/4
Available signature calls:
median(input, kwargs)
median(input, dim)
@spec median( ExTorch.Tensor.t(), integer() | nil, boolean() ) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec median( ExTorch.Tensor.t(), integer() | nil, keepdim: boolean(), out: {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil ) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
See ExTorch.median/4
Available signature calls:
median(input, dim, kwargs)
median(input, dim, keepdim)
@spec median( ExTorch.Tensor.t(), integer() | nil, boolean(), [{:out, {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil}] ) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec median( ExTorch.Tensor.t(), integer() | nil, boolean(), {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil ) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
Returns the median of the values in input.
If dim is nil, it returns the median of all values in the tensor. Else, it
returns a tuple {values, indices} where values contains the median
of each row of input in the dimension dim, and indices
contains the index of the median values found in the dimension dim.
If keepdim is true, the output tensors are of the same size as input
except in the dimension dim where they are of size 1.
Otherwise, dim is squeezed (see ExTorch.squeeze), resulting in the outputs
tensor having 1 fewer dimension than input.
Arguments
input (ExTorch.Tensor) - the input tensor.
Optional arguments
dim (integer | nil) - the dimension to reduce. If nil, all dimensions are reduced. Default: nil
keepdim (boolean) - whether the output tensor has dim retained or not. Default: false
out ({ExTorch.Tensor, ExTorch.Tensor} | nil) - the optional output pre-allocated tuple tensor. Default: nil
Notes
- The median is not unique for input tensors with an even number of elements. In this case the lower of the two medians is returned. To compute the mean of both medians, use ExTorch.quantile with q=0.5 instead.
- indices does not necessarily contain the first occurrence of each median value found, unless it is unique. The exact implementation details are device-specific. Do not expect the same result when run on CPU and GPU in general. For the same reason do not expect the gradients to be deterministic.
- keepdim and out will not take effect when dim = nil.
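The lower-median behaviour noted above can be sketched in plain Python (illustrative only; the `median_lower` helper name is ours):

```python
def median_lower(xs):
    # For an even number of elements the lower of the two middle
    # values is returned (not their mean), as the note describes.
    s = sorted(xs)
    return s[(len(s) - 1) // 2]
```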
Examples
iex> a = ExTorch.randn({3, 3})
#Tensor<
[[-0.7721, -2.0910, -0.4622],
[ 0.1119, 2.4266, 1.3471],
[-0.1450, -0.2876, -2.3025]]
[
size: {3, 3},
dtype: :float,
device: :cpu,
requires_grad: false
]>
# Compute overall median of the input tensor.
iex> ExTorch.median(a)
#Tensor<
-0.2876
[size: {}, dtype: :float, device: :cpu, requires_grad: false]>
# Compute the median of the last dimension, keeping dimensions.
iex> {values, indices} = ExTorch.median(a, -1, keepdim: true)
iex> values
#Tensor<
[[-0.7721],
[ 1.3471],
[-0.2876]]
[
size: {3, 1},
dtype: :float,
device: :cpu,
requires_grad: false
]>
iex> indices
#Tensor<
[[0],
[2],
[1]]
[
size: {3, 1},
dtype: :long,
device: :cpu,
requires_grad: false
]>
@spec min(ExTorch.Tensor.t()) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
See ExTorch.min/4
Available signature calls:
min(input)
@spec min( ExTorch.Tensor.t(), integer() | nil ) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec min(ExTorch.Tensor.t(), dim: integer() | nil, keepdim: boolean(), out: {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil ) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
See ExTorch.min/4
Available signature calls:
min(input, kwargs)
min(input, dim)
@spec min( ExTorch.Tensor.t(), integer() | nil, boolean() ) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec min( ExTorch.Tensor.t(), integer() | nil, keepdim: boolean(), out: {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil ) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
See ExTorch.min/4
Available signature calls:
min(input, dim, kwargs)
min(input, dim, keepdim)
@spec min( ExTorch.Tensor.t(), integer() | nil, boolean(), [{:out, {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil}] ) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec min( ExTorch.Tensor.t(), integer() | nil, boolean(), {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil ) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
Returns the minimum value of all elements (or elements in a dimension) in the
input tensor.
If dim is nil, min returns the minimum element in the tensor.
Else, it returns a tuple {min, argmin} where min contains the minimum values
of each row of the input tensor in the given dimension dim, and argmin
is the index location of each minimum value found (see ExTorch.argmin/3).
Arguments
input (ExTorch.Tensor) - the input tensor.
Optional arguments
dim (nil | integer()) - the dimension to reduce. Default: nil
keepdim (boolean()) - whether the output tensor has dim retained or not. Default: false
out (ExTorch.Tensor | {ExTorch.Tensor, ExTorch.Tensor} | nil) - the optional output pre-allocated tensor if computing the overall minimum of a tensor; else it must be a pre-allocated tensor tuple {min, argmin}. Default: nil
Notes
- keepdim and out won't take any effect if dim is nil.
- Unlike PyTorch, min will not take two tensors as input as an alias to ExTorch.minimum/3; please use that function explicitly instead.
Examples
iex> a = ExTorch.randn({4, 4})
#Tensor<
[[-0.6730, 0.9223, -0.3803, 0.2369],
[ 0.5956, -0.2750, 1.3838, -2.1479],
[ 0.4648, -1.8987, 0.8329, -0.5854],
[ 1.1679, -0.4866, -0.5227, -0.4399]]
[
size: {4, 4},
dtype: :float,
device: :cpu,
requires_grad: false
]>
# Find the overall minimum in the tensor
iex> ExTorch.min(a)
#Tensor<
-2.1479
[size: {}, dtype: :float, device: :cpu, requires_grad: false]>
# Find the minimum in the last dimension
iex> {min, argmin} = ExTorch.min(a, -1)
iex> min
#Tensor<
[-0.6730, -2.1479, -1.8987, -0.5227]
[
size: {4},
dtype: :float,
device: :cpu,
requires_grad: false
]>
iex> argmin
#Tensor<
[0, 3, 1, 2]
[
size: {4},
dtype: :long,
device: :cpu,
requires_grad: false
]>
# Preserve the original number of dimensions
iex> {min, argmin} = ExTorch.min(a, -1, keepdim: true)
iex> min
#Tensor<
[[-0.6730],
[-2.1479],
[-1.8987],
[-0.5227]]
[
size: {4, 1},
dtype: :float,
device: :cpu,
requires_grad: false
]>
iex> argmin
#Tensor<
[[0],
[3],
[1],
[2]]
[
size: {4, 1},
dtype: :long,
device: :cpu,
requires_grad: false
]>
@spec mode(ExTorch.Tensor.t()) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
See ExTorch.mode/4
Available signature calls:
mode(input)
@spec mode( ExTorch.Tensor.t(), integer() | nil ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec mode(ExTorch.Tensor.t(), dim: integer() | nil, keepdim: boolean(), out: {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
See ExTorch.mode/4
Available signature calls:
mode(input, kwargs)
mode(input, dim)
@spec mode( ExTorch.Tensor.t(), integer() | nil, boolean() ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec mode( ExTorch.Tensor.t(), integer() | nil, keepdim: boolean(), out: {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
See ExTorch.mode/4
Available signature calls:
mode(input, dim, kwargs)
mode(input, dim, keepdim)
@spec mode( ExTorch.Tensor.t(), integer() | nil, boolean(), [{:out, {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil}] ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec mode( ExTorch.Tensor.t(), integer() | nil, boolean(), {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
Returns the mode of the values in input across dimension dim.
It returns a tuple {values, indices} where values contains the mode
of each row of input in the dimension dim, and indices
contains the index location of each mode value found in the dimension dim.
If keepdim is true, the output tensors are of the same size as input
except in the dimension dim where they are of size 1.
Otherwise, dim is squeezed (see ExTorch.squeeze), resulting in the outputs
tensor having 1 fewer dimension than input.
Arguments
input (ExTorch.Tensor) - the input tensor.
Optional arguments
dim (integer | nil) - the dimension to reduce. Default: -1
keepdim (boolean) - whether the output tensor has dim retained or not. Default: false
out ({ExTorch.Tensor, ExTorch.Tensor} | nil) - the optional output pre-allocated tuple tensor. Default: nil
Notes
This function is not defined for CUDA tensors yet.
Examples
iex> a = ExTorch.randint(5, {3, 4}, dtype: :int32)
#Tensor<
[[4, 4, 4, 4],
[3, 4, 1, 1],
[3, 2, 2, 0]]
[
size: {3, 4},
dtype: :int,
device: :cpu,
requires_grad: false
]>
# Compute the mode in the last dimension.
iex> {values, indices} = ExTorch.mode(a)
iex> values
#Tensor<
[4, 1, 2]
[size: {3}, dtype: :int, device: :cpu, requires_grad: false]>
iex> indices
#Tensor<
[3, 3, 2]
[size: {3}, dtype: :long, device: :cpu, requires_grad: false]>
# Compute the mode in the first dimension, keeping output dimensions.
iex> {values, indices} = ExTorch.mode(a, 0, keepdim: true)
iex> values
#Tensor<
[[3, 4, 1, 0]]
[
size: {1, 4},
dtype: :int,
device: :cpu,
requires_grad: false
]>
iex> indices
#Tensor<
[[2, 1, 1, 2]]
[
size: {1, 4},
dtype: :long,
device: :cpu,
requires_grad: false
]>
@spec nanmean(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()
Available signature calls:
nanmean(input)
@spec nanmean( ExTorch.Tensor.t(), integer() | tuple() | nil ) :: ExTorch.Tensor.t()
@spec nanmean(ExTorch.Tensor.t(), dim: integer() | tuple() | nil, keepdim: boolean(), dtype: ExTorch.DType.dtype(), out: ExTorch.Tensor.t() ) :: ExTorch.Tensor.t()
Available signature calls:
nanmean(input, kwargs)
nanmean(input, dim)
@spec nanmean( ExTorch.Tensor.t(), integer() | tuple() | nil, boolean() ) :: ExTorch.Tensor.t()
@spec nanmean( ExTorch.Tensor.t(), integer() | tuple() | nil, keepdim: boolean(), dtype: ExTorch.DType.dtype(), out: ExTorch.Tensor.t() ) :: ExTorch.Tensor.t()
Available signature calls:
nanmean(input, dim, kwargs)
nanmean(input, dim, keepdim)
@spec nanmean( ExTorch.Tensor.t(), integer() | tuple() | nil, boolean(), ExTorch.DType.dtype() ) :: ExTorch.Tensor.t()
@spec nanmean( ExTorch.Tensor.t(), integer() | tuple() | nil, boolean(), dtype: ExTorch.DType.dtype(), out: ExTorch.Tensor.t() ) :: ExTorch.Tensor.t()
Available signature calls:
nanmean(input, dim, keepdim, kwargs)
nanmean(input, dim, keepdim, dtype)
@spec nanmean( ExTorch.Tensor.t(), integer() | tuple() | nil, boolean(), ExTorch.DType.dtype(), [{:out, ExTorch.Tensor.t()}] ) :: ExTorch.Tensor.t()
@spec nanmean( ExTorch.Tensor.t(), integer() | tuple() | nil, boolean(), ExTorch.DType.dtype(), ExTorch.Tensor.t() ) :: ExTorch.Tensor.t()
Computes the mean of all non-NaN elements along the specified dimensions.
This function is identical to ExTorch.mean/5 when there are no :nan values in the input tensor.
In the presence of :nan, ExTorch.mean will propagate the :nan to the output whereas ExTorch.nanmean
will ignore the NaN values.
If keepdim is true, the output tensor is of the same size as input except in the dimension(s) dim
where it is of size 1. Otherwise, dim is squeezed (see ExTorch.squeeze), resulting in the output
tensor having 1 (or length(dim)) fewer dimension(s).
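The NaN-ignoring mean can be sketched in plain Python (illustrative only; not the ExTorch implementation):

```python
import math

def nanmean(xs):
    # Mean over the non-NaN entries only; an all-NaN input
    # propagates NaN.
    vals = [x for x in xs if not math.isnan(x)]
    return sum(vals) / len(vals) if vals else math.nan
```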
Arguments
input (ExTorch.Tensor) - the input tensor.
Optional arguments
dim (integer | tuple | nil) - the dimension or dimensions to reduce. If nil, all dimensions are reduced. Default: nil
keepdim (boolean) - whether the output tensor has dim retained or not. Default: false
dtype (ExTorch.DType or nil) - the desired data type of returned tensor. If specified, the input tensor is cast to dtype before the operation is performed. This is useful for preventing data type overflows. Default: nil
out (ExTorch.Tensor | nil) - the optional output pre-allocated tensor. Default: nil
Examples
iex> a = ExTorch.tensor([[:nan, 1.0, 2.0], [-1.0, :nan, 2.0], [1.0, -1.0, :nan]])
#Tensor<
[[ nan, 1.0000, 2.0000],
[-1.0000, nan, 2.0000],
[ 1.0000, -1.0000, nan]]
[
size: {3, 3},
dtype: :float,
device: :cpu,
requires_grad: false
]>
# Compute mean of all array elements without :nan
iex> ExTorch.nanmean(a)
#Tensor<
0.6667
[size: {}, dtype: :float, device: :cpu, requires_grad: false]>
# Compute mean of all array elements on the last dimension, keep all dims
iex> ExTorch.nanmean(a, -1, keepdim: true)
#Tensor<
[[1.5000],
[0.5000],
[0.0000]]
[
size: {3, 1},
dtype: :float,
device: :cpu,
requires_grad: false
]>
@spec nanmedian(ExTorch.Tensor.t()) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
Available signature calls:
nanmedian(input)
@spec nanmedian( ExTorch.Tensor.t(), integer() | nil ) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec nanmedian(ExTorch.Tensor.t(), dim: integer() | nil, keepdim: boolean(), out: {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil ) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
Available signature calls:
nanmedian(input, kwargs)
nanmedian(input, dim)
@spec nanmedian( ExTorch.Tensor.t(), integer() | nil, boolean() ) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec nanmedian( ExTorch.Tensor.t(), integer() | nil, keepdim: boolean(), out: {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil ) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
Available signature calls:
nanmedian(input, dim, kwargs)
nanmedian(input, dim, keepdim)
@spec nanmedian( ExTorch.Tensor.t(), integer() | nil, boolean(), [{:out, {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil}] ) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec nanmedian( ExTorch.Tensor.t(), integer() | nil, boolean(), {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil ) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
Returns the median of the values in input, ignoring NaN values.
This function is identical to ExTorch.median/4 when there are no :nan values in input.
When input has one or more :nan values, ExTorch.median will always return :nan,
while this function will return the median of the non-NaN elements in input.
If all the elements in input are NaN it will also return :nan.
If dim is nil, it returns the median of all values in the tensor. Else, it
returns a tuple {values, indices} where values contains the median
of each row of input in the dimension dim, and indices
contains the index of the median values found in the dimension dim.
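Combining the NaN filtering with the lower-median rule from ExTorch.median gives this plain-Python sketch (illustrative only; the helper name is ours):

```python
import math

def nanmedian(xs):
    # Ignore NaNs, then take the lower median of what remains;
    # an all-NaN input yields NaN.
    vals = sorted(x for x in xs if not math.isnan(x))
    return vals[(len(vals) - 1) // 2] if vals else math.nan
```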
Arguments
input(ExTorch.Tensor) - the input tensor.
Optional arguments
dim(integer | nil) - the dimension or dimensions to reduce. If nil, all dimensions are reduced. Default: nil
keepdim(boolean) - whether the output tensor has dim retained or not. Default: false
out({ExTorch.Tensor, ExTorch.Tensor} | nil) - the optional output pre-allocated tuple tensor. Default: nil
Examples
iex> input =
...> ExTorch.tensor([
...> [:nan, 1.0, 2.0],
...> [-1.0, :nan, 2.0],
...> [1.0, -1.0, :nan]
...> ])
# Compute median of the tensor without :nan
iex> ExTorch.nanmedian(input)
#Tensor<
1.
[size: {}, dtype: :float, device: :cpu, requires_grad: false]>
# Compute median of the tensor in the last dimension, keeping all dimensions
iex> {values, indices} = ExTorch.nanmedian(input, -1, keepdim: true)
iex> values
#Tensor<
[[ 1.],
[-1.],
[-1.]]
[
size: {3, 1},
dtype: :float,
device: :cpu,
requires_grad: false
]>
iex> indices
#Tensor<
[[1],
[0],
[1]]
[
size: {3, 1},
dtype: :long,
device: :cpu,
requires_grad: false
]>
@spec nanquantile( ExTorch.Tensor.t(), float() | ExTorch.Tensor.t() ) :: ExTorch.Tensor.t()
Available signature calls:
nanquantile(input, q)
@spec nanquantile( ExTorch.Tensor.t(), float() | ExTorch.Tensor.t(), integer() | nil ) :: ExTorch.Tensor.t()
@spec nanquantile( ExTorch.Tensor.t(), float() | ExTorch.Tensor.t(), dim: integer() | nil, keepdim: boolean(), interpolation: :linear | :lower | :higher | :midpoint | :nearest, out: ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Available signature calls:
nanquantile(input, q, kwargs)
nanquantile(input, q, dim)
@spec nanquantile( ExTorch.Tensor.t(), float() | ExTorch.Tensor.t(), integer() | nil, boolean() ) :: ExTorch.Tensor.t()
@spec nanquantile( ExTorch.Tensor.t(), float() | ExTorch.Tensor.t(), integer() | nil, keepdim: boolean(), interpolation: :linear | :lower | :higher | :midpoint | :nearest, out: ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Available signature calls:
nanquantile(input, q, dim, kwargs)
nanquantile(input, q, dim, keepdim)
@spec nanquantile( ExTorch.Tensor.t(), float() | ExTorch.Tensor.t(), integer() | nil, boolean(), :linear | :lower | :higher | :midpoint | :nearest ) :: ExTorch.Tensor.t()
@spec nanquantile( ExTorch.Tensor.t(), float() | ExTorch.Tensor.t(), integer() | nil, boolean(), interpolation: :linear | :lower | :higher | :midpoint | :nearest, out: ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Available signature calls:
nanquantile(input, q, dim, keepdim, kwargs)
nanquantile(input, q, dim, keepdim, interpolation)
@spec nanquantile( ExTorch.Tensor.t(), float() | ExTorch.Tensor.t(), integer() | nil, boolean(), :linear | :lower | :higher | :midpoint | :nearest, [{:out, ExTorch.Tensor.t() | nil}] ) :: ExTorch.Tensor.t()
@spec nanquantile( ExTorch.Tensor.t(), float() | ExTorch.Tensor.t(), integer() | nil, boolean(), :linear | :lower | :higher | :midpoint | :nearest, ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
This is a variant of ExTorch.quantile/6 that “ignores” NaN values, computing the quantiles q as
if NaN values in input did not exist. If all values in a reduced row are NaN then the quantiles
for that reduction will be NaN. See the documentation for ExTorch.quantile/6.
Arguments
input(ExTorch.Tensor) - the input tensor.
q(ExTorch.Tensor | floating) - a scalar or 1D tensor of values in the range [0, 1].
Optional arguments
dim(integer | nil) - the dimension to reduce. If nil, input will be flattened before computation. Default: nil
keepdim(boolean) - whether the output has dim retained or not. Default: false
interpolation(atom) - interpolation method to use when the desired quantile lies between two data points. Can be :linear, :lower, :higher, :midpoint and :nearest. Default: :linear.
out(ExTorch.Tensor | nil) - the optional output pre-allocated tensor. Default: nil
Examples
# Compute quantiles throughout all tensor elements, ignoring :nan values
iex> a = ExTorch.tensor([:nan, 1, 2])
iex> ExTorch.nanquantile(a, 0.5)
#Tensor<
[1.5000]
[size: {1}, dtype: :float, device: :cpu, requires_grad: false]>
# Compute quantiles across specific dimensions, ignoring :nan values
iex> a = ExTorch.tensor([[:nan, :nan], [1, 2]])
iex> ExTorch.nanquantile(a, 0.5, dim: 0)
#Tensor<
[[1., 2.]]
[
size: {1, 2},
dtype: :float,
device: :cpu,
requires_grad: false
]>
iex> ExTorch.nanquantile(a, 0.5, dim: 1)
#Tensor<
[[ nan, 1.5000]]
[
size: {1, 2},
dtype: :float,
device: :cpu,
requires_grad: false
]>
@spec nansum(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()
See ExTorch.nansum/4
Available signature calls:
nansum(input)
@spec nansum(ExTorch.Tensor.t(), integer() | tuple() | nil) :: ExTorch.Tensor.t()
@spec nansum(ExTorch.Tensor.t(), dim: integer() | tuple() | nil, keepdim: boolean(), dtype: ExTorch.DType.dtype() ) :: ExTorch.Tensor.t()
See ExTorch.nansum/4
Available signature calls:
nansum(input, kwargs)
nansum(input, dim)
@spec nansum(ExTorch.Tensor.t(), integer() | tuple() | nil, boolean()) :: ExTorch.Tensor.t()
@spec nansum(ExTorch.Tensor.t(), integer() | tuple() | nil, keepdim: boolean(), dtype: ExTorch.DType.dtype() ) :: ExTorch.Tensor.t()
See ExTorch.nansum/4
Available signature calls:
nansum(input, dim, kwargs)
nansum(input, dim, keepdim)
@spec nansum( ExTorch.Tensor.t(), integer() | tuple() | nil, boolean(), ExTorch.DType.dtype() ) :: ExTorch.Tensor.t()
@spec nansum(ExTorch.Tensor.t(), integer() | tuple() | nil, boolean(), [ {:dtype, ExTorch.DType.dtype()} ]) :: ExTorch.Tensor.t()
Returns the sum of all elements (or along a given dimension) in the input tensor, treating NaNs as zeros.
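The "NaNs as zeros" rule over a flattened input can be written out in plain Python (a sketch of the semantics, not ExTorch code):

```python
import math

def nansum(values):
    """Sum of a flat list, treating NaN entries as zero."""
    return sum(0.0 if math.isnan(v) else v for v in values)

# Flattened version of the tensor from the example below:
data = [4, 4, 4, math.nan, 3, math.nan, 1, 1, 3, 2, math.nan, 0]
print(nansum(data))  # 22.0
```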
Arguments
input(ExTorch.Tensor) - the input tensor.
Optional arguments
dim(integer | tuple | nil) - the dimension or dimensions to reduce. If nil, all dimensions are reduced. Default: nil
keepdim(boolean) - whether the output tensor has dim retained or not. Default: false
dtype(ExTorch.DType or nil) - the desired data type of the returned tensor. If specified, the input tensor is cast to dtype before the operation is performed. This is useful for preventing data type overflows. Default: nil.
Examples
iex> input =
...> ExTorch.tensor(
...> [
...> [4, 4, 4, :nan],
...> [3, :nan, 1, 1],
...> [3, 2, :nan, 0]
...> ]
...> )
# Sum all elements in the tensor, ignoring NaNs
iex> ExTorch.nansum(input)
#Tensor<
22.
[size: {}, dtype: :float, device: :cpu, requires_grad: false]>
# Sum all elements in the last dimension, keeping dims and casting to double.
iex> ExTorch.nansum(input, -1, keepdim: true, dtype: :double)
#Tensor<
[[12.],
[ 5.],
[ 5.]]
[
size: {3, 1},
dtype: :double,
device: :cpu,
requires_grad: false
]>
@spec prod(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()
See ExTorch.prod/4
Available signature calls:
prod(input)
@spec prod(ExTorch.Tensor.t(), integer() | tuple() | nil) :: ExTorch.Tensor.t()
@spec prod(ExTorch.Tensor.t(), dim: integer() | tuple() | nil, keepdim: boolean(), dtype: ExTorch.DType.dtype() ) :: ExTorch.Tensor.t()
See ExTorch.prod/4
Available signature calls:
prod(input, kwargs)
prod(input, dim)
@spec prod(ExTorch.Tensor.t(), integer() | tuple() | nil, boolean()) :: ExTorch.Tensor.t()
@spec prod(ExTorch.Tensor.t(), integer() | tuple() | nil, keepdim: boolean(), dtype: ExTorch.DType.dtype() ) :: ExTorch.Tensor.t()
See ExTorch.prod/4
Available signature calls:
prod(input, dim, kwargs)
prod(input, dim, keepdim)
@spec prod( ExTorch.Tensor.t(), integer() | tuple() | nil, boolean(), ExTorch.DType.dtype() ) :: ExTorch.Tensor.t()
@spec prod(ExTorch.Tensor.t(), integer() | tuple() | nil, boolean(), [ {:dtype, ExTorch.DType.dtype()} ]) :: ExTorch.Tensor.t()
Returns the product of all elements (or along a given dimension) in the input tensor.
Arguments
input(ExTorch.Tensor) - the input tensor.
Optional arguments
dim(integer | tuple | nil) - the dimension or dimensions to reduce. If nil, all dimensions are reduced. Default: nil
keepdim(boolean) - whether the output tensor has dim retained or not. Default: false
dtype(ExTorch.DType or nil) - the desired data type of the returned tensor. If specified, the input tensor is cast to dtype before the operation is performed. This is useful for preventing data type overflows. Default: nil.
Notes
keepdim does not apply when dim = nil.
Examples
iex> a = ExTorch.randint(1, 3, {3, 4})
#Tensor<
[[1., 1., 1., 2.],
[1., 2., 1., 1.],
[1., 2., 1., 2.]]
[
size: {3, 4},
dtype: :float,
device: :cpu,
requires_grad: false
]>
# Multiply all elements in the tensor
iex> ExTorch.prod(a)
#Tensor<
16.
[size: {}, dtype: :float, device: :cpu, requires_grad: false]>
# Multiply all elements in the last dimension, keep all dimensions
iex> ExTorch.prod(a, -1, keepdim: true)
#Tensor<
[[2.],
[2.],
[4.]]
[
size: {3, 1},
dtype: :float,
device: :cpu,
requires_grad: false
]>
@spec quantile( ExTorch.Tensor.t(), float() | ExTorch.Tensor.t() ) :: ExTorch.Tensor.t()
Available signature calls:
quantile(input, q)
@spec quantile( ExTorch.Tensor.t(), float() | ExTorch.Tensor.t(), integer() | nil ) :: ExTorch.Tensor.t()
@spec quantile( ExTorch.Tensor.t(), float() | ExTorch.Tensor.t(), dim: integer() | nil, keepdim: boolean(), interpolation: :linear | :lower | :higher | :midpoint | :nearest, out: ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Available signature calls:
quantile(input, q, kwargs)
quantile(input, q, dim)
@spec quantile( ExTorch.Tensor.t(), float() | ExTorch.Tensor.t(), integer() | nil, boolean() ) :: ExTorch.Tensor.t()
@spec quantile( ExTorch.Tensor.t(), float() | ExTorch.Tensor.t(), integer() | nil, keepdim: boolean(), interpolation: :linear | :lower | :higher | :midpoint | :nearest, out: ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Available signature calls:
quantile(input, q, dim, kwargs)
quantile(input, q, dim, keepdim)
@spec quantile( ExTorch.Tensor.t(), float() | ExTorch.Tensor.t(), integer() | nil, boolean(), :linear | :lower | :higher | :midpoint | :nearest ) :: ExTorch.Tensor.t()
@spec quantile( ExTorch.Tensor.t(), float() | ExTorch.Tensor.t(), integer() | nil, boolean(), interpolation: :linear | :lower | :higher | :midpoint | :nearest, out: ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Available signature calls:
quantile(input, q, dim, keepdim, kwargs)
quantile(input, q, dim, keepdim, interpolation)
@spec quantile( ExTorch.Tensor.t(), float() | ExTorch.Tensor.t(), integer() | nil, boolean(), :linear | :lower | :higher | :midpoint | :nearest, [{:out, ExTorch.Tensor.t() | nil}] ) :: ExTorch.Tensor.t()
@spec quantile( ExTorch.Tensor.t(), float() | ExTorch.Tensor.t(), integer() | nil, boolean(), :linear | :lower | :higher | :midpoint | :nearest, ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Computes the q-th quantiles of each row of the input tensor along the dimension dim.
To compute the quantile, we map q in [0, 1] linearly to the range of indices [0, n - 1] to find the
location of the quantile in the sorted input. If the quantile lies between two data points
a < b with indices i and j in the sorted order, the result is computed according to the
given interpolation method as follows:
:linear: a + (b - a) * fraction, where fraction is the fractional part of the computed quantile index.
:lower: a.
:higher: b.
:nearest: a or b, whichever's index is closer to the computed quantile index (rounding down for .5 fractions).
:midpoint: (a + b) / 2.
If q is a 1D tensor, the first dimension of the output represents the quantiles and has size equal
to the size of q, the remaining dimensions are what remains from the reduction.
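The index mapping and the interpolation rules can be spelled out in plain Python (a sketch for a 1D input, not ExTorch code):

```python
import math

def quantile(values, q, interpolation="linear"):
    """Map q in [0, 1] to the fractional index q * (n - 1) of the sorted
    input and resolve it with the requested interpolation method."""
    xs = sorted(values)
    idx = q * (len(xs) - 1)               # fractional index of the quantile
    i = int(math.floor(idx))
    frac = idx - math.floor(idx)
    j = min(i + 1, len(xs) - 1)
    a, b = xs[i], xs[j]
    if interpolation == "linear":
        return a + (b - a) * frac
    if interpolation == "lower":
        return a
    if interpolation == "higher":
        return b
    if interpolation == "midpoint":
        return (a + b) / 2
    if interpolation == "nearest":
        # Closer index wins; exact .5 fractions round down to a.
        return a if frac <= 0.5 else b
    raise ValueError(interpolation)

xs = [0.0, 1.0, 2.0, 3.0]  # same values as ExTorch.arange(4)
print(round(quantile(xs, 0.6, "linear"), 4))  # 1.8
print(quantile(xs, 0.6, "lower"))             # 1.0
print(quantile(xs, 0.6, "nearest"))           # 2.0
```

The printed values match the interpolation-mode examples shown further below.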
Arguments
input(ExTorch.Tensor) - the input tensor.
q(ExTorch.Tensor | floating) - a scalar or 1D tensor of values in the range [0, 1].
Optional arguments
dim(integer | nil) - the dimension to reduce. If nil, input will be flattened before computation. Default: nil
keepdim(boolean) - whether the output has dim retained or not. Default: false
interpolation(atom) - interpolation method to use when the desired quantile lies between two data points. Can be :linear, :lower, :higher, :midpoint and :nearest. Default: :linear.
out(ExTorch.Tensor | nil) - the optional output pre-allocated tensor. Default: nil
Examples
iex> a = ExTorch.randn({2, 3})
#Tensor<
[[ 2.1818, -1.5810, 0.6152],
[ 0.2525, -0.7425, 0.3769]]
[
size: {2, 3},
dtype: :float,
device: :cpu,
requires_grad: false
]>
iex> q = ExTorch.tensor([0.25, 0.5, 0.75])
# Quantiles in the last dimension, keep output dimensions
iex> ExTorch.quantile(a, q, dim: 1, keepdim: true)
#Tensor<
[[[-0.4829],
[-0.2450]],
[[ 0.6152],
[ 0.2525]],
[[ 1.3985],
[ 0.3147]]]
[
size: {3, 2, 1},
dtype: :float,
device: :cpu,
requires_grad: false
]>
iex> a = ExTorch.arange(4)
#Tensor<
[0.0000, 1.0000, 2.0000, 3.0000]
[
size: {4},
dtype: :float,
device: :cpu,
requires_grad: false
]>
# Interpolation modes
iex> ExTorch.quantile(a, 0.6, interpolation: :linear)
#Tensor<
[1.8000]
[size: {1}, dtype: :float, device: :cpu, requires_grad: false]>
iex> ExTorch.quantile(a, 0.6, interpolation: :lower)
#Tensor<
[1.]
[size: {1}, dtype: :float, device: :cpu, requires_grad: false]>
iex> ExTorch.quantile(a, 0.6, interpolation: :higher)
#Tensor<
[2.]
[size: {1}, dtype: :float, device: :cpu, requires_grad: false]>
iex> ExTorch.quantile(a, 0.6, interpolation: :midpoint)
#Tensor<
[1.5000]
[size: {1}, dtype: :float, device: :cpu, requires_grad: false]>
iex> ExTorch.quantile(a, 0.6, interpolation: :nearest)
#Tensor<
[2.]
[size: {1}, dtype: :float, device: :cpu, requires_grad: false]>
@spec std(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()
Alias of std_dev/1
@spec std( ExTorch.Tensor.t(), integer() | tuple() | nil ) :: ExTorch.Tensor.t()
@spec std(ExTorch.Tensor.t(), dim: integer() | tuple() | nil, correction: integer(), keepdim: boolean(), out: ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Alias of std_dev/2
@spec std( ExTorch.Tensor.t(), integer() | tuple() | nil, integer() ) :: ExTorch.Tensor.t()
@spec std( ExTorch.Tensor.t(), integer() | tuple() | nil, correction: integer(), keepdim: boolean(), out: ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Alias of std_dev/3
@spec std( ExTorch.Tensor.t(), integer() | tuple() | nil, integer(), boolean() ) :: ExTorch.Tensor.t()
@spec std( ExTorch.Tensor.t(), integer() | tuple() | nil, integer(), keepdim: boolean(), out: ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Alias of std_dev/4
@spec std( ExTorch.Tensor.t(), integer() | tuple() | nil, integer(), boolean(), [{:out, ExTorch.Tensor.t() | nil}] ) :: ExTorch.Tensor.t()
@spec std( ExTorch.Tensor.t(), integer() | tuple() | nil, integer(), boolean(), ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Alias of std_dev/5
@spec std_dev(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()
Available signature calls:
std_dev(input)
@spec std_dev( ExTorch.Tensor.t(), integer() | tuple() | nil ) :: ExTorch.Tensor.t()
@spec std_dev(ExTorch.Tensor.t(), dim: integer() | tuple() | nil, correction: integer(), keepdim: boolean(), out: ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Available signature calls:
std_dev(input, kwargs)
std_dev(input, dim)
@spec std_dev( ExTorch.Tensor.t(), integer() | tuple() | nil, integer() ) :: ExTorch.Tensor.t()
@spec std_dev( ExTorch.Tensor.t(), integer() | tuple() | nil, correction: integer(), keepdim: boolean(), out: ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Available signature calls:
std_dev(input, dim, kwargs)
std_dev(input, dim, correction)
@spec std_dev( ExTorch.Tensor.t(), integer() | tuple() | nil, integer(), boolean() ) :: ExTorch.Tensor.t()
@spec std_dev( ExTorch.Tensor.t(), integer() | tuple() | nil, integer(), keepdim: boolean(), out: ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Available signature calls:
std_dev(input, dim, correction, kwargs)
std_dev(input, dim, correction, keepdim)
@spec std_dev( ExTorch.Tensor.t(), integer() | tuple() | nil, integer(), boolean(), [{:out, ExTorch.Tensor.t() | nil}] ) :: ExTorch.Tensor.t()
@spec std_dev( ExTorch.Tensor.t(), integer() | tuple() | nil, integer(), boolean(), ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Calculates the standard deviation over the dimensions specified by dim.
dim can be a single dimension, a tuple of dimensions, or nil to reduce over all dimensions.
The standard deviation ($\sigma$) is calculated as
$$ \sigma = \sqrt{\frac{1}{N - \delta N} \sum\_{i=0}^{N - 1} (x\_i - \bar{x})^2} $$
where $x$ is the sample set of elements, $\bar{x}$ is the sample mean, $N$ is the number of samples
and $\delta N$ is the correction.
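The formula can be checked in plain Python (a sketch over a flat list, not ExTorch code; correction = 1 is Bessel's correction, which is what statistics.stdev computes):

```python
import math
import statistics

def std_dev(xs, correction=1):
    """sqrt(1 / (N - correction) * sum((x_i - mean)^2)) over a flat list."""
    n = len(xs)
    mean = sum(xs) / n
    return math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - correction))

xs = [1.0, 2.0, 3.0, 4.0]
print(std_dev(xs, correction=1))  # sample std (Bessel-corrected)
print(std_dev(xs, correction=0))  # population std

# Cross-check against the standard library:
assert math.isclose(std_dev(xs, 1), statistics.stdev(xs))
assert math.isclose(std_dev(xs, 0), statistics.pstdev(xs))
```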
If keepdim is true, the output tensors are of the same size as input
except in the dimension dim where they are of size 1.
Otherwise, dim is squeezed (see ExTorch.squeeze), resulting in the output
tensor having 1 fewer dimension than input.
Arguments
input(ExTorch.Tensor) - the input tensor.
Optional arguments
dim(integer | tuple | nil) - the dimension or dimensions to reduce. If nil, all dimensions are reduced. Default: nil
correction(integer) - the difference between the sample size and the sample degrees of freedom. Defaults to Bessel's correction. Default: 1
keepdim(boolean) - whether the output tensor has dim retained or not. Default: false
out(ExTorch.Tensor | nil) - the optional output pre-allocated tensor. Default: nil
Examples
iex> a = ExTorch.randn({4, 4})
#Tensor<
[[ 0.0686, 0.7169, 0.2143, 1.5755],
[-1.6080, 0.9169, -0.0937, 1.2906],
[ 0.5432, 2.4151, -0.3814, 0.2830],
[-0.0724, 0.7037, -0.1951, -0.1191]]
[
size: {4, 4},
dtype: :float,
device: :cpu,
requires_grad: false
]>
# Compute the standard deviation of all tensor elements
iex> ExTorch.std(a)
#Tensor<
0.9167
[size: {}, dtype: :float, device: :cpu, requires_grad: false]>
# Compute the standard deviation of elements in the last dimension, keeping total dimensions
iex> ExTorch.std(a, -1, keepdim: true)
#Tensor<
[[0.6804],
[1.2957],
[1.1984],
[0.4194]]
[
size: {4, 1},
dtype: :float,
device: :cpu,
requires_grad: false
]>
@spec std_mean(ExTorch.Tensor.t()) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
Available signature calls:
std_mean(input)
@spec std_mean( ExTorch.Tensor.t(), integer() | tuple() | nil ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec std_mean(ExTorch.Tensor.t(), dim: integer() | tuple() | nil, correction: integer(), keepdim: boolean(), out: {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
Available signature calls:
std_mean(input, kwargs)
std_mean(input, dim)
@spec std_mean( ExTorch.Tensor.t(), integer() | tuple() | nil, integer() ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec std_mean( ExTorch.Tensor.t(), integer() | tuple() | nil, correction: integer(), keepdim: boolean(), out: {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
Available signature calls:
std_mean(input, dim, kwargs)
std_mean(input, dim, correction)
@spec std_mean( ExTorch.Tensor.t(), integer() | tuple() | nil, integer(), boolean() ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec std_mean( ExTorch.Tensor.t(), integer() | tuple() | nil, integer(), keepdim: boolean(), out: {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
Available signature calls:
std_mean(input, dim, correction, kwargs)
std_mean(input, dim, correction, keepdim)
@spec std_mean( ExTorch.Tensor.t(), integer() | tuple() | nil, integer(), boolean(), [{:out, {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil}] ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec std_mean( ExTorch.Tensor.t(), integer() | tuple() | nil, integer(), boolean(), {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
Calculates the standard deviation and mean over the dimensions specified by dim.
dim can be a single dimension, a tuple of dimensions, or nil to reduce over all dimensions.
It returns a tuple {std, mean} containing the standard deviation and mean, respectively.
The standard deviation ($\sigma$) is calculated as
$$ \sigma = \sqrt{\frac{1}{N - \delta N} \sum\_{i=0}^{N - 1} (x\_i - \bar{x})^2} $$
where $x$ is the sample set of elements, $\bar{x}$ is the sample mean, $N$ is the number of samples
and $\delta N$ is the correction.
If keepdim is true, the output tensors are of the same size as input
except in the dimension dim where they are of size 1.
Otherwise, dim is squeezed (see ExTorch.squeeze), resulting in the output
tensors having 1 fewer dimension than input.
Arguments
input(ExTorch.Tensor) - the input tensor.
Optional arguments
dim(integer | tuple | nil) - the dimension or dimensions to reduce. If nil, all dimensions are reduced. Default: nil
correction(integer) - the difference between the sample size and the sample degrees of freedom. Defaults to Bessel's correction. Default: 1
keepdim(boolean) - whether the output tensor has dim retained or not. Default: false
out({ExTorch.Tensor, ExTorch.Tensor} | nil) - a tuple containing the optional output pre-allocated tensors. Default: nil
Examples
iex> a = ExTorch.randn({4, 4})
#Tensor<
[[-0.8204, 0.0761, -0.5242, 0.7905],
[ 0.4202, 0.5431, -0.9726, 0.7407],
[-1.5224, 1.1669, -1.4509, 0.0034],
[-0.8064, 1.2111, 1.3384, -1.2709]]
[
size: {4, 4},
dtype: :float,
device: :cpu,
requires_grad: false
]>
# Compute standard deviation and mean of all tensor elements
iex> {std, mean} = ExTorch.std_mean(a)
iex> std
#Tensor<
0.9926
[size: {}, dtype: :float, device: :cpu, requires_grad: false]>
iex> mean
#Tensor<
-0.0673
[size: {}, dtype: :float, device: :cpu, requires_grad: false]>
# Compute standard deviation and mean of all tensor elements in the last dimension
iex> {std, mean} = ExTorch.std_mean(a, -1, keepdim: true)
iex> std
#Tensor<
[[0.7121],
[0.7815],
[1.2874],
[1.3500]]
[
size: {4, 1},
dtype: :float,
device: :cpu,
requires_grad: false
]>
iex> mean
#Tensor<
[[-0.1195],
[ 0.1829],
[-0.4507],
[ 0.1180]]
[
size: {4, 1},
dtype: :float,
device: :cpu,
requires_grad: false
]>
@spec sum(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()
See ExTorch.sum/4
Available signature calls:
sum(input)
@spec sum(ExTorch.Tensor.t(), integer() | tuple() | nil) :: ExTorch.Tensor.t()
@spec sum(ExTorch.Tensor.t(), dim: integer() | tuple() | nil, keepdim: boolean(), dtype: ExTorch.DType.dtype() ) :: ExTorch.Tensor.t()
See ExTorch.sum/4
Available signature calls:
sum(input, kwargs)
sum(input, dim)
@spec sum(ExTorch.Tensor.t(), integer() | tuple() | nil, boolean()) :: ExTorch.Tensor.t()
@spec sum(ExTorch.Tensor.t(), integer() | tuple() | nil, keepdim: boolean(), dtype: ExTorch.DType.dtype() ) :: ExTorch.Tensor.t()
See ExTorch.sum/4
Available signature calls:
sum(input, dim, kwargs)
sum(input, dim, keepdim)
@spec sum( ExTorch.Tensor.t(), integer() | tuple() | nil, boolean(), ExTorch.DType.dtype() ) :: ExTorch.Tensor.t()
@spec sum(ExTorch.Tensor.t(), integer() | tuple() | nil, boolean(), [ {:dtype, ExTorch.DType.dtype()} ]) :: ExTorch.Tensor.t()
Returns the sum of all elements (or along a given dimension) in the input tensor.
Arguments
input(ExTorch.Tensor) - the input tensor.
Optional arguments
dim(integer | tuple | nil) - the dimension or dimensions to reduce. If nil, all dimensions are reduced. Default: nil
keepdim(boolean) - whether the output tensor has dim retained or not. Default: false
dtype(ExTorch.DType or nil) - the desired data type of the returned tensor. If specified, the input tensor is cast to dtype before the operation is performed. This is useful for preventing data type overflows. Default: nil.
Examples
iex> a = ExTorch.rand({3, 3})
#Tensor<
[[0.7281, 0.9280, 0.5829],
[0.4569, 0.4785, 0.1352],
[0.9905, 0.0698, 0.1905]]
[
size: {3, 3},
dtype: :float,
device: :cpu,
requires_grad: false
]>
# Sum all elements in a tensor.
iex> ExTorch.sum(a)
#Tensor<
4.5604
[size: {}, dtype: :float, device: :cpu, requires_grad: false]>
# Sum all elements in the last dimension, keeping dims and casting to double
iex> ExTorch.sum(a, 1, keepdim: true, dtype: :double)
#Tensor<
[[2.2390],
[1.0707],
[1.2507]]
[
size: {3, 1},
dtype: :double,
device: :cpu,
requires_grad: false
]>
@spec unique(ExTorch.Tensor.t()) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | {ExTorch.Tensor.t(), ExTorch.Tensor.t(), ExTorch.Tensor.t()}
See ExTorch.unique/5
Available signature calls:
unique(input)
@spec unique(ExTorch.Tensor.t(), sorted: boolean(), return_inverse: boolean(), return_counts: boolean(), dim: integer() | nil ) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | {ExTorch.Tensor.t(), ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec unique(ExTorch.Tensor.t(), boolean()) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | {ExTorch.Tensor.t(), ExTorch.Tensor.t(), ExTorch.Tensor.t()}
See ExTorch.unique/5
Available signature calls:
unique(input, sorted)
unique(input, kwargs)
@spec unique(ExTorch.Tensor.t(), boolean(), return_inverse: boolean(), return_counts: boolean(), dim: integer() | nil ) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | {ExTorch.Tensor.t(), ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec unique(ExTorch.Tensor.t(), boolean(), boolean()) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | {ExTorch.Tensor.t(), ExTorch.Tensor.t(), ExTorch.Tensor.t()}
See ExTorch.unique/5
Available signature calls:
unique(input, sorted, return_inverse)
unique(input, sorted, kwargs)
@spec unique(ExTorch.Tensor.t(), boolean(), boolean(), return_counts: boolean(), dim: integer() | nil ) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | {ExTorch.Tensor.t(), ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec unique(ExTorch.Tensor.t(), boolean(), boolean(), boolean()) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | {ExTorch.Tensor.t(), ExTorch.Tensor.t(), ExTorch.Tensor.t()}
See ExTorch.unique/5
Available signature calls:
unique(input, sorted, return_inverse, return_counts)
unique(input, sorted, return_inverse, kwargs)
@spec unique(ExTorch.Tensor.t(), boolean(), boolean(), boolean(), integer() | nil) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | {ExTorch.Tensor.t(), ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec unique(ExTorch.Tensor.t(), boolean(), boolean(), boolean(), [ {:dim, integer() | nil} ]) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | {ExTorch.Tensor.t(), ExTorch.Tensor.t(), ExTorch.Tensor.t()}
Returns the unique elements of the input tensor.
Depending on the value of return_inverse and return_counts, this function
can return either a single tensor, a tuple of two tensors, or a tuple of three tensors.
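The three return shapes can be illustrated over a flat list in plain Python (a sketch of the semantics, not ExTorch code):

```python
def unique(values, sorted=True, return_inverse=False, return_counts=False):
    """Unique values of a flat list, optionally with the inverse mapping
    (where each input element landed in the unique list) and counts."""
    uniq = list(dict.fromkeys(values))  # unique values in first-seen order
    if sorted:
        uniq.sort()                     # note: `sorted` mirrors the ExTorch option name
    result = [uniq]
    if return_inverse:
        pos = {v: i for i, v in enumerate(uniq)}
        result.append([pos[v] for v in values])
    if return_counts:
        result.append([values.count(v) for v in uniq])
    return result[0] if len(result) == 1 else tuple(result)

vals = [2, 2, -1, 0, 1, 3, 1, -1]
print(unique(vals))                    # [-1, 0, 1, 2, 3]
u, inv = unique(vals, return_inverse=True)
print(inv)                             # [3, 3, 0, 1, 2, 4, 2, 0]
```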
Arguments
input(ExTorch.Tensor) - the input tensor.
Optional arguments
sorted(boolean) - whether to sort the unique elements in ascending order before returning. Default: true
return_inverse(boolean) - whether to also return the indices for where elements in the original input ended up in the returned unique list. Default: false
return_counts(boolean) - whether to also return the counts for each unique element. Default: false
dim(integer | nil) - the dimension to operate upon. If nil, the unique of the flattened input is returned. Otherwise, each of the tensors indexed by the given dimension is treated as one of the elements to apply the unique operation upon. See examples for more details. Default: nil
Examples
iex> a = ExTorch.randint(-1, 4, {4, 4}, dtype: :int64)
#Tensor<
[[ 2, 2, -1, 0],
[ 1, 3, 1, -1],
[-1, 2, 0, 1],
[ 3, 2, -1, 3]]
[
size: {4, 4},
dtype: :long,
device: :cpu,
requires_grad: false
]>
# Compute a tensor's unique values
iex> ExTorch.unique(a)
#Tensor<
[-1, 0, 1, 2, 3]
[
size: {5},
dtype: :long,
device: :cpu,
requires_grad: false
]>
# Compute unique values and inverse tensor
iex> {unique, inverse} = ExTorch.unique(a, return_inverse: true)
iex> inverse
#Tensor<
[[3, 3, 0, 1],
[2, 4, 2, 0],
[0, 3, 1, 2],
[4, 3, 0, 4]]
[
size: {4, 4},
dtype: :long,
device: :cpu,
requires_grad: false
]>
# Compute unique values and count tensor
iex> {unique, count} = ExTorch.unique(a, return_counts: true)
iex> count
#Tensor<
[4, 2, 3, 4, 3]
[
size: {5},
dtype: :long,
device: :cpu,
requires_grad: false
]>
# Compute unique values, inverse and count tensors
iex> {unique, inverse, count} = ExTorch.unique(a, return_inverse: true, return_counts: true)
# Compute unique values across a dimension
iex> a = ExTorch.tensor([[0, 1, 1], [-1, -1, -1], [0, 1, 1]], dtype: :int64)
iex> ExTorch.unique(a, dim: 0)
#Tensor<
[[-1, -1, -1],
[ 0, 1, 1]]
[
size: {2, 3},
dtype: :long,
device: :cpu,
requires_grad: false
]>
@spec unique_consecutive(ExTorch.Tensor.t()) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | {ExTorch.Tensor.t(), ExTorch.Tensor.t(), ExTorch.Tensor.t()}
See ExTorch.unique_consecutive/4
Available signature calls:
unique_consecutive(input)
@spec unique_consecutive(ExTorch.Tensor.t(), return_inverse: boolean(), return_counts: boolean(), dim: integer() | nil ) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | {ExTorch.Tensor.t(), ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec unique_consecutive(ExTorch.Tensor.t(), boolean()) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | {ExTorch.Tensor.t(), ExTorch.Tensor.t(), ExTorch.Tensor.t()}
See ExTorch.unique_consecutive/4
Available signature calls:
unique_consecutive(input, return_inverse)
unique_consecutive(input, kwargs)
@spec unique_consecutive(ExTorch.Tensor.t(), boolean(), return_counts: boolean(), dim: integer() | nil ) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | {ExTorch.Tensor.t(), ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec unique_consecutive(ExTorch.Tensor.t(), boolean(), boolean()) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | {ExTorch.Tensor.t(), ExTorch.Tensor.t(), ExTorch.Tensor.t()}
See ExTorch.unique_consecutive/4
Available signature calls:
unique_consecutive(input, return_inverse, return_counts)
unique_consecutive(input, return_inverse, kwargs)
@spec unique_consecutive(ExTorch.Tensor.t(), boolean(), boolean(), integer() | nil) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | {ExTorch.Tensor.t(), ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec unique_consecutive(ExTorch.Tensor.t(), boolean(), boolean(), [ {:dim, integer() | nil} ]) :: ExTorch.Tensor.t() | {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | {ExTorch.Tensor.t(), ExTorch.Tensor.t(), ExTorch.Tensor.t()}
Eliminates all but the first element from every consecutive group of equivalent elements.
Depending on the value of return_inverse and return_counts, this function
can return either a single tensor, a tuple of two tensors, or a tuple of three tensors.
This function differs from ExTorch.unique/5 in that it only eliminates consecutive duplicate
values. Its semantics are similar to those of std::unique in C++.
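The run-compression behaviour maps directly onto itertools.groupby, which groups consecutive equal elements (a plain-Python sketch of the semantics, not ExTorch code):

```python
from itertools import groupby

def unique_consecutive(values, return_counts=False):
    """Keep the first element of every run of equal values (cf. std::unique)."""
    groups = [(key, len(list(run))) for key, run in groupby(values)]
    uniq = [key for key, _ in groups]
    if return_counts:
        return uniq, [count for _, count in groups]
    return uniq

vals = [1, 1, 2, 2, 3, 1, 1, 2]  # same values as the tensor in the examples below
print(unique_consecutive(vals))                      # [1, 2, 3, 1, 2]
print(unique_consecutive(vals, return_counts=True))  # ([1, 2, 3, 1, 2], [2, 2, 1, 2, 1])
```

Note that 1 and 2 appear twice in the output: only consecutive duplicates are merged.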
Arguments
input(ExTorch.Tensor) - the input tensor.
Optional arguments
return_inverse(boolean) - whether to also return the indices for where elements in the original input ended up in the returned unique list. Default: false
return_counts(boolean) - whether to also return the counts for each unique element. Default: false
dim(integer | nil) - the dimension to operate upon. If nil, the unique of the flattened input is returned. Otherwise, each of the tensors indexed by the given dimension is treated as one of the elements to apply the unique operation upon. See examples for more details. Default: nil
Examples
iex> a = ExTorch.tensor([1, 1, 2, 2, 3, 1, 1, 2], dtype: :int32)
# Find unique consecutive elements in a tensor
iex> ExTorch.unique_consecutive(a)
#Tensor<
[1, 2, 3, 1, 2]
[
size: {5},
dtype: :int,
device: :cpu,
requires_grad: false
]>
# Compute unique values and inverse tensor
iex> {unique, inverse} = ExTorch.unique_consecutive(a, return_inverse: true)
iex> inverse
#Tensor<
[0, 0, 1, 1, 2, 3, 3, 4]
[
size: {8},
dtype: :long,
device: :cpu,
requires_grad: false
]>
# Compute unique values and count tensor
iex> {unique, count} = ExTorch.unique_consecutive(a, return_counts: true)
iex> count
#Tensor<
[2, 2, 1, 2, 1]
[
size: {5},
dtype: :long,
device: :cpu,
requires_grad: false
]>
# Compute unique values, inverse and count tensors
iex> {unique, inverse, count} = ExTorch.unique_consecutive(a, return_inverse: true, return_counts: true)
# Compute unique consecutive values across a dimension
iex> a = ExTorch.tensor([[-1, -1, -1], [0, 1, 1], [0, 1, 1], [-1, -1, -1]], dtype: :int64)
iex> ExTorch.unique_consecutive(a, dim: 0)
#Tensor<
[[-1, -1, -1],
[ 0, 1, 1],
[-1, -1, -1]]
[
size: {3, 3},
dtype: :long,
device: :cpu,
requires_grad: false
]>
@spec var(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()
See ExTorch.var/5
Available signature calls:
var(input)
@spec var( ExTorch.Tensor.t(), integer() | tuple() | nil ) :: ExTorch.Tensor.t()
@spec var(ExTorch.Tensor.t(), dim: integer() | tuple() | nil, correction: integer(), keepdim: boolean(), out: ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
See ExTorch.var/5
Available signature calls:
var(input, kwargs)
var(input, dim)
@spec var( ExTorch.Tensor.t(), integer() | tuple() | nil, integer() ) :: ExTorch.Tensor.t()
@spec var( ExTorch.Tensor.t(), integer() | tuple() | nil, correction: integer(), keepdim: boolean(), out: ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
See ExTorch.var/5
Available signature calls:
var(input, dim, kwargs)
var(input, dim, correction)
@spec var( ExTorch.Tensor.t(), integer() | tuple() | nil, integer(), boolean() ) :: ExTorch.Tensor.t()
@spec var( ExTorch.Tensor.t(), integer() | tuple() | nil, integer(), keepdim: boolean(), out: ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
See ExTorch.var/5
Available signature calls:
var(input, dim, correction, kwargs)
var(input, dim, correction, keepdim)
@spec var( ExTorch.Tensor.t(), integer() | tuple() | nil, integer(), boolean(), [{:out, ExTorch.Tensor.t() | nil}] ) :: ExTorch.Tensor.t()
@spec var( ExTorch.Tensor.t(), integer() | tuple() | nil, integer(), boolean(), ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Calculates the variance over the dimensions specified by dim.
dim can be a single dimension, list of dimensions, or nil to reduce over all dimensions.
The variance ($\sigma^2$) is calculated as
$$ \sigma^2 = \frac{1}{N - \delta N} \sum\_{i=0}^{N - 1} (x\_i - \bar{x})^2 $$
where $x$ is the sample set of elements, $\bar{x}$ is the sample mean, $N$ is the number of samples
and $\delta N$ is the correction.
If keepdim is true, the output tensors are of the same size as input
except in the dimension dim where they are of size 1.
Otherwise, dim is squeezed (see ExTorch.squeeze), resulting in the output
tensor having 1 fewer dimension than input.
Arguments
input (ExTorch.Tensor) - the input tensor.
Optional arguments
- dim (integer | tuple | nil) - the dimension or dimensions to reduce. If nil, all dimensions are reduced. Default: nil
- correction (integer) - difference between the sample size and sample degrees of freedom. Defaults to Bessel's correction. Default: 1
- keepdim (boolean) - whether the output tensor has dim retained or not. Default: false
- out (ExTorch.Tensor | nil) - the optional output pre-allocated tensor. Default: nil
Examples
iex> a = ExTorch.randn({4, 4})
#Tensor<
[[-0.9319, 0.1259, 0.0744, 0.3516],
[-0.1965, 0.8596, -1.2986, -0.6350],
[-0.0211, 0.2856, -1.3375, -1.4459],
[-0.0489, 0.4821, -0.5326, -2.3099]]
[
size: {4, 4},
dtype: :float,
device: :cpu,
requires_grad: false
]>
# Compute the variance of all tensor elements
iex> ExTorch.var(a)
#Tensor<
0.7327
[size: {}, dtype: :float, device: :cpu, requires_grad: false]>
# Compute the variance of elements in the last dimension, keeping total dimensions
iex> ExTorch.var(a, -1, keepdim: true)
#Tensor<
[[0.3258],
[0.8211],
[0.7917],
[1.4677]]
[
size: {4, 1},
dtype: :float,
device: :cpu,
requires_grad: false
]>
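The variance formula above reduces to a few lines of plain Python (lists stand in for tensors; the helper name is illustrative):

```python
def var(xs, correction=1):
    # Sample variance following the formula above:
    # sum((x_i - mean)^2) / (N - correction); correction=1 is Bessel's
    # correction, the default.
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / (n - correction)
```

Passing correction: 0 gives the population variance (divide by N), while the default correction: 1 divides by N - 1.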
@spec var_mean(ExTorch.Tensor.t()) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
Available signature calls:
var_mean(input)
@spec var_mean( ExTorch.Tensor.t(), integer() | tuple() | nil ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec var_mean(ExTorch.Tensor.t(), dim: integer() | tuple() | nil, correction: integer(), keepdim: boolean(), out: {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
Available signature calls:
var_mean(input, kwargs)
var_mean(input, dim)
@spec var_mean( ExTorch.Tensor.t(), integer() | tuple() | nil, integer() ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec var_mean( ExTorch.Tensor.t(), integer() | tuple() | nil, correction: integer(), keepdim: boolean(), out: {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
Available signature calls:
var_mean(input, dim, kwargs)
var_mean(input, dim, correction)
@spec var_mean( ExTorch.Tensor.t(), integer() | tuple() | nil, integer(), boolean() ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec var_mean( ExTorch.Tensor.t(), integer() | tuple() | nil, integer(), keepdim: boolean(), out: {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
Available signature calls:
var_mean(input, dim, correction, kwargs)
var_mean(input, dim, correction, keepdim)
@spec var_mean( ExTorch.Tensor.t(), integer() | tuple() | nil, integer(), boolean(), [{:out, {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil}] ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec var_mean( ExTorch.Tensor.t(), integer() | tuple() | nil, integer(), boolean(), {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
Calculates the variance and mean over the dimensions specified by dim.
dim can be a single dimension, list of dimensions, or nil to reduce over all dimensions.
It returns a tuple {var, mean} containing the variance and mean, respectively.
The variance ($\sigma^2$) is calculated as
$$ \sigma^2 = \frac{1}{N - \delta N} \sum\_{i=0}^{N - 1} (x\_i - \bar{x})^2 $$
where $x$ is the sample set of elements, $\bar{x}$ is the sample mean, $N$ is the number of samples
and $\delta N$ is the correction.
If keepdim is true, the output tensors are of the same size as input
except in the dimension dim where they are of size 1.
Otherwise, dim is squeezed (see ExTorch.squeeze), resulting in the output
tensors having 1 fewer dimension than input.
Arguments
input (ExTorch.Tensor) - the input tensor.
Optional arguments
- dim (integer | tuple | nil) - the dimension or dimensions to reduce. If nil, all dimensions are reduced. Default: nil
- correction (integer) - difference between the sample size and sample degrees of freedom. Defaults to Bessel's correction. Default: 1
- keepdim (boolean) - whether the output tensors have dim retained or not. Default: false
- out ({ExTorch.Tensor, ExTorch.Tensor} | nil) - a tuple containing the optional output pre-allocated tensors. Default: nil
Examples
iex> a = ExTorch.randn({4, 4})
#Tensor<
[[-0.9319, 0.1259, 0.0744, 0.3516],
[-0.1965, 0.8596, -1.2986, -0.6350],
[-0.0211, 0.2856, -1.3375, -1.4459],
[-0.0489, 0.4821, -0.5326, -2.3099]]
[
size: {4, 4},
dtype: :float,
device: :cpu,
requires_grad: false
]>
# Compute variance and mean of all tensor elements
iex> {var, mean} = ExTorch.var_mean(a)
iex> var
#Tensor<
0.7327
[size: {}, dtype: :float, device: :cpu, requires_grad: false]>
iex> mean
#Tensor<
-0.4112
[size: {}, dtype: :float, device: :cpu, requires_grad: false]>
# Compute variance and mean of all tensor elements in the last dimension
iex> {var, mean} = ExTorch.var_mean(a, -1, keepdim: true)
iex> var
#Tensor<
[[0.3258],
[0.8211],
[0.7917],
[1.4677]]
[
size: {4, 1},
dtype: :float,
device: :cpu,
requires_grad: false
]>
iex> mean
#Tensor<
[[-0.0950],
[-0.3176],
[-0.6297],
[-0.6023]]
[
size: {4, 1},
dtype: :float,
device: :cpu,
requires_grad: false
]>
Comparison operations
@spec allclose(ExTorch.Tensor.t(), ExTorch.Tensor.t()) :: boolean()
Available signature calls:
allclose(input, other)
@spec allclose(ExTorch.Tensor.t(), ExTorch.Tensor.t(), rtol: float(), atol: float(), equal_nan: boolean() ) :: boolean()
@spec allclose(ExTorch.Tensor.t(), ExTorch.Tensor.t(), float()) :: boolean()
Available signature calls:
allclose(input, other, rtol)
allclose(input, other, kwargs)
@spec allclose(ExTorch.Tensor.t(), ExTorch.Tensor.t(), float(), float()) :: boolean()
@spec allclose(ExTorch.Tensor.t(), ExTorch.Tensor.t(), float(), atol: float(), equal_nan: boolean() ) :: boolean()
Available signature calls:
allclose(input, other, rtol, kwargs)
allclose(input, other, rtol, atol)
@spec allclose(ExTorch.Tensor.t(), ExTorch.Tensor.t(), float(), float(), boolean()) :: boolean()
@spec allclose(ExTorch.Tensor.t(), ExTorch.Tensor.t(), float(), float(), [ {:equal_nan, boolean()} ]) :: boolean()
This function checks if input and other satisfy the condition:
$$
|\text{input} - \text{other}| \leq \texttt{atol} + \texttt{rtol} \times |\text{other}|
$$
elementwise, for all elements of input and other.
Arguments
- input - First tensor to compare (ExTorch.Tensor)
- other - Second tensor to compare (ExTorch.Tensor)
Optional arguments
- rtol - Relative tolerance (float). Default: 1.0e-5
- atol - Absolute tolerance (float). Default: 1.0e-8
- equal_nan - If true, then two NaN values will be considered equal. Default: false
Examples
iex> ExTorch.allclose(ExTorch.tensor([10000.0, 1.0e-07]), ExTorch.tensor([10000.1, 1.0e-08]))
false
iex> ExTorch.allclose(ExTorch.tensor([10000.0, 1.0e-08]), ExTorch.tensor([10000.1, 1.0e-09]))
true
iex> ExTorch.allclose(ExTorch.tensor([1.0, :nan]), ExTorch.tensor([1.0, :nan]))
false
iex> ExTorch.allclose(ExTorch.tensor([1.0, :nan]), ExTorch.tensor([1.0, :nan]), equal_nan: true)
true
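The closeness criterion can be checked directly in plain Python (lists stand in for tensors; the helper name is illustrative):

```python
import math

def allclose(xs, ys, rtol=1.0e-5, atol=1.0e-8, equal_nan=False):
    # Element-wise |x - y| <= atol + rtol * |y|, reduced with all().
    def close(x, y):
        if math.isnan(x) or math.isnan(y):
            # NaN entries only compare close when equal_nan is set.
            return equal_nan and math.isnan(x) and math.isnan(y)
        return abs(x - y) <= atol + rtol * abs(y)
    return all(close(x, y) for x, y in zip(xs, ys))
```

Note that the relative tolerance is scaled by |other|, not |input|, so the comparison is not symmetric in its two arguments.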
@spec argsort(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()
Available signature calls:
argsort(input)
@spec argsort(ExTorch.Tensor.t(), integer()) :: ExTorch.Tensor.t()
@spec argsort(ExTorch.Tensor.t(), dim: integer(), descending: boolean(), stable: boolean() ) :: ExTorch.Tensor.t()
Available signature calls:
argsort(input, kwargs)
argsort(input, dim)
@spec argsort(ExTorch.Tensor.t(), integer(), boolean()) :: ExTorch.Tensor.t()
@spec argsort(ExTorch.Tensor.t(), integer(), descending: boolean(), stable: boolean()) :: ExTorch.Tensor.t()
Available signature calls:
argsort(input, dim, kwargs)
argsort(input, dim, descending)
@spec argsort(ExTorch.Tensor.t(), integer(), boolean(), [{:stable, boolean()}]) :: ExTorch.Tensor.t()
@spec argsort(ExTorch.Tensor.t(), integer(), boolean(), boolean()) :: ExTorch.Tensor.t()
Returns the indices that sort a tensor along a given dimension in ascending order by value.
This is the second value returned by ExTorch.sort/5. See its documentation for the exact
semantics of this method. If stable is true then the sorting routine becomes stable,
preserving the order of equivalent elements. If false, the relative order of values
which compare equal is not guaranteed. The stable sort is slower.
Arguments
input - Input tensor (ExTorch.Tensor)
Optional arguments
- dim - the dimension to sort along (integer). Default: -1
- descending - controls the sorting order, ascending or descending (boolean). Default: false
- stable - controls the relative order of equivalent elements (boolean). Default: false
Examples
iex> a = ExTorch.randn({4, 4})
#Tensor<
[[-1.2732, 0.8419, -0.0140, 0.4717],
[-1.1627, -0.2813, -0.5655, -0.1348],
[ 1.5269, -0.2712, 0.5134, -1.5580],
[ 0.6169, -1.0332, 0.4478, -0.9864]]
[
size: {4, 4},
dtype: :float,
device: :cpu,
requires_grad: false
]>
# Sort along a specific dimension
iex> ExTorch.argsort(a, dim: 1)
#Tensor<
[[0, 2, 3, 1],
[0, 2, 1, 3],
[3, 1, 2, 0],
[1, 3, 2, 0]]
[
size: {4, 4},
dtype: :long,
device: :cpu,
requires_grad: false
]>
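An argsort over a single row can be sketched in plain Python (lists stand in for tensors; the helper name is illustrative). Python's sorted() is stable, so this corresponds to stable: true:

```python
def argsort(values, descending=False):
    # Indices that would sort `values`; sorted() is stable, so equal
    # elements keep their original relative order.
    return sorted(range(len(values)), key=lambda i: values[i], reverse=descending)
```

Indexing values with the returned indices yields the sorted sequence, which is exactly the relationship between ExTorch.argsort/4 and ExTorch.sort/5.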
@spec eq( ExTorch.Tensor.t(), ExTorch.Tensor.t() | ExTorch.Scalar.scalar_or_list() ) :: ExTorch.Tensor.t()
See ExTorch.eq/3
Available signature calls:
eq(input, other)
@spec eq( ExTorch.Tensor.t(), ExTorch.Tensor.t() | ExTorch.Scalar.scalar_or_list(), [{:out, ExTorch.Tensor.t() | nil}] ) :: ExTorch.Tensor.t()
@spec eq( ExTorch.Tensor.t(), ExTorch.Tensor.t() | ExTorch.Scalar.scalar_or_list(), ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Computes element-wise equality.
The second argument can be a number or a tensor whose shape is broadcastable with the first argument.
It returns a boolean tensor of the same shape as input, where a true entry
indicates that the corresponding values of input and other are equal, and false otherwise.
Arguments
- input - the tensor to compare (ExTorch.Tensor)
- other - the tensor or value to compare (ExTorch.Tensor or value)
Optional arguments
- out - an optional pre-allocated tensor used to store the comparison result (ExTorch.Tensor)
Examples
# Compare against a scalar value.
iex> a = ExTorch.tensor([[1, 2], [3, 4]])
iex> ExTorch.eq(a, 1)
#Tensor<
[[ true, false],
[false, false]]
[
size: {2, 2},
dtype: :bool,
device: :cpu,
requires_grad: false
]>
# Compare against a broadcastable value.
iex> ExTorch.eq(a, [1, 2])
#Tensor<
[[ true, true],
[false, false]]
[
size: {2, 2},
dtype: :bool,
device: :cpu,
requires_grad: false
]>
# Compare against another tensor.
iex> ExTorch.eq(a, ExTorch.tensor([[1, 1], [4, 4]]))
#Tensor<
[[ true, false],
[false, true]]
[
size: {2, 2},
dtype: :bool,
device: :cpu,
requires_grad: false
]>
@spec equal(ExTorch.Tensor.t(), ExTorch.Tensor.t()) :: boolean()
Strict element-wise equality for two tensors.
This function returns true if both inputs have the same size and elements,
and false otherwise.
Arguments
- input - tensor to compare (ExTorch.Tensor)
- other - tensor to compare (ExTorch.Tensor)
Examples
iex> ExTorch.equal(ExTorch.tensor([1, 2]), ExTorch.tensor([1, 2]))
true
iex> ExTorch.equal(ExTorch.tensor([1, 2]), ExTorch.tensor([1]))
false
@spec fmax(ExTorch.Tensor.t(), ExTorch.Tensor.t()) :: ExTorch.Tensor.t()
See ExTorch.fmax/3
Available signature calls:
fmax(input, other)
@spec fmax(ExTorch.Tensor.t(), ExTorch.Tensor.t(), [{:out, ExTorch.Tensor.t() | nil}]) :: ExTorch.Tensor.t()
@spec fmax(ExTorch.Tensor.t(), ExTorch.Tensor.t(), ExTorch.Tensor.t() | nil) :: ExTorch.Tensor.t()
Computes the element-wise maximum of input and other.
This is like ExTorch.maximum/3 except it handles NaNs differently:
if exactly one of the two elements being compared is a NaN then the
non-NaN element is taken as the maximum. Only if both elements are NaN is NaN propagated.
This function is a wrapper around C++’s std::fmax and is similar to NumPy’s fmax function.
Supports broadcasting to a common shape, type promotion, and integer and floating-point inputs.
Arguments
- input - the tensor to compare (ExTorch.Tensor)
- other - the tensor or value to compare (ExTorch.Tensor or value)
Optional arguments
- out - an optional pre-allocated tensor used to store the comparison result (ExTorch.Tensor)
Examples
iex> a = ExTorch.tensor([9.7, :nan, 3.1, :nan])
iex> b = ExTorch.tensor([-2.2, 0.5, :nan, :nan])
iex> ExTorch.fmax(a, b)
#Tensor<
[9.7000, 0.5000, 3.1000, nan]
[
size: {4},
dtype: :float,
device: :cpu,
requires_grad: false
]>
@spec fmin(ExTorch.Tensor.t(), ExTorch.Tensor.t()) :: ExTorch.Tensor.t()
See ExTorch.fmin/3
Available signature calls:
fmin(input, other)
@spec fmin(ExTorch.Tensor.t(), ExTorch.Tensor.t(), [{:out, ExTorch.Tensor.t() | nil}]) :: ExTorch.Tensor.t()
@spec fmin(ExTorch.Tensor.t(), ExTorch.Tensor.t(), ExTorch.Tensor.t() | nil) :: ExTorch.Tensor.t()
Computes the element-wise minimum of input and other.
This is like ExTorch.minimum/3 except it handles NaNs differently:
if exactly one of the two elements being compared is a NaN then the
non-NaN element is taken as the minimum. Only if both elements are NaN is NaN propagated.
This function is a wrapper around C++’s std::fmin and is similar to NumPy’s fmin function.
Supports broadcasting to a common shape, type promotion, and integer and floating-point inputs.
Arguments
- input - the tensor to compare (ExTorch.Tensor)
- other - the tensor or value to compare (ExTorch.Tensor or value)
Optional arguments
- out - an optional pre-allocated tensor used to store the comparison result (ExTorch.Tensor)
Examples
iex> a = ExTorch.tensor([9.7, :nan, 3.1, :nan])
iex> b = ExTorch.tensor([-2.2, 0.5, :nan, :nan])
iex> ExTorch.fmin(a, b)
#Tensor<
[-2.2000, 0.5000, 3.1000, nan]
[
size: {4},
dtype: :float,
device: :cpu,
requires_grad: false
]>
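The NaN handling shared by fmax and fmin reduces to a simple rule, shown here for a single pair of scalars in plain Python (the helper name is illustrative; fmax is symmetric with max in place of min):

```python
import math

def fmin(x, y):
    # NaN-aware minimum: NaN propagates only when both inputs are NaN,
    # matching the std::fmin behaviour described above.
    if math.isnan(x):
        return y
    if math.isnan(y):
        return x
    return min(x, y)
```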
@spec ge( ExTorch.Tensor.t(), ExTorch.Tensor.t() | ExTorch.Scalar.scalar_or_list() ) :: ExTorch.Tensor.t()
See ExTorch.ge/3
Available signature calls:
ge(input, other)
@spec ge( ExTorch.Tensor.t(), ExTorch.Tensor.t() | ExTorch.Scalar.scalar_or_list(), [{:out, ExTorch.Tensor.t() | nil}] ) :: ExTorch.Tensor.t()
@spec ge( ExTorch.Tensor.t(), ExTorch.Tensor.t() | ExTorch.Scalar.scalar_or_list(), ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Computes input >= other element-wise.
The second argument can be a number or a tensor whose shape is broadcastable with the first argument.
It returns a boolean tensor of the same shape as input, where a true entry
indicates that the corresponding value of input is greater than or equal to that of other, and false otherwise.
Arguments
- input - the tensor to compare (ExTorch.Tensor)
- other - the tensor or value to compare (ExTorch.Tensor or value)
Optional arguments
- out - an optional pre-allocated tensor used to store the comparison result (ExTorch.Tensor)
Examples
# Compare against a scalar value.
iex> a = ExTorch.tensor([[1, 2], [3, 4]])
iex> ExTorch.ge(a, 2)
#Tensor<
[[false, true],
[ true, true]]
[
size: {2, 2},
dtype: :bool,
device: :cpu,
requires_grad: false
]>
# Compare against a broadcastable value.
iex> ExTorch.ge(a, [2, 5])
#Tensor<
[[false, false],
[ true, false]]
[
size: {2, 2},
dtype: :bool,
device: :cpu,
requires_grad: false
]>
# Compare against another tensor.
iex> ExTorch.ge(a, ExTorch.tensor([[3, 1], [2, 5]]))
#Tensor<
[[false, true],
[ true, false]]
[
size: {2, 2},
dtype: :bool,
device: :cpu,
requires_grad: false
]>
@spec greater( ExTorch.Tensor.t(), ExTorch.Tensor.t() | ExTorch.Scalar.scalar_or_list() ) :: ExTorch.Tensor.t()
Alias to gt/2
@spec greater( ExTorch.Tensor.t(), ExTorch.Tensor.t() | ExTorch.Scalar.scalar_or_list(), [{:out, ExTorch.Tensor.t() | nil}] ) :: ExTorch.Tensor.t()
@spec greater( ExTorch.Tensor.t(), ExTorch.Tensor.t() | ExTorch.Scalar.scalar_or_list(), ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Alias to gt/3
@spec greater_equal( ExTorch.Tensor.t(), ExTorch.Tensor.t() | ExTorch.Scalar.scalar_or_list() ) :: ExTorch.Tensor.t()
Alias to ge/2
@spec greater_equal( ExTorch.Tensor.t(), ExTorch.Tensor.t() | ExTorch.Scalar.scalar_or_list(), [{:out, ExTorch.Tensor.t() | nil}] ) :: ExTorch.Tensor.t()
@spec greater_equal( ExTorch.Tensor.t(), ExTorch.Tensor.t() | ExTorch.Scalar.scalar_or_list(), ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Alias to ge/3
@spec gt( ExTorch.Tensor.t(), ExTorch.Tensor.t() | ExTorch.Scalar.scalar_or_list() ) :: ExTorch.Tensor.t()
See ExTorch.gt/3
Available signature calls:
gt(input, other)
@spec gt( ExTorch.Tensor.t(), ExTorch.Tensor.t() | ExTorch.Scalar.scalar_or_list(), [{:out, ExTorch.Tensor.t() | nil}] ) :: ExTorch.Tensor.t()
@spec gt( ExTorch.Tensor.t(), ExTorch.Tensor.t() | ExTorch.Scalar.scalar_or_list(), ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Computes input > other element-wise.
The second argument can be a number or a tensor whose shape is broadcastable with the first argument.
It returns a boolean tensor of the same shape as input, where a true entry
indicates that the corresponding value of input is strictly greater than that of other, and false otherwise.
Arguments
- input - the tensor to compare (ExTorch.Tensor)
- other - the tensor or value to compare (ExTorch.Tensor or value)
Optional arguments
- out - an optional pre-allocated tensor used to store the comparison result (ExTorch.Tensor)
Examples
# Compare against a scalar value.
iex> a = ExTorch.tensor([[1, 2], [3, 4]])
iex> ExTorch.gt(a, 2)
#Tensor<
[[false, false],
[ true, true]]
[
size: {2, 2},
dtype: :bool,
device: :cpu,
requires_grad: false
]>
# Compare against a broadcastable value.
iex> ExTorch.gt(a, [0, 5])
#Tensor<
[[ true, false],
[ true, false]]
[
size: {2, 2},
dtype: :bool,
device: :cpu,
requires_grad: false
]>
# Compare against another tensor.
iex> ExTorch.gt(a, ExTorch.tensor([[0, 1], [2, 5]]))
#Tensor<
[[ true, true],
[ true, false]]
[
size: {2, 2},
dtype: :bool,
device: :cpu,
requires_grad: false
]>
@spec isclose(ExTorch.Tensor.t(), ExTorch.Tensor.t()) :: ExTorch.Tensor.t()
Available signature calls:
isclose(input, other)
@spec isclose(ExTorch.Tensor.t(), ExTorch.Tensor.t(), rtol: float(), atol: float(), equal_nan: boolean() ) :: ExTorch.Tensor.t()
@spec isclose(ExTorch.Tensor.t(), ExTorch.Tensor.t(), float()) :: ExTorch.Tensor.t()
Available signature calls:
isclose(input, other, rtol)
isclose(input, other, kwargs)
@spec isclose(ExTorch.Tensor.t(), ExTorch.Tensor.t(), float(), float()) :: ExTorch.Tensor.t()
@spec isclose(ExTorch.Tensor.t(), ExTorch.Tensor.t(), float(), atol: float(), equal_nan: boolean() ) :: ExTorch.Tensor.t()
Available signature calls:
isclose(input, other, rtol, kwargs)
isclose(input, other, rtol, atol)
@spec isclose(ExTorch.Tensor.t(), ExTorch.Tensor.t(), float(), float(), boolean()) :: ExTorch.Tensor.t()
@spec isclose(ExTorch.Tensor.t(), ExTorch.Tensor.t(), float(), float(), [ {:equal_nan, boolean()} ]) :: ExTorch.Tensor.t()
Returns a new tensor with boolean elements representing if each element of input is
“close” to the corresponding element of other. Closeness is defined as:
$$ |\text{input} - \text{other}| \leq \texttt{atol} + \texttt{rtol} \times |\text{other}| $$
Where input and/or other are nonfinite they are close if and only if they are
equal, with :nan values considered equal to each other when equal_nan is true.
Arguments
- input - First tensor to compare (ExTorch.Tensor)
- other - Second tensor to compare (ExTorch.Tensor)
Optional arguments
- rtol - Relative tolerance (float). Default: 1.0e-5
- atol - Absolute tolerance (float). Default: 1.0e-8
- equal_nan - If true, then two NaN values will be considered equal. Default: false
Examples
iex> ExTorch.isclose(ExTorch.tensor([10000.0, 1.0e-07]), ExTorch.tensor([10000.1, 1.0e-08]))
#Tensor<
[ true, false]
[
size: {2},
dtype: :bool,
device: :cpu,
requires_grad: false
]>
iex> ExTorch.isclose(ExTorch.tensor([10000.0, 1.0e-08]), ExTorch.tensor([10000.1, 1.0e-09]))
#Tensor<
[true, true]
[
size: {2},
dtype: :bool,
device: :cpu,
requires_grad: false
]>
iex> ExTorch.isclose(ExTorch.tensor([1.0, :nan]), ExTorch.tensor([1.0, :nan]))
#Tensor<
[ true, false]
[
size: {2},
dtype: :bool,
device: :cpu,
requires_grad: false
]>
iex> ExTorch.isclose(ExTorch.tensor([1.0, :nan]), ExTorch.tensor([1.0, :nan]), equal_nan: true)
#Tensor<
[true, true]
[
size: {2},
dtype: :bool,
device: :cpu,
requires_grad: false
]>
@spec isfinite(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()
Returns a new tensor with boolean elements representing if each element is finite or not.
Real values are finite when they are not NaN (:nan), negative infinity (:ninf), or infinity (:inf).
ExTorch.Complex values are finite when both their real and imaginary parts are finite.
Arguments
input - the input tensor (ExTorch.Tensor)
Examples
iex> input = ExTorch.tensor([1, :inf, 2, :ninf, :nan])
iex> ExTorch.isfinite(input)
#Tensor<
[ true, false, true, false, false]
[
size: {5},
dtype: :bool,
device: :cpu,
requires_grad: false
]>
@spec isin( ExTorch.Tensor.t() | ExTorch.Scalar.scalar_or_list(), ExTorch.Tensor.t() | ExTorch.Scalar.scalar_or_list() ) :: ExTorch.Tensor.t()
See ExTorch.isin/4
Available signature calls:
isin(elements, test_elements)
@spec isin( ExTorch.Tensor.t() | ExTorch.Scalar.scalar_or_list(), ExTorch.Tensor.t() | ExTorch.Scalar.scalar_or_list(), boolean() ) :: ExTorch.Tensor.t()
@spec isin( ExTorch.Tensor.t() | ExTorch.Scalar.scalar_or_list(), ExTorch.Tensor.t() | ExTorch.Scalar.scalar_or_list(), assume_unique: boolean(), invert: boolean() ) :: ExTorch.Tensor.t()
See ExTorch.isin/4
Available signature calls:
isin(elements, test_elements, kwargs)
isin(elements, test_elements, assume_unique)
@spec isin( ExTorch.Tensor.t() | ExTorch.Scalar.scalar_or_list(), ExTorch.Tensor.t() | ExTorch.Scalar.scalar_or_list(), boolean(), boolean() ) :: ExTorch.Tensor.t()
@spec isin( ExTorch.Tensor.t() | ExTorch.Scalar.scalar_or_list(), ExTorch.Tensor.t() | ExTorch.Scalar.scalar_or_list(), boolean(), [{:invert, boolean()}] ) :: ExTorch.Tensor.t()
Tests if each element of elements is in test_elements.
Returns a boolean tensor of the same shape as elements that is true for
elements in test_elements and false otherwise.
Arguments
- elements - input elements (ExTorch.Tensor | ExTorch.Scalar)
- test_elements - values to compare against for each input element (ExTorch.Tensor | ExTorch.Scalar)
Optional arguments
- assume_unique - If true, assumes both elements and test_elements contain unique elements, which can speed up the calculation. Default: false (boolean)
- invert - If true, inverts the boolean return tensor, resulting in true values for elements not in test_elements. Default: false (boolean)
Examples
# Check if any of the values is 2
iex> x = ExTorch.tensor([[1, 2], [3, 4]])
iex> ExTorch.isin(x, 2)
#Tensor<
[[false, true],
[false, false]]
[
size: {2, 2},
dtype: :bool,
device: :cpu,
requires_grad: false
]>
# Check if any of the values in x is in [1, 3, 5]
iex> ExTorch.isin(x, [1, 3, 5])
#Tensor<
[[ true, false],
[ true, false]]
[
size: {2, 2},
dtype: :bool,
device: :cpu,
requires_grad: false
]>
# Invert result
iex> ExTorch.isin(x, ExTorch.tensor([[1, 3], [5, 4]]), invert: true)
#Tensor<
[[false, true],
[false, false]]
[
size: {2, 2},
dtype: :bool,
device: :cpu,
requires_grad: false
]>
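The membership test, including the invert flag, can be sketched in plain Python (lists stand in for tensors; the helper name is illustrative):

```python
def isin(elements, test_elements, invert=False):
    # True where an element occurs anywhere in test_elements; a set gives
    # O(1) membership tests regardless of duplicates in test_elements.
    members = set(test_elements)
    return [(e in members) != invert for e in elements]
```

The `!= invert` trick flips each boolean only when invert is true, mirroring the inverted tensor in the last example above.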
@spec isinf(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()
Returns a new tensor with boolean elements representing if each element is infinity (both positive and negative).
ExTorch.Complex values are infinite when either their real or imaginary
part is infinite.
Arguments
input - the input tensor (ExTorch.Tensor)
Examples
iex> input = ExTorch.tensor([1, :inf, 2, :ninf, :nan])
iex> ExTorch.isinf(input)
#Tensor<
[ false, true, false, true, false]
[
size: {5},
dtype: :bool,
device: :cpu,
requires_grad: false
]>
@spec isnan(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()
Returns a new tensor with boolean elements representing if each element is
:nan.
ExTorch.Complex values are NaN when either their real or imaginary
part is :nan.
Arguments
input - the input tensor (ExTorch.Tensor)
Examples
iex> input = ExTorch.tensor([1, :inf, 2, :ninf, :nan])
iex> ExTorch.isnan(input)
#Tensor<
[ false, false, false, false, true]
[
size: {5},
dtype: :bool,
device: :cpu,
requires_grad: false
]>
@spec isneginf(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()
Returns a new tensor with boolean elements representing if each element is negative infinity.
ExTorch.Complex values are negative infinity when either their real or imaginary
part is negative infinity.
Arguments
input - the input tensor (ExTorch.Tensor)
Examples
iex> input = ExTorch.tensor([1, :inf, 2, :ninf, :nan])
iex> ExTorch.isneginf(input)
#Tensor<
[ false, false, false, true, false]
[
size: {5},
dtype: :bool,
device: :cpu,
requires_grad: false
]>
@spec isposinf(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()
Returns a new tensor with boolean elements representing if each element is positive infinity.
ExTorch.Complex values are positive infinity when either their real or imaginary
part is positive infinity.
Arguments
input - the input tensor (ExTorch.Tensor)
Examples
iex> input = ExTorch.tensor([1, :inf, 2, :ninf, :nan])
iex> ExTorch.isposinf(input)
#Tensor<
[ false, true, false, false, false]
[
size: {5},
dtype: :bool,
device: :cpu,
requires_grad: false
]>
@spec isreal(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()
Returns a new tensor with boolean elements representing if each element is real valued.
All real-valued types are considered real. ExTorch.Complex values are real when their imaginary
parts are zero.
Arguments
input - the input tensor (ExTorch.Tensor)
Examples
iex> input = ExTorch.tensor([1, ExTorch.Complex.complex(-2, 0), ExTorch.Complex.complex(0, 1)])
iex> ExTorch.isreal(input)
#Tensor<
[ true, true, false]
[
size: {3},
dtype: :bool,
device: :cpu,
requires_grad: false
]>
@spec kthvalue( ExTorch.Tensor.t(), integer() ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
Available signature calls:
kthvalue(input, k)
@spec kthvalue( ExTorch.Tensor.t(), integer(), integer() ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec kthvalue( ExTorch.Tensor.t(), integer(), dim: integer(), keepdim: boolean(), out: {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
Available signature calls:
kthvalue(input, k, kwargs)
kthvalue(input, k, dim)
@spec kthvalue( ExTorch.Tensor.t(), integer(), integer(), boolean() ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec kthvalue( ExTorch.Tensor.t(), integer(), integer(), keepdim: boolean(), out: {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
Available signature calls:
kthvalue(input, k, dim, kwargs)
kthvalue(input, k, dim, keepdim)
@spec kthvalue( ExTorch.Tensor.t(), integer(), integer(), boolean(), [{:out, {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil}] ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec kthvalue( ExTorch.Tensor.t(), integer(), integer(), boolean(), {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
Returns a tuple {values, indices} where values is the kth smallest element of
each row of the input tensor in the given dimension dim. And indices is the
index location of each element found.
- If dim is not given, the last dimension of the input is chosen.
- If keepdim is true, both the values and indices tensors are the same size as input, except in the dimension dim where they are of size 1. Otherwise, dim is squeezed (see ExTorch.squeeze), resulting in both the values and indices tensors having 1 fewer dimension than the input tensor.
Arguments
- input - the input tensor (ExTorch.Tensor)
- k - k for the kth smallest value (integer)
Optional arguments
- dim - the dimension to find the kth smallest value along. Default: -1 (integer)
- keepdim - whether the output tensors have dim retained or not. Default: false (boolean)
- out - the output tuple {values, indices} that can optionally be given as output buffers ({ExTorch.Tensor, ExTorch.Tensor}). Default: nil
Notes
When input is a CUDA tensor and there are multiple valid kth values, this function
may nondeterministically return indices for any of them.
Examples
# Retrieve the fourth smallest value.
iex> x = ExTorch.arange(6)
#Tensor<
[0.0000, 1.0000, 2.0000, 3.0000, 4.0000, 5.0000]
[
size: {6},
dtype: :float,
device: :cpu,
requires_grad: false
]>
iex> {values, indices} = ExTorch.kthvalue(x, 4)
iex> values
#Tensor<
3.
[size: {}, dtype: :float, device: :cpu, requires_grad: false]>
iex> indices
#Tensor<
3
[size: {}, dtype: :long, device: :cpu, requires_grad: false]>
# Retrieve the second smallest value in the first dimension
iex> x = ExTorch.rand({3, 4})
#Tensor<
[[0.7375, 0.2798, 0.7146, 0.0654],
[0.0163, 0.8829, 0.8946, 0.3852],
[0.2225, 0.3258, 0.6905, 0.1512]]
[
size: {3, 4},
dtype: :float,
device: :cpu,
requires_grad: false
]>
iex> {values, indices} = ExTorch.kthvalue(x, 2, 0, keepdim: true)
iex> values
#Tensor<
[[0.2225, 0.3258, 0.7146, 0.1512]]
[
size: {1, 4},
dtype: :float,
device: :cpu,
requires_grad: false
]>
iex> indices
#Tensor<
[[2, 2, 0, 2]]
[
size: {1, 4},
dtype: :long,
device: :cpu,
requires_grad: false
]>
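For a single row, the {values, indices} pair can be sketched in plain Python (a list stands in for the tensor; the helper name is illustrative, and k is 1-indexed as in ExTorch):

```python
def kthvalue(values, k):
    # kth smallest element (k is 1-indexed) and the index of one
    # occurrence of it in the original list.
    kth = sorted(values)[k - 1]
    return kth, values.index(kth)
```

As with the CUDA note above, when the kth value occurs more than once any of its indices is a valid answer; this sketch returns the first occurrence.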
@spec le( ExTorch.Tensor.t(), ExTorch.Tensor.t() | ExTorch.Scalar.scalar_or_list() ) :: ExTorch.Tensor.t()
See ExTorch.le/3
Available signature calls:
le(input, other)
@spec le( ExTorch.Tensor.t(), ExTorch.Tensor.t() | ExTorch.Scalar.scalar_or_list(), [{:out, ExTorch.Tensor.t() | nil}] ) :: ExTorch.Tensor.t()
@spec le( ExTorch.Tensor.t(), ExTorch.Tensor.t() | ExTorch.Scalar.scalar_or_list(), ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Computes input <= other element-wise.
The second argument can be a number or a tensor whose shape is broadcastable with the first argument.
It returns a boolean tensor of the same shape as input, where a true entry
indicates that the corresponding value of input is less than or equal to that of other, and false otherwise.
Arguments
- input - the tensor to compare (ExTorch.Tensor)
- other - the tensor or value to compare (ExTorch.Tensor or value)
Optional arguments
- out - an optional pre-allocated tensor used to store the comparison result (ExTorch.Tensor)
Examples
# Compare against a scalar value.
iex> a = ExTorch.tensor([[1, 2], [3, 4]])
iex> ExTorch.le(a, 2)
#Tensor<
[[ true, true],
[false, false]]
[
size: {2, 2},
dtype: :bool,
device: :cpu,
requires_grad: false
]>
# Compare against a broadcastable value.
iex> ExTorch.le(a, [2, 4])
#Tensor<
[[ true, true],
[false, true]]
[
size: {2, 2},
dtype: :bool,
device: :cpu,
requires_grad: false
]>
# Compare against another tensor.
iex> ExTorch.le(a, ExTorch.tensor([[3, 1], [2, 5]]))
#Tensor<
[[ true, false],
[false, true]]
[
size: {2, 2},
dtype: :bool,
device: :cpu,
requires_grad: false
]>
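The broadcasting behavior in the examples above can be sketched in plain Python (a conceptual illustration only, not ExTorch code — `le_broadcast_rows` is a hypothetical helper):

```python
def le_broadcast_rows(matrix, row):
    # Compare each matrix row element-wise against `row`,
    # mimicking how a 1-D `other` broadcasts over a 2-D `input`.
    return [[x <= y for x, y in zip(r, row)] for r in matrix]

a = [[1, 2], [3, 4]]
print(le_broadcast_rows(a, [2, 4]))  # [[True, True], [False, True]]
```

The 1-D argument is reused for every row of the 2-D input, which is exactly what broadcasting does for `ExTorch.le(a, [2, 4])` above.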
@spec less( ExTorch.Tensor.t(), ExTorch.Tensor.t() | ExTorch.Scalar.scalar_or_list() ) :: ExTorch.Tensor.t()
Alias to lt/2
@spec less( ExTorch.Tensor.t(), ExTorch.Tensor.t() | ExTorch.Scalar.scalar_or_list(), [{:out, ExTorch.Tensor.t() | nil}] ) :: ExTorch.Tensor.t()
@spec less( ExTorch.Tensor.t(), ExTorch.Tensor.t() | ExTorch.Scalar.scalar_or_list(), ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Alias to lt/3
@spec less_equal( ExTorch.Tensor.t(), ExTorch.Tensor.t() | ExTorch.Scalar.scalar_or_list() ) :: ExTorch.Tensor.t()
Alias to le/2
@spec less_equal( ExTorch.Tensor.t(), ExTorch.Tensor.t() | ExTorch.Scalar.scalar_or_list(), [{:out, ExTorch.Tensor.t() | nil}] ) :: ExTorch.Tensor.t()
@spec less_equal( ExTorch.Tensor.t(), ExTorch.Tensor.t() | ExTorch.Scalar.scalar_or_list(), ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Alias to le/3
@spec lt( ExTorch.Tensor.t(), ExTorch.Tensor.t() | ExTorch.Scalar.scalar_or_list() ) :: ExTorch.Tensor.t()
See ExTorch.lt/3
Available signature calls:
lt(input, other)
@spec lt( ExTorch.Tensor.t(), ExTorch.Tensor.t() | ExTorch.Scalar.scalar_or_list(), [{:out, ExTorch.Tensor.t() | nil}] ) :: ExTorch.Tensor.t()
@spec lt( ExTorch.Tensor.t(), ExTorch.Tensor.t() | ExTorch.Scalar.scalar_or_list(), ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Computes input < other element-wise.
The second argument can be a number or a tensor whose shape is broadcastable with the first argument.
It returns a boolean tensor of the same shape as input, where a true entry
indicates that the corresponding value in input is strictly less than the one in other, and false otherwise.
Arguments
input - the tensor to compare (ExTorch.Tensor)
other - the tensor or value to compare (ExTorch.Tensor or value)
Optional arguments
out - an optional pre-allocated tensor used to store the comparison result. (ExTorch.Tensor)
Examples
# Compare against a scalar value.
iex> a = ExTorch.tensor([[1, 2], [3, 4]])
iex> ExTorch.lt(a, 2)
#Tensor<
[[ true, false],
[false, false]]
[
size: {2, 2},
dtype: :bool,
device: :cpu,
requires_grad: false
]>
# Compare against a broadcastable value.
iex> ExTorch.lt(a, [1, 5])
#Tensor<
[[false, true],
[false, true]]
[
size: {2, 2},
dtype: :bool,
device: :cpu,
requires_grad: false
]>
# Compare against another tensor.
iex> ExTorch.lt(a, ExTorch.tensor([[0, 1], [2, 5]]))
#Tensor<
[[false, false],
[false, true]]
[
size: {2, 2},
dtype: :bool,
device: :cpu,
requires_grad: false
]>
@spec maximum(ExTorch.Tensor.t(), ExTorch.Tensor.t()) :: ExTorch.Tensor.t()
Available signature calls:
maximum(input, other)
@spec maximum(ExTorch.Tensor.t(), ExTorch.Tensor.t(), [ {:out, ExTorch.Tensor.t() | nil} ]) :: ExTorch.Tensor.t()
@spec maximum(ExTorch.Tensor.t(), ExTorch.Tensor.t(), ExTorch.Tensor.t() | nil) :: ExTorch.Tensor.t()
Computes the element-wise maximum of input and other.
Arguments
input - the tensor to compare (ExTorch.Tensor)
other - the tensor or value to compare (ExTorch.Tensor or value)
Optional arguments
out - an optional pre-allocated tensor used to store the comparison result. (ExTorch.Tensor)
Notes
If one of the elements being compared is a NaN, then that element is returned. ExTorch.maximum/3 is not
supported for tensors with complex dtypes.
Examples
iex> a = ExTorch.tensor([1, 2, -1])
iex> b = ExTorch.tensor([3, 0, 4])
iex> ExTorch.maximum(a, b)
#Tensor<
[3, 2, 4]
[size: {3}, dtype: :int, device: :cpu, requires_grad: false]>
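The NaN-propagation rule in the Notes can be sketched in plain Python (a conceptual illustration, not ExTorch's implementation — `elementwise_maximum` is a hypothetical helper):

```python
import math

def elementwise_maximum(xs, ys):
    # NaN propagates: if either operand is NaN, the result is NaN,
    # matching the note above (unlike Python's max(), whose result
    # with NaN depends on argument order).
    return [float("nan") if math.isnan(x) or math.isnan(y) else max(x, y)
            for x, y in zip(xs, ys)]

out = elementwise_maximum([1.0, float("nan")], [3.0, 0.0])
print(out[0], math.isnan(out[1]))  # 3.0 True
```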
@spec minimum(ExTorch.Tensor.t(), ExTorch.Tensor.t()) :: ExTorch.Tensor.t()
Available signature calls:
minimum(input, other)
@spec minimum(ExTorch.Tensor.t(), ExTorch.Tensor.t(), [ {:out, ExTorch.Tensor.t() | nil} ]) :: ExTorch.Tensor.t()
@spec minimum(ExTorch.Tensor.t(), ExTorch.Tensor.t(), ExTorch.Tensor.t() | nil) :: ExTorch.Tensor.t()
Computes the element-wise minimum of input and other.
Arguments
input - the tensor to compare (ExTorch.Tensor)
other - the tensor or value to compare (ExTorch.Tensor or value)
Optional arguments
out - an optional pre-allocated tensor used to store the comparison result. (ExTorch.Tensor)
Notes
If one of the elements being compared is a NaN, then that element is returned. ExTorch.minimum/3 is not
supported for tensors with complex dtypes.
Examples
iex> a = ExTorch.tensor([1, 2, -1])
iex> b = ExTorch.tensor([3, 0, 4])
iex> ExTorch.minimum(a, b)
#Tensor<
[ 1, 0, -1]
[
size: {3},
dtype: :int,
device: :cpu,
requires_grad: false
]>
@spec msort(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()
See ExTorch.msort/2
Available signature calls:
msort(input)
@spec msort( ExTorch.Tensor.t(), [{:out, ExTorch.Tensor.t() | nil}] ) :: ExTorch.Tensor.t()
@spec msort(ExTorch.Tensor.t(), ExTorch.Tensor.t() | nil) :: ExTorch.Tensor.t()
Sorts the elements of the input tensor along its first dimension in ascending order by value.
ExTorch.msort/2 is equivalent to {values, _} = ExTorch.sort(input, 0). See ExTorch.sort/5.
Arguments
input (ExTorch.Tensor) - the input tensor
Optional arguments
out (ExTorch.Tensor) - an optional pre-allocated tensor used to store the sorted result. Default: nil.
Examples
iex> t = ExTorch.randn({3, 4})
#Tensor<
[[-1.5470, -1.5603, -0.9216, 3.0246],
[ 0.3064, 1.1371, 0.3475, 1.3003],
[-2.0710, -0.0693, -1.5537, -0.3430]]
[
size: {3, 4},
dtype: :float,
device: :cpu,
requires_grad: false
]>
iex> ExTorch.msort(t)
#Tensor<
[[-2.0710, -1.5603, -1.5537, -0.3430],
[-1.5470, -0.0693, -0.9216, 1.3003],
[ 0.3064, 1.1371, 0.3475, 3.0246]]
[
size: {3, 4},
dtype: :float,
device: :cpu,
requires_grad: false
]>
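Sorting along the first dimension means each column is sorted independently, as the example above shows. A minimal Python sketch of this behavior (a conceptual illustration, not ExTorch code — `msort` here is a hypothetical stand-in):

```python
def msort(matrix):
    # Sort along the first dimension: each column is sorted
    # independently, equivalent to {values, _} = sort(input, 0).
    cols = list(zip(*matrix))                    # transpose to columns
    sorted_cols = [sorted(c) for c in cols]      # sort each column
    return [list(r) for r in zip(*sorted_cols)]  # transpose back

t = [[3, 1], [1, 2], [2, 0]]
print(msort(t))  # [[1, 0], [2, 1], [3, 2]]
```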
@spec ne( ExTorch.Tensor.t(), ExTorch.Tensor.t() | ExTorch.Scalar.scalar_or_list() ) :: ExTorch.Tensor.t()
See ExTorch.ne/3
Available signature calls:
ne(input, other)
@spec ne( ExTorch.Tensor.t(), ExTorch.Tensor.t() | ExTorch.Scalar.scalar_or_list(), [{:out, ExTorch.Tensor.t() | nil}] ) :: ExTorch.Tensor.t()
@spec ne( ExTorch.Tensor.t(), ExTorch.Tensor.t() | ExTorch.Scalar.scalar_or_list(), ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Computes input != other element-wise.
The second argument can be a number or a tensor whose shape is broadcastable with the first argument.
It returns a boolean tensor of the same shape as input, where a true entry
indicates that the corresponding values in input and other are not equal, and false otherwise.
Arguments
input - the tensor to compare (ExTorch.Tensor)
other - the tensor or value to compare (ExTorch.Tensor or value)
Optional arguments
out - an optional pre-allocated tensor used to store the comparison result. (ExTorch.Tensor)
Examples
# Compare against a scalar value.
iex> a = ExTorch.tensor([[1, 2], [3, 4]])
iex> ExTorch.ne(a, 2)
#Tensor<
[[ true, false],
[ true, true]]
[
size: {2, 2},
dtype: :bool,
device: :cpu,
requires_grad: false
]>
# Compare against a broadcastable value.
iex> ExTorch.ne(a, [1, 5])
#Tensor<
[[false, true],
[true, true]]
[
size: {2, 2},
dtype: :bool,
device: :cpu,
requires_grad: false
]>
# Compare against another tensor.
iex> ExTorch.ne(a, ExTorch.tensor([[0, 2], [2, 5]]))
#Tensor<
[[true, false],
[true, true]]
[
size: {2, 2},
dtype: :bool,
device: :cpu,
requires_grad: false
]>
@spec not_equal( ExTorch.Tensor.t(), ExTorch.Tensor.t() | ExTorch.Scalar.scalar_or_list() ) :: ExTorch.Tensor.t()
Alias to ne/2
@spec not_equal( ExTorch.Tensor.t(), ExTorch.Tensor.t() | ExTorch.Scalar.scalar_or_list(), [{:out, ExTorch.Tensor.t() | nil}] ) :: ExTorch.Tensor.t()
@spec not_equal( ExTorch.Tensor.t(), ExTorch.Tensor.t() | ExTorch.Scalar.scalar_or_list(), ExTorch.Tensor.t() | nil ) :: ExTorch.Tensor.t()
Alias to ne/3
@spec sort(ExTorch.Tensor.t()) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
See ExTorch.sort/5
Available signature calls:
sort(input)
@spec sort( ExTorch.Tensor.t(), integer() ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec sort(ExTorch.Tensor.t(), dim: integer(), descending: boolean(), stable: boolean(), out: {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
See ExTorch.sort/5
Available signature calls:
sort(input, kwargs)
sort(input, dim)
@spec sort( ExTorch.Tensor.t(), integer(), boolean() ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec sort( ExTorch.Tensor.t(), integer(), descending: boolean(), stable: boolean(), out: {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
See ExTorch.sort/5
Available signature calls:
sort(input, dim, kwargs)
sort(input, dim, descending)
@spec sort( ExTorch.Tensor.t(), integer(), boolean(), stable: boolean(), out: {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec sort( ExTorch.Tensor.t(), integer(), boolean(), boolean() ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
See ExTorch.sort/5
Available signature calls:
sort(input, dim, descending, stable)
sort(input, dim, descending, kwargs)
@spec sort( ExTorch.Tensor.t(), integer(), boolean(), boolean(), [{:out, {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil}] ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec sort( ExTorch.Tensor.t(), integer(), boolean(), boolean(), {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
Sorts the elements of the input tensor along a given dimension in ascending order by value.
- If
dimis not given, the last dimension of theinputis chosen. - If
descendingistruethen the elements are sorted in descending order by value. - If
stableistruethen the sorting routine becomes stable, preserving the order of equivalent elements.
A tuple of {values, indices} is returned, where the values are the sorted values
and indices are the indices of the elements in the original input tensor.
Arguments
input- Input tensor. (ExTorch.Tensor)
Optional arguments
- dim - the dimension to sort along (integer()). Default: -1
- descending - controls the sorting order, ascending or descending (boolean()). Default: false
- stable - controls the relative order of equivalent elements (boolean()). Default: false
- out - the output tuple of {values, indices} that can be optionally given as output buffers ({ExTorch.Tensor, ExTorch.Tensor}). Default: nil
Examples
iex> a = ExTorch.randn({3, 4})
#Tensor<
[[ 0.7517, 0.5590, -0.1417, -0.1662],
[-0.1247, 0.5669, 0.0484, 0.4289],
[ 0.0876, -0.5951, -1.0296, 0.0093]]
[
size: {3, 4},
dtype: :float,
device: :cpu,
requires_grad: false
]>
# Sort tensor on the last dimension
iex> {values, indices} = ExTorch.sort(a)
iex> values
#Tensor<
[[-0.1662, -0.1417, 0.5590, 0.7517],
[-0.1247, 0.0484, 0.4289, 0.5669],
[-1.0296, -0.5951, 0.0093, 0.0876]]
[
size: {3, 4},
dtype: :float,
device: :cpu,
requires_grad: false
]>
iex> indices
#Tensor<
[[3, 2, 1, 0],
[0, 2, 3, 1],
[2, 1, 3, 0]]
[
size: {3, 4},
dtype: :long,
device: :cpu,
requires_grad: false
]>
# Sort tensor on the first dimension while reusing values and indices
iex> ExTorch.sort(a, 0, out: {values, indices})
iex> values
#Tensor<
[[-0.1247, -0.5951, -1.0296, -0.1662],
[ 0.0876, 0.5590, -0.1417, 0.0093],
[ 0.7517, 0.5669, 0.0484, 0.4289]]
[
size: {3, 4},
dtype: :float,
device: :cpu,
requires_grad: false
]>
iex> indices
#Tensor<
[[1, 2, 2, 0],
[2, 0, 0, 2],
[0, 1, 1, 1]]
[
size: {3, 4},
dtype: :long,
device: :cpu,
requires_grad: false
]>
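The {values, indices} contract can be sketched for a 1-D input in plain Python (a conceptual illustration, not ExTorch code — `sort_with_indices` is a hypothetical helper; Python's `sorted` happens to be stable, matching stable: true):

```python
def sort_with_indices(xs, descending=False):
    # Return (values, indices) like ExTorch.sort/5 on a 1-D input:
    # `indices` maps each sorted position back to the original tensor.
    order = sorted(range(len(xs)), key=lambda i: xs[i], reverse=descending)
    return [xs[i] for i in order], order

values, indices = sort_with_indices([3.0, 1.0, 2.0])
print(values, indices)  # [1.0, 2.0, 3.0] [1, 2, 0]
```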
@spec topk( ExTorch.Tensor.t(), integer() ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
See ExTorch.topk/6
Available signature calls:
topk(input, k)
@spec topk( ExTorch.Tensor.t(), integer(), integer() ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec topk( ExTorch.Tensor.t(), integer(), dim: integer(), largest: boolean(), sorted: boolean(), out: {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
See ExTorch.topk/6
Available signature calls:
topk(input, k, kwargs)
topk(input, k, dim)
@spec topk( ExTorch.Tensor.t(), integer(), integer(), largest: boolean(), sorted: boolean(), out: {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec topk( ExTorch.Tensor.t(), integer(), integer(), boolean() ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
See ExTorch.topk/6
Available signature calls:
topk(input, k, dim, largest)
topk(input, k, dim, kwargs)
@spec topk( ExTorch.Tensor.t(), integer(), integer(), boolean(), sorted: boolean(), out: {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec topk( ExTorch.Tensor.t(), integer(), integer(), boolean(), boolean() ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
See ExTorch.topk/6
Available signature calls:
topk(input, k, dim, largest, sorted)
topk(input, k, dim, largest, kwargs)
@spec topk( ExTorch.Tensor.t(), integer(), integer(), boolean(), boolean(), [{:out, {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil}] ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
@spec topk( ExTorch.Tensor.t(), integer(), integer(), boolean(), boolean(), {ExTorch.Tensor.t(), ExTorch.Tensor.t()} | nil ) :: {ExTorch.Tensor.t(), ExTorch.Tensor.t()}
Returns the k largest elements of the given input tensor along a given dimension.
- If
dimis not given, the last dimension of the input is chosen. - If
largestisfalsethen theksmallest elements are returned. - A tuple of
{values, indices}is returned with thevaluesandindicesof the largestkelements of each row of theinputtensor in the given dimensiondim. - The boolean option
sortediftrue, will make sure that the returnedkelements are themselves sorted.
Arguments
input (ExTorch.Tensor) - the input tensor
k (integer) - the k in "top-k"
Optional arguments
- dim (integer) - the dimension to sort along. Default: -1
- largest (boolean) - controls whether to return largest or smallest elements. Default: true
- sorted (boolean) - controls whether to return the elements in sorted order. Default: true
- out ({ExTorch.Tensor, ExTorch.Tensor} | nil) - the output tuple of {values, indices} that can be optionally given as output buffers. Default: nil
Examples
# Retrieve the top-3 elements in the last dimension.
iex> input = ExTorch.tensor([
...> [-1, 3, 10, -2, 0, 4, 5],
...> [5, -5, 2, 3, 7, 20, 1],
...> [0, 1, 2, 3, 4, 5, 6]
...> ])
iex> {values, indices} = ExTorch.topk(input, 3)
iex> values
#Tensor<
[[10, 5, 4],
[20, 7, 5],
[ 6, 5, 4]]
[
size: {3, 3},
dtype: :int,
device: :cpu,
requires_grad: false
]>
iex> indices
#Tensor<
[[2, 6, 5],
[5, 4, 0],
[6, 5, 4]]
[
size: {3, 3},
dtype: :long,
device: :cpu,
requires_grad: false
]>
# Retrieve the top-2 smallest elements in the first dimension.
iex> {values, indices} = ExTorch.topk(input, 2, dim: 0, largest: false)
iex> values
#Tensor<
[[-1, -5, 2, -2, 0, 4, 1],
[ 0, 1, 2, 3, 4, 5, 5]]
[
size: {2, 7},
dtype: :int,
device: :cpu,
requires_grad: false
]>
iex> indices
#Tensor<
[[0, 1, 1, 0, 0, 0, 1],
[2, 2, 2, 1, 2, 2, 0]]
[
size: {2, 7},
dtype: :long,
device: :cpu,
requires_grad: false
]>
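For a single row, the top-k selection above can be sketched in plain Python (a conceptual illustration, not ExTorch code — `topk` here is a hypothetical stand-in operating on one 1-D row):

```python
def topk(xs, k, largest=True):
    # Indices of the k largest (or smallest) entries, values first in
    # sorted order, mirroring the {values, indices} tuple described above.
    order = sorted(range(len(xs)), key=lambda i: xs[i], reverse=largest)[:k]
    return [xs[i] for i in order], order

# Same data as the first row of the example above.
print(topk([-1, 3, 10, -2, 0, 4, 5], 3))  # ([10, 5, 4], [2, 6, 5])
```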
Other operations
@spec resolve_conj(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()
Returns a new tensor with materialized conjugation if input’s conjugate
bit is set to true, else returns input. The output tensor will always
have its conjugate bit set to false.
Arguments
- input: the input ExTorch.Tensor
Examples
# Create a conjugated view.
iex> a = ExTorch.rand({3, 3}, dtype: :complex128)
iex> b = ExTorch.conj(a)
iex> ExTorch.Tensor.is_conj(b)
true
# Materialize the view.
iex> c = ExTorch.resolve_conj(b)
iex> ExTorch.Tensor.is_conj(c)
false
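The materialization step can be sketched in plain Python over a flat list of complex values (a conceptual illustration of the conjugate-bit semantics, not ExTorch's implementation — `resolve_conj` here is a hypothetical stand-in):

```python
def resolve_conj(values, conj_bit):
    # Materialize a lazy conjugation: if the conj bit is set, return the
    # element-wise conjugate; the result's conj bit is always cleared.
    if not conj_bit:
        return values, False
    return [v.conjugate() for v in values], False

vals, bit = resolve_conj([1 + 2j, 3 - 1j], True)
print(vals, bit)  # [(1-2j), (3+1j)] False
```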
@spec view_as_complex(ExTorch.Tensor.t()) :: ExTorch.Tensor.t()
Returns a view of input as a complex tensor.
For an input real tensor of size $(m_1, m_2, \cdots, m_i, 2)$, this function returns a new complex tensor of size $(m_1, m_2, \cdots, m_i)$, where the last dimension of the input tensor is expected to represent the real and imaginary components of complex numbers.
Arguments
- input: the input ExTorch.Tensor
Notes
view_as_complex/1 is only supported for tensors with ExTorch.DType
:float64 and :float32. The input is expected to have the last dimension
of size 2. In addition, the tensor must have a stride of 1 for its last
dimension. The strides of all other dimensions must be even numbers.
Examples
iex> x = ExTorch.randint(-3, 3, {5, 2})
#Tensor<
[[ 2., -1.],
[ 0., -1.],
[-2., -2.],
[ 2., 0.],
[ 1., -1.]]
[
size: {5, 2},
dtype: :double,
device: :cpu,
requires_grad: false
]>
iex> ExTorch.view_as_complex(x)
#Tensor<
[ 2.-1.j, 0.-1.j, -2.-2.j, 2.+0.j, 1.-1.j]
[
size: {5},
dtype: :complex_double,
device: :cpu,
requires_grad: false
]>
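The pairing of the last dimension into (real, imaginary) components can be sketched in plain Python (a conceptual illustration for a 1-D result, not ExTorch code — `view_as_complex` here is a hypothetical stand-in; the real operation is a zero-copy view, whereas this sketch builds new values):

```python
def view_as_complex(pairs):
    # Interpret a (..., 2) real layout as complex numbers:
    # each last-dimension pair [re, im] becomes re + im*j.
    return [complex(re, im) for re, im in pairs]

# Same data as the first two rows of the example above.
print(view_as_complex([[2.0, -1.0], [0.0, -1.0]]))  # [(2-1j), -1j]
```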