ExTorch (extorch v0.1.0-pre0)
The ExTorch namespace contains data structures for multi-dimensional tensors and the mathematical operations defined over them.
Additionally, it provides many utilities for efficiently serializing tensors and arbitrary types, as well as other useful helpers.
It has a CUDA counterpart that enables you to run your tensor computations on an NVIDIA GPU with compute capability >= 3.0.
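A minimal usage sketch (outputs omitted here; the exact values and formatting depend on the backend, see the per-function examples below):
# Create a 3x3 tensor of ones as float64 and take its first row.
iex> t = ExTorch.ones({3, 3}, dtype: :float64)
iex> ExTorch.index(t, 0)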
Summary
Tensor creation
Returns a 1-D tensor of size $\left\lceil \frac{\text{end} - \text{start}}{\text{step}} \right\rceil$ with values from the interval [start, end), taken with common difference step, beginning from start.
Returns a tensor filled with uninitialized data. The shape of the tensor is defined by the tuple argument size.
See ExTorch.eye/3
Returns a 2-D tensor with ones on the diagonal and zeros elsewhere.
Returns a tensor filled with the scalar value scalar, with the shape defined by the variable argument size.
Creates a one-dimensional tensor of size steps whose values are evenly spaced from start to end, inclusive.
Creates a one-dimensional tensor of size steps whose values are evenly spaced from $\text{base}^{\text{start}}$ to $\text{base}^{\text{end}}$, inclusive, on a logarithmic scale with base base.
Returns a tensor filled with the scalar value 1, with the shape defined by the variable argument size.
Returns a tensor filled with random numbers from a uniform distribution on the interval $[0, 1)$.
Returns a tensor filled with random integers generated uniformly between low (inclusive) and high (exclusive).
Returns a tensor filled with random numbers from a normal distribution with mean 0 and variance 1 (also called the standard normal distribution).
Constructs a tensor with data.
Returns a tensor filled with the scalar value 0, with the shape defined by the variable argument size.
Tensor manipulation
Add a dimension of size one to a tensor at a given position.
Tensor indexing
Index a tensor given a list of integers, ranges, tensors, nil or :ellipsis.
Create a slice to index a tensor.
Tensor creation
arange(end_bound)
@spec arange(number()) :: ExTorch.Tensor.t()
See ExTorch.arange/4
Available signature calls:
arange(end_bound)
arange(end_bound, kwargs)
@spec arange(number(), step: number(), device: ExTorch.Device.device(), dtype: ExTorch.DType.dtype(), layout: ExTorch.Layout.layout(), memory_format: ExTorch.MemoryFormat.memory_format(), pin_memory: boolean(), requires_grad: boolean() ) :: ExTorch.Tensor.t()
@spec arange( number(), number() ) :: ExTorch.Tensor.t()
See ExTorch.arange/4
Available signature calls:
arange(start, end_bound)
arange(end_bound, kwargs)
arange(end_bound, step, opts)
@spec arange( number(), number(), ExTorch.Tensor.Options.t() ) :: ExTorch.Tensor.t()
@spec arange( number(), number(), step: number(), device: ExTorch.Device.device(), dtype: ExTorch.DType.dtype(), layout: ExTorch.Layout.layout(), memory_format: ExTorch.MemoryFormat.memory_format(), pin_memory: boolean(), requires_grad: boolean() ) :: ExTorch.Tensor.t()
@spec arange( number(), number(), number() ) :: ExTorch.Tensor.t()
See ExTorch.arange/4
Available signature calls:
arange(start, end_bound, step)
arange(start, end_bound, kwargs)
arange(end_bound, step, opts)
arange(start, end_bound, step, kwargs)
@spec arange( number(), number(), number(), device: ExTorch.Device.device(), dtype: ExTorch.DType.dtype(), layout: ExTorch.Layout.layout(), memory_format: ExTorch.MemoryFormat.memory_format(), pin_memory: boolean(), requires_grad: boolean() ) :: ExTorch.Tensor.t()
@spec arange( number(), number(), number(), ExTorch.Tensor.Options.t() ) :: ExTorch.Tensor.t()
Returns a 1-D tensor of size $\left\lceil \frac{\text{end} - \text{start}}{\text{step}} \right\rceil$ with values from the interval [start, end), taken with common difference step, beginning from start.
Note that a non-integer step is subject to floating point rounding errors when comparing against end; to avoid inconsistency, we advise adding a small epsilon to end in such cases.
$$\text{out}_{i + 1} = \text{out}_i + \text{step}$$
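For instance, a small sketch of the epsilon workaround described above (outputs omitted, since the exact rounding behavior may vary by platform):
# With a non-integer step, rounding may make it ambiguous whether the last
# element falls just below or at the end bound.
iex> ExTorch.arange(0, 1.0, 0.1)
# Adding a small epsilon to the end bound makes the intended length explicit.
iex> ExTorch.arange(0, 1.0 + 1.0e-6, 0.1)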
Arguments
- start: the starting value for the set of points. Default: 0.
- end: the ending value for the set of points.
- step: the gap between each pair of adjacent points. Default: 1.
Keyword args
- dtype (ExTorch.DType, optional): the desired data type of the returned tensor. Default: if nil, uses a global default (see ExTorch.set_default_tensor_type).
- layout (ExTorch.Layout, optional): the desired layout of the returned tensor. Default: :strided.
- device (ExTorch.Device, optional): the desired device of the returned tensor. Default: if nil, uses the current device for the default tensor type (see ExTorch.set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
- requires_grad (boolean(), optional): if autograd should record operations on the returned tensor. Default: false.
- pin_memory (boolean(), optional): if set, the returned tensor will be allocated in pinned memory. Works only for CPU tensors. Default: false.
- memory_format (ExTorch.MemoryFormat, optional): the desired memory format of the returned tensor. Default: :contiguous.
Examples
# Single argument, end only
iex> ExTorch.arange(5)
#Tensor<
0
1
2
3
4
[ CPUFloatType{5} ]
>
# End only with options
iex> ExTorch.arange(5, dtype: :uint8)
#Tensor<
0
1
2
3
4
[ CPUByteType{5} ]
>
# Start to end
iex> ExTorch.arange(1, 7)
#Tensor<
1
2
3
4
5
6
[ CPUFloatType{6} ]
>
# Start to end with options
iex> ExTorch.arange(1, 7, device: :cpu, dtype: :float64)
#Tensor<
1
2
3
4
5
6
[ CPUDoubleType{6} ]
>
# Start to end with step
iex> ExTorch.arange(-1.3, 2.4, 0.5)
#Tensor<
-1.3000
-0.8000
-0.3000
0.2000
0.7000
1.2000
1.7000
2.2000
[ CPUFloatType{8} ]
>
# Start to end with step and options
iex> ExTorch.arange(-1.3, 2.4, 0.5, dtype: :float64)
#Tensor<
-1.3000
-0.8000
-0.3000
0.2000
0.7000
1.2000
1.7000
2.2000
[ CPUDoubleType{8} ]
>
empty(size)
@spec empty(tuple() | [integer()]) :: ExTorch.Tensor.t()
See ExTorch.empty/2
Available signature calls:
empty(size)
empty(size, kwargs)
@spec empty(tuple() | [integer()], device: ExTorch.Device.device(), dtype: ExTorch.DType.dtype(), layout: ExTorch.Layout.layout(), memory_format: ExTorch.MemoryFormat.memory_format(), pin_memory: boolean(), requires_grad: boolean() ) :: ExTorch.Tensor.t()
@spec empty( tuple() | [integer()], ExTorch.Tensor.Options.t() ) :: ExTorch.Tensor.t()
Returns a tensor filled with uninitialized data. The shape of the tensor is defined by the tuple argument size.
Arguments
- size: a tuple/list of integers defining the shape of the output tensor.
Keyword args
- dtype (ExTorch.DType, optional): the desired data type of the returned tensor. Default: if nil, uses a global default (see ExTorch.set_default_tensor_type).
- layout (ExTorch.Layout, optional): the desired layout of the returned tensor. Default: :strided.
- device (ExTorch.Device, optional): the desired device of the returned tensor. Default: if nil, uses the current device for the default tensor type (see ExTorch.set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
- requires_grad (boolean(), optional): if autograd should record operations on the returned tensor. Default: false.
- pin_memory (boolean(), optional): if set, the returned tensor will be allocated in pinned memory. Works only for CPU tensors. Default: false.
- memory_format (ExTorch.MemoryFormat, optional): the desired memory format of the returned tensor. Default: :contiguous.
Examples
iex> ExTorch.empty({2, 3})
#Tensor<-6.2093e+29 4.5611e-41 0.0000e+00
0.0000e+00 1.1673e-42 0.0000e+00
[ CPUFloatType{2,3} ]>
iex> ExTorch.empty({2, 3}, dtype: :int64, device: :cpu)
#Tensor< 1.4023e+14 0.0000e+00 0.0000e+00
1.0000e+00 7.0000e+00 1.4023e+14
[ CPULongType{2,3} ]>
eye(n)
@spec eye(integer()) :: ExTorch.Tensor.t()
See ExTorch.eye/3
Available signature calls:
eye(n)
eye(n, kwargs)
@spec eye(integer(), m: integer(), device: ExTorch.Device.device(), dtype: ExTorch.DType.dtype(), layout: ExTorch.Layout.layout(), memory_format: ExTorch.MemoryFormat.memory_format(), pin_memory: boolean(), requires_grad: boolean() ) :: ExTorch.Tensor.t()
@spec eye( integer(), integer() ) :: ExTorch.Tensor.t()
See ExTorch.eye/3
Available signature calls:
eye(n, m)
eye(n, kwargs)
eye(n, m, kwargs)
@spec eye( integer(), integer(), device: ExTorch.Device.device(), dtype: ExTorch.DType.dtype(), layout: ExTorch.Layout.layout(), memory_format: ExTorch.MemoryFormat.memory_format(), pin_memory: boolean(), requires_grad: boolean() ) :: ExTorch.Tensor.t()
@spec eye( integer(), integer(), ExTorch.Tensor.Options.t() ) :: ExTorch.Tensor.t()
Returns a 2-D tensor with ones on the diagonal and zeros elsewhere.
Arguments
- n: the number of rows.
- m: the number of columns.
Keyword args
- dtype (ExTorch.DType, optional): the desired data type of the returned tensor. Default: if nil, uses a global default (see ExTorch.set_default_tensor_type).
- layout (ExTorch.Layout, optional): the desired layout of the returned tensor. Default: :strided.
- device (ExTorch.Device, optional): the desired device of the returned tensor. Default: if nil, uses the current device for the default tensor type (see ExTorch.set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
- requires_grad (boolean(), optional): if autograd should record operations on the returned tensor. Default: false.
- pin_memory (boolean(), optional): if set, the returned tensor will be allocated in pinned memory. Works only for CPU tensors. Default: false.
- memory_format (ExTorch.MemoryFormat, optional): the desired memory format of the returned tensor. Default: :contiguous.
Examples
iex> ExTorch.eye(3, 3)
#Tensor<
1 0 0
0 1 0
0 0 1
[ CPUFloatType{3,3} ]
>
iex> ExTorch.eye(4, 6, dtype: :uint8, device: :cpu)
#Tensor<
1 0 0 0 0 0
0 1 0 0 0 0
0 0 1 0 0 0
0 0 0 1 0 0
[ CPUByteType{4,6} ]
>
full(size, scalar)
@spec full( tuple() | [integer()], number() ) :: ExTorch.Tensor.t()
See ExTorch.full/3
Available signature calls:
full(size, scalar)
full(size, scalar, kwargs)
@spec full( tuple() | [integer()], number(), device: ExTorch.Device.device(), dtype: ExTorch.DType.dtype(), layout: ExTorch.Layout.layout(), memory_format: ExTorch.MemoryFormat.memory_format(), pin_memory: boolean(), requires_grad: boolean() ) :: ExTorch.Tensor.t()
@spec full( tuple() | [integer()], number(), ExTorch.Tensor.Options.t() ) :: ExTorch.Tensor.t()
Returns a tensor filled with the scalar value scalar, with the shape defined by the variable argument size.
Arguments
- size: a tuple/list of integers defining the shape of the output tensor.
- scalar: the value to fill the output tensor with.
Keyword args
- dtype (ExTorch.DType, optional): the desired data type of the returned tensor. Default: if nil, uses a global default (see ExTorch.set_default_tensor_type).
- layout (ExTorch.Layout, optional): the desired layout of the returned tensor. Default: :strided.
- device (ExTorch.Device, optional): the desired device of the returned tensor. Default: if nil, uses the current device for the default tensor type (see ExTorch.set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
- requires_grad (boolean(), optional): if autograd should record operations on the returned tensor. Default: false.
- pin_memory (boolean(), optional): if set, the returned tensor will be allocated in pinned memory. Works only for CPU tensors. Default: false.
- memory_format (ExTorch.MemoryFormat, optional): the desired memory format of the returned tensor. Default: :contiguous.
Examples
iex> ExTorch.full({2, 3}, 2)
#Tensor< 2 2 2
2 2 2
[ CPUFloatType{2,3} ]>
iex> ExTorch.full({2, 3}, 23, dtype: :uint8, device: :cpu)
#Tensor< 23 23 23
23 23 23
[ CPUByteType{2,3} ]>
iex> ExTorch.full({2, 3}, 3.1416)
#Tensor< 3.1416 3.1416 3.1416
3.1416 3.1416 3.1416
[ CPUFloatType{2,3} ]>
linspace(start, end_bound, steps)
@spec linspace( number(), number(), integer() ) :: ExTorch.Tensor.t()
Available signature calls:
linspace(start, end_bound, steps)
linspace(start, end_bound, steps, kwargs)
@spec linspace( number(), number(), integer(), device: ExTorch.Device.device(), dtype: ExTorch.DType.dtype(), layout: ExTorch.Layout.layout(), memory_format: ExTorch.MemoryFormat.memory_format(), pin_memory: boolean(), requires_grad: boolean() ) :: ExTorch.Tensor.t()
@spec linspace( number(), number(), integer(), ExTorch.Tensor.Options.t() ) :: ExTorch.Tensor.t()
Creates a one-dimensional tensor of size steps whose values are evenly spaced from start to end, inclusive. That is, the values are:
$$(\text{start}, \text{start} + \frac{\text{end} - \text{start}}{\text{steps} - 1}, \ldots, \text{start} + (\text{steps} - 2) * \frac{\text{end} - \text{start}}{\text{steps} - 1}, \text{end})$$
Arguments
- start: the starting value for the set of points.
- end: the ending value for the set of points.
- steps: size of the constructed tensor.
Keyword args
- dtype (ExTorch.DType, optional): the desired data type of the returned tensor. Default: if nil, uses a global default (see ExTorch.set_default_tensor_type).
- layout (ExTorch.Layout, optional): the desired layout of the returned tensor. Default: :strided.
- device (ExTorch.Device, optional): the desired device of the returned tensor. Default: if nil, uses the current device for the default tensor type (see ExTorch.set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
- requires_grad (boolean(), optional): if autograd should record operations on the returned tensor. Default: false.
- pin_memory (boolean(), optional): if set, the returned tensor will be allocated in pinned memory. Works only for CPU tensors. Default: false.
- memory_format (ExTorch.MemoryFormat, optional): the desired memory format of the returned tensor. Default: :contiguous.
Examples
# Returns a tensor with 10 evenly-spaced values between -2 and 10
iex> ExTorch.linspace(-2, 10, 10)
#Tensor<
-2.0000
-0.6667
0.6667
2.0000
3.3333
4.6667
6.0000
7.3333
8.6667
10.0000
[ CPUFloatType{10} ]
>
# Returns a tensor with 10 evenly-spaced int32 values between -2 and 10
iex> ExTorch.linspace(-2, 10, 10, dtype: :int32)
#Tensor<
-2
0
0
1
3
4
6
7
8
10
[ CPUIntType{10} ]
>
logspace(start, end_bound, steps)
@spec logspace( number(), number(), integer() ) :: ExTorch.Tensor.t()
Available signature calls:
logspace(start, end_bound, steps)
logspace(start, end_bound, steps, base)
@spec logspace( number(), number(), integer(), number() ) :: ExTorch.Tensor.t()
@spec logspace( number(), number(), integer(), base: number(), device: ExTorch.Device.device(), dtype: ExTorch.DType.dtype(), layout: ExTorch.Layout.layout(), memory_format: ExTorch.MemoryFormat.memory_format(), pin_memory: boolean(), requires_grad: boolean() ) :: ExTorch.Tensor.t()
Available signature calls:
logspace(start, end_bound, steps, kwargs)
logspace(start, end_bound, steps, base)
logspace(start, end_bound, steps, base, kwargs)
@spec logspace( number(), number(), integer(), number(), device: ExTorch.Device.device(), dtype: ExTorch.DType.dtype(), layout: ExTorch.Layout.layout(), memory_format: ExTorch.MemoryFormat.memory_format(), pin_memory: boolean(), requires_grad: boolean() ) :: ExTorch.Tensor.t()
@spec logspace( number(), number(), integer(), number(), ExTorch.Tensor.Options.t() ) :: ExTorch.Tensor.t()
Creates a one-dimensional tensor of size steps whose values are evenly spaced from $\text{base}^{\text{start}}$ to $\text{base}^{\text{end}}$, inclusive, on a logarithmic scale with base base. That is, the values are:
$$(\text{base}^{\text{start}}, \text{base}^{(\text{start} + \frac{\text{end} - \text{start}}{ \text{steps} - 1})}, \ldots, \text{base}^{(\text{start} + (\text{steps} - 2) * \frac{\text{end} - \text{start}}{ \text{steps} - 1})}, \text{base}^{\text{end}})$$
Arguments
- start: the starting value for the set of points.
- end: the ending value for the set of points.
- steps: size of the constructed tensor.
- base: base of the logarithm function. Default: 10.0.
Keyword args
- dtype (ExTorch.DType, optional): the desired data type of the returned tensor. Default: if nil, uses a global default (see ExTorch.set_default_tensor_type).
- layout (ExTorch.Layout, optional): the desired layout of the returned tensor. Default: :strided.
- device (ExTorch.Device, optional): the desired device of the returned tensor. Default: if nil, uses the current device for the default tensor type (see ExTorch.set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
- requires_grad (boolean(), optional): if autograd should record operations on the returned tensor. Default: false.
- pin_memory (boolean(), optional): if set, the returned tensor will be allocated in pinned memory. Works only for CPU tensors. Default: false.
- memory_format (ExTorch.MemoryFormat, optional): the desired memory format of the returned tensor. Default: :contiguous.
Examples
iex> ExTorch.logspace(-10, 10, 5)
#Tensor<
1.0000e-10
1.0000e-05
1.0000e+00
1.0000e+05
1.0000e+10
[ CPUFloatType{5} ]
>
iex> ExTorch.logspace(0.1, 1.0, 5)
#Tensor<
1.2589
2.1135
3.5481
5.9566
10.0000
[ CPUFloatType{5} ]
>
iex> ExTorch.logspace(0.1, 1.0, 3, base: 2)
#Tensor<
1.0718
1.4641
2.0000
[ CPUFloatType{3} ]
>
iex> ExTorch.logspace(0.1, 1.0, 3, base: 2, dtype: :float64)
#Tensor<
1.0718
1.4641
2.0000
[ CPUDoubleType{3} ]
>
ones(size)
@spec ones(tuple() | [integer()]) :: ExTorch.Tensor.t()
See ExTorch.ones/2
Available signature calls:
ones(size)
ones(size, kwargs)
@spec ones(tuple() | [integer()], device: ExTorch.Device.device(), dtype: ExTorch.DType.dtype(), layout: ExTorch.Layout.layout(), memory_format: ExTorch.MemoryFormat.memory_format(), pin_memory: boolean(), requires_grad: boolean() ) :: ExTorch.Tensor.t()
@spec ones( tuple() | [integer()], ExTorch.Tensor.Options.t() ) :: ExTorch.Tensor.t()
Returns a tensor filled with the scalar value 1, with the shape defined by the variable argument size.
Arguments
- size: a tuple/list of integers defining the shape of the output tensor.
Keyword args
- dtype (ExTorch.DType, optional): the desired data type of the returned tensor. Default: if nil, uses a global default (see ExTorch.set_default_tensor_type).
- layout (ExTorch.Layout, optional): the desired layout of the returned tensor. Default: :strided.
- device (ExTorch.Device, optional): the desired device of the returned tensor. Default: if nil, uses the current device for the default tensor type (see ExTorch.set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
- requires_grad (boolean(), optional): if autograd should record operations on the returned tensor. Default: false.
- pin_memory (boolean(), optional): if set, the returned tensor will be allocated in pinned memory. Works only for CPU tensors. Default: false.
- memory_format (ExTorch.MemoryFormat, optional): the desired memory format of the returned tensor. Default: :contiguous.
Examples
iex> ExTorch.ones({2, 3})
#Tensor< 1 1 1
1 1 1
[ CPUFloatType{2,3} ]>
iex> ExTorch.ones({2, 3}, dtype: :uint8, device: :cpu)
#Tensor< 1 1 1
1 1 1
[ CPUByteType{2,3} ]>
iex> ExTorch.ones({5})
#Tensor< 1
1
1
1
1
[ CPUFloatType{5} ]>
rand(size)
@spec rand(tuple() | [integer()]) :: ExTorch.Tensor.t()
See ExTorch.rand/2
Available signature calls:
rand(size)
rand(size, kwargs)
@spec rand(tuple() | [integer()], device: ExTorch.Device.device(), dtype: ExTorch.DType.dtype(), layout: ExTorch.Layout.layout(), memory_format: ExTorch.MemoryFormat.memory_format(), pin_memory: boolean(), requires_grad: boolean() ) :: ExTorch.Tensor.t()
@spec rand( tuple() | [integer()], ExTorch.Tensor.Options.t() ) :: ExTorch.Tensor.t()
Returns a tensor filled with random numbers from a uniform distribution on the interval $[0, 1)$.
The shape of the tensor is defined by the variable argument size.
Arguments
- size: a tuple/list of integers defining the shape of the output tensor.
Keyword args
- dtype (ExTorch.DType, optional): the desired data type of the returned tensor. Default: if nil, uses a global default (see ExTorch.set_default_tensor_type).
- layout (ExTorch.Layout, optional): the desired layout of the returned tensor. Default: :strided.
- device (ExTorch.Device, optional): the desired device of the returned tensor. Default: if nil, uses the current device for the default tensor type (see ExTorch.set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
- requires_grad (boolean(), optional): if autograd should record operations on the returned tensor. Default: false.
- pin_memory (boolean(), optional): if set, the returned tensor will be allocated in pinned memory. Works only for CPU tensors. Default: false.
- memory_format (ExTorch.MemoryFormat, optional): the desired memory format of the returned tensor. Default: :contiguous.
Examples
iex> ExTorch.rand({3, 3, 3})
#Tensor<
(1,.,.) =
0.5997 0.3569 0.7639
0.1939 0.0923 0.0942
0.3355 0.3534 0.6490
(2,.,.) =
0.7250 0.5877 0.9215
0.1583 0.7270 0.3289
0.7083 0.1259 0.0050
(3,.,.) =
0.1731 0.9534 0.6758
0.8523 0.0659 0.3623
0.0747 0.6079 0.7227
[ CPUFloatType{3,3,3} ]
>
iex> ExTorch.rand({2, 3}, dtype: :float64)
#Tensor<
0.6012 0.6164 0.2413
0.9720 0.7804 0.4863
[ CPUDoubleType{2,3} ]
>
randint(high, size)
@spec randint( integer(), tuple() | [integer()] ) :: ExTorch.Tensor.t()
Available signature calls:
randint(high, size)
randint(high, size, kwargs)
@spec randint( integer(), tuple() | [integer()], device: ExTorch.Device.device(), dtype: ExTorch.DType.dtype(), layout: ExTorch.Layout.layout(), memory_format: ExTorch.MemoryFormat.memory_format(), pin_memory: boolean(), requires_grad: boolean() ) :: ExTorch.Tensor.t()
@spec randint( integer(), tuple() | [integer()], ExTorch.Tensor.Options.t() ) :: ExTorch.Tensor.t()
@spec randint( integer(), integer(), tuple() | [integer()] ) :: ExTorch.Tensor.t()
Available signature calls:
randint(low, high, size)
randint(high, size, opts)
randint(high, size, kwargs)
randint(low, high, size, kwargs)
@spec randint( integer(), integer(), tuple() | [integer()], device: ExTorch.Device.device(), dtype: ExTorch.DType.dtype(), layout: ExTorch.Layout.layout(), memory_format: ExTorch.MemoryFormat.memory_format(), pin_memory: boolean(), requires_grad: boolean() ) :: ExTorch.Tensor.t()
@spec randint( integer(), integer(), tuple() | [integer()], ExTorch.Tensor.Options.t() ) :: ExTorch.Tensor.t()
Returns a tensor filled with random integers generated uniformly between low (inclusive) and high (exclusive).
The shape of the tensor is defined by the variable argument size.
Arguments
- low: lowest integer to be drawn from the distribution. Default: 0.
- high: one above the highest integer to be drawn from the distribution.
- size: a tuple/list of integers defining the shape of the output tensor.
Keyword args
- dtype (ExTorch.DType, optional): the desired data type of the returned tensor. Default: if nil, uses a global default (see ExTorch.set_default_tensor_type).
- layout (ExTorch.Layout, optional): the desired layout of the returned tensor. Default: :strided.
- device (ExTorch.Device, optional): the desired device of the returned tensor. Default: if nil, uses the current device for the default tensor type (see ExTorch.set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
- requires_grad (boolean(), optional): if autograd should record operations on the returned tensor. Default: false.
- pin_memory (boolean(), optional): if set, the returned tensor will be allocated in pinned memory. Works only for CPU tensors. Default: false.
- memory_format (ExTorch.MemoryFormat, optional): the desired memory format of the returned tensor. Default: :contiguous.
Examples
# Sample numbers between 0 and 3
iex> ExTorch.randint(3, {3, 3, 4})
#Tensor<
(1,.,.) =
2 2 0 2
0 0 1 0
1 2 1 0
(2,.,.) =
1 1 2 0
0 2 1 2
2 0 0 1
(3,.,.) =
0 2 0 2
0 1 1 1
2 1 1 1
[ CPUFloatType{3,3,4} ]
>
# Sample numbers between 0 and 3 of type int64
iex> ExTorch.randint(3, {3, 3, 4}, dtype: :int64)
#Tensor<
(1,.,.) =
2 2 1 0
0 1 0 1
2 2 2 2
(2,.,.) =
1 1 1 1
1 1 0 1
2 1 0 2
(3,.,.) =
1 2 1 0
1 1 2 1
1 1 0 1
[ CPULongType{3,3,4} ]
>
# Sample numbers between -2 (inclusive) and 3 (exclusive)
iex> ExTorch.randint(-2, 3, {2, 2, 4})
#Tensor<
(1,.,.) =
-2 2 0 -2
0 2 1 2
(2,.,.) =
-2 -1 -1 1
0 -1 0 0
[ CPUFloatType{2,2,4} ]
>
# Sample numbers between -2 (inclusive) and 3 (exclusive) on cpu
iex> ExTorch.randint(-2, 3, {2, 2, 4}, device: :cpu)
#Tensor<
(1,.,.) =
-2 0 0 -2
2 1 2 -2
(2,.,.) =
2 -1 -1 1
1 2 1 -2
[ CPUFloatType{2,2,4} ]
>
randn(size)
@spec randn(tuple() | [integer()]) :: ExTorch.Tensor.t()
See ExTorch.randn/2
Available signature calls:
randn(size)
randn(size, kwargs)
@spec randn(tuple() | [integer()], device: ExTorch.Device.device(), dtype: ExTorch.DType.dtype(), layout: ExTorch.Layout.layout(), memory_format: ExTorch.MemoryFormat.memory_format(), pin_memory: boolean(), requires_grad: boolean() ) :: ExTorch.Tensor.t()
@spec randn( tuple() | [integer()], ExTorch.Tensor.Options.t() ) :: ExTorch.Tensor.t()
Returns a tensor filled with random numbers from a normal distribution with mean 0 and variance 1 (also called the standard normal distribution).
$$\text{out}_{i} \sim \mathcal{N}(0, 1)$$
The shape of the tensor is defined by the variable argument size.
Arguments
- size: a tuple/list of integers defining the shape of the output tensor.
Keyword args
- dtype (ExTorch.DType, optional): the desired data type of the returned tensor. Default: if nil, uses a global default (see ExTorch.set_default_tensor_type).
- layout (ExTorch.Layout, optional): the desired layout of the returned tensor. Default: :strided.
- device (ExTorch.Device, optional): the desired device of the returned tensor. Default: if nil, uses the current device for the default tensor type (see ExTorch.set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
- requires_grad (boolean(), optional): if autograd should record operations on the returned tensor. Default: false.
- pin_memory (boolean(), optional): if set, the returned tensor will be allocated in pinned memory. Works only for CPU tensors. Default: false.
- memory_format (ExTorch.MemoryFormat, optional): the desired memory format of the returned tensor. Default: :contiguous.
Examples
iex> ExTorch.randn({3, 3, 5})
#Tensor<
(1,.,.) =
0.0784 -0.3355 -0.0159 -0.0606 -1.2691
-0.6146 0.2346 0.8563 0.8795 0.0645
-1.9992 0.6692 0.2269 1.9263 0.1033
(2,.,.) =
0.2647 0.7078 0.0270 -1.1330 -0.4143
1.2061 -1.1191 0.7465 0.2140 0.7406
0.3587 -0.6102 0.3359 -0.4517 -0.5276
(3,.,.) =
1.7122 0.3814 -0.6218 0.8047 -0.6067
0.1693 0.4957 -0.6139 0.7341 1.4272
0.1630 -0.1142 0.8823 0.8026 1.3355
[ CPUFloatType{3,3,5} ]
>
iex> ExTorch.randn({3, 3, 5}, device: :cpu)
#Tensor<
(1,.,.) =
-0.8990 -0.3449 -1.2916 -0.0318 0.7116
0.9068 -0.3159 -0.6416 -1.8414 -0.1421
-0.9251 -0.8209 0.0830 -2.5484 0.3731
(2,.,.) =
0.5975 0.0690 -0.2972 -0.0328 -0.2672
1.3053 0.7803 -0.1992 -2.1078 -0.7520
1.3048 0.6391 0.1137 2.0412 0.2380
(3,.,.) =
-1.1820 -1.9329 -0.3965 -0.0618 -1.1190
0.7926 -1.8551 1.1356 -0.7451 -0.6003
1.0266 0.5791 0.2724 0.6952 -3.1296
[ CPUFloatType{3,3,5} ]
>
tensor(list)
@spec tensor(list() | tuple() | number()) :: ExTorch.Tensor.t()
See ExTorch.tensor/2
Available signature calls:
tensor(list)
tensor(list, kwargs)
@spec tensor(list() | tuple() | number(), device: ExTorch.Device.device(), dtype: ExTorch.DType.dtype(), layout: ExTorch.Layout.layout(), memory_format: ExTorch.MemoryFormat.memory_format(), pin_memory: boolean(), requires_grad: boolean() ) :: ExTorch.Tensor.t()
@spec tensor( list() | tuple() | number(), ExTorch.Tensor.Options.t() ) :: ExTorch.Tensor.t()
Constructs a tensor with data.
Arguments
- list: initial data for the tensor. Can be a list, tuple or number.
Keyword args
- dtype (ExTorch.DType, optional): the desired data type of the returned tensor. Default: if nil, uses a global default (see ExTorch.set_default_tensor_type).
- layout (ExTorch.Layout, optional): the desired layout of the returned tensor. Default: :strided.
- device (ExTorch.Device, optional): the desired device of the returned tensor. Default: if nil, uses the current device for the default tensor type (see ExTorch.set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
- requires_grad (boolean(), optional): if autograd should record operations on the returned tensor. Default: false.
- pin_memory (boolean(), optional): if set, the returned tensor will be allocated in pinned memory. Works only for CPU tensors. Default: false.
- memory_format (ExTorch.MemoryFormat, optional): the desired memory format of the returned tensor. Default: :contiguous.
Examples
iex> ExTorch.tensor([[0.1, 1.2], [2.2, 3.1], [4.9, 5.2]])
#Tensor<
0.1000 1.2000
2.2000 3.1000
4.9000 5.2000
[ CPUFloatType{3,2} ]
>
# Type inference
iex> ExTorch.tensor([0, 1])
#Tensor<
0
1
[ CPUByteType{2} ]
>
iex> ExTorch.tensor([[0.11111, 0.222222, 0.3333333]], dtype: :float64)
#Tensor<
0.1111 0.2222 0.3333
[ CPUDoubleType{1,3} ]
>
zeros(size)
@spec zeros(tuple() | [integer()]) :: ExTorch.Tensor.t()
See ExTorch.zeros/2
Available signature calls:
zeros(size)
zeros(size, kwargs)
@spec zeros(tuple() | [integer()], device: ExTorch.Device.device(), dtype: ExTorch.DType.dtype(), layout: ExTorch.Layout.layout(), memory_format: ExTorch.MemoryFormat.memory_format(), pin_memory: boolean(), requires_grad: boolean() ) :: ExTorch.Tensor.t()
@spec zeros( tuple() | [integer()], ExTorch.Tensor.Options.t() ) :: ExTorch.Tensor.t()
Returns a tensor filled with the scalar value 0, with the shape defined by the variable argument size.
Arguments
- size: a tuple/list of integers defining the shape of the output tensor.
Keyword args
- dtype (ExTorch.DType, optional): the desired data type of the returned tensor. Default: if nil, uses a global default (see ExTorch.set_default_tensor_type).
- layout (ExTorch.Layout, optional): the desired layout of the returned tensor. Default: :strided.
- device (ExTorch.Device, optional): the desired device of the returned tensor. Default: if nil, uses the current device for the default tensor type (see ExTorch.set_default_tensor_type). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
- requires_grad (boolean(), optional): if autograd should record operations on the returned tensor. Default: false.
- pin_memory (boolean(), optional): if set, the returned tensor will be allocated in pinned memory. Works only for CPU tensors. Default: false.
- memory_format (ExTorch.MemoryFormat, optional): the desired memory format of the returned tensor. Default: :contiguous.
Examples
iex> ExTorch.zeros({2, 3})
#Tensor< 0 0 0
0 0 0
[ CPUFloatType{2,3} ]>
iex> ExTorch.zeros({2, 3}, dtype: :uint8, device: :cpu)
#Tensor< 0 0 0
0 0 0
[ CPUByteType{2,3} ]>
iex> ExTorch.zeros({5})
#Tensor< 0
0
0
0
0
[ CPUFloatType{5} ]>
Tensor manipulation
unsqueeze(tensor, dim)
@spec unsqueeze( ExTorch.Tensor.t(), integer() ) :: ExTorch.Tensor.t()
Add a dimension of size one to a tensor at a given position.
Arguments
- tensor: input tensor (ExTorch.Tensor)
- dim: dimension (integer())
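A minimal sketch of the expected behavior (output shapes noted as comments, assuming PyTorch-like unsqueeze semantics; outputs omitted):
iex> t = ExTorch.ones({2, 3})
iex> ExTorch.unsqueeze(t, 0)  # result has shape {1, 2, 3}
iex> ExTorch.unsqueeze(t, 2)  # result has shape {2, 3, 1}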
Tensor indexing
index(tensor, indices)
@spec index( ExTorch.Tensor.t(), ExTorch.Index.t() ) :: ExTorch.Tensor.t()
Index a tensor given a list of integers, ranges, tensors, nil or :ellipsis.
Arguments
- tensor: input tensor (ExTorch.Tensor)
- indices: indices to select (ExTorch.Index)
Examples
iex> a = ExTorch.rand({3, 4, 4})
#Tensor<
(1,.,.) =
0.8974 0.6348 0.4760 0.0726
0.3809 0.4332 0.9761 0.4656
0.8544 0.0605 0.1683 0.4142
0.7736 0.1794 0.2732 0.3165
(2,.,.) =
0.1967 0.2013 0.7938 0.8738
0.0240 0.0098 0.4605 0.3970
0.9699 0.1057 0.3176 0.2651
0.7698 0.6383 0.0016 0.7198
(3,.,.) =
0.5061 0.0021 0.4804 0.7444
0.5725 0.2019 0.3524 0.5345
0.3876 0.3622 0.5318 0.0445
0.3276 0.2913 0.8069 0.6132
[ CPUDoubleType{3,4,4} ]
>
# Use an integer index
iex> ExTorch.index(a, 0)
#Tensor<
0.8974 0.6348 0.4760 0.0726
0.3809 0.4332 0.9761 0.4656
0.8544 0.0605 0.1683 0.4142
0.7736 0.1794 0.2732 0.3165
[ CPUDoubleType{4,4} ]
>
# Use a slice index
iex> ExTorch.index(a, 0..2)
#Tensor<
(1,.,.) =
0.8974 0.6348 0.4760 0.0726
0.3809 0.4332 0.9761 0.4656
0.8544 0.0605 0.1683 0.4142
0.7736 0.1794 0.2732 0.3165
(2,.,.) =
0.1967 0.2013 0.7938 0.8738
0.0240 0.0098 0.4605 0.3970
0.9699 0.1057 0.3176 0.2651
0.7698 0.6383 0.0016 0.7198
[ CPUDoubleType{2,4,4} ]
>
iex> ExTorch.index(a, ExTorch.slice(0, 1))
#Tensor<
(1,.,.) =
0.8974 0.6348 0.4760 0.0726
0.3809 0.4332 0.9761 0.4656
0.8544 0.0605 0.1683 0.4142
0.7736 0.1794 0.2732 0.3165
[ CPUDoubleType{1,4,4} ]
>
# Index multiple dimensions
iex> ExTorch.index(a, [:::, ExTorch.slice(0, 2), 0])
#Tensor<
0.8974 0.3809
0.1967 0.0240
0.5061 0.5725
[ CPUDoubleType{3,2} ]
>
Notes
For more information regarding the kinds of accepted indices and their corresponding behaviour, please see the ExTorch.Index documentation.
slice(start \\ nil, stop \\ nil, step \\ nil)
@spec slice(integer() | nil, integer() | nil, integer() | nil) :: ExTorch.Index.Slice.t()
Create a slice to index a tensor.
Arguments
- start: the starting slice value. Default: nil.
- stop: the non-inclusive end of the slice. Default: nil.
- step: the step between values. Default: nil.
Returns
- slice: an ExTorch.Index.Slice struct that represents the slice.
Notes
An empty slice will represent the "take-all axis", represented by ":" in Python.
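A small usage sketch combining slice/3 with ExTorch.index/2 (outputs omitted; the tensor values are random):
iex> a = ExTorch.rand({4, 4})
# Rows 1 and 2 (stop is non-inclusive)
iex> ExTorch.index(a, ExTorch.slice(1, 3))
# An empty slice takes the whole first axis; 0 then selects the first column
iex> ExTorch.index(a, [ExTorch.slice(), 0])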