Torchx.Backend (Torchx v0.3.0)
An opaque Nx backend with bindings to libtorch/PyTorch.

Torchx behaviour that differs from BinaryBackend:
Torchx doesn't support u16/u32/u64. Only u8 is supported.
    iex> Nx.tensor([1, 2, 3], type: {:u, 16}, backend: Torchx.Backend)
    ** (ArgumentError) Torchx does not support unsigned 16 bit integer
Torchx doesn't support u8 on sums; you should convert the input to a signed integer.
    iex> Nx.sum(Nx.tensor([1, 2, 3], type: {:u, 8}, backend: Torchx.Backend))
    ** (ArgumentError) Torchx does not support unsigned 64 bit integer (explicitly cast the input tensor to a signed integer before taking sum)
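A minimal sketch of that workaround, casting the u8 tensor to a signed type with Nx.as_type/2 before summing (printed tensor output omitted; the choice of {:s, 32} is just an example):

    iex> t = Nx.tensor([1, 2, 3], type: {:u, 8}, backend: Torchx.Backend)
    iex> t |> Nx.as_type({:s, 32}) |> Nx.sum()
    # sums to 6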
Torchx rounds half-to-even, while Elixir rounds half-away-from-zero. So in Elixir round(0.5) == 1.0, while in Torchx round(0.5) == 0.0.
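A minimal sketch of the difference using Nx.round/1 (results shown as comments rather than exact inspect output):

    iex> Nx.round(Nx.tensor([0.5, 1.5], backend: Nx.BinaryBackend))
    # half-away-from-zero: [1.0, 2.0]
    iex> Nx.round(Nx.tensor([0.5, 1.5], backend: Torchx.Backend))
    # half-to-even: [0.0, 2.0]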
Nx.as_type/2 converts non-finite values as follows: infinity becomes the maximum value for a type, negative infinity becomes the minimum value, and NaN becomes zero. Torchx behaviour is type dependent, with no clear rule across types.
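A sketch of the rule described above on the binary backend (Torchx output may differ by type; this assumes an Nx version where Nx.tensor/1 accepts the :infinity, :neg_infinity, and :nan atoms):

    iex> t = Nx.tensor([:infinity, :neg_infinity, :nan], backend: Nx.BinaryBackend)
    iex> Nx.as_type(t, {:s, 8})
    # infinity -> 127 (max), neg_infinity -> -128 (min), nan -> 0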
Options
:device - Defaults to :cpu. An atom representing the device for the allocation of a given tensor. Valid values can be seen at the main Torchx docs.
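A minimal sketch of passing the option, either per tensor or as the default backend (the :cuda device here is an assumption and requires a CUDA-enabled libtorch build):

    iex> Nx.tensor([1, 2, 3], backend: {Torchx.Backend, device: :cpu})
    iex> Nx.default_backend({Torchx.Backend, device: :cuda})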