Scholar.Metrics.Distance (Scholar v0.2.1)
Distance metrics between multi-dimensional tensors. All of the functions support computing the distance over any subset of axes.
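For example (a sketch, not one of the doctests below), per-row distances between two batches can be computed by aggregating over the feature axis with the :axes option:
iex> x = Nx.tensor([[1, 2, 5], [3, 4, 3]])
iex> y = Nx.tensor([[8, 3, 1], [2, 5, 2]])
iex> Scholar.Metrics.Distance.manhattan(x, y, axes: [1])
#Nx.Tensor<
f32[2]
[12.0, 3.0]
>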
Summary
Functions
Chebyshev or $L_{\infty}$ distance.
Cosine distance.
Standard Euclidean distance ($L_{2}$ distance).
Hamming distance.
Weighted version of the Hamming distance.
Manhattan, Taxicab, or $L_{1}$ distance.
Minkowski distance.
Squared Euclidean distance.
Functions
Chebyshev or $L_{\infty}$ distance.
$$ D(x, y) = \max_i |x_i - y_i| $$
Options
:axes
- Axes to calculate the distance over. By default the distance is calculated between the whole tensors.
Examples
iex> x = Nx.tensor([1, 2])
iex> y = Nx.tensor([3, 2])
iex> Scholar.Metrics.Distance.chebyshev(x, y)
#Nx.Tensor<
f32
2.0
>
iex> x = Nx.tensor([1, 2])
iex> y = Nx.tensor([1, 2])
iex> Scholar.Metrics.Distance.chebyshev(x, y)
#Nx.Tensor<
f32
0.0
>
iex> x = Nx.tensor([1, 2])
iex> y = Nx.tensor([1, 2, 3])
iex> Scholar.Metrics.Distance.chebyshev(x, y)
** (ArgumentError) tensors must be broadcast compatible, got tensors with shapes {2} and {3}
iex> x = Nx.tensor([[1, 2, 5], [3, 4, 3]])
iex> y = Nx.tensor([[8, 3, 1], [2, 5, 2]])
iex> Scholar.Metrics.Distance.chebyshev(x, y, axes: [1])
#Nx.Tensor<
f32[2]
[7.0, 1.0]
>
iex> x = Nx.tensor([[6, 2, 9], [2, 5, 3]])
iex> y = Nx.tensor([[8, 3, 1]])
iex> Scholar.Metrics.Distance.chebyshev(x, y)
#Nx.Tensor<
f32
8.0
>
Cosine distance.
$$ D(u, v) = 1 - \frac{u \cdot v}{\|u\|_2 \|v\|_2} $$
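Note that the quotient above is undefined when either vector is all zeros; the doctests below suggest the distance is reported as $1.0$ in that case.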
Options
:axes
- Axes to calculate the distance over. By default the distance is calculated between the whole tensors.
Examples
iex> x = Nx.tensor([1, 2])
iex> y = Nx.tensor([5, 2])
iex> Scholar.Metrics.Distance.cosine(x, y)
#Nx.Tensor<
f32
0.25259071588516235
>
iex> x = Nx.tensor([1, 2])
iex> y = Nx.tensor([1, 2, 3])
iex> Scholar.Metrics.Distance.cosine(x, y)
** (ArgumentError) tensors must be broadcast compatible, got tensors with shapes {2} and {3}
iex> x = Nx.tensor([[1, 2, 3], [0, 0, 0], [5, 2, 4]])
iex> y = Nx.tensor([[1, 5, 2], [2, 4, 1], [0, 0, 0]])
iex> Scholar.Metrics.Distance.cosine(x, y, axes: [1])
#Nx.Tensor<
f32[3]
[0.1704850196838379, 1.0, 1.0]
>
iex> x = Nx.tensor([[6, 2, 9], [2, 5, 3]])
iex> y = Nx.tensor([[8, 3, 1]])
iex> Scholar.Metrics.Distance.cosine(x, y)
#Nx.Tensor<
f32
0.10575336217880249
>
Standard Euclidean distance ($L_{2}$ distance).
$$ D(x, y) = \sqrt{\sum_i (x_i - y_i)^2} $$
Options
:axes
- Axes to calculate the distance over. By default the distance is calculated between the whole tensors.
Examples
iex> x = Nx.tensor([1, 2])
iex> y = Nx.tensor([3, 2])
iex> Scholar.Metrics.Distance.euclidean(x, y)
#Nx.Tensor<
f32
2.0
>
iex> x = Nx.tensor([1, 2])
iex> y = Nx.tensor([1, 2])
iex> Scholar.Metrics.Distance.euclidean(x, y)
#Nx.Tensor<
f32
0.0
>
iex> x = Nx.tensor([1, 2])
iex> y = Nx.tensor([1, 2, 3])
iex> Scholar.Metrics.Distance.euclidean(x, y)
** (ArgumentError) tensors must be broadcast compatible, got tensors with shapes {2} and {3}
iex> x = Nx.tensor([[1, 2, 5], [3, 4, 3]])
iex> y = Nx.tensor([[8, 3, 1], [2, 5, 2]])
iex> Scholar.Metrics.Distance.euclidean(x, y, axes: [0])
#Nx.Tensor<
f32[3]
[7.071067810058594, 1.4142135381698608, 4.123105525970459]
>
iex> x = Nx.tensor([[6, 2, 9], [2, 5, 3]])
iex> y = Nx.tensor([[8, 3, 1]])
iex> Scholar.Metrics.Distance.euclidean(x, y)
#Nx.Tensor<
f32
10.630146026611328
>
Hamming distance.
$$ D(x, y) = \frac{\left\lvert \{ x_{i, j, \ldots} \neq y_{i, j, \ldots} \} \right\rvert}{\left\lvert \{ x_{i, j, \ldots} \} \right\rvert} $$ i.e. the fraction of entries that differ, where $i, j, \ldots$ range over the aggregation axes.
Options
:axes
- Axes to calculate the distance over. By default the distance is calculated between the whole tensors.
Examples
iex> x = Nx.tensor([1, 0, 0])
iex> y = Nx.tensor([0, 1, 0])
iex> Scholar.Metrics.Distance.hamming(x, y)
#Nx.Tensor<
f32
0.6666666865348816
>
iex> x = Nx.tensor([1, 2])
iex> y = Nx.tensor([1, 2, 3])
iex> Scholar.Metrics.Distance.hamming(x, y)
** (ArgumentError) tensors must be broadcast compatible, got tensors with shapes {2} and {3}
iex> x = Nx.tensor([[1, 2, 3], [0, 0, 0], [5, 2, 4]])
iex> y = Nx.tensor([[1, 5, 2], [2, 4, 1], [0, 0, 0]])
iex> Scholar.Metrics.Distance.hamming(x, y, axes: [1])
#Nx.Tensor<
f32[3]
[0.6666666865348816, 1.0, 1.0]
>
iex> x = Nx.tensor([[6, 2, 9], [2, 5, 3]])
iex> y = Nx.tensor([[8, 3, 1]])
iex> Scholar.Metrics.Distance.hamming(x, y)
#Nx.Tensor<
f32
1.0
>
Weighted version of the Hamming distance.
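No formula is given for this variant; judging from the doctests below, it is the weighted fraction of mismatching entries (a sketch of the definition, not taken from the source):
$$ D(x, y, w) = \frac{\sum_i w_i \cdot \mathbb{1}[x_i \neq y_i]}{\sum_i w_i} $$
In the first example below, the mismatches at the first two positions contribute $1 + 0.5 = 1.5$ out of a total weight of $2.0$, giving $0.75$.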
Options
:axes
- Axes to calculate the distance over. By default the distance is calculated between the whole tensors.
Examples
iex> x = Nx.tensor([1, 0, 0])
iex> y = Nx.tensor([0, 1, 0])
iex> weights = Nx.tensor([1, 0.5, 0.5])
iex> Scholar.Metrics.Distance.hamming(x, y, weights)
#Nx.Tensor<
f32
0.75
>
iex> x = Nx.tensor([[6, 2, 9], [2, 5, 3]])
iex> y = Nx.tensor([[8, 3, 1]])
iex> weights = Nx.tensor([1, 0.5, 0.5])
iex> Scholar.Metrics.Distance.hamming(x, y, weights, axes: [1])
#Nx.Tensor<
f32[2]
[1.0, 1.0]
>
Manhattan, Taxicab, or $L_{1}$ distance.
$$ D(x, y) = \sum_i |x_i - y_i| $$
Options
:axes
- Axes to calculate the distance over. By default the distance is calculated between the whole tensors.
Examples
iex> x = Nx.tensor([1, 2])
iex> y = Nx.tensor([3, 2])
iex> Scholar.Metrics.Distance.manhattan(x, y)
#Nx.Tensor<
f32
2.0
>
iex> x = Nx.tensor([1.0, 2.0])
iex> y = Nx.tensor([1, 2])
iex> Scholar.Metrics.Distance.manhattan(x, y)
#Nx.Tensor<
f32
0.0
>
iex> x = Nx.tensor([1, 2])
iex> y = Nx.tensor([1, 2, 3])
iex> Scholar.Metrics.Distance.manhattan(x, y)
** (ArgumentError) tensors must be broadcast compatible, got tensors with shapes {2} and {3}
iex> x = Nx.tensor([[1, 2, 5], [3, 4, 3]])
iex> y = Nx.tensor([[8, 3, 1], [2, 5, 2]])
iex> Scholar.Metrics.Distance.manhattan(x, y, axes: [0])
#Nx.Tensor<
f32[3]
[8.0, 2.0, 5.0]
>
iex> x = Nx.tensor([[6, 2, 9], [2, 5, 3]])
iex> y = Nx.tensor([[8, 3, 1]])
iex> Scholar.Metrics.Distance.manhattan(x, y)
#Nx.Tensor<
f32
21.0
>
Minkowski distance.
$$ D(x, y) = \left(\sum_i |x_i - y_i|^p\right)^{\frac{1}{p}} $$
Options
:axes
- Axes to calculate the distance over. By default the distance is calculated between the whole tensors.
:p
- A positive parameter (order) of the Minkowski distance, or :infinity, in which case the Chebyshev metric is computed. The default value is 2.0.
Examples
iex> x = Nx.tensor([1, 2])
iex> y = Nx.tensor([5, 2])
iex> Scholar.Metrics.Distance.minkowski(x, y)
#Nx.Tensor<
f32
4.0
>
iex> x = Nx.tensor([1, 2])
iex> y = Nx.tensor([1, 2])
iex> Scholar.Metrics.Distance.minkowski(x, y)
#Nx.Tensor<
f32
0.0
>
iex> x = Nx.tensor([1, 2])
iex> y = Nx.tensor([1, 2, 3])
iex> Scholar.Metrics.Distance.minkowski(x, y)
** (ArgumentError) tensors must be broadcast compatible, got tensors with shapes {2} and {3}
iex> x = Nx.tensor([[1, 2, 5], [3, 4, 3]])
iex> y = Nx.tensor([[8, 3, 1], [2, 5, 2]])
iex> Scholar.Metrics.Distance.minkowski(x, y, p: 2.5, axes: [0])
#Nx.Tensor<
f32[3]
[7.021548271179199, 1.3195079565048218, 4.049539089202881]
>
iex> x = Nx.tensor([[6, 2, 9], [2, 5, 3]])
iex> y = Nx.tensor([[8, 3, 1]])
iex> Scholar.Metrics.Distance.minkowski(x, y, p: 2.5)
#Nx.Tensor<
f32
9.621805191040039
>
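Not part of the original doctests: since p: :infinity falls back to the Chebyshev metric, the result should agree with chebyshev on the same inputs, e.g. for the first example above:
iex> x = Nx.tensor([1, 2])
iex> y = Nx.tensor([5, 2])
iex> Scholar.Metrics.Distance.minkowski(x, y, p: :infinity)
#Nx.Tensor<
f32
4.0
>
iex> Scholar.Metrics.Distance.chebyshev(x, y)
#Nx.Tensor<
f32
4.0
>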
Squared Euclidean distance.
$$ D(x, y) = \sum_i (x_i - y_i)^2 $$
Options
:axes
- Axes to calculate the distance over. By default the distance is calculated between the whole tensors.
Examples
iex> x = Nx.tensor([1, 2])
iex> y = Nx.tensor([3, 2])
iex> Scholar.Metrics.Distance.squared_euclidean(x, y)
#Nx.Tensor<
f32
4.0
>
iex> x = Nx.tensor([1, 2])
iex> y = Nx.tensor([1.0, 2.0])
iex> Scholar.Metrics.Distance.squared_euclidean(x, y)
#Nx.Tensor<
f32
0.0
>
iex> x = Nx.tensor([1, 2])
iex> y = Nx.tensor([1, 2, 3])
iex> Scholar.Metrics.Distance.squared_euclidean(x, y)
** (ArgumentError) tensors must be broadcast compatible, got tensors with shapes {2} and {3}
iex> x = Nx.tensor([[1, 2, 5], [3, 4, 3]])
iex> y = Nx.tensor([[8, 3, 1], [2, 5, 2]])
iex> Scholar.Metrics.Distance.squared_euclidean(x, y, axes: [0])
#Nx.Tensor<
f32[3]
[50.0, 2.0, 17.0]
>
iex> x = Nx.tensor([[6, 2, 9], [2, 5, 3]])
iex> y = Nx.tensor([[8, 3, 1]])
iex> Scholar.Metrics.Distance.squared_euclidean(x, y)
#Nx.Tensor<
f32
113.0
>