Matrex.Algorithms (matrex v0.6.7)

Machine learning algorithms using matrices.

Summary

Functions

Function of a surface with two hills

Minimizes a continuous differentiable multivariate function

Logistic regression cost and gradient function with regularization from Andrew Ng’s course (ex3)

The same cost function, implemented with operators from the Matrex.Operators module

Cost function for neural network with one hidden layer

Predict labels for the features x with pre-trained neuron coefficients theta1 and theta2

Run logistic regression one-vs-all MNIST digits recognition in parallel

Run neural network with one hidden layer

Computes sigmoid gradient for the given matrix

Functions

Function of a surface with two hills.

fmincg(f, x, fParams, length)
fmincg(
  (Matrex.t(), any(), pos_integer() -> {float(), Matrex.t()}),
  Matrex.t(),
  any(),
  integer()
) :: {Matrex.t(), [float()], pos_integer()}

Minimizes a continuous differentiable multivariate function.

Ported to Elixir from the Octave version found in Andrew Ng’s course, (c) Carl Edward Rasmussen.

f — cost function that takes two parameters: the current value of x and fParams. For example, lr_cost_fun/2.

x — vector of parameters to optimize, so that the cost function returns its minimum value.

fParams — this value is passed as the second parameter to the cost function.

length — number of iterations to perform.

Returns the column matrix of found solutions, the list of cost function values, and the number of iterations used.

The starting point is given by x (D by 1), and the function f must return a function value and a vector of partial derivatives. The Polack-Ribiere flavour of conjugate gradients is used to compute search directions, and a line search using quadratic and cubic polynomial approximations and the Wolfe-Powell stopping criteria is used together with the slope ratio method for guessing initial step sizes. Additionally, a number of checks are made to make sure that exploration is taking place and that extrapolation will not be unboundedly large.
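For illustration, here is a hedged sketch of wiring lr_cost_fun/3 into fmincg/4 on a tiny synthetic dataset. The shape conventions (X with a leading bias column, y as a 0/1 column) and the fourth element of the params tuple (assumed to be used only for logging) are assumptions based on the lr_cost_fun typespec below, not part of this documentation.

x = Matrex.new([[1, 0.5], [1, 1.5], [1, 2.5], [1, 3.5]])  # 4 examples, bias column first (assumption)
y = Matrex.new([[0], [0], [1], [1]])                      # 0/1 labels
lambda = 0.01                                             # regularization strength

{_rows, cols} = Matrex.size(x)
theta0 = Matrex.zeros(cols, 1)                            # start from all-zero parameters

{theta, costs, iterations} =
  Matrex.Algorithms.fmincg(&Matrex.Algorithms.lr_cost_fun/3, theta0, {x, y, lambda, 0}, 100)

Here theta is the optimized parameter vector, costs is the list of cost values per iteration, and iterations is the number of iterations actually used.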

lr_cost_fun(theta, params, iteration \\ 0)
lr_cost_fun(
  Matrex.t(),
  {Matrex.t(), Matrex.t(), number(), non_neg_integer()},
  pos_integer()
) :: {float(), Matrex.t()}

Logistic regression cost and gradient function with regularization from Andrew Ng’s course (ex3).

Computes the cost of using theta as the parameter for regularized logistic regression and the gradient of the cost w.r.t. the parameters.

Compatible with the fmincg/4 algorithm from this module.

theta — parameters to compute the cost for.

X — training data input.

y — training data output.

lambda — regularization parameter.
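For reference, with h = sigmoid(X * theta) this is the standard regularized logistic regression cost from ex3, J(theta) = (-y' * log(h) - (1 - y)' * log(1 - h)) / m + lambda / (2 * m) * sum(theta_j^2), with theta_0 excluded from the regularization term. A minimal, hedged call sketch (the fourth tuple element is assumed to be a label used only for logging):

x = Matrex.new([[1, 0.5], [1, 1.5], [1, 2.5]])  # 3 examples, bias column first (assumption)
y = Matrex.new([[0], [0], [1]])
theta = Matrex.zeros(2, 1)

# Returns the scalar cost and the gradient (same shape as theta).
{cost, gradient} = Matrex.Algorithms.lr_cost_fun(theta, {x, y, 0.01, 0})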

lr_cost_fun_ops(theta, params, iteration \\ 0)

The same cost function, implemented with operators from the Matrex.Operators module.

Works 2 times slower than the standard implementation, but is far more readable.

nn_cost_fun(theta, params, iteration \\ 0)
nn_cost_fun(
  Matrex.t(),
  {pos_integer(), pos_integer(), pos_integer(), Matrex.t(), Matrex.t(),
   number()},
  pos_integer()
) :: {number(), Matrex.t()}

Cost function for neural network with one hidden layer.

Does delta computation in parallel.

Ported from Andrew Ng’s course, ex4.
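A hedged call sketch follows. The ordering of the params tuple as {input_layer_size, hidden_layer_size, num_labels, X, y, lambda} and the layout of theta as a single unrolled column holding both weight matrices are assumptions drawn from ex4, not confirmed by the spec above.

input_size = 4
hidden_size = 3
num_labels = 2

x = Matrex.random(5, input_size)            # 5 training examples (no bias column, assumption)
y = Matrex.new([[1], [2], [1], [2], [1]])   # class labels 1..num_labels

# Unrolled weights: theta1 is hidden_size x (input_size + 1),
# theta2 is num_labels x (hidden_size + 1) (assumption from ex4).
theta_size = hidden_size * (input_size + 1) + num_labels * (hidden_size + 1)
theta = Matrex.random(theta_size, 1)

{cost, gradient} =
  Matrex.Algorithms.nn_cost_fun(theta, {input_size, hidden_size, num_labels, x, y, 0.5})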

nn_predict(theta1, theta2, x)
nn_predict(Matrex.t(), Matrex.t(), Matrex.t()) :: Matrex.t()

Predict labels for the features x with pre-trained neuron coefficients theta1 and theta2.
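A hedged usage sketch, assuming theta1 is hidden_size x (input_size + 1), theta2 is num_labels x (hidden_size + 1) (as in ex4, normally taken from a training run rather than random as here), and x holds one example per row:

theta1 = Matrex.random(3, 5)   # untrained weights, used only to show the assumed shapes
theta2 = Matrex.random(2, 4)
x = Matrex.random(10, 4)       # 10 examples with 4 features each

predictions = Matrex.Algorithms.nn_predict(theta1, theta2, x)  # predicted labels, one per row of x (assumption)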

run_lr(iterations \\ 56, concurrency \\ 1)

Run logistic regression one-vs-all MNIST digits recognition in parallel.

run_nn(epsilon \\ 0.12, iterations \\ 100, lambdas \\ [0.1, 5, 50])

Run neural network with one hidden layer.

sigmoid_gradient(z)
sigmoid_gradient(Matrex.t()) :: Matrex.t()

Computes the sigmoid gradient for the given matrix.

g = sigmoid(z) * (1 - sigmoid(z)), applied element-wise.
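A quick sanity check, since sigmoid(0) = 0.5 and 0.5 * (1 - 0.5) = 0.25:

z = Matrex.new([[0.0, 1.0]])
Matrex.Algorithms.sigmoid_gradient(z)  # 1x2 matrix: 0.25 and roughly 0.1966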