DeepPipe2 v1.1.6 Cumatrix

Matrix and tensor computation using the CUDA and cuBLAS libraries.

Caution: every element of a matrix must be a float.

The tensor data structure is a 4-dimensional tensor (N,C,H,W) or a 3-dimensional tensor (C,H,W). N is the mini-batch size, C is the channel, H is the height of the image, and W is the width of the image.

Error codes:

N < 10000: bad-argument error; N is the argument number.
10000 <= N < 11000: CUDA error; N - 10000 is the CUDA error code.
N >= 11000: cuBLAS error; N - 11000 is the cuBLAS error code.
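
For illustration only, a hypothetical helper (not part of Cumatrix) that classifies an error number under this scheme:

defmodule ErrorDecode do
  # hypothetical sketch: map an error number to its category
  def decode(n) when n < 10000, do: {:bad_argument, n}
  def decode(n) when n < 11000, do: {:cuda_error, n - 10000}
  def decode(n), do: {:cublas_error, n - 11000}
end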

Summary

Functions

accuracy(mt1,ls) returns the accuracy rate as a float. mt1 is a set of row vectors, each of which is one-hot. ls is a list whose elements are integer labels.

activate(mt,fun) applies fun to mt. fun is :sigmoid, :tanh, :relu, or :softmax.

adagrad(mt1,mt2,mt3,lr) for each element, h = mt2 + mt3*mt3 and w = mt1 - lr * (1/sqrt(h)) * mt3. Returns the tuple {h,w} for learn/3 in DeepPipe2.

add(mt1,mt2) generates the matrix mt1+mt2. If mt1 or mt2 is a row vector, it is expanded to the matrix size. This function is for the bias add in DL.

add_diff(mt,r,c,val) elt(mt,r,c) := elt(mt,r,c) + val.

analizer(mt,id) analizer(ts,id) displays the id number, max element, min element, and average. For debugging.

average(mt) calculates the average of the row vectors and generates a row vector whose elements are the averages. For deep learning.

convolute(ts1,ts2,st_h,st_w,pad) convolution with input tensor ts1, filter tensor ts2, stride (st_h,st_w), and padding pad.

correct(mt1,ls) returns the number of correct answers as an integer. mt1 is a set of row vectors, each of which is one-hot. ls is a list whose elements are integer labels.

deconvolute(ts1,ts2,st_h,st_w,pad) deconvolution with loss tensor ts1 (1st arg), filter tensor ts2 (2nd arg), stride (st_h,st_w), and padding pad.

diff(mt1,mt2,fun) for each element, multiplies mt1 by the derivative of fun applied to mt2. fun is :sigmoid, :tanh, or :relu.

generate mask matrix or tensor for dropout

elt(r,c,mt) picks up the element at mt(r,c). Indices are 1-based.

emult(mt1,mt2) generates the Hadamard (element-wise) product of mt1 and mt2.

full(ts) transforms a 4-dimensional tensor into a matrix.

gradfilter(ts1,ts2,ts3,st_h,st_w,pad) gradient by backpropagation. ts1 is the input tensor, ts2 is the filter tensor, ts3 is the loss tensor, st_h and st_w are the stride sizes, and pad is the padding size. Calculates the gradient of the filter.

ident(n) generates the identity matrix of size n.

if_equal(mt1,mt2) is_equal(ts1,ts2) for debugging.

is_matrix(x) returns true if x is a matrix, otherwise false.

is_near(mt1,mt2) is_near(ts1,ts2) for debugging.

is_tensor(x) returns true if x is a tensor, otherwise false.

loss(mt1,mt2,fun) generates a float that is the average loss. mt1 is the forwarded (calculated) data matrix and mt2 is the train-data matrix; each datum is a row vector. fun is :square or :cross; :square means the mean-square function and :cross means the cross-entropy function.

momentum(mt1,mt2,mt3,lr) for each element, v = 0.5 * mt2(x,y) - lr * mt3(x,y) and w = mt1 + v. Returns the tuple {v,w} for learn/3 in DeepPipe2.

mult(mt1,mt2) generates the matrix product mt1*mt2 with cuBLAS. If mt1 or mt2 is a float number s, generates the matrix whose elements are s * elt(x,y) (scalar multiplication).

new(list) generates a matrix from the given list, e.g. [[1,2],[3,4]]. The list may also express 4-dimensional or 3-dimensional data.

new(r,c) generates a matrix with the given row and column sizes. Each element is zero.

new(c,h,w) generates a tensor with the given size.

new(n,c,h,w) generates a tensor with the given size.

new(n,c,h,w,val) generates a tensor with the given size. Each element is val.

iex(1)> Cumatrix.nth([1,2,3],2) 2

pooling(tensor,st_h,st_w) pooling with stride (st_h,st_w). The sizes of H and W must each be less than 1000 (max 999*999). Returns the tuple {tensor-for-forward,tensor-for-backward}.

print(mt) print(ts) print matrix mt or tensor ts

rand(r,c) generates a matrix of random numbers (Box-Muller).

rand(c,h,w) generates 3-dimensional random data.

rand(n,c,h,w) generates 4-dimensional random data.

random_select(mt1,mt2,n) randomly selects n rows, taking the same rows from matrix mt1 and matrix mt2.

iex(1)> Cumatrix.reshape([1,2,3,4,5,6],[2,3]) [[1, 2, 3], [4, 5, 6]] iex(2)> Cumatrix.reshape([1,2,3,4,5,6],[1,2,3]) [[[1, 2, 3], [4, 5, 6]]]

set(mt,r,c,val) elt(mt,r,c) := val.

sgd(mt1,mt2,lr,dr) for each element computes mt1 - mt2*lr, then applies dropout with rate dr.

size(mt) or size(tensor) returns the size as the tuple {rowsize,colsize}.

normalizer(ts) calculates the average of the nth datum and subtracts the average from each element. For a matrix, it does nothing.

sub(mt1,mt2) generates the matrix mt1-mt2. Tensors are also accepted.

sum(mt) returns the sum of the elements.

to_list(mt) to_list(tensor) returns the list transformed from the matrix or tensor. The tensor is 3-dimensional or 4-dimensional.

trace(mt) returns the trace of the matrix as a float.

transpose(mt) generates the transposed matrix.

unfull(mt,h,w) transforms a matrix into a 4-dimensional tensor (N,C,H,W). N is the row size of the matrix and C is 1.

unpooling(ts1,ts2,st_h,st_w) unpooling with stride (st_h,st_w). ts1 is a sparse tensor that saves the indices of the max elements. ts2 is the loss tensor.

visualizer(ts,n,c) displays a heatmap of the nth datum's channel c data. It depends on Matrex.heatmap.

Functions

accuracy(mt1,ls) returns the accuracy rate as a float. mt1 is a set of row vectors, each of which is one-hot. ls is a list whose elements are integer labels.

e.g.

iex(1)> a = Cumatrix.new([[0.0,0.0,1.0],[0.0,0.1,0.3]])
{2, 3, <<0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 205, 204, 204, 61, 0, 0, 128, 63, 154, 153, 153, 62>>}
iex(3)> Cumatrix.accuracy(a,[2,2])
1.0
iex(4)> Cumatrix.accuracy(a,[2,1])
0.5

activate(mt,fun) applies fun to mt. fun is :sigmoid, :tanh, :relu, or :softmax.
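
A minimal usage sketch with made-up values:

m = Cumatrix.new([[-1.0, 0.0, 1.0]])
r = Cumatrix.activate(m, :relu)   # negative elements become 0.0
Cumatrix.print(r)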

adagrad(arg1, arg2, arg3, lr)

adagrad(mt1,mt2,mt3,lr) for each element, h = mt2 + mt3*mt3 and w = mt1 - lr * (1/sqrt(h)) * mt3. Returns the tuple {h,w} for learn/3 in DeepPipe2.
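
A hedged usage sketch; the argument order (weight, accumulated h, gradient) is inferred from the formula above:

w = Cumatrix.rand(2, 2)       # current weights
h = Cumatrix.new(2, 2)        # accumulated squared gradients, initially zeros
grad = Cumatrix.rand(2, 2)    # current gradient
{h1, w1} = Cumatrix.adagrad(w, h, grad, 0.01)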

add(mt1,mt2) generates the matrix mt1+mt2. If mt1 or mt2 is a row vector, it is expanded to the matrix size. This function is for the bias add in DL.
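
A sketch of the bias-add use case with made-up values:

x = Cumatrix.new([[1.0, 2.0], [3.0, 4.0]])
b = Cumatrix.new([[0.1, 0.2]])    # bias as a row vector
y = Cumatrix.add(x, b)            # b is expanded and added to every row of x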

add_diff(arg, x, y, val)

add_diff(mt,r,c,val) elt(mt,r,c) := elt(mt,r,c) + val.

add_diff(arg, n1, c1, h1, w1, val)
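
A small sketch of the matrix form, assuming 1-based indices as in elt:

m = Cumatrix.new(2, 2)                 # 2x2 matrix of zeros
m1 = Cumatrix.add_diff(m, 1, 1, 0.5)   # elt(m1,1,1) == elt(m,1,1) + 0.5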

analizer(mt,id) analizer(ts,id) displays the id number, max element, min element, and average. For debugging.

average(mt) calculates the average of the row vectors and generates a row vector whose elements are the averages. For deep learning.

convolute(arg1, arg2, st_h, st_w, pad)

convolute(ts1,ts2,st_h,st_w,pad) convolution with input tensor ts1, filter tensor ts2, stride (st_h,st_w), and padding pad.
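
A usage sketch; the output size (8 - 3)/1 + 1 = 6 follows the usual convolution arithmetic and is an assumption here:

x = Cumatrix.rand(1, 1, 8, 8)           # input tensor (N,C,H,W)
f = Cumatrix.rand(1, 1, 3, 3)           # filter tensor
y = Cumatrix.convolute(x, f, 1, 1, 0)   # stride (1,1), padding 0 -> (1,1,6,6)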

correct(mt1,ls) returns the number of correct answers as an integer. mt1 is a set of row vectors, each of which is one-hot. ls is a list whose elements are integer labels.

e.g.

iex(1)> a = Cumatrix.new([[0.0,0.0,1.0],[0.0,0.1,0.3]])
{2, 3, <<0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 205, 204, 204, 61, 0, 0, 128, 63, 154, 153, 153, 62>>}
iex(3)> Cumatrix.correct(a,[2,2])
2.0
iex(4)> Cumatrix.correct(a,[2,1])
1.0

deconvolute(arg1, arg2, st_h, st_w, pad)

deconvolute(ts1,ts2,st_h,st_w,pad) deconvolution with loss tensor ts1 (1st arg), filter tensor ts2 (2nd arg), stride (st_h,st_w), and padding pad.

diff(mt1,mt2,fun) for each element, multiplies mt1 by the derivative of fun applied to mt2. fun is :sigmoid, :tanh, or :relu.
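
A sketch of the backward step with made-up values:

z = Cumatrix.new([[0.5, -0.5]])   # pre-activation values
g = Cumatrix.new([[1.0, 1.0]])    # incoming gradient
d = Cumatrix.diff(g, z, :relu)    # g multiplied elementwise by relu'(z)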

generate mask matrix or tensor for dropout

elt(r,c,mt) picks up the element at mt(r,c). Indices are 1-based.

emult(mt1,mt2) generates the Hadamard (element-wise) product of mt1 and mt2.
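
For example, a sketch with made-up values:

a = Cumatrix.new([[1.0, 2.0]])
b = Cumatrix.new([[3.0, 4.0]])
c = Cumatrix.emult(a, b)   # element-wise product: [[3.0, 8.0]]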

full(ts) transforms a 4-dimensional tensor into a matrix.

gradfilter(arg1, arg2, arg3, st_h, st_w, pad)

gradfilter(ts1,ts2,ts3,st_h,st_w,pad) gradient by backpropagation. ts1 is the input tensor, ts2 is the filter tensor, ts3 is the loss tensor, st_h and st_w are the stride sizes, and pad is the padding size. Calculates the gradient of the filter. A usage sketch follows the argument list.

1st arg: input tensor
2nd arg: filter tensor
3rd arg: loss tensor
4th arg: stride height
5th arg: stride width
6th arg: padding size
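
A hedged sketch reusing the shapes from the convolute example; the loss-tensor shape is assumed to match the convolution output:

x = Cumatrix.rand(1, 1, 8, 8)      # input tensor
f = Cumatrix.rand(1, 1, 3, 3)      # filter tensor
loss = Cumatrix.rand(1, 1, 6, 6)   # loss tensor
g = Cumatrix.gradfilter(x, f, loss, 1, 1, 0)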

ident(n) generates the identity matrix of size n.

if_equal(mt1,mt2) is_equal(ts1,ts2) for debugging.

is_matrix(x) returns true if x is a matrix, otherwise false.

is_near(mt1,mt2) is_near(ts1,ts2) for debugging.

is_tensor(x) returns true if x is a tensor, otherwise false.

loss(mt1,mt2,fun) generates a float that is the average loss. mt1 is the forwarded (calculated) data matrix and mt2 is the train-data matrix; each datum is a row vector. fun is :square or :cross; :square means the mean-square function and :cross means the cross-entropy function.
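
A minimal sketch, assuming the three-argument form described above:

y = Cumatrix.new([[0.2, 0.8]])    # forwarded (calculated) data
t = Cumatrix.new([[0.0, 1.0]])    # train data
l = Cumatrix.loss(y, t, :cross)   # average cross-entropy as a float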

momentum(arg1, arg2, arg3, lr)

momentum(mt1,mt2,mt3,lr) for each element, v = 0.5 * mt2(x,y) - lr * mt3(x,y) and w = mt1 + v. Returns the tuple {v,w} for learn/3 in DeepPipe2.
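
A hedged usage sketch; the argument order (weight, velocity, gradient) is inferred from the formula above:

w = Cumatrix.rand(2, 2)      # current weights
v = Cumatrix.new(2, 2)       # velocity, initially zeros
grad = Cumatrix.rand(2, 2)   # current gradient
{v1, w1} = Cumatrix.momentum(w, v, grad, 0.01)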

mult(mt1,mt2) generates the matrix product mt1*mt2 with cuBLAS. If mt1 or mt2 is a float number s, generates the matrix whose elements are s * elt(x,y) (scalar multiplication).
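
For example, a sketch; multiplying a by the identity matrix returns a product equal to a:

a = Cumatrix.new([[1.0, 2.0], [3.0, 4.0]])
i = Cumatrix.ident(2)
p = Cumatrix.mult(a, i)     # matrix product via cuBLAS
s = Cumatrix.mult(2.0, a)   # scalar case: each element doubled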

new(list) generates a matrix from the given list, e.g. [[1,2],[3,4]]. The list may also express 4-dimensional or 3-dimensional data.

new(r,c) generates a matrix with the given row and column sizes. Each element is zero.

new(c,h,w) generates a tensor with the given size.

new(n,c,h,w) generates a tensor with the given size.

new(n,c,h,w,val) generates a tensor with the given size. Each element is val.
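
The variants side by side, as a sketch:

m = Cumatrix.new([[1.0, 2.0], [3.0, 4.0]])   # from a list
z = Cumatrix.new(2, 3)                       # 2x3 matrix of zeros
t = Cumatrix.new(1, 1, 2, 2, 0.5)            # (N,C,H,W) tensor filled with 0.5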

nth(ls,n) returns the nth element of list ls.

iex(1)> Cumatrix.nth([1,2,3],2)
2

pooling(arg, st_h, st_w)

pooling(tensor,st_h,st_w) pooling with stride (st_h,st_w). The sizes of H and W must each be less than 1000 (max 999*999). Returns the tuple {tensor-for-forward,tensor-for-backward}.
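
A usage sketch; that 2x2 pooling halves H and W is an assumption here:

x = Cumatrix.rand(1, 1, 4, 4)
{forward, backward} = Cumatrix.pooling(x, 2, 2)   # result tensor and index tensor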

print(mt) print(ts) prints the matrix mt or the tensor ts.

rand(r,c) generates a matrix of random numbers (Box-Muller).

rand(c,h,w) generates 3-dimensional random data.

rand(n,c,h,w) generates 4-dimensional random data.

random_select(arg1, arg2, n)

random_select(mt1,mt2,n) randomly selects n rows, taking the same rows from matrix mt1 and matrix mt2.

iex(1)> Cumatrix.reshape([1,2,3,4,5,6],[2,3])
[[1, 2, 3], [4, 5, 6]]
iex(2)> Cumatrix.reshape([1,2,3,4,5,6],[1,2,3])
[[[1, 2, 3], [4, 5, 6]]]

set(mt,r,c,val) elt(mt,r,c) := val.

sgd(mt1,mt2,lr,dr) for each element computes mt1 - mt2*lr, then applies dropout with rate dr.
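
A sketch; that dr = 0.0 disables dropout is an assumption:

w = Cumatrix.rand(2, 2)      # weights
grad = Cumatrix.rand(2, 2)   # gradient
w1 = Cumatrix.sgd(w, grad, 0.01, 0.0)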

size(mt) or size(tensor) returns the size as the tuple {rowsize,colsize}.

normalizer(ts) calculates the average of the nth datum and subtracts the average from each element. For a matrix, it does nothing.

sub(mt1,mt2) generates the matrix mt1-mt2. Tensors are also accepted.

sum(mt) returns the sum of the elements.

to_list(mt) to_list(tensor) returns the list transformed from the matrix or tensor. The tensor is 3-dimensional or 4-dimensional.

trace(mt) returns the trace of the matrix as a float.

transpose(mt) generates the transposed matrix.

unfull(mt,h,w) transforms a matrix into a 4-dimensional tensor (N,C,H,W). N is the row size of the matrix and C is 1.
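
A round-trip sketch with full; the 2 x 9 matrix shape (N rows, C*H*W columns) is an assumption:

t = Cumatrix.rand(2, 1, 3, 3)   # (N,C,H,W) tensor
m = Cumatrix.full(t)            # flattened to a 2 x 9 matrix
t2 = Cumatrix.unfull(m, 3, 3)   # back to (2,1,3,3)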

unpooling(arg1, arg2, st_h, st_w)

unpooling(ts1,ts2,st_h,st_w) unpooling with stride (st_h,st_w). ts1 is a sparse tensor that saves the indices of the max elements. ts2 is the loss tensor.
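
A sketch chaining pooling and unpooling; the loss-tensor shape is assumed to match the pooled output:

x = Cumatrix.rand(1, 1, 4, 4)
{y, idx} = Cumatrix.pooling(x, 2, 2)   # idx saves the max-element positions
loss = Cumatrix.rand(1, 1, 2, 2)
back = Cumatrix.unpooling(idx, loss, 2, 2)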

visualizer(ts,n,c) displays a heatmap of the nth datum's channel c data. It depends on Matrex.heatmap.