DeepPipe2 v1.1.6 Network

defnetwork is a macro that describes a network.

The argument must start with an underscore to avoid a compiler warning about an unused variable.

defnetwork name(_x) do
  _x |> element of network |> ...
end
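For example, a small fully connected network could be written as follows (the layer sizes 784, 100, and 10 are illustrative assumptions, not part of the original text):

```elixir
# Sketch: a 2-layer DNN built from the elements listed below.
# Sizes (784 inputs, 100 hidden units, 10 outputs) are assumptions.
defnetwork init_network1(_x) do
  _x
  |> w(784, 100)
  |> b(100)
  |> relu
  |> w(100, 10)
  |> b(10)
  |> softmax
end
```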

element

  • w(r,c) weight matrix with row size r and column size c. Initial values are random * 0.1; default learning rate is 0.1.
  • w(r,c,ir,lr) ir is the rate multiplied with the initial random values, lr is the learning rate.
  • w(r,c,ir,lr,dr) dr is the dropout rate.
  • b(n) bias row vector of size n. Initial values are random * 0.1; default learning rate is 0.1.
  • b(n,ir,lr) ir is the rate multiplied with the initial random values, lr is the learning rate.
  • b(n,ir,lr,dr) dr is the dropout rate.
  • activation functions: relu, sigmoid, tanh, softmax.
  • f(r,c) filter matrix with row size r and column size c. Input and output channels are 1. Initial values are random * 0.1; default learning rate is 0.1.
  • f(r,c,i) filter matrix. i is the number of input channels.
  • f(r,c,i,o) filter matrix. o is the number of output channels.
  • f(r,c,i,o,{st_h,st_w}) filter matrix. st_h and st_w are the stride sizes in height and width.
  • f(r,c,i,o,{st_h,st_w},pad) filter matrix. pad is the padding size.
  • f(r,c,i,o,{st_h,st_w},pad,{:xavier,dim},lr) filter matrix. Initial elements are generated by the Xavier method. dim is the dimension of the input, lr is the learning rate.
  • f(r,c,i,o,{st_h,st_w},pad,{:he,dim},lr) filter matrix. Initial elements are generated by the He method. dim is the dimension of the input, lr is the learning rate.
  • f(r,c,i,o,{st_h,st_w},pad,ir,lr) filter matrix. ir is the rate for the initial values, lr is the learning rate.
  • f(r,c,i,o,{st_h,st_w},pad,ir,lr,dr) filter matrix. dr is the dropout rate.
  • pooling(st_h,st_w) st_h and st_w are the pooling sizes.
  • full converts from the CNN image representation to a matrix for the DNN part.
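Combining the elements above, a minimal CNN might look like this (filter sizes, channel counts, and the flattened size 1568 = 8 x 14 x 14 for an assumed 1-channel 28x28 input are illustrative, not from the original):

```elixir
# Sketch: a small CNN for a 1-channel 28x28 input (assumed shape).
defnetwork init_cnn(_x) do
  _x
  |> f(3, 3, 1, 8, {1, 1}, 1)  # 3x3 filter, 1 in-channel, 8 out-channels, stride 1, pad 1
  |> relu
  |> pooling(2, 2)             # 28x28 -> 14x14
  |> full                      # flatten 8 x 14x14 maps into a row of 1568
  |> w(1568, 10)
  |> b(10)
  |> softmax
end
```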

for debug

  • analizer(n) calculates the max, min, and average of the data and displays them together with the label n.
  • visualizer(n,c) displays one datum (the n-th datum, channel c) as graphics.
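The debug elements can be spliced between layers of a network. A hedged sketch (the layer choices and arguments are illustrative):

```elixir
# Sketch: inspecting the data flowing between two layers.
defnetwork debug_net(_x) do
  _x
  |> f(3, 3)
  |> analizer(1)       # prints max/min/average of the data, labeled 1
  |> visualizer(1, 1)  # draws the 1st datum, channel 1, as graphics
  |> relu
end
```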

data structure

network
[{:weight,w,ir,lr,dr,v,mask},{:bias,b,ir,lr,dr,v,mask},{:function,name},{:filter,w,{st_h,st_w},pad,ir,lr,dr,v,mask} ...]
weight
{:weight,w,ir,lr,dr,v,mask} w is the weight matrix, ir is the rate for the initial random numbers,
lr is the learning rate, dr is the dropout rate.
bias
{:bias,b,ir,lr,dr,v,mask} b is a row vector.
function
{:function,name} name is one of sigmoid, tanh, relu, softmax.
filter
{:filter,w,{st_h,st_w},pad,ir,lr,dr,v,mask}
pooling
{:pooling,st_h,st_w}
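As a sketch of this internal form, a network of one weight, one bias, and an activation would expand to a list of such tuples. The defaults ir = 0.1, lr = 0.1, and dr = 0.0 are inferred from the element descriptions above, and w, v, and mask stand for matrices (abbreviated here):

```elixir
# Illustrative internal form of w(2,2) |> b(2) |> relu (values abbreviated).
[
  {:weight, w, 0.1, 0.1, 0.0, v, mask},
  {:bias, b, 0.1, 0.1, 0.0, v, mask},
  {:function, :relu}
]
```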

Functions

defnetwork(name, list) (macro)