@nrailgun 2015-10-18T12:40:39.000000Z

CNNVR: Neural Networks, backpropagation

Machine Learning


Chain Rule

$$\frac{\partial f}{\partial x} = \frac{\partial f}{\partial g} \frac{\partial g}{\partial x}, \quad \text{for } f(g(x)).$$
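The chain rule can be checked numerically. A minimal sketch, using hypothetical functions $f(g) = g^2$ and $g(x) = 3x + 1$ (not from the notes):

```python
# Chain rule sketch for f(g(x)) with f(g) = g**2, g(x) = 3x + 1.
# df/dx = (df/dg) * (dg/dx) = 2*g(x) * 3.
def g(x):
    return 3 * x + 1

def f(gx):
    return gx ** 2

x = 2.0
gx = g(x)              # forward pass: g(2) = 7
df_dg = 2 * gx         # local gradient of f at g(x)
dg_dx = 3.0            # local gradient of g
df_dx = df_dg * dg_dx  # chain rule
```

A finite-difference check confirms the analytic gradient matches the numeric one.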

NN

A neuron's output (carried along its axon):

$$a = f\left(\sum_i w_i x_i + b\right)$$
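As a sketch, the weighted-sum-plus-activation neuron can be written directly; the weights, input, and bias below are arbitrary illustrative values:

```python
import numpy as np

def neuron_forward(x, w, b, f):
    """Single neuron: a = f(sum_i w_i * x_i + b)."""
    return f(np.dot(w, x) + b)

# Arbitrary example values (not from the notes).
x = np.array([1.0, 2.0, 3.0])
w = np.array([0.5, -0.5, 0.25])
b = 0.25
a = neuron_forward(x, w, b, lambda z: max(0.0, z))  # ReLU activation
```

Any activation `f` (sigmoid, tanh, ReLU) can be passed in; the sections below compare the common choices.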

Sigmoid Function

$$\frac{\partial \sigma(x)}{\partial x} = [1 - \sigma(x)]\,\sigma(x)$$

for the sigmoid function $\sigma(x) = \dfrac{1}{1 + \exp(-x)}$.
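A minimal sketch of the sigmoid and its derivative, using the identity $\sigma'(x) = [1 - \sigma(x)]\,\sigma(x)$:

```python
import math

def sigmoid(x):
    # sigma(x) = 1 / (1 + exp(-x))
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    # Derivative reuses the forward value: (1 - sigma) * sigma.
    s = sigmoid(x)
    return (1.0 - s) * s
```

Note the gradient peaks at 0.25 (at $x = 0$) and vanishes at both tails, which is the saturation problem discussed below.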

Using sigmoid as the activation has 2 problems:

1. Saturated neurons "kill" the gradient: at either tail, $\sigma'(x) \approx 0$, so almost no signal flows backward.
2. Sigmoid outputs are not zero-centered, which biases the gradient updates of the following layer.

TanH

Zero-centered, unlike sigmoid, but it still saturates at both tails.

ReLU

f(x)=max(0,x)

Pros:

- Does not saturate in the positive region.
- Very cheap to compute.

But: a neuron that only ever receives $x < 0$ never updates (the "dying ReLU" problem). Use Leaky ReLU instead.
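A minimal sketch of ReLU and the Leaky ReLU variant (the leak slope 0.01 is a common default, not a value from the notes):

```python
def relu(x):
    # f(x) = max(0, x): zero output and zero gradient for x < 0.
    return max(0.0, x)

def leaky_relu(x, alpha=0.01):
    # A small slope alpha for x < 0 keeps the gradient nonzero,
    # so negative-input neurons can still update.
    return x if x > 0 else alpha * x
```

For positive inputs the two are identical; they differ only in how the negative region is handled.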

Maxout

$$\max(w_1^\top x + b_1,\; w_2^\top x + b_2)$$
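A maxout unit takes the max of two linear pieces; ReLU is the special case $w_1 = 0, b_1 = 0$. A minimal sketch (the weights below are arbitrary examples):

```python
import numpy as np

def maxout(x, w1, b1, w2, b2):
    # Max of two affine functions of x.
    return max(np.dot(w1, x) + b1, np.dot(w2, x) + b2)

# Arbitrary illustrative parameters (not from the notes).
x = np.array([1.0, 2.0])
out = maxout(x, np.array([1.0, 0.0]), 0.0, np.array([0.0, 1.0]), 0.5)
```

The cost is that each unit doubles its parameter count relative to a single linear filter.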

In Practice

Use ReLU, and be careful with the learning rate. Try out Leaky ReLU and Maxout. Don't expect too much from tanh. Never use sigmoid.
