Perceptron
Machine Learning
Perceptron model
Let $f(x)$ denote the perceptron:
$$
f(x) = \operatorname{sign}(w \cdot x + b),
$$
where $w \in \mathbb{R}^n$ is the weight, $b \in \mathbb{R}$ is the bias, and
$$
\operatorname{sign}(x) =
\begin{cases}
+1, & x \ge 0 \\
-1, & x < 0.
\end{cases}
$$
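As a concrete illustration, here is a minimal sketch of this decision function in NumPy; the name `perceptron_predict` is an assumption for illustration, not part of the model definition.

```python
import numpy as np

def perceptron_predict(w, b, x):
    # f(x) = sign(w . x + b), using the convention sign(0) = +1
    # from the piecewise definition above.
    return 1 if np.dot(w, x) + b >= 0 else -1
```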
Perceptron learning strategy
The loss function of the perceptron $\operatorname{sign}(w \cdot x + b)$ is defined as
$$
L(w, b) = -\sum_{x_i \in M} y_i (w \cdot x_i + b),
$$
where $M$ is the set of incorrectly classified points.
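A small sketch of this loss over a batch of samples, assuming `X` is an $(m, n)$ array of inputs and `y` an array of labels in $\{+1, -1\}$; the name `perceptron_loss` and the convention that a point with zero margin counts as misclassified are assumptions made here for illustration.

```python
import numpy as np

def perceptron_loss(w, b, X, y):
    # Functional margins y_i * (w . x_i + b) for every sample.
    margins = y * (X @ w + b)
    # M: points with non-positive margin are treated as misclassified here.
    misclassified = margins <= 0
    # L(w, b) = -sum over M of y_i * (w . x_i + b), non-negative by construction.
    return -np.sum(margins[misclassified])
```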
Perceptron learning algorithm
The gradient of the loss $L(w, b)$ is given by
$$
\nabla_w L(w, b) = -\sum_{x_i \in M} y_i x_i
$$
and
$$
\nabla_b L(w, b) = -\sum_{x_i \in M} y_i.
$$
The choice of the initial values of $w$ and $b$ does affect the solution obtained, and in general there are many solutions. If the dataset is not linearly separable, the perceptron learning algorithm will fail to converge.
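Putting the pieces together, here is a sketch of stochastic gradient descent on $L(w, b)$: whenever a point $x_i$ is misclassified, the parameters are moved against the per-sample gradients, i.e. $w \leftarrow w + \eta y_i x_i$ and $b \leftarrow b + \eta y_i$. The learning rate `eta`, the zero initialization, and the `max_epochs` cap (which stops the loop on non-separable data) are illustrative choices, not prescribed by the text above.

```python
import numpy as np

def perceptron_fit(X, y, eta=1.0, max_epochs=1000):
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    # Zero initialization; a different starting point (or visiting order)
    # can yield a different separating hyperplane.
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(max_epochs):
        errors = 0
        for x_i, y_i in zip(X, y):
            if y_i * (np.dot(w, x_i) + b) <= 0:  # x_i is in M
                # Step against the per-sample gradients -y_i * x_i and -y_i.
                w += eta * y_i * x_i
                b += eta * y_i
                errors += 1
        if errors == 0:  # no misclassified points left
            break
    return w, b
```

For example, on a small linearly separable set such as `X = [[3, 3], [4, 3], [1, 1]]` with `y = [1, 1, -1]`, the loop terminates with a hyperplane separating the two classes.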