@nrailgun 2015-10-10T20:10:26.000000Z

Logistic Regression and Maximum Entropy Model

Machine Learning


Logistic Regression Model

Logistic distribution

The distribution function of a random variable X that follows the logistic distribution is

$$F(x) = P(X \le x) = \frac{1}{1 + e^{-(x-\mu)/\lambda}}$$

And its probability density function is

$$f(x) = F'(x) = \frac{e^{-(x-\mu)/\lambda}}{\lambda \left(1 + e^{-(x-\mu)/\lambda}\right)^2}$$

The sigmoid function is the logistic distribution function with $\mu = 0$ and $\lambda = 1$:

$$\mathrm{Sig}(x) = \frac{1}{1 + e^{-x}}$$
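As a quick numerical check, here is a minimal NumPy sketch (the function names are my own) showing that the sigmoid is simply the logistic CDF with $\mu = 0$, $\lambda = 1$:

```python
import numpy as np

def logistic_cdf(x, mu=0.0, lam=1.0):
    """F(x) = 1 / (1 + exp(-(x - mu) / lam))."""
    return 1.0 / (1.0 + np.exp(-(x - mu) / lam))

def sigmoid(x):
    """Special case of the logistic CDF with mu = 0, lam = 1."""
    return logistic_cdf(x)

print(sigmoid(0.0))  # 0.5 -- the distribution is symmetric about mu
```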

Binomial logistic regression model

The binomial logistic regression model is the following conditional probability distribution:

$$P(Y = 1 \mid x) = \frac{\exp(w \cdot x)}{1 + \exp(w \cdot x)}$$

$$P(Y = 0 \mid x) = \frac{1}{1 + \exp(w \cdot x)}$$
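A minimal sketch of how these two probabilities are computed from a weight vector (assuming the bias is folded into $w$ by appending a constant 1 to $x$; the names are illustrative):

```python
import numpy as np

def predict_proba(w, x):
    """Return (P(Y=1|x), P(Y=0|x)) under binomial logistic regression.

    w and x are 1-D arrays; the bias term is assumed to be folded into w
    by appending a constant feature 1 to x.
    """
    z = np.dot(w, x)
    p1 = np.exp(z) / (1.0 + np.exp(z))  # P(Y = 1 | x)
    return p1, 1.0 - p1                 # P(Y = 0 | x) = 1 / (1 + exp(w.x))
```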

Estimating model parameters

Estimate the parameters by maximizing the log-likelihood function (equivalently, minimizing its negative as a loss):

$$L(w) = \sum_{i=1}^{N} \Bigl[ y_i (w \cdot x_i) - \log\bigl(1 + \exp(w \cdot x_i)\bigr) \Bigr]$$
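A toy gradient-ascent sketch for maximizing $L(w)$; the learning rate and iteration count are arbitrary illustrative choices (quasi-Newton methods such as BFGS are a common alternative):

```python
import numpy as np

def log_likelihood(w, X, y):
    """L(w) = sum_i [ y_i (w . x_i) - log(1 + exp(w . x_i)) ]."""
    z = X @ w
    return np.sum(y * z - np.log1p(np.exp(z)))

def fit(X, y, lr=0.1, n_iter=1000):
    """Maximize L(w) by plain gradient ascent; X is (N, d), y is (N,) of 0/1 labels."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))  # P(Y = 1 | x_i) for every row
        w += lr * (X.T @ (y - p))           # grad L(w) = sum_i x_i (y_i - p_i)
    return w
```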

Multinomial logistic regression

Convert a $k$-nomial random variable into $k-1$ binomial random variables, each modeled as above; the resulting model is written out below.
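Concretely, with $K$ classes and weight vectors $w_1, \dots, w_{K-1}$, the standard multinomial form (reconstructed here for completeness; not written out in the original note) is

$$P(Y = k \mid x) = \frac{\exp(w_k \cdot x)}{1 + \sum_{j=1}^{K-1} \exp(w_j \cdot x)}, \qquad k = 1, \dots, K-1,$$

$$P(Y = K \mid x) = \frac{1}{1 + \sum_{j=1}^{K-1} \exp(w_j \cdot x)}.$$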

Maximum Entropy Model

Maximum entropy principle

Define Entropy as

$$H(p) = H(X) = -\sum_{i=1}^{n} p_i \log p_i,$$

where $X$ is a random variable with probabilities $p_i = P(X = x_i)$, and $0 \le H(p) \le \log n$. The larger the uncertainty, the larger the entropy $H(X)$.
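A small sketch that evaluates $H(p)$ and the bound $0 \le H(p) \le \log n$ numerically (the function name is my own):

```python
import numpy as np

def entropy(p):
    """H(p) = -sum_i p_i log p_i, using the convention 0 * log 0 = 0."""
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]
    return -np.sum(nz * np.log(nz))

print(entropy([1.0, 0.0, 0.0]))  # 0.0              -- no uncertainty
print(entropy([1/3, 1/3, 1/3]))  # log 3 ~ 1.0986   -- maximum uncertainty
```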

Define Conditional Entropy as

$$H(Y \mid X) = \sum_{i=1}^{n} p_i H(Y \mid X = x_i) = -\sum_{x,y} p(x)\, p(y \mid x) \log p(y \mid x)$$

where $X$ and $p_i$ are as above, and $p(x)$, $p(y \mid x)$ denote the marginal and conditional distributions.
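A sketch that computes $H(Y \mid X)$ from a joint distribution table (the array layout is an assumption of this example, using $p(x)\,p(y \mid x) = p(x, y)$):

```python
import numpy as np

def conditional_entropy(joint):
    """H(Y|X) = -sum_{x,y} p(x) p(y|x) log p(y|x).

    joint[i, j] holds the joint probability p(X = x_i, Y = y_j).
    """
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1, keepdims=True)  # marginal p(x)
    p_y_given_x = np.divide(joint, px, out=np.zeros_like(joint), where=px > 0)
    mask = p_y_given_x > 0
    return -np.sum(joint[mask] * np.log(p_y_given_x[mask]))

# If Y is independent of X, H(Y|X) equals H(Y) = log 2 here.
print(conditional_entropy([[0.25, 0.25], [0.25, 0.25]]))  # ~0.6931
```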
