@nrailgun · 2015-10-21

Hidden Markov Models

Machine Learning


Basic Concepts

Let $Q = \{q_1, q_2, \dots, q_N\}$ denote the state set, $V = \{v_1, v_2, \dots, v_M\}$ the observation set, $I = (i_1, i_2, \dots, i_T)$ a state sequence, and $O = (o_1, o_2, \dots, o_T)$ the corresponding observation sequence.

$A = [a_{ij}]_{N \times N}$ is the state transition matrix, where

$$a_{ij} = P(i_{t+1} = q_j \mid i_t = q_i), \quad i = 1, 2, \dots, N; \; j = 1, 2, \dots, N$$

$B = [b_j(k)]_{N \times M}$ is the observation probability matrix, where

$$b_j(k) = P(o_t = v_k \mid i_t = q_j), \quad k = 1, 2, \dots, M; \; j = 1, 2, \dots, N$$

$\pi = (\pi_i)$ is the initial state probability vector, where $\pi_i = P(i_1 = q_i)$.
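
To make the notation concrete, here is a minimal NumPy sketch of a toy model $\lambda = (A, B, \pi)$ with $N = 3$ states and $M = 2$ observation symbols; the numbers are illustrative only, and the variable names are my own. The sketches in the following sections reuse these arrays.

```python
import numpy as np

# Toy HMM (all numbers illustrative): N = 3 hidden states, M = 2 symbols.
A = np.array([[0.5, 0.2, 0.3],   # a_ij = P(i_{t+1} = q_j | i_t = q_i)
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])
B = np.array([[0.5, 0.5],        # b_j(k) = P(o_t = v_k | i_t = q_j)
              [0.4, 0.6],
              [0.7, 0.3]])
pi = np.array([0.2, 0.4, 0.4])   # pi_i = P(i_1 = q_i)
O = [0, 1, 0]                    # observation sequence as indices into V, T = 3
```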

Calculate Probability

Given $\lambda = (A, B, \pi)$ and $O$, find $P(O \mid \lambda)$.

Brute force

$$P(O \mid \lambda) = \sum_I P(O \mid I, \lambda) P(I \mid \lambda)$$

The sum runs over all $N^T$ possible state sequences, so direct evaluation costs $O(T N^T)$ time and is infeasible for any realistic $T$.
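
A direct sketch of this sum, enumerating every one of the $N^T$ state sequences (only feasible for tiny $T$; assumes the toy `A`, `B`, `pi`, `O` defined above, and `brute_force_prob` is my own name):

```python
from itertools import product

def brute_force_prob(A, B, pi, O):
    """P(O | lambda) by enumerating every one of the N**T state sequences."""
    N, T = A.shape[0], len(O)
    total = 0.0
    for I in product(range(N), repeat=T):         # one state sequence I
        p = pi[I[0]] * B[I[0], O[0]]              # P(i_1) * P(o_1 | i_1)
        for t in range(1, T):
            p *= A[I[t-1], I[t]] * B[I[t], O[t]]  # transition * emission
        total += p
    return total
```

With the toy model above, `brute_force_prob(A, B, pi, O)` comes out to about 0.13022; the forward and backward sketches below should agree with it.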

Forward

Define the forward probability as

$$\alpha_t(i) = P(o_1, o_2, \dots, o_t, i_t = q_i \mid \lambda)$$

Input: hidden Markov model $\lambda$ and observation sequence $O$;
Output: observation sequence probability $P(O \mid \lambda)$.

  1. Initialize:
    $$\alpha_1(i) = \pi_i b_i(o_1), \quad i = 1, 2, \dots, N$$
  2. Induction, for $t = 1, 2, \dots, T-1$:
    $$\alpha_{t+1}(i) = \left[ \sum_{j=1}^{N} \alpha_t(j) a_{ji} \right] b_i(o_{t+1}), \quad i = 1, 2, \dots, N$$
  3. Terminate:
    $$P(O \mid \lambda) = \sum_{i=1}^{N} \alpha_T(i)$$
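
A minimal sketch of the three steps in NumPy (same toy `A`, `B`, `pi`, `O` as above; `forward` is my own name). The recursion brings the cost down to $O(N^2 T)$:

```python
import numpy as np

def forward(A, B, pi, O):
    """Forward algorithm: returns the T x N table alpha and P(O | lambda)."""
    N, T = A.shape[0], len(O)
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, O[0]]                  # alpha_1(i) = pi_i b_i(o_1)
    for t in range(1, T):
        # alpha_{t+1}(i) = [sum_j alpha_t(j) a_ji] * b_i(o_{t+1});
        # (alpha[t-1] @ A)[i] computes the bracketed sum for all i at once.
        alpha[t] = (alpha[t - 1] @ A) * B[:, O[t]]
    return alpha, alpha[-1].sum()               # P(O|lambda) = sum_i alpha_T(i)
```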

Backward

Define the backward probability as

$$\beta_t(i) = P(o_{t+1}, o_{t+2}, \dots, o_T \mid i_t = q_i, \lambda)$$

Input: hidden Markov model $\lambda$ and observation sequence $O$;
Output: observation sequence probability $P(O \mid \lambda)$.

  1. Initialize:
    $$\beta_T(i) = 1, \quad i = 1, 2, \dots, N$$
  2. Induction, for $t = T-1, T-2, \dots, 1$:
    $$\beta_t(i) = \sum_{j=1}^{N} a_{ij} b_j(o_{t+1}) \beta_{t+1}(j), \quad i = 1, 2, \dots, N$$
  3. Terminate:
    $$P(O \mid \lambda) = \sum_{i=1}^{N} \pi_i b_i(o_1) \beta_1(i)$$
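
The mirror-image sketch, filling the beta table from the right (again assuming the toy model above; `backward` is my own name):

```python
import numpy as np

def backward(A, B, pi, O):
    """Backward algorithm: returns the T x N table beta and P(O | lambda)."""
    N, T = A.shape[0], len(O)
    beta = np.zeros((T, N))
    beta[-1] = 1.0                              # beta_T(i) = 1
    for t in range(T - 2, -1, -1):
        # beta_t(i) = sum_j a_ij * b_j(o_{t+1}) * beta_{t+1}(j)
        beta[t] = A @ (B[:, O[t + 1]] * beta[t + 1])
    return beta, (pi * B[:, O[0]] * beta[0]).sum()
```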

Some probabilities and expectations

Let $\gamma_t(i)$ denote the probability of being in state $q_i$ at time $t$, given $O$ and $\lambda$:

$$\gamma_t(i) = P(i_t = q_i \mid O, \lambda) = \frac{\alpha_t(i) \beta_t(i)}{\sum_{j=1}^{N} \alpha_t(j) \beta_t(j)}$$
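
In code, gamma is just a row-normalized elementwise product of the alpha and beta tables from the sketches above (`gamma_probs` is my own name):

```python
import numpy as np

def gamma_probs(alpha, beta):
    """gamma_t(i) = alpha_t(i) beta_t(i) / sum_j alpha_t(j) beta_t(j)."""
    g = alpha * beta
    return g / g.sum(axis=1, keepdims=True)    # normalize each row over j
```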

Learning Algorithm

TODO: Solve it with EM (the Baum–Welch algorithm).

Prediction Algorithm

Approximation

The most likely state $i_t^*$ at time $t$ is

$$i_t^* = \arg\max_{1 \le i \le N} [\gamma_t(i)], \quad t = 1, 2, \dots, T$$

This is not necessarily the most likely state sequence as a whole: since each state is chosen independently, the result can even contain transitions of probability zero. It is still quite useful in practice.
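
Continuing the hypothetical sketches above, approximate decoding is a per-row argmax over the gamma table:

```python
# Approximate decoding: pick the individually most likely state at each t.
# Reuses forward, backward, gamma_probs and the toy A, B, pi, O from above.
alpha, _ = forward(A, B, pi, O)
beta, _ = backward(A, B, pi, O)
path = gamma_probs(alpha, beta).argmax(axis=1)   # path[t] = argmax_i gamma_t(i)
```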

Viterbi algorithm

  1. Initialize:

    $$\delta_1(i) = \pi_i b_i(o_1), \quad i = 1, 2, \dots, N$$

    $$\Psi_1(i) = 0, \quad i = 1, 2, \dots, N$$

  2. Induction, for $t = 2, 3, \dots, T$:

    $$\delta_t(i) = \max_{1 \le j \le N} [\delta_{t-1}(j) a_{ji}] \, b_i(o_t), \quad i = 1, 2, \dots, N$$

    $$\Psi_t(i) = \arg\max_{1 \le j \le N} [\delta_{t-1}(j) a_{ji}], \quad i = 1, 2, \dots, N$$

  3. Terminate:

    $$P^* = \max_{1 \le i \le N} \delta_T(i)$$

    $$i_T^* = \arg\max_{1 \le i \le N} [\delta_T(i)]$$

  4. Backtrack, for $t = T-1, T-2, \dots, 1$:

    $$i_t^* = \Psi_{t+1}(i_{t+1}^*)$$
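
A sketch of all four steps, including the backtrack, under the same toy-model assumptions (`viterbi` and the variable names are mine):

```python
import numpy as np

def viterbi(A, B, pi, O):
    """Viterbi: most likely state sequence i* and its probability P*."""
    N, T = A.shape[0], len(O)
    delta = np.zeros((T, N))              # delta_t(i): best path score ending in q_i
    psi = np.zeros((T, N), dtype=int)     # Psi_t(i): backpointer to the best j
    delta[0] = pi * B[:, O[0]]            # delta_1(i) = pi_i b_i(o_1)
    for t in range(1, T):
        cand = delta[t - 1][:, None] * A  # cand[j, i] = delta_{t-1}(j) * a_ji
        psi[t] = cand.argmax(axis=0)      # Psi_t(i)
        delta[t] = cand.max(axis=0) * B[:, O[t]]
    path = np.empty(T, dtype=int)
    path[-1] = delta[-1].argmax()         # i_T^* = argmax_i delta_T(i)
    for t in range(T - 2, -1, -1):
        path[t] = psi[t + 1][path[t + 1]] # backtrack: i_t^* = Psi_{t+1}(i_{t+1}^*)
    return path, delta[-1].max()
```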
