particle filter (2)

The previous post covered the mathematics of Bayesian filtering and the Monte Carlo localization algorithm. This post introduces sequential importance sampling. The idea of the particle filter is to approximate the posterior distribution p(x) with a weighted set of particles.

Monte Carlo integration

Let X be a continuous random variable with probability density p(X), and let Y = f(X). The expectation of the random variable Y is then:

E[Y] = E[f(X)] = ∫ f(x) p(x) dx ≈ (1/N) Σ_{i=1}^N f(x_i), with x_i ~ p(x)
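As a minimal sketch of the approximation above (assuming, for illustration, that p is a standard normal we can already sample from, and f(x) = x², whose true expectation under p is 1):

```python
import numpy as np

rng = np.random.default_rng(0)

# Estimate E[f(X)] for X ~ N(0, 1) and f(x) = x^2 by averaging f over
# draws from p(x); the true value is Var(X) = 1.
samples = rng.normal(loc=0.0, scale=1.0, size=100_000)
estimate = np.mean(samples ** 2)
print(estimate)  # close to 1.0
```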

In practice the density p(X) is unknown or hard to sample from directly; how do we guarantee that the samples we draw actually follow p(X)?

Direct sampling

By sampling from a uniform distribution, we can obtain samples from an arbitrary distribution.

The cumulative distribution function cdf of any density maps onto [0, 1]. Draw z uniformly on [0, 1], then invert z = cdf(y): the resulting y = cdf⁻¹(z) is a sample from the true density of y.

But when cdf() is unknown or cannot be inverted, direct sampling is infeasible.
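A minimal sketch of this inverse-transform idea, assuming for illustration an Exponential(lam) target whose cdf F(y) = 1 - exp(-lam·y) inverts in closed form:

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0  # rate of the target Exponential(lam) distribution (assumed)

# Draw z uniformly on [0, 1], then invert the CDF F(y) = 1 - exp(-lam * y):
# y = F^{-1}(z) = -ln(1 - z) / lam
z = rng.uniform(size=100_000)
y = -np.log(1.0 - z) / lam

print(y.mean())  # close to the true mean 1 / lam = 0.5
```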

Accept-reject sampling

Sample from a known proposal density q(X), then reject some of the samples according to a rule, so that the accepted samples approximate the p(X) distribution. This requires a constant k such that, for every sample:

p(x_i) <= k q(x_i)

The limitation of this scheme is determining the constant k: the bound must hold everywhere, and an overly large k makes the acceptance rate low.
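A sketch of accept-reject sampling, assuming a hypothetical target p(x) = 6x(1-x) (a Beta(2, 2) density, bounded above by 1.5) and a Uniform(0, 1) proposal q, so k = 1.5 satisfies p(x) <= k q(x):

```python
import numpy as np

rng = np.random.default_rng(0)

def p(x):
    # Target density (assumed for illustration): Beta(2, 2),
    # p(x) = 6 x (1 - x) on [0, 1]; its maximum value is 1.5.
    return 6.0 * x * (1.0 - x)

k = 1.5  # chosen so that p(x) <= k * q(x) with q = Uniform(0, 1)

x = rng.uniform(size=200_000)   # candidate samples from q
u = rng.uniform(size=200_000)
accepted = x[u <= p(x) / k]     # keep x with probability p(x) / (k q(x))

print(accepted.mean())  # close to 0.5, the mean of Beta(2, 2)
```

About 1/k of the candidates survive, which is why a loose k wastes samples.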

Importance sampling

Importance sampling improves accuracy for a given number of samples. With p(x) hard to sample directly, draw {x_1, x_2, …, x_n} from a known density q(x) and estimate the expectation of f:

E_p[f(X)] = ∫ f(x) p(x) dx = ∫ f(x) (p(x)/q(x)) q(x) dx

Define a new random variable:

Z = f(X) · p(X)/q(X)

The expectation of the original variable Y under the unknown distribution p(x) is thus converted into the expectation of the new variable Z under the known distribution q(x); a known distribution is one we know how to sample from. The ratio p(x)/q(x) is the weight.

So the posterior expectation can be computed as:

E[f(x) | z] = ∫ f(x) (p(x|z)/q(x)) q(x) dx ≈ (1/N) Σ_{i=1}^N (p(x^i|z)/q(x^i)) f(x^i)

where the importance weights are defined as:

w^i = p(x^i | z) / q(x^i)

The problem is that we can't evaluate p(x|z) itself, but by Bayes' rule we can compute loose (unnormalized) importance weights:

w*^i = p(z | x^i) p(x^i) / q(x^i)

then normalize them:

w^i = w*^i / Σ_j w*^j

so the posterior expectation is approximated as:

E[f(x) | z] ≈ Σ_{i=1}^N w^i f(x^i)
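The self-normalized estimate above can be sketched as follows, assuming for illustration a standard-normal target p, a wider normal proposal q = N(0, 2), and f(x) = x² (whose true expectation under p is 1):

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Target p = N(0, 1), proposal q = N(0, 2): sample from q, weight by p/q.
x = rng.normal(0.0, 2.0, size=100_000)
w = gauss_pdf(x, 0.0, 1.0) / gauss_pdf(x, 0.0, 2.0)  # importance weights p(x)/q(x)
w = w / w.sum()                                      # normalize to sum to unity
estimate = np.sum(w * x ** 2)                        # self-normalized estimate
print(estimate)  # close to 1.0
```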

Sequential importance sampling (SIS)

Consider the full posterior distribution of the states x_{0:k} given the measurements y_{1:k}:

p(x_{0:k} | y_{1:k}) ∝ p(y_k | x_k) p(x_k | x_{k-1}) p(x_{0:k-1} | y_{1:k-1})

Choose an importance distribution q(x) with a sequential factorization:

q(x_{0:k} | y_{1:k}) = q(x_k | x_{0:k-1}, y_{1:k}) q(x_{0:k-1} | y_{1:k-1})

Then the unnormalized importance weights are:

w_k^i ∝ p(x_{0:k}^i | y_{1:k}) / q(x_{0:k}^i | y_{1:k})

namely, the recursion:

w_k^i ∝ w_{k-1}^i · p(y_k | x_k^i) p(x_k^i | x_{k-1}^i) / q(x_k^i | x_{0:k-1}^i, y_{1:k})

The problem with SIS is that the algorithm is degenerate: the variance of the weights increases at every step, which means the algorithm converges to a single non-zero weight (w = 1) with all the rest being zero.

Hence: resampling.

Sequential importance resampling (SIR)

Resampling process:

1) Interpret each weight w^i as the probability of drawing the sample index i from the set {x^i}.

2) Draw N samples from that discrete distribution and replace the old sample set with the new one.
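The two steps above can be sketched as multinomial resampling (a small, assumed particle set for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def resample(particles, weights):
    """Multinomial resampling: draw N indices with probability equal to the
    weights, then reset all weights to the uniform value 1/N."""
    n = len(particles)
    idx = rng.choice(n, size=n, p=weights)
    return particles[idx], np.full(n, 1.0 / n)

particles = np.array([0.0, 1.0, 2.0, 3.0])
weights = np.array([0.1, 0.1, 0.1, 0.7])  # particle 3 dominates
new_particles, new_weights = resample(particles, weights)
```

High-weight particles tend to be duplicated while low-weight ones die out, which is exactly what counters the degeneracy of SIS.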

SIR process:

1) Draw points x^i from the importance distribution q.
2) Compute the weights by the SIS iteration and normalize them to sum to unity.
3) If the effective number of particles, commonly estimated as n_eff ≈ 1 / Σ_i (w^i)², is too low, run the resampling process above.
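The full SIR loop can be sketched on a hypothetical 1-D model (random-walk state, Gaussian likelihood; the noise levels Q, R and particle count N are assumed). With the bootstrap choice q = p(x_k | x_{k-1}), the weight recursion reduces to multiplying by the likelihood p(y_k | x_k):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model: x_k = x_{k-1} + N(0, Q), y_k = x_k + N(0, R).
Q, R, N = 0.1, 0.5, 1000

def gauss_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def sir_step(particles, weights, y):
    # 1) draw from the importance distribution (here the transition prior)
    particles = particles + rng.normal(0.0, np.sqrt(Q), size=N)
    # 2) SIS weight update w_k ~ w_{k-1} * p(y_k | x_k), then normalize
    weights = weights * gauss_pdf(y, particles, np.sqrt(R))
    weights = weights / weights.sum()
    # 3) resample when the effective number of particles 1 / sum(w^2) is low
    if 1.0 / np.sum(weights ** 2) < N / 2:
        idx = rng.choice(N, size=N, p=weights)
        particles, weights = particles[idx], np.full(N, 1.0 / N)
    return particles, weights

# Run the filter on simulated data from the same model.
true_x = 0.0
particles = rng.normal(0.0, 1.0, N)
weights = np.full(N, 1.0 / N)
for _ in range(50):
    true_x += rng.normal(0.0, np.sqrt(Q))
    y = true_x + rng.normal(0.0, np.sqrt(R))
    particles, weights = sir_step(particles, weights, y)

estimate = np.sum(weights * particles)  # posterior-mean estimate of x_k
```

The threshold N/2 for n_eff is a common rule of thumb, not a requirement of the algorithm.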