4 The bootstrap filter
In this chapter we introduce a variant of the bootstrap filter.
The bootstrap filter generates sample-based approximations to the conditional filtering distribution \(P(X_t | y_{1:t}, \theta)\) and conditional smoothing distribution \(P(X_t | y_{1:T}, \theta)\).
This chapter assumes fixed and known values of the model parameters \(\theta\). Chapter 5 handles the estimation of \(\theta\).
4.1 Intuition
The goal of the bootstrap filter is to use observed data \(y_{1:T}\) to learn about the hidden states \(X_t\).
We provide a different intuitive explanation in the corresponding journal article; otherwise, see Section 4.4 for additional resources.
4.1.1 Sequential Importance (Re)sampling
Assume that we have a collection of samples from the filtering distribution at time \(t\!-\!1\), denoted \(\{x_{t-1}^{(i)}\} \sim P(X_{t-1}|y_{1:t-1}, \theta)\). We call these samples particles, and they collectively represent our knowledge about the hidden states at this time step.
By coupling these particles with the state-space transition model (Equation 3.1), we can make one-step-ahead predictions. Mathematically, we want to obtain the predictive distribution: \[ \underbrace{P(X_t | y_{1:t-1}, \theta)}_{\text{predictive dist.}} = \int \underbrace{P(X_t | X_{t-1}, \theta)}_{\text{state-space transition dist.}} \underbrace{P(X_{t-1}|y_{1:t-1}, \theta)}_{\text{prev. filtering dist.}} \ dX_{t-1} \] This is a complicated and potentially high-dimensional integral. Fortunately, all we need to do is sample from the state-space transition model while conditioning on our particles: \[ \{\tilde{x}_t^{(i)}\} \sim P(X_t | x_{t-1}^{(i)}, \theta) \]
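As a concrete illustration, the following minimal Python sketch performs this prediction step. The Gaussian random-walk transition and all numerical values are placeholder assumptions, standing in for whatever model Equation 3.1 specifies:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n_particles = 1000

# Placeholder particles standing in for samples from P(X_{t-1} | y_{1:t-1}).
x_prev = rng.normal(size=n_particles)

# Prediction step: push each particle through the state-space transition
# model. A Gaussian random walk, X_t | x_{t-1} ~ Normal(x_{t-1}, sigma^2),
# is assumed here purely for illustration.
sigma = 0.1
x_pred = x_prev + sigma * rng.normal(size=n_particles)  # ~ P(X_t | y_{1:t-1})
```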
If we treat this predictive distribution as the prior distribution for \(X_t|y_{1:t}\), we can apply Bayes’ formula:
\[ \underbrace{P(X_t | y_{1:t}, \theta)}_{\text{new filtering dist.}} \propto \underbrace{P(y_t|X_t, y_{1:t-1}, \theta)}_{\text{observation dist.}} \underbrace{P(X_t | y_{1:t-1}, \theta)}_{\text{predictive dist.}} \]
Since we already have samples from \(P(X_t | y_{1:t-1}, \theta)\), all we need to do is assign them weights \(w_t^{(i)}\) according to the observation distribution:
\[ w_t^{(i)} = P(y_t | \tilde{x}_t^{(i)}, y_{1:t-1}, \theta) \]
The final set of particles \(\{x_t^{(i)}\}\) is constructed by resampling (with replacement) from \(\{\tilde{x}_t^{(i)}\}\) according to the corresponding weights. Thus:
\[ x_t^{(i)} \sim P(X_t | y_{1:t}, \theta) \]
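Continuing the sketch above, the weighting and resampling steps might look like this. The Gaussian observation model (with scale \(\tau\)) and the observed value \(y_t\) are placeholder assumptions:

```python
from scipy.stats import norm

# Weighting step: evaluate the observation density at the observed data
# point. A Gaussian observation model, y_t | x_t ~ Normal(x_t, tau^2), and
# the value of y_t are assumed purely for illustration.
tau = 0.5
y_t = 0.3
weights = norm.pdf(y_t, loc=x_pred, scale=tau)

# Resampling step: draw (with replacement) from the predicted particles in
# proportion to their weights, normalised to sum to one.
idx = rng.choice(n_particles, size=n_particles, p=weights / weights.sum())
x_t = x_pred[idx]  # approximate samples from P(X_t | y_{1:t})
```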
Starting with an initial set of particles \(\{x_0^{(i)}\} \sim P(X_0)\), all we need to do is repeat this procedure at each time step \(t\), storing the particle values as we go. If the resampling step is performed at every time step where data are observed, this is called a bootstrap filter, which is a special case of a sequential importance resampling algorithm.
4.1.2 State-history resampling
The derivation provided above assumes that our model is Markovian: the state-space transition and observation distributions depend only on \(X_{t-1}\), and not on \(X_{t-2}, X_{t-3}, \ldots\). In many models of interest this assumption does not hold. The fix is straightforward: instead of storing and resampling only the current states \(\{\tilde{x}_t^{(i)}\}\), we store and resample the entire state histories \(\{\tilde{x}_{0:t}^{(i)}\}\), so that each particle carries a complete trajectory on which the transition and observation distributions may depend.
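Concretely, the only change to the running sketch above (reusing `x_pred`, `idx`, and `n_particles`) is that the resampling index is applied to the stored histories rather than to the current states alone:

```python
# Particle histories: x_hist[s, i] holds particle i's state at time s.
# Initialise before the loop over time steps:
x_hist = np.zeros((0, n_particles))

# Inside the loop, after the prediction step, append the predicted states...
x_hist = np.vstack([x_hist, x_pred[None, :]])
# ...and, once the resampling index idx has been computed, resample whole
# columns, so each surviving particle keeps its complete trajectory x_{0:t}.
x_hist = x_hist[:, idx]
```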
4.1.3 Using the particles
4.2 Algorithm
#TODO: write SMC algorithm
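The steps derived in Section 4.1 assemble into the following minimal Python sketch of the bootstrap filter. The Gaussian random-walk transition, the Gaussian observation model, and all parameter values are placeholder assumptions, not the models used elsewhere in this book:

```python
import numpy as np
from scipy.stats import norm

def bootstrap_filter(y, n_particles=1000, sigma=0.1, tau=0.5, seed=1):
    """Minimal bootstrap filter sketch: Gaussian random-walk transitions and
    Gaussian observations, both chosen purely for illustration."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n_particles)          # initial particles from P(X_0)
    history = [x]                             # store particle values as we go
    for y_t in y:
        # 1. Predict: propagate each particle through the transition model.
        x_pred = x + sigma * rng.normal(size=n_particles)
        # 2. Weight: evaluate the observation density at the new data point.
        w = norm.pdf(y_t, loc=x_pred, scale=tau)
        # 3. Resample: draw with replacement in proportion to the weights.
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
        x = x_pred[idx]
        history.append(x)
    return np.stack(history)                  # shape (len(y) + 1, n_particles)

# Example usage with synthetic observations:
filtered = bootstrap_filter(y=np.sin(np.linspace(0.0, 3.0, 20)))
```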
4.3 Example
For demonstration, we consider a simple reproduction number estimator. First assume that \(\log R_t\) follows a Gaussian random walk:
[#TODO]
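Pending the completed example, the random-walk assumption above would typically be written as follows, where the noise scale \(\sigma\) is a model parameter:

\[ \log R_t = \log R_{t-1} + \epsilon_t, \qquad \epsilon_t \sim \text{Normal}(0, \sigma^2) \]

An observation distribution linking \(R_t\) to reported case counts (for example, a renewal-type model, suggested here purely as one possibility) would complete the state-space specification; because such models depend on the history of infections, this is also where the state-history resampling of Section 4.1.2 becomes useful.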
4.4 Resources from other fields
Bootstrap filters (and SMC methods more generally) have found use in many fields. Each field has its own motivation for, and notation describing, these methods. We provide an overview of other resources here.
Bayesian Filtering and Smoothing (Särkkä 2013)
Those with an engineering background may be familiar with “filtering and smoothing”, where the state of a time-varying system is tracked through the observation of noisy measurements. Classical examples include GPS position tracking or audio signal processing.
The Kalman filter, which provides an analytical solution when the state-space transition and observation models are linear Gaussian and Markovian, is perhaps the best-known example of a filtering method from engineering.
Chapters 7 and 11 of Särkkä (2013) introduce SMC methods under the headings “particle filtering” and “particle smoothing”. We also recommend Chapters 1 (What are Bayesian filtering and smoothing?), 4 (Bayesian filtering equations and exact solutions), and 8 (Bayesian smoothing equations and exact solutions).
A Survey of Sequential Monte Carlo Methods for Economics and Finance (Creal 2012)
Those with an econometrics background may find this extensive review helpful, although the author focusses on Markovian models. Examples employed in this review include a stochastic volatility model and a nonlinear dynamic stochastic general equilibrium model.
Data Assimilation Fundamentals: … (Evensen, Vossepoel, and Van Leeuwen 2022)
Those with a background in atmospheric science, oceanography, meteorology, or other environmental sciences may be familiar with “data assimilation”, where focus is placed on combining model predictions with observational data. Chapter 9 of this book introduces particle filters as a method for solving the “fully nonlinear data assimilation” problem.
This list is incomplete. If you know of any additional resources that may be helpful, please get in touch!