Bayesian Modeling of Event-Related Potentials 🧠

IISA-2023

Dr. Cheng-Han Yu
Department of Mathematical and Statistical Sciences
Marquette University

6/3/23

Electroencephalography (EEG)

Source: https://en.wikipedia.org/wiki/Electroencephalography

The raw EEG can indicate a general state, e.g., awake or asleep, but it cannot tell us much about the specific neural processes that underlie perception, cognition, or emotion.

What ERP Can Do

Source: ERP Boot Camp (Luck and Kappenman, 2020)

Estimate ERP Components

  • Goal: estimate the amplitude and latency of an ERP component.

Source: Rossion & Jacques (2012)

Inference Methods

  • Current methods
    • grand ERP waveform obtained by extensive averaging
    • no uncertainty quantification
    • filtering to remove drift or noise
    • manual specification of the component search window
    • separate ANOVA for control-vs-experimental comparisons

We address these issues with

  • a Bayesian hierarchical model with a derivative Gaussian process prior
    • Yu et al., Biometrics (2022)
    • Li et al., JASA T&M (2023)
    • Yu et al., AoAS (2023 expected)

Derivative Gaussian Process (DGP)

  • Force f' = 0 at x = -4, 0, and 4; all sample paths then satisfy the constraint.
  • Each stationary point can be a maximum, a minimum, or an inflection point.
  • We add 3 constraints, so every path has at least 3 stationary points (it may have more).

Shape-Constrained Regression

\begin{align*} y_i &= f(x_i) + \epsilon _i, ~~ \epsilon _i \sim N(0, \sigma^2), ~~ i = 1, \dots, n, \\ f'(t_m) &= 0, ~~m = 1, \dots, M, \end{align*}

  • In ERP applications,
    • \{y_i\}_{i=1}^n are the ERP time series observations.
    • f(\cdot) is the true underlying smooth ERP waveform.
    • The M peaks t_1, \dots, t_M of an ERP waveform f satisfy f'(t_m) = 0.

Derivative Gaussian Process Prior

  • Combining the derivative-constrained regression with a GP prior on f, f \sim GP(\mu(\cdot), k(\cdot, \cdot)), yields the DGP prior of f, DGP(\mu, k, \bm{t}), where \bm{t} = (t_1, \dots, t_M).
  • The mean function is x_i \mapsto \mu(x_i) - k_{01}(x_i, \bm{t}) k_{11}^{-1}(\bm{t}, \bm{t}) \mu'(\bm{t}), where \mu'(\cdot) is the mean function of f'.

[k_{01}(\bm{x}, \bm{t})]_{ij} := \frac{\partial }{\partial t_j}k(x_i, t_j), ~~[k_{11}(\bm{t}, \bm{t})]_{ij} := \frac{\partial ^2 }{\partial t_i \partial t_j}k(t_i, t_j)

  • The covariance kernel is (x_i, x_j) \mapsto k(x_i, x_j) - k_{01}(x_i, \bm{t}) k_{11}^{-1}(\bm{t}, \bm{t}) k_{10}(\bm{t}, x_j)

k_{10}(\bm{x}, \bm{t}) = k_{01}^T(\bm{x}, \bm{t})
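These quantities are easy to evaluate for a concrete kernel. Below is a minimal numerical sketch, assuming a zero-mean GP with squared-exponential kernel k(x, x') = \tau^2 \exp\{-(x - x')^2 / (2\ell^2)\} and the three constraints at x = -4, 0, 4 from the earlier slide; the function names and parameter values are illustrative, not taken from the papers. With \mu = 0 the DGP mean is also zero, so only the covariance needs adjusting.

```python
import numpy as np

def rbf(a, b, tau=1.0, ell=1.0):
    """Squared-exponential kernel k(a, b) = tau^2 exp(-(a - b)^2 / (2 ell^2))."""
    d = a[:, None] - b[None, :]
    return tau**2 * np.exp(-d**2 / (2 * ell**2))

def rbf_01(a, t, tau=1.0, ell=1.0):
    """k_01(a, t) = (d/dt) k(a, t) = k(a, t) (a - t) / ell^2."""
    d = a[:, None] - t[None, :]
    return rbf(a, t, tau, ell) * d / ell**2

def rbf_11(t, tau=1.0, ell=1.0):
    """k_11(t, t') = (d^2/dt dt') k(t, t') = k(t, t') (1/ell^2 - (t - t')^2/ell^4)."""
    d = t[:, None] - t[None, :]
    return rbf(t, t, tau, ell) * (1.0 / ell**2 - d**2 / ell**4)

# Condition a zero-mean GP on f'(t) = 0 at t = -4, 0, 4 (the slide's example).
t = np.array([-4.0, 0.0, 4.0])
x = np.linspace(-6.0, 6.0, 200)

K, K01, K11 = rbf(x, x), rbf_01(x, t), rbf_11(t)
K_dgp = K - K01 @ np.linalg.solve(K11, K01.T)   # DGP covariance

# Every sample path now has zero derivative at the three constrained points.
rng = np.random.default_rng(0)
paths = rng.multivariate_normal(np.zeros(len(x)), K_dgp + 1e-8 * np.eye(len(x)), size=5)
```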

Prior on Stationary Points

  • t_1, \dots, t_M are parameters to be estimated.

  • Challenge: Unknown number of stationary points M.

  • Use a univariate prior t \sim \pi(t), which corresponds to assuming that f has at least one stationary point, and use the posterior of t to infer all stationary points.

A univariate prior on t leads to

  • Efficient computation:

    • The covariance matrix grows by only one dimension, from n by n to (n+1) by (n+1).
    • Simple one-dimensional sampling on t.
  • Avoids possible misspecification of M, which would distort the fit.

  • Theorem: The posterior of t converges to a mixture of Gaussians as n \rightarrow \infty, and the estimated number of mixture components converges to the number of stationary points.
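The theorem suggests a practical recipe for recovering M: fit Gaussian mixtures with varying numbers of components to the posterior draws of t and keep the best one. A minimal sketch using scikit-learn follows; the BIC criterion and the helper name are illustrative choices, not necessarily the papers' procedure.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def count_stationary_points(t_samples, max_m=5):
    """Estimate the number of stationary points as the number of mixture
    components (chosen by BIC) in the posterior samples of t."""
    X = np.asarray(t_samples).reshape(-1, 1)
    fits = [GaussianMixture(n_components=m, random_state=0).fit(X)
            for m in range(1, max_m + 1)]
    best = min(fits, key=lambda g: g.bic(X))
    return best.n_components, np.sort(best.means_.ravel())

# Usage (hypothetical): m_hat, t_hat = count_stationary_points(posterior_draws_of_t)
```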

Mixture of Gaussians

Posterior Inference via Monte Carlo EM

\begin{align*} y_i &= f(x_i) + \epsilon_i, ~~ \epsilon_i \sim N(0, \sigma^2), ~~i = 1, \dots, n, \\ f &\sim GP(0, k(\cdot, \cdot; \boldsymbol \theta)) , ~~ f'(t) = 0,\\ t &\sim \pi(t), ~~ \sigma^2 \sim \pi(\sigma^2), ~~ \boldsymbol \theta\sim \pi(\boldsymbol \theta) \end{align*}

For iteration i = 1, 2, \dots until convergence:

  • Monte Carlo E-step: Metropolis-Hastings on t and Gibbs sampling on \sigma^2.

  • M-step: Given the samples of t and \sigma^2, update \boldsymbol \theta to \hat{\boldsymbol \theta}^{(i+1)} by setting \begin{align*} \hat{\boldsymbol \theta}^{(i+1)} &= \arg \max_{\boldsymbol \theta} \frac{1}{J} \sum_{j=1}^J \log p(\bm{y} \mid t_j^{(i)}, (\sigma_j^2)^{(i)}, \boldsymbol \theta) \end{align*}
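A compact, runnable sketch of this MCEM loop for a single stationary point is given below, assuming a zero-mean squared-exponential kernel whose only hyperparameter \boldsymbol \theta is the lengthscale \ell, a uniform prior on t over the data range, flat priors otherwise, and (for brevity) a Metropolis-Hastings step on \sigma^2 in place of the Gibbs step; all function names and tuning constants are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def dgp_cov(x, t, ell):
    """DGP covariance for one stationary point t (zero mean, RBF kernel, tau = 1)."""
    d = x[:, None] - x[None, :]
    K = np.exp(-d**2 / (2 * ell**2))
    dt = x[:, None] - t                                     # n x 1
    K01 = np.exp(-dt**2 / (2 * ell**2)) * dt / ell**2       # k_01(x, t)
    K11 = np.array([[1.0 / ell**2]])                        # k_11(t, t) at d = 0
    return K - K01 @ np.linalg.solve(K11, K01.T)

def loglik(y, x, t, sig2, ell):
    """Marginal log p(y | t, sigma^2, ell) under the DGP."""
    S = dgp_cov(x, t, ell) + sig2 * np.eye(len(x))
    _, logdet = np.linalg.slogdet(S)
    return -0.5 * (logdet + y @ np.linalg.solve(S, y))

def mcem(y, x, n_iter=20, J=100, ell=0.3):
    rng = np.random.default_rng(1)
    t, sig2 = x.mean(), np.var(y)
    for _ in range(n_iter):
        ts, s2s = [], []
        for _ in range(J):                                   # Monte Carlo E-step
            t_new = t + 0.1 * rng.standard_normal()          # random-walk MH on t
            if x.min() < t_new < x.max():
                if np.log(rng.uniform()) < (loglik(y, x, t_new, sig2, ell)
                                            - loglik(y, x, t, sig2, ell)):
                    t = t_new
            s_new = sig2 + 0.05 * rng.standard_normal()      # MH on sigma^2
            if s_new > 0 and np.log(rng.uniform()) < (loglik(y, x, t, s_new, ell)
                                                      - loglik(y, x, t, sig2, ell)):
                sig2 = s_new
            ts.append(t); s2s.append(sig2)
        # M-step: maximize the Monte Carlo average of log p(y | t_j, sigma_j^2, ell)
        obj = lambda l: -np.mean([loglik(y, x, tj, sj, l) for tj, sj in zip(ts, s2s)])
        ell = minimize_scalar(obj, bounds=(0.05, 2.0), method="bounded").x
    return t, sig2, ell
```

Caching the current log-likelihood instead of recomputing it in each acceptance step would speed up the E-step; it is written out here for clarity.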

Simulation: Uncertainty Quantification of t and f

  • f(x) = 0.3 + 0.4x+0.5\sin(3.2x) + 1.1 /(1 + x^2), x\in [0, 2]
  • Two stationary points t_1 = 0.44 and t_2 = 1.46.
  • \pi(t) = Unif(0, 2).
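The two stated stationary points can be checked directly by root-finding on f'; a short sketch follows (the bracketing intervals are chosen by eye from a plot of f').

```python
import numpy as np
from scipy.optimize import brentq

f  = lambda x: 0.3 + 0.4 * x + 0.5 * np.sin(3.2 * x) + 1.1 / (1 + x**2)
fp = lambda x: 0.4 + 1.6 * np.cos(3.2 * x) - 2.2 * x / (1 + x**2)**2  # f'

# Bracket each sign change of f' on [0, 2] and solve.
t1 = brentq(fp, 0.2, 0.8)   # ~0.44, a maximum (f' goes + to -)
t2 = brentq(fp, 1.2, 1.8)   # ~1.46, a minimum (f' goes - to +)
```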

Oracle DGP

  • Curve fitting improves when the true stationary points are known (the oracle case).

Current ERP Studies

  • Multi-subject
    • Averaging across subjects to obtain the grand ERP waveform.
  • Differences in latency and/or amplitude of ERP components across factors such as age and gender.
    • Basic ANOVA is conducted.
  • A time window or search window needs to be specified to frame the range of the components.
    • There is no systematic, consistent way of selecting such windows.

Source: Rossion & Jacques (2012)

Semiparametric Latent ANOVA Model (SLAM)

  • Estimates the amplitude and latency as well as the ERP waveform in a unified framework.

  • Supports a multi-factor ANOVA setting for examining how factors or covariates affect ERP components’ latency and amplitude.

  • A Bayesian approach that yields probabilistic statements about the amplitude and latency of ERP components.

  • The hierarchical nature of the model provides estimates at both the grand group and individual subject levels.

SLAM Two-way ANOVA example - Likelihood with DGP

\begin{equation*} y_{igls} = f_{gls}(x_i) + \epsilon_{igls}, ~~ \epsilon_{igls} \stackrel{\rm iid}{\sim}N(0, \sigma^2). \end{equation*}

  • Time point i = 1, \dots, n

  • Factor A level g = 1, \dots, G

  • Factor B level l = 1, \dots, L

  • Subject s = 1, \dots, S_{gl}

\begin{equation*} f_{gls} \mid \bm{t}_{gls} \sim \text{DGP}(0, k(\cdot, \cdot; \boldsymbol \theta), \bm{t}_{gls}), \end{equation*} where \bm{t}_{gls} = \{t_{gls}^m\}_{m = 1}^M.
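To make the index structure concrete, the sketch below simulates data with this layout, using a simple Gaussian bump as a stand-in for each subject's DGP-drawn waveform and a common S_{gl} \equiv S; all sizes and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, G, L, S, sigma = 100, 2, 2, 8, 0.25      # illustrative sizes and noise sd
x = np.linspace(0.0, 1.0, n)

def waveform(g, l):
    """Stand-in for f_{gls}; in SLAM each is a DGP draw with its own t_{gls}."""
    t_peak = 0.3 + 0.1 * g + 0.05 * l + 0.03 * rng.standard_normal()  # latency
    return np.exp(-0.5 * ((x - t_peak) / 0.1)**2)

# y[g, l, s, i] = f_{gls}(x_i) + eps_{igls}, eps_{igls} iid N(0, sigma^2)
y = np.array([[[waveform(g, l) + sigma * rng.standard_normal(n)
                for s in range(S)]
               for l in range(L)]
              for g in range(G)])
```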

SLAM Two-way ANOVA example - Prior

\begin{align*} t_{gls}^m \mid r_{gl}^m, \eta_{gl}^m &\stackrel{\rm iid}{\sim}\text{gbeta}\left(r_{gl}^m\eta_{gl}^m, \, (1 - r_{gl}^m)\eta_{gl}^m, \, a^{m}, \, b^{m}\right), \, s = 1, \dots, S_{gl}. \end{align*}

  • Subject-level t_{gls}^m \in [a^{m}, b^{m}]

  • Group-level r_{gl}^m \in (0, 1): location of prior mean in [a^{m}, b^{m}]

  • {\mathbb E}\left(t_{gls}^m\right) = (1 - r_{gl}^m)a^{m} + r_{gl}^mb^{m}
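Since gbeta(\alpha, \beta, a, b) is simply a Beta(\alpha, \beta) draw rescaled to [a, b], the subject-level latencies are easy to simulate; a minimal sketch with illustrative hyperparameter values (e.g., an N100-style window of [60, 140] msec).

```python
import numpy as np

rng = np.random.default_rng(0)

def rgbeta(r, eta, a, b, size):
    """t ~ gbeta(r*eta, (1 - r)*eta, a, b): a Beta draw rescaled to [a, b]."""
    return a + (b - a) * rng.beta(r * eta, (1 - r) * eta, size=size)

# Illustrative: window [60, 140] msec, group location r = 0.5, concentration eta = 20.
t = rgbeta(r=0.5, eta=20.0, a=60.0, b=140.0, size=8)   # latencies for 8 subjects
# Prior mean: (1 - r) * a + r * b = 100 msec here, matching the slide's formula.
```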

SLAM Two-way ANOVA example - Other Priors

\begin{align*} \beta_0^m &\sim N(\mu_0^m, (\sigma_0^m)^2),\\ \beta_{1, a}^m &\sim N(\mu_1^m, (\sigma_1^m)^2), \\ \beta_{2, b}^m &\sim N(\mu_2^m, (\sigma_2^m)^2), \\ \eta_{gl}^m &\sim Ga(\alpha_{\eta}^m, \beta_{\eta}^m),\\ \sigma^2 &\sim IG(\alpha_{\sigma}, \beta_{\sigma}),\\ \boldsymbol \theta&\sim \pi(\boldsymbol \theta) \end{align*}

Simulation

Data are generated with \sigma = 0.25

Point and Interval Estimation of Subject-level Latency


Uncertainty Quantification of Group-level Latency

Curve Fitting

Amplitude Estimates

ERP Data

  • Experiment on speech recognition

  • The N100 component (60-140 msec) captures phonological (syllable-level) representation.

  • The P200 component (140-220 msec) reflects higher-order perceptual processing.

Latency Difference Between Groups

Amplitude Difference Between Groups

Conclusion

  • The novel Bayesian model SLAM estimates the amplitude and latency of ERP components with uncertainty quantification.

  • Solves the search-window specification problem by producing the posterior distribution of latency.

  • Incorporates the ANOVA and possibly a latent generalized linear model structure in a single unified framework.

  • Examines both individual-subject differences and group-level differences, facilitating comparisons across factors such as age and gender.

  • Extensions could include other noise structures, for example autoregressive correlation or trial-to-trial variability.

  • Spatial modeling and mapping on electrodes would be another extension.