Why Use HMM with Gaussian Emissions?
In the context of creating an adaptive n-back task using Hidden Markov Models (HMMs), we use an HMM with Gaussian emissions for the following reasons:
1. Continuous Observation Variables:
The primary data we're working with, reaction times and response accuracy, are treated as continuous variables. Reaction times are naturally continuous; response accuracy is binary per trial, but combining it with reaction times into a single per-trial feature vector is easiest with a model that can handle continuous emissions.
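As a concrete sketch (all numbers here are made up for illustration), the two measures can be stacked into one continuous-valued feature vector per trial:

```python
import numpy as np

# Hypothetical per-trial data: reaction times in seconds and
# binary correctness (1 = correct, 0 = incorrect).
reaction_times = np.array([0.48, 0.55, 0.61, 0.47, 0.72, 0.53])
correct = np.array([1, 1, 0, 1, 0, 1])

# Stack into one observation matrix of shape (n_trials, 2):
# each row is the feature vector the HMM emits at one time step.
observations = np.column_stack([reaction_times, correct.astype(float)])

print(observations.shape)  # (6, 2)
```

Each row of `observations` is then what the Gaussian emission model sees at one time step.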
2. Gaussian Emission Probabilities:
Gaussian emissions are suitable for modeling continuous data. They assume that the observed data can be represented by a normal distribution for each state. This is a reasonable approximation for reaction times, which often follow a roughly bell-shaped curve around the mean (in practice reaction-time distributions are right-skewed, so a log transform is often applied to make the Gaussian assumption fit better).
3. Mathematical Simplicity and Efficiency:
Gaussian emissions simplify the computation of the emission probabilities. The parameters of a Gaussian distribution (mean and variance) have closed-form maximum-likelihood estimates, making the model easy to implement and computationally efficient to run.
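For instance, the maximum-likelihood estimates for one state's Gaussian are just the sample mean and (biased) sample variance, with no iterative optimization needed (illustrative numbers):

```python
import numpy as np

# Hypothetical reaction-time sample (seconds) attributed to one hidden state.
rt = np.array([0.52, 0.47, 0.55, 0.49, 0.60, 0.51, 0.46, 0.58])

# Closed-form maximum-likelihood estimates of the Gaussian parameters:
mu_hat = rt.mean()
var_hat = rt.var()  # MLE uses the biased (1/N) variance

print(f"mean={mu_hat:.4f}, variance={var_hat:.5f}")
```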
4. Flexibility in Modeling Real-world Data:
Real-world reaction times are often noisy and can vary significantly between trials and individuals. Gaussian emissions can model this variability effectively, capturing the central tendency and spread of the reaction times for each n-back level.
How Gaussian HMM Works
1. States and Observations:
In an HMM, we have hidden states (in this case, different n-back levels) and observable data (reaction times and response accuracy).
2. Gaussian Emissions:
Each hidden state emits observations according to a Gaussian distribution characterized by a mean vector and a covariance matrix.
For example, for a particular n-back level, the reaction times might have a mean of 0.5 seconds with a standard deviation of 0.1 seconds; response accuracy might be modeled jointly with reaction times or separately, depending on the implementation.
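Using those illustrative parameters (mean 0.5 s, standard deviation 0.1 s), the emission density for an observed reaction time can be evaluated directly; the helper function below is a minimal sketch, not from any particular library:

```python
import math

def gaussian_pdf(x, mean, std):
    """Density of a normal distribution -- the emission probability
    model for a continuous observation in a given hidden state."""
    z = (x - mean) / std
    return math.exp(-0.5 * z * z) / (std * math.sqrt(2 * math.pi))

# State parameters from the example above: mean 0.5 s, sd 0.1 s.
# A reaction time near the state's mean gets a high density ...
p_typical = gaussian_pdf(0.52, mean=0.5, std=0.1)
# ... while one far out in the tail gets a much lower one.
p_unusual = gaussian_pdf(0.90, mean=0.5, std=0.1)

print(p_typical > p_unusual)  # True
```

These per-state densities are exactly the quantities the HMM's inference algorithms multiply into the forward-backward recursions.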
3. Parameter Estimation:
The HMM with Gaussian emissions estimates its parameters (initial state probabilities, transition probabilities, and the per-state means and covariances) from the data using the Expectation-Maximization (EM) algorithm, known for HMMs as Baum-Welch. This allows the model to learn the typical reaction times and accuracy rates for each n-back level and how these evolve over time.
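A full EM loop is beyond a short example, but its E-step rests on the forward-backward recursions. Below is a minimal sketch of the scaled forward pass computing the data log-likelihood, assuming the parameters are already known; the two-state "easy"/"hard" model and all numbers are illustrative, not from the source:

```python
import numpy as np

def forward_loglik(obs, pi, A, means, stds):
    """Log-likelihood of a 1-D observation sequence under a Gaussian HMM,
    computed with the (scaled) forward algorithm -- the core recursion
    that EM's E-step reuses.
    pi: initial state probabilities, A: transition matrix,
    means/stds: per-state Gaussian emission parameters."""
    # Emission densities b[t, s] = N(obs[t]; means[s], stds[s]^2)
    z = (obs[:, None] - means[None, :]) / stds[None, :]
    b = np.exp(-0.5 * z**2) / (stds[None, :] * np.sqrt(2 * np.pi))

    alpha = pi * b[0]          # forward variables at t = 0
    loglik = 0.0
    for t in range(1, len(obs)):
        c = alpha.sum()        # rescale each step to avoid underflow
        loglik += np.log(c)
        alpha = (alpha / c) @ A * b[t]
    return loglik + np.log(alpha.sum())

# Hypothetical 2-state model: state 0 = fast RTs, state 1 = slow RTs.
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1],
              [0.1, 0.9]])
means = np.array([0.45, 0.75])
stds = np.array([0.08, 0.12])

obs = np.array([0.44, 0.47, 0.43, 0.78, 0.81, 0.74])
print(forward_loglik(obs, pi, A, means, stds))
```

In practice a library such as hmmlearn's `GaussianHMM` wraps the complete Baum-Welch loop, so this recursion rarely needs to be hand-written.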
