In contemporary cognitive neuroscience, the construct of “noise” in working memory research operates within two distinct but interrelated frameworks: first, as a theoretical mechanism that explains fundamental computational constraints on cognitive performance through neural variability and representational drift, and second, as a methodological challenge that introduces confounding variables and obscures the interpretation of intervention efficacy in training research. These dual conceptualisations reflect the field’s evolution from early discrete “slot” models towards more nuanced resource-based frameworks, while also highlighting the experimental rigour required to isolate genuine training effects from non-specific influences.

Theoretical Noise: A Core Constraint on Working Memory Capacity and Precision

Theoretically, noise denotes random neural variability or instability that progressively degrades the quality, fidelity, and precision of information maintained over brief temporal intervals. This conceptualisation has fundamentally reshaped models of working memory architecture, moving beyond the classic notion of fixed discrete slots towards continuous-resource or variable-precision models. In these frameworks, capacity is constrained by a limited pool of representational resource that must be dynamically allocated across items, with the precision of each memory representation inversely related to the amount of noise contaminating its underlying neural code (van den Berg et al., 2012). Bays and Husain (2008) demonstrated this dynamic allocation in psychophysical experiments showing that recall precision systematically decreases as memory load increases, consistent with representational noise escalating as finite neural resources are spread more thinly across items. The variable-precision model extends this account by proposing that noise levels fluctuate not only across items within a trial but also from trial to trial, thereby explaining the substantial trial-to-trial variability observed in working memory performance (van den Berg et al., 2012).

At the neurobiological level, this noise manifests as stochastic drift in the stable patterns of population-level activity (formally described as “attractor states” in computational models) that maintain information across delays in prefrontal cortical networks (Wimmer et al., 2014). This drift generates probabilistic recall errors that follow characteristic distributions around target values, providing a behavioural signature of the underlying representational noise. Internal noise is therefore not merely an incidental source of error: it is a primary determinant of working memory capacity limits and a fundamental explanation for why memory representations fade, blur, and lose precision over time and under increasing cognitive demand (Ma et al., 2014).
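
To make the resource-model logic concrete, the following minimal Python sketch simulates recall errors whose precision falls as memory load increases; the power-law decline, the baseline precision, and the gamma-distributed trial-to-trial precision are illustrative simplifications inspired by, but not fitted to, the variable-precision model of van den Berg et al. (2012).

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_recall_errors(set_size, n_trials=10_000, j1=20.0, alpha=1.0, tau=2.0):
        """Recall errors (radians) for one probed item at a given memory load."""
        mean_kappa = j1 * set_size ** (-alpha)   # mean precision falls with load
        # Precision also fluctuates from trial to trial (a simplified stand-in for
        # the gamma-distributed precision assumed by variable-precision models).
        kappa = rng.gamma(shape=mean_kappa / tau, scale=tau, size=n_trials)
        return rng.vonmises(mu=0.0, kappa=kappa, size=n_trials)

    def circular_sd(angles):
        """Circular standard deviation (radians) from the mean resultant length."""
        resultant = np.abs(np.mean(np.exp(1j * angles)))
        return np.sqrt(-2.0 * np.log(resultant))

    for n_items in (1, 2, 4, 8):
        errors = simulate_recall_errors(n_items)
        print(f"set size {n_items}: circular SD = {np.degrees(circular_sd(errors)):.1f} deg")

Running the loop reproduces the behavioural signature described above: the error distribution widens monotonically as the same representational resource is divided among more items.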

Methodological Noise: Confounding Variables in the Search for Training Efficacy

In working memory training research, “noise” takes on a fundamentally different, methodological significance: it denotes the constellation of uncontrolled variables, placebo effects, expectancy biases, and non-specific factors that obscure the detection of genuine training-induced transfer. This methodological noise poses a serious threat to both internal and construct validity, and it fuels the persistent controversy over whether adaptive working memory training produces authentic far-transfer effects or merely experimental artefacts. Boot et al. (2013) delineated why conventional active control groups frequently fail to control adequately for placebo effects: participants’ awareness of their experimental assignment, and the expectations about cognitive enhancement that accompany it, can substantially influence performance on outcome measures independently of any actual neurocognitive change. Foroughi et al. (2016) provided direct empirical evidence for this mechanism, showing that participants recruited with expectancy-inducing advertisements improved significantly on post-test ability measures after only a brief, identical intervention, whereas neutrally recruited participants did not. Additional sources of methodological noise include differential participant engagement, researcher interaction effects, practice-related improvements on the outcome assessments themselves, and the general cognitive stimulation inherent in any structured, challenging activity (Redick et al., 2013).

Meta-analytic syntheses consistently show that effect sizes for training transfer shrink as methodological controls become more rigorous: Melby-Lervåg and Hulme (2013) found minimal evidence for far transfer in well-controlled designs, and Sala et al. (2019) documented in a second-order meta-analysis that transfer effects attenuate progressively with increasing methodological stringency. This pattern suggests that earlier, less controlled studies systematically overestimated training benefits by failing to account for pervasive methodological noise, contributing both to replication failures and to inflated claims about the transformative potential of commercial “brain training” programmes (Simons et al., 2016).
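
The logic of these critiques can be made concrete with a small simulation: when a placebo or expectancy boost raises the trained group’s post-test scores, the apparent transfer effect is inflated against a passive control but largely disappears against an expectancy-matched active control. All group sizes and effect sizes below are invented for illustration and are not estimates from the studies cited above.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 200                       # participants per group (illustrative)
    true_transfer = 0.0           # assume no genuine far transfer
    expectancy_boost = 0.3        # placebo/expectancy effect, in SD units (invented)
    practice_gain = 0.2           # retest practice on the outcome task (invented)

    def gain_scores(genuine, expectancy):
        """Simulated pre-to-post gain scores for one group, in SD units."""
        return rng.normal(practice_gain + genuine + expectancy, 1.0, size=n)

    trained = gain_scores(true_transfer, expectancy_boost)
    active = gain_scores(0.0, expectancy_boost)   # control with matched expectations
    passive = gain_scores(0.0, 0.0)               # no-contact control

    def cohens_d(a, b):
        """Standardised mean difference between two groups' gain scores."""
        pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2.0)
        return (a.mean() - b.mean()) / pooled_sd

    print(f"apparent transfer vs passive control: d = {cohens_d(trained, passive):.2f}")
    print(f"apparent transfer vs active control:  d = {cohens_d(trained, active):.2f}")

Even with no genuine transfer built into the simulation, the passive-control comparison typically yields a noticeably larger effect size than the active-control comparison, mirroring the attenuation pattern reported in the meta-analyses above.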

Bridging the Conceptual Divide: Can Training Reduce Internal Neural Noise?

The most theoretically sophisticated and methodologically rigorous working memory training research seeks to bridge these two conceptualisations by asking whether targeted adaptive practice can alter the neural computations underlying working memory, reducing internal representational noise through experience-dependent plasticity (Klingberg, 2010; Shipstead et al., 2012). The central proposition is that persistently engaging cognitive systems at their performance boundaries may induce functional and structural modifications in the circuits supporting working memory that enhance computational efficiency, whether through increased neural resource availability, improved signal-to-noise ratios in relevant cortical regions, enhanced neural synchronisation, or greater stability in the attractor dynamics maintaining representations (Wimmer et al., 2014). Critically, evidence for such mechanistic change should extend beyond performance gains on the trained tasks themselves: the hypothesis predicts a qualitative change in error distributions on precision-based measures, specifically narrower, more concentrated recall distributions around target values that directly index reduced representational noise (Wei et al., 2015).

Conclusively demonstrating this link between training and reduced neural noise, however, requires exceptionally stringent methodology that isolates genuine neurocognitive change from the pervasive influence of methodological noise. This means not only active, placebo-controlled, double-blind designs but also assessment batteries comprehensive enough to distinguish general strategic improvements from fundamental changes in representational precision. The field has also increasingly recognised that individual differences in baseline cognitive ability, attentional control, and neural efficiency likely moderate training responsiveness, suggesting that noise-reduction mechanisms may operate more effectively in particular subpopulations (Tsukahara et al., 2020). Ultimately, substantive progress in understanding working memory training efficacy depends on the principled integration of precise computational models of working memory constraints with experimental designs capable of isolating genuine neurocognitive change from the many confounds that surround it.
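
As a concrete illustration of this predicted signature, the sketch below summarises hypothetical pre- and post-training recall errors by their circular standard deviation, the statistic a precision-based assessment would track; the concentration values are invented for illustration and are not drawn from any cited study.

    import numpy as np

    rng = np.random.default_rng(2)

    def circular_sd(angles):
        """Circular standard deviation (radians) from the mean resultant length."""
        resultant = np.abs(np.mean(np.exp(1j * angles)))
        return np.sqrt(-2.0 * np.log(resultant))

    # Hypothetical recall errors before and after training: a genuine reduction in
    # representational noise would appear as a higher von Mises concentration
    # (kappa) and hence a narrower error distribution around the target.
    pre_errors = rng.vonmises(mu=0.0, kappa=4.0, size=5_000)
    post_errors = rng.vonmises(mu=0.0, kappa=8.0, size=5_000)

    print(f"pre-training  circular SD: {np.degrees(circular_sd(pre_errors)):.1f} deg")
    print(f"post-training circular SD: {np.degrees(circular_sd(post_errors)):.1f} deg")

In a real study, the same comparison would need to survive the active-control and expectancy checks described in the previous section before a narrower post-training distribution could be attributed to reduced internal noise.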

References

Bays, P. M., Catalao, R. F., & Husain, M. (2009). The precision of visual working memory is set by allocation of a shared resource. Journal of Vision, 9(10), 7. https://doi.org/10.1167/9.10.7

Bays, P. M., & Husain, M. (2008). Dynamic shifts of limited working memory resources in human vision. Science, 321(5890), 851–854. https://doi.org/10.1126/science.1158023

Boot, W. R., Simons, D. J., Stothart, C., & Stutts, C. (2013). The pervasive problem with placebos in psychology: Why active control groups are not sufficient to rule out placebo effects. Perspectives on Psychological Science, 8(4), 445–454. https://doi.org/10.1177/1745691613491271

Foroughi, C. K., Monfort, S. S., Paczynski, M., McKnight, P. E., & Greenwood, P. M. (2016). Placebo effects in cognitive training. Proceedings of the National Academy of Sciences, 113(27), 7470–7474. https://doi.org/10.1073/pnas.1601243113

Klingberg, T. (2010). Training and plasticity of working memory. Trends in Cognitive Sciences, 14(7), 317–324. https://doi.org/10.1016/j.tics.2010.05.002

Luck, S. J., & Vogel, E. K. (2013). Visual working memory capacity: From psychophysics and neurobiology to individual differences. Trends in Cognitive Sciences, 17(8), 391–400. https://doi.org/10.1016/j.tics.2013.06.006

Ma, W. J., Husain, M., & Bays, P. M. (2014). Changing concepts of working memory. Nature Neuroscience, 17(3), 347–356. https://doi.org/10.1038/nn.3655

Melby-Lervåg, M., & Hulme, C. (2013). Is working memory training effective? A meta-analytic review. Developmental Psychology, 49(2), 270–291. https://doi.org/10.1037/a0028228

Redick, T. S., Shipstead, Z., Harrison, T. L., Hicks, K. L., Fried, D. E., Hambrick, D. Z., & Engle, R. W. (2013). No evidence of intelligence improvement after working memory training: A randomized, placebo-controlled study. Journal of Experimental Psychology: General, 142(2), 359–379. https://doi.org/10.1037/a0029082

Sala, G., Aksayli, N. D., Tatlidil, K. S., Tatsumi, T., Gondo, Y., & Gobet, F. (2019). Near and far transfer in cognitive training: A second-order meta-analysis. Collabra: Psychology, 5(1), 18. https://doi.org/10.1525/collabra.203

Shipstead, Z., Redick, T. S., & Engle, R. W. (2012). Is working memory training effective? Psychological Bulletin, 138(4), 628–654. https://doi.org/10.1037/a0027473

Simons, D. J., Boot, W. R., Charness, N., Gathercole, S. E., Chabris, C. F., Hambrick, D. Z., & Stine-Morrow, E. A. (2016). Do “brain-training” programs work? Psychological Science in the Public Interest, 17(3), 103–186. https://doi.org/10.1177/1529100616661983

Tsukahara, J. S., Harrison, T. L., Draheim, C., Martin, J. D., & Engle, R. W. (2020). Attention control: The missing link between sensory discrimination and intelligence. Attention, Perception, & Psychophysics, 82, 3445–3478. https://doi.org/10.3758/s13414-020-02044-9

van den Berg, R., Shin, H., Chou, W. C., George, R., & Ma, W. J. (2012). Variability in encoding precision accounts for visual short-term memory limitations. Proceedings of the National Academy of Sciences, 109(22), 8780–8785. https://doi.org/10.1073/pnas.1117465109

Wei, X. X., Stocker, A. A., & Kiani, R. (2015). Bayesian inference with incomplete knowledge explains perceptual confidence and its deviations from accuracy. Nature Communications, 6, 6192. https://doi.org/10.1038/ncomms7192

Wimmer, K., Nykamp, D. Q., Constantinidis, C., & Compte, A. (2014). Bump attractor dynamics in prefrontal cortex explains behavioral precision in spatial working memory. Nature Neuroscience, 17(3), 431–439. https://doi.org/10.1038/nn.3645