Computerized cognitive training (CCT) has been widely promoted for older adults, but systematic reviews to date have left its effectiveness in cognitively healthy older adults unclear. In a new systematic review and meta-analysis published in PLOS Medicine, Michael Valenzuela (Brain and Mind Research Institute, University of Sydney, Sydney, New South Wales, Australia) and colleagues evaluated 52 datasets from 51 studies and found a small overall effect of CCT (g = 0.22; 95% CI 0.15 to 0.29, where g < 0.30 is considered a small effect) on performance on cognitive tests that were not included in the training program. Longer-term outcomes and effects on activities of daily living were not evaluated.

They also found that while some cognitive domains, specifically nonverbal memory, verbal memory, working memory, processing speed, and visuospatial skills, showed small to moderate effects, there was no effect on executive function or attention. Importantly, training completed in a group and under supervision showed a positive effect on cognitive outcomes, whereas training completed alone at home did not. Another practical finding was that training more than three times per week was ineffective, while training one to three times per week was linked to positive outcomes.
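For context, Hedges' g is a standardized mean difference: the between-group difference in mean test performance scaled by the pooled standard deviation, with a small-sample bias correction. A sketch of the conventional definition follows, assuming the usual pooled-variance estimator (the exact estimator used by the authors is not restated in this summary):

g = J \cdot \frac{\bar{x}_{\mathrm{CCT}} - \bar{x}_{\mathrm{control}}}{s_{\mathrm{pooled}}}, \qquad s_{\mathrm{pooled}} = \sqrt{\frac{(n_1 - 1)\, s_1^2 + (n_2 - 1)\, s_2^2}{n_1 + n_2 - 2}}, \qquad J \approx 1 - \frac{3}{4(n_1 + n_2 - 2) - 1}

On this scale, the overall g = 0.22 corresponds to a gain of roughly one fifth of a pooled standard deviation on untrained cognitive tests.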

The authors caution that their review "provides no indication about the durability of the observed gains, nor their transfer into real-life outcomes such as independence, quality of life, daily functioning, or risk of long-term cognitive morbidity." They conclude, "CCT is modestly effective at improving cognitive performance in healthy older adults, but efficacy varies across cognitive domains and is largely determined by design choices. Further research is required to enhance efficacy of the intervention."

In an accompanying Perspective, Druin Burch (consulting editor for PLOS Medicine) states, "CCT has a market approaching a billion dollars a year and an uncertain evidence base." He points out, "Doing something repeatedly can make you better at it, which is not the same as saying it makes you better. For that reason, Valenzuela and colleagues' review is of studies assessing how practice at particular tasks transferred to more general ones." The study "could not evaluate whether any of the small changes detected (which may or may not extrapolate to settings outside of specific cognitive tests) persist, even to the next day." He asks, "Does a billion dollar gap exist between our knowledge about 'standardized computerized tasks with clear cognitive rationale' and the industry selling them? Valenzuela and colleagues' overview of the evidence for CCT in cognitively intact older adults suggests it does."