Although someone who is tone deaf cannot consciously hear the difference between two notes in a song, their brain is actually able to perceive the difference, according to a report released on June 10, 2008 in the open-access online journal PLoS ONE.

Tune deafness, or tone deafness, is characterized by a cognitive inability to distinguish between pitches, reproduce melodies, or identify changes in melody, despite normal hearing. It is an auditory processing disorder, and not only do most tone-deaf people not enjoy music, but they also have trouble understanding what makes it special. “For severely affected tune-deaf people, Yankee Doodle is no different than traffic noise in Manhattan. It’s fairly meaningless to them,” states Dr. Drayna, one of the authors of the study.

The difference between conscious and unconscious perception has also been a point of interest for neuroscientists. Many sensory disorders have been found in which the brain identifies a stimulus but does not register it in conscious awareness. Unfortunately, most of these disorders involve high variability among subjects and direct damage to the brain, so volunteer patients are difficult to find and controlled studies are often impossible. According to the results of this study, however, tone deafness could be an exception.

Not only is tone deafness relatively prevalent, but it is largely hereditary. For this reason, subjects are easier to find, and quantitative genetic tools may be used in its study. As a result, the study of tone deafness may be well suited to exploring the difference between conscious and unconscious perception. “The prevalence of tune deafness is surprisingly high — perhaps as much as 2 percent of the population is tune deaf and it exists in an otherwise normal, uninjured brain,” said James F. Battey, Jr., M.D., Ph.D., director of the National Institute on Deafness and Other Communication Disorders (NIDCD), part of the National Institutes of Health (NIH). “These factors, combined with the fact that tune deafness is largely genetic in origin, now raises the possibility of using tune deafness as a new way to study consciousness.”

In this study, Dr. Drayna worked with neuroimaging scientist Allen Braun, M.D., and colleagues in NIDCD’s Division of Intramural Research. They randomly screened 1,218 potential subjects using an online exercise called the Distorted Tunes Test, a standardized test of whether a listener can tell if a short melody has been played correctly. (The online version can be found on the NIDCD website at http://www.nidcd.nih.gov/tunetest/.) Volunteers who scored in the bottom 10% on this quiz were further screened for hearing loss and other factors, and finally seven severely tune-deaf subjects were selected who were otherwise medically normal and willing to take part in the study. Ten healthy control subjects who performed normally on the test also participated.

These patients were then observed by Dr. Braun, Joseph McArdle, Ph.D., and others using electroencephalography (EEG), a method of brain imaging that measures the electrical impulses of neurons using electrodes placed around the head. The researchers measured the patients’ responses as they listened to a new version of the Distorted Tunes Test in which the incorrect melodies differed from the correct melodies by a single note at the end. The subjects heard 102 familiar melodies, approximately half of which were played correctly and half of which had an incorrect last note. The EEG responses to melodies with the right final note were then compared to those with the wrong note, isolating the brain’s response to the unexpected stimulus.

Two signals were of particular interest to the researchers; both are normally generated when the brain is presented with a stimulus that does not match the note it expects to hear next. The first, called the mismatch negativity (MMN), is a large negative signal that generally occurs about 200 milliseconds after the unexpected stimulus is heard. The second, called the P300, is a large positive signal generally occurring about 300 milliseconds after the unexpected stimulus.
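In EEG studies of this kind, components like the MMN and P300 are typically isolated by averaging many trials of each condition and subtracting the averages to form a “difference wave.” The sketch below illustrates the idea on purely synthetic data; the sampling rate, trial counts, and waveform shapes are invented for the example and are not taken from the study:

```python
import numpy as np

# Illustrative ERP difference-wave analysis on synthetic data.
# "Standard" trials = correctly played final note;
# "deviant" trials = wrong final note.

fs = 1000                    # sampling rate in Hz (1 sample = 1 ms)
t = np.arange(0, 600) / fs   # 0-600 ms after the final note

rng = np.random.default_rng(0)

def bump(center_ms, width_ms, amplitude):
    """A Gaussian bump standing in for an ERP component."""
    return amplitude * np.exp(-((t * 1000 - center_ms) ** 2) / (2 * width_ms ** 2))

# Standard trials: noise only (no surprise response)
standards = rng.normal(0, 0.5, (50, t.size))

# Deviant trials: noise plus an MMN-like negativity near 200 ms
# and a P300-like positivity near 300 ms
deviants = rng.normal(0, 0.5, (50, t.size)) + bump(200, 30, -3.0) + bump(300, 50, 5.0)

# Averaging across trials cancels the noise; subtracting the two
# averages isolates the response to the unexpected note
difference = deviants.mean(axis=0) - standards.mean(axis=0)

# Locate the MMN-like trough (searched 150-250 ms) and the
# P300-like peak (searched 250-400 ms)
mmn_latency = int(np.argmin(difference[150:250])) + 150
p300_latency = int(np.argmax(difference[250:400])) + 250

print(f"MMN-like trough near {mmn_latency} ms, amplitude {difference[mmn_latency]:.1f}")
print(f"P300-like peak near {p300_latency} ms, amplitude {difference[p300_latency]:.1f}")
```

With enough trials the random noise averages toward zero, so the difference wave shows a clear negative deflection around 200 ms and a positive one around 300 ms, mirroring the two components the researchers examined.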

The scientists initially hypothesized that tune-deaf patients would generate neither the MMN nor the P300 signal, because they do not consistently recognize wrong notes consciously. However, tune-deaf individuals generated the P300 signal in the same way normal participants did, regardless of their awareness of the deviation. Correct notes were also processed equally well by all volunteers, regardless of tone deafness.

This dissonance between registering a wrong note and the subject’s awareness of the wrong note bears further explanation. The researchers account for it by noting that the MMN and P300 signals are generated in different regions of the brain: the MMN arises near the primary auditory cortex in the temporal lobe, while the P300 is generated in the frontoparietal cortex, downstream from the auditory cortex. Whereas normal brains process sounds in series, with the frontal and parietal cortices receiving signals that have already been processed in the auditory cortex, the tune-deaf brain has a disrupted path for this information, routing it to the two regions along parallel, independent pathways. Thus, information about the wrong note never registers in the auditory cortex, while the information that does arrive at the frontoparietal cortex never reaches conscious awareness.

In the future, the research group hopes that studies will home in on the brain regions from which the MMN and P300 signals originate. Additionally, genetic studies on the causes of tone deafness could help elucidate the molecular mechanism underlying the disorder.

About PLoS ONE

All works published in PLoS ONE are open-access. Everything is immediately available – to read, download, redistribute, include in databases and otherwise use – without cost to anyone, anywhere, subject only to the condition that the original authorship and source are properly attributed. Copyright is retained by the authors. The Public Library of Science uses the Creative Commons Attribution License.

PLoS ONE is the first journal of primary research from all areas of science to employ both pre- and post-publication peer review to maximize the impact of every report it publishes. PLoS ONE is published by the Public Library of Science (PLoS), the open-access publisher whose goal is to make the world’s scientific and medical literature a public resource.

About the National Institute on Deafness and Other Communication Disorders (NIDCD):

NIDCD supports and conducts research and research training on the normal and disordered processes of hearing, balance, smell, taste, voice, speech, and language, and provides health information, based upon scientific discovery, to the public. For more information about NIDCD programs, see the Web site at www.nidcd.nih.gov.

About the National Institutes of Health (NIH):

The National Institutes of Health (NIH) – The Nation’s Medical Research Agency – includes 27 Institutes and Centers and is a component of the U.S. Department of Health and Human Services. It is the primary federal agency for conducting and supporting basic, clinical, and translational medical research, and it investigates the causes, treatments, and cures for both common and rare diseases. For more information about NIH and its programs, visit www.nih.gov.

Tune Deafness: Processing Melodic Errors Outside of Conscious Awareness as Reflected by Components of the Auditory ERP.
Braun A, McArdle J, Jones J, Nechaev V, Zalewski C, et al.
PLoS ONE 3(6): e2349.
doi:10.1371/journal.pone.0002349

Written by Anna Sophia McKenney