Their nervous system fails to distinguish one consonant from the other. What we've discovered here is a fundamental mechanism – ‘response stability’ – that helps us understand why poor readers have difficulty with auditory processing in the first place.
Professor Nina Kraus
Reported today in The Journal of Neuroscience, research conducted at Northwestern University in Illinois appears to have found a biological marker for dyslexia. Dyslexia is unrelated to intelligence, vision or hearing and comes in many different guises, but it continues to make learning to read (and write) difficult for as many as one in every ten people.
Nina Kraus, Hugh Knowles Professor of Neurobiology, Physiology and Communication and principal investigator at Northwestern's Auditory Neuroscience Laboratory, took the time to expound for ScienceOmega.com on the link she and her colleagues have found between reading ability and consistency of sound encoding.
Many people whose reading is affected by dyslexia have trouble remembering, sequencing, and categorising sounds, and problems picking up on the timing of sounds, including rhythms.
"It had been known for some time that many dyslexics have auditory processing difficulties," remarked Professor Kraus. "We aimed to better understand the biological basis for this, and have discovered that sound is processed inconsistently or unstably by the brain in poor readers."
Presumably poor readers are failing to make successful 'sound to meaning' connections; they are failing to link elements of sound with the associated language meaning. Consequently, the auditory system does not become shaped by this experience – as it does in good readers – to encode important sounds more consistently.
Learning to read involves linking the auditory representations of sounds with visual symbols, i.e. letters, but to do this the brain must first process the sound.
"The nervous system needs to transcribe the acoustic information contained in sound waves into the currency of the nervous system: electricity," said Professor Kraus. "We can capture this electricity in humans in the form of brain waves, and can thus determine how well the neurons are encoding meaningful sound elements."
Professor Kraus explained how it has previously been found that poor readers often fail to encode meaningful aspects of sounds well, especially consonants, which are more acoustically complex and shorter than vowels.
"Their nervous system fails to distinguish one consonant from the other," she said. "What we've discovered here is a fundamental mechanism – ‘response stability’ – that helps us understand why poor readers have difficulty with auditory processing in the first place."
The researchers recorded the automatic brain wave responses of 100 schoolchildren to various speech sounds and found that the best readers were those whose neural responses were most consistent over repeated exposures. The readers who were least able showed the most inconsistent encoding of sound, suggesting that brain response stabilises when the appropriate meaning becomes associated with a certain sound.
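The paper's exact analysis pipeline isn't described in this article, but one common way to quantify this kind of trial-to-trial response stability is to split the recorded trials into two random halves, average each half, and correlate the two average waveforms. The sketch below illustrates the idea on synthetic data; the function name, waveform, and noise levels are all illustrative assumptions, not the study's actual method.

```python
import numpy as np

def response_consistency(trials, rng=None):
    """Estimate response stability as the Pearson correlation between
    the average waveforms of two random halves of the trials.
    trials: array of shape (n_trials, n_samples)."""
    rng = np.random.default_rng(rng)
    idx = rng.permutation(len(trials))
    half = len(trials) // 2
    a = trials[idx[:half]].mean(axis=0)
    b = trials[idx[half:]].mean(axis=0)
    return np.corrcoef(a, b)[0, 1]

# Synthetic example: a fixed evoked waveform plus per-trial noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 0.04, 200)                 # 40 ms of response
evoked = np.sin(2 * np.pi * 100 * t)          # idealised 100 Hz component
stable = evoked + 0.3 * rng.standard_normal((40, 200))    # low noise
unstable = evoked + 3.0 * rng.standard_normal((40, 200))  # high noise

print(response_consistency(stable, rng=1))    # high, near 1
print(response_consistency(unstable, rng=1))  # noticeably lower
```

A "stable" responder's split-half averages line up almost perfectly, while a noisy responder's do not, mirroring the contrast the researchers report between the best and poorest readers.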
The results could help in finding and implementing new ways to help children with dyslexia, Professor Kraus explained, because a relatively simple physiological test could uncover at least one of the roots of the problem.
"These physiological responses can be measured in individual children and can help determine whether inconsistent neural responses to sound are contributing to the reading difficulty. We also know that children with inconsistent responses before training are the most likely to benefit from training and improve in their reading abilities."
The researchers know this from a previous study in which FM training was introduced to the classroom, with children known to have reading difficulties using an assistive listening device for a year. Such devices transmit the teacher’s voice directly to the pupils’ ears, meaning that other distracting noise can be filtered out. An improvement in reading ability was recorded alongside an increased propensity to encode speech sounds consistently.
"We would like to see this technology developed such that it could be easily put into the hands of those interested in understanding or assessing the biological bases of hearing," stressed Professor Kraus.
She and her colleagues at the Auditory Neuroscience Lab are currently in the first year of a five-year longitudinal study of early reading biomarkers in preschool children. The Biotots project is following children from three years of age until the age of seven. At the moment, the team only has cross-sectional data from eight- to 12-year-olds who already have difficulty reading, but the project is aiming high.
"In Biotots, we're tracking brain activity and language development with the aim of being able to one day predict at age three which children may be at risk for developing reading problems later," Professor Kraus shared.
Early intervention is certainly helpful for promoting optimal outcomes, but, importantly, the team are also keen to gain a clearer understanding of the biological mechanisms that undergird normal and disordered reading processes.
"We are also working on getting our biological approach onto a user-friendly platform so it can be put into the hands of the many people interested in hearing and its disorders, including geneticists, biologists, epidemiologists, sociologists, specialists in education and medicine, and individuals developing training approaches."