Volume 5 Number 4, 1998
Children and adults with good auditory processing skills can determine the number of sounds, whether sounds are the same or different, and the order of sounds, both for isolated sounds and for sounds in sequence.
This article will identify ways to develop auditory skills in everyday situations.
Auditory attention is developed when a person calls attention to and labels a particular sound: “Listen, did you hear the dog barking?” When standing in line at the checkout counter you can call attention to the continuous and intermittent sounds. Label the sounds. “That’s the crackling from the plastic bag.” “That beep beep sound is made by the computer scanner. It lets the cashier know the price has been scanned.”
Auditory figure-ground discrimination can be targeted at the same time as auditory attention. Figure-ground is comparable to separating a tree from the forest: you concentrate on the one person talking with you even though there is talking or noise in the background. The adult version of good auditory figure-ground discrimination is being able to follow a phone conversation while sitting in a noisy office. The child needs to learn to focus on the speaker even though there are noises in the environment.
Practice the Copy Cat game in different settings. In the Copy Cat game, the child repeats exactly what is said. In children with autism this is often called “echolalia,” but copying speech exactly is a normal, functional behavior for developing auditory memory in children around 30 months of age. There are developmental stages to auditory memory, and the behavior of copying the speech of others resurfaces throughout childhood. Everyone remembers a sibling or friend who copied exactly what was said in order to annoy the speaker. While I don’t think echolalic children copy to annoy others, I do think the copying serves a communicative purpose. In clinical practice with children with learning disabilities, I purposely teach them to “develop the tape recorder inside their brain.” We also teach the concepts of fast forward, reverse, pause, and stop, using the terminology of a tape player. This allows “Random Access Memory (RAM) Retrieval.” I have met children who can recite entire movies but cannot pick and choose one scene from a movie; they must tell the entire movie from title to credits. Developing flexibility in retrieval is a very important skill.
Copy Cat for gradually longer sentences improves auditory memory. Playing this game in the car with windows up or down, radio on and off, can develop auditory figure-ground discrimination and auditory memory.
If a child has difficulty using non-echolalic speech, a practice session with Copy Cat/No Copy Cat prompts can help him/her understand when to play and not to play “the game.”
Auditory Discrimination is the ability to determine whether two sounds or words are the same or different. The easiest level (wide discrimination) is a discrimination between two very different-sounding words with different meanings.
- Wide Discrimination: cat / mountain
- Finer Discrimination: cat / lion
- Finest Discrimination: cat / hat
Examples of auditory discrimination for isolated sounds are:
- Wide Discrimination: /oe/ /v/
- Finer Discrimination: /oe/ /ae/
- Finest Discrimination: /oe/ /oo/
In general, vowels are more difficult to discriminate than consonants.
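The wide/finer/finest progression above can be thought of in terms of how few changes separate two words. As a rough illustration only (this is my own sketch, not part of the article's program, and the phoneme spellings below are informal assumptions rather than a standard transcription), an edit distance over phoneme sequences ranks the article's example pairs in the same order:

```python
# Illustrative sketch: rank the article's word pairs from widest to finest
# discrimination by edit distance between phoneme sequences.
# Phoneme spellings are informal assumptions, not a standard notation.

def edit_distance(a, b):
    """Minimum number of phoneme insertions, deletions, or substitutions."""
    dp = list(range(len(b) + 1))
    for i, pa in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, pb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # delete
                                     dp[j - 1] + 1,      # insert
                                     prev + (pa != pb))  # substitute or match
    return dp[-1]

cat = ["k", "ae", "t"]
pairs = {"hat":      ["h", "ae", "t"],
         "lion":     ["l", "ay", "ah", "n"],
         "mountain": ["m", "aw", "n", "t", "ah", "n"]}

for word, phonemes in pairs.items():
    print(word, edit_distance(cat, phonemes))
# Fewer changes needed means a finer (harder) discrimination:
# hat 1, lion 4, mountain 5
```

The minimal pair cat/hat differs by a single phoneme, which is why it sits at the finest (hardest) level.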
Auditory Memory for whole words can be practiced in sentences as described above. It can also be practiced at two other levels: word sequences and sound sequences. An example of Copy Cat word sequences is: “Copy me: chair, table, floor.” Seven unrelated items is the average adult memory span. An example of sound sequences is: “Copy me: /b/, /ch/, /oe/.”
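A word-sequence drill like the one above can be generated simply by drawing unrelated words and growing the list one item per round toward the seven-item adult span. This is my own minimal sketch, and the word list is invented for illustration:

```python
# Sketch of a Copy Cat word-sequence drill that grows one item per round,
# up toward the average adult span of seven unrelated items.
# The word list is my own example, not from any published program.
import random

WORDS = ["chair", "table", "floor", "apple", "shoe", "cloud", "spoon",
         "river", "candle", "pillow"]

def next_sequence(length, rng=random):
    """Draw `length` distinct, unrelated words for the child to repeat back."""
    return rng.sample(WORDS, k=length)

for n in range(3, 8):  # start at three items and build toward seven
    print("Copy me:", ", ".join(next_sequence(n)))
```

Starting well below the child's limit and adding one word at a time keeps the game a teaching activity rather than a test.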
Auditory segmentation is the ability to break apart a spoken word into its individual sounds. Auditory synthesis is the ability to blend those parts back together to make a whole word.
An example of auditory segmentation at the syllable level is breaking the word “broccoli” into three parts, “broc-co-li.” Auditory segmentation at the sound level is breaking the word “stream” into five parts, “s-t-r-ea-m.”
Accurate synthesis of individual sounds is difficult for many children and adults with weak auditory processing systems. These individuals may sound out a word, “c-a-p,” yet synthesize it as “cat”; they have difficulty blending the sounds back into the word. A car game to practice segmentation and synthesis can be great fun. Start with the directions, “I’m going to break some words into sounds. See if you can figure out what words I am saying.” “Do-nut-shop.” The child says, “Donut shop.” I pick objects in the car at first and repeat as often as necessary. When your child has developed faster auditory processing, you can pick things outside the car and have the child look for the object before you drive past it.
Syllable Synthesis is easier than sound synthesis. Make sure the child “guesses” the word 100% of the time even if that means you need to hold on to the vowel and connect the word, “Dooonuuut shoooop.” You are not testing the child’s skills, you are teaching skills. Help out until there is independent accuracy.
Sound-to-Symbol Correspondence is the ability to match the sound with the letter of the alphabet. This is an associative learning task that incorporates both auditory and visual memory skills. Repetition of the sound along with the visual presentation of the letter is the way most children learn this association. There are multi-sensory methods such as “Zoo Phonics” that associate the sound with a picture, letter and body movement.
The Lindamood method associates the sound, letter, and mouth movement. Sound-symbol correspondence can be one of the final steps of phonemic awareness training; the child does not have to know the sounds of the alphabet before beginning an intensive pre-reading program. Some children have precocious development of decoding skills and begin to read at 2 or 3. Other children struggle with sound-symbol associations until 7 or 8. Auditory development can occur independently of sound-symbol association and will make the secret code of reading easier to decipher when it is eventually taught.
As you can see, there are many ways to help your child pay attention to words and sounds within words; and this can be fun and exciting. In addition, practicing in different settings, such as the car, grocery store, etc., will make it easier for your child to generalize what he/she has learned. Have fun this summer teaching phonological awareness!
The Earobics program utilizes many computerized training techniques, such as acoustic enhancement of speech signals, systematic control of learning variables, and adaptive training methods. These techniques help promote the development of auditory and phonological skills which are critical for speech and language development. The comprehensive training program includes: skill development in auditory attention, auditory discrimination, auditory figure-ground discrimination, auditory memory, phonemic synthesis, sound segmentation, auditory and phonemic identification, sound-symbol correspondence, rhyming and phonological awareness. Research has shown that mastery of these skills is necessary for success in reading as well as speech/language development.
Earobics uses six computer games to teach these auditory skills. Within each game there are numerous levels of difficulty, as many as 114, and each level is designed to be slightly more difficult than the previous one. The games are made to be appealing to children between 4 and 7 years of age; however, chronologically older children with developmental delays may also enjoy and benefit from the program. New versions are being developed for use with older children.
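Earobics' actual adaptive rules are not published, so the following is only a sketch of the general idea behind adaptive training of this kind: move up a level after a correct response, drop back after an error, within a fixed range of levels (the 114-level ceiling is taken from the description above):

```python
# Illustrative only: Earobics' real adaptive algorithm is proprietary.
# This sketch shows the general shape of adaptive training: each correct
# response advances one level, each error steps back one, bounded between
# the starting level and the ceiling.

def run_adaptive(responses, max_level=114, start=1):
    """Return the level reached after a sequence of True/False responses."""
    level = start
    for correct in responses:
        if correct:
            level = min(level + 1, max_level)
        else:
            level = max(level - 1, start)
    return level

print(run_adaptive([True, True, False, True]))  # → 3
```

A one-up/one-down staircase like this keeps the task hovering near the child's current skill level, which is the point of "slightly more difficult than the previous one."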
There are currently three versions of Earobics; however, the actual games on all three versions are the same.
Earobics Home version is available directly to parents at a cost of $59. No special supervision is required for use. This version tracks the progress using a simple chart.
Earobics Pro is available to professionals and parents for $149. In order for parents to directly purchase the Pro version for home use, they must submit a letter from their professional (e.g., speech/language therapist, educational consultant, special education teacher, etc.) indicating that the professional is supervising the child in his/her home program. The Pro version prints out learning objectives for IEP use, tracks the child’s progress, provides details such as percent correct, and stores data for up to 25 users at a time.
Earobics Pro Plus is the newest version, available for $299. As with the Pro version, parents must be supervised by a professional in order to purchase this version for home use. New features in the Plus version include:
- selecting the starting level of play for each child whose skill level is higher than the beginning level,
- skipping levels of play, or repeating levels for extra review,
- customizing auditory presentations and other features, and
- limiting which games are available for play.
It also offers unlimited re-usability and security controls that limit access for confidentiality.
For more information on Earobics, contact Cognitive Concepts at 847-328-8199. You can also visit their web site at: www.cogcon.com/
It is now believed that even if music and language occupy separate cognitive systems, at some level there must be neural circuits that are shared, or that lie so close together in the cortex that a traumatic injury could damage both. A research study suggests that even though we hear a tune’s melody and rhythm as an integrated whole, the brain may be processing the two components separately.
Brain imaging techniques have been used to identify the circuits responsible for components of musical perception. When subjects were asked to simply listen to a tune, the PET scans showed activity in an area of the right temporal lobe called the “superior temporal gyrus.” This region has long been associated with auditory stimulation. When subjects were asked to attend to particular pitches within the tunes and make comparisons, thus employing “working memory,” the scans showed patterns of processing involving several regions of the brain. The question as to whether music is a right brain or left brain function may not be to the point. Music may well be engaging the entire brain.
Early musical training actually appears to alter brain anatomy. The corpus callosum was significantly larger in musicians who had begun their musical training at a very early age. Since playing an instrument requires fine coordination between the two hands, early musical training may develop more wiring, or better-insulated wiring, that speeds motor communication between the two hemispheres. There were also anatomic differences in the brains of musicians with perfect pitch. The planum temporale is larger on the left side of the brain than on the right in the average human brain. It is presumed that this difference is due to its involvement in language processing. In musicians, however, this disparity in size was even more pronounced. It is possible that the planum temporale may be involved in the task of classifying sound, which underlies our perception of both music and language.
Jaak Panksepp, a biopsychologist at Bowling Green State University in Ohio and a member of SAIT’s Board of Directors, has explored the emotions that can be elicited by music. Musical “chills” may derive from the ability of particular acoustic structures (a high-pitched crescendo, or a solo instrument emerging from the background) to excite primitive mammalian regions of the brain that respond to the distress signal of an infant who has suddenly lost his/her parents. The effect of that wail is to make the parents feel a physical chill and prompt them to seek warmth in a reuniting embrace.
Mitch Waterman, a psychologist at the University of Leeds, in England, believes the emotions triggered by music provide a way to stimulate ourselves safely, without the psychological consequences risked with real feeling. Music may be a resource to make us feel. It helps keep our brains working properly. In fact, some studies have shown music has a long-term effect on enhancing abstract reasoning and other skills. In one such study, a control group of first graders received the school system’s standard visual arts and music curriculum, while the experimental group received intensive instruction in music and art. Pre-test scores for the experimental group were initially below those of the control group. At the end of seven months, the experimental group’s reading scores were equal to the control group’s, and its math scores significantly surpassed those of the control group.
New technology is enabling us to push farther into the mysteries of brain function. It provides new information and answers to questions, but also generates additional questions that need answers. It does not seem as though the question as to why music exists has been answered yet, but the search is leading to fascinating new knowledge.
The following protocol should be followed in deciding when AIT should be repeated:
— Autism: every 6 months to 1 year until there is no further improvement in the behavior, then stop. This means that the person’s hearing problem has been corrected and he/she does not need more help in this area.
— Hearing loss: one series every 6 months to 1 year, until the audiogram seems stabilized, then stop, and verify annually.
— Other cases: check after 3 months, 6 months, 1 year, and repeat AIT only if the audiogram is not normal.
Never, never apply short booster sessions!
Deviations in the way my device is used, and the use of other devices that may be available but do not correspond to my method, may be harmful to the listeners, and consequently to the method itself and to the practitioners working correctly under my label.