Volume 8 Number 1, 2000
We would like to thank Rose-Marie Davis for her work during the past two years as Treasurer.
The Board of Directors also elected Dr. Margaret Creedon of Chicago to SAIT’s Professional Advisory Board. Dr. Creedon served on SAIT’s Board of Directors for several years.
The task of uploading the newsletter was much more difficult than expected. This project involved retyping some of the old newsletters because we had changed our desktop publisher software, and the old desktop publisher program was no longer compatible with the computer operating system. Additionally, the task of programming password protection was much more difficult than expected. But we did it! We hope you will enjoy this new service.
The updated version of ‘Summaries of Research on Auditory Integration Training,’ published by the Autism Research Institute (ARI) is nearly finished. This paper includes all known studies on the use of the Berard method of AIT. It will be sent by postal mail to SAIT members and posted on ARI’s (www.autism.com/ari) and SAIT’s (www.sait.org) web sites.
(1) Interpreting the audiogram and selecting filters. One of the goals of AIT is to decrease both hypersensitivity to specific frequencies and auditory distortions by reducing or eliminating auditory peaks present in one’s hearing. An auditory peak can be defined as hearing a specific frequency more keenly than its two adjacent frequencies.
(2) During the listening sessions, the person hears processed music selected to cover a wide frequency range. Frequencies which audiograms show to be hypersensitive may be dampened by using filters. The width and depth of the filters, as well as the total number of filters, vary depending on the AIT device. The filter frequencies on the AudioKinetron are: 750 Hz, 1 KHz, 1.5 KHz, 2 KHz, 3 KHz, 4 KHz, 6 KHz, and 8 KHz.
(3) Audiograms for both left and right ears need to be obtained in a quiet, but not sound-treated, room. Audiograms obtained in a sound-treated room or obtained through play audiometry should not be used when determining filter settings for AIT.
(4) No more than two filters should be used at one time. If more than two peaks are present in a person’s audiogram (considering both ears), refer to Table A below to determine which two peaks should be filtered. However, in some cases, no filters or only one filter should be activated, depending on which auditory peaks are present. Certain configurations in the audiogram are more important than others. The more important peaks should be filtered before other peaks are considered. Filters are determined by examining the severity of the peaks and the difference between the peak in question and its adjacent frequencies.
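The peak definition and difference comparisons described above can be sketched in code. The following is an illustrative sketch only, not part of the Berard protocol or of any AIT device's software; it assumes thresholds are recorded in dB at the standard test frequencies and that a more keenly heard frequency shows a lower threshold value.

```python
# Illustrative sketch (not part of the Berard protocol): find auditory
# peaks in an audiogram. Assumption: thresholds are given in dB, and a
# lower value means the frequency is heard more keenly.

FREQS_HZ = [250, 500, 750, 1000, 1500, 2000, 3000, 4000, 6000, 8000]

def find_peaks(thresholds):
    """Return {freq: (left_diff, right_diff)} for every frequency heard
    more keenly (lower threshold) than both adjacent frequencies."""
    peaks = {}
    for i in range(1, len(FREQS_HZ) - 1):
        left_diff = thresholds[i - 1] - thresholds[i]
        right_diff = thresholds[i + 1] - thresholds[i]
        if left_diff > 0 and right_diff > 0:
            peaks[FREQS_HZ[i]] = (left_diff, right_diff)
    # 8 KHz is the highest test frequency, so only the difference on its
    # left side (6 KHz) can be considered.
    last = len(FREQS_HZ) - 1
    left_diff = thresholds[last - 1] - thresholds[last]
    if left_diff > 0:
        peaks[FREQS_HZ[last]] = (left_diff, None)
    return peaks

# Example: a 10 dB peak at 2000 Hz relative to both neighbors.
thresholds = [20, 20, 20, 20, 20, 10, 20, 20, 20, 20]
print(find_peaks(thresholds))  # {2000: (10, 10)}
```

The two per-side differences are kept separate because the tables below specify a minimum difference for each side of the peak (e.g., "5-10 dB").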
(5a) ‘Primary peaks’ or ‘primary pairs’ refer to two specific peaks present in the same ear. All of the primary pairs include 8 KHz and one other peak. Table A below lists the different primary peaks associated with 8 KHz, the minimum difference from the two adjacent frequencies, and the priority of filtering.
Table A. Primary peaks (each paired with 8 KHz)

Primary Peak    Minimum Difference from Adjacent Frequencies    Priority
2.0 KHz         5-5 dB                                          1st
1.5 KHz         5-5 dB                                          2nd
3.0 KHz         5-10 dB                                         3rd
1.0 KHz         5-10 dB                                         4th
750 Hz          10-10 dB                                        5th
4.0 KHz         10-10 dB                                        6th
(5b) The ‘minimum difference from adjacent frequencies’ column refers to the minimum difference, in decibels, between the auditory peak and each of its two adjacent frequencies. For example, ‘5-10’ means a 5 dB or greater difference on one side of the peak and a 10 dB or greater difference on the other side, in either order.
(5c) For all of the primary pairs listed above, there should be a minimum of a 5 dB peak at 8 KHz.
(5d) The most severe case is when a person has the same peaks in both ears. Peaks present only in the left ear are considered second in severity, and peaks present in only the right ear are considered third in severity.
(5e) As stated above, the most severe peaks are filtered first. Thus, if an audiogram has three peaks (e.g., 750 Hz – 2 KHz – 8 KHz), then the filters should be set at 2 KHz and 8 KHz, since this pair has a higher priority than 750 Hz and 8 KHz.
(5f) Since Berard AIT was first developed, Dr. Berard’s recommendations have evolved through his own experience and that of his practitioners. Initially, Dr. Berard determined that a peak at 4000 Hz was not to be filtered, since the 6000 Hz frequency is highly variable (often referred to as the ‘wandering frequency’) and might give the appearance of a real peak at 4000 Hz and 8000 Hz. Recently, it has become evident that it may be important to filter 4000 Hz when there is a 10-10 dB or greater difference together with a peak at 8000 Hz (see Table A for priority order). It should also be filtered as a secondary peak if there is a 10-10 dB or greater difference and no other, more important secondary peaks require filtering (see Table B below).
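For illustration, the Table A priority lookup described above can be sketched as follows. This is a hypothetical sketch only, not clinical software: the frequencies, dB requirements, and priority order come from Table A, but the function names and data layout are assumptions, and the full decision also depends on which ear is involved and on the secondary-peak rules discussed below.

```python
# Illustrative sketch of the Table A priority lookup (not a clinical tool).
# Assumes candidate peaks, with their dB differences on each side, have
# already been identified from the audiogram.

# Required (side_a, side_b) dB differences for each primary peak,
# listed in Table A priority order (1st through 6th).
TABLE_A = [
    (2000, (5, 5)),
    (1500, (5, 5)),
    (3000, (5, 10)),
    (1000, (5, 10)),
    (750,  (10, 10)),
    (4000, (10, 10)),
]

def qualifies(diffs, required):
    """True if the peak's two side-differences meet the requirement in
    either order (e.g., '5-10' means 5+ on one side, 10+ on the other)."""
    (a, b), (ra, rb) = diffs, required
    return (a >= ra and b >= rb) or (a >= rb and b >= ra)

def select_primary_pair(peaks):
    """peaks: {freq_hz: (left_diff, right_diff)}. Returns the two filter
    frequencies (the highest-priority qualifying peak plus 8000 Hz),
    or None if no primary pair qualifies."""
    if peaks.get(8000, (0,))[0] < 5:   # need at least a 5 dB peak at 8 KHz
        return None
    for freq, required in TABLE_A:
        if freq in peaks and qualifies(peaks[freq], required):
            return (freq, 8000)
    return None

# Example from the text: peaks at 750 Hz, 2 KHz, and 8 KHz.
peaks = {750: (10, 10), 2000: (5, 5), 8000: (6, None)}
print(select_primary_pair(peaks))  # (2000, 8000)
```

In the example, 2 KHz wins over 750 Hz because it sits higher in the Table A priority order, matching the 750 Hz – 2 KHz – 8 KHz case described above.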
Examples: When to Filter Primary Pairs
(6) A difference equal to or greater than 5 dB on both sides of 2.0 KHz and a difference equal to or greater than 5 dB on the left side of 8 KHz.
(7) A difference equal to or greater than 5 dB on one side of 3.0 KHz, and a difference equal to or greater than 10 dB on the other side of 3.0 KHz, and at least a 5 dB difference on the left side of 8 KHz.
(8) A difference equal to or greater than 10 dB on both sides of 750 Hz, and a difference equal to or greater than 5 dB on the left side of 8 KHz.

(9) Secondary peaks. If primary peaks are not present in the audiogram, then single peaks, if present, should be filtered. These are referred to as ‘secondary peaks.’
Table B. Secondary peaks

Secondary Peak    Minimum Difference from Adjacent Frequencies
1.0 KHz           10-10 dB
1.5 KHz           10-10 dB
2.0 KHz           10-10 dB
3.0 KHz           15-15 dB
4.0 KHz           10-10 dB
(10) Plateaus. A ‘plateau’ refers to two peaks occurring next to each other. A plateau should be filtered if no other peaks are present and the individual is having difficulty pronouncing vowels and diphthongs.
Plateau            Minimum Difference from Adjacent Frequencies
1.0 and 1.5 KHz    5-10 dB
1.5 and 2.0 KHz    5-10 dB
(11) Individual peaks. A few additional rules apply when determining filters. Single peaks are usually filtered only at 1 KHz, 1.5 KHz, 2 KHz, 3 KHz, and 4 KHz; single peaks at 750 Hz or 6 KHz should not be filtered.
(12) Situations in which filters are not used. If the audiogram contains four or more peaks involving different frequencies in the right and/or left ears, then filters should not be used. The only exception is when there are peaks at 2 KHz and 8 KHz. In this case, filters should be activated.
(13) If a person’s audiogram indicates poor hearing acuity in one or both ears, filters should not be used for the first 5 hours of AIT. The aim in this situation is to treat the person’s hearing first, and later treat the person’s auditory peaks. If acuity continues to be poor after 5 hours of AIT, then filters should not be used for the remaining 5 hours.
(14) If a reliable audiogram cannot be obtained from a person, then filters should not be used.
(15) Mid-way through the listening sessions. After five hours of AIT, the listener should be given a second audiotest to determine whether the initial peaks have decreased and whether new peaks have emerged. This audiogram should not be conducted immediately after a listening session. Several hours should pass to allow the person’s hearing to rest before obtaining the audiogram.
(16) If new peaks emerge, then filters should be set for these peaks based on the order of importance described above. If the audiogram does not have any peaks in either ear (i.e., a relatively straight line), then filters should not be used for the remaining 5 hours of AIT. However, if the initial audiogram had peaks at 2 KHz and 8 KHz and these peaks are no longer present, these filters should still be kept active to ensure that the peaks do not return.
(17) An audiotest should also be given after ten hours of AIT to determine whether the peaks have been eliminated, indicated by a generally flattened pattern in the audiogram. Again, it is best to allow at least several hours of rest before conducting the final audiogram.
(18) If the listener has speech and language problems (e.g., mutism, echolalia, nonsense speech), the volume level for the left ear should be decreased after five hours of AIT. In this way, the sound in the right ear is louder and stimulates the left hemisphere of the brain. The left hemisphere is responsible for most of our speech and language, and sounds entering the right ear are sent directly to the left hemisphere. This difference in sound level between the left and right ears should be maintained for the remaining listening sessions.
Scientists at the Yale-affiliated John B. Pierce Laboratory are using synesthesia to study the complex ways that the six senses* complement each other. This research may help clarify how the brain and nervous system are organized.
Most people form associations between their senses: a certain sound may elicit the memory of a particular color. People with synesthesia, however, have a completely different experience. Some ‘see’ sounds as a wash of colors, similar to an after-image. For others, the colors are part of the sounds themselves. Carol Steen, an artist from Manhattan, says, “It’s like reading in Technicolor. Numbers and letters are colors.” When Steen injured her knee in a rock-climbing accident, the landscape turned orange. “It’s like wearing sunglasses.” Her level of pain was measured in color changes. “Blue is just ouch.”
Previous research has been based on the assumption that the senses are independent systems. Synesthesia would suggest that the senses may be combined in extremely complicated ways. When we look at a whistling teapot, sight helps identify sound; and the sound helps create the visual image. People with synesthesia may have an overdeveloped system for cross-sensory perception, or they may lack the neural equipment to separate the senses.
So far, studies have not revealed any apparent differences in the brains of those with synesthesia. However, plans are being made to image the brains of people with synesthesia. This may reveal which areas are switched on in response to sound, sight, or other stimuli. Perhaps these individuals will show greater activity, or activity over a greater area of the brain. At this point, researchers have concluded that the brain cannot use one sense without being influenced by the others. Understanding the workings of perception could shed light on neural organization and on the mechanisms involved in memory and learning.
*The sixth sense? Not ESP but balance.
In his new book, Shaw discusses the details of the original study. He also summarizes other research studies investigating the effects of music on cognitive development and cognition in general.
A CD-ROM is also included with the book. The CD-ROM contains a recording of Mozart’s ‘Sonata for Two Pianos’ in D major and a demonstration software program called the ‘Spatial-Temporal Animation Reasoning’ (S.T.A.R.™) program. S.T.A.R. is an animated, interactive program designed to teach spatial-temporal reasoning skills.
Specifically, the IM program involves tapping the hands and feet in sync with a rhythmic ‘metronome’ beat. There are numerous rhythmic exercises, which require tapping the hands and/or feet in different sequencing patterns in time with the beat.
IM was developed to improve timing skills in musicians and athletes, but Cassily found the program also to be beneficial for individuals with various disabilities. The Interactive Metronome company spent seven years researching, developing, and field-testing the IM, and is now training practitioners to use the program.
Sensors are placed on the person’s hand (in a glove) and on a floor pad, and both sensors are connected to a computer. The computer keeps track of how well the person claps his/her hands and/or taps his/her foot in relation to the rhythmic beat, and a tone provides feedback on how well he/she performs the exercises. Each session lasts one hour, and the entire program lasts about 15 hours on average.
A study on the efficacy of IM will appear shortly in the American Journal of Occupational Therapy. The study, which involved a double-blind experimental design, included 56 boys with AD/HD. They were evaluated prior to and after the completion of the IM program. Two comparison groups were utilized in the study: a control group that received no intervention and a placebo group that received another type of training on a computer. The researchers found significant improvements for those who participated in the IM program in comparison to the two control groups. The improvements included increased concentration, motor planning, reading, language processing, and better control of aggression.
For more information on the Interactive Metronome, call 1-877-99I-MPRO.
Dr. Steve Edelson wrote a response to their article in which he clarified some of the issues raised by Herbert and Sharp. SAIT’s web site, www.sait.org, contains a link to Herbert’s and Sharp’s article and to Dr. Edelson’s response.
Visual scanning is an important pre-reading task. The game, ‘Catch of the Day,’ presents three target numbers. The child is instructed to search a screen full of letters and find the three numbers. This develops the skills of looking for one particular item, ignoring irrelevant visual targets (selective attention), and sustaining focus on the task at hand (sustained attention). It can be thought of as ‘looking for specific trees in the forest.’
Animal pictures and sounds were introduced in the ‘Little Duck’ game; now, in ‘Word Practice,’ animal names are introduced. This receptive and expressive language development game begins at the single-word labeling level and progresses to the phrase, sentence, and command levels. A special Parentese button has been included to bridge the single-word and phrase levels. Children usually can understand speech at a level above their expressive level, and the Parentese level facilitates expressive language: it is easier for children to copy shortened phrases that do not contain articles and other non-substance words. For example, where a sentence says, “The puppy is hot,” the Parentese version states, “hot puppy.” When children develop expressive language, they go through a normal stage of deletion, putting only some of the words into the expressed utterance. This is temporary; more words are added to expressive utterances as comprehension and auditory memory increase. The levels in ‘Word Practice’ include the categories Verbs, Contrast Words (such as up/down), and Animals.
· ‘Show Me’ is an expressive pointing task that uses the vocabulary words learned in the ‘Word Practice’ game. A sample prompt would be, “Show me… the rooster.”
· ‘Match Ups’ is a memory building game for shapes, letters, animals, faces and lots of other common and uncommon items. The levels of difficulty are easily chosen and include 6 to 48 cards per screen.
· ‘Match Same to Same’ has seven levels of difficulty. This game prepares a child for visual and auditory discrimination tasks at the single word level.
· ‘Let’s Talk About It’ has 63 scenes with auditory and written prompts for conversation and expressive language. The picture is labeled, then a Who, What, When, Where, Why, or Which question follows. The fourth button uses a pronoun in the sentence, and the fifth button gives additional factual information about the main topic or describes the function of the item. This can be either an expressive or a receptive language task.
To learn more about this and other software programs, visit LocuTour’s web site at: www.locutour.com
Musical hallucinations are quite rare. It is thought that these hallucinations are brought on by damage to the dorsal pons. They can occur following a stroke, brain hemorrhage, encephalitis, tumor, or abscess. Other musical hallucinations reported in the literature include hearing Mozart’s music and the Glenn Miller band. In most cases, the individual hears music that is familiar to him/her.
Musical hallucinations may also occur in elderly people, especially those who suffer from chronic hearing loss. It is thought that these hallucinations are due to sensory (auditory) deprivation.
Pauline Allen gave a presentation on auditory integration training, and Billie Thompson gave a presentation on the Tomatis method. During an Awards Dinner, Drs. Guy Berard and Alfred Tomatis were honored for their pioneering work with sound.
Other presenters (and presentations) included: Jeffrey D. Thompson (Changing Consciousness with Sound), Sri Shyam Bhatnagar (Inner Tuning Through Luminous Sound), Kay Gardner (The Art and Science of Healing with Sound), Jean Bouchet (Holopsony/Modified Music for Emotional and Physical Health), David Siever (Applications of Audio-Visual Entrainment Technology for Treating ADD, LDs and for Enhancing Peak Performance), Wayne Perry (Healing with Music, Sound and Toning: An Introduction to Toning, Sound Therapy and Vibrational Healing), and Charles Butler (The Curative Powers of Low Frequency Sound).
Written by: Pauline Allen of The Sound Learning Centre, 12 The Rise, London, England. Ms. Allen is an AIT practitioner and an authorized trainer of the Berard AIT method. Her email address is: firstname.lastname@example.org
It is not clear why these individuals have such high levels of mercury, but many clinicians and parents suspect the mercury came from vaccinations (mercury is used as a preservative in many vaccines). It has also been suggested that these high levels may be due to ingestion of mercury (e.g., the mother ate mercury-tainted fish during her pregnancy or while nursing). In their review of the literature on mercury and autism, Sallie Bernard and her colleagues reported that mercury poisoning is associated with sound sensitivity, auditory disturbances, and difficulty differentiating voices in a crowd.
There is now much discussion in the autism field regarding the possible association between autism and mercury as well as the best ways to lower mercury levels in these individuals.
You can read Bernard’s review of the literature on mercury and autism at the Autism Research Institute’s web site at www.autism.com/ari