Volume 8 Number 1, 2000

News and Notes
SAIT’s Board of Directors elected officers for the 2000-2001 term. Sally Brockett was elected as President, Dr. Steve Edelson is now Vice-President, Cherri Saltzman will serve as Secretary and Beverly Hall as Treasurer.

We would like to thank Rose-Marie Davis for her work during the past two years as Treasurer.

The Board of Directors also elected Dr. Margaret Creedon of Chicago to SAIT’s Professional Advisory Board. Dr. Creedon served on SAIT’s Board of Directors for several years.

***
Bill Clark, developer of the BGC Audio Tone Enhancer/Trainer, recently analyzed the output of the new AIT device, the ‘Earducator.’ The results indicated that its output was very similar to that of the AudioKinetron. Additionally, Bill Clark felt that the Earducator’s sound spectrum and filters were more accurate than the AudioKinetron’s.

***
Dr. Jeffrey Lewine of the University of Utah Medical School is currently investigating the efficacy of the ‘Digital Auditory Aerobics’ device. Dr. Lewine has already studied the effects of the AudioKinetron and found an improvement in the electrical activity of the brain using magnetoencephalography (MEG) and EEG recordings.

*****
Back Issues of The Sound Connection on SAIT’s Web Site
Previous issues of The Sound Connection, starting with our very first issue published in 1993, are now posted on SAIT’s web site. This service is intended only for SAIT members. The newsletter section is password protected. Please do not share your password with people who are not SAIT members. Your personal password is located in the upper right-hand corner of this newsletter’s mailing label.

The task of uploading the newsletters was much more difficult than expected. The project involved retyping some of the old newsletters because we had changed our desktop publishing software, and the old program was no longer compatible with our computer’s operating system. Programming the password protection also proved harder than anticipated. But we did it! We hope you will enjoy this new service.

The updated version of ‘Summaries of Research on Auditory Integration Training,’ published by the Autism Research Institute (ARI), is nearly finished. This paper includes all known studies on the use of the Berard method of AIT. It will be sent by postal mail to SAIT members and posted on ARI’s (www.autism.com/ari) and SAIT’s (www.sait.org) web sites.

*****
Filtering Auditory Peaks Using the Berard Method of AIT
Guy Berard, Stephen M. Edelson and Sally Brockett
Based on feedback received by SAIT, it is evident that the filtering procedure used in the Berard method of AIT is not clearly understood by many practitioners. The following information is presented to clarify the recommended filtering protocol.

(1) Interpreting the audiogram and selecting filters. One of the goals of AIT is to decrease both hypersensitivity to specific frequencies and auditory distortions by reducing or eliminating auditory peaks present in one’s hearing. An auditory peak can be defined as hearing a specific frequency more keenly than its two adjacent frequencies.

(2) During the listening sessions, the person hears processed music selected to cover a wide frequency range. Frequencies that the audiogram shows to be hypersensitive may be dampened using filters. The width and depth of the filters, as well as the total number of filters, vary depending on the AIT device. The filters on the AudioKinetron are: 750 Hz, 1 KHz, 1.5 KHz, 2 KHz, 3 KHz, 4 KHz, 6 KHz, and 8 KHz.
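The peak definition in point (1) can be sketched in code. The following is an illustrative sketch only, not part of the Berard protocol: the test-frequency list and the 5 dB default are assumptions, and `find_peaks` is a hypothetical helper name. On an audiogram, a lower threshold (in dB HL) means the frequency is heard more keenly.

```python
# Illustrative sketch (not part of the Berard protocol): detect auditory
# peaks in one ear's audiogram. A peak is a frequency heard more keenly
# than BOTH adjacent test frequencies; lower dB HL = keener hearing.
# The frequency list and 5 dB default are assumptions for illustration.

TEST_FREQUENCIES = [250, 500, 750, 1000, 1500, 2000, 3000, 4000, 6000, 8000]

def find_peaks(audiogram, min_diff_db=5):
    """Return frequencies whose threshold is at least `min_diff_db` dB
    lower (better) than both neighbors. 8 KHz, at the top of the tested
    range, is compared only against 6 KHz."""
    peaks = []
    for i in range(1, len(TEST_FREQUENCIES) - 1):
        lo, f, hi = TEST_FREQUENCIES[i - 1 : i + 2]
        if (audiogram[lo] - audiogram[f] >= min_diff_db and
                audiogram[hi] - audiogram[f] >= min_diff_db):
            peaks.append(f)
    # Endpoint: an 8 KHz peak needs only a difference on its 6 KHz side.
    if audiogram[6000] - audiogram[8000] >= min_diff_db:
        peaks.append(8000)
    return peaks
```

For example, an audiogram with a 5 dB threshold at 2 KHz between 15 dB neighbors, and a 0 dB threshold at 8 KHz next to 10 dB at 6 KHz, would yield peaks at 2 KHz and 8 KHz.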

(3) Audiograms for both left and right ears need to be obtained in a quiet, but not sound-treated, room. Audiograms obtained in a sound-treated room or obtained through play audiometry should not be used when determining filter settings for AIT.

(4) No more than two filters should be used at one time. If more than two peaks are present in a person’s audiogram (considering both ears), refer to Table A below to determine which two peaks should be filtered. However, in some cases, no filters or only one filter should be activated, depending on which auditory peaks are present. Certain configurations in the audiogram are more important than others. The more important peaks should be filtered before other peaks are considered. Filters are determined by examining the severity of the peaks and the difference between the peak in question and its adjacent frequencies.

(5a) ‘Primary peaks’ or ‘primary pairs’ refer to two specific peaks present in the same ear. All of the primary pairs include 8 KHz plus one other peak. The chart below lists the peaks that pair with 8 KHz, the minimum difference between each peak and its two adjacent frequencies, and the priority of filtering.

Table A: Filter Priorities for Peaks Associated with 8 KHz

Primary Peak        Minimum Difference from    Priority
(paired with 8 KHz) Adjacent Frequencies
-------------------------------------------------------
2.0 KHz             5-5 dB                     1st
1.5 KHz             5-5 dB                     2nd
3.0 KHz             5-10 dB                    3rd
1.0 KHz             5-10 dB                    4th
750 Hz              10-10 dB                   5th
4.0 KHz             10-10 dB                   6th

(5b) The ‘minimum difference from adjacent frequencies’ column refers to the minimum difference, in decibels, between the auditory peak and each of its two adjacent frequencies. ‘5-10’ means a 5 dB or greater difference on one side of the peak and a 10 dB or greater difference on the other side; either side may be the 5 dB side.

(5c) For all of the primary pairs listed above, there should be a minimum of a 5 dB peak at 8 KHz.

(5d) The most severe case is when a person has the same peaks in both ears. Peaks present only in the left ear are considered second in severity, and peaks present in only the right ear are considered third in severity.

(5e) As stated above, the most severe peaks are filtered first. Thus, if an audiogram has three peaks (e.g., at 750 Hz, 2 KHz, and 8 KHz), then filters should be set at 2 KHz and 8 KHz, since the 2 KHz/8 KHz pair has a higher priority than the 750 Hz/8 KHz pair.

(5f) Since Berard AIT was first developed, Dr. Berard’s recommendations have evolved through his own experience and that of his practitioners. Initially, Dr. Berard determined that a peak at 4000 Hz was not to be filtered, because the 6000 Hz frequency is highly variable (it is often called the ‘wandering frequency’) and can create the appearance of false peaks at 4000 Hz and 8000 Hz. Recently, it has become evident that it may be important to filter 4000 Hz when there is a difference of 10-10 dB or more together with a peak at 8000 Hz (see Table A for the priority order). It should be filtered as a secondary peak if there is a 10-10 dB or greater difference and no more important secondary peak requires filtering (see Table B below).
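The Table A priority order amounts to a simple lookup. The following is an illustrative sketch under stated assumptions: the names are hypothetical, and the function checks only the priority ranking; the per-pair minimum dB differences in Table A must still be verified against the audiogram separately.

```python
# Illustrative sketch of the Table A priority lookup: among the peaks
# detected in one ear, choose the partner of 8 KHz with the best
# (lowest-numbered) priority. Names are hypothetical; the per-pair
# minimum dB differences in Table A must be checked separately.

PRIMARY_PAIR_PRIORITY = {  # partner frequency (Hz) -> Table A rank
    2000: 1, 1500: 2, 3000: 3, 1000: 4, 750: 5, 4000: 6,
}

def best_primary_pair(peaks):
    """Return the (partner, 8000) pair to filter, or None if no primary
    pair is present. A primary pair requires 8 KHz itself to be a peak
    (of at least 5 dB)."""
    if 8000 not in peaks:
        return None
    partners = [f for f in peaks if f in PRIMARY_PAIR_PRIORITY]
    if not partners:
        return None
    return (min(partners, key=PRIMARY_PAIR_PRIORITY.get), 8000)
```

With the three peaks from the example in (5e) (750 Hz, 2 KHz, and 8 KHz), the lookup selects the 2 KHz/8 KHz pair, matching the priority order in Table A.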

Examples of When to Filter Primary Pairs

(6) A difference equal to or greater than 5 dB on both sides of 2.0 KHz and a difference equal to or greater than 5 dB on the left side of 8 KHz.

(7) A difference equal to or greater than 5 dB on one side of 3.0 KHz, and a difference equal to or greater than 10 dB on the other side of 3.0 KHz, and at least a 5 dB difference on the left side of 8 KHz.

(8) A difference equal to or greater than 10 dB on both sides of 750 Hz, and a difference equal to or greater than 5 dB on the left side of 8 KHz.

(9) Secondary peaks. If primary peaks are not present in the audiogram, then single peaks, if present, should be filtered. These are referred to as ‘Secondary Peaks.’

Table B: Secondary Peaks

Secondary Peak      Minimum Difference from
                    Adjacent Frequencies
-----------------------------------------
1.0 KHz             10-10 dB
1.5 KHz             10-10 dB
2.0 KHz             10-10 dB
3.0 KHz             15-15 dB
4.0 KHz             10-10 dB

(10) Plateaus. A ‘plateau’ refers to two peaks occurring at adjacent frequencies. A plateau should be filtered if no other peaks are present and the individual is having difficulty pronouncing vowels and diphthongs.

Table C: Plateaus

Plateau             Minimum Difference from
                    Adjacent Frequencies
-----------------------------------------
1.0 and 1.5 KHz     5-10 dB
1.5 and 2.0 KHz     5-10 dB

(11) Individual peaks. There are a few additional rules to follow when determining filters. Single peaks are not usually filtered, with the exception of peaks at 1 KHz, 1.5 KHz, 2 KHz, 3 KHz, and 4 KHz (see Table B). Single peaks at 750 Hz or 6 KHz should not be filtered.

(12) Situations in which filters are not used. If the audiogram contains four or more peaks involving different frequencies in the right and/or left ears, then filters should not be used. The only exception is when there are peaks at 2 KHz and 8 KHz. In this case, filters should be activated.
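Rule (12) can be sketched as a guard clause. The following is an illustrative sketch only: the function name is hypothetical, a return value of None means the rule does not apply (so the other rules above should be consulted), and it assumes the 2 KHz/8 KHz exception refers to both peaks occurring in the same ear.

```python
# Illustrative sketch of rule (12): four or more distinct peak
# frequencies across both ears means no filters, unless one ear has the
# 2 KHz / 8 KHz pair, which is still filtered. None means the rule does
# not apply. Hypothetical name; same-ear pairing is an assumption.

def filters_for_many_peaks(left_peaks, right_peaks):
    """Apply rule (12) to the peak lists (Hz) for the two ears."""
    distinct = set(left_peaks) | set(right_peaks)
    if len(distinct) < 4:
        return None  # rule (12) does not apply; use the other rules
    for ear in (left_peaks, right_peaks):
        if {2000, 8000} <= set(ear):
            return [2000, 8000]  # the one exception: still filtered
    return []  # four or more peaks, no 2 KHz/8 KHz pair: no filters
```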

(13) If a person’s audiogram indicates poor hearing acuity in one or both ears, filters should not be used for the first 5 hours of AIT. The aim in this situation is to treat the person’s hearing first, and later treat the person’s auditory peaks. If acuity continues to be poor after 5 hours of AIT, then filters should not be used for the remaining 5 hours.

(14) If a reliable audiogram cannot be obtained from a person, then filters should not be used.

(15) Mid-way through the listening sessions. After five hours of AIT, the listener should be given a second audiotest to determine whether the initial peaks have decreased and whether new peaks have emerged. This audiogram should not be conducted immediately after a listening session. Several hours should pass to allow the person’s hearing to rest before obtaining the audiogram.

(16) If new peaks emerge, then filters should be set for these peaks based on the order of importance described above. If the audiogram does not have any peaks in either ear (i.e., a relatively straight line), then filters should not be used for the remaining 5 hours of AIT. However, if the initial audiogram had peaks at 2 KHz and 8 KHz and these peaks are no longer present, those filters should remain active to ensure that the peaks do not return.

(17) An audiotest should also be given after ten hours of AIT to determine whether the peaks have been eliminated, indicated by a generally flattened pattern in the audiogram. Again, it is best to allow at least several hours of rest before conducting the final audiogram.

(18) If the listener has speech and language problems (e.g., mutism, echolalia, nonsense speech), the volume level for the left ear should be decreased after five hours of AIT. In this way, the volume in the right ear is louder and stimulates the left hemisphere of the brain. The left hemisphere is responsible for most of our speech and language, and sounds entering the right ear are sent directly to the left hemisphere. This difference in sound level between the left and right ears should be maintained for the remaining listening sessions.

***
Editors’ Note: Most published research supporting the efficacy of the Berard AIT method has relied on the procedure outlined above. The use of other filtering methods is likely to be less effective. (It is also possible that a person improved after receiving another form of AIT, but that the results would have been better had he/she received the Berard AIT method.) Some AIT practitioners claim to offer the Berard method of AIT but do not truly follow his procedure; in many cases, these individuals were taught the “Berard AIT” method by someone other than Dr. Berard or one of the people he authorized to teach his method. We encourage our readers to share this article with other AIT practitioners, whether or not they are SAIT members. In this way, more families can be assured that their children are receiving the real Berard method of AIT. This article will also be posted on SAIT’s web site (www.sait.org).

*****
The Color of Sound, The Feel of Words
We often use color words to describe our feelings (“I feel blue,” “green with envy”) or words associated with sound to describe clothing (“That’s a loud tie”); and we are all familiar with words that associate flavor with touch, such as sharp cheese. These are descriptive phrases we learn and use without giving them much thought. However, a small group of people actually do hear color, watch music and see pain. Voices may have hues, and the alphabet may be a rainbow. These individuals have a rare perceptual trait called synesthesia, which means “to perceive together.”

Scientists at the Yale-affiliated John B. Pierce Laboratory are using synesthesia to study the complex ways that the six senses* complement each other. This research may help clarify how the brain and nervous system are organized.

Most people form associations between the senses in their minds: a certain sound may elicit the memory of a particular color. However, people with synesthesia have a completely different experience. Some ‘see’ sounds as a wash of colors, similar to an after-image. For others, the colors are part of the sounds themselves. Carol Steen, an artist from Manhattan, says, “It’s like reading in Technicolor. Numbers and letters are colors.” When Carol injured her knee in a rock-climbing accident, the landscape turned orange. “It’s like wearing sunglasses.” The level of pain was measured in color changes. “Blue is just ouch.”

Previous research has been based on the assumption that the senses are independent systems. Synesthesia would suggest that the senses may be combined in extremely complicated ways. When we look at a whistling teapot, sight helps identify sound; and the sound helps create the visual image. People with synesthesia may have an overdeveloped system for cross-sensory perception, or they may lack the neural equipment to separate the senses.

So far, studies have not indicated any apparent differences in the brains of those with synesthesia. However, plans are being made to image the brains of people with synesthesia. This may reveal what areas are switched on in response to sound, sight or other stimuli. Perhaps these individuals will show greater activity, or activity over a greater area of the brain. At this point, researchers have concluded that the brain cannot use one sense without being influenced by the others. Understanding the workings of perception could shed light on neural organization and on the mechanisms involved in memory and learning.

*The sixth sense? Not ESP but balance.

*****
New Book on the ‘Mozart Effect’
A new, 374-page book entitled Keeping Mozart in Mind was recently published on the ‘Mozart Effect’ (Academic Press, 2000, ISBN: 0126392900). The book was written by Gordon Shaw, Ph.D., who co-authored the first study published on the ‘Mozart Effect.’ In his study, spatial-temporal reasoning scores increased significantly in college students after they heard Mozart’s ‘Sonata for Two Pianos in D Major.’ The ‘Mozart Effect’ lasted 10 to 15 minutes following the music. When the study was published in Nature in 1993, Shaw and his colleagues received world-wide attention for their intriguing findings.

In his new book, Shaw discusses the details of the original study. He also summarizes other research studies investigating the effects of music on cognitive development and cognition in general.

A CD-ROM is also included with the book. The CD-ROM contains a recording of Mozart’s ‘Sonata for Two Pianos in D Major’ and a demonstration software program called the ‘Spatial-Temporal Animation Reasoning’ (S.T.A.R.™) program. S.T.A.R. is an animated, interactive program designed to teach spatial-temporal reasoning skills.

*****
The Interactive Metronome
The Interactive Metronome (IM) was developed in 1992 by James F. Cassily and is distributed by a company of the same name. This program involves training a person to plan and to sequence his/her natural timing, which, according to the manufacturers, influences general movement, balance, symmetry, coordination, and concentration.

Specifically, the IM program involves tapping the hand and foot in synch to a rhythmic, ‘metronome’ beat. There are numerous rhythmic exercises, which require tapping the hands and/or feet using different sequencing patterns in time with a beat.

IM was developed to improve timing skills in musicians and athletes, but Cassily found the program also to be beneficial for individuals with various disabilities. Interactive Metronome spent seven years researching, developing, and field testing the IM. They are now training practitioners to use this program.

Sensors are placed on the person’s hand, using a glove, and on a floor pad, and both sensors are connected to a computer. The computer keeps track of how well the person claps his/her hands and/or taps his/her foot in relation to the rhythmic beat, and the pitch of a feedback sound tells the individual how well he/she is performing the exercises. Each session lasts one hour, and the entire program lasts about 15 hours, on average.

A study on the efficacy of IM will appear shortly in the American Journal of Occupational Therapy. The study, which involved a double-blind experimental design, included 56 boys with AD/HD. They were evaluated prior to and after the completion of the IM program. Two comparison groups were used in the study: a control group that received no intervention and a placebo group that received another type of computer-based training. The researchers found significant improvements for those who participated in the IM program in comparison to the two control groups. The improvements included better concentration, motor planning, reading, language processing, and control of aggression.

For more information on the Interactive Metronome, call 1-877-99 I-MPRO

*****
Misinformation About AIT Published on Internet
Dr. James D. Herbert and Ian Sharp of the American Council on Science and Health recently published an article on www.drkoop.com titled ‘Pseudoscientific Treatments for Autism, Part 1.’ In their paper, Herbert and Sharp presented the reader with a view of auditory integration training (AIT) based on some misinformation.

Dr. Steve Edelson wrote a response to their article in which he clarified some of the issues raised by Herbert and Sharp. SAIT’s web site, www.sait.org, contains links to Herbert and Sharp’s article and to Dr. Edelson’s response.

*****
Look! Listen! Learn Language! Version 2: New Software from LocuTour
Look! Listen! Learn Language! Version 2 has newly revised games designed specifically for children with autism and language delays. The sound-to-picture identification game, ‘Little Duck Says Quack, Quack,’ has four levels. The target picture is presented at the top of the screen with the matching picture and one foil picture at the bottom. This forced-choice format encourages visual and auditory discrimination for animal pictures and sounds. The discrimination levels increase from one target/two choices to one target/eight choices.

Visual scanning is an important pre-reading task. The game, ‘Catch of the Day,’ presents three target numbers. The child is instructed to search a screen full of letters and find the three numbers. This develops the skills of looking for one particular item, ignoring irrelevant visual targets (selective attention), and sustaining focus on the task at hand (sustained attention). It can be thought of as ‘looking for specific trees in the forest.’

Animal pictures and sounds were introduced in the ‘Little Duck’ game; now, in ‘Word Practice,’ animal names are introduced. This receptive and expressive language development game begins at the single-word labeling level and progresses to the phrase, sentence and command levels. A special Parentese button has been included to bridge the single-word and phrase levels. Children usually can understand speech at a level above their expressive level, and the Parentese level facilitates expressive language: it is easier for children to copy shortened phrases that do not contain articles and other non-substance words. For example, where a sentence says, “The puppy is hot,” the Parentese version would state, “hot puppy.” When children develop expressive language, they go through a normal stage of deletion and put only some words in the expressed utterance. This is temporary; more words will be added to expressive utterances as comprehension and auditory memory increase. The levels in ‘Word Practice’ include three categories: verbs, contrast words (such as up/down), and animals.

· ‘Show Me’ is an expressive pointing task that uses the vocabulary words learned in the ‘Word Practice’ game. A sample prompt would be, “Show me… the rooster.”

· ‘Match Ups’ is a memory building game for shapes, letters, animals, faces and lots of other common and uncommon items. The levels of difficulty are easily chosen and include 6 to 48 cards per screen.

· ‘Match Same to Same’ has seven levels of difficulty. This game prepares a child for visual and auditory discrimination tasks at the single word level.

· ‘Let’s Talk About It’ has 63 scenes with auditory and written prompts for conversation and expressive language. The picture is labeled with a word, and then a Who, What, When, Where, Why or Which question follows. The fourth button uses a pronoun in the sentence, and the fifth button gives additional factual information about the main topic or describes the function of the item. This can be either an expressive or a receptive language task.

To learn more about this and other software programs, visit LocuTour’s web site at: www.locutour.com

*****
Musical Hallucinations: Patients Hear Music in their Minds
A recent article published in the August 8, 2000 issue of Neurology, a publication of the American Academy of Neurology, described a 57-year-old man who experienced auditory hallucinations. An MRI indicated that he was suffering from a lesion in a part of the brainstem called the dorsal pons. While experiencing these hallucinations, he heard folk songs sung by choruses of men and children.

Musical hallucinations are quite rare. It is thought that these hallucinations are brought on by damage to the dorsal pons. They can occur following a stroke, brain hemorrhage, encephalitis, tumor, or abscess. Other musical hallucinations reported in the literature include hearing Mozart’s music and the Glenn Miller band. In most cases, the individual hears music that is familiar to him/her.

Musical hallucinations may also occur in elderly people, especially those who suffer from chronic hearing loss. It is thought that these hallucinations are due to sensory (auditory) deprivation.

*****
Drs. Guy Berard and Alfred Tomatis Are Honored
A conference entitled ‘Light and Sound’ was held in June, 2000 at the Dominican University in Chicago and was sponsored by the Spectrum International Institute For Wellness, Education and Research and The Chicago Medical Society. The focus of the conference was on the use of light and sound as a means to improve the well-being of individuals.

Pauline Allen gave a presentation on auditory integration training, and Billie Thompson gave a presentation on the Tomatis method. During an Awards Dinner, Drs. Guy Berard and Alfred Tomatis were honored for their pioneering work with sound.

Other presenters (and presentations) included: Jeffrey D. Thompson (Changing Consciousness with Sound), Sri Shyam Bhatnagar (Inner Tuning Through Luminous Sound), Kay Gardner (The Art and Science of Healing with Sound), Jean Bouchet (Holopsony/Modified Music for Emotional and Physical Health), David Siever (Applications of Audio-Visual Entrainment Technology for Treating ADD, LDs and for Enhancing Peak Performance), Wayne Perry (Healing with Music, Sound and Toning: An Introduction to Toning, Sound Therapy and Vibrational Healing), and Charles Butler (The Curative Powers of Low Frequency Sound).

Written by: Pauline Allen of The Sound Learning Centre, 12 The Rise, London, England. Ms. Allen is an AIT practitioner and an authorized trainer of the Berard AIT method. Her email address is: pallen@thesoundlearningcentre.co.uk

*****
Mercury Poisoning: A Possible Reason for Auditory Problems?
There is much interest in the possible role mercury may play in the development of some cases of autism and other developmental disabilities. Physicians are finding very high mercury levels in many of their autistic clients.

It is not clear why these individuals have such high levels of mercury, but many clinicians and parents suspect the mercury came from vaccination shots (mercury is used as a preservative in many vaccines). It has also been suggested that these high levels of mercury may be due to ingestion of mercury (e.g., the mother ate mercury-tainted fish during her pregnancy or while nursing). In their review of the literature on mercury and autism, Sallie Bernard and her colleagues reported that mercury poisoning is associated with sound sensitivity, auditory disturbances, and difficulties differentiating voices in a crowd.

There is now much discussion in the autism field regarding the possible association between autism and mercury as well as the best ways to lower mercury levels in these individuals.

You can read Bernard’s review of the literature on mercury and autism at the Autism Research Institute’s web site at www.autism.com/ari