National Institutes of Health (US); Biological Sciences Curriculum Study. NIH Curriculum Supplement Series [Internet]. Bethesda (MD): National Institutes of Health (US); 2007.
Introduction
Sound offers us a powerful means of communication. Our sense of hearing enables us to experience the world around us through sound. Because our sense of hearing allows us to gather, process, and interpret sounds continuously and without conscious effort, we may take this special sense of communication for granted. But, did you know that
- Human communication is multisensory, involving visual, tactile, and sound cues?
- The range of human hearing, from just audible to painful, is over 100-trillion-fold?
- Tiny specialized cells in the inner ear, known as hair cells, are responsible for converting the vibrational waves of sound into electrical signals that can be interpreted by the brain?
- Tinnitus, commonly known as "ringing in the ears," is actually a problem that originates in the brain?
- A recent study showed that men who hunt experience an increased risk of high-pitched hearing loss of 7 percent for every five years that they hunt? Nearly all (95 percent) of these same hunters report that they do not use hearing protection while hunting.11
Contemporary hearing research is guided by lessons learned from sensory research, namely that specialized nerve cells respond to different forms of energy—mechanical, chemical, or electromagnetic—and convert this energy into electrochemical impulses that can be processed by the brain. The brain then works as the central processor of sensory impulses. It perceives and interprets them using a "computational" approach that involves several regions of the brain interacting all at once. This notion is different from the long-held view that the brain processes information one step at a time in a single brain region. Over the past decade, scientists have begun to understand the intricate mechanisms that enable the ear to convert the mechanical vibrations of sound to electrical energy, thereby allowing the brain to process and interpret these signals.
Scientific understanding of the role of genes in hearing is also increasing at an impressive rate. The first gene associated with hearing was isolated in 1993. By the end of 2000, more than 60 genes related to hearing had been identified.15 In addition, scientists have pinpointed over 100 chromosomal regions believed to harbor genes affecting the hearing pathway. Many of these genes were first isolated in the mouse, and the corresponding human genes were then identified. Completion of the Mouse and Human Genome Projects is helping scientists isolate these genes.
The rapid growth in our understanding is of more than academic interest. In a practical sense, sharing this information with young people can enable them to adopt a lifestyle that promotes the long-term health of their sense of hearing. With this in mind, this supplement will address several key issues, including
- What is the nature of sound?
- What mechanism allows us to process sounds with great precision—from the softest whisper to the roar of a jet engine, from a high-pitched whistle to a low rumble?
- What are the roles of hearing, processing, and speaking in human communication?
- What happens when the hearing mechanism is altered or damaged? How does sound processing change?
- What can be done to prevent or accommodate damage to our sense of hearing?
Misconceptions Related to Sensory Perception and Hearing
In presenting the material in this supplement, you may need to address students' incomplete understanding of hearing. Some likely student misconceptions about hearing follow:
Misconception 1: Our senses provide a complete and accurate picture of the world
Younger students are often unaware of the limitations of their senses. They may believe that what they perceive is all that there is. Most students would be quite surprised to learn that their ears produce measurable sounds of their own that are normally inaudible to the brain. Also, they might not be aware that some animals use sound frequencies that are out of our hearing range. For example, whales communicate using low-frequency sounds that are inaudible to humans and can carry across vast expanses of ocean. This module will make students aware that our senses react to only a limited range of the energy inputs available. Much sensory information exists beyond our ability to experience it. Our level of awareness is influenced by our individual abilities, our genes, our environment, and our previous experiences, as well as the interactions among them. Learning about the limitations of our senses can help students interpret their environment more accurately.
Misconception 2: Our senses function independently of one another
Students may believe that because each sense is specialized for a particular type of sensation, senses function by themselves and do not interact with one another or with the rest of the body. Research, however, reveals many interactions between the senses.7 During this module, students will learn about the sensory integration that takes place in the brain.
Misconception 3: As we age, our brain networks become fixed and cannot be changed
Scientific research has shown that the brain never stops changing and adjusting to its environment.1 This ability is important for acquiring new knowledge and for compensating for deficiencies that result from age or injury. The ability of the brain to "reprogram" itself is called plasticity. Special brain exercises, or training techniques, exploit brain plasticity to help people cope with specific language and reading problems.
Misconception 4: Our senses do not really require any preventive maintenance
Students may believe that because our senses function without any conscious input, always being "on," their function and health are not influenced by what we do. The module will make students aware that the overall health of their senses, like all other bodily systems, is affected by the lifelong demands placed on them. Students will learn about biological mechanisms in which potentially harmful input can lead to both short-term and long-term hearing impairments, and they will learn about simple, effective ways to minimize harmful stimuli.
Major Concepts Related to Hearing and Communication
Research into hearing and communication is providing a scientific foundation for understanding the anatomy, physiology, and genetics of the hearing pathway, as well as the social and cultural aspects of human communication. The following discussion is designed to introduce you to some major concepts about hearing and communication.
Communication is multisensory
Although some people might define communication as an interaction between two or more living creatures, it involves much more than this. For example, we are constantly receiving information from, and changing our relationship with, our environment. This communication is received through our senses of smell, taste, touch, vision, and hearing. Communication with others makes use of vision (making eye contact or assessing body language) and sound (using speech or other sounds, such as laughing and crying). When a group of people shares a need or desire to communicate, language is born. The most common human language is the language of words. Words may be communicated in various ways. Although they are usually spoken, they also may be written, fingerspelled, or expressed through sign language.
Language acquisition: imprinting and critical periods
Since the time of Plato, there has been debate over the nature of language. Some believe that language is inborn and purposeful, while others believe it to be artificial and arbitrary. Some consider language to be an evolutionary product, while others do not. It appears that words are not "built into" the brain, because language is a relatively recent evolutionary development and also because languages differ substantially from one another. Language and communication are made possible by specialized structures. We have evolved a sophisticated apparatus for both speech and hearing. Our brains have specific regions devoted to speech, hearing, and language functions. Still, the mechanisms by which children acquire language are only partially understood.
There are two concepts important to the acquisition of language. One is imprinting, which refers to the ability of some animals to learn rapidly at a very early age and during a well-defined period in their development. Imprinting generally refers to the ability of offspring to acquire the behaviors characteristic of their parents. This process, once it occurs, is not reversible. A famous example of imprinting was described by Nobel laureate Konrad Lorenz in the 1930s.5 Lorenz observed that newly hatched goslings would follow him, rather than the mother goose, if they saw him first. The period of imprintability may be very short, just hours for some species.
A second concept, related to imprinting, is critical periods. A nonhuman example of a critical period is the limited time frame within which a male bird must acquire his song.8 For instance, a male white-crowned sparrow usually begins singing its full song between 100 and 200 days of age. Proper song acquisition is needed for mating and for marking territory. However, to learn his song, the young bird must be exposed to an adult bird's song consistently and frequently between one week and two months after hatching (its critical period for song acquisition). If the male sparrow hears the song only before or after its critical period, then he will not be able to learn the song correctly.
These examples demonstrate the brain's flexibility—its ability to be changed or to adapt to its environment. They demonstrate that an animal may alter its behavior or acquire a behavior that helps improve its chances for survival. Do animals have anything to teach us about our own acquisition of language? The answer seems to be yes. Consider the following: Scientists have reported that seal pups learn to recognize their mothers' voices within a few days of being born.2 This is important because the mother seals must leave their pups after roughly a week to go hunting. Upon returning, mother seals vocalize and wait for their pups to respond. By playing recordings of various females, the investigators determined that for the first few hours after birth, seal pups will respond to the voice of any adult female. However, after two to five days, the pups learn to respond only to their mother's voice.
Very soon after birth, human infants learn to distinguish speech sounds from other types of sound. Within the next month or two, the infant learns to distinguish between different speech sounds.4, 14 An 18-month-old toddler can recognize and use the sounds (called phonemes) of his or her language and can construct two-word phrases. A 3½-year-old child can construct nearly all of the possible sentence types. From this point on, vocabulary and language continue to expand and be refined.12
The parameters of language development and its developmental phases are under rigorous study. For ethical reasons, investigators cannot explore such questions through experiments that would intentionally deprive infants of language. Occasionally, however, unusual circumstances provide a glimpse of how humans do or do not acquire spoken language. There are examples of individuals who had normal hearing but were not exposed to spoken language from birth; these individuals never developed normal language or speech. Although such examples have been put forth as support for the critical-period hypothesis of language development, there are confounding issues. It is possible that these individuals had some type of brain abnormality that was responsible for their failure to acquire spoken language. Nonetheless, it is unquestioned that sound must stimulate the hearing apparatus and be processed by the brain for spoken language to be understood.
Communication is truly a multisensory experience. For most individuals, the pathway from creating sound (speaking) to receiving, processing, and interpreting sound (hearing) is critical. This module focuses on the key issues of how sound is processed so that communication is achieved.
Sound has a physical basis
Sound represents vibrational energy. It is created when a medium such as air, wood, metal, or a person's vocal cords vibrates. Sound is carried as energy transferred from one molecule to the next in the vibrating medium. To understand sound, consider the analogy of a stone dropped into a body of water. This action produces ripples that spread out in all directions from the point where the stone contacted the water. The ripples become weaker (decrease in intensity) as they get farther from the origin. So it is with sound: the vibration proceeds through the medium in waves. However, unlike ripples on water, sound waves move away from their point of origin in three dimensions, not just two.
Sound waves possess specific characteristics. Frequency represents the number of complete wave cycles per unit of time, usually one second (see Figure 5). Frequency is expressed in hertz (Hz), which means cycles per second. Low-frequency sounds are those that vibrate only a few times per second, while high-frequency sounds vibrate many more times per second. The term used to distinguish your perception of higher-frequency sounds from lower-frequency sounds is pitch.
The speed of sound is the same for all frequencies, although it does vary with the medium through which the sound travels. In air, sound travels at a speed of roughly 340 meters per second. Sound travels fastest through metals because the molecules of that medium are packed very closely together; similarly, sound travels about four times faster in water than in air. Sound also travels slightly faster in humid air than in dry air, and because humid air absorbs more high frequencies than low frequencies, sound is perceived somewhat differently under the two conditions. Finally, temperature affects the speed of sound in any medium. For instance, the speed of sound in air increases by about 0.6 meters per second for each degree Celsius increase in temperature.
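The temperature relationship lends itself to a quick worked example. The minimal sketch below assumes a baseline speed of about 331 meters per second at 0 degrees Celsius (a standard textbook value not given in the text) and adds 0.6 meters per second per degree, as stated above; the function name is illustrative only.

```python
def speed_of_sound_air(temp_c, v0=331.0, slope=0.6):
    """Approximate speed of sound in air, in meters per second.

    v0    -- assumed baseline speed at 0 degrees Celsius (a standard
             textbook value; the text itself only gives "roughly 340 m/s").
    slope -- increase of about 0.6 m/s per degree Celsius, as stated above.
    """
    return v0 + slope * temp_c


if __name__ == "__main__":
    for t in (0, 20, 35):
        print(f"{t:>2} deg C: about {speed_of_sound_air(t):.0f} m/s")
    # At roughly room temperature (20 deg C) this gives ~343 m/s,
    # consistent with the "roughly 340 meters per second" figure above.
```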
The human ear responds to frequencies in the range of 20 Hz to 20,000 Hz (20 kHz),18 although most speech frequencies lie between 100 and 4,000 Hz. Frequencies above 20,000 Hz are referred to as ultrasonic. Though ultrasonic frequencies are outside the range of human perception, many animals can hear these sounds. For instance, dogs can hear sounds at frequencies as high as 50,000 Hz, and bats can hear sounds as high as 100,000 Hz. Other sounds, such as some produced by earthquakes and volcanoes, have frequencies of less than 20 Hz. These sounds, referred to as infrasonic or subsonic, are also outside the range of human hearing.
We all know that sounds can be louder or softer, but what does this mean? Sound is energy, and this energy, when traveling through air, displaces, or vibrates, air molecules. For example, the softest sound humans can hear is a sound that displaces particles of air by one-billionth of a centimeter.13 The extent to which air particles move from their original resting point determines the amplitude of the sound wave (see Figure 7). The greater the amplitude of the sound wave, the greater the intensity, or pressure, of the sound. Intensity refers to the overall amplitude of a sound. This distinction in terms is necessary, since nearly all sounds to which we are exposed are complex sounds made up of a combination of sound waves. Loudness is our perception of the intensity, frequency, and duration of a sound.
Sound intensity is measured in relation to an accepted reference point. One such reference is the threshold at which a sound can be heard. How the intensity of any given sound compares with this standard reference level is given in units known as decibels (dB). The decibel is one-tenth of a bel, a unit named after the inventor Alexander Graham Bell. The decibel scale is not a linear one; rather, it expresses the ratio of a sound to the reference standard. To understand why ratios are necessary, consider the tremendous range of sound intensities we are capable of hearing. Scientists estimate that the human ear is sensitive to about 100,000,000,000,000 (10^14) units of intensity. Also consider that a shout is about 1,000,000 (10^6) times more powerful than a whisper. Because dealing with such large numbers is cumbersome, the decibel scale is used to simplify comparisons (see Table 1). Every 10-dB increase represents a 10-fold increase in sound intensity and roughly a doubling in perceived loudness. Therefore, a sound at 60 dB is 100 times as intense as a sound at 40 dB but is perceived as only about four times as loud. In this way, the predominant range of human hearing is represented on a scale from 0 to 140 dB. The average intensities of some everyday sounds are presented in Table 2.
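The ratio arithmetic can be made concrete with a short sketch that uses the standard definition of the decibel (ten times the base-10 logarithm of the intensity ratio). The function names below are illustrative only.

```python
import math


def intensity_ratio_to_db(ratio):
    """Convert an intensity ratio (I / I_reference) into decibels."""
    return 10 * math.log10(ratio)


def db_difference_to_ratio(db_difference):
    """Convert a difference in decibels back into an intensity ratio."""
    return 10 ** (db_difference / 10)


if __name__ == "__main__":
    # A shout is about one million (10^6) times as intense as a whisper,
    # which the decibel scale compresses into a 60-dB difference.
    print(intensity_ratio_to_db(1_000_000))    # 60.0
    # 60 dB versus 40 dB: a 20-dB difference is a 100-fold intensity ratio.
    print(db_difference_to_ratio(60 - 40))     # 100.0
    # The full ~10^14 range of human hearing spans about 140 dB.
    print(intensity_ratio_to_db(1e14))         # 140.0
```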
Individuals are often unaware of the damage loud noise does to their hearing. Even common noises, such as highly amplified music and gas-engine mowers or leaf blowers, can damage human hearing with prolonged exposure. Sporting events can also expose individuals to hazardous decibel levels as defined by the Occupational Safety and Health Administration (OSHA). Under OSHA guidelines, the limit for continuous noise exposure over an eight-hour day in an industrial setting is 90 dB. OSHA also prohibits workplace impact noise (short bursts of sound) greater than 140 dB. By becoming more aware of the decibel levels of common environmental noises, we can better limit our exposure to hazardous noise or take measures to protect our ears.
Perception of sound has a biological basis
When sound, as vibrational energy, arrives at the ear, it is processed in a complex but distinct series of steps. These steps reflect the anatomical division of the ear into the outer ear, middle ear, and inner ear (see Figure 8).
The pathway from the outer ear to the inner ear is remarkable in its ability to precisely process sounds from the very softest to the very loudest and to distinguish very small changes in the frequency of sound (pitch). Humans can discern a difference in frequency of just 0.1 percent. This means that humans can tell the difference between sounds at frequencies of 1,000 Hz and 1,001 Hz.
The outer ear
The outer ear is composed of two parts. The pinna is the outside portion of the ear and is composed of skin and cartilage. The second part is called the ear canal (also called the external auditory canal). The pinna, with its twists and folds, serves to enhance high-frequency sounds and to focus sound waves into the middle and inner portions of the ear. The pinna also helps us determine the direction from which a sound originates. However, the greatest asset in judging the location of a sound is having two ears. Because one ear is closer to the source of a sound than the other, the brain detects slight differences in the times and intensities of the arriving signals. This allows the brain to approximate the sound's location. Interestingly, the position and orientation of the pinna, at the side of the head, help reduce sounds that originate behind us. This helps us hear sounds that originate in the direction we are looking and reduces distracting background noises.
Some students (and adults) may believe that the size of the ear is an indication of the organism's hearing ability—that is, the larger the ear, the better the ability to hear. This misperception doesn't take into account the internal structures of the ear that process sound vibrations. A large pinna may serve a function that is unrelated to hearing. For example, the external ear of the African elephant is filled with small blood vessels that help the animal dissipate excess heat. The external ear may be specialized in other ways, as well. Cat owners, for example, have undoubtedly observed the rather dramatic movement of their pet's pinnae as the animal attempts to locate the source of a sound.
The ear canal is about 2.5 cm (1 inch) long and leads to the tympanic membrane (eardrum) of the middle ear. The outer two-thirds of the canal contains glands that secrete a wax-like substance. This earwax, along with the hairs that are present, keeps dust, insects, and other foreign material from traveling deeper into the ear. It also helps maintain a constant humidity and temperature for the middle ear. Individuals should not attempt to remove earwax themselves; in most cases the secretion works its way out of the canal naturally, and if removal is needed, it should be done by a medical professional to avoid damage. Hearing researchers endorse the adage: put nothing smaller than your elbow into your ear. In addition to its protective function, the ear canal acts as an amplifier for sound frequencies between 3,000 and 4,000 Hz.
The middle ear
The tympanic membrane (eardrum) separates the outer ear from the middle ear. It is a continuously growing structure, which means that damage to the membrane can generally be repaired. The membrane is circular in shape. The elastic properties of the tympanic membrane allow it to vibrate in response to sound waves. Vibrations from the tympanic membrane tend to focus near the center of the structure. From there, the vibrations are transferred to the malleus, the first of the three bones of the middle ear. The three bones of the middle ear are collectively called the ossicles. The second bone of the middle ear is the incus, which is connected to the malleus and vibrates in concert with it. A third bone, the stapes, is connected to the incus, and also vibrates. The stapes sits in an opening in the bony wall, called the oval window, that separates the middle ear from the inner ear. The elegance of the middle ear system lies in its ability to greatly amplify sound vibrations before they enter the inner ear. Amplification occurs in part because the tympanic membrane is 15–30 times larger than the oval window. This size difference allows the force from the initial movement of the tympanic membrane to be concentrated as this energy transfers to the inner ear. The ossicles are the smallest bones in the body. The three bones are smaller than an orange seed. The malleus reaches an average length of about 8 mm, the incus 9 mm, and the stapes, only 3 mm. These bones also are referred to informally as the hammer, anvil, and stirrup, respectively.
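The size difference between the tympanic membrane and the oval window translates directly into pressure amplification, because pressure is force divided by area: the same force collected over the large eardrum is concentrated onto the much smaller oval window. A minimal sketch of that arithmetic follows; the specific areas are hypothetical round numbers chosen only so their ratio falls within the 15–30-fold range quoted above.

```python
def pressure_amplification(eardrum_area_mm2, oval_window_area_mm2):
    """Ratio of the pressure at the oval window to the pressure at the
    eardrum when the same force is concentrated onto the smaller area
    (pressure = force / area, so the gain is just the area ratio)."""
    return eardrum_area_mm2 / oval_window_area_mm2


if __name__ == "__main__":
    # Hypothetical round-number areas chosen only so that their ratio
    # falls inside the 15-30-fold range quoted above.
    eardrum_area, oval_window_area = 60.0, 3.0  # square millimeters
    gain = pressure_amplification(eardrum_area, oval_window_area)
    print(f"Pressure at the oval window is about {gain:.0f} times greater.")
```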
The middle ear is an air-filled space. It is connected to the back of the throat by a small tube called the eustachian tube, which allows the air in the middle ear space to be refreshed periodically. The eustachian tube can become blocked by infection, and fluid may fill the middle ear space. Changes in air pressure can also affect the tympanic membrane, resulting in the ear-popping phenomenon experienced by people who fly in airplanes or drive over mountain roads. The membrane may bend in response to altered air pressure and then "pop" back to its original position when the eustachian tube opens and internal and external air pressures are equalized.
The inner ear
Two interconnected parts that form a system of small cavities and passageways make up the inner ear. One part is the vestibular system, which is responsible for helping maintain balance. The second part is the cochlea, a coiled cavity about 35 mm long. The human cochlea makes about two turns. It is shaped like a spiral seashell or snail shell and is the hearing portion of the inner ear. It is responsible for converting the vibrational energy produced by the middle ear into nerve impulses (electrical energy) that will travel to the brain. The process of converting energy from one form into another is called transduction. Because the brain is incapable of interpreting the information in the vibrational energy of a sound source, transduction is a critical process, providing information to the brain in a form that it can process.
The cochlea is divided into an upper chamber, called the scala vestibuli or vestibular canal, and a lower chamber, called the scala tympani or tympanic canal. These are seen most easily if the cochlea is represented as uncoiled, as in Figure 9.
Both the upper and lower chambers are filled with a fluid, called perilymph, which is nearly identical to spinal fluid. The stapes vibrates against the oval window, creating fluid vibrations that are transmitted as pressure waves all the way through the cochlea. As represented by the arrows in Figure 10, these waves move from the upper chamber to the lower chamber, to the round window. The round window allows the release of the hydraulic pressure caused by vibration of the stapes in the oval window. Additionally, the diameter of the chambers decreases from base (closest to the windows) to apex.
The upper and lower chambers are separated from one another by the cochlear duct. The cochlear duct is separated from the lower chamber by the basilar membrane and is filled with endolymph, a fluid similar to that found within cells. Sitting on the basilar membrane is the highly sensitive organ of hearing called the organ of Corti, named after Alfonso Corti, the Italian anatomist who discovered it in the mid-1800s. The relationships between the basilar membrane and the organ of Corti are depicted in Figure 11.
Hair cells of the organ of Corti are the specialized receptor cells of hearing. Under a microscope, these cells appear as elongated ovals with hairlike extensions, the stereocilia, waving at one end. Like microphones, hair cells ultimately translate, or transduce, mechanical vibrations occurring in the outer, middle, and inner ear into electrical impulses. These nerve impulses are then relayed to the brain via the auditory nerve.

There are actually two types of hair cells. The inner hair cells are arranged in a single row along the full length of the organ of Corti; there are about 3,500 of them in total. The outer hair cells also run the full length of the organ of Corti but are arranged in three parallel rows, and there are nearly four times more of them than inner hair cells (about 12,000 per ear). The inner hair cells contact nearly all of the nerve fibers of the auditory nerve that transmits information to the brain, whereas the outer hair cells primarily contact nerve fibers that carry information from the brain.

Hair cells are quite sensitive to stimulation by slight sounds and also are extremely rapid in their responses and communication with auditory neurons. Hair cells, for example, respond 1,000 times faster to stimulation than do visual receptor cells. The key to their sensitivity lies in part with their structure. The membrane-bound hairlike structures that give hair cells their name, the stereocilia, extend from the cell tops and are embedded in an overhanging sheet of cells called the tectorial membrane (see Figure 12). Each hair cell may have about 100 stereocilia. In a resting state, the stereocilia lean on one another and have the overall appearance of a conical bundle.
To understand how hair cells function to transduce the mechanical vibrations of sound, consider Figures 10 and 12.
The bodies of hair cells sit on top of the basilar membrane (see Figure 12). The stereocilia of hair cells connect the body of the cell with the tectorial membrane. Pressure waves in the cochlea (see Figure 10) move the basilar membrane and cause the stereocilia to move. This movement initiates biochemical events in the cells that result in the generation of electrical signals.
Sound is mapped to different parts of the cochlea according to frequency. Figure 13 shows where tones of different frequencies cause vibrations of maximum amplitude along the length of the cochlea. The base, close to the stapes, is stiff and narrow and responds more to high-frequency (high-pitched) sounds. The apex, far from the stapes, is broad and responds more to low-frequency sounds.
Transmission to the brain
Extending from the organ of Corti are 30,000–40,000 nerve fibers that form the auditory nerve. The number of fibers required to carry a sound signal may give the brain a measure of the sound's intensity. The fibers of the auditory nerve proceed a short distance to the brainstem. From there, fibers extend to the midbrain and then to the auditory cortex, which is located in the temporal lobe of the brain (see Figure 14). Through mechanisms that remain unknown, the brain interprets the electrochemical information it receives, thus allowing us to perceive sounds as having varying loudness and pitch.
The brain recognizes and interprets sound in our environment through a sequence of events called auditory processing. A disorder known as auditory processing disorder (APD) came to prominence in the 1970s.3, 9 In APD, something interferes with the brain's ability to process or interpret information about sound, even though hearing seems to be normal. Children with APD typically have normal hearing and intelligence. Symptoms of APD include difficulty paying attention to and remembering information presented orally; poor listening skills; difficulty carrying out multistep directions; poor spelling, vocabulary, and reading comprehension; slow processing of information; low academic performance; behavioral problems; and difficulty with language (for example, confusing syllable sequences, developing vocabulary slowly, and struggling to understand speech). APD is sometimes called "word deafness" because children with the disorder may not recognize the subtle differences between sounds in words. For example, children with APD may hear the sentence "Tell me how a couch and a chair are alike" as "Tell me how a cow and a hair are alike."
What causes this apparent deficiency or slowing in the brain's ability to process auditory information? Researchers do not know. Auditory processing is a learned function, and if something interferes with the brain's training, the result may be a deficit in the capacity to process sound.
Sound direction is localized by virtue of our having two ears and our ability to use different parts of the auditory system to process distinct aspects of incoming directional information. Certain cells in the brainstem compare the intensities of sound coming into each ear and then relay a computed signal to the auditory cortex to estimate the sound's direction. Another group of brainstem cells contributes to the interpretation of sound direction by specifically comparing the time lag between the sound reaching them from the right ear versus the left ear.
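To get a feel for the time lags these brainstem cells work with, a simple far-field approximation can be used: the extra path to the far ear is roughly the width of the head times the sine of the source's angle from straight ahead. The sketch below applies that approximation; the head-width and speed-of-sound values are assumptions for illustration, not figures from the text.

```python
import math


def interaural_time_difference(angle_deg, head_width_m=0.18, speed_m_s=343.0):
    """Approximate arrival-time difference (seconds) between the two ears
    for a distant source located angle_deg away from straight ahead,
    using the simple far-field approximation ITD = d * sin(angle) / c.
    head_width_m and speed_m_s are assumed, illustrative values."""
    return head_width_m * math.sin(math.radians(angle_deg)) / speed_m_s


if __name__ == "__main__":
    for angle in (0, 30, 90):
        microseconds = interaural_time_difference(angle) * 1e6
        print(f"{angle:>2} degrees off-center: about {microseconds:.0f} microseconds")
    # Even a source directly to one side arrives only about half a
    # millisecond earlier at the near ear, so the brainstem circuits
    # described above must resolve very small timing differences.
```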
Nerve fibers coming from the brain may carry information back to the ear. This is the brain's way of filtering out unimportant signals and concentrating on important ones. Other nerve fibers proceed from the brain to the middle ear, where they control muscles that help protect against the effects of dangerously loud sounds.
Not only does the inner ear process the sound vibrations it receives, it also creates its own sound vibrations. When hair cells respond to vibration, their movement in the fluid environment of the cochlear duct produces friction, and this results in a loss of energy. However, a group of hair cells replaces the lost sound energy by creating sound of its own. Some of this sound energy leaks back out of the ear and can be detected using a computer-based sound analyzer and a probe inserted into the outer third of the ear canal. This ability of hair cells to respond to sound by producing their own sound is the basis of one type of hearing test performed on infants and young children.
Hearing Loss
The auditory pathway is capable of providing a lifetime of useful service. It is, however, fragile and subject to damage from a variety of sources. Hearing loss and deafness can result from sound exposure, heredity, ototoxic drugs (chemicals that damage auditory tissues), accidents, and disease or infection. Conductive hearing loss results from damage to the outer or middle ear, and sensorineural hearing loss results from damage to the inner ear.
Damage associated with conductive hearing loss interferes with the efficient transfer of sound to the inner ear. Conductive hearing loss is characterized by a loss in sound intensity. Voices may sound muffled, while at the same time the individual's own voice may seem quite loud. It can be caused by anything that interferes with the vibration of the eardrum or with the movement of the bones of the middle ear. Even a buildup of earwax can lead to conductive hearing loss.
A number of treatment options exist for conductive hearing loss. The appropriate response depends upon the cause of the problem. For example, an ear doctor can simply remove a buildup of earwax. It should be pointed out, however, that you should never try to remove wax from your own ears. You can too easily push the wax further into the ear canal and even damage your eardrum. A common cause of conductive hearing loss in children is ear infections. Other causes of conductive hearing loss are a punctured eardrum or otosclerosis (a buildup of spongy tissue around the middle ear). These can be treated through surgery.
Sensorineural hearing loss is generally associated with damage to the hair cells in the inner ear. Such damage is the most common cause of hearing loss and can result from a number of factors working alone or in combination.
Noise exposure
When hair cells are damaged, their ability to participate in sound transduction is compromised. If your hair cells are completely destroyed, you will be unable to hear any sounds, no matter how loud they are. If the hair cells are damaged, you may still hear sounds, but the sounds will be distorted. Recall that different hair cells respond to different pitches. The pattern of hair-cell damage determines which pitches are preferentially lost. Typically, hair cells that respond to higher pitches are lost first. One reason is that the basilar membrane vibrates more vigorously in response to higher pitches. These vibrations can cause the delicate stereocilia of the hair cells to be sheared off (see Figure 15). One consequence of this damage is that it becomes more difficult to understand the higher-pitched voices of women and children. It also becomes more difficult to distinguish a person's speaking voice from background noise. The effects of noise-induced hearing loss may be temporary or permanent, depending on the intensity and duration of the exposure. Although a person's hearing may recover from temporary, slight damage to the hair cells, the complete loss of hair cells is irreversible in humans. Reptiles and birds are able to regenerate hair cells, however, so scientists are currently exploring ways to encourage regeneration of hair cells in humans.
The phrase "too loud, too long, too close" (see the WISE EARS! Web site, http://www.nidcd.nih.gov/ health/wise/index.asp) summarizes the causes of noise-induced hearing loss. The intensity, duration, and proximity of sound to the listener determine whether or not damage occurs and if that damage is reversible or permanent. Hearing loss can result from a single loud noise, such as an explosion, but more commonly results from repeated exposure to less intense sounds that are close by.
The phrase "too loud, too long, too close" summarizes the causes of noise-induced hearing loss.
Aging
Damage to hair cells is associated with aging, though it is not inevitable. Such damage can result from a combination of factors, such as noise exposure, injury, heredity, illness, and circulation problems. Some of these factors, such as noise exposure, can take many years before their damaging effects are noticeable. Hearing loss often begins when a person is in his or her 20s, though it may not be noticed until the person is in his or her 50s. Not surprisingly, the greater the noise exposure over a lifetime, the greater the hearing loss. Because the hair cells at the base of the cochlea "wear out" before those at the apex, the higher pitches are lost first, followed by the lower ones.
Ototoxic drugs
Medications and chemicals that are poisonous to auditory structures are called ototoxic. Certain antibiotics can selectively destroy hair cells, enabling scientists to better understand hair-cell function in normal and abnormal hearing. Other types of drugs can be used to selectively destroy other tissues of the auditory pathway. A few common medications can produce the unwanted side effect of tinnitus, or ringing in the ears. One such drug is aspirin. Arthritis sufferers, who may consume large amounts of aspirin, sometimes experience tinnitus and hearing loss as a side effect of their aspirin use. Fortunately, the effect is temporary and the tinnitus tends to disappear when aspirin use is discontinued.
Disease and infections
A variety of diseases and infections can lead to hearing loss. Children are especially prone to otitis media, an ear infection caused by viruses or bacteria. Children are more susceptible to infection than adults are, partly because the position of the eustachian tube relative to the middle ear allows bacteria from the nasal passages easier access. These infections cause pain and may result in a buildup of fluid, which can lead to hearing loss. Usually, bacterial infections can be controlled by antibiotics. Antibiotics are ineffective against viruses, however, and the over-prescription of antibiotics to treat viral forms of otitis media has led to a rise in antibiotic-resistant bacteria. If allowed to progress untreated, ear infections can lead to a much more serious condition called meningitis. Young children who experience ear infections accompanied by hearing loss for prolonged periods also may exhibit delayed speech development, because the first three years of life are a critical period for acquiring language, which depends upon a child's ability to hear spoken words.
Otosclerosis refers to a condition in which the bones of the middle ear are damaged by the buildup of spongy or bone-like tissue. The impaired function of the ossicles (the malleus, incus, and stapes) can reduce the sound reaching the ear by as much as 30 to 60 dB. This condition may be treated by surgically replacing all or part of the ossicular chain with an artificial one.
Ménière's disease affects the inner ear and vestibular system, the system that helps us maintain our balance. In this disorder, the organ of Corti becomes swollen, leading to a loss of hearing that comes and goes. Other symptoms include tinnitus, episodes of vertigo (dizziness), and imbalance. The disease can exist in mild or severe forms. Unfortunately, the cause of the disease is not well understood and effective treatments are lacking.
Heredity
The Mouse and Human Genome Projects are setting the stage for identifying the genetic contributions to hearing. Though deciphering the genetics underlying any developmental pathway is complex, identifying genes involved in the hearing pathway can greatly aid our understanding of the hearing process. Genes associated with a number of hereditary conditions that cause deafness, such as Usher syndrome16 and Waardenburg syndrome,17 already have been isolated. The identification of hearing-related genes has moved at an incredibly fast pace in the past decade. The first genetic mutation affecting hearing was isolated in 1993; by the end of 2000, the number of identified auditory genes was over 60. Scientists have also pinpointed over 100 chromosomal regions believed to harbor genes affecting the hearing pathway.
An important technology for investigating the roles that genes play in hearing is the production of transgenic and "knockout" mice, which result when scientists insert a foreign gene into (transgenic) or delete a targeted gene from (knockout) the mouse genome. The hearing responses of transgenic or knockout mice are compared with their unaltered counterparts. If differences are detected, they are presumed to be caused by the specific gene that was inserted or deleted. Eventually, scientists hope to use their understanding of the genetic basis of hearing to develop treatments for hereditary hearing loss and deafness.
Cochlear implants
A cochlear implant (see Figure 16) is a hearing device designed to bypass absent or damaged hair cells. The cochlear implant is a small, complex, electronic device that can help provide an interpretable stimulus to a person who is profoundly deaf or severely hard-of-hearing. The implant is surgically placed under the skin behind the ear, and consists of four basic parts:
- a microphone that picks up sound from the environment;
- a speech processor, which selects and arranges sounds picked up by the microphone;
- a transmitter and receiver/stimulator that receives signals from the speech processor and converts them into electric impulses; and
- electrodes that collect the impulses from the stimulator and send them to the brain.
A cochlear implant does not restore or create normal hearing. Instead, under the appropriate conditions, it can give a deaf or severely hard-of-hearing person a useful auditory understanding of the environment, including sirens and alarms. A cochlear implant is very different from a hearing aid. Whereas hearing aids amplify sound and change the acoustical signal to match the degree of hearing loss, cochlear implants compensate for damaged or nonworking parts of the inner ear by bypassing them altogether. When hearing is functioning normally, complex processes in the inner ear convert sound waves in the air into electrical impulses. These impulses are then sent to the brain, where a hearing person recognizes them as sound. A cochlear implant works in a similar manner: it electronically transforms sounds and then sends them to the brain. Hearing through an implant sounds different from normal hearing, but it allows many people with severe hearing problems to participate fully in oral communication.
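As a rough illustration of what "electronically transforms sounds" can mean, the sketch below splits a sound into a handful of frequency bands and extracts each band's amplitude envelope, which in a real device would help set the stimulation delivered to the corresponding electrode. This is a toy, envelope-based illustration only, not the algorithm of any actual implant; the electrode count and filter settings are arbitrary assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert


def toy_envelope_processor(signal, sample_rate, n_electrodes=8,
                           low_hz=200.0, high_hz=7000.0):
    """Illustrative, highly simplified sound-to-electrode mapping:
    split the signal into n_electrodes frequency bands and return each
    band's amplitude envelope, which in a real device would help set
    the stimulation level on the matching electrode.  All parameter
    values here are arbitrary assumptions, not device specifications."""
    band_edges = np.geomspace(low_hz, high_hz, n_electrodes + 1)
    envelopes = []
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=sample_rate, output="sos")
        band = sosfilt(sos, signal)
        envelopes.append(np.abs(hilbert(band)))  # amplitude envelope of the band
    return np.array(envelopes)


if __name__ == "__main__":
    fs = 16_000
    t = np.arange(0, 0.5, 1 / fs)
    tone = np.sin(2 * np.pi * 1000 * t)  # a 1,000-Hz test tone
    env = toy_envelope_processor(tone, fs)
    # Most of the energy should appear in the band that contains 1,000 Hz.
    print(np.round(env.mean(axis=1), 3))
```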
Outcomes for patients with cochlear implants vary. For many, the implant provides sound cues that help them better understand speech. Many are helped to such an extent that they can carry on a telephone conversation. Originally, only patients with profound hearing loss were deemed suitable for the procedure. One reason for this restrictive policy is that implantation destroys whatever natural hearing remains in the implanted ear. Eventually, it was discovered that patients with some residual hearing could benefit more from the procedure than those with profound hearing loss. For appropriate individuals, cochlear implants can be extremely beneficial. Each case must be examined individually to determine whether a cochlear implant is the best available treatment.
The use of cochlear implants can be controversial, especially among some deaf people. Just as spoken language helps define the culture of the hearing world, sign language helps define the culture of the deaf community. The issues surrounding the use of speech or American Sign Language by the deaf community illustrate the profound effects of language, hearing, and communication on one's sense of self.
Prevention of Noise-Induced Hearing Loss
Noise-induced hearing loss (NIHL) is a serious health problem. It occurs on the job as well as in nonoccupational settings. An estimated 10 million Americans have suffered irreversible hearing damage due to noise exposure. Another 30 million Americans are exposed to dangerous levels of noise every day.10 This is especially tragic because NIHL is completely preventable. Although the consequences may vary for people who are exposed to identical levels of noise, some general conclusions can be stated. For example, studies have shown that sound levels of less than 75 dB are unlikely to cause permanent hearing loss, even after prolonged exposure. However, sound levels equal to or greater than 85 dB—about the same level as loud speech—for eight hours per day will produce permanent hearing loss after many years. At this time, it is not possible to predict a given individual's degree of sensitivity to dangerous noise. Some people may be more sensitive to noise exposures than others.
In the work environment, employers are obligated to protect their workers from hazardous noise. Hearing-conservation programs, when implemented effectively, are associated with increased worker productivity and decreased absenteeism. They also lead to fewer workplace injuries and workers' compensation claims. Whenever hazardous levels of sound are encountered, either on the job or at home, you can protect yourself by using ear protection such as earplugs or special earmuffs. Do not simply put your fingers in your ears or stuff cotton in them. Additionally, anyone exposed to significant levels of noise for long durations should receive regular hearing tests to detect changes in hearing.
Tinnitus is the medical term for the perception of sound when no external sound is present. The disorder is characterized by ringing, roaring, or repeated soft clicks in the ears. It is known that the ear continuously sends electrical impulses to the brain, even in the absence of sound. Some scientists speculate that when hair cells are damaged, the impulses are disrupted and the brain responds by generating its own sound signals. Normally, when an ear is stimulated by sound, auditory regions on both the left and right side of the brain become active. People experiencing tinnitus show brain activation in only one side of the brain, however. This difference in neural activity caused by external sounds (bilateral activation) versus tinnitus (unilateral activation) indicates that the disorder is likely to be a result of changes in the brain itself. Tinnitus may be produced by disturbances in auditory processing by the brain.
Over 50 million Americans experience tinnitus at some point in their lives. Some perceive the disorder as an annoying background noise, while others are incapacitated by loud noise that disturbs them day and night. Although the exact causes of tinnitus are not known, scientists agree that it is associated with damage to the ear. Possible triggers of tinnitus include NIHL, too much alcohol or caffeine, stress, inadequate circulation, allergies, medications, and disease. Of these factors, exposure to loud noise is by far the most probable cause of tinnitus. Perhaps not surprisingly, there is no single effective treatment. Depending on the suspected cause, individuals may be given drugs to increase blood flow, or they may receive guidance on ways to reduce stress or change their diets. The best advice for those concerned about NIHL is to limit exposure to hazardous noise (both its proximity and its duration), wear ear protection when exposed, and have hearing tests performed regularly.
References
- 1. Barinaga M. Neurobiology: New leads to brain neuron regeneration. Science. 1998;282:1018–1019. [PubMed: 9841441]
- 2. Charrier I, Matheron N, Jouventin P. Newborns need to learn their mother's call before she can take off on a fishing trip. Nature. 2001;412:873.
- 3. Chermak GD, Hall JW, Musiek FE. Differential diagnosis and management of central auditory processing disorder and attention deficit hyperactivity disorder. Journal of the American Academy of Audiology. 1999;10:289–303. [PubMed: 10385872]
- 4. Kuhl PK, Andruski JE, Chistovich IA, Chistovich LA, Kozhevnikova EV, Ryskina VL, Stolyarova EI, Sundberg U, Lacerda F. Cross language analysis of phonetic units in language addressed to infants. Science. 1997;277:684–686. [PubMed: 9235890]
- 5. Lorenz K. Studies in Animal and Human Behavior. Vol. 1. Cambridge, MA: Harvard University Press; 1970.
- 6. Loucks-Horsley S, Hewson P, Love N, Stiles K. Designing Professional Development for Teachers of Science and Mathematics. Thousand Oaks, CA: Corwin Press; 1998.
- 7. Massaro DW, Storck DG. Speech recognition and sensory integration. American Scientist. 1998;86:236–244.
- 8. Mooney R. Sensitive periods and circuits for learned birdsong. Current Opinion in Neurobiology. 1999;9:121–127. [PubMed: 10072364]
- 9. NIDCD Fact Sheet. Auditory processing disorder in children: What does it mean? 2001. Retrieved from http://www.nidcd.nih.gov/health/voice/auditory.asp.
- 10. NIDCD Consensus Statement. Noise and hearing loss. 2001. Retrieved from http://consensus.nih.gov/cons/076/076_intro.htm. [PubMed: 2202895]
- 11. Nondahl DM, Cruikshanks KJ, Wiley TL, Klein R, Klein BEK, Tweed TS. Recreational firearm use and hearing loss. Archives of Family Medicine. 2000;9:352–357. [PubMed: 10776364]
- 12. Petitto LA. On the biological foundations of human language. In: Emmorey K, Lane H, editors. The Signs of Language Revisited: An Anthology in Honor of Ursula Bellugi and Edward Klima. New Jersey: Lawrence Erlbaum Assoc; 2000.
- 13. The Physics Classroom: A High School Physics Tutorial. Sound waves and motion: Lesson 2: Sound properties and their perception. 2002. Retrieved from http://www.physicsclassroom.com/Class/sound/U11L2a.html.
- 14. Ramus F, Hauser MD, Miller C, Morris D, Mehler J. Language discrimination by human newborns and by cotton-top tamarin monkeys. Science. 2000;288:349–351. [PubMed: 10764650]
- 15. Resources: Roots of hearing loss. Science. 2000 December 15;290:2027. (in NetWatch)
- 16. Weil D, Blanchard S, Kaplan J, Guilford P, Gibson F, Walsh J, et al. Defective myosin VIIA gene responsible for Usher syndrome type 1B. Nature. 1995;374:60–61. [PubMed: 7870171]
- 17. Xia JH, Liu CY, Tang BS, Pan Q, Huang L, Zhang BR, et al. Mutations in the gene encoding gap junction protein beta-3 associated with autosomal dominant hearing impairment. Nature Genetics. 1998;20:370–373. [PubMed: 9843210]
- 18. Yost WA. Fundamentals of Hearing: An Introduction. 4th ed. San Diego, CA: Academic Press; 2000.
Glossary
- amplitude
The displacement of a wave. In the case of a sound wave, the greater the amplitude of the wave, the greater the intensity, or pressure, of the sound. The extent to which air particles are displaced in response to the energy of a sound.
- APD
See auditory processing disorder.
- auditory cortex
The area of the brain (in the temporal lobe) that receives the nerve impulses carried by the auditory pathway and interprets them in a form that is perceived as sound.
- auditory nerve
The eighth cranial nerve, which connects the inner ear to the brainstem and is responsible for hearing and balance.
- auditory processing disorder (APD)
Reduced or impaired ability to discriminate, recognize, or comprehend complex sounds, such as those used in words (for example, coat/boat or sh/ch), even though hearing is normal.
- basilar membrane
The membrane in the cochlea on which the hair cells of the organ of Corti sit. The basilar membrane moves in response to pressure waves in the cochlea, initiating a chain of events that results in a nerve impulse traveling to the brain.
- brainstem
A region of the brain that connects the spinal cord to higher levels of the brain, such as the cortex.
- cochlea
Snail-shaped structure in the inner ear that contains the organ of hearing. The cochlea is a coiled, fluid-filled cavity responsible for converting vibrational energy from the middle ear into nerve impulses that travel to the brain.
- cochlear duct
See scala media.
- cochlear implant
A medical, electronic device that bypasses the damaged structures in the inner ear and directly stimulates the auditory nerve. An implant does not restore or create normal hearing. Instead, under the appropriate conditions, it can give a deaf person a useful auditory understanding of the environment and help him or her understand speech. The implant is surgically placed under the skin behind the ear. An implant has four basic parts: a microphone, which picks up sound from the environment; a speech processor, which selects and arranges sounds picked up by the microphone; a transmitter and receiver/stimulator, which receives signals from the speech processor and converts them into electric impulses; and electrodes, which collect the impulses from the stimulator and send them to the brain.
- conductive hearing loss
A type of hearing loss that results from dysfunction of the outer or middle ear (such as a punctured eardrum or buildup of ear wax) that interferes with the efficient transfer of sound to the inner ear; characterized by a loss in sound intensity.
- critical period
A period of time during an organism's development in which the brain is optimally capable of acquiring a specific ability, provided that appropriate environmental stimuli are present. Humans as well as some animals are known to have a critical period during which language is acquired.
- decibel (dB)
A unit that measures the intensity of sound.
- ear canal
A component of the outer ear that leads to the tympanic membrane (eardrum) of the middle ear. The ear canal is lined with wax and hairs that prevent small foreign material from traveling deeper into the ear.
- endolymph
A fluid, similar to the fluid found within cells, that fills the cochlear duct and the labyrinth, the organ of balance in the inner ear.
- eustachian tube
A small tube that connects the middle ear with the back of the throat. It allows the air in the middle ear to be refreshed periodically.
- frequency
The number of times a sound vibrates per unit of time. Frequency is expressed in hertz (Hz), a unit of measurement equal to one cycle per second.
- gene
The functional and physical unit of heredity. Genes are segments of DNA found along a chromosome. They typically encode information used to produce a specific protein. Human DNA is organized into 46 chromosomes—23 from the father and 23 from the mother. The study of mice with hereditary hearing loss has enabled researchers to begin understanding the role that DNA and genetics play in human hearing disorders.
- hair cells
Found in the organ of Corti in the cochlea of the inner ear, these are the specialized receptors of hearing. The name refers to stereocilia, bundles of hairlike projections jutting upward from the cells. When the stereocilia are moved by sound vibrations, the hair cells translate this mechanical stimulation into an electrical nerve impulse that is carried to the brain by the auditory nerve.
- hertz (Hz)
A unit of frequency equal to one cycle per second.
- impact noise
A short burst of sound.
- imprinting
The process by which young individuals of a species acquire irreversible behavior patterns of that species. With respect to hearing, imprinting involves the ability of the brain to distinguish and process the sounds and rhythms of the first language or languages the young hear.
- incus
The center bone of the series of three small bones, or ossicles, of the middle ear. Sometimes called the anvil.
- infrasonic
Sounds with frequencies below 20 Hz and, therefore, beyond the range of human hearing.
- intensity
The amplitude of a sound wave. Sound intensity, which is expressed in decibels, is measured in relation to an accepted reference, such as the threshold at which an average person can hear a sound.
- inner ear
The most interior portion of the ear, made up of two interconnected parts: the vestibular system, a balance organ, and the cochlea, a hearing organ.
- loudness
Our perceived impression of the intensity, frequency, and duration of a sound.
- malleus
The first bone in the series of three small bones, or ossicles, of the middle ear. Sometimes called the hammer.
- Ménière's disease
Inner ear disorder that can affect both hearing and balance. Ménière's disease can cause episodes of vertigo, hearing loss, tinnitus, and the sensation of fullness in the ear.
- midbrain
A region of the brain that relays sound input to the auditory cortex.
- middle ear
The part of the ear that includes the eardrum and ossicles and ends at the oval and round windows that lead to the inner ear. The middle ear is an air-filled space connected to the back of the throat by the eustachian tube.
- NIHL
See noise-induced hearing loss.
- noise-induced hearing loss (NIHL)
Irreversible hearing loss caused by exposure to very loud impulse sounds, such as an explosion, or to less-intense sounds for an extended period of time. Loud noise levels damage hair cells of the inner ear.
- organ of Corti
The sensitive organ of hearing within the cochlear duct. The organ of Corti contains specialized cells called hair cells that transduce sound vibrations into electrical impulses.
- ossicles
The three smallest bones in the human body. The ossicles consist of the malleus, incus, and stapes (known also as the hammer, anvil, and stirrup, respectively), found in the middle ear. They are part of the system that amplifies sound vibrations that enter the middle ear.
- ossicular chain
The three bones that make up the ossicles of the middle ear (the malleus, incus, and stapes).
- otitis media
An inflammation of the middle ear, usually associated with a buildup of fluid related to a viral or bacterial infection. The fluid buildup can cause hearing problems, which arise when it interferes with the ability of the ossicles to conduct sound vibrations to the inner ear.
- otosclerosis
An abnormal growth of bone in the middle ear, which prevents structures within the ear from working properly, causing hearing loss.
- ototoxic
Damaging to the tissues of the ear. Ototoxic substances include a class of antibiotics, called aminoglycosides, that can damage the hearing and balance organs of sensitive individuals.
- outer ear
The part of the ear composed of the pinna and the ear canal.
- oval window
An opening in the bony wall that separates the middle ear from the inner ear.
- perilymph
A fluid, nearly identical to spinal fluid, that fills the upper and lower chambers of the cochlea (the scala vestibuli and scala tympani).
- perilymph fistula
The leakage of inner ear fluid into the middle ear. It is associated with head trauma, physical exertion, or exposure to severe pressure, but it can also occur without apparent cause.
- phonemes
The basic sound elements of a spoken language.
- pinna
The visible, outer part of the ear, composed of skin and cartilage. The pinna collects sound waves and funnels them through the ear canal toward the middle and inner ears. Having two pinnae helps animals determine the location of a sound. In some animals, the pinna serves additional functions, such as heat dissipation.
- pitch
The perception of a sound based on its frequency.
- round window
A membrane-covered opening in the cochlea that allows the pressure from sound waves to be released.
- scala media
Also called the cochlear duct, this region between the upper and lower chambers of the cochlea contains the organ of Corti.
- scala tympani
The lower chamber of the cochlea.
- scala vestibuli
The upper chamber of the cochlea.
- sensorineural hearing loss
Hearing loss caused by damage to the hair cells or nerve fibers of the inner ear.
- sensory integration
The involuntary process by which the brain assembles a picture of our environment at each moment in time using information from all of our senses. Some children with learning disabilities or autism have difficulty with sensory integration. (See the Web site of Sensory Integration International, The Ayres Clinic, http://www.sensoryint.com/faq.html.)
- sound
Vibrational energy: a pressure disturbance propagated through a medium, displacing molecules from a state of equilibrium. Also, the auditory perception of this disturbance; that is, something heard by the ears.
- sound intensity
The magnitude of a sound, measured against a standard reference in units known as decibels (dB). Intensity refers to the amplitude of a sound.
- sound waves
The longitudinal progressive vibrations in an elastic medium by which sounds are transmitted.
- stapes
The final bone in the series of three small bones, or ossicles, of the middle ear. Sometimes called the stirrup.
- stereocilia
Hairlike extensions jutting from one end of the inner ear's hair cells into the cochlear fluid.
- subsonic
See infrasonic.
- tectorial membrane
Found in the organ of Corti of the cochlea, this gelatinous membrane lies above the stereocilia of the hair cells. Movement of the basilar membrane (to which the hair cells are attached) pushes the stereocilia against the tectorial membrane, initiating a nerve impulse that travels from the hair cell to the brain.
- temporal lobe
A region of the brain that contains the auditory cortex, which is necessary for interpreting sounds.
- tinnitus
The term for the perception of sound when no external sound is present. The sensation of ringing, roaring, buzzing, or clicking in the ears or head. An ailment that is associated with many forms of hearing impairment and noise exposure.
- transduction
A process by which energy is converted from one form to another. In hearing, transduction occurs when hair cells convert the mechanical energy of sound vibrations into electrical nerve impulses.
- tympanic canal
See scala tympani.
- tympanic membrane
The eardrum. A structure that separates the outer ear from the middle ear and vibrates in response to sound waves. These vibrations are transferred to the small bones in the middle ear.
- ultrasonic
Sounds with frequencies above 20,000 Hz and, therefore, beyond the range of human hearing.
- vertigo
The illusion of movement. A sensation that the external world is revolving around an individual (objective vertigo) or the individual is revolving in space (subjective vertigo). May be caused by an inner ear dysfunction.
- vestibular canal
See scala vestibuli.
- vestibular system
The system responsible for maintaining balance, posture, and the body's orientation in space. This system also regulates locomotion and other movements and keeps objects in visual focus as the body moves. Located next to the cochlea, the vestibular system consists of three semicircular canals oriented in different planes. Fluid moving within the canals responds to movements of the head; the brain combines this information with visual input to assess an animal's current state of balance.
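Note on the decibel entry: the decibel scale is logarithmic. As a minimal worked sketch, using the conventional reference intensity of 10^-12 watts per square meter (approximately the threshold of human hearing), the sound intensity level L of a sound with intensity I can be written as

L = 10 \log_{10}\left(\frac{I}{I_0}\right)\ \text{dB}, \qquad I_0 = 10^{-12}\ \mathrm{W/m^2}

On this scale, a sound 1,000 times more intense than the reference measures 10 × log10(1,000) = 30 dB, and each further tenfold increase in intensity adds another 10 dB.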
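Note on the frequency entry: frequency f, period T, and wavelength \lambda are related by

f = \frac{1}{T}, \qquad \lambda = \frac{v}{f}

where v is the speed of sound in the medium (a rough working figure for air at room temperature is about 343 meters per second). For example, a 1,000 Hz tone has a period of 1/1,000 second (one millisecond) and a wavelength in air of roughly 343/1,000, or about 0.34 meters.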