Week Two Presbycusis Paper


Presbycusis is a condition affecting the hearing of many adults. There are a variety of causes, including damage to structures of the middle and inner ear. The result is a loss of hearing: not complete deafness, but difficulty detecting certain sounds within the normal range of hearing. Use each numbered item as a required subheading in your paper. Students should preview the grading rubric before beginning the assignment.

1. Explain how normal hearing occurs. Include in your discussion the following points:

· How is sound transmitted from the environment outside the body to the inner ear? What structures are involved, and how do they transmit sound?

· What happens in the inner ear (cochlea) when sound waves are converted to neural signals? How is sound frequency (pitch) processed?

2. Next, summarize the causes of presbycusis and explain how they will interfere with the normal processing of sound as outlined above.

· Discuss one source of presbycusis involving a problem with the outer/middle ear.

· Discuss one source of presbycusis involving a problem with the inner ear.

3. Finally, describe what it might be like to have presbycusis. Include the following points:

· If you have normal hearing now, how would your ability to converse with others be affected?

· What activities that you now enjoy would be limited by this condition?

· How would such a condition affect your work life?

 

The paper should include:

· A minimum of 3.5 and a maximum of 4.5 full pages of text, in 12-point Times New Roman, double spaced, with 1-inch margins, not including the title and references pages

· A title page

· The subheadings “Introduction”, “How Normal Hearing Occurs”, “Causes of Presbycusis”, “Having Presbycusis”, and “Conclusions”

· An introductory paragraph ending in a clear thesis statement

· Several well-developed body paragraphs (5-7 sentences each) that explore the assignment questions in detail

· A summary and conclusions paragraph

· Three references, at least two of which must not be from the class readings

Be sure to submit your project as one Word document formatted in APA 7th edition style.

READING

https://www.youtube.com/embed/NET2xZ5zRXI?wmode=opaque&rel=0

https://openstax.org/books/biology-2e/pages/36-introduction

https://saylordotorg.github.io/text_introduction-to-psychology/s08-sensing-and-perceiving.html

Introduction

Topics to be covered include:

· The components of sound and how they interact

· The function of the cochlea

· Localization of sound

In this lesson, we will learn more about sound and the auditory structures that sound waves pass through as they are converted to signals the brain can understand. Sound travels as vibrations through the outer and middle ear before it is transduced into electrical signals in the inner ear. We will also look at how we are able to identify where a sound came from, based on how and when it reaches each of our ears.

How We Rely on Sound

A close up of a microphone

For many of us, sight is the first sense we rely on. We see something and go by what we see. Yet we cannot always see something, and what we perceive based on sight is not always accurate. So, which sense do we rely on more than we realize? We can hear in the dark, and while we can be fooled by sounds, we might be a little more cautious with what we hear as opposed to what we see. We use our hearing to listen to and identify different sounds. Some sounds are enjoyable, and others might be a little too loud or unpleasant, like a siren, or a child playing the same note on a recorder for the fiftieth time trying to get it just right.

Now, let’s look at an example that will help us explain sound and auditory perception. We are at a concert for second-grade children playing their recorders, the plastic flute-like instruments elementary children often learn to play notes on. A couple of children seem to be doing better than others and have solo parts. Parents scramble to record their children and happily move to the sounds that fill the auditorium. Of course, some visitors might not find the recorders quite so melodious as they listen to the concert. In each case, pressure changes in the air create the stimulus for hearing, much as light creates the stimulus for vision. These changes in air pressure activate the auditory senses. The information travels through the outer ear to the middle ear, then to the inner ear. The information is processed and sent through brain systems to create a perceptual experience. We have systems that help us determine where the sound comes from, based on how quickly it reaches an ear and which ear it reaches first. In some ways, this information is more reliable than visual information.

Physical and Perceptual Definitions of Sound

A graph representing sound, with time on the x-axis and air pressure on the y-axis

This video shows how sounds are produced and how you hear them:  What is Sound?

 


· The Stimulus

Like vision, sound begins with a distal stimulus. In our example, the distal stimulus would be the sound of the recorder. The vibration of the recorder causes changes in the air that trigger the auditory organs to process this representation of sound and send it to the brain. The sound is physically based on the pressure changes that occur as the sound is emitted from the distal stimulus (Goldstein & Brockmole, 2017). The sound is also perceptually based on our experience: we perceive the recorder sound as wonderful (if you are Mom), or as perhaps a little annoying (if you are anyone other than Mom). So, we have the recorder vibrating with a frequency of 1,000 Hertz (Hz), which is the physical stimulus, and the experience of sound based on your enjoyment of the recorder concert (Goldstein & Brockmole, 2017).

Loudness and Pitch

An audibility graph showing the dB level needed to hear sounds of different frequencies

The frequency of sound is on the horizontal axis; the dB levels at which we can hear each frequency is on the vertical axis.

LOUDNESS

The amplitude of a sound is expressed in dB. Loudness, the perceptual aspect of the sound stimulus, is related to the level (amplitude) of an auditory stimulus: the higher the dB level, the louder we perceive a sound, although this varies with the frequency of the sound. The audibility curve indicates the range of frequencies we can hear. Below the audibility curve we are not able to hear tones, but above the curve we can. This area above the curve is called the auditory response area. The area above the upper range of the audibility curve is the threshold of feeling, where amplitudes are so high that we can feel them, and they would likely cause us pain, but we would not necessarily hear them (Goldstein & Brockmole, 2017). How many of you have heard of a dog whistle? The frequency of a dog whistle is so high that we, as humans, cannot hear it, but dogs can; dogs can hear frequencies above the upper limit of the human audibility curve. As you get older, the range of frequencies you can hear shrinks. You can test your hearing at: Hearing Test.

The video plays tones at the frequencies indicated on the screen. Note when you first begin to hear a tone; that is the lower threshold of your hearing. Toward the end of the video you will probably find that you cannot hear tones above a certain frequency.

The Journey through the Ear


· The anatomy of the ear as described in this section.

THE OUTER EAR

Now that we have seen sound travel from the distal stimulus to the ear, it is time to see what happens once it reaches the ear. We took an abbreviated journey through the ear in Lesson 1, and now we will look at this journey in more detail. The journey begins with the outer ear. The structure of the outer ear that we all see is called the pinna (plural pinnae). From the pinna, sound travels through the auditory canal, the tube-like recess that leads to the eardrum, also called the tympanic membrane. When you find wax in your ear, you find it in the auditory canal. The purpose of the wax and the small size of the canal is to protect the eardrum. The auditory canal also enhances the intensity of some sounds through resonance. Resonance results from the interaction between sound waves reflected back from the closed end of the auditory canal and new sound waves entering the canal (Goldstein & Brockmole, 2017).
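
To get a feel for the resonance boost, the auditory canal can be treated as a simple tube closed at one end (the eardrum). The short sketch below is only illustrative: the roughly 2.5 cm canal length and the speed of sound are assumed typical values, not figures from the readings.

```python
# Sketch: estimate the resonant frequency of the auditory canal, modeled as a
# tube closed at one end (a quarter-wavelength resonator). All values assumed.

SPEED_OF_SOUND = 343.0   # meters per second, in air at about 20 C
CANAL_LENGTH = 0.025     # meters (~2.5 cm), an assumed typical adult value

# A tube closed at one end resonates when it holds a quarter of a wavelength.
resonant_frequency = SPEED_OF_SOUND / (4 * CANAL_LENGTH)

print(f"Estimated canal resonance: {resonant_frequency:.0f} Hz")
# Roughly 3,430 Hz, which is part of why the audibility curve shows the
# greatest sensitivity in roughly the 2,000-5,000 Hz range.
```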

Vibrations and Electrical Signals

The organ of Corti

· FROM SOUND TO ELECTRICAL SIGNALS TO BRAIN

· PLACE THEORY

· FREQUENCY TUNING CURVE

· COCHLEAR AMPLIFIER

As sound vibrations move through the stapes and press against the oval window, the oval window begins a back-and-forth motion that transmits the vibrations to the liquid inside the cochlea, which, in turn, sets the basilar membrane into an up-and-down motion. Remember that the basilar membrane lies below the organ of Corti, so the up-and-down motion causes the organ of Corti to move up and down as well. The organ of Corti in turn causes the tectorial membrane to move back and forth just above the outer hair cells. At this point, the vibrations are transformed into electrical signals, beginning the process of transduction. As the cilia of the hair cells bend in one direction, structures called tip links are stretched, opening tiny ion channels in the cilia membranes. When these channels are open, positive ions flow into the cell and create an electrical signal. When the cilia bend in the opposite direction, the tip links go slack, the ion channels close, and the electrical signal stops. The result is alternating bursts of electrical signal and silence as the tip links stretch and then slacken. When signals are sent, neurotransmitters are released to cross the synapse between the inner hair cells and the auditory nerve fibers, which causes the nerve fibers to fire. If you think about this, you see a pattern: the auditory nerve fibers fire in step with the rising and falling pressure of a pure tone. When auditory nerve fibers fire at the same place, or phase, in the sound stimulus, this is called phase locking (Goldstein & Brockmole, 2017).
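
A tiny simulation can make phase locking concrete. The sketch below is an illustrative model rather than anything from the text: it treats a nerve fiber as firing only at one fixed point in each cycle of a pure tone (the moment pressure rises through zero), so every spike lands at the same phase and the spacing between spikes equals the tone's period.

```python
import math

# Sketch (illustrative, not from the text): a nerve fiber that fires once per
# cycle of a pure tone, at the moment pressure rises through zero. Because it
# always fires at the same phase, its spikes are "phase locked" to the tone.

FREQ = 250.0          # Hz, frequency of the pure tone (assumed)
SAMPLE_RATE = 10_000  # samples per second

spike_times = []
prev_pressure = 0.0
for n in range(SAMPLE_RATE):              # one second of sound
    t = n / SAMPLE_RATE
    pressure = math.sin(2 * math.pi * FREQ * t)
    # Fire when pressure crosses zero on the way up (one fixed phase per cycle).
    if prev_pressure < 0 <= pressure:
        spike_times.append(t)
    prev_pressure = pressure

intervals = [round(b - a, 4) for a, b in zip(spike_times, spike_times[1:])]
print(intervals[:5])   # every interval is ~0.004 s, i.e. one period of 250 Hz
```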

Frequency Theory

Remember that pitch is the quality of sound described as high or low. Pitch is determined by frequency, which, as we have just seen, is tied to the place of maximum vibration along the basilar membrane. So what else is pitch based on? Another theory, frequency theory, proposes that the firing rate of the hair cells matches the frequency of the sound wave across the basilar membrane. If, for example, the frequency of a sound is 300 Hz, the hair cells would fire at 300 pulses per second. So, what do we get if we put place theory and frequency theory together? Research has determined that specific locations on the basilar membrane match specific sound wave frequencies, except for the lower ones; the lower frequencies seem to follow frequency theory and the firing rate of the basilar membrane as a whole. There is a maximum firing rate for nerve cells, so cells take turns firing, which raises the maximum rate the group as a whole can represent. This process is called the volley principle, and between place theory, frequency theory, and the volley principle, we can see how information is processed by the brain to perceive pitch (Griggs, 2016).
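
To see how taking turns raises the effective rate, here is a small illustrative sketch; the per-fiber ceiling of 250 spikes per second and the 1,000 Hz tone are assumed round numbers, not figures from the readings.

```python
# Sketch of the volley principle (illustrative numbers, not from the readings).
# Each fiber can fire at most MAX_RATE times per second, yet a small group,
# taking turns cycle by cycle, together marks every cycle of a higher tone.

TONE_FREQ = 1_000     # Hz: one cycle every millisecond
MAX_RATE = 250        # assumed per-fiber ceiling, spikes per second
NUM_FIBERS = 4        # 4 fibers x 250 spikes/s = 1,000 marks per second

fiber_spikes = {f: [] for f in range(NUM_FIBERS)}
for cycle in range(TONE_FREQ):            # one second's worth of cycles
    fiber = cycle % NUM_FIBERS            # fibers take turns
    fiber_spikes[fiber].append(cycle / TONE_FREQ)

per_fiber_rates = [len(times) for times in fiber_spikes.values()]
print(per_fiber_rates)        # [250, 250, 250, 250]: each fiber stays under its limit
print(sum(per_fiber_rates))   # 1000: the group follows every cycle of the tone
```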

 


· Vibrating Objects Create Sound

You might be wondering where that 1,000 Hz frequency information came from. Let’s go through the process. The sound stimulus occurs as the vibrations of an object, such as the recorder, cause changes in pressure in air, water, or any other medium that can transmit the vibrations. As the vibrating object moves toward the listener, the surrounding air molecules are pushed together; this is called condensation. Condensation causes a slight increase in the density of the molecules near the distal stimulus, and this increased density raises the air pressure in a small area. As the vibrating object moves away from the listener, the air molecules in its path become less dense. This process is called rarefaction, and in the area of decreased density there is a slight decrease in air pressure as well. These changes in pressure are similar to ripples in water. As our recorder player plays notes, the sound spreads outward in ripples. If you look at water that has ripples, you see peaks and valleys; the peaks would be similar to condensation and the valleys to rarefaction. These waves of pressure change move outward, but the air molecules themselves just move back and forth and stay in roughly the same place (Goldstein & Brockmole, 2017). This goes back to the example of ripples in the water: while the ripples go up and down, the water does not actually move forward.
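
One way to picture condensation and rarefaction is to watch the pressure at a single fixed point as a tone passes: pressure above the ambient level corresponds to condensation and pressure below it to rarefaction, while the point itself never travels anywhere. The sketch below is purely illustrative, reusing the 1,000 Hz recorder tone from the example.

```python
import math

# Sketch: pressure at one fixed point in the air as a 1,000 Hz tone passes.
# Values above ambient pressure correspond to condensation, values below it
# to rarefaction; the point itself only oscillates, it does not travel.

FREQ = 1_000          # Hz, the recorder tone from the example
SAMPLE_RATE = 16_000  # samples per second, so 16 samples per cycle

for n in range(16):   # one full cycle of the tone
    t = n / SAMPLE_RATE
    pressure_change = math.sin(2 * math.pi * FREQ * t)  # relative to ambient
    if pressure_change > 1e-9:
        phase = "condensation"
    elif pressure_change < -1e-9:
        phase = "rarefaction"
    else:
        phase = "ambient"
    print(f"t = {t * 1000:.4f} ms   pressure change = {pressure_change:+.2f}   {phase}")
```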


· Components of a Sound Wave

Now we are going to look at the components of sound waves. Let’s start with a pure tone, a simple sound wave that occurs when changes in air pressure oscillate in a pattern called a sine wave. The graph shown here is a sine wave. The high-pitched notes produced by a flute would be likely to produce something close to pure tones. With a pure tone, the pressure of the sound rises and falls in a sine-wave motion. This vibration is measured by its frequency, the number of cycles per second at which the pressure changes repeat, and its amplitude, the size of the pressure change. Frequency is measured in units called Hertz (Hz), where 1 Hz is one cycle per second. Amplitude is half of the distance between the peaks and the valleys of the sound wave, which represents the magnitude of the pressure change, as labeled in the graph. The range of amplitudes in our world can be very large, from barely a whisper to the roar of a jet engine. Because this range is so large, it is expressed in units called decibels (dB), which convert these large ranges into numbers that are easier to manage (Goldstein & Brockmole, 2017).
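
Both of these quantities can be read straight off a sampled waveform. The sketch below is illustrative (the 440 Hz frequency and 0.8 amplitude are assumed): it synthesizes a pure tone, then recovers the frequency by counting upward zero crossings over one second and the amplitude as half the distance between the highest peak and the lowest valley.

```python
import math

# Sketch: recover frequency and amplitude from one second of a sampled pure
# tone. The tone parameters are assumed for illustration.

FREQ = 440.0          # Hz (cycles per second)
AMPLITUDE = 0.8       # size of the pressure change (arbitrary units)
SAMPLE_RATE = 44_100  # samples per second

samples = [AMPLITUDE * math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE)
           for n in range(SAMPLE_RATE)]          # exactly one second

# Frequency: each cycle has one upward zero crossing, so counting them over
# one second gives cycles per second (Hz).
upward_crossings = sum(1 for a, b in zip(samples, samples[1:]) if a < 0 <= b)

# Amplitude: half the distance between the highest peak and the lowest valley.
measured_amplitude = (max(samples) - min(samples)) / 2

print(upward_crossings)               # close to 440, one crossing per cycle
print(round(measured_amplitude, 3))   # about 0.8
```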

PITCH

As we talk about pitch, we will go back to our recorder concert. Pitch is the quality of sound that is perceived as either “high” or “low” (Goldstein & Brockmole, 2017). For pure tones, the higher the frequency, the higher the pitch. So how does this relate to the recorder concert? Most sounds that we hear are made up of a combination of frequencies: a fundamental frequency plus harmonics that are multiples of it. Suppose you hear a sound containing the second and third harmonics but not the fundamental frequency; your brain still perceives the pitch of the fundamental. This is called the effect of the missing fundamental (Goldstein & Brockmole, 2017). To illustrate this, think about our recorder solo during the concert. Let’s say the recorder holds a long high tone, and then two more recorders are added playing lower notes. The noise of these two new recorders reduces your ability to pick out the higher harmonics of the initial solo recorder, but that recorder’s perceived pitch remains the same.
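
The missing fundamental can be demonstrated numerically. In the sketch below (the frequencies are illustrative, not from the lesson), we build a complex tone from only the second and third harmonics of a 200 Hz fundamental; the combined waveform still repeats every 1/200 of a second, which is the periodicity the brain hears as a 200 Hz pitch.

```python
import math

# Sketch: a complex tone made of only the 2nd and 3rd harmonics of a 200 Hz
# fundamental (400 Hz + 600 Hz). Even with no energy at 200 Hz itself, the
# combined waveform repeats every 1/200 s, matching the perceived pitch.

FUNDAMENTAL = 200.0    # Hz (assumed for illustration)
SAMPLE_RATE = 48_000   # samples per second

def complex_tone(t):
    # Second and third harmonics only; the fundamental itself is absent.
    return (math.sin(2 * math.pi * 2 * FUNDAMENTAL * t) +
            math.sin(2 * math.pi * 3 * FUNDAMENTAL * t))

period = 1.0 / FUNDAMENTAL                       # 0.005 s
samples_per_period = int(SAMPLE_RATE * period)   # 240 samples

# The waveform one fundamental period later is (numerically) identical.
mismatch = max(abs(complex_tone(n / SAMPLE_RATE) -
                   complex_tone(n / SAMPLE_RATE + period))
               for n in range(samples_per_period))
print(mismatch)   # effectively zero: the tone repeats at the 200 Hz period
```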

· Mathematical Description

The various components of sound waves involve some math. Looking back, remember that 1 Hz is one cycle per second, so 100 Hz would be 100 cycles per second. The range of human hearing, or what we are able to perceive as sound, is between 20 and 20,000 Hz (Goldstein & Brockmole, 2017).

That seems pretty straightforward, right? Now, let’s move on to dB. With dB conversion we use a logarithmic equation:

dB = 20 log₁₀(p / p₀)

This conversion makes the large range of pressures easier to manage. In this equation, p is the measured pressure of a sound wave and p₀ is the reference pressure, the lowest pressure at which the average human can hear a tone with a frequency of 1,000 Hz, normally set at 20 micropascals (µPa).

For example, with a sound pressure (p) of 2,000 micropascals:

dB = 20 log₁₀(p / p₀) = 20 log₁₀(2,000 / 20) = 20 log₁₀(100) = 20 × 2 = 40 dB
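
The same conversion is easy to check in code. This is a small sketch of the arithmetic shown above, using the 20 µPa reference pressure given in the text; the extra example values simply show the pattern that every tenfold increase in pressure adds 20 dB.

```python
import math

# Sketch: the decibel conversion from the worked example above.
# p is the measured sound pressure and p0 is the 20-micropascal reference.

REFERENCE_PRESSURE = 20.0        # micropascals (p0)

def sound_pressure_level_db(pressure_micropascals: float) -> float:
    """Convert a sound pressure (in micropascals) to decibels."""
    return 20.0 * math.log10(pressure_micropascals / REFERENCE_PRESSURE)

print(sound_pressure_level_db(2_000))    # 40.0 dB, matching the worked example
print(sound_pressure_level_db(20))       # 0.0 dB, the reference pressure itself
print(sound_pressure_level_db(200_000))  # 80.0 dB: each tenfold rise adds 20 dB
```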

TIMBRE

While missing harmonics do not affect a tone’s pitch, removing them does change the tone’s timbre. Timbre is the quality that allows us to distinguish between two tones that have the same loudness, pitch, and duration but are different nonetheless (Goldstein & Brockmole, 2017). A recorder can play at a loudness similar to that of a flute. If someone played a recorder with the same loudness, pitch, and duration as someone playing a flute at the same time, you would still be able to tell the two apart. However, if the concert was recorded and the beginnings and endings of the tones were removed from the recording, it might be difficult to tell that both a flute and a recorder are playing. The beginning of a tone, as it builds up, is called the tone’s attack, while the end of the tone, as the sound diminishes, is called the tone’s decay (Goldstein & Brockmole, 2017).
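
Attack and decay are simply the envelope imposed on a tone's amplitude over time. The sketch below uses entirely illustrative values to shape a short tone with a quick attack and a slower decay; trimming these portions from a recording is what makes the instruments harder to tell apart, as described above.

```python
import math

# Sketch: a short tone with an amplitude envelope, a quick attack (build-up)
# followed by a slower decay (fade-out). All numbers are illustrative.

FREQ = 440.0           # Hz
SAMPLE_RATE = 8_000    # samples per second
ATTACK_TIME = 0.05     # seconds to reach full loudness
DECAY_TIME = 0.30      # seconds to fade back toward silence
DURATION = ATTACK_TIME + DECAY_TIME

def envelope(t):
    if t < ATTACK_TIME:                                      # attack: ramp up
        return t / ATTACK_TIME
    return max(0.0, 1.0 - (t - ATTACK_TIME) / DECAY_TIME)    # decay: ramp down

samples = [envelope(n / SAMPLE_RATE) * math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE)
           for n in range(int(DURATION * SAMPLE_RATE))]

# Peak amplitude of each 50 ms chunk traces the rise-then-fall shape of the tone.
chunk = int(0.05 * SAMPLE_RATE)
print([round(max(abs(s) for s in samples[i:i + chunk]), 2)
       for i in range(0, len(samples), chunk)])
# Peaks climb quickly to about 1.0, then fall steadily toward 0.
```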

Up to now, we have been discussing pure tones and harmonics created by instruments, both of which have repetitive patterns of pressure changes. These are called  periodic sounds . There are also sounds that do not have repetitive patterns of pressure changes, called  aperiodic sounds . Aperiodic sounds are complex sounds that would occur when dropping a book or hearing static on the radio. If one of our concert performers dropped their recorder, the sound the recorder made when it dropped would be an aperiodic sound (Goldstein & Brockmole, 2017).


 

From the Cochlea to the Brain

The auditory pathway

Now that we have seen what happens in the cochlea, let’s move out of the cochlea and continue toward the brain. The auditory nerve carries the signal away from the cochlea toward a sequence of subcortical structures. The first structure is the cochlear nucleus, followed by the superior olivary nucleus in the brain stem. The signal then moves to the inferior colliculus in the midbrain, and then on to the medial geniculate nucleus in the thalamus. The signal continues from the thalamus to the primary auditory cortex in the temporal lobe. While the exact location in the brain responsible for the response to pitch has not been pinned down, the most responsive area seems to be the anterior auditory cortex, an area toward the front of the auditory cortex (Goldstein & Brockmole, 2017).

Hearing Loss

A graph showing the hearing loss of workers in a noisy weaving factory

This graph demonstrates the hearing damage for workers in a noisy weaving factory. dBA refers to decibels measured with an A-weighting, a scale adjusted to approximate the sensitivity of human hearing.

So far, we have looked at the process of normal hearing. What if someone experiences a loss of hearing? How does that happen? Most hearing loss is associated with damage to the outer hair cells and to auditory nerve fibers. Damage to outer hair cells reduces the sensitivity of the basilar membrane, making it harder for someone to separate sounds, such as picking out a door closing during a concert. Inner hair cell damage can also result in a loss of sensitivity.

One form of hearing loss is presbycusis, which is caused by damage to hair cells from extended exposure to loud noise, ingestion of substances that can damage hair cells, and age-related degeneration. With presbycusis, the loss of sensitivity is more pronounced at higher frequencies, and the condition tends to be more prevalent in males than in females. Noise-induced hearing loss is another form of hearing degeneration resulting from loud noises; in this case, the damage often involves the organ of Corti. It is also possible to have hearing loss that is not revealed by standard hearing test results, called hidden hearing loss. Standard hearing tests often measure hair cell function, which might not reveal problems with processing complex sounds (Goldstein & Brockmole, 2017).

Perception of Sound

We have covered the perception of sound based on pitch, frequency, and amplitude, so now what about how we perceive where a sound comes from? Imagine you are at the concert and you hear a baby crying in the audience. You turn your head to the left and see the parent quickly ferrying the child out of the auditorium. You knew where to look based on auditory localization. Now, let’s say you are in the school’s waiting room, waiting with other parents for your child’s name to be called so you can pick them up. It is a small room with quite a few parents, and when the teacher calls your name, you are able to hear it the first time, even though it travels two different paths: directly from the teacher’s mouth to your ears, and indirectly by bouncing off the walls of the small room. The fact that your auditory perception relies mainly on the direct path is called the precedence effect. Think about this small, noisy waiting room again. Many parents are talking to each other. You are speaking with two parents and are able to hear what they are saying even though others are talking all around you. Your ability to segregate your conversation from the other conversations in the area is called auditory stream segregation (Goldstein & Brockmole, 2017).

Localization of Sound

Let’s think back to our first scenario, where we heard the baby crying while the recorder band is playing. You hear sounds from two different directions, which creates an auditory space. When you locate the sound of the baby within that auditory space, it is called auditory localization. If you think about the baby’s cry and the sound of the recorders, you will see that they are different and would stimulate different hair cells and nerve fibers in the cochlea. The auditory system therefore uses location cues created by the way sound interacts with your head and ears. The two kinds of location cues are binaural cues, which depend on information from both ears, and monaural cues, which depend on information from just one ear. Research indicates three dimensions are involved in locating sound: the azimuth, extending from left to right; the elevation, extending up and down; and the distance the sound travels from its source to the listener.

Binaural cues use differences in the sound reaching the two ears to determine horizontal position (azimuth), but they do not help with vertical information (elevation). There are two types of binaural cues: the interaural level difference, which is based on the difference in sound level reaching the two ears, and the interaural time difference, which is based on the difference between the time it takes a sound to reach the left ear and the time it takes to reach the right ear. Both time and level differences can be the same at different elevations, which means they do not account for the elevation of a sound, creating a region of ambiguity called the cone of confusion. Monaural cues, by contrast, can locate sounds at different elevations using spectral cues (Goldstein & Brockmole, 2017).
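
A rough worked example of the interaural time difference: for a sound coming from directly off to one side, the extra distance to the far ear is on the order of the width of the head. The head width below and the straight-line approximation are assumptions for illustration, not values from the text.

```python
# Sketch: rough interaural time difference (ITD) for a sound directly to one
# side of the head. The head width is an assumed typical value, and the extra
# path is approximated as a straight line rather than a path around the head.

SPEED_OF_SOUND = 343.0   # meters per second in air
HEAD_WIDTH = 0.22        # meters between the ears (assumed)

# A sound from straight ahead reaches both ears together: ITD = 0.
# A sound from directly to the right travels roughly one head-width farther
# to reach the left ear.
itd_side = HEAD_WIDTH / SPEED_OF_SOUND

print("ITD for a sound straight ahead: 0 microseconds")
print(f"ITD for a sound at the far right: about {itd_side * 1e6:.0f} microseconds")
# About 640 microseconds: a tiny difference, yet large enough for the
# auditory system to use as a location cue.
```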

NEURAL SIGNALS

Now that we have identified different cues, think about how they might be processed by neural circuits. One theory, the Jeffress model, proposes that the neurons that carry signals from the ears are wired so that each neuron receives signals from both ears. The signals move inward and ultimately meet, as the neurons carrying the sound from the right ear meet the neurons carrying the sound from the left ear. The neurons where they meet are called coincidence detectors because they fire only when signals from both ears arrive at the same time. When the signals meet at the same time at the middle of the array, the responding neuron indicates that the interaural time difference is zero. If the sound comes from one side, the ear on that side begins sending signals before the other ear, so the two signals coincide at a detector shifted away from the center, signaling the direction of the sound (Goldstein & Brockmole, 2017).
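
A toy version of this circuit can be written as an array of coincidence detectors, each adding a different internal delay to the left-ear and right-ear signals; the detector whose delays cancel the interaural time difference is the one where the two signals arrive together. Everything below, including the five-detector array and the 0.1 ms ticks, is an illustrative assumption rather than the textbook's circuit.

```python
# Sketch of the Jeffress model: an array of coincidence detectors, each adding
# a different internal delay to the left- and right-ear signals. The detector
# whose delays cancel the interaural time difference (ITD) sees the two spikes
# arrive together. All numbers are illustrative (times are in 0.1 ms ticks).

DELAY_STEPS = [0, 1, 2, 3, 4]   # internal delay added to the left-ear signal
MAX_DELAY = max(DELAY_STEPS)

def best_detector(left_arrival, right_arrival):
    """Return the detector whose internal delays make both signals coincide."""
    best, smallest_gap = None, None
    for d in DELAY_STEPS:
        # Detector d delays the left signal by d ticks and the right signal
        # by (MAX_DELAY - d) ticks before checking for coincidence.
        gap = abs((left_arrival + d) - (right_arrival + (MAX_DELAY - d)))
        if smallest_gap is None or gap < smallest_gap:
            best, smallest_gap = d, gap
    return best

# Sound from straight ahead: spikes leave both ears at the same time.
print(best_detector(left_arrival=0, right_arrival=0))   # 2, the middle detector

# Sound from the right: the left ear hears it 2 ticks later,
# so the later left spike needs less added delay to coincide.
print(best_detector(left_arrival=2, right_arrival=0))   # 1

# Sound from the left: the right ear hears it 2 ticks later.
print(best_detector(left_arrival=0, right_arrival=2))   # 3
```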

Auditory Areas of the Brain

Areas of the brain that have been implicated in sound localization include the back of the cortex, or posterior belt area, and an area toward the front of the cortex, the anterior belt area. There seems to be a “what” auditory pathway that extends from the anterior belt to the frontal cortex, and a “where” auditory pathway that extends from the posterior belt to the frontal cortex. The “what” pathway works on determining what a sound is, and the “where” pathway determines where the sound is coming from (Goldstein & Brockmole, 2017).

BACK TO THE WAITING ROOM

We are going to return to the recorder concert. If the concert had been outside, sound would have traveled directly from the recorders to your ears; this is direct sound. This concert was inside an auditorium, so sound reached the parents’ ears through the direct path and also by bouncing off the various surfaces of the auditorium, which is indirect sound. As parents talk to each other in separate groups, they add to the general array of sound sources in the environment, which is called the auditory scene. You are able to separate out and listen to your conversation with another parent even though numerous conversations are going on around you. This ability to separate the sound from each source is called auditory scene analysis.

Imagine that you hear your name in a female voice while you are talking to a parent, and at the same moment you see someone else open her mouth and look your way, so you believe the sound of your name came from that person (even though another parent actually said it). You did this based on the ventriloquist effect, which occurs when sounds come from one place but appear to come from another. In this case, you relied more on your vision than your hearing, and you were wrong. On the other side of this, people can use echolocation to detect the positions and shapes of objects without sight. People who cannot see often learn this technique of making a clicking sound and listening for the echoes to determine locations and shapes (Goldstein & Brockmole, 2017). These examples show how important hearing is as a source of sensory information.

Conclusion

A simple concert shows us how much we use our hearing in our daily lives. Sound is processed as vibrations that are transported through the outer ear to the middle and then inner ear systems. Systems in the inner ear are responsible for transforming the vibrations into electrical signals that the brain can understand as audio messages. We also have mechanisms that help us determine where a sound is coming from based on which ear the sound arrives at first. Of course, sometimes we can be mistaken. This can happen when our eyes register one thing while our ears register a sound, causing us to make an assumption about where the sound comes from. Sound is important, and our ears can provide information when our eyes cannot, or when our eyes are mistaken.

Sources

Goldstein, E. B. & Brockmole, J. R. (2017). Sensation and perception (10th ed.). Boston, MA: Cengage.

Griggs, R. A. (2016). Psychology: A concise introduction (5th ed.). New York, NY: Worth Publishers.

Image Citations

“A close up of a microphone ” by https://pixabay.com/en/microphone-shure-singing-music-2498641/.

“A graph representing sound, with time on the x-axis and air pressure on the y-axis” by  http://oceanexplorer.noaa.gov/explorations/sound01/background/acoustics/media/sinewave_261.jpg .

“An audibility graph showing the dB level needed to hear sounds of different frequencies” by https://upload.wikimedia.org/wikipedia/commons/b/bc/Audible.JPG.

“The anatomy of the ear as described in this section.” by 13699578_ML.

“The middle ear anatomy” by 13699578_ML.

“The anatomy of the cochlea ” by 46938501.

“The organ of Corti” by 73652691.

“The auditory pathway” by 15313015.

“A graph showing the hearing loss of workers in a noisy weaving factory” by  https://commons.wikimedia.org/w/index.php?search=threshold+of+hearing&title=Special:Search&profile=default&fulltext=1&searchToken=975xk
