Graduation Year

2023

Document Type

Dissertation

Degree

Ph.D.

Degree Name

Doctor of Philosophy (Ph.D.)

Degree Granting Department

Communication Sciences and Disorders

Major Professor

David A. Eddins, Ph.D.

Committee Member

Erol J. Ozmeral, Ph.D.

Committee Member

Nathan C. Higgins, Ph.D.

Committee Member

Michelle Arnold, Au.D., Ph.D., CCC-A

Committee Member

Robert D. Frisina, Ph.D.

Keywords

audiovisual perception, hearing aids, multi-talker conversation, natural group conversation, speech localization

Abstract

In a complex environment with only a single-source speech target, modern hearing aids can improve the signal-to-noise ratio (SNR) using directional microphones and, consequently, improve speech intelligibility. However, in many realistic listening situations with multiple sound sources, especially a turn-taking conversation, improving the SNR at one location may come at the cost of another, equally important location. Critically, the hearing aid user is also unlikely to maintain the static head orientation that the hearing aids use as a reference in these complex environments. For this reason, modern premium hearing aids are now equipped with motion sensors (e.g., accelerometers) to monitor head movements and inform head-related processing decisions (i.e., scene classification) in the devices. To take full advantage of these new sensors and allow hearing aid users to hear better whether they are moving or staying still, we must first understand typical head orienting behaviors in communication settings and, importantly, how hearing aids may interact with head movements in these scenarios. The long-term goal is to aid the development of systems capable of adapting to the listener's environment on an individualized basis. The overall objectives of this dissertation are to understand the head movements of individual listeners in their natural listening environments and to understand where, when, and how their devices contribute to their overall hearing success. The central hypothesis is that in a complex environment with multiple sound sources, individuals exhibit specific, stereotypic head orienting behaviors, and that hearing aid signal processing can have profound effects on head movement when listening to speech in noise. The central hypothesis was tested in three experiments. Experiment 1 simultaneously measured head orienting behaviors during speech localization and speech detection.
Experiment 2 examined head movement strategies when aided listeners followed a simulated conversation among multiple talkers. Experiment 3 assessed head orienting behavior when listeners were engaged in a live multi-talker conversation. Listeners wore a head mount with head-tracking markers along with hearing aids and performed speech localization and speech detection tasks across auditory-visual environments. Behavioral performance was assessed in terms of detection accuracy, localization ability, and head orienting behaviors, and the effect of hearing aids on these behaviors was also examined. The research in this dissertation is innovative because it benchmarks the head orienting behaviors of normal-hearing and hearing-impaired listeners during speech localization in order to understand individual listeners in their natural listening environments. This research therefore has broad application to the field.
