
Effects of Frequency-Composition Algorithm and Phoneme-Based Auditory Training in Older Hearing Aid Users

Article information

J Audiol Otol. 2024;28(4):300-308
Publication date (electronic): 2024 October 10
doi: https://doi.org/10.7874/jao.2024.00122
1Department of Audiology and Speech Language Pathology, Hallym University of Graduate Studies, Seoul, Korea
2Irene Hearing Aids Center, Seoul, Korea
3HUGS Center for Hearing and Speech Research, Hallym University of Graduate Studies, Seoul, Korea
Address for correspondence Jae Hee Lee, PhD Department of Audiology and Speech Language Pathology, Hallym University of Graduate Studies, 427 Yeoksam-ro, Gangnam-gu, Seoul 06197, Korea Tel +82-2-2051-4952 Fax +82-2-3451-6618 E-mail leejaehee@hallym.ac.kr
Received 2024 February 8; Revised 2024 April 18; Accepted 2024 May 10.

Abstract

Background and Objectives

Frequency-lowering (FL) algorithms improve the audibility of high-frequency sounds by shifting inaudible high-frequency components to audible low-frequency regions. However, studies of FL algorithms have yielded mixed findings. This study comprised two experiments. The first experiment investigated whether objective and subjective auditory outcomes would be enhanced by activating frequency composition (Fcomp), a type of FL technique. The second experiment examined whether auditory training with Fcomp activated would provide perceptual benefits to older hearing aid users.

Subjects and Methods

Twelve older hearing aid users with moderate to profound high-frequency sensorineural hearing loss participated in this study. In Experiment I, all participants completed a 4-week adaptation period to Fcomp before testing, after which the influence of Fcomp was evaluated. In Experiment II, five of the 12 participants from Experiment I received 8 weeks of phoneme-based auditory training with Fcomp activated, whereas the remaining seven continued to use Fcomp but did not receive training, serving as non-trained controls.

Results

In Experiment I, the 4-week passive acclimatization period to Fcomp did not improve speech-in-quiet recognition or self-perceived sound quality. In Experiment II, active phoneme-based training improved consonant and word recognition as well as speech-quality ratings for the trained participants. Continued use of Fcomp did not produce any changes for the non-trained participants.

Conclusions

Overall, phoneme-based auditory training may allow older hearing aid users to relearn frequency-lowered speech sounds and reduce phonetic confusion. However, the analytical training approach did not transfer to sentence recognition or overall satisfaction with the hearing aids.

Introduction

Frequency lowering (FL) is one type of signal-processing technique available in commercial hearing aids (HAs). Although HA manufacturers use different FL algorithms, all FL techniques aim to improve the audibility of high-frequency sounds by shifting inaudible high-frequency components to audible low-frequency regions. This technology can be beneficial, especially for people with severe to profound sloping hearing loss, given the limited gain and bandwidth available at high frequencies and problems with distortion and acoustic feedback.

The advantages of FL algorithms have been evaluated by comparing FL-activated and FL-deactivated outcomes in adults and children across several decades [1-12]. Non-linear frequency compression (NLFC) is one of the commercially available FL techniques. NLFC compresses inaudible high-frequency cues into lower frequencies based on individualized cut-off frequencies and compression ratios. Several studies found that NLFC enhanced consonant or word recognition [1-4], sentence-in-noise recognition [5,6], and perceived sound quality [7], whereas others did not consistently find such benefits, depending on the stimuli, tasks, and age groups [8-11]. Frequency composition (Fcomp) is another commercially available FL technique. The Fcomp algorithm is considered a combination of frequency transposition and frequency compression: it preserves high-frequency information in the source regions by superimposing the adjacent high-frequency components onto the low-frequency target or destination regions [12,13]. From an acoustical perspective, it is reasonable to expect perceptual benefits and superior sound quality over conventional amplification. However, the effects of Fcomp activation have been inconsistent. Salorio-Corbetto, et al. [14] demonstrated that Fcomp activation improved word-final detection of /s/ and /z/ but not consonant-in-quiet recognition. Kirby, et al. [15] found no advantage of Fcomp for speech recognition or listening effort, although some listeners preferred Fcomp activation over conventional amplification.
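To make the superposition idea concrete, the sketch below illustrates, under simplified assumptions, how energy from a high-frequency source band can be added onto a lower destination band in the spectral domain. This is a toy model, not the proprietary Fcomp implementation; the mixing gain, the mapping of source bins onto destination bins, and the test signal are arbitrary choices for illustration (the band edges are borrowed from the destination ranges quoted later in the Methods).

```python
# Toy illustration of frequency lowering by spectral superposition.
# NOT the commercial Fcomp algorithm; gain and bin mapping are assumptions.
import numpy as np

def lower_frequencies(x, fs, src_band=(4600.0, 6500.0),
                      dst_band=(2900.0, 4600.0), gain=0.5):
    """Superimpose the source band onto the destination band (toy model)."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)

    src = (freqs >= src_band[0]) & (freqs < src_band[1])
    dst = (freqs >= dst_band[0]) & (freqs < dst_band[1])

    # Map the source bins onto the destination bins by simple index
    # resampling, then add them (superimpose) rather than replace.
    src_bins = spectrum[src]
    idx = np.linspace(0, len(src_bins) - 1, dst.sum()).round().astype(int)
    spectrum[dst] += gain * src_bins[idx]

    return np.fft.irfft(spectrum, n=len(x))

# Example: a 1 kHz component (audible) plus a 5.5 kHz component
# (assumed inaudible before lowering), sampled at 16 kHz for 1 second.
fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 1000 * t) + 0.5 * np.sin(2 * np.pi * 5500 * t)
y = lower_frequencies(x, fs)
```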

One possible source of the considerable variability in FL benefits may be individual differences in adaptation to the inherent spectral distortion introduced by FL processing [16]. Regardless of the type of FL algorithm implemented, shifting high-frequency energy can introduce undesired spectral distortion by altering formant or harmonic cues [17]. Active auditory training may facilitate quicker relearning of novel frequency-lowered sounds than passive acclimatization. Previous studies [18,19] demonstrated that active auditory training with FL activated can facilitate adaptation to frequency-lowered sounds. Dickinson, et al. [18] simulated an NLFC algorithm at the maximum compression setting and examined the effects of 4-day active auditory training in normal-hearing adults. The authors found significant training benefits based on a group comparison, although the benefits depended strongly on the type of speech used in training. Although that study [18] did not include actual HA users, its results emphasized the efficacy of active auditory training for recognizing FL-processed speech. Ahmed, et al. [19] conducted 4-day auditory training at the syllable, word, and sentence levels targeting the recognition of voiceless consonants in adult users of FL algorithms. The results revealed a slight but significant improvement in consonant recognition, indicating that intensive auditory training is likely to lead to better perceptual outcomes.

Given this background, the present study was designed to address the following research questions: 1) Would Fcomp activation influence speech recognition and perceived sound quality in older HA users? 2) Would 8-week auditory training combined with Fcomp activation produce perceptual benefits in objective and subjective outcomes? We chose listeners older than 60 years because this population commonly has a sloping configuration of sensorineural hearing loss. We conducted two experiments to examine the effects of Fcomp activation and 8-week auditory training in older adults. Before Experiment I, all participants had a 4-week adaptation period to Fcomp. Experiment I compared speech recognition and perceived sound-quality outcomes with and without Fcomp activation. Experiment II examined the efficacy of 8-week phoneme-based auditory training with Fcomp activation. We hypothesized that intensive auditory training would enhance adaptation to FL-processed sounds, yielding perceptual improvements.

Subjects and Methods

Experiment I

Participants

Twelve listeners aged 63 to 86 years (mean age=76.50 years, standard deviation [SD]=6.36) participated in this study. Each listener had bilateral, symmetrical, moderate-to-profound high-frequency sensorineural hearing loss based on the hearing impairment grading system proposed by the World Health Organization [20]. Unaided pure-tone air-conduction and bone-conduction thresholds were obtained in both ears using the modified Hughson-Westlake method with a calibrated clinical audiometer (Model AD 629, Interacoustics, Middlefart, Denmark), TDH-39 supra-aural earphones (Telephonics, New York, NY, USA), and a Radioear B-71 bone-conduction transducer (Radioear Corp., New Eagle, PA, USA). The unaided mean pure-tone averages (PTA) across 0.5, 1, 2, and 4 kHz were 62.61 dB HL (SD=10.69) and 63.75 dB HL (SD=11.52), and the mean high-frequency pure-tone averages (HFPTA) across 2, 4, and 8 kHz were 83.47 dB HL (SD=11.56) and 85.14 dB HL (SD=11.38) for the right and left ears, respectively. The mean difference in hearing thresholds between 1 kHz and 8 kHz was 40.00 dB (SD=16.92) for the right ear and 42.92 dB (SD=15.73) for the left ear. Fig. 1 shows the mean and individual unaided hearing thresholds for both ears.
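For clarity, the sketch below shows how the four-frequency PTA and the HFPTA reported here are computed from an audiogram; the threshold values in the example are invented for illustration and are not taken from any participant.

```python
def pure_tone_average(thresholds_db_hl, freqs_khz):
    """Average the thresholds (dB HL) at the requested audiometric frequencies."""
    return sum(thresholds_db_hl[f] for f in freqs_khz) / len(freqs_khz)

# Hypothetical right-ear audiogram: frequency (kHz) -> threshold (dB HL).
right_ear = {0.25: 40, 0.5: 45, 1: 55, 2: 70, 4: 85, 8: 95}

pta = pure_tone_average(right_ear, (0.5, 1, 2, 4))   # PTA across 0.5-4 kHz
hfpta = pure_tone_average(right_ear, (2, 4, 8))      # HFPTA across 2-8 kHz
print(f"PTA = {pta:.2f} dB HL, HFPTA = {hfpta:.2f} dB HL")
```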

Fig. 1.

Group mean (thick line) and individual (thin lines) unaided hearing thresholds for right (A) and left (B) ears (error bars depict the standard deviation).

All participants were native Korean speakers and passed a screening for cognitive impairment using the Korean Mini-Mental State Examination [21]. Of the 12 participants, six had at least two years of prior experience with conventional amplification, while the others had no experience with any hearing devices. The consent procedure and research protocol were approved by the Institutional Review Board of Hallym University of Graduate Studies (IRB No. HUGSAUD689521).

Hearing aid fitting

All participants were unilaterally fitted with a Saphira 3 NANO receiver-in-the-ear (RITE) HA (Bernafon, Smørum, Denmark; 16 channels, 132 dB sound pressure level [SPL] maximum output, 73 dB maximum gain). None of the participants had used this HA before. The HA was coupled to each participant's Comply tip. For the conventional HA fitting (without Fcomp), the insertion gains were prescribed to match NAL-NL2 targets and verified using real-ear measurement (REM) with the Affinity 2.0 system (Interacoustics, Middlefart, Denmark) and the International Speech Test Signal [22] at input levels of 55, 65, and 80 dB SPL.

For the Fcomp setting of the Saphira 3 NANO HAs, three predetermined ranges (1.5–2.9 kHz, 2.9–4.6 kHz, and 4.6–6.5 kHz) are available for the destination region, and three adjustment modes (weak 6:1, medium 3:1, and strong 3:2) can be selected to control the relative level of the transposed (shifted) high-frequency signal. We determined the Fcomp fitting parameters based on each participant's hearing thresholds and individual sound-quality preferences (Table 1). With the selected Fcomp setting, we checked whether each participant could detect five high-frequency consonants (/s, k, f, t, tʃ/) presented via live voice. The aided mean PTA across 0.5, 1, 2, and 4 kHz was 47.92 dB HL (SD=6.75) without and 41.77 dB HL (SD=6.49) with Fcomp activation (Table 1). The aided mean HFPTA across 2, 4, and 8 kHz was 66.53 dB HL (SD=9.60) without and 55.28 dB HL (SD=8.87) with Fcomp activation. All other HA features (i.e., directional microphone, noise reduction, and feedback suppression) were deactivated, and HA gains were unchanged throughout all test sessions.

Outcome measures

Experiment I measured the recognition of consonants, words, and sentences and the perceived quality ratings of speech and music in a sound-attenuated booth. We provided a practice session for each test before the experiment and did not provide feedback during testing. The order of the listening conditions (Fcomp-off, Fcomp-on) and test materials (consonants, words, and sentences) was counterbalanced across participants.

Consonant recognition in quiet

Consonant-in-quiet recognition was assessed using the Korean Consonant Perception Test (K-CPT) [23]. The K-CPT is a closed-set four-alternative forced-choice (4AFC) test in which the four alternatives differ only in the first or final consonant. The K-CPT consists of 300 sets of four meaningful monosyllabic words in consonant-vowel-consonant, consonant-vowel, and vowel-consonant forms. A previous study [24] showed that the K-CPT can provide frequency-specific information on the discrimination of speech cues and is clinically applicable for analyzing consonant recognition in listeners with hearing impairment. During the test, all words, spoken by a female speaker, were level-normalized and presented at 70 dB SPL through a loudspeaker located 1 m directly in front of the listener.

Consonant recognition was assessed using a paper-and-pencil task in the 4AFC format, in which each listener was asked to select the word they heard among four response choices on each trial. Because the 4AFC method forces a response on every trial, the percent-correct consonant recognition score was corrected for the chance level (25%) as follows: corrected score = ([percent-correct raw score − chance level]/[100 − chance level])×100. With this correction, performance at the chance level corresponds to a corrected score of 0%.
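As a minimal illustration of this correction, the sketch below applies the formula with a 25% chance level; the raw scores used here are arbitrary examples, not study data.

```python
def chance_corrected(raw_percent, chance=25.0):
    """Rescale a raw percent-correct score so that chance maps to 0%."""
    return (raw_percent - chance) / (100.0 - chance) * 100.0

print(chance_corrected(25.0))   # 0.0   -> responding at chance
print(chance_corrected(70.0))   # 60.0  -> corrected score
print(chance_corrected(100.0))  # 100.0 -> perfect performance unchanged
```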

Word and sentence recognition in quiet

For the word and sentence recognition tests, target words and sentences spoken by a female talker [25] were presented at 70 dB SPL from a loudspeaker located 1 m in front of the listener. Participants were required to repeat each word or sentence they heard and were allowed to guess if unsure. Sentence recognition was scored based on the number of correctly identified keywords in each sentence.

Perceived sound-quality rating

Because subjective sound quality is important when choosing or accepting HAs [26], we measured subjective quality ratings of speech and music. For the speech-quality rating, each participant listened to speech passages consisting of 10 different female-spoken sentences [25] at the individual's most comfortable level. As in a previous study [26], participants listened to the first five speech samples without rating. While listening to the remaining five speech samples, they rated the perceived speech quality in four dimensions (overall impression, pleasantness, intelligibility, and loudness) on a 10-point response scale (1–10). The highest rating represented the best overall impression, greatest pleasantness, highest intelligibility, and loudest sound.

For the music-quality rating, participants listened to two music samples used in the previous study [26]. The first was a 1-minute instrumental piece by Mozart (Serenade No. 13 in G Major, Menuetto: Allegro), and the second was a 1-minute vocal piece by Virginia Rodriguez titled "Adeus Batucada." Because the vocal piece was sung in Portuguese, it reduced the influence of music familiarity and lyric intelligibility on the quality judgments. As with the speech-quality rating, participants listened to the first 30 seconds of each sample without rating. While listening to the remaining 30 seconds, listeners rated the perceived music quality in five dimensions (overall impression, pleasantness, sharpness, fullness, and loudness) on a 10-point response scale. The highest rating represented the best overall impression, greatest pleasantness, sharpness, and fullness, and the loudest sound.

Statistical analyses

Statistical analyses were performed using SPSS Statistics for Windows Version 25 (IBM Corp., Armonk, NY, USA). In Experiment I, nonparametric Wilcoxon signed-rank tests were used to analyze the paired outcomes (Fcomp-on vs. Fcomp-off). A significance level of 5% was used in all cases, and all tests were two-tailed unless specified otherwise.
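For readers who prefer an open-source route, the sketch below runs an equivalent paired comparison with SciPy's Wilcoxon signed-rank test. The scores are hypothetical rather than study data, and SciPy is an assumption here, standing in for the SPSS procedure the authors actually used.

```python
from scipy.stats import wilcoxon

# Hypothetical percent-correct scores for 12 listeners (not study data).
fcomp_off = [52, 60, 48, 55, 63, 50, 58, 61, 47, 56, 59, 54]
fcomp_on  = [55, 58, 51, 59, 61, 54, 56, 66, 45, 60, 63, 57]

# Paired, two-tailed Wilcoxon signed-rank test (alpha = 0.05).
stat, p = wilcoxon(fcomp_off, fcomp_on)
print(f"W = {stat:.1f}, p = {p:.3f}")
```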

Experiment II

Experiment II investigated whether an active 8-week auditory training program with Fcomp activated would improve speech-in-quiet recognition, perceived sound quality, and other self-reported outcomes. We compared the objective and subjective auditory outcomes between trained and non-trained subjects to determine the effects of auditory training.

Participants

All listeners from Experiment I participated in Experiment II as either trained or non-trained individuals. Of the 12 listeners, five continued to use Fcomp and received 8-week auditory training. The remaining seven continued to use Fcomp but did not participate in the training, serving as passive controls. Age, unaided and aided PTAs, and scores on cognitive screening [21] did not differ statistically between the trained and non-trained subjects (p>0.05). The Fcomp parameters from Experiment I were maintained for Experiment II.

Training procedure and outcome measures

Auditory training began within two weeks of the end of Experiment I, and trained individuals were trained once a week for eight weeks (8 sessions in total). Each session lasted about 45–50 minutes. Given that Fcomp generally aims to improve high-frequency audibility, the auditory training focused on phoneme discrimination. As training material, we used 520 monosyllabic words (130 sets of four alternative words) [27]. The four alternative words in each set differed by only one phoneme (initial consonant, vowel, or final consonant). All words were recorded by a single female talker, level-normalized, and presented at 70 dB SPL (average conversational level) over a loudspeaker located 1 m directly in front of the listener.
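A hedged sketch of how a single phoneme-discrimination trial of this kind might be administered is shown below; the example word sets, the play_word() placeholder, and the feedback message are hypothetical and do not reproduce the actual training material [27] or the authors' procedure.

```python
import random

# Hypothetical 4-alternative sets; alternatives differ by one phoneme.
word_sets = [
    ("산", "잔", "단", "간"),   # initial-consonant contrast (example only)
    ("불", "발", "벌", "볼"),   # vowel contrast (example only)
]

def play_word(word):
    # Placeholder for presenting the recorded word at 70 dB SPL.
    print(f"[presenting '{word}']")

def run_trial(alternatives):
    """Present one target word and score the listener's 4AFC response."""
    target = random.choice(alternatives)
    play_word(target)
    response = input(f"Which word did you hear? {alternatives}: ").strip()
    correct = (response == target)
    print("Correct." if correct else f"The word was '{target}'.")
    return correct

if __name__ == "__main__":
    run_trial(random.choice(word_sets))
```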

Experiments I and II used the same objective and subjective tests. The trained individuals were tested three times: pre-training (week 0), post-training (week 8), and retention (week 12, i.e., four weeks after the completion of training). The non-trained individuals were also tested three times (tests 1–3) at the same evaluation intervals.

In addition to the subjective sound-quality rating, each participant completed two other subjective questionnaires. The first was the International Outcome Inventory for Hearing Aids (IOI-HA) [28], developed to evaluate perceived hearing abilities and the effectiveness of HAs. The IOI-HA is a seven-item questionnaire that measures listeners' satisfaction with their HAs in relation to their prostheses and listening environments. Listeners respond to each question on a 5-point scale (1: poorest outcome; 5: best outcome). A higher score indicates a better outcome, with a maximum total score of 35. The Korean version of the IOI-HA (K-IOI-HA) has demonstrated appropriate internal consistency, reliability, and validity in the Korean population [29].

The second questionnaire was the Korean Evaluation Scale for Hearing Handicap (KESHH) [30]. The KESHH is a 24-item questionnaire with four subscales (social effect, psycho/emotional effect, interpersonal effect, and perception of HAs) that measures the social and psychological effects of hearing impairment. It can be applied to both HA users and non-users, and listeners respond on a 5-point scale (1: strongly disagree; 5: strongly agree). The total score (sum of the four subscale scores) can range from 24 to 120, with higher scores representing a greater handicap from hearing loss. The degree of hearing handicap can be classified from category 1 (lowest handicap) to category 5 (highest handicap) based on predetermined cutoffs applied to the total score [31]. As with the other tests, we administered the K-IOI-HA and KESHH three times (week 0, week 8, and week 12) to both trained and non-trained individuals.
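For a concrete sense of how the two questionnaire totals used below are formed, this short sketch summarizes the scoring described above. The item responses are invented for illustration, and the KESHH category cutoffs are not reproduced because they come from [31].

```python
# Hypothetical item responses (not study data).
k_ioi_ha_items = [4, 3, 4, 5, 3, 4, 4]          # 7 items, 1-5 scale
keshh_items = [3] * 10 + [2] * 8 + [4] * 6      # 24 items, 1-5 scale

assert len(k_ioi_ha_items) == 7 and len(keshh_items) == 24

k_ioi_ha_total = sum(k_ioi_ha_items)   # max 35; higher = better HA outcome
keshh_total = sum(keshh_items)         # range 24-120; higher = more handicap

print(f"K-IOI-HA total: {k_ioi_ha_total} / 35")
print(f"KESHH total:    {keshh_total} / 120")
```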

Statistical analyses

Statistical analyses were performed using SPSS Statistics for Windows Version 25 (IBM Corp., Armonk, NY, USA). Nonparametric Friedman and Wilcoxon tests were conducted separately for the trained and non-trained individuals. A significance level of 5% was used in all cases, and the p-values from the nonparametric Friedman tests were adjusted for multiple comparisons.
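As with Experiment I, an open-source equivalent of this repeated-measures analysis can be sketched with SciPy's Friedman test followed by pairwise Wilcoxon comparisons. The scores below are hypothetical, the Bonferroni adjustment shown is an assumption (the paper does not specify the correction method), and SciPy again stands in for the SPSS procedures actually used.

```python
from scipy.stats import friedmanchisquare, wilcoxon

# Hypothetical percent-correct scores for five trained listeners.
week0  = [48, 55, 50, 62, 58]   # pre-training
week8  = [63, 68, 66, 73, 70]   # post-training
week12 = [61, 66, 67, 75, 69]   # retention

chi2, p = friedmanchisquare(week0, week8, week12)
print(f"Friedman chi-square = {chi2:.2f}, p = {p:.3f}")

# Pairwise post-hoc Wilcoxon tests with a Bonferroni adjustment (x3).
pairs = {"pre vs. post": (week0, week8),
         "pre vs. retention": (week0, week12),
         "post vs. retention": (week8, week12)}
for label, (a, b) in pairs.items():
    stat, p_pair = wilcoxon(a, b)
    print(f"{label}: W = {stat:.1f}, adjusted p = {min(1.0, 3 * p_pair):.3f}")
```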

Results

Experiment I

Fig. 2 shows the mean speech-recognition scores (left panel) and sound-quality ratings (right panel) with and without Fcomp activation. The statistical results revealed no significant differences between Fcomp activation and deactivation for any of the objective or subjective outcomes (p>0.05). Note that all participants had passively adapted to Fcomp before the experimental tests. This suggests that 4 weeks of Fcomp activation was insufficient to improve speech recognition or subjective sound quality for older HA users.

Fig. 2.

Mean percent-correct recognition scores of consonants, words, and sentences (A) and perceived quality rating of speech and music sounds (B) without and with Fcomp activation (Fcomp-off vs. Fcomp-on).

Experiment II

Fig. 3 displays the mean speech-recognition scores of the trained (solid lines) and non-trained individuals (dashed lines). For the trained individuals, consonant and word recognition improved significantly after auditory training, and the training benefit was maintained until 1 month after completion (consonant: χ2=7.11; word: χ2=7.60). There was no improvement in sentence recognition across the three tests (χ2=2.80). Recall that the non-trained subjects were tested three times on the same schedule as the trained subjects. As displayed in Fig. 3, the speech-recognition performance of the non-trained individuals did not change across tests 1 to 3 (consonant: χ2=1.90; word: χ2=0.56; sentence: χ2=2.82).

Fig. 3.

Mean speech recognition scores across the three test sessions for trained and non-trained subjects.

Fig. 4 shows the mean scores of the subjective sound-quality rating, K-IOI-HA, and KESHH questionnaires for the trained (solid lines) and non-trained individuals (dashed lines). For the trained individuals, the speech-quality rating improved significantly after auditory training and was maintained until 1 month after completion (χ2=8.40), whereas their music-quality ratings and other subjective responses did not differ across the three tests (music quality: χ2=2.11; K-IOI-HA: χ2=2.00; KESHH: χ2=4.67, p>0.05). For the non-trained individuals, none of the subjective responses differed across the three tests (speech quality: χ2=0.25; music quality: χ2=0.96; K-IOI-HA: χ2=5.36; KESHH: χ2=0.54, p>0.05).

Fig. 4.

Mean scores of subjective sound-quality rating and questionnaires across the three test sessions for trained and non-trained subjects. K-IOI-HA, Korean version of the International Outcome Inventory for Hearing Aids; KESHH, Korean Evaluation Scale for Hearing Handicap.

Discussion

The primary purpose of this study was to examine the effects of Fcomp activation and 8-week auditory training on objective and subjective auditory performance in older adults. The current study therefore addressed two research questions: 1) Would Fcomp activation influence speech recognition and perceived sound quality in older HA users? 2) Would 8-week auditory training combined with Fcomp activation produce perceptual benefits in objective and subjective outcomes?

For the first research question, we found that Fcomp did not improve performance on the objective or subjective outcomes. This finding replicates several earlier studies that, independent of the type of FL algorithm, found either no advantage or inconsistent benefits depending on the stimulus type [8-12,14,15]. Simpson, et al. [11] indicated that while NLFC and frequency-transposition algorithms showed a small but significant improvement (about 2 to 6 percentage points) in consonant recognition, they had little influence on word- or sentence-in-noise recognition. Salorio-Corbetto, et al. [14], who provided at least 8 weeks of Fcomp experience, found an Fcomp advantage for word-final detection of /s/ and /z/ but no advantage for consonant or sentence recognition or subjective ratings. Kirby, et al. [15] observed no perceptual benefit of Fcomp for speech recognition or listening effort; however, some listeners preferred Fcomp activation over conventional amplification. Brennan, et al. [12] found no significant Fcomp benefit for speech-in-noise performance, with considerable individual variability among adults and children. These findings indicate that FL may provide perceptual benefits for some, but not all, HA users. Considering that several studies have failed to demonstrate real-world benefits of FL algorithms, caution is needed when evaluating the benefits of FL technology. In particular, older HA users may be at a disadvantage in relearning distorted HA-processed sounds [32-35], especially at more aggressive FL parameter settings. Therefore, unnecessarily strong FL settings should be avoided unless the HA user clearly prefers them.

For the second research question, Experiment II determined the effects of 8-week auditory training with Fcomp activation. We hypothesized that the combined effects of auditory training and the increased audibility provided by Fcomp would help older listeners attend to high-frequency speech sounds. Thus, we conducted bottom-up auditory training that concentrated on high-frequency speech sounds such as initial or final consonants. The results of Experiment II demonstrated that the mean benefits from auditory training ranged from 13 to 20 percentage points for consonant and word recognition, respectively. This finding agrees with the few studies [18,19] that examined the impact of auditory training in conjunction with FL activation. Dickinson, et al. [18] simulated the NLFC algorithm and conducted 4-day auditory training; the consonant-trained listeners showed better consonant recognition but little improvement in sentence recognition. Another 4-day consonant-based training program for adult HA users [19] yielded modest but significant benefits in consonant recognition. Our data and these previous findings [18,19] imply that analytical training may provide only near-transfer learning rather than generalization to overall communication ability or overall HA satisfaction. One possible explanation is that, compared with sentences, consonants and words carry more high-frequency weighting and are independent of semantic context, making them more susceptible to changes in audibility. During the training, older HA users may have had more opportunities to attend to the fine-grained acoustic-phonetic characteristics of spectrally distorted speech signals while putting less effort into using contextual or linguistic information for speech recognition. Given this, perceptual learning may depend on the trained material or task.

Recall that the non-trained participants in this study had a total of 16 weeks (about 4 months) of exposure to Fcomp across Experiments I and II. The results revealed no significant perceptual benefit for these non-trained older HA users after about four months of passive Fcomp use. The acclimatization period has varied across previous studies: some provided no acclimatization period to FL [36], while others included 3–6 weeks of adaptation before testing [37,38]. An, et al. [39] found that older listeners could adapt to HA processing for spectral discrimination immediately after fitting, whereas more time, up to 12 months post-fitting, was needed to improve temporal envelope sensitivity. Thus, the minimum acclimatization period needed to show perceptual benefits may depend on the stimulus or task.

The present study has several limitations. First, we did not include noisy listening conditions, which limits generalization to real-world situations. Second, the sample size was relatively small. Given that older HA users typically exhibit large individual variability in auditory performance, additional research with a larger sample is required. Lastly, the auditory training administered in this study focused on the recognition of high-frequency speech sounds. This phoneme-based analytical approach may improve the recognition of low-context stimuli, which rely more heavily on acoustic cues, but not sentence recognition.

The present study comprised two experiments. Experiment I showed that a short (4-week) passive acclimatization period to the Fcomp algorithm did not improve speech-in-quiet recognition or self-perceived sound quality in older HA users. Experiment II revealed that 8-week auditory training combined with Fcomp activation enhanced consonant and word recognition and speech-quality ratings, but not the other outcomes. This suggests that the training efficacy in our study was stimulus- and task-specific.

Notes

Conflicts of Interest

The authors have no financial conflicts of interest.

Author Contributions

Conceptualization: Jae Hee Lee. Data curation: Mikyung Lee. Formal analysis: Mikyung Lee. Investigation: Mikyung Lee. Methodology: Jae Hee Lee. Supervision: Jae Hee Lee. Writing—original draft: Mikyung Lee. Writing—review & editing: Jae Hee Lee. Approval of final manuscript: Mikyung Lee, Jae Hee Lee.

Funding Statement

None

Acknowledgements

None

References

1. Hopkins K, Khanom M, Dickinson AM, Munro KJ. Benefit from non-linear frequency compression hearing aids in a clinical setting: the effects of duration of experience and severity of high-frequency hearing loss. Int J Audiol 2014;53:219–28.
2. McCreery RW, Alexander J, Brennan MA, Hoover B, Kopun J, Stelmachowicz PG. The influence of audibility on speech recognition with nonlinear frequency compression for children and adults with hearing loss. Ear Hear 2014;35:440–7.
3. Qi S, Chen X, Yang J, Wang X, Tian X, Huang H, et al. Effects of adaptive non-linear frequency compression in hearing aids on Mandarin speech and sound-quality perception. Front Neurosci 2021;15:722970.
4. Wolfe J, John A, Schafer E, Nyffeler M, Boretzki M, Caraway T. Evaluation of nonlinear frequency compression for school-age children with moderate to moderately severe hearing loss. J Am Acad Audiol 2010;21:618–28.
5. Wolfe J, John A, Schafer E, Nyffeler M, Boretzki M, Caraway T, et al. Long-term effects of non-linear frequency compression for children with moderate hearing loss. Int J Audiol 2011;50:396–404.
6. Xu L, Voss SC, Yang J, Wang X, Lu Q, Rehmann J, et al. Speech perception and sound-quality rating with an adaptive nonlinear frequency compression algorithm in Mandarin-speaking hearing aid users. J Am Acad Audiol 2020;31:590–8.
7. Brennan MA, McCreery R, Kopun J, Hoover B, Alexander J, Lewis D, et al. Paired comparisons of nonlinear frequency compression, extended bandwidth, and restricted bandwidth hearing aid processing for children and adults with hearing loss. J Am Acad Audiol 2014;25:983–98.
8. Ahn J, Choi JE, Kang JY, Choi IJ, Lee MC, Lee BC, et al. The influence of non-linear frequency compression on the perception of speech and music in patients with high frequency hearing loss. J Audiol Otol 2021;25:80–8.
9. Akinseye GA, Dickinson AM, Munro KJ. Is non-linear frequency compression amplification beneficial to adults and children with hearing loss? A systematic review. Int J Audiol 2018;57:262–73.
10. Picou EM, Marcrum SC, Ricketts TA. Evaluation of the effects of non-linear frequency compression on speech recognition and sound quality for adults with mild to moderate hearing loss. Int J Audiol 2015;54:162–9.
11. Simpson A, Bond A, Loeliger M, Clarke S. Speech intelligibility benefits of frequency-lowering algorithms in adult hearing aid users: a systematic review and meta-analysis. Int J Audiol 2018;57:249–61.
12. Brennan MA, Browning JM, Spratford M, Kirby BJ, McCreery RW. Influence of aided audibility on speech recognition performance with frequency composition for children and adults. Int J Audiol 2021;60:849–57.
13. Glista D, Scollie S. The use of frequency lowering technology in the treatment of severe-to-profound hearing loss: a review of the literature and candidacy considerations for clinical application. Semin Hear 2018;39:377–89.
14. Salorio-Corbetto M, Baer T, Moore BCJ. Evaluation of a frequency-lowering algorithm for adults with high-frequency hearing loss. Trends Hear 2017;21:2331216517734455.
15. Kirby BJ, Kopun JG, Spratford M, Mollak CM, Brennan MA, McCreery RW. Listener performance with a novel hearing aid frequency lowering technique. J Am Acad Audiol 2017;28:810–22.
16. McDermott HJ. A technical comparison of digital frequency-lowering algorithms available in two current hearing aids. PLoS One 2011;6:e22358.
17. Souza PE, Arehart KH, Kates JM, Croghan NB, Gehani N. Exploring the limits of frequency lowering. J Speech Lang Hear Res 2013;56:1349–63.
18. Dickinson AM, Baker R, Siciliano C, Munro KJ. Adaptation to non-linear frequency compression in normal-hearing adults: a comparison of training approaches. Int J Audiol 2014;53:719–29.
19. Ahmed RA, Mourad MI, El-Banna MM, Talaat MA. Effect of frequency lowering and auditory training on speech perception outcome. Egypt J Otolaryngol 2015;31:244–9.
20. Humes LE. The World Health Organization’s hearing-impairment grading system: an evaluation for unaided communication in age-related hearing loss. Int J Audiol 2019;58:12–20.
21. Kang Y, Na DL, Hahn S. [A validity study on the Korean Mini-Mental State Examination (K-MMSE) in dementia patients]. J Korean Neurol Assoc 1997;15:300–8. Korean.
22. Holube I, Fredelake S, Vlaming M, Kollmeier B. Development and analysis of an International Speech Test Signal (ISTS). Int J Audiol 2010;49:891–903.
23. Kim JS, Shin EY, Shin HW, Lee KD. [Development of Korean Consonant Perception Test]. J Acoust Soc Kr 2011;30:295–302. Korean.
24. Ryu HD, Shim HY, Kim JS. [A study of the relation between Korean Consonant Perception Test (KCPT) and hearing thresholds as a function of frequencies]. Audiol 2011;7:153–63. Korean.
25. Lee J, Cho S, Kim J, Jang H, Lim D, Lee K, et al. [Korean Speech Audiometry (KSA)]. Seoul: Hakjisa;2010. Korean.
26. Davies-Venn E, Souza P, Fabry D. Speech and music quality ratings for linear and nonlinear hearing aid circuitry. J Am Acad Audiol 2007;18:688–99.
27. No BI, Lee JH. [A comparison study of monosyllable recognition in listeners with sloping versus flat hearing loss types]. Audiol 2012;8:78–86. Korean.
28. Cox RM, Alexander GC. The International Outcome Inventory for Hearing Aids (IOI-HA): psychometric properties of the English version. Int J Audiol 2002;41:30–5.
29. Chu H, Cho YS, Park SN, Byun JY, Shin JE, Han GC, et al. [Standardization for a Korean adaptation of the International Outcome Inventory for Hearing Aids: study of validity and reliability]. Korean J Otorhinolaryngol-Head Neck Surg 2012;55:20–5. Korean.
30. Ku HL, Kim JS. [The study for standardization of the Korean Evaluation Scale for Hearing Handicap]. Audiol 2010;6:128–36. Korean.
31. Kim K, Kim S, Lee JH. [Comparison of speech recognition and subjective hearing handicap in elderly listeners as a function of degree of hearing loss]. Audiol Speech Res 2020;16:115–23. Korean.
32. Bruno R, Freni F, Portelli D, Alberti G, Gazia F, Meduri A, et al. Frequency-lowering processing to improve speech-in-noise intelligibility in patients with age-related hearing loss. Eur Arch Otorhinolaryngol 2021;278:3697–706.
33. Windle R, Dillon H, Heinrich A. A review of auditory processing and cognitive change during normal ageing, and the implications for setting hearing aids for older adults. Front Neurol 2023;14:1122420.
34. Arehart KH, Souza P, Baca R, Kates JM. Working memory, age, and hearing loss: susceptibility to hearing aid distortion. Ear Hear 2013;34:251–60.
35. Schvartz KC, Chatterjee M, Gordon-Salant S. Recognition of spectrally degraded phonemes by younger, middle-aged, and older normal-hearing listeners. J Acoust Soc Am 2008;124:3972–88.
36. Miller CW, Bates E, Brennan M. The effects of frequency lowering on speech perception in noise with adult hearing-aid users. Int J Audiol 2016;55:305–12.
37. Kokx-Ryan M, Cohen J, Cord MT, Walden TC, Makashay MJ, Sheffield BM, et al. Benefits of nonlinear frequency compression in adult hearing aid users. J Am Acad Audiol 2015;26:838–55.
38. Ellis RJ, Munro KJ. Predictors of aided speech recognition, with and without frequency compression, in older adults. Int J Audiol 2015;54:467–75.
39. An YH, Lee ES, Kim DH, Oh HS, Won JH, Shim HJ. Long-term effects of hearing aid use on auditory spectral discrimination and temporal envelope sensitivity and speech perception in noise. J Int Adv Otol 2022;18:43–50.

Table 1.

Aided PTA without and with Fcomp activation and the Fcomp settings for each subject

ID Fcomp-off PTA (dB HL) Fcomp-on PTA (dB HL) Fcomp-off HFPTA (dB HL) Fcomp-on HFPTA (dB HL) Destination region (kHz) Adjustment mode
Sub1 50.00 41.25 66.67 60.00 2.9-4.6 Strong 3:2
Sub2 42.50 40.00 65.00 53.33 2.9-4.6 Strong 3:2
Sub3 45.00 38.75 65.00 46.67 2.9-4.6 Medium 3:1
Sub4 35.00 28.75 46.67 38.33 1.5-2.9 Medium 3:1
Sub5 47.50 42.50 70.00 58.33 1.5-2.9 Medium 3:1
Sub6 50.00 42.50 68.33 56.67 2.9-4.6 Medium 3:1
Sub7 45.00 40.00 73.33 56.67 2.9-4.6 Strong 3:2
Sub8 45.00 40.00 61.67 51.67 1.5-2.9 Medium 3:1
Sub9 50.00 42.50 66.67 53.33 2.9-4.6 Medium 3:1
Sub10 46.25 38.75 56.67 50.00 2.9-4.6 Medium 3:1
Sub11 58.75 51.25 86.67 73.33 1.5-2.9 Strong 3:2
Sub12 60.00 55.00 71.67 65.00 1.5-2.9 Medium 3:1

Fcomp, frequency composition; PTA, pure-tone average across 0.5, 1, 2, and 4 kHz; HFPTA, high-frequency pure-tone average across 2, 4, and 8 kHz