Abstract
Most state-of-the-art hearing aids apply multi-channel dynamic-range compression (DRC). Studies using speech intelligibility as an outcome measure have shown mixed results regarding the benefits of compression over linear amplification (e.g. Davies-Venn et al. 2009; Goedegebure et al. 2001, 2002; Kates 2010; Olsen et al. 2005; Souza et al. 1999, 2012; Yund and Buckles 1995a, b). Compression increases the audibility of speech components, but at the same time distorts the spectral and temporal envelopes of speech. The two effects may offset each other, depending on which cues individual hearing-impaired listeners rely on, and they are therefore difficult to disentangle when speech recognition is used as an outcome measure. Edwards (2002) suggested using a set of relatively simple outcome measures, based on narrowband signals, for the evaluation of hearing-aid signal processing. We present a compression design that has been optimized, within the framework of a computational model, to improve the performance of (aided) hearing-impaired listeners in tasks related to temporal and spectral resolution.
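To make the multi-channel DRC trade-off discussed above concrete, the sketch below implements a minimal two-channel static compressor: each channel's envelope is tracked and, above a kneepoint, level variations are reduced by the compression ratio. This is an illustrative assumption-laden example, not the compression design presented in this work; the crossover frequency, time constant, kneepoints, and ratios are arbitrary placeholder values.

```python
# Hypothetical sketch of multi-channel dynamic-range compression (DRC);
# parameter values are illustrative, not those of the proposed design.
import numpy as np
from scipy.signal import butter, sosfilt

def band_split(x, fs, crossover_hz=1500.0):
    """Split a signal into low and high channels around one crossover frequency."""
    sos_lo = butter(4, crossover_hz, btype="low", fs=fs, output="sos")
    sos_hi = butter(4, crossover_hz, btype="high", fs=fs, output="sos")
    return sosfilt(sos_lo, x), sosfilt(sos_hi, x)

def envelope_db(x, fs, tau_s=0.01):
    """One-pole envelope follower, returned in dB re full scale."""
    alpha = np.exp(-1.0 / (tau_s * fs))
    env = np.zeros_like(x)
    level = 0.0
    for n, sample in enumerate(np.abs(x)):
        level = alpha * level + (1.0 - alpha) * sample
        env[n] = level
    return 20.0 * np.log10(np.maximum(env, 1e-10))

def compress_band(x, fs, kneepoint_db=-40.0, ratio=3.0):
    """Above the kneepoint, attenuate so level changes are reduced by the ratio."""
    lev = envelope_db(x, fs)
    gain_db = np.where(lev > kneepoint_db,
                       (kneepoint_db - lev) * (1.0 - 1.0 / ratio),
                       0.0)
    return x * 10.0 ** (gain_db / 20.0)

# Usage: compress a two-tone test signal channel by channel, then recombine.
fs = 16000
t = np.arange(fs) / fs
x = 0.1 * np.sin(2 * np.pi * 500 * t) + 0.02 * np.sin(2 * np.pi * 3000 * t)
lo, hi = band_split(x, fs)
y = compress_band(lo, fs) + compress_band(hi, fs)
```

Because the gain in each channel follows the channel envelope, the compressor raises the level of weak components (increasing audibility) while flattening envelope modulations and across-channel level differences, which is the envelope and spectral distortion the abstract refers to.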