Abstract
Nowadays, the pure-tone audiogram is the main tool used to characterize hearing loss and to fit hearing aids. However, the perceptual consequences of hearing loss are typically not only associated with a loss of sensitivity, but also with a loss of clarity that is not captured by the audiogram. A detailed characterization of hearing loss has to be simplified to efficiently explore the specific compensation needs of the individual listener. We hypothesized that any listener’s hearing can be characterized along two dimensions of distortion: type I and type II. While type I can be linked to factors affecting audibility, type II reflects non-audibility-related distortions. To test our hypothesis, the individual performance data from two previous studies were re-analyzed using an archetypal analysis. Unsupervised learning was used to identify extreme patterns in the data, which form the basis for different auditory profiles. Next, a decision tree was derived to classify the listeners into one of the profiles. The new analysis provides evidence for the existence of four profiles in the data. The most significant predictors for profile identification were related to binaural processing, auditory non-linearity, and speech-in-noise perception. The current approach is promising for analyzing other existing data sets in order to select the most relevant tests for auditory profiling.
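The classification step summarized above (a decision tree assigning listeners to one of four profiles) could be sketched as follows; this is a minimal illustration assuming scikit-learn, with hypothetical feature names and synthetic data, not the study's actual tests, thresholds, or code.

```python
# Minimal sketch (not the authors' implementation): a shallow decision tree
# assigning listeners to one of four hypothetical auditory profiles.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Hypothetical predictors loosely inspired by the abstract: binaural
# processing, auditory non-linearity, and speech-in-noise performance.
feature_names = ["binaural_score", "nonlinearity_score", "speech_in_noise_srt"]
X = rng.normal(size=(100, 3))            # synthetic listener measurements
y = rng.integers(0, 4, size=100)         # profiles A-D encoded as labels 0-3

# A shallow tree keeps the resulting decision rules interpretable.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=feature_names))

# Classify a new (hypothetical) listener into one of the profiles.
new_listener = np.array([[0.2, -1.1, 0.5]])
print("Assigned profile:", tree.predict(new_listener)[0])
```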