Abstract
Despite having different audiological preferences, hearing aid users are usually provided with default settings. This lack of personalization is due to a scarcity of audiological resources and to the difficulty of optimizing hearing aid settings in the clinic. Implementing a conversational agent makes it possible to automatically gather user feedback in real-world environments while monitoring the soundscape, in order to recommend personalized settings. We outline a conversational agent model that interprets user utterances as audiological intents and fuses this feedback with soundscape features to predict the most likely preferred hearing aid setting. We then propose two use cases for the conversational agent, each envisaging a different interaction to address a distinct user need: troubleshooting and contextual personalization.