Monitoring and modelling of behavioural changes using smartphone and wearable sensing
In DTU Compute PHD-2018, 2018
Abstract
Sedentary behaviour and obesity have been on the rise for two decades or more, despite public information campaigns and even the arrival of the latest fad, fitness trackers. The latter were heralded by industry as life-changers that, through simple mechanisms, would make the wearer more active and healthier. Studies have since shown that they may have an initial positive effect on activity levels and weight loss, but that the effect quickly fades and people stop using the trackers altogether. A recurring observation seems to be a misunderstanding of what drives human motivation and what it takes to change human behaviour with respect to physical activity. Being reminded of one's weight or step count throughout the day is, for most people, a mere observation, not an intervention. This misunderstanding, or naïvety, probably stems from conclusions drawn from data too thin to support them. We propose a paradigm that relies on massive amounts of data, pervasively sampled from smartphones. We show that smartphone data can estimate plausible intervention effects from a randomized controlled trial and, through higher sampling frequency and additional modalities, can break the estimated effects into contextual pieces that help us better understand behavioural aspects. We further show that, using a model that adapts to each individual, we can accurately predict a person's total energy expenditure from the same data. A novel model for semi-supervised human activity recognition across multiple datasets is presented. The model combines convolutional neural networks, which extract hierarchical features, with recurrent neural networks, which model temporal dependencies. This is combined with recent developments in domain adaptation, in which domain separation is penalised through adversarial training of an auxiliary classifier.
Lastly, a fully unsupervised model is presented that learns latent states which naturally decompose into static and dynamic representations. The static representations are learnt as a function that maps a high-dimensional observation to a low-dimensional code, conditioned on a structured prior distribution that governs the dynamical system.