
Thesis defense of Chen Wang


Ms Chen Wang will defend, in English, in view of obtaining the degree of Doctor of Science in Computer Science, her thesis entitled:

Dyadic Impression Recognition from Multimodal Bodily Responses

Date: Tuesday 25 May 2021, 3:00 pm

Location: Zoom (upon request)

Jury:

  • Dr Guillaume Chanel, Department of Computer Science, Thesis Director.
  • Professor Sviatoslav Voloshynovskiy, Department of Computer Science.
  • Professor Elisabeth André, Department of Computer Science, Augsburg University.
  • Professor Alessandro Vinciarelli, School of Computing Science, University of Glasgow.
Summary:

Impressions play an important role in human-human and human-computer interaction. They can be quantified along the dimensions of warmth and competence, the two fundamental dimensions of social cognition. This thesis aims to recognize impressions automatically from multimodal cues of dyadic interactions and, subsequently, to adapt the behaviors of a virtual agent in real time.

This work proposes definitions of impression prediction and impression detection to describe impression formation. Impression prediction uses the emitter's bodily behavior to predict the impression that the emitter will leave on the receiver. Impression detection uses the receiver's bodily responses to detect the impression the receiver has formed. We also combine information from both receiver and emitter to better recognize impressions from multimodal bodily responses. These bodily signals and responses include facial expressions, utterances, eye movements, gestures, and physiological signals. Two dyadic multimodal impression corpora were developed during this study: one for human-human interaction and the other for human-virtual agent interaction.
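The summary does not specify how receiver and emitter signals are fused, so the sketch below is an assumption rather than the thesis's actual feature set: a minimal dyadic feature construction in Python, combining per-person window statistics with a Pearson-correlation synchrony term. The names (`dyadic_features`, `_window_stats`) are illustrative.

```python
import numpy as np

def _window_stats(x: np.ndarray) -> np.ndarray:
    """Simple descriptive statistics of one signal window."""
    return np.array([x.mean(), x.std(), x.min(), x.max()])

def dyadic_features(emitter_sig: np.ndarray, receiver_sig: np.ndarray) -> np.ndarray:
    """Combine emitter and receiver windows of one modality (e.g., heart rate)
    into a single dyadic feature vector. Illustrative only: the thesis's actual
    features are not described in this summary.
    """
    # Per-person statistics plus a simple synchrony measure between the two.
    synchrony = np.corrcoef(emitter_sig, receiver_sig)[0, 1]
    return np.concatenate(
        [_window_stats(emitter_sig), _window_stats(receiver_sig), [synchrony]]
    )
```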

The analysis and evaluation in this thesis proceed in two parts: first, the methodology and results of impression recognition in human-human interaction are presented; second, potential applications of impression recognition are investigated by embedding our models in an adaptive embodied virtual agent and by conducting a comparative study on the inclusion of remote heart rate sensing.

In the first part, we present how annotation processes affect emotion recognition performance. An annotation processing method is proposed to extract a reliable ground truth from multiple annotators for affect and impression recognition. With the validated annotation method, an experiment is designed to collect multimodal human-human interaction data with self-reports and annotations of impression formation. On the collected dataset, we develop and explore several approaches to detect the impression formed by the receiver and to predict the impression the emitter could leave, from multimodal bodily responses. Different regression models are tested with features from various modalities. Dyadic features, which combine information from both the receiver and the emitter, are extracted from physiological and behavioral signals.
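The exact annotation-processing method is not given in this summary; one widely used scheme for extracting a single ground truth from several annotators is the evaluator weighted estimator (EWE), which weights each rater by how well they agree with the others. A minimal sketch, assuming continuous impression traces of equal length:

```python
import numpy as np

def ewe_ground_truth(annotations: np.ndarray) -> np.ndarray:
    """Evaluator Weighted Estimator over annotations of shape (K raters, T steps).

    Each rater is weighted by the correlation of their trace with the mean of
    the other raters, so unreliable annotators contribute less to the fused
    ground truth. This is a standard scheme, not necessarily the thesis's.
    """
    n_raters = annotations.shape[0]
    weights = np.empty(n_raters)
    for k in range(n_raters):
        others = np.delete(annotations, k, axis=0).mean(axis=0)
        weights[k] = max(np.corrcoef(annotations[k], others)[0, 1], 0.0)
    weights /= weights.sum()          # assumes at least one positive weight
    return weights @ annotations      # fused trace of shape (T,)
```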

In the second part, we present an embodied conversational agent that embeds a trained impression model and a reinforcement learning algorithm. The agent learns in real time which behaviors lead to the best impression detected in the receiver, according to its goal. Three experimental scenarios are tested to assess whether the adaptive agent can leave a good impression. Since physiological signals are often difficult to acquire outside the laboratory, remote sensing of heart rate could facilitate the study of interactions. A remote method for extracting instantaneous heart rate from frontal face videos is tested and compared with wearable sensors.
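The summary does not detail the reinforcement learning algorithm; the sketch below uses a simple epsilon-greedy bandit over discrete agent behaviors, with the detected impression as reward, purely to illustrate the adaptation loop. `detect_impression` is a hypothetical stand-in for a trained impression detector.

```python
import random

class BehaviorBandit:
    """Epsilon-greedy bandit over discrete agent behaviors.

    Illustrative only: the reward is assumed to be the impression score
    detected from the receiver's bodily responses after each behavior.
    """

    def __init__(self, behaviors, epsilon=0.1):
        self.behaviors = list(behaviors)
        self.epsilon = epsilon
        self.counts = {b: 0 for b in self.behaviors}
        self.values = {b: 0.0 for b in self.behaviors}

    def select(self):
        # Explore with probability epsilon, otherwise exploit the best estimate.
        if random.random() < self.epsilon:
            return random.choice(self.behaviors)
        return max(self.behaviors, key=self.values.get)

    def update(self, behavior, reward):
        # Incremental mean of the impression scores observed for this behavior.
        self.counts[behavior] += 1
        self.values[behavior] += (reward - self.values[behavior]) / self.counts[behavior]

# Hypothetical usage, with detect_impression standing in for the trained model:
# bandit = BehaviorBandit(["smile", "nod", "lean_forward"])
# b = bandit.select()
# bandit.update(b, detect_impression(receiver_responses))
```

Similarly, the remote heart rate method is not specified here; a common baseline (remote photoplethysmography) tracks the mean green-channel intensity of the face region across frames and reads the heart rate off the dominant spectral peak. A bare-bones sketch, assuming a precomputed per-frame green-channel mean:

```python
import numpy as np

def remote_heart_rate(green_means: np.ndarray, fps: float) -> float:
    """Estimate heart rate (bpm) from per-frame mean green values of a face ROI.

    Bare-bones rPPG: real pipelines add face tracking, band-pass filtering,
    and artifact rejection before trusting the spectral peak.
    """
    signal = green_means - green_means.mean()      # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)         # plausible range: 42-240 bpm
    return 60.0 * freqs[band][np.argmax(spectrum[band])]
```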

Overall, promising results are obtained in impression recognition with multimodal dyadic features. The use case demonstrates the effectiveness of adapting a virtual agent's behaviors based on impression recognition.
