Automatic Social Behavior Analysis in Face to Face Interaction

Presented by Dr. Oya Aran (PhD, Bogazici University, Turkey, 2008)



Social interaction is a fundamental aspect of human life. Social psychologists have been researching the dimensions of social interaction for decades and have found that a variety of social communicative cues strongly determine social behavior and interaction outcomes. Many of these cues are consciously produced, in the form of spoken language. Beyond the spoken words, however, human interaction also involves nonverbal elements, which are extensively and often unconsciously used in communication. Nonverbal communication is conveyed as wordless messages, in parallel to the spoken words, through aural cues (voice quality, speaking style, rhythm, intonation) as well as visual cues (gestures; body language and posture; facial expression and gaze). All of us use these nonverbal cues every day to infer the mood and personality of others, and to make sense of social relations, in a very wide range of situations.

Computational analysis of social interaction focuses on developing computational systems that can automatically analyze human social behavior by observing a conversation via sensing devices such as cameras and microphones. The field also maintains close connections with other disciplines, including psychology and linguistics, in order to understand what kinds of signals are used in diverse social situations to infer human behavior.

In this talk, I will present an overview of my research on developing computational models of the social constructs that define the social behavior of individuals and groups in face to face conversations, perceived via audio and visual sensors. I will present key research tasks, including the automatic estimation of dominance in groups, the emergence of leadership, and the prediction of personality. For each task, I will first describe the methods used for the automatic detection of the audio-visual nonverbal cues displayed during interaction, in particular a visual descriptor based on a spatio-temporal representation of videos as a fast and robust feature extraction method. Second, I will discuss the multimodal approaches that integrate these nonverbal cues to infer dominance, leadership, or personality. I will also discuss several domain adaptation approaches that enable transferring knowledge learned from data collected on social media to small group settings for the prediction of personality. Unlike small group interaction data, which is limited in quantity and mainly collected in controlled, experimental settings, social media sites provide a vast amount of data on human behavior. Our findings show that this data can be used to train computational models to predict the extraversion trait in small group settings. Finally, I will discuss future trends in the field.

Short bio

Dr. Oya Aran (PhD, Bogazici University, Turkey, 2008) is a research fellow at Idiap Research Institute working on multimodal computational modeling of nonverbal social behavior in face to face interactions. Her research focuses on the analysis of audio-visual human nonverbal behavior, integrating fields including social computing, pattern recognition, and machine learning. In 2011, she was awarded the Swiss National Science Foundation (SNSF) Ambizione grant. Between 2009 and 2011, she was a Marie Curie Intra-European Postdoctoral Fellow with the NOVICOM project (Automatic Analysis of Group Conversations via Visual Cues in Non-Verbal Communication). She has published papers in leading computer vision and pattern recognition journals and conferences. She is the Guest Editor of the Special Issue on Behavior Understanding for Arts and Entertainment for ACM Transactions on Interactive Intelligent Systems, and Program Chair of the ACM International Conference on Multimodal Interaction (ICMI) in 2014.

Date: Tuesday February 18th, 2014

Time: 2:00 pm

Place: Battelle building A, room 432-433 (3rd floor)

February 4, 2014
