Automatic Social Behavior Analysis in Face to Face Interaction

Presented by Dr. Oya Aran (PhD, Bogazici University, Turkey, 2008)


Abstract

Social interaction is a fundamental aspect of human life. Social psychologists have been researching the dimensions of social interaction for decades and have found that a variety of social communicative cues strongly determine social behavior and interaction outcomes. Many of these cues are consciously produced, in the form of spoken language. Beyond the spoken words, however, human interaction also involves nonverbal elements, which are extensively and often unconsciously used in human communication. Nonverbal communication is conveyed as wordless messages, in parallel with the spoken words, through aural cues (voice quality, speaking style, rhythm, intonation) and through visual cues (gestures, body language and posture, facial expression and gaze). All of us use these nonverbal cues every day to infer the mood and personality of others, as well as to make sense of social relations, in a very wide range of situations.

Computational analysis of social interaction focuses on developing systems that can automatically analyze human social behavior by observing a conversation via sensing devices such as cameras and microphones. The field also maintains close connections with other disciplines, including psychology and linguistics, in order to understand what kinds of signals are used to infer human behavior in diverse social situations.

In this talk, I will present an overview of my research on developing computational models of social constructs that define the social behavior of individuals and groups in face-to-face conversations, perceived via audio and visual sensors. I will present key research tasks, including the automatic estimation of dominance in groups, the emergence of leadership, and the prediction of personality. For each task, I will first discuss the methods used for the automatic detection of the audio-visual nonverbal cues displayed during interaction, in particular a visual descriptor based on a spatio-temporal representation of videos, which serves as a fast and robust feature extraction method. Second, I will discuss the multimodal approaches that integrate these nonverbal cues to infer dominance, leadership, or personality. I will also discuss several domain adaptation approaches that enable transferring knowledge learned from social media data to small-group settings for the prediction of personality. Unlike small-group interaction data, which are limited in quantity and mainly collected in controlled experimental settings, social media sites provide a vast amount of data on human behavior. Our findings show that this data can be used to train computational models to predict the extraversion trait in small-group settings. Finally, I will discuss future trends in the field.
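To give a rough feel for the cross-domain transfer idea described above, the sketch below trains an extraversion predictor on abundant "social media" features and applies it to shifted "small group" features after per-domain standardization, one of the simplest domain adaptation baselines. All data here are synthetic and the feature set is hypothetical; this is an illustration of the general technique, not the method presented in the talk.

```python
# Minimal sketch of cross-domain transfer for extraversion prediction.
# Synthetic stand-in data; not the speaker's actual pipeline or features.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Source domain: nonverbal cue features extracted from social media videos
# (e.g., speaking time, pitch variation, visual activity -- hypothetical),
# with extraversion scores as labels.
X_src = rng.normal(size=(5000, 3))
y_src = X_src @ np.array([0.6, 0.3, 0.4]) + rng.normal(scale=0.1, size=5000)

# Target domain: the same cue types measured in small-group meetings,
# with a shift in feature scale and offset and no labels available.
X_tgt = 1.5 * rng.normal(size=(40, 3)) + 0.8

# Standardize each domain separately so the model sees features on a
# comparable scale; this removes simple covariate shift between domains.
X_src_std = StandardScaler().fit_transform(X_src)
X_tgt_std = StandardScaler().fit_transform(X_tgt)

# Train on the large source domain, predict on the small target domain.
model = Ridge(alpha=1.0).fit(X_src_std, y_src)
extraversion_pred = model.predict(X_tgt_std)
print(extraversion_pred[:5])  # predicted scores for meeting participants
```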

Short bio

Dr. Oya Aran (PhD, Bogazici University, Turkey, 2008) is a research fellow at the Idiap Research Institute working on multimodal computational modeling of nonverbal social behavior in face-to-face interactions. Her research focuses on the analysis of audio-visual human nonverbal behavior, integrating fields including social computing, pattern recognition, and machine learning. In 2011, she was awarded the Swiss National Science Foundation (SNSF) Ambizione grant. Between 2009 and 2011, she was a Marie Curie Intra-European Postdoctoral Fellow working on the NOVICOM project (Automatic Analysis of Group Conversations via Visual Cues in Non-Verbal Communication). She has published papers in leading computer vision and pattern recognition journals and conferences. She is a Guest Editor of the Special Issue on Behavior Understanding for Arts and Entertainment of ACM Transactions on Interactive Intelligent Systems, and a Program Chair of the ACM International Conference on Multimodal Interaction (ICMI) 2014.

Date: Tuesday, February 18, 2014

Time: 2:00 pm

Place: Battelle bât. A, room 432-433 (3rd floor)

4 February 2014
