Thesis defense of Niels Nijdam
Mr. Niels Nijdam will defend, in view of obtaining the degree of Doctor of Science in computer science, his thesis entitled
Context-Aware 3D Rendering for User-Centric Pervasive Collaborative Computing Environments
- Professor Nadia Magnenat-Thalmann (University of Geneva), Director
- Professor Jose Rolim (University of Geneva), co-Director
- Professor Franz-Erich Wolter (Leibniz University of Hannover)
- Professor Marius Preda (Institut TELECOM Sud Paris)
Over the last decade technology has improved rapidly in terms of computing power and mobility, changing the way we interact with media in general. Mobile devices are capable of decent 3D rendering, and with the growth of wireless communications these devices have become centralized portals to a wealth of shared data, available on demand. Despite this progress, most social media is still reduced to the exchange of textual messages, buffered video/audio streaming, and documents that must first be downloaded. When we focus on 3D simulations in particular, we see that they barely exist, let alone in a collaborative form. On mobile devices, games showcase the current 3D capabilities, but content is often heavily compressed and all kinds of tricks are employed to balance frame rate against image quality. This is understandable, since a mobile device remains limited and, in terms of computational power, lags far behind a stationary machine. With a focus on complex 3D virtual environments and collaborative aspects, it is impractical not only to manually copy 3D content from one device to another whenever a user decides to switch devices or to share the environment with others, but also to render complex 3D data locally on resource-limited devices such as mobile phones and tablets. The problem becomes more apparent when a 3D virtual environment is driven by a complex simulation, as the two are often tightly coupled, e.g. the deformation of a 3D model. In addition, the simulation may have several dependencies, such as compiler output for a specific platform and hardware, e.g. when using a computing language such as OpenCL or CUDA. Depending on the simulation and rendering technology used, not only mobile devices but also regular workstations can be overwhelmed and become unusable, often leading to higher hardware costs.
To overcome the limitations of such devices, we look at remote solutions, specifically for 3D virtual environments involving one or more simulation-driven 3D entities, with additional support for collaborative aspects. We strive to enable user-centric pervasive computing environments in which users can utilize nearby heterogeneous devices anytime and anywhere, by providing cloud-like services to which one or more users can connect and interact directly with the 3D environment, without the burden of any direct dependencies of the provided service.
We propose a context-aware adaptive rendering architecture which visualizes 3D content with customized user interfaces, dynamically adapting at runtime to the current device context, such as processing power, memory size, display size, and network condition, while preserving the interactive performance of the 3D content. To increase the responsiveness of remote 3D rendering, we use a mechanism which temporally adjusts the quality of visualization, adapting to the current device context. By adapting the visualization in terms of image quality, the overall responsiveness and frame rate are maintained regardless of the resource status. To overcome the inevitable physical limitations of display capabilities and input controls on client devices, we provide a user-interface adaptation mechanism, based on an interface mark-up language, which dynamically binds the operations provided by the 3D application to user interfaces using predefined device and application profiles.
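The quality-adaptation mechanism described above can be illustrated with a minimal sketch. The function below is a hypothetical feedback loop (not the thesis implementation): it lowers the image encoding quality when the measured frame time exceeds the target budget and restores it when resources allow, so the frame rate stays close to the target.

```python
# Hypothetical sketch of temporally adjusting visualization quality:
# degrade image quality when frames run over budget, restore it when
# there is headroom, keeping responsiveness roughly constant.

def adapt_quality(quality: int, frame_ms: float,
                  target_ms: float = 33.0, step: int = 5) -> int:
    """Return the encoding quality (10-100) to use for the next frame."""
    if frame_ms > target_ms * 1.1:      # over budget: reduce image quality
        quality = max(10, quality - step)
    elif frame_ms < target_ms * 0.8:    # headroom: raise image quality
        quality = min(100, quality + step)
    return quality

# Simulated device context: frame times spike, then recover.
quality = 80
for frame_ms in [30, 45, 50, 48, 31, 20, 19]:
    quality = adapt_quality(quality, frame_ms)
print(quality)  # 75
```

The target of 33 ms (roughly 30 fps) and the step size are illustrative parameters; a real system would also factor in network condition and display size, as the architecture does.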
A generalized data-sharing mechanism based on the publish/subscribe methodology handles data between services as well as between services and users, allowing for peer-to-peer or client-server structured communication. It provides easy binding to the different kinds of data models that need to be synchronized, using a multi-threaded approach for local and remote updates to the data model. An extra layer between the shared data and the local data provides conversions and on-demand update capabilities (e.g. a data model in graphics memory needs to be pre-processed to adhere to the constraints of the rendering pipeline).
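As a minimal sketch of the publish/subscribe idea (not the actual framework API, which adds multi-threading and the conversion layer), a broker keeps topic-keyed callback lists and notifies every subscriber when data is published:

```python
from collections import defaultdict
from typing import Any, Callable

class Broker:
    """Minimal single-threaded publish/subscribe broker (illustrative)."""

    def __init__(self) -> None:
        # Map each topic name to the callbacks subscribed to it.
        self._subs: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[Any], None]) -> None:
        self._subs[topic].append(callback)

    def publish(self, topic: str, data: Any) -> None:
        # Deliver the update to every subscriber of this topic.
        for callback in self._subs[topic]:
            callback(data)

# A service publishes a mesh update; a client-side data model receives it.
received = []
broker = Broker()
broker.subscribe("mesh/update", received.append)
broker.publish("mesh/update", {"vertices": 3})
print(received)  # [{'vertices': 3}]
```

The topic name and payload here are invented for illustration; in the thesis framework the same pattern binds arbitrary data models that must stay synchronized across services and users.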
The framework is divided into several layers and relies on the Presentation Semantics Split Application Model, providing distinct layers, each with clearly defined functionality. The functionality of each layer is isolated and encapsulated into a uniform model for implementation, resulting in a single minimalistic kernel that can execute the requested functionalities, called nodes, at runtime. These nodes can then be chained in parent/child relationships or executed as stand-alone processes, providing easy deployment and scalability. The context-aware adaptive rendering framework is applied in several use-case scenarios from different domains: User Centric Media, Telemedicine, and E-Commerce. User Centric Media focuses on adaptive rendering and support for heterogeneous devices. Telemedicine focuses on collaboration and access to a diverse set of data, both 3D and non-3D, such as 2D extracts from volumetric MRI data. E-Commerce explores the possibilities for augmented reality on mobile devices as a service, and the overall deployment of 3D rendering services with user management.
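The node composition described above can be sketched as follows. This is a hypothetical illustration of the concept, not the framework's actual API: a kernel executes nodes, and nodes chained as children run as part of their parent's execution.

```python
class Node:
    """Illustrative functionality node that can hold child nodes."""

    def __init__(self, name: str) -> None:
        self.name = name
        self.children: list["Node"] = []

    def attach(self, child: "Node") -> "Node":
        """Chain a child node under this one and return it."""
        self.children.append(child)
        return child

    def run(self) -> list[str]:
        """Execute this node, then its children, returning the order."""
        order = [self.name]
        for child in self.children:
            order.extend(child.run())
        return order

# Hypothetical deployment: a kernel running a renderer (with an encoder
# chained under it) and a UI-adaptation node side by side.
kernel = Node("kernel")
renderer = kernel.attach(Node("renderer"))
renderer.attach(Node("encoder"))
kernel.attach(Node("ui-adapter"))
print(kernel.run())  # ['kernel', 'renderer', 'encoder', 'ui-adapter']
```

The same tree could instead be split so that, say, the renderer runs as a stand-alone process on another machine, which is what gives the architecture its deployment flexibility.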
Date: Tuesday, 7 January 2014, at 2:00 pm
Location: Battelle, Building A - Ground-floor auditorium
13 January 2014