MMVE '22: Proceedings of the 14th International Workshop on Immersive Mixed and Virtual Environment Systems
Does having a virtual body make a difference during cinematic VR experiences?
- Elena Dzardanova
- Vlasios Kasapakis
Cinematic VR, either as an in-game cutscene or a standalone 360° 3D movie, is a widespread narration tool for stories unfolding through Immersive Virtual Reality (IVR) experiences. Commonly, users experience cinematics from a 3rd- or 1st-person perspective, while firmly positioned in specific proxemics zones of the characters or events they are meant to take in, with no interactivity options provided. This study examines whether having a virtual body (VB) amplifies users' emotional response, specifically their anxiety levels, and how dependent those responses are on the interpersonal distances between the VB and non-player characters.
Rhythmic stimuli effects on subjective time perception in immersive virtual environments
- Stéven Picard
- Jean Botev
Time perception is an essential component of a user's experience and interaction in immersive virtual environments. This paper explores performance and subjective time perception when carrying out a cognitive task in a virtual environment while being exposed to unrelated rhythmic stimuli. To this end, we devised an experiment comprising a simple object-sorting task with varying rhythmic stimuli, investigating time experience in the form of time estimation and time judgment. The results imply varying effects depending on whether single stimuli or synchronized audio-visual effects are used. Single stimuli can lead to more pronounced time perception variations regardless of tempo, but these variations are not specifically compression or dilation. Synchronized stimuli, in turn, can lead to time compression or dilation, depending on the tempo. The results further imply that time being judged as fast or slow correlates with the stimuli's tempo, while the presence of visual stimuli can negatively impact task performance. Active, purposeful modulation of rhythmic stimuli to tailor individual time experiences can open up exciting opportunities in virtual environment design.
Real-time gaze prediction in virtual reality
- Gazi Karam Illahi
- Matti Siekkinen
- Teemu Kämäräinen
- Antti Ylä-Jääski
Gaze is an important indicator of visual attention, and knowledge of gaze location can be used to improve and augment Virtual Reality (VR) experiences. This has led to the development of VR Head-Mounted Displays (HMDs) with built-in gaze trackers. Given the latency constraints of VR, foreknowledge of gaze, i.e., before it is reported by the gaze tracker, can similarly be leveraged to preemptively apply gaze-based improvements and augmentations to a VR experience, especially in distributed VR architectures. In this paper, we propose a lightweight neural-network-based method utilizing only past HMD pose and gaze data to predict future gaze locations, forgoing computationally heavy saliency computation. Most work in this domain has focused either on 360° or egocentric video, or on synthetic VR content with rather naive interaction dynamics such as free viewing or supervised visual search tasks. Our solution considers data from the exhaustive OpenNEEDS dataset, which contains 6 Degrees of Freedom (6DoF) data captured in VR experiences where subjects were given the freedom to explore the VR scene and/or to engage in tasks. Our solution outperforms a very strict baseline, using the current gaze as the prediction, in real time for sub-150 ms prediction horizons in VR use cases.
The development of a machine learning/augmented reality immersive training system for performance monitoring in athletes
- Mauricio Costa Cordeiro
- Ciarán Ó Catháin
- Thiago Braga Rodrigues
As technology advances in computer graphics, augmented reality (AR) has become an increasingly popular tool for entertainment and learning purposes, especially in the sports sector. Examples can be found in different sports such as rugby, baseball, and soccer, among others. This paper proposes an AR-based training system that can be used as a self-learning tool to improve athletes' decision-making. The system will contain a feedback module that offers users challenges and, based on user performance, makes it possible to track and assess athletes' progress. Users will learn about their limits during the challenges while practicing different activities. As the user becomes physically fatigued, a score will be shown for performance improvement. This work therefore aims to develop a performance-attenuation monitoring system for athletes, thereby contributing to the mental and physical improvement of athletes' performance in the sport practiced.
Subjective evaluation of group user QoE in collaborative virtual environment (CVE)
- Bhagyabati Moharana
- Conor Keighrey
- David Scott
- Niall Murray
Interest in the applications of Extended Reality is growing across many different domains. Collaborative or shared experiences are seen as a primary use case. However, there are surprisingly few research efforts on collaborative design tasks, as opposed to social experiences, using Virtual Reality (VR). In addition, very few research studies have focused on the Quality of Experience (QoE) of small user groups working together on collaborative tasks. In this paper, the authors present the results of an experimental study conducted to understand the user experience of collaborative tasks using VR. The paper presents some initial analysis of self-reported questionnaire data. Two users allocated different roles (Describer and Finder) join remotely to perform a design task collaboratively in immersive VR. The results compare user QoE between the two groups (Describer Group vs. Finder Group) and consider how different roles and positions produce different levels of immersion, interaction, collaboration, post-usage acceptability, and system-related consequences. Self-reported measures via a post-test questionnaire (15 questions) show statistically significant differences in the perceived QoE aspects between the two groups.
Perceptually enhanced shadows for OST AR
- Chun Wei Oio
- John Dingliana
Shadows are important spatial cues for virtual objects in augmented reality (AR). The common assumption is that a shadow should be darker than its surroundings, as it is the result of something obscuring the light source. However, darkening pixels is difficult in contemporary optical see-through (OST) head-mounted displays (HMDs) due to the additive display technologies that they employ. To address this issue, some previous methods create the illusion of darker shadow regions by brightening the pixels surrounding a shadow, but the visibility of shadows created by such methods is limited in well-illuminated scenes and on surfaces with complex features. This paper presents a method that imbues cast shadows with specific hues to improve the perceptual contrast and, thus, the visibility of shadows in OST AR displays. The color of the cast shadow is computed as a complement of the perceptually weighted aggregate of the colors captured in the surrounding real scene. This approach is implemented on the Microsoft HoloLens OST AR HMD and coupled with a method for adjusting colors to compensate for the tint of the visor. The improvement in visibility is demonstrated with various sample scenes.
Effects of emotions on head motion predictability in 360° videos
- Quentin Guimard
- Lucile Sassatelli
While 360° videos watched in a VR headset are gaining in popularity, it is necessary to lower the required bandwidth to stream these immersive videos and obtain a satisfying quality of experience. Doing so requires predicting the user's head motion in advance, which has been tackled by a number of recent prediction methods considering the video content and the user's past motion. However, human motion is a complex process that can depend on many more parameters, including the type of attentional phase the user is currently in, and their emotions, which can be difficult to capture. This is the first article to investigate the effects of user emotions on the predictability of head motion, in connection with video-centric parameters. We formulate and verify hypotheses, and construct a structural equation model of emotion, motion and predictability. We show that the prediction error is higher for higher valence ratings, and that this relationship is mediated by head speed. We also show that the prediction error is lower for higher arousal, but that spatial information moderates the effect of arousal on predictability. This work opens the path to better capture important factors in human motion, to help improve the training process of head motion predictors.
The development of a glove-like controller interface for VR applications: a low-cost concept application with haptic and resistive feedback
- Eduardo Pereira Salgado
- Eurico Marques Salgado
- Débora Pereira Salgado
- Thiago Braga Rodrigues
Smart gloves are wearable interfaces which provide detection of movements and gestures, kinesthetic feedback, and even tactile feedback. These capabilities are important for immersive applications, as they can bring more realistic features. This paper presents a concept for a work-in-progress smart glove application. The proposed hardware uses resistive flex sensors and electromagnets to provide tactile and haptic feedback for immersive applications such as those that use Virtual Reality (VR). The project will be carried out in a way that meets user-centric requirements: the glove must be lightweight, comfortable, adjustable to different hand and finger shapes, and low-cost to manufacture. The intention is to use this glove as part of a larger framework and immersive application.
Exploring non-verbal cues and user attention in IVR with eye tracking technologies
- Vlasios Kasapakis
- Elena Dzardanova
- Vasiliki Nikolakopoulou
- Spyros Vosinakis
- Ioannis Xenakis
- Damianos Gavalas
This study is a between-groups preliminary evaluation of how Non-Verbal Cues (NVCs) impact participant attention and the degree of social presence with a Non-Player Character (NPC). Participants are divided into two groups and witness the monologue of an agent who features high-fidelity NVCs (motion-captured gaze, blinking, and lower facial expressions) in one group, but has those "turned off" for the other. This study aims at establishing an appropriate data collection methodology and scenario-related guidelines for follow-up experimentation with a higher volume of interaction variables, particularly real-time interaction between remotely located users. Initial results indicate that real-time tracked NVCs enhance engagement and social presence to varying degrees compared to low-fidelity automated cues.