In connection with Niels Nilsson’s PhD defense, we are organizing a Workshop on Walking in Virtual and Augmented Reality
on October 27th, 9AM, Aalborg University Copenhagen – Canteen auditorium – A.C. Meyers Vænge 15, 2450 Copenhagen
11:15-11:35 Cumhur Erkut: Sonic Interaction Design Extensions
11:35-12:05 Georgios Triantafyllidis: Computer vision techniques for virtual and augmented walking
12:05-13:00 Lunch break
13:00-13:20 Dan Overholt: Experiments in Augmented Reality Audio-Haptics
13:20-13:40 Justyna Maculewicz: Rhythmic walking interactions with auditory feedback
13:40-14:00 Paolo Burelli: Interaction and Discourse in Game Cinematography
14:00-14:20 Erik Sikström: Auditory feedback for immersive virtual reality and self-actions
14:20-14:40 Francesco Grani: Wavefield synthesis and sound for virtual reality
9:00 Frank Steinicke: Perception and Cognition during Redirected Walking
In his wonderful essay from 1965, Ivan Sutherland described his vision of a future immersion into a computer-generated environment via novel types of multimodal input and output devices. He concludes with the following vision: “With appropriate programming such a display could literally be the Wonderland into which Alice walked.” Unfortunately, even today real walking through virtual environments (VEs) is often not possible due to space constraints in the real world as well as technological limitations in this sector.
However, redirected walking provides a solution to this problem by allowing users to walk through a large-scale immersive VE while physically remaining in a reasonably small workspace. To this end, manipulations are applied to the virtual camera motion so that the user’s self-motion in the virtual world differs from his or her movements in the real world. Previous work found that the human perceptual system tolerates a certain amount of inconsistency between proprioceptive, vestibular and visual sensations in immersive VEs, and even compensates for slight discrepancies with recalibrated motor commands.
In this talk I will summarize the previous work on redirected walking and present the results of several experiments, which we performed to identify how much users can be tricked. Furthermore, we will consider whether such manipulations impose cognitive demands on the user, which may compete with other tasks in VEs for finite cognitive resources.
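The core manipulation is simple to sketch: each frame, the user’s real translation and rotation are scaled by gains before being applied to the virtual camera. A minimal illustration in Python, with made-up gain values (the empirically estimated detection thresholds are the subject of the talk, not these numbers):

```python
# Illustrative redirected-walking gains. The values below are placeholders,
# not empirically validated detection thresholds.
TRANSLATION_GAIN = 1.2   # virtual metres per real metre walked
ROTATION_GAIN = 0.8      # virtual degrees per real degree turned

def redirect(real_step_m, real_turn_deg):
    """Map one frame of real motion to virtual camera motion.

    real_step_m: distance walked in the real workspace (metres)
    real_turn_deg: head yaw change in the real workspace (degrees)
    Returns the (distance, yaw change) applied to the virtual camera.
    """
    virtual_step = real_step_m * TRANSLATION_GAIN
    virtual_turn = real_turn_deg * ROTATION_GAIN
    return virtual_step, virtual_turn

# A 0.5 m step with a 10-degree turn is amplified/compressed accordingly:
step, turn = redirect(0.5, 10.0)
```

With gains above 1, the user covers more virtual than real ground; with rotation gains below 1, the user physically turns more than the camera does, which is what keeps them inside the tracked workspace.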
10:00 Mary Whitton: Making Walking-in-Place feel like Really Walking
Since 1998, when the EVE team replicated and extended the work reported in Slater, Usoh, and Steed’s 1995 paper “Taking Steps: The Influence of a Walking Technique on Presence in Virtual Reality”, which compares the sense of presence evoked by different locomotion techniques, a component of our research has focused on building, evaluating, and comparing the effectiveness of a variety of virtual locomotion techniques. One of the primary threads of our work has been improving walking-in-place (WIP) interfaces to make them feel more like real walking.
Detecting the user’s footsteps and converting them into appropriate viewpoint motion is the central challenge in WIP. Techniques for footstep detection dominated WIP research in the early years, and we, and others, tried a wide variety of techniques as sensors improved. Once footsteps could be reliably detected, we began to concern ourselves with the speed profile of how the virtual viewpoint moved through the scene and the optic flow that resulted from that movement. The focus changed from being able to move through the environment at all to being able to move like humans really move when they walk. The team worked to enable patterns of walking speed that match the model for human walking provided in the biomechanics literature.
In this talk I’ll review the WIP systems the EVE team has developed and used, report how the techniques performed when used in studies comparing locomotion techniques, and describe a system that uses a customized footstep-to-speed function that is our most natural-feeling technique to date.
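A footstep-to-speed function of the kind described can be sketched as follows. The linear cadence-to-step-length model and its coefficients are illustrative placeholders, not the EVE team’s calibrated values:

```python
# Hypothetical footstep-to-speed mapping for a walking-in-place interface.

def step_length_m(step_freq_hz):
    """Biomechanics-style model: step length grows with cadence.
    The coefficients are placeholders for a per-user calibration."""
    return 0.3 + 0.25 * step_freq_hz

def viewpoint_speed(step_timestamps, window=3):
    """Estimate virtual viewpoint speed (m/s) from recent in-place steps."""
    if len(step_timestamps) < 2:
        return 0.0
    recent = step_timestamps[-window:]
    dt = (recent[-1] - recent[0]) / (len(recent) - 1)  # mean step period (s)
    freq = 1.0 / dt                                    # cadence (Hz)
    return freq * step_length_m(freq)                  # speed = cadence * step length

# In-place steps detected at 0.5 s intervals (2 Hz cadence):
speed = viewpoint_speed([0.0, 0.5, 1.0, 1.5])
```

Averaging over a short window of recent steps smooths the speed estimate; making step length depend on cadence is what lets the viewpoint speed up and slow down the way real walking does, rather than moving at a single fixed rate per step.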
Work reported in this talk was performed by members of the Effective Virtual Environments (EVE) research team co-led by Whitton and Dr. Fred Brooks.
11:15 Cumhur Erkut: Sonic Interaction Design Extensions
Despite their advantages in sound quality and interaction fidelity, physically-based sound synthesis and spatialization techniques are less frequently used for sonic interactions in virtual environments than perceptual techniques. This is partly because of the high computational demands of physical techniques, and partly due to a lack of associated analysis and parameter estimation methods. If physical techniques of sound synthesis and spatialization can be combined and consolidated, computational savings could be achieved. Object-based rendering can then be computed in real time with high quality, and the undesired artifacts of the individual techniques can be jointly minimized. Currently we are investigating the combination of block-based physical models (BBPMs) with spatial sound, specifically wave field synthesis (WFS), and plan to apply this combination in scenarios that require dynamic synthesis of volumetric virtual sound sources. In practice, this will enable the construction of malleable virtual sound sources that surround and move around a listener, while changing their shape based on the listener’s actions. Visualization of malleable and dynamic objects is currently being explored by new frameworks such as holographic computers and head-mounted displays, but there is no associated sound design technology or practice to get their sounds right. We hope to extend the state of the art in the field of real-time interactive audio, with applications in virtual and augmented reality, multimedia technologies, and computer games.
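As a rough illustration of the WFS side of this combination: in the textbook delay-and-attenuate simplification, each loudspeaker of a linear array plays a delayed, scaled copy of the source signal so that the wavefronts recombine as if emitted by the virtual source. The sketch below computes those per-speaker parameters only; it omits the pre-equalisation and tapering of the full WFS driving function:

```python
import math

C = 343.0  # speed of sound in air, m/s

def wfs_driving_params(source_xy, speaker_positions):
    """Per-speaker (delay in seconds, amplitude weight) for a virtual
    point source behind a loudspeaker array. Simplified delay-and-attenuate
    model: delay = distance / c, amplitude ~ 1 / sqrt(distance)."""
    params = []
    for sx, sy in speaker_positions:
        r = math.hypot(sx - source_xy[0], sy - source_xy[1])
        params.append((r / C, 1.0 / math.sqrt(max(r, 1e-6))))
    return params

# Virtual source 1 m behind the centre of a small 3-speaker array:
array = [(-0.5, 0.0), (0.0, 0.0), (0.5, 0.0)]
params = wfs_driving_params((0.0, -1.0), array)
```

The speaker nearest the virtual source fires first and loudest; symmetric speakers get identical parameters, which is what preserves the source’s apparent position for listeners anywhere in front of the array.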
Cumhur Erkut (M.Sc. 1997, D.Sc. 2002) is an assistant professor at Aalborg University Copenhagen. His research interests include physical modeling, sonic interaction design, embodied multisensory interaction, and, more recently, interactive sound field synthesis. Dr. Erkut received his doctorate in acoustics and audio signal processing from Helsinki University of Technology, Finland. He currently applies his expertise in physics-based digital sound synthesis to sonic interaction design, rhythmic interactions, product sound design, mobile audio programming, and game audio, combining the theory and methods of human-computer interaction and audio signal processing. Dr. Erkut has about 50 peer-reviewed publications and three international patents.
11:35 Georgios Triantafyllidis: Computer vision techniques for virtual and augmented walking
Computer vision (CV) gives us artificial vision to sense the real world. Virtual Reality/Augmented Reality (VR/AR) gives us real vision to sense an artificial world.
The connection between these two disciplines is obvious: CV techniques can be used effectively to detect hand and finger movements. VR devices could then simply see the hands and provide accurately located representations of them in the VR space. This way, physical controllers can be replaced and control suddenly becomes far more intuitive. Objects in the virtual world can be picked up and manipulated just as you would in the real world, removing the barrier of artificial interaction that would take you out of the moment and shatter the VR illusion.
But this is only one example of how CV can be used in VR/AR applications, and there are many more cases (scene/object analysis, face recognition, behaviour detection, etc.) where CV can add value to such applications. In fact, poor CV can be the limiting factor for most, if not all, AR and VR devices: the ultimate winners in this industry will, quite simply, be those with access to the best vision technology. This is why companies like Oculus and Sony are now acquiring computer vision startups.
In this short presentation, we will investigate and present the current use and the future of using CV to enrich the VR/AR experience.
13:00 Dan Overholt: Experiments in Augmented Reality Audio-Haptics
Despite the clear advantage of personal sound and/or haptic feedback being harder to miss (and not requiring users to look directly at a screen), most interaction within today’s augmented reality systems occurs visually. Sound spatialization techniques are less frequently used, or commonly given a ‘back seat’ to visual elements, yet sonic interactions can be powerful, especially when integrating cross-modal audio-haptic perceptual techniques, for example in navigation tasks. If these techniques – sound synthesis with spatialization, and directional haptic feedback – can be combined and leveraged, screen-less interaction can be achieved at a higher level than in prior work. Currently we are investigating the combination of bone-conduction headsets with spatial sound, both via psychoacoustic Head-Related Transfer Functions (HRTFs) and via direct Vector-Based Amplitude Panning (VBAP), for rendering navigation cues in the context of the EU project ‘Culturally Enhanced Augmented Realities’ (CultAR – http://www.cultar.eu/). We hope to extend the research and practice within real-time interactive augmented-reality audio-haptics, and plan to apply this combination in scenarios that require dynamic rendering of enhanced sonic ‘scenes’. This will enable the construction of malleable augmented-reality soundscapes that surround and move around a listener, while maintaining the sounds’ perceived locations as the listener’s orientation changes. Both static and dynamic sound objects can be perceived as having locations within the real world by tracking the user’s movements (head-mounted IMU, GPS when outdoors, other localisation systems when indoors) and rendering via a variety of experimental audio-haptic headset designs.
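For reference, pairwise 2-D VBAP computes gains for the two loudspeakers bracketing the source direction by solving a small linear system against the speaker directions and then normalising. A minimal sketch, with illustrative speaker angles:

```python
import math

def vbap_2d(source_deg, spk1_deg, spk2_deg):
    """Pairwise 2-D VBAP gains for a source direction between two
    loudspeakers. Solves g1*l1 + g2*l2 = p by Cramer's rule, then
    normalises so that g1^2 + g2^2 = 1 (constant rendered power)."""
    p = (math.cos(math.radians(source_deg)), math.sin(math.radians(source_deg)))
    l1 = (math.cos(math.radians(spk1_deg)), math.sin(math.radians(spk1_deg)))
    l2 = (math.cos(math.radians(spk2_deg)), math.sin(math.radians(spk2_deg)))
    det = l1[0] * l2[1] - l2[0] * l1[1]
    g1 = (p[0] * l2[1] - l2[0] * p[1]) / det
    g2 = (l1[0] * p[1] - p[0] * l1[1]) / det
    norm = math.hypot(g1, g2)
    return g1 / norm, g2 / norm

# A source straight ahead, midway between speakers at +/-45 degrees,
# gets equal gains on both speakers:
g1, g2 = vbap_2d(0.0, 45.0, -45.0)
```

The same gain computation applies whether the ‘speakers’ are physical transducers on a bone-conduction headset or virtual sources that are subsequently HRTF-rendered.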
Dan Overholt is an Associate Professor at Aalborg University Copenhagen. His research interests include advanced technologies for interactive interfaces and novel audio signal processing algorithms, with a focus on new techniques for creating music and interactive sound. He is involved in the development of tangible interfaces and control strategies for processing human gestural inputs that allow interaction with a variety of audiovisual systems. Dan is also a composer, improviser, inventor and instrument builder who performs internationally with his new musical instruments and custom sound synthesis and processing algorithms. Dr. Overholt received a PhD in Media Arts and Technology from the University of California, Santa Barbara, and an M.S. in Media Arts and Sciences from the Massachusetts Institute of Technology. He has about 40 peer-reviewed publications and two patents (one provisional).
13:20 Justyna Maculewicz: Rhythmic walking interactions with auditory feedback
Walking is an activity that plays an important part in our daily lives. In addition to being a natural means of transportation, walking is also characterized by the resulting sound, which can provide information about the surface, the type of shoe, and the movement speed, as well as the person’s age, weight, and physical condition. Until now, few studies have examined the role of interactive auditory feedback produced by walkers in affecting walking pace.
The general goal of my research is to test rhythmic interactive walking with auditory feedback from several perspectives. From a quantitative perspective: to see how different types of auditory cues influence tempo stability within different pace ranges, and how the feedback can influence participants’ preferred pace. From a qualitative perspective: to see how different feedback and cues shift the perception of naturalness and the perceived ease of synchronization. I am mostly focused on ecological sounds, which are seen as signals that convey richer and more useful information than a simple metronome. Several interesting effects have emerged from my research so far. When participants are asked to walk at their preferred pace and different feedback sounds are presented, their pace changes. Until now I have been testing gravel and wood sounds as ecological stimuli, and a tone as a non-ecological sound. Results show that participants walk slowest with gravel feedback, then wood, while a tone motivates the fastest walking. When they are asked to synchronize with the above-mentioned sounds, the results are similar, with slightly worse performance with gravel cues. Even though this feedback produces the highest synchronization error, it is perceived as the easiest to follow. I am looking to neurological data for an explanation of these results. An exploratory electroencephalographic (EEG) study was conducted to test neural activity during interactive rhythmic walking in place at three different tempi (80, 105 and 130 BPM) in the presence of four auditory cues: two ecological (gravel, wood) and two non-ecological (artificial aggregate, artificial solid). By analysing power spectral density in the alpha, beta and gamma wave bands, we studied brain activations correlated with attention, motor behaviour, and the semantic information carried by the ecological and non-ecological auditory signals. The results revealed the highest alpha activation for artificial solid sounds, significantly different from the gravel sound. Beta activity was highest for both non-ecological signals, and gamma for both aggregate sounds. We believe that these results encourage further exploration of ecological vs. non-ecological auditory cues in rhythmic walking tasks, especially in the context of gait rehabilitation in Parkinson’s disease.
Knowledge acquired through these experiments will be useful in the design of feedback and cues for gait rehabilitation (motivation, pace control, balance control, movement continuation), exercise (motivation, perceived exertion, vigor) and entertainment (virtual reality applications).
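The band-power analysis mentioned in the abstract can be illustrated with a simple FFT periodogram. The sampling rate and band edges below are common EEG conventions, not necessarily those used in the study:

```python
import numpy as np

# Common EEG band edges in Hz (illustrative; conventions vary).
BANDS = {"alpha": (8, 13), "beta": (13, 30), "gamma": (30, 80)}

def band_powers(signal, fs):
    """Total power per EEG band from a simple FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

# A pure 10 Hz tone should put nearly all of its power in the alpha band:
fs = 250.0
t = np.arange(0, 2.0, 1.0 / fs)
powers = band_powers(np.sin(2 * np.pi * 10 * t), fs)
```

Comparing these band powers across cue conditions (gravel, wood, artificial aggregate, artificial solid) is the kind of contrast the study reports; a production analysis would typically use Welch averaging and artifact rejection rather than a raw periodogram.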
Justyna Maculewicz is a PhD fellow at Aalborg University Copenhagen. Her research interests include rhythmic motor tasks with auditory and haptic feedback. Maculewicz received a BS in acoustics and an MS in cognitive science from Adam Mickiewicz University in Poznan. She focuses on ecologically valid audio and haptic feedback and its influence on tempo-based exercise for entertainment and rehabilitation.
13:40 Paolo Burelli: Interaction and Discourse in Game Cinematography
Building on the definition of cinematography from the Oxford Dictionary of English, game cinematography can be defined as the art of visualising the content of a computer game.
The relationship between game cinematography and its traditional counterpart is extremely tight as, in both cases, the aim of cinematography is to control the viewer’s perspective and affect his or her perception of the events represented.
However, game events are not necessarily pre-scripted, and player interaction has a major role in the quality of a game experience. The role of the camera, and the challenges connected to it, are therefore different in game cinematography: the virtual camera has to dynamically react to unexpected events to correctly convey the game story, while also taking the player’s actions and desires into consideration to support his or her interaction with the virtual world.
Furthermore, in computer games, the virtual camera is also responsible for supporting player interaction and, while traditionally the virtual camera supports narration and interaction in different phases of a game, there are a number of commercial examples of games in which these two aspects overlap.
In this presentation we will discuss how the contrast between the player’s freedom of interaction and the designer’s control of the drama is at the core of interactive immersive experiences, what the consequences are in terms of player experience and design process, and how artificial intelligence can help overcome the current limitations.
Paolo Burelli is an assistant professor at the Department of Architecture, Design and Media Technology of Aalborg University Copenhagen. He is co-founder of the Augmented Cognition Lab and a member of the IEEE Computational Intelligence Society Games Technical Committee.
Dr. Burelli’s current primary research interest lies in the investigation of the interplay between virtual cinematography and the user’s cognitive and affective state. More specifically, he designs intelligent mechanisms that drive the game viewpoint to optimise the player experience. He has published broadly in the areas of computer graphics, virtual camera control, optimisation and artificial intelligence.
14:00 Erik Sikström: Auditory feedback for immersive virtual reality and self-actions
This presentation briefly covers some aspects of the technology and user experience of auditory feedback in immersive virtual reality (IVR), with a particular focus on self-actions such as walking. Methods for producing and controlling feedback events, as well as the generation of automated Foley sounds, are discussed along with experimental results from a few user studies on the experience and perception of auditory feedback from locomotion in IVR. The studies concern subjects’ perception of avatar weight from footstep sounds, as well as the influence of presentation formats (IVR, and two passive formats: non-interactive audiovisual and non-interactive audio only) on the evaluation of avatar weight based on the first experiment. Additionally, some results from an evaluation of auditory feedback for a non-conventional type of locomotion in IVR (flying) will be presented and discussed. The presentation is rounded off with some thoughts on issues encountered in the conducted research.
Erik Sikström is a PhD fellow at Aalborg University Copenhagen working with sonic interaction design in multimodal environments. His work currently focuses on the user experience and interaction design related to self-produced sounds in immersive virtual reality. Earlier, he studied at Luleå University of Technology, the Hong Kong Polytechnic University and the Academy of Contemporary Music in Guildford, UK.