At IEEE Virtual Reality 2017, Los Angeles, CA, USA, March 19, 2017
Program - Sunday 19th March 2017
1.30-2.00 Welcome and keynote speech: Chanel Summers and Mary Jesse. Creating Immersive and Aesthetic Auditory Spaces in Virtual Reality
2.00-2.15 Daniel Finnegan, Eamonn O'Neill and Michael Proulx. An Approach to Reducing Distance Compression in Audiovisual Virtual Environments.
2.15-2.30 Cumhur Erkut. Rhythmic Interaction in VR: Interplay Between Sound Design and Editing
2.30-2.45 Rongkai Guo, Haiqi Wang and Miao Dong. Towards Understanding the Differences of Using 3D Auditory Feedback in Virtual Environments between People With and Without Visual Impairments.
2.45-3.15 Break and discussion
3.15-4.00 Keynote: Vangelis Lympouridis and Ilya Rostovtsev. Interactive sound demo with VIVE and discussion
4.00-4.15 Lynda Joy Gerry, Emil Rosenlund Hoeg, Lui Thomsen, Stefania Serafin and Niels Christian Nilsson. Binaural Sound Reduces Reaction Time in a Virtual Reality Search Task
4.15-4.30 Reuben Hunter-Mchardy, Alejandro Sauri Suarez, Sam Kalsoum Sernavski, Luis Silva Vieira, Jason-Yves Tissières and Stefania Serafin. A Comparison between Measured and Modelled Head-Related Transfer Functions for an Enhancement of Real-time 3D Audio Processing for Virtual Reality Environments
4.30 Discussion and closing.
Sonic interaction design is defined as the study and exploitation of sound as one of the principal channels conveying information, meaning, and aesthetic/emotional qualities in interactive contexts. This field lies at the intersection of interaction design and sound and music computing.
In the virtual reality community, research on auditory feedback has received rather limited attention compared, for example, to visual feedback or even haptic feedback. However, in communities such as film and product sound design, it is well known that sound is a powerful way to lend meaning and emotion to a scene or a product.
The main goal of this workshop is to raise awareness among the virtual reality community of the importance of sonic elements when designing virtual environments. We will also discuss how research in related fields such as film sound theory, product sound design, sound and music computing, game sound design and computer music can inform designers of virtual reality environments. Moreover, the workshop will feature state-of-the-art research in the field of sound for virtual environments.
Call for Papers
We expect participants to submit a research paper outlining their current research in the field of interactive sound for virtual environments.
Topics can include, but are not limited to:
1) Sound synthesis and design for virtual environments
2) Sound modelling and rendering for virtual environments
3) Sound spatialisation
4) Headphone and loudspeaker reproduction
5) Binaural sound and head-related transfer functions
6) Gestural control of sound in virtual reality
7) Multisensory (audio-visual), (audio-haptics) interactions
8) Personalisation and customisation of virtual auditory displays
9) Navigation and way-finding through sonification
10) Evaluation of user experience and sound quality
11) Sound rendering for 360 videos
The submission website is: https://www.easychair.org/conferences/?conf=sive17
Papers should be 4-6 pages in length and prepared using the IEEE Computer Society conference style format described at: http://www.cs.sfu.ca/~vis/Tasks/camera.html
Posters should be 2-4 pages in length and prepared according to the same template. Accepted papers will be included in the Workshop Proceedings and will be published on IEEE Xplore.
For accepted oral presentations, authors must prepare a 15-20 minute oral presentation to be delivered during the workshop.
For accepted poster presentations, authors must prepare a poster plus a short 2-minute oral presentation to be delivered during the workshop.
Abstract submission: January 26th
Paper submission: February 2nd
Notification of acceptance: February 18th
Submission of final paper: March 1st
Workshop: March 19th, 9:00-15:00
Stefania Serafin is currently Full Professor in sound for multimodal environments at Aalborg University Copenhagen. She received a PhD degree in computer-based music theory and acoustics from Stanford University in 2004, and a Master's in acoustics, computer science and signal processing applied to music from Ircam (Paris) in 1997. She has been a visiting professor at the University of Virginia (2003), and a visiting scholar at Stanford University (1999), Cambridge University (2002), and KTH Stockholm (2003). She was principal investigator for the EU-funded project Natural Interactive Walking, and Danish delegate for the EU COST Action on Sonic Interaction Design. Her main research interests include sound models for interactive systems and multimodal interfaces, and sonic interaction design.
Rolf Nordahl is currently Associate Professor in Medialogy at Aalborg University Copenhagen. He is principal investigator for the EU-funded project Natural Interactive Walking, and earlier did seminal work on the EU project BENOGO (focused on HMD-based photo-realistic VR). He is a recognized member of the expert panel under the Danish Evaluation Institute as well as a member of various steering committees. He frequently publishes journal and conference papers and gives talks at international level; he was recently invited to deliver a special series of lectures at, among other places, Yale University (Connecticut). His research interests lie within VR, (tele)presence, sonic interaction design, new methods and evaluation techniques for VR, and presence and games.
Amalia de Götzen is currently associate professor at Aalborg University in Copenhagen. She graduated in Electronic Engineering at the University of Padova in 2002 and got a PhD in Computer Science from the University of Verona in 2007. She also carried out musical studies obtaining a diploma in pianoforte in 1996 and a diploma in Electronic Music in 2003 at the Conservatorio C. Pollini of Padova.
Since 2002 she has been working in the field of Sound and Music Computing. She has been the coordinator of the Sound and Music Processing Lab SAMPL of the Conservatorio of Padova, in collaboration with the Department of Information Engineering of the University of Padova.
Cumhur Erkut received his Dr.Sc.(Tech.) degree in Acoustics and Audio DSP (EE) from the Helsinki University of Technology (TKK), Espoo, Finland, in 2002. Between 1998 and 2002 he worked as a researcher, and between 2002 and 2007 as a postdoctoral researcher, at the Laboratory of Acoustics and Audio Signal Processing of TKK, where he contributed to various national and international research projects. Between 2007 and 2012, as an Academy Research Fellow, Dr. Erkut led the research project and team Schema-SID [Academy of Finland, 120583], and contributed to the COST IC0601 Action on Sonic Interaction Design (SID). In 2013 he joined the Institute of Inclusive Science and Solutions at the University of Eastern Finland, contributing to research on interactive technologies for children with special needs and for the elderly. Since July 2013 he has been appointed as an assistant and then associate professor in Medialogy at Aalborg University Copenhagen.
Niels Christian Nilsson is currently Assistant Professor in Medialogy at Aalborg University Copenhagen. He received a PhD degree in natural interaction in virtual reality from Aalborg University Copenhagen in 2015. His research interests include user experience evaluation, presence studies and consumer virtual reality systems.
Francesco Grani is currently Assistant Professor at Aalborg University Copenhagen. He graduated in Engineering at the University of Padova in 2006, completed his PhD in 2011 and subsequently graduated in Electronic Music in 2012. Since 2011 he has also worked as an external consultant for private companies, designing software and hardware systems for spatial sound, mostly in real-time environments and lately for VR. His research interests include spatial audio, sonic interaction design and user experience. He also works in interaction design, with a focus on the design and prototyping of novel physical interfaces and connected devices.
Federico Avanzini is currently Associate Professor at the University of Padova. He received his Ph.D. degree in computer science from the University of Padova in 2002. In 2001 he was a visiting researcher at the Helsinki University of Technology. Since 2002 he has worked at the University of Padova, first as a post-doc researcher and then (since 2005) as Assistant Professor. His main research interests concern algorithms for sound synthesis and processing, non-speech sound in human-computer interfaces, and multimodal interaction.
Dr. Avanzini has been key researcher and principal investigator in several national and international research projects. He has authored more than 100 publications on peer-reviewed international journals and conferences, and has served in several program and editorial committees. He was the General Chair of the 2011 International Conference on Sound and Music Computing, and is currently Associate Editor for the international journal Acta Acustica united with Acustica.
Michele Geronazzo received his M.S. degree in Computer Engineering and his Ph.D. degree in Information & Communication Technology from the University of Padova, in 2009 and 2014, respectively. He is currently a postdoctoral research assistant at the Dept. of Neurosciences, Biomedicine and Movement Sciences of the University of Verona, where he works in the "Laboratory Action Perception" and is involved in the AASSCI project with Cochlear Ltd. (main topics: virtual acoustics, cochlear implants, and human motor control). His main research interests involve binaural spatial audio modeling and synthesis, multimodal virtual/augmented reality, and sound design for human-computer interaction.
For more information, contact sts at create.aau.dk
With the support of:
Multisensory Experience Lab
Sound and Music Computing Network
Personal Auditory Displays for Virtual Acoustics