Audio–Tactile Feedback in Volumetric Music Video

With the reinvigoration of XR technology, the current market offers several innovative modes of music creativity in 3D computer-generated imagery (CGI) production environments that can be accessed via virtual reality (VR) head-mounted displays (HMDs). To facilitate multimodality in VR, audio–tactile haptic feedback in volumetric music videos can have a positive impact on user experience.

Keywords: volumetric video; virtual reality; music; user experience; audio–tactile feedback

1. Introduction

The use of haptic technology, which mimics the sense of touch through force and vibration, has raised questions about its relevance in contemporary artistic practices of the 21st century. In music, the concept of musical haptics has long explored the connection between auditory experiences and somatosensory stimulation using acoustic sound-generating musical interfaces (Papetti and Saitis 2018). In today's digital music landscape, musicians can utilize multimodal and 3D interactive platforms to interact with digital sound generators, giving them greater control over their musical creations. Moreover, the resurgence of extended-reality (XR) technology (Evans 2018), such as affordable computational ambisonics and volumography, offers the next generation of musicians unique control over audience perspectives, making it a promising field of creative media research.
Beyond musicians, the concept of a 21st-century musical performance has evolved, moving beyond traditional static setups to embrace novel immersive technologies like augmented and virtual reality (AR/VR). Digital performances on XR devices can now reach wider audiences through contemporary multimodal AR/VR head-mounted displays (HMDs). By incorporating multimodality in music production, new interaction paradigms are emerging. Audiences can experience stimulating and cognitively challenging performances through immersive installations or recitals, with musical haptics becoming a distinct and captivating element. Using haptic technology in virtual performances also indirectly affects the audience's presence and engagement.

In the VR domain, digital audio workstations (DAWs) come in various forms. With the rise of XR technology, there are innovative modes of music creativity in 3D computer-generated imagery (CGI) production environments accessible via VR HMDs. Haptic technology has gained recognition as a crucial component of XR, bringing the sense of touch into what was previously an audiovisual-focused technology. As new paradigms for home media consumption emerge, artists can now interact with their audiences in novel and exciting ways. Consequently, emergent 3D capture and display systems are becoming essential instruments for new audiovisual production techniques, continually shaping music performance.

For audiences, the experience of a live musical performance is momentary and inherently shared with others, making it difficult to replicate outside of its original context. While traditional digital technology can capture the audiovisual elements effectively, it often fails to convey the feeling and intimacy of a live audience. Even when the audience is unaware of vibrations, these can influence recognizable features of the experience, such as presence (Cerdá et al. 2012). In the realm of VR, the exploration of soloistic performance experiences and feelings of presence has been a subject of research interest for many years.

2. Audience Experiences of Audio–Tactile Feedback in a Novel Virtual Reality Volumetric Music Video

Several studies have explored how to augment an audience’s experience in live performances, such as theater, dance, and music (Sparacino et al. 1999; Hödl 2016). Researchers have explored new ways for seated audiences to experience actuators embedded in chairs that provide audio-based vibrotactile stimuli (Merchel and Altinsoy 2009; Nanayakkara et al. 2009; Karam et al. 2010). While seating is available in most drama and dance performances, standing is often required at live pop, rock, or dance music concerts. Still, relatively few haptic interfaces have been developed for standing-only audiences, with notable exceptions providing free-standing capabilities (Gunther and O’Modhrain 2003; West et al. 2019; Turchet et al. 2021). This factor is significant when considering the contemporary application of immersive technology in musical performance.
VR technology is hardware that harnesses multimodal human–computer interaction to create the feeling of presence in a virtual world (Seth et al. 2011). Thus, contemporary VR employs numerous advanced digital technologies to immerse users in imaginary digital worlds. VR, as a technology, is nascent; however, virtual realities, in general, have long existed as immersive media entertainment experiences—in books (Saler 2012; Ryan 1999), films (Visch et al. 2010), theatre (Reaney 1999; Laurel 2013), and games (Jennett et al. 2008). The immersive qualities of such works are often attributed to the quality of the work rather than their ability to stimulate multiple senses at once, for example, vision with film and audio with music. VR experiences are not necessarily modally locked in the same way as other media and can stimulate audiences’ senses differently from traditional immersive media.
Haptic cues in music performance and their perception have been observed to affect user experiences—including usability, functionality, and the perceived quality of the musical instruments being used (Young and Murphy 2015b). Haptics can also render and exploit controlled feedback for digital musical instruments (DMIs) (Young and Murphy 2015b). This creative application space highlights the multidisciplinary power of musical haptics from the perspectives of computer science, human–computer interaction, engineering, psychology, interaction design, musical performance, and theatre. Therefore, it is hoped that the presented study will contribute to a multidisciplinary understanding of musical haptics in 21st-century artistic practices. The role of supplementary senses in immersive media is often undervalued or misrepresented in reductive, single-sensory approaches to lab-based research. In the wild, audiences do not experience a single stimulus while consuming art; they use all their senses to experience the world of live music performance holistically. A notable example is the profoundly deaf percussionist Evelyn Glennie, who has used vibrotactile cues in her musical performances to recognize pitch based on where the vibrations are felt on the body (Glennie 2015).

2.1. Immersive Virtual Environments and Presence

Psychologically, virtual realities are presented as 3D immersive virtual environments (IVEs) that digitally provide sensory stimuli enveloping the user’s senses, creating the perception that the IVE is genuine and not synthetic (Blascovich et al. 2002). IVEs have been used for years to convey virtual realities via CAVE and HMD systems (Mestre 2017). Today, VR technology can serve as a sophisticated psychological platform for cultural heritage (Zerman et al. 2020), theatre performance (O’Dwyer et al. 2022), teaching (Wang et al. 2021), and empathy building (Young et al. 2021).
The most common concepts in discussions about virtual realities are immersion, presence, co-presence, flow, and simulation realism. Immersion is “the degree of involvement with a game” (Brown and Cairns 2004, p. 1298). Immersion is also a deep engagement when people “enter a make-believe world” (Coomans and Timmermans 1997, p. 6). While some research points to experiencing virtual engagement or disassociation from reality in virtual worlds (Brown and Cairns 2004; Coomans and Timmermans 1997; Haywood and Cairns 2006; Jennett et al. 2008), others consider immersion as a substitution for reality by virtuality and becoming part of the virtual experience (Grimshaw 2007; Pine and Gilmore 1999). Immersion also includes a lack of awareness of time and the physical world, feeling present within a virtual world, and a sense of real-world dissociation (Haywood and Cairns 2006; Jennett et al. 2008). While broad, these definitions of immersion are universally applicable to VR technology. Moreover, it should also be noted that measures of immersion target the technology and not the user’s experience of the IVE.
Factors of presence, on the other hand, can be classified as subjective experiences (Witmer and Singer 1998). As an aspect of immersion, presence can indicate whether a “state of deep involvement with technology” has been achieved (Zhang et al. 2006, p. 2). Therefore, presence can be defined as a “state of consciousness, the (psychological) sense of being in the virtual environment” (Slater and Wilbur 1997, p. 605). Whether directly or indirectly, immersion is required to induce presence. Furthermore, the social aspect of a virtual experience, as co-presence, is also a factor for consideration (Slater and Wilbur 1997), as is the state of “flow.” Flow describes the feeling of full engagement and enjoyment of an activity (Csikszentmihalyi et al. 2016; Csikszentmihalyi and Larson 2014) and is strongly linked to feeling present and to increased task performance in IVEs (Weibel et al. 2008). VR development is driven by the pursuit of simulation realism (Bowman and McMahan 2007). The conscious sense of presence is modeled by presenting bodily actions as possible actions in the IVE and suppressing incompatible sensory input (Schubert et al. 2001). However, a digital representation does not require perfect rendering to be perceived as physically accurate (Witmer and Singer 1998). Furthermore, objective and subjective realism do not always align when an audience experiences esthetic art practices.
In creative media practices, the connection between presence and visual esthetics remains relatively unexplored and could be assessed from an immersive-arts perspective informed by realism as an art movement. The relationship between IVEs and esthetics may carry further consequences, as esthetics is associated with pleasure and positive emotions (Reber et al. 2004; Hekkert 2006). Therefore, the feeling of presence induced by immersive technologies may also bring satisfaction and positive affect. As such, presence measures can be effectively applied in user experience studies to evaluate different artistic virtual realities presented in IVEs, without relying on visual realism for immersion.
Using haptics in VR experiences can help increase feelings of perceived presence (Sallnäs 2010), and the effect of haptics on the presence of virtual objects has also been observed (Gall and Latoschik 2018). Moreover, multimodal IVEs consisting of video, audio, and haptic feedback have been shown to affect the expectations and satisfaction levels of professional and conventional users (García-Valle et al. 2017). Therefore, the design of a haptic experience can be evaluated from the perspective of the audience, the performer/composer, the instrument designer, or the manufacturer (Barbosa et al. 2015). The goal of each stakeholder is different, and their means of assessment vary accordingly.

2.1.1. Volumetric Video

Volumetric video (VV) is a media format representing 3D content that is captured and reconstructed from the real world by cameras and other sensors, in representations commonly used in computer graphics (Smolic et al. 2022). VV enables the visualization of such content with full six degrees of freedom (6DoF). Over recent decades, VV has attracted interest from researchers in computer vision, computer graphics, multimedia, and related fields, often under other terms such as free-viewpoint video (FVV) and 3D video. However, commercial application has been limited to a few special-effects and game-design cases. Recent years have seen significant interest in VV across research, industry, and media-streaming standardization. On the one hand, this reinvigoration is driven by the maturation of VV content-creation technology, which today reaches acceptable quality for various commercial applications. On the other hand, the current interest in extended reality (XR) also drives the importance of VV, because VV facilitates bringing real people into immersive XR experiences.
Traditionally, VV content creation starts with synchronized multiview video capture in a specifically designed studio. Figure 1 shows an affordable setup used in the V-SENSE lab in Dublin, which uses only 12 conventional cameras. Larger, more complex, and more expensive studios can have up to a hundred cameras and additional depth sensors (Collet et al. 2015). The captured video and other data are typically passed to a dedicated 3D reconstruction process. Classical VV content-creation approaches mainly rely on structure-from-motion (SfM) or shape-from-silhouette (SfS) techniques. While SfM relies on feature extraction and matching and primarily yields a dynamic 3D point cloud, SfS primarily computes a volume populated by the object of interest. Both approaches have their advantages and drawbacks. Pagés et al. (2018) presented a system that combines the benefits of both and addresses the creation of affordable capture setups.
Figure 1. Musical performance VV capture by New Pagans’ Cahir O’Doherty (Left) and Lyndsey McDougall (Right) at the V-SENSE studio in Trinity College Dublin, Ireland.
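To make the shape-from-silhouette idea concrete, the following minimal sketch (in Python with NumPy; the function name, the calibrated inputs, and the grid layout are illustrative assumptions rather than the pipeline of Pagés et al.) carves a voxel occupancy volume from binary silhouette masks: a voxel survives only if it projects inside the silhouette in every calibrated view.

```python
import numpy as np

def shape_from_silhouette(silhouettes, projections, voxels):
    """Carve a voxel occupancy mask from binary silhouettes.

    silhouettes: list of (H, W) binary masks, one per calibrated camera.
    projections: list of (3, 4) projection matrices P = K [R | t].
    voxels: (N, 3) array of voxel centres in world coordinates.
    Returns a boolean (N,) mask: True where the voxel projects inside
    the silhouette in *every* view (the visual hull).
    """
    homog = np.hstack([voxels, np.ones((len(voxels), 1))])  # (N, 4)
    occupied = np.ones(len(voxels), dtype=bool)
    for mask, P in zip(silhouettes, projections):
        uvw = homog @ P.T                        # homogeneous pixel coords
        u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
        v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
        h, w = mask.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(len(voxels), dtype=bool)
        hit[inside] = mask[v[inside], u[inside]] > 0
        occupied &= hit                          # carve away misses
    return occupied
```

The result is a coarse visual hull; production pipelines such as the one cited above refine it further, for example with feature-based matching in the spirit of SfM, to recover surface detail.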
Recently, powerful deep-learning approaches have been presented for 3D geometry processing and reconstruction (Valenzise et al. 2022). For instance, early deep-learning VV reconstruction algorithms could recreate the 3D shape of objects from a particular class, such as chairs, from a single 2D image. The 3D reconstruction of human faces from monocular images or video is another area that has received much attention. Single-image 3D reconstruction of human bodies, as exemplified by PIFu, marks a milestone in this area, alongside real-time monocular human performance capture (Habermann et al. 2019). The resulting VV, a dynamic 3D graphics model, can be rendered and visualized from any viewpoint and viewing direction (6DoF). As such, it can be used as an asset in XR content and other media.

2.1.2. Spatial Sound

The success of a VR experience relies on effectively replacing real-world sensory feedback with a virtual representation (Slater and Sanchez-Vives 2016). Since sounds convey multiple types of information, such as emotional expression, localization information, and environmental cues, auditory feedback is an essential component in the perception of an IVE. The purpose of auditory feedback in immersive media is to replace the existing sounds and the acoustic response of the environment with virtual ones (Schutze 2018). Furthermore, presence, immersion, and interaction are essential for a successful experience in VR development. The more accurate or plausible the auditory representation, the higher the sense of presence, immersion, and place illusion is felt by users (Avanzini 2022).
Spatial audio, often referred to as immersive audio, is any audio production technique that renders sounds with the perceptual properties necessary for them to be perceived as having a distinct direction and distance from the user (Begault 2000; Yang and Chan 2019). Sound localization lets us recognize a sound source’s presence, distribution, and interaction (Letowski and Letowski 2012). It is defined as the collection of perceptual characteristics of audio signals that allow the auditory system to determine a sound source’s specific distance and angular position using a combination of amplitude, monaural cues, inter-aural level differences (ILDs), and inter-aural time differences (ITDs) (Bates et al. 2019). Sound auralization is crucial for creating a plausible auditory scene and increasing the user’s spatial perception and the VR environment’s overall immersiveness. Utilizing a range of acoustic phenomena, such as early reflections and reverberation, produces a realistic auditory response and helps place audio sources in the virtual space (Geronazzo and Serafin 2022; Yang and Chan 2019).
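As a concrete illustration of the ITD cue, the sketch below (Python; the function name and the default head radius are assumptions for illustration, not taken from the cited works) implements Woodworth’s classic spherical-head approximation, which estimates the interaural delay for a far-field source at a given azimuth.

```python
import numpy as np

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Approximate interaural time difference (ITD) in seconds.

    Woodworth's spherical-head model: ITD = a/c * (theta + sin(theta)),
    where a is the head radius, c is the speed of sound, and theta is
    the source azimuth in radians (0 = straight ahead; valid to +/-90).
    """
    theta = np.deg2rad(azimuth_deg)
    return (head_radius_m / c) * (theta + np.sin(theta))

# A source 45 degrees off-centre reaches the far ear ~0.38 ms later,
# a delay the auditory system resolves easily when localizing sound.
print(f"{woodworth_itd(45.0) * 1e3:.2f} ms")
```

Binaural renderers build on such cues, combining ITDs and ILDs with head-related transfer functions (HRTFs) to place virtual sources around the listener.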

2.1.3. Haptics

The sense of touch in humans is often categorized as cutaneous, kinesthetic and proprioceptive, or haptic perception. Haptic perception is achieved through actively exploring surfaces and objects using the forces experienced during contact with mechanical stimuli, including pressure and vibration. In human physiology and psychology, haptic stimuli and their perception by the brain relate to the actions of the somatosensory system and the sensory gathering of force and tactile information immediately affecting a person, all highlighting the existence of corresponding external stimulus sources. Contact with haptic stimuli is usually made via the skin, explicitly stimulating cutaneous receptors in the dermis, epidermis, and ligament tissue. Cutaneous receptors are found in the skin for touch, and proprioceptors are located in the muscles for kinesthetic and proprioceptive awareness. Cutaneous receptors include mechanoreceptors (pressure or distortion), nociceptors (pain), and thermoreceptors (temperature). Mechanoreceptors must be stimulated for a vibration to be felt as touch.
In physics, vibrations are a mechanical phenomenon whereby oscillations occur around an equilibrium point (Papetti and Saitis 2018). On the one hand, “sound” is a vibration that spreads as an “acoustic wave” via some medium and stimulates the auditory system. On the other hand, for haptics, vibration is perceived as cutaneous stimulation, and this somatosensory information allows humans to explore their immediate world. Unlike auditory perception, haptic perception often requires direct physical contact; however, radiated sound can also stimulate the surface of the human body. Airborne vibrations, such as sound, can be perceived by the skin if they are of sufficient amplitude to displace the receptors under the skin, as is often experienced at live concerts.
When an acoustic or digital musical instrument produces a sound, that sound is created by some vibrating element of the instrument’s design or an amplified speaker. Therefore, haptics and music can be innately connected through multimodal vibration, where the biological systems of the somatosensory and auditory systems are engaged simultaneously. The combination of haptic and auditory stimuli can be multimodal and experienced by a performer and audience alike, creating new practices that can be mixed and analyzed in multiple contemporary use-case scenarios. The musician and the audience are reached by vibration through the air and solid media, for example, the floor or the seats of a concert space or stage. However, in the case of the audience, vibrotactile and audio stimuli are experienced passively, as no physical contact is made between the instrument and listener.
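Because mechanoreceptors respond best to low-frequency vibration, one simple way to derive a vibrotactile channel from a music track is to low-pass filter the audio and normalise it before driving an actuator. The sketch below (Python with SciPy; the cutoff frequency and function name are illustrative assumptions, not a method from the cited literature) shows this idea.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def audio_to_vibrotactile(audio, sample_rate, cutoff_hz=250.0, order=4):
    """Derive a vibrotactile drive signal from an audio track.

    Cutaneous mechanoreceptors are most sensitive to low-frequency
    vibration (roughly 40-400 Hz), so the audio is low-pass filtered
    and peak-normalised to [-1, 1] for an actuator driver.
    """
    sos = butter(order, cutoff_hz, btype="low", fs=sample_rate, output="sos")
    tactile = sosfilt(sos, audio)
    peak = np.max(np.abs(tactile))
    return tactile / peak if peak > 0 else tactile
```

In a playback chain, this signal would feed a voice-coil actuator or tactile transducer in a seat, floor, or wearable, time-aligned with the loudspeaker feed so that the felt and heard vibrations remain coherent.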

2.1.4. VR Performance

The permeation of XR technologies into the hands of creative artists has provoked varied and innovative technological employments toward aesthetic ends (Young et al. 2023). The arrival of these technologies has been proposed by several theorists and critics (Bailenson 2018) as analogous to the advent of film technologies at the beginning of the 20th century, which (arguably) gave rise to the richest epoch of modern, avant-garde, inventive art in the 20th century. Even within the more focused subcategory of the performing arts, there are many creative techniques, styles, and strategies, as well as opinions and views on the most effective solutions for harnessing these technologies and captivating audiences. To date, VR (as a subsection of the totality of platforms offered on the spectrum of XR technologies) has enjoyed the most significant level of investigation by performing artists.
Even within the more focused purview of VR performance, several taxonomies still have to be negotiated, for example, live versus prerecorded material and the creative techniques employed. Within the scope of this manuscript, it is suitable to focus the discussion on VR performance content created using VV, yet even within this narrowed category, there are varying techniques: those that purely use computer vision (V-SENSE 2019; O’Dwyer et al. 2021) and those that include the use of depth camera data (Wise and Neal 2020). Focusing specifically on offline VV content generated purely through the computer vision techniques outlined above, it is essential to note that, in the context of the presented research, there is currently no possibility of generating a live (real-time) representation of a 3D character. Leaving aside consumer bandwidth, the postproduction processes are currently too slow and memory-intensive. However, as processing capabilities increase and algorithms and pipelines become more refined, the latency between capture and representation may, in the next few years, be reduced to less than a minute, which is not far off the latency associated with straightforward video webcasting.

References

  1. Papetti, Stefano, and Charalampos Saitis. 2018. Musical Haptics. Cham: Springer Nature, p. 285.
  2. Evans, Leighton. 2018. The Re-Emergence of Virtual Reality. Abingdon-on-Thames: Routledge.
  3. Cerdá, Salvador, Alicia Giménez, and Rosa M. Cibrián. 2012. An objective scheme for ranking halls and obtaining criteria for improvements and design. Journal of the Audio Engineering Society 60: 419–30.
  4. Sparacino, Flavia, Christopher Wren, Glorianna Davenport, and Alex Pentland. 1999. Augmented performance in dance and theater. International Dance and Technology 99: 25–28.
  5. Hödl, Oliver. 2016. The Design of Technology-Mediated Audience Participation in Live Music. Ph.D. dissertation, Technische Universität Wien, Vienna, Austria.
  6. Merchel, Sebastian, and M. Ercan Altinsoy. 2009. Vibratory and acoustical factors in multimodal reproduction of concert DVDs. In Haptic and Audio Interaction Design: 4th International Conference, HAID 2009 Dresden, Germany, 10–11 September 2009 Proceedings 4. Berlin/Heidelberg: Springer, pp. 119–27.
  7. Nanayakkara, Suranga, Elizabeth Taylor, Lonce Wyse, and S. H. Ong. 2009. An enhanced musical experience for the deaf: Design and evaluation of a music display and a haptic chair. In Sigchi Conference on Human Factors in Computing Systems. New York: Association for Computing Machinery, pp. 337–46.
  8. Karam, Maria, Carmen Branje, Gabe Nespoli, Norma Thompson, Frank A. Russo, and Deborah I. Fels. 2010. The emoti-chair: An interactive tactile music exhibit. In CHI’10 Extended Abstracts on Human Factors in Computing Systems. New York: Association for Computing Machinery, pp. 3069–74.
  9. Gunther, Eric, and Sile O’Modhrain. 2003. Cutaneous grooves: Composing for the sense of touch. Journal of New Music Research 32: 369–81.
  10. West, Travis J., Alexandra Bachmayer, Sandeep Bhagwati, Joanna Berzowska, and Marcelo M. Wanderley. 2019. The design of the body: Suit: Score, a full-body vibrotactile musical score. In Human Interface and the Management of Information. Information in Intelligent Systems: Thematic Area, HIMI 2019, Held as Part of the 21st HCI International Conference, HCII 2019, Orlando, FL, USA, 26–31 July 2019, Proceedings, Part II 21. Cham: Springer International Publishing, pp. 70–89.
  11. Turchet, Luca, Travis West, and Marcelo M. Wanderley. 2021. Touching the audience: Musical haptic wearables for augmented and participatory live music performances. Personal and Ubiquitous Computing 25: 749–69.
  12. Seth, Abhishek, Judy M. Vance, and James H. Oliver. 2011. Virtual reality for assembly methods prototyping: A review. Virtual Reality 15: 5–20.
  13. Saler, Michael. 2012. As If: Modern Enchantment and the Literary Prehistory of Virtual Reality. Oxford: Oxford University Press.
  14. Ryan, Marie-Laure. 1999. Immersion vs. interactivity: Virtual reality and literary theory. SubStance 28: 110–37.
  15. Visch, Valentijn T., Ed S. Tan, and Dylan Molenaar. 2010. The emotional and cognitive effect of immersion in film viewing. Cognition and Emotion 24: 1439–45.
  16. Reaney, Mark. 1999. Virtual reality and the theatre: Immersion in virtual worlds. Digital Creativity 10: 183–88.
  17. Laurel, Brenda. 2013. Computers as Theatre. Boston: Addison-Wesley.
  18. Jennett, Charlene, Anna L. Cox, Paul Cairns, Samira Dhoparee, Andrew Epps, Tim Tijs, and Alison Walton. 2008. Measuring and defining the experience of immersion in games. International Journal of Human-Computer Studies 66: 641–61.
  19. Young, Gareth W., and Dave Murphy. 2015b. HCI Models for Digital Musical Instruments: Methodologies for Rigorous Testing of Digital Musical Instruments. Paper presented at the Computer Music Multidisciplinary Research Conference, Plymouth, UK, November 15–19; pp. 534–544.
  20. Glennie, Evelyn. 2015. Hearing Essay. Available online: https://www.evelyn.co.uk/hearing-essay/ (accessed on 27 January 2023).
  21. Blascovich, Jim, Jack Loomis, Andrew C. Beall, Kimberly R. Swinth, Crystal L. Hoyt, and Jeremy N. Bailenson. 2002. Immersive virtual environment technology as a methodological tool for social psychology. Psychological Inquiry 13: 103–24.
  22. Mestre, Daniel R. 2017. CAVE versus Head-Mounted Displays: Ongoing thoughts. In IS&T International Symposium on Electronic Imaging: The Engineering Reality of Virtual Reality. Springfield: Society for Imaging Science and Technology, pp. 31–35.
  23. Zerman, Emin, Néill O’Dwyer, Gareth W. Young, and Aljosa Smolic. 2020. A case study on the use of volumetric video in augmented reality for cultural heritage. Paper presented at the 11th Nordic Conference on Human-Computer Interaction: Shaping Experiences, Shaping Society, Tallinn, Estonia, October 25–29; pp. 1–5.
  24. O’Dwyer, Néill, Gareth W. Young, and Aljosa Smolic. 2022. XR Ulysses: Addressing the disappointment of cancelled site-specific re-enactments of Joycean literary cultural heritage on Bloomsday. International Journal of Performance Arts and Digital Media 18: 29–47.
  25. Wang, Xining, Gareth W. Young, Conor Mc Guckin, and Aljosa Smolic. 2021. A Systematic Review of Virtual Reality Interventions for Children with Social Skills Deficits. Paper presented at the 2021 IEEE International Conference on Engineering, Technology & Education (TALE), Wuhan, China, December 5–8; Piscataway: IEEE, pp. 436–40.
  26. Young, Gareth W., Néill O’Dwyer, and Aljosa Smolic. 2021. Exploring virtual reality for quality immersive empathy building experiences. Behaviour & Information Technology 41: 3415–31.
  27. Brown, Emily, and Paul Cairns. 2004. A grounded investigation of game immersion. In CHI’04 Extended Abstracts on Human Factors in Computing Systems. New York: ACM, pp. 1297–300.
  28. Coomans, Marc K., and Harry J. Timmermans. 1997. Towards a taxonomy of virtual reality user interfaces. Paper presented at the 1997 IEEE Conference on Information Visualization (Cat. No. 97TB100165), London, UK, August 27–29; pp. 279–84.
  29. Haywood, Naomi, and Paul Cairns. 2006. Engagement with an interactive museum exhibit. In People and Computers XIX—The Bigger Picture: Proceedings of HCI 2005. London: Springer, pp. 113–29.
  30. Grimshaw, Mark. 2007. Sound and immersion in the first-person shooter. Paper presented at the 11th International Conference on Computer Games: AI, Animation, Mobile, Educational and Serious Games, La Rochelle, France, November 21–23.
  31. Pine, B. Joseph, and James H. Gilmore. 1999. The Experience Economy: Work Is Theatre & Every Business a Stage. Brighton: Harvard Business Press.
  32. Witmer, Bob G., and Michael J. Singer. 1998. Measuring presence in virtual environments: A presence questionnaire. Presence 7: 225–40.
  33. Zhang, Ping, Na Li, and Heshan Sun. 2006. Affective quality and cognitive absorption: Extending technology acceptance research. Paper presented at the 39th Annual Hawaii International Conference on System Sciences (HICSS’06), Kauai, HI, USA, January 4–7; vol. 8, p. 207a.
  34. Slater, Mel, and Sylvia Wilbur. 1997. A framework for immersive virtual environments (FIVE): Speculations on the role of presence in virtual environments. Presence: Teleoperators & Virtual Environments 6: 603–16.
  35. Csikszentmihalyi, Mihaly, Sonal Khosla, and Jeanne Nakamura. 2016. Flow at Work. In The Wiley Blackwell Handbook of the Psychology of Positivity and Strengths-Based Approaches at Work. Hoboken: Wiley, pp. 99–109.
  36. Csikszentmihalyi, Mihaly, and Reed Larson. 2014. Flow and the Foundations of Positive Psychology. Dordrecht: Springer, vol. 10, pp. 978–94.
  37. Weibel, David, Bartholomäus Wissmath, Stephan Habegger, Yves Steiner, and Rudolf Groner. 2008. Playing online games against computer vs. Human-controlled opponents: Effects on presence, flow, and enjoyment. Computers in Human Behavior 24: 2274–91.
  38. Bowman, Doug A., and Ryan P. McMahan. 2007. Virtual reality: How much immersion is enough? Computer 40: 36–43.
  39. Schubert, Thomas, Frank Friedmann, and Holger Regenbrecht. 2001. The experience of presence: Factor analytic insights. Presence: Teleoperators & Virtual Environments 10: 266–81.
  40. Reber, Rolf, Norbert Schwarz, and Piotr Winkielman. 2004. Processing fluency and aesthetic pleasure: Is beauty in the perceiver’s processing experience? Personality and Social Psychology Review 8: 364–82.
  41. Hekkert, Paul. 2006. Design aesthetics: Principles of pleasure in design. Psychology Science 48: 157.
  42. Sallnäs, Eva-Lotta. 2010. Haptic feedback increases perceived social presence. Paper presented at the International Conference on Human Haptic Sensing and Touch Enabled Computer Applications, Amsterdam, The Netherlands, July 8–10; Berlin/Heidelberg: Springer, pp. 178–85.
  43. Gall, Dominik, and Marc Erich Latoschik. 2018. The effect of haptic prediction accuracy on presence. Paper presented at the 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Atlanta, GA, USA, March 22–26; pp. 73–80.
  44. García-Valle, Gonzalo, Manuel Ferre, Jose Breñosa, and David Vargas. 2017. Evaluation of presence in virtual environments: Haptic vest and user’s haptic skills. IEEE Access 6: 7224–33.
  45. Barbosa, Jeronimo, Joseph Malloch, Marcelo M. Wanderley, and Stéphane Huot. 2015. What does “Evaluation” mean for the NIME community? Paper presented at the 15th International Conference on New Interfaces for Musical Expression, Baton Rouge, LA, USA, May 31–June 3.
  46. Smolic, Aljosa, Konstantinos Amplianitis, Matthew Moynihan, Neill O’Dwyer, Jan Ondrej, Rafael Pagés, Gareth W. Young, and Emin Zerman. 2022. Volumetric Video Content Creation for Immersive XR Experiences. In London Imaging Meeting 2022. Springfield: Society for Imaging Science and Technology.
  47. Collet, Alvaro, Ming Chuang, Pat Sweeney, Don Gillett, Dennis Evseev, David Calabrese, Hugues Hoppe, Adam Kirk, and Steve Sullivan. 2015. High-quality streamable free-viewpoint video. ACM Transactions on Graphics (ToG) 34: 1–13.
  48. Pagés, Rafael, Konstantinos Amplianitis, David Monaghan, Jan Ondřej, and Aljosa Smolic. 2018. Affordable content creation for free-viewpoint video and VR/AR applications. Journal of Visual Communication and Image Representation 53: 192–201.
  49. Valenzise, Giuseppe, Martin Alain, Emin Zerman, and Cagri Ozcinar, eds. 2022. Immersive Video Technologies. Cambridge: Academic Press.
  50. Habermann, Marc, Weipeng Xu, Michael Zollhöfer, Gerard Pons-Moll, and Christian Theobalt. 2019. Livecap: Real-time human performance capture from monocular video. ACM Transactions on Graphics (TOG) 38: 1–17.
  51. Slater, Mel, and Maria V. Sanchez-Vives. 2016. Enhancing Our Lives with Immersive Virtual Reality. Frontiers in Robotics and AI 3: 74.
  52. Schutze, Stephan. 2018. Virtual Sound: A practical Guide to Audio, Dialogue and Music in VR and AR. Boca Raton: Taylor & Francis.
  53. Avanzini, Federico. 2022. Procedural modeling of interactive sound sources in virtual reality. In Sonic Interactions in Virtual Environments. Cham: Springer, pp. 49–76.
  54. Begault, Durand R. 2000. 3-D Sound for Virtual Reality and Multimedia. Washington, DC: National Aeronautics and Space Administration (NASA).
  55. Yang, Jing, and Cheuk Yu Chan. 2019. Audio-augmented museum experiences with gaze tracking. Paper presented at the 18th International Conference on Mobile and Ubiquitous Multimedia, Pisa, Italy, November 27–29; pp. 1–5.
  56. Letowski, Tomasz R., and Szymon T. Letowski. 2012. Auditory spatial perception: Auditory localization. In Army Research Lab Aberdeen Proving Ground Md Human Research and Engineering Directorate. Adelphi: Army Research Laboratory.
  57. Bates, Enda, Brian Bridges, and Adam Melvin. 2019. Sound Spatialization. In Foundations in Sound Design for Interactive Media: A Multidisciplinary Approach. Abingdon-on-Thames: Routledge, pp. 141–60.
  58. Geronazzo, Michele, and Stefania Serafin. 2022. Sonic Interactions in Virtual Environments: The Egocentric Audio Perspective of the Digital Twin. In Sonic Interactions in Virtual Environments. Cham: Springer International Publishing, pp. 3–45.
  59. Young, Gareth W., Néill O’Dwyer, and Aljosa Smolic. 2023. Volumetric video as a novel medium for creative storytelling. In Immersive Video Technologies. Cambridge: Academic Press, pp. 591–607.
  60. Bailenson, Jeremy. 2018. Experience on Demand: What Virtual Reality Is, How It Works, and What It Can Do. New York: WW Norton & Company.
  61. V-SENSE. 2019. ‘XR Play Trilogy (V-SENSE).’ Research Portfolio. V-SENSE: Creative Experiments (Blog). Available online: https://v-sense.scss.tcd.ie/research/mr-play-trilogy/ (accessed on 1 January 2023).
  62. O’Dwyer, Neill, Gareth W. Young, Aljosa Smolic, Matthew Moynihan, and Paul O’Hanrahan. 2021. Mixed Reality Ulysses. In SIGGRAPH Asia 2021 Art Gallery, 1. SA’21. New York: Association for Computing Machinery.
  63. Wise, Kerryn, and Ben Neal. 2020. Facades. Virtual Reality. Available online: http://facades.info/ (accessed on 1 January 2023).