Technologies and AI for Human-Centered Digital Experiences: History

Human–computer interaction (HCI) and information and communication technologies (ICT) are experiencing a transition due to the emergence of innovative design paradigms that are affecting the way we interact with digital information. Immersive technologies, such as augmented reality (AR), virtual reality (VR), and mixed reality (MR), are considered the building blocks of modern digital experiences. Knowledge representation in a machine-interpretable format supports reasoning on knowledge sources to shape intelligent, user-centric information systems. From a human-centered perspective, these systems have the potential to dynamically adapt to the diverse and evolving needs of users. Simultaneously, the integration of artificial intelligence (AI) into ICT creates the foundation for new interaction technologies, information processing, and information visualization.

  • human–computer interaction
  • interaction technologies
  • user interface design
  • human-centered design
  • web-based information systems
  • semantic knowledge representation
  • X-reality applications
  • human motion
  • 3D digitization
  • serious games

1. Advances in User Interface Design, Development, and Evaluation, including New Approaches for Explicit and Implicit Interaction

Advances in UI design, development, and evaluation have enabled the adoption of new technologies and supported a deeper understanding of user behavior. This has been manifested through a combination of explicit and implicit interaction approaches.
Explicit interaction can be perceived as a form of communication with a computing device where explicit user input is directly connected to the issuing of a command. Traditionally, explicit interaction involved direct user inputs, such as with a mouse, a keyboard, or touch gestures. However, recent advancements have elevated explicit interaction by introducing gesture-based interfaces (e.g., [1][2][3][4][5][6]) and kinesthetic interaction paradigms (e.g., [7][8]). These forms rely on technologies such as computer vision, depth sensing, infrared sensing, and tracking devices (RGB-D sensors, RGB cameras, infrared cameras), enabling users to communicate with systems through intuitive hand movements [9][10]. The advancements discussed in these works include tracking hand movements instead of just static poses [1], optimizing gesture recognition from a monocular camera to support gaming in public spaces [3], using specialized wearable devices for gesture recognition [5], and optimizing traditional RGB-D approaches with AI [6][9]. These novel forms of interaction allow for more immersive experiences and support natural interaction. Especially in VR, new interaction paradigms can replace classic VR controllers and support more natural interaction through gestures and real-time hand tracking that augment the feeling of presence in a virtual space (e.g., [11][12]). Similar effects can be achieved in AR by enhancing interaction with digitally augmented physical objects (e.g., [13][14]).
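As an illustration of how vision-based hand tracking can serve as an explicit input channel, the following minimal sketch uses the open-source MediaPipe and OpenCV libraries (an assumption made for illustration; the cited works rely on a variety of RGB, RGB-D, and wearable setups) to track an index fingertip from a plain webcam:

```python
# Minimal sketch: webcam-based hand tracking as explicit input,
# assuming the MediaPipe and OpenCV libraries are installed.
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.7)
cap = cv2.VideoCapture(0)  # default RGB webcam

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB input; OpenCV delivers BGR frames.
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        tip = result.multi_hand_landmarks[0].landmark[8]  # index fingertip
        # Map the normalized fingertip position to a command, e.g., a cursor.
        print(f"cursor at ({tip.x:.2f}, {tip.y:.2f})")
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break
cap.release()
```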
Voice-activated interfaces [15][16] represent another advancement in explicit interaction, implemented through the integration of advanced natural language processing (NLP) algorithms that support more than traditional voice command recognition [17]. Today, intelligent voice assistants can comprehend context, discern user intent, and assist with typical daily tasks (e.g., [18][19]), making digital systems more inclusive for users with diverse abilities.
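At the core of such assistants is an intent-recognition step that maps a transcribed utterance to an action. The deliberately simplified sketch below illustrates the idea with keyword matching; the intents and trigger phrases are illustrative assumptions, and production systems use far richer NLP models:

```python
# Minimal sketch: mapping a transcribed voice command to an intent.
# All intents and trigger phrases here are illustrative assumptions.
INTENTS = {
    "set_timer":  ["set a timer", "start a timer", "countdown"],
    "play_music": ["play", "put on some music"],
    "weather":    ["weather", "forecast", "rain"],
}

def detect_intent(utterance: str) -> str:
    text = utterance.lower()
    # Score each intent by how many of its trigger phrases occur in the text.
    scores = {intent: sum(phrase in text for phrase in phrases)
              for intent, phrases in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(detect_intent("Hey, what's the weather like today?"))  # -> "weather"
```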
Implicit interaction is defined as “an action performed by the user that is not primarily aimed to interact with a computerized system but which such a system understands as input” [20]. In this domain, machine learning algorithms empower systems to discern user preferences, anticipate actions, and adapt interfaces and robots in real time [21][22][23]. Predictive modeling, driven by user behavior analysis, enables interfaces to become more personalized, offering a tailored experience that aligns with individual needs and preferences [24][25].
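As a minimal illustration of such predictive modeling, the sketch below trains a classifier on passively observed interaction signals to estimate whether a user will engage with an item; the features and data are illustrative assumptions, not taken from the cited studies:

```python
# Minimal sketch: implicit-interaction modeling with scikit-learn.
# Features and labels are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [dwell_time_s, scroll_depth_0_to_1, hover_count]; label: engaged?
X = np.array([[12.0, 0.9, 5], [1.5, 0.2, 0], [8.0, 0.7, 3],
              [0.8, 0.1, 0], [15.0, 1.0, 7], [2.0, 0.3, 1]])
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)
# Probability that a new session with given signals leads to engagement.
print(model.predict_proba([[6.0, 0.6, 2]])[0, 1])
```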
The fusion of explicit and implicit interaction is evident in the rise of anticipatory design [26]. Interfaces are becoming increasingly adept at predicting user actions, streamlining workflows, and minimizing decision fatigue [27]. Through the seamless integration of explicit inputs and implicit learning, systems can offer a more fluid and intuitive user experience.
As UI paradigms evolve, so too must the methods for evaluating their effectiveness. Traditional usability testing [28][29] and heuristic evaluations [30] are now complemented by sophisticated analytics and user feedback mechanisms [31][32][33][34]. A holistic understanding of user experience requires a multidimensional approach that considers not only task completion efficiency but also emotional engagement, accessibility, and inclusivity [35]. Eye-tracking technology and neuroscientific methods are emerging as powerful tools for evaluating implicit interactions [36]. By examining gaze patterns and neural responses, designers gain insights into user attention, emotional responses, and cognitive load, providing valuable feedback for refining UI designs [37][38].
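Fixation detection is a typical first step when analyzing gaze data. The following sketch implements a classic dispersion-threshold (I-DT) detector; the thresholds are illustrative assumptions and would be tuned to the tracker's sampling rate:

```python
# Minimal sketch: dispersion-threshold (I-DT) fixation detection.
def dispersion(points):
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(gaze, max_dispersion=30.0, min_samples=6):
    """gaze: list of (x, y) screen coordinates sampled at a fixed rate."""
    fixations = []
    start = 0
    while start <= len(gaze) - min_samples:
        end = start + min_samples
        if dispersion(gaze[start:end]) <= max_dispersion:
            # Grow the window while the samples stay tightly clustered.
            while end < len(gaze) and dispersion(gaze[start:end + 1]) <= max_dispersion:
                end += 1
            xs, ys = zip(*gaze[start:end])
            fixations.append((sum(xs) / len(xs), sum(ys) / len(ys), end - start))
            start = end
        else:
            start += 1
    return fixations  # list of (centroid_x, centroid_y, duration_in_samples)
```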

2. Human-Centered Web-Based Information Systems

Today, web-based information systems try to integrate the principles of human-centered design [39]. To this end, a combination of ICT advances is being integrated into such systems, including knowledge representation approaches, data visualization paradigms, data mining methodologies, and big data analytics technologies [40][41]. This evolution supports more dynamic information delivery on the one hand and user-centric experiences that generate insights from large datasets on the other. The main objective is to enhance the capacity to visualize immense amounts of information, rendering large datasets more readable for humans to understand and work with. At the core of such approaches is the deployment of knowledge representation techniques [42][43]. Semantic web technologies, ontologies, and graph databases organize and structure information in a manner that is both human- and machine-interpretable. At the same time, these advancements further pose the need to introduce cognitive approaches in the semantic web, enabling systems not only to store and retrieve data but also to infer relationships, fostering a more accurate understanding of context [44][45]. Visual interfaces should help users locate information based on meaning while keeping the complexity of the semantic implementation hidden. For example, leveraging similarities and comparisons can make it easier to navigate large volumes of information. Instead of offering fixed representations of data, cognitive approaches should support selecting information based on user needs.
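As a minimal illustration of machine-interpretable knowledge representation, the sketch below builds a small RDF graph with the open-source rdflib library and queries it by meaning; the vocabulary is an illustrative assumption, not a published ontology:

```python
# Minimal sketch: an RDF graph queried by meaning with rdflib.
# The ex: vocabulary is an illustrative assumption.
from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix ex: <http://example.org/> .
ex:MonaLisa  ex:createdBy ex:DaVinci ;
             ex:locatedIn ex:Louvre .
ex:DaVinci   ex:bornIn    ex:Vinci .
""", format="turtle")

# Which artifacts were created by whom?
for artifact, artist in g.query(
        "SELECT ?a ?p WHERE { ?a <http://example.org/createdBy> ?p }"):
    print(artifact, "was created by", artist)
```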
Such approaches are a prerequisite for reducing the threat of information overload, which is further addressed by effective data visualization. Modern web-based systems deploy interactive and immersive visualizations to present complex datasets via usable representations [46]. From interactive charts and graphs to VR-enhanced visualizations, the emphasis is on empowering users to explore and understand information intuitively, enhancing the overall user experience [47][48].
To achieve intuitive big data visualization, sophisticated tools for analysis and interpretation are needed. Data mining techniques augment web-based information systems, facilitating the discovery of patterns, trends, and anomalies within large datasets [49]. Machine learning algorithms enable systems to autonomously uncover hidden knowledge [50], providing users with recommendations and insights. Human-centered web-based systems are thus capable of employing distributed computing frameworks and cloud technologies to process datasets in real time. The synergy of big data analysis and visualization empowers users to promptly gain meaningful insights, fostering informed decision-making [51].
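A small example of such pattern discovery, in the spirit of the clustering approach of [55], is sketched below: k-means groups usage sessions by behavioral features. The features and data are illustrative assumptions:

```python
# Minimal sketch: mining usage patterns from session logs with k-means.
# Features and data are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

# Each row: [pages_per_session, avg_time_per_page_s, search_queries].
sessions = np.array([[25, 10, 8], [3, 120, 0], [22, 12, 6],
                     [4, 150, 1], [30, 8, 9], [2, 90, 0]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(sessions)
print(kmeans.labels_)          # e.g., "scanners" vs. "deep readers"
print(kmeans.cluster_centers_)
```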
From a design perspective today, information systems’ interfaces are crafted with a deep understanding of user needs, preferences, and cognitive processes [52][53]. Personalization algorithms, informed by user interactions and feedback, ensure that the information presented is not only relevant but also delivered in a format that resonates with the user’s mental model. At the same time, continuous evaluation and iterative design are integral components of human-centered web-based information systems [54][55]. Analytics tools track user interactions, enabling designers to refine interfaces and functionalities based on real-world usage patterns [56][57][58]. This iterative process ensures that systems remain adaptive, responsive, and aligned with evolving user expectations [59].

3. Semantic Knowledge to Enhance User Interaction with Information, User Participation in Information Processing, and User Experience

Semantic knowledge representation provides meaning to the data processed by an information system and can thus support a more intelligent and intuitive interaction between users and information [60][61]. In this context, semantic technologies are used to represent knowledge in a machine-understandable format. This format can make various information systems semantically interoperable [62]. Ontologies, linked data, and semantic graphs provide a rich framework for expressing relationships between concepts [63], allowing systems to infer and connect pieces of data, creating a web of contextual relevance. Semantic knowledge representation lays the groundwork for a more intuitive and context-aware service provision [64]. NLP algorithms, powered by semantic models, enable systems to comprehend user queries in a more human-like manner [65]. Conversational interfaces, driven by semantic understanding, facilitate seamless interactions, allowing users to communicate with systems more naturally and dynamically [66]. A key advancement is the empowerment of users in the information processing chain. Collaborative knowledge creation and annotation, supported by semantic frameworks, enable users to contribute to the refinement and enrichment of data. This participatory approach not only enhances the accuracy of information but also fosters a sense of ownership and engagement among users in the information ecosystem [67][68].
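A minimal illustration of such inference is sketched below: a SPARQL property path in rdflib walks the rdfs:subClassOf hierarchy, so an instance is retrieved under a superclass without that fact being stored explicitly (the vocabulary is an illustrative assumption):

```python
# Minimal sketch: lightweight semantic inference via a SPARQL property path.
# An instance typed as ex:Fresco is retrieved as an ex:Artwork without that
# triple being stored. The ex: vocabulary is an illustrative assumption.
from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix ex:   <http://example.org/> .
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
ex:Fresco      rdfs:subClassOf ex:Painting .
ex:Painting    rdfs:subClassOf ex:Artwork .
ex:LastSupper  rdf:type        ex:Fresco .
""", format="turtle")

q = """
PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?item WHERE { ?item rdf:type/rdfs:subClassOf* <http://example.org/Artwork> }
"""
for (item,) in g.query(q):
    print(item)  # ex:LastSupper is inferred to be an Artwork
```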
Beyond representation, the presentation of information is an important part of user experience. Semantic technologies should influence how information is visually and contextually communicated to users [69][70], ensuring that users receive information in a format that aligns with their cognitive processes and preferences. In the same context, personalization algorithms, leveraging semantic understanding, deliver content that is not only relevant but also anticipates user needs [71][72]. The seamless integration of diverse datasets, facilitated by semantic frameworks, has the potential to provide a more coherent and holistic user experience, reducing information overload and enhancing overall satisfaction. The iterative nature of semantic knowledge representation ensures continuous improvement through feedback loops, driven by user interactions and system analytics, to enable adaptive learning and refinement.

4. X-Reality Applications (AR, VR, MR) for Immersive Human-Centered Experiences

4.1. X-Reality Applications in Cultural Heritage

The utilization of virtual reality (VR) in cultural heritage (CH) is not a novel concept. Initial approaches, such as CAVE-based VR, integrated immersive presentations and haptic-based manipulations of heritage objects [73][74]. A synergy of 3D reconstruction technologies with VR emerged, creating realistic digital replicas of CH objects [75][76]. In earlier methods, where digitization was constrained by technology immaturity, scenes from archaeological sites were manually modeled in 3D [77][78]. While resulting in lower-quality models, this allowed researchers to digitally restore monuments by complementing structural remains with digitally manufactured structures [79][80]. Advancements extended to simulating weather and daily life in ancient CH sites through a graphics-based rendering of nature and autonomous virtual humans.
The evolution of VR devices, particularly commercial VR headsets and controllers [81][82], simplified the implementation of VR-based experiences. Concurrently, the advent of 360° photography and video enabled a different VR approach with inexpensive headsets, augmenting experiences with information points and interactive spots [83][84][85]. Studies have addressed resource-demanding tasks such as streaming 360° videos to these headsets [86][87]. From a sustainability perspective, VR has been proposed as a means to divert visits from endangered CH sites to digital media [88].
In the domain of augmented reality (AR) and cultural heritage, ongoing research has shown its potential to enhance learning by providing a more comprehensive educational experience [89]. AR applications have been explored in school subjects like chemistry and cultural heritage sites [90][91]. Stakeholder studies indicate perceived value dimensions of AR in cultural heritage tourism, encompassing economic, experiential, social, epistemic, historical, cultural, and educational aspects [92].
Mobile AR research began with feature extraction in mobile phones for image acquisition [93]. More advanced mobile devices incorporated virtual humans [94], while modern mobile phones empowered various AR forms, such as the augmentation of camera images with information [95][96] and the interpolation of 3D digitization with camera input [97]. Some approaches replace physical remains with digitally enhanced versions from the time of creation [98], and physical objects aid visualization and interaction with archaeological artifacts in AR [99][100].
The fusion of augmented and virtual reality has given rise to “AR Portals”. Mobile devices, supporting larger AR scenes, allow users to spawn portals to alternate worlds [100]. “The Historical Figures AR” application exemplifies this, enabling users to walk through portals to historically themed sites [101]. Other approaches augment physical places with digital information, supporting alternative interactions through the manipulation of physical objects [102].

4.2. X-Reality Applications in Vocational Training and Education

Vocational training and education are challenging research topics because the subjects taught integrate several aspects of human perception and are closely bound to the human senses and to skillful interaction with tools and materials. This is highlighted through mapping sequences and networks of physical and cognitive activities and working materials in design and workmanship, involving stages of perception, problem understanding, thinking, acting, planning, executing plans, and reflecting upon collected experiences [103]. In process training, part of thinking and planning is carried out by the mind, using mental simulation that produces mental imagery [104]. This modeling approach is also found in cognitive robotics (e.g., [105][106]). In [107], crafting processes are modeled as having schemas or plans, and their execution is modeled as individual events. Studies on the negotiation between the maker and the material have provided interesting data regarding how makers think between things, people, space, and time and develop their practices accordingly [108].
Given this nature, X-reality (XR) applications have emerged in vocational education and training as transformative tools, redefining the way individuals acquire and apply skills, because these technologies can mimic, enhance, or alter the physical environment, seamlessly integrating physical and digital realms [109][110]. At the same time, these technologies can engage with the complex cognitive aspects described above, being capable of simulating reality and integrating cognitive cues into process training.
VR-based training immerses learners in simulated environments that serve as digital twins of real-world settings, offering training scenarios in a controlled, risk-free setting [111][112]. Vocational training programs using VR have been employed in domains ranging from pilot training to surgical procedures. The ability to practice and refine skills in VR builds muscle memory and improves confidence. Examples of such training environments include woodworking and blacksmithing simulators [113][114][115].
AR in vocational education takes the form of overlaying digital information onto the physical environment, offering learners real-time, context-sensitive assistance. From hands-on equipment maintenance simulations to interactive instructional overlays, AR facilitates learning by doing, enriching the educational experience. By extending reality through informative digital overlays, AR provides a bridge between theoretical knowledge and practical application, unifying the physical and digital learning domains. For example, a mobile traditional-craft presentation system has been proposed that uses AR technology to superimpose 3D models of traditional crafts onto the real room space captured by the camera of a mobile device [116][117][118].
MR blends virtual and physical elements, allowing learners to interact with virtual and real-world objects simultaneously [119]. This is beneficial in vocational education, where practical, hands-on experience is important [120]. MR can be employed to integrate virtual equipment into physical training spaces, providing a hybrid learning experience [121]. The learner interacts with digital elements as an extension of their physical surroundings, bridging the gap between the two realities.
The added value of XR applications for vocational education lies in their ability to create immersive, human-centered learning experiences. By simulating authentic workplace scenarios, XR technologies engage learners on a deeper level, promoting active participation and knowledge retention [122]. XR applications also introduce adaptive learning environments, tailoring experiences to individual learner needs. Machine learning algorithms analyze user interactions and performance, allowing the system to dynamically adjust the difficulty and content of simulations [123]. This personalized approach ensures that learners progress at their own pace, addressing diverse skill levels and learning styles while seamlessly blending physical and digital realities.
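As a minimal illustration of dynamic difficulty adjustment, the sketch below uses a simple "staircase" controller that raises the difficulty after consecutive successes and lowers it after a failure; the parameters are illustrative assumptions, and deployed systems may rely on much richer learner models:

```python
# Minimal sketch: staircase-style adaptive difficulty for a training scenario.
# All parameters are illustrative assumptions.
class AdaptiveDifficulty:
    def __init__(self, level=1, min_level=1, max_level=10, up_after=2):
        self.level, self.min, self.max = level, min_level, max_level
        self.up_after = up_after  # successes required before stepping up
        self.streak = 0

    def record(self, success: bool) -> int:
        if success:
            self.streak += 1
            if self.streak >= self.up_after:
                self.level = min(self.max, self.level + 1)
                self.streak = 0
        else:
            self.level = max(self.min, self.level - 1)
            self.streak = 0
        return self.level  # difficulty for the next task

trainer = AdaptiveDifficulty()
for outcome in [True, True, True, False, True]:
    print(trainer.record(outcome))  # -> 1 2 2 1 1
```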
Assessment in vocational education extends beyond traditional exams to immersive, performance-based evaluations. XR applications, by extending reality, enable instructors to assess not just theoretical knowledge but also the application of skills in realistic scenarios. Real-time feedback mechanisms enhance the learning loop, providing constructive insights to learners and facilitating continuous improvement within the extended reality of vocational training [124].

5. Human Motion and 3D Digitization for Enhanced Interactive Digital Experiences

Human motion capture and 3D digitization are reshaping how users interact with and perceive digital content. More specifically, advancements in human motion capture technologies have altered the way digital systems interpret and respond to user movements [125][126]. High-fidelity sensors, wearable devices (e.g., [127][128]), computer vision, and AI enable the precise tracking of body movements, transforming them into explicit or implicit input (e.g., [129][130][131][132][133]). This has wide applicability in fields such as gaming, VR, and AR, where users can navigate, control, and manipulate digital environments through natural movements.
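To make this concrete, the sketch below turns captured 3D joint positions into a simple input signal, the elbow flexion angle computed from shoulder, elbow, and wrist keypoints; the coordinates and threshold are illustrative assumptions, and any pipeline that outputs 3D joints (such as the RGB-based estimators cited above) could feed it:

```python
# Minimal sketch: a joint angle as an input signal from 3D keypoints.
# Coordinates and threshold are illustrative assumptions.
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by points a-b-c."""
    v1 = np.asarray(a) - np.asarray(b)
    v2 = np.asarray(c) - np.asarray(b)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

shoulder, elbow, wrist = [0.0, 1.4, 0.0], [0.3, 1.1, 0.0], [0.4, 1.35, 0.0]
angle = joint_angle(shoulder, elbow, wrist)
if angle < 90:  # illustrative threshold for an "arm flexed" gesture
    print("gesture: arm flexed at", round(angle, 1), "degrees")
```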
At the same time, sophisticated 3D digitization techniques make possible the seamless transition between real and digital world objects and spaces. From 3D scanning technologies that capture intricate details of physical objects [134][135][136] to depth-sensing cameras that create digital representations of physical spaces, the process of digitization extends beyond traditional boundaries [137][138][139]. This capability lays the foundation for more immersive and realistic digital experiences. In VR environments, for example, users can not only see but also physically interact with digital objects by leveraging natural hand gestures, body movements, and haptic interfaces [140][141][142]. This enhanced level of interaction fosters a deeper sense of presence and engagement, making digital experiences more lifelike and compelling [143][144].
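A core building block of such digitization is back-projecting a depth image into a 3D point cloud with the pinhole camera model, X = (u - cx) * z / fx and Y = (v - cy) * z / fy. The sketch below shows this step; the camera intrinsics and depth values are illustrative assumptions:

```python
# Minimal sketch: depth image to point cloud via the pinhole camera model.
# Intrinsics (fx, fy, cx, cy) and the depth values are illustrative.
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

depth = np.full((480, 640), 1.5)  # a flat surface 1.5 m from the camera
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)  # (307200, 3): one 3D point per pixel
```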
Additionally, human motion and 3D digitization contribute to the emergence of novel interaction metaphors. Gesture-based interfaces, where a wave of the hand or a nod of the head translates into meaningful digital commands, exemplify the shift towards more intuitive interactions. This departure from conventional input methods introduces a new language of interaction, bridging the gap between the physical and digital worlds and giving rise to the concept of virtual embodiment [145][146]. Users can now project themselves into digital avatars or representations that mimic their real-world movements [147]. This not only adds a layer of personalization to digital interactions but also enables a more immersive and empathetic form of virtual presence [148].
The discussed impact may affect various industries such as healthcare, for instance, where surgeons can practice complex procedures in a VR setting that replicates real-world conditions [149][150]. In education, students can engage with historical artifacts through detailed 3D models. In the entertainment industry, these technologies can be used to create interactive storytelling experiences, blurring the lines between the observer and the observed [151][152].

6. Serious Game Design and Development

Serious games are transforming learning into an immersive and interactive experience [153]. Virtual scenarios, historical recreations, and problem-solving challenges become dynamic lessons, allowing students to explore, experiment, and learn through experience. The introduction of adaptability into these games caters to diverse learning styles, making education more accessible and engaging [154]. For occupational training, serious games offer a dynamic platform for skill development and scenario-based learning [155]. Simulations designed for various industries, such as healthcare, aviation, and emergency response, provide trainees with realistic environments to test their skills [156][157][158][159][160]. The interactive nature of these games fosters hands-on experience, allowing individuals to practice and refine their abilities in a controlled, risk-free setting.
Serious games extend their influence beyond traditional education and training contexts, addressing broader societal challenges. Games designed for public awareness campaigns, health promotion, and social issues provide a unique avenue for communication [161][162]. These games leverage storytelling, empathy-building narratives, and decision-making scenarios to raise awareness and prompt action on critical societal topics such as environmental conservation, public health, and social justice.
The design and development of serious games often involve collaboration across disciplines: educational psychologists, game designers, subject-matter experts, and technologists work together to create holistic learning experiences. This interdisciplinary approach ensures that serious games not only convey information effectively but also align with pedagogical principles, maximizing their educational impact.
The landscape of serious game design has been significantly shaped by technological advancements. VR, AR, and advancements in graphics rendering have elevated the level of immersion and realism in these games [163][164][165]. This not only enhances the overall gaming experience but also contributes to the effectiveness of the learning and training outcomes. The integration of adaptive learning algorithms and analytics further personalizes the experience, tailoring content to individual needs and tracking progress.
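One simple way such adaptation can track progress is with an Elo-style skill estimate that rises or falls with each challenge outcome, so the next challenge can be drawn near the learner's current level; this is sketched below with illustrative constants, not as a method from the cited works:

```python
# Minimal sketch: Elo-style skill tracking for adaptive serious games.
# The rating constants (400, k=32) follow the classic Elo formula;
# the challenge sequence is an illustrative assumption.
import math

def expected(skill, challenge):
    return 1.0 / (1.0 + math.pow(10, (challenge - skill) / 400))

def update(skill, challenge, won, k=32):
    return skill + k * ((1 if won else 0) - expected(skill, challenge))

skill = 1200.0
for challenge, won in [(1200, True), (1250, True), (1300, False)]:
    skill = update(skill, challenge, won)
    print(round(skill, 1))  # estimate rises with wins, falls with losses
```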
These games, often delivered through digital platforms, have the potential to reach a global audience, making education and training more accessible across geographical boundaries. This democratization of learning resources addresses disparities in educational opportunities and ensures that individuals, regardless of their location, can benefit from engaging and purposeful learning experiences.

7. AI Approaches in User Interfaces, Information Processing, and Information Visualization

In the rapidly evolving landscape of technology, the integration of AI is manifested through intelligent systems that, in the future, will be able to understand user intent and enhance the extraction and presentation of valuable insights from vast datasets. AI-driven user interfaces have redefined the way individuals interact with digital systems. Language model-based AI and NLP technologies enable interfaces to comprehend and respond to user inputs in a more human-like fashion [166]. Chatbots and virtual assistants leverage these advancements, providing users with intuitive and conversational interactions. Intelligent user interfaces in the AI era promise personalized user experiences based on historical interactions, preferences, and context [167].
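As a minimal illustration of how a conversational interface can match user input to responses, the sketch below implements retrieval-based matching with TF-IDF similarity using scikit-learn; the FAQ content is an illustrative assumption, and LLM-based assistants replace this matching step with generative models:

```python
# Minimal sketch: a retrieval-based chatbot using TF-IDF similarity.
# The FAQ content is an illustrative assumption.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

faq = {
    "How do I reset my password?": "Open Settings > Account > Reset password.",
    "What are your opening hours?": "We are open 9:00-17:00 on weekdays.",
    "How can I contact support?":   "Email support@example.org.",
}
questions = list(faq)
vectorizer = TfidfVectorizer().fit(questions)
index = vectorizer.transform(questions)

def answer(query: str) -> str:
    # Return the stored reply whose question best matches the query.
    sims = cosine_similarity(vectorizer.transform([query]), index)[0]
    return faq[questions[sims.argmax()]]

print(answer("I forgot my password"))  # -> password-reset instructions
```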

This entry is adapted from the peer-reviewed paper 10.3390/electronics13020269

References

  1. Roccetti, M.; Marfia, G.; Semeraro, A. Playing into the wild: A gesture-based interface for gaming in public spaces. J. Vis. Commun. Image Represent. 2012, 23, 426–440.
  2. Bhuiyan, M.; Picking, R. Gesture-controlled user interfaces, what have we done and what’s next. In Proceedings of the Fifth Collaborative Research Symposium on Security, E-Learning, Internet and Networking (SEIN 2009), Darmstadt, Germany, 26–27 November 2009; pp. 26–27.
  3. Kim, J.; He, J.; Lyons, K.; Starner, T. The gesture watch: A wireless contact-free gesture based wrist interface. In Proceedings of the 2007 11th IEEE International Symposium on Wearable Computers, IEEE, Boston, MA, USA, 11–13 October 2007; pp. 15–22.
  4. Shin, S.; Kim, W.Y. Skeleton-based dynamic hand gesture recognition using a part-based GRU-RNN for gesture-based interface. IEEE Access 2020, 8, 50236–50243.
  5. Fogtmann, M.H.; Fritsch, J.; Kortbek, K.J. Kinesthetic interaction: Revealing the bodily potential in interaction design. In Proceedings of the 20th Australasian Conference on Computer-Human Interaction: Designing for Habitus and Habitat, Cairns, Australia, 8–12 December 2008; pp. 89–96.
  6. Koutsabasis, P.; Vosinakis, S. Kinesthetic interactions in museums: Conveying cultural heritage by making use of ancient tools and (re-) constructing artworks. Virtual Real. 2018, 22, 103–118.
  7. Tran, D.S.; Ho, N.H.; Yang, H.J.; Baek, E.T.; Kim, S.H.; Lee, G. Real-time hand gesture spotting and recognition using RGB-D camera and 3D convolutional neural network. Appl. Sci. 2020, 10, 722.
  8. Oudah, M.; Al-Naji, A.; Chahl, J. Hand gesture recognition based on computer vision: A review of techniques. J. Imaging 2020, 6, 73.
  9. Sarma, D.; Bhuyan, M.K. Methods, databases and recent advancement of vision-based hand gesture recognition for hci systems: A review. SN Comput. Sci. 2021, 2, 436.
  10. Vosinakis, S.; Koutsabasis, P. Evaluation of visual feedback techniques for virtual grasping with bare hands using Leap Motion and Oculus Rift. Virtual Real. 2018, 22, 47–62.
  11. Kim, H.I.; Woo, W. Smartwatch-assisted robust 6-DOF hand tracker for object manipulation in HMD-based augmented reality. In Proceedings of the 2016 IEEE Symposium on 3D User Interfaces (3DUI), IEEE, Greenville, SC, USA, 19–20 March 2016; pp. 251–252.
  12. Leibe, B.; Starner, T.; Ribarsky, W.; Wartell, Z.; Krum, D.; Singletary, B.; Hodges, L. The perceptive workbench: Toward spontaneous and natural interaction in semi-immersive virtual environments. In Proceedings of the IEEE Virtual Reality 2000 (Cat. No. 00CB37048), New Brunswick, NJ, USA, 18–22 March 2000; pp. 13–20.
  13. Monteiro, P.; Gonçalves, G.; Coelho, H.; Melo, M.; Bessa, M. Hands-free interaction in immersive virtual reality: A systematic review. IEEE Trans. Vis. Comput. Graph. 2021, 27, 2702–2713.
  14. Cohen, M.H.; Giangola, J.P.; Balogh, J. Voice User Interface Design; Addison-Wesley Professional: Boston, MA, USA, 2004.
  15. Pearl, C. Designing Voice User Interfaces: Principles of Conversational Experiences; O’Reilly Media, Inc.: Sebastopol, CA, USA, 2016.
  16. Terzopoulos, G.; Satratzemi, M. Voice assistants and smart speakers in everyday life and in education. Inform. Educ. 2020, 19, 473–490.
  17. McLean, G.; Osei-Frimpong, K. Hey Alexa… examine the variables influencing the use of artificial intelligent in-home voice assistants. Comput. Hum. Behav. 2019, 99, 28–37.
  18. Natale, S.; Cooke, H. Browsing with Alexa: Interrogating the impact of voice assistants as web interfaces. Media Cult. Soc. 2021, 43, 1000–1016.
  19. Rzepka, C. Examining the use of voice assistants: A value-focused thinking approach. In Proceedings of the Twenty-fifth Americas Conference on Information Systems, Cancún, Mexico, 15–17 August 2019.
  20. Schmidt, A. Implicit human computer interaction through context. Pers. Technol. 2000, 4, 191–199.
  21. Rani, P.; Liu, C.; Sarkar, N.; Vanman, E. An empirical study of machine learning techniques for affect recognition in human–robot interaction. Pattern Anal. Appl. 2006, 9, 58–69.
  22. Papatheocharous, E.; Belk, M.; Germanakos, P.; Samaras, G. Towards implicit user modeling based on artificial intelligence, cognitive styles and web interaction data. Int. J. Artif. Intell. Tools 2014, 23, 1440009.
  23. Ju, W.; Leifer, L. The design of implicit interactions: Making interactive systems less obnoxious. Des. Issues 2008, 24, 72–84.
  24. Agichtein, E.; Brill, E.; Dumais, S.; Ragno, R. Learning user interaction models for predicting web search result preferences. In Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Seattle, WA, USA, 6–11 August 2006; pp. 3–10.
  25. Ntalianis, K.S.; Doulamis, A.D.; Tsapatsoulis, N.; Doulamis, N. Human action annotation, modeling and analysis based on implicit user interaction. Multimed. Tools Appl. 2010, 50, 199–225.
  26. Zamenopoulos, T.; Alexiou, K. Towards an anticipatory view of design. Des. Stud. 2007, 28, 411–436.
  27. van Bodegraven, J. How anticipatory design will challenge our relationship with technology. In Proceedings of the 2017 AAAI Spring Symposium Series, Stanford, CA, USA, 27–29 March 2017.
  28. Dumas, J.F.; Redish, J.C. A Practical Guide to Usability Testing; Greenwood Publishing Group Inc.: Westport, CT, USA, 1993.
  29. Lewis, J.R. Usability testing. In Handbook of Human Factors and Ergonomics; Wiley: Hoboken, NJ, USA, 2012; pp. 1267–1312.
  30. Nielsen, J.; Molich, R. Heuristic evaluation of user interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Gaithersburg, MD, USA, 15–17 March 1990; pp. 249–256.
  31. González, M.P.; Lorés, J.; Granollers, A. Enhancing usability testing through datamining techniques: A novel approach to detecting usability problem patterns for a context of use. Inf. Softw. Technol. 2008, 50, 547–568.
  32. Eloff, J.H.; De Bruin, J.A.; Malan, K.M. Semi-automated usability analysis through eye tracking. South Afr. Comput. J. 2018, 30, 66–84.
  33. Vargas, A.; Weffers, H.; da Rocha, H.V. A method for remote and semi-automatic usability evaluation of web-based applications through users behavior analysis. In Proceedings of the 7th International Conference on Methods and Techniques in Behavioral Research, Eindhoven, The Netherlands, 24–27 August 2010; pp. 1–5.
  34. Muhi, K.; Szőke, G.; Fülöp, L.J.; Ferenc, R.; Berger, Á. A semi-automatic usability evaluation framework. In Computational Science and Its Applications—ICCSA 2013, Proceedings of the 13th International Conference on Computational Science and Its Applications, Ho Chi Minh City, Vietnam, 24–27 June 2013; Proceedings, Part II 13; Springer: Berlin/Heidelberg, Germany, 2013; pp. 529–542.
  35. Petrie, H.; Bevan, N. The evaluation of accessibility, usability, and user experience. In The Universal Access Handbook; CRC Press: Boca Raton, FL, USA, 2009; Volume 1, pp. 1–16.
  36. Wang, J.; Antonenko, P.; Celepkolu, M.; Jimenez, Y.; Fieldman, E.; Fieldman, A. Exploring relationships between eye tracking and traditional usability testing data. Int. J. Hum.-Comput. Interact. 2019, 35, 483–494.
  37. Brocke, J.V.; Riedl, R.; Léger, P.M. Application strategies for neuroscience in information systems design science research. J. Comput. Inf. Syst. 2013, 53, 1–13.
  38. Alfimtsev, A.N.; Basarab, M.A.; Devyatkov, V.V.; Levanov, A.A. A new methodology of usability testing on the base of the analysis of user’s electroencephalogram. J. Comput. Sci. Appl. 2015, 3, 105–111.
  39. Gasson, S. Human-centered vs. user-centered approaches to information system design. J. Inf. Technol. Theory Appl. (JITTA) 2003, 5, 5.
  40. Zhang, J.; Johnson, K.A.; Malin, J.T.; Smith, J.W. Human-centered information visualization. In Proceedings of the International Workshop on Dynamic Visualizations and Learning, Tubingen, Germany, 18–19 July 2002.
  41. Aragon, C.; Guha, S.; Kogan, M.; Muller, M.; Neff, G. Human-Centered Data Science: An Introduction; MIT Press: Cambridge, MA, USA, 2022.
  42. Hall, D.L.; Jordan, J.M. Human-Centered Information Fusion; Artech House: Norwood, MA, USA, 2010.
  43. Rinkus, S.; Walji, M.; Johnson-Throop, K.A.; Malin, J.T.; Turley, J.P.; Smith, J.W.; Zhang, J. Human-centered design of a distributed knowledge management system. J. Biomed. Inform. 2005, 38, 4–17.
  44. Gentner, D.; van Harmelen, F.; Hitzler, P.; Janowicz, K.; Kuhnberger, K.U. Cognitive approaches for the semantic web. Dagstuhl Rep. 2012, 2, 93–116.
  45. Raubal, M.; Adams, B. The semantic web needs more cognition. Semant. Web 2010, 1, 69–74.
  46. McCosker, A.; Wilken, R. Rethinking ‘big data’as visual knowledge: The sublime and the diagrammatic in data visualisation. Vis. Stud. 2014, 29, 155–164.
  47. Donalek, C.; Djorgovski, S.G.; Cioc, A.; Wang, A.; Zhang, J.; Lawler, E.; Yeh, S.; Mahabal, A.; Graham, M.; Drake, A.; et al. Immersive and collaborative data visualization using virtual reality platforms. In Proceedings of the 2014 IEEE International Conference on Big Data (Big Data), IEEE, Washington, DC, USA, 27–30 October 2014; pp. 609–614.
  48. Olshannikova, E.; Ometov, A.; Koucheryavy, Y.; Olsson, T. Visualizing Big Data with augmented and virtual reality: Challenges and research agenda. J. Big Data 2015, 2, 1–27.
  49. Abbasi, A.; Sarker, S.; Chiang, R.H. Big data research in information systems: Toward an inclusive research agenda. J. Assoc. Inf. Syst. 2016, 17, 3.
  50. Franke, B.; Plante, J.F.; Roscher, R.; Lee, E.S.A.; Smyth, C.; Hatefi, A.; Chen, F.; Gil, E.; Schwing, A.; Selvitella, A.; et al. Statistical inference, learning and models in big data. Int. Stat. Rev. 2016, 84, 371–389.
  51. De Bra, P.M.E.; Aroyo, L.M.; Chepegin, V. The next big thing: Adaptive web-based systems. J. Digit. Inf. 2004, 5, No-247.
  52. Clarke, S.; Lehaney, B. (Eds.) Human Centered Methods in Information Systems: Current Research and Practice; IGI Global: Hershey, PA, USA, 1999.
  53. Rahmayani, M.T.I.; Firdaus, R.; Tekwana, P. Implementation of human centered design (hcd) Models in designing web-based information systems. J. Mantik 2023, 6, 3818–3826.
  54. van Velsen, L.S.; van der Geest, T.M.; Klaassen, R.F. User-centered evaluation of adaptive and adaptable systems. In Proceedings of the Fifth Workshop on User-Centred Design and Evaluation of Adaptive Systems, Dublin, Ireland, 20 June 2006.
  55. Chen, H.M.; Cooper, M.D. Using clustering techniques to detect usage patterns in a Web-based information system. J. Am. Soc. Inf. Sci. Technol. 2001, 52, 888–904.
  56. Chen, H.M.; Cooper, M.D. Stochastic modeling of usage patterns in a web-based information system. J. Am. Soc. Inf. Sci. Technol. 2002, 53, 536–548.
  57. De Guinea, A.O.; Webster, J. An investigation of information systems use patterns: Technological events as triggers, the effect of time, and consequences for performance. MIS Q. 2013, 37, 1165–1188.
  58. Ramirez, A.J.; Cheng, B.H. Design patterns for developing dynamically adaptive systems. In Proceedings of the 2010 ICSE Workshop on Software Engineering for Adaptive and Self-Managing Systems, New York, NY, USA, 3–4 May 2010; pp. 49–58.
  59. Suchanek, F.M.; Kasneci, G.; Weikum, G. Yago: A core of semantic knowledge. In Proceedings of the 16th International Conference on World Wide Web, Banff, AB, Canada, 8–12 May 2007; pp. 697–706.
  60. Kabir, N. The Impact of Semantic Knowledge Management System on Firms’ Innovation and Competitiveness. Doctoral Dissertation, Newcastle University, Newcastle, UK, 2017.
  61. Guido, A.L.; Paiano, R. Semantic integration of information systems. Int. J. Comput. Netw. Commun. (IJCNC) 2010, 2, 48–64.
  62. Tummarello, G.; Delbru, R.; Oren, E. Sindice.com: Weaving the open linked data. In Proceedings of the Semantic Web: 6th International Semantic Web Conference, 2nd Asian Semantic Web Conference, ISWC 2007+ ASWC 2007, Busan, Korea, 11–15 November 2007; Proceedings. Springer: Berlin/Heidelberg, Germany, 2007; pp. 552–565.
  63. Patkos, T.; Bikakis, A.; Antoniou, G.; Papadopouli, M.; Plexousakis, D. A semantics-based framework for context-aware services: Lessons learned and challenges. In Proceedings of the International Conference on Ubiquitous Intelligence and Computing, Hong Kong, China, 11–13 July 2007; Springer: Berlin/Heidelberg, Germany, 2007; pp. 839–848.
  64. Sangers, J.; Frasincar, F.; Hogenboom, F.; Chepegin, V. Semantic web service discovery using natural language processing techniques. Expert Syst. Appl. 2013, 40, 4660–4671.
  65. Kocaballi, A.B.; Laranjo, L.; Coiera, E. Understanding and measuring user experience in conversational interfaces. Interact. Comput. 2019, 31, 192–207.
  66. Gruber, T. Collective knowledge systems: Where the social web meets the semantic web. J. Web Semant. 2008, 6, 4–13.
  67. Grassi, M.; Morbidoni, C.; Nucci, M. A collaborative video annotation system based on semantic web technologies. Cogn. Comput. 2012, 4, 497–514.
  68. Albertoni, R.; Bertone, A.; De Martino, M. Information Search: The Challenge of Integrating Information Visualization and Semantic Web. In Proceedings of the 16th International Workshop on Database and Expert Systems Applications (DEXA’05), Copenhagen, Denmark, 22–26 August 2005; pp. 529–533.
  69. Benjamins, R.; Contreras, J.; Corcho, O.; Gómez-Pérez, A. The six challenges of the Semantic Web. In Proceedings of the Eighth International Conference on Principles of Knowledge Representation and Reasoning, KR2002, Toulouse, France, 22–25 April 2002; ISBN 9781558608474.
  70. Baldoni, M.; Baroglio, C.; Henze, N. Personalization for the semantic web. In Reasoning Web: First International Summer School 2005, Msida, Malta, July 25–29, 2005, Revised Lectures; Springer: Berlin/Heidelberg, Germany, 2005; pp. 173–212.
  71. Lilis, Y.; Zidianakis, E.; Partarakis, N.; Antona, M.; Stephanidis, C. Personalizing HMI elements in ADAS using ontology meta-models and rule based reasoning. In Universal Access in Human–Computer Interaction. Design and Development Approaches and Methods, Proceedings of the 11th International Conference, UAHCI 2017, Held as Part of HCI International 2017, Vancouver, BC, Canada, 9–14 July 2017; Proceedings, Part I 11; Springer International Publishing: Berlin/Heidelberg, Germany, 2017; pp. 383–401.
  72. Christou, C.; Angus, C.; Loscos, C.; Dettori, A.; Roussou, M. A versatile large-scale multimodal VR system for cultural heritage visualization. In Proceedings of the ACM Symposium on VR Software and Technology, Limassol, Cyprus, 1–3 November 2006; pp. 133–140.
  73. Gaitatzes, A.; Christopoulos, D.; Roussou, M. Reviving the past: Cultural heritage meets VR. In Proceedings of the 2001 Conference on VR, Archeology, and Cultural Heritage, Glyfada, Greece, 28–30 November 2001; pp. 103–110.
  74. Bruno, F.; Bruno, S.; De Sensi, G.; Luchi, M.L.; Mancuso, S.; Muzzupappa, M. From 3D reconstruction to VR: A complete methodology for digital archaeological exhibition. J. Cult. Herit. 2010, 11, 42–49.
  75. Gonizzi Barsanti, S.; Caruso, G.; Micoli, L.L.; Covarrubias Rodriguez, M.; Guidi, G. 3D visualization of cultural heritage artefacts with VR devices. In Proceedings of the 25th International CIPA Symposium 2015, Taipei, Taiwan, 31 August–4 September 2015; Copernicus Gesellschaft mbH: Göttingen, Germany, 2015; Volume 40, pp. 165–172.
  76. Foni, A.; Papagiannakis, G.; Magnenat-Thalmann, N. A Virtual Heritage Case Study: A Modern Approach to the Revival of Ancient Historical or Archeological Sites through Application of 3D Real-Time Computer Graphics. Proc. A VIR 3 2003. Available online: https://api.semanticscholar.org/CorpusID:12528723 (accessed on 5 January 2024).
  77. Papagiannakis, G.; Ponder, M.; Molet, T.; Kshirsagar, S.; Cordier, F.; Magnenat-Thalmann, M.; Thalmann, D. LIFEPLUS: Revival of life in ancient Pompeii, virtual systems and multimedia (No. CONF). 2002. Available online: https://www.researchgate.net/publication/37444098_LIFEPLUS_Revival_of_life_in_ancient_Pompeii_Virtual_Systems_and_Multimedia (accessed on 5 January 2024).
  78. Magnenat-Thalmann, N.; Foni, A.E.; Papagiannakis, G.; Cadi-Yazli, N. Real Time Animation and Illumination in Ancient Roman Sites. Int. J. Virtual Real. 2007, 6, 11–24.
  79. Foni, A.E.; Papagiannakis, G.; Cadi-Yazli, N.; Magnenat-Thalmann, N. Time-dependent illumination and animation of virtual Hagia-Sophia. Int. J. Archit. Comput. 2007, 5, 283–301.
  80. Skovfoged, M.M.; Viktor, M.; Sokolov, M.K.; Hansen, A.; Nielsen, H.H.; Rodil, K. The tales of the Tokoloshe: Safeguarding intangible cultural heritage using VR. In Proceedings of the Second African Conference for Human Computer Interaction: Thriving Communities, New York, NY, USA, 3–7 December 2018; pp. 1–4.
  81. Cao, D.; Li, G.; Zhu, W.; Liu, Q.; Bai, S.; Li, X. VR technology applied in digitalization of cultural heritage. Clust. Comput. 2019, 22, 10063–10074.
  82. Oculus Quest. Available online: https://www.oculus.com/experiences/quest/?locale=el_GR (accessed on 10 January 2023).
  83. Argyriou, L.; Economou, D.; Bouki, V. Design methodology for 360 immersive video applications: The case study of a cultural heritage virtual tour. Pers. Ubiquitous Comput. 2020, 24, 843–859.
  84. Argyriou, L.; Economou, D.; Bouki, V. 360-degree interactive video application for cultural heritage education. In Proceedings of the 3rd Annual International Conference of the Immersive Learning Research Network, Verlag der Technischen Universität Graz, Coimbra, Portugal, 26–29 June 2017.
  85. Škola, F.; Rizvić, S.; Cozza, M.; Barbieri, L.; Bruno, F.; Skarlatos, D.; Liarokapis, F. VR with 360-video storytelling in cultural heritage: Study of presence, engagement, and immersion. Sensors 2020, 20, 5851.
  86. Zhou, C.; Li, Z.; Liu, Y. A measurement study of oculus 360-degree video streaming. In Proceedings of the 8th ACM on Multimedia Systems Conference, Taipei, Taiwan, 20–23 June 2017; pp. 27–37.
  87. Lo, W.C.; Fan, C.L.; Lee, J.; Huang, C.Y.; Chen, K.T.; Hsu, C.H. 360 video viewing dataset in head-mounted VR. In Proceedings of the 8th ACM on Multimedia Systems Conference, Taipei, Taiwan, 20–23 June 2017; pp. 211–216.
  88. Hajirasouli, A.; Banihashemi, S.; Kumarasuriyar, A.; Talebi, S.; Tabadkani, A. VR-based digitization for endangered heritage sites: Theoretical framework and application. J. Cult. Herit. 2021, 49, 140–151.
  89. Pribeanu, C.; Balog, A.; Iordache, D.D. Measuring the perceived quality of an AR-based learning application: A multidimensional model. Interact. Learn. Environ. 2017, 25, 482–495.
  90. Irwansyah, F.S.; Yusuf, Y.M.; Farida, I.; Ramdhani, M.A. Augmented reality (AR) technology on the android operating system in chemistry learning. IOP Conf. Ser. Mater. Sci. Eng. 2018, 288, 012068.
  91. Moorhouse, N.; Jung, T. Augmented reality to enhance the learning experience in cultural heritage tourism: An experiential learning cycle perspective. eReview Tour. Res. 2017, 8.
  92. Dieck, M.C.T.; Jung, T.H. Value of augmented reality at cultural heritage sites: A stakeholder approach. J. Destin. Mark. Manag. 2017, 6, 110–117.
  93. Choudary, O.; Charvillat, V.; Grigoras, R.; Gurdjos, P. MARCH: Mobile augmented reality for cultural heritage. In Proceedings of the 17th ACM International Conference on Multimedia, Vancouver, BC, Canada, 19–24 October 2009; pp. 1023–1024.
  94. Vlahakis, V.; Karigiannis, J.; Tsotros, M.; Gounaris, M.; Almeida, L.; Stricker, D.; Gleue, T.; Christou, I.T.; Carlucci, R.; Ioannidis, N.; et al. Archeoguide: First results of an augmented reality, mobile computing system in cultural heritage sites. VR Archeol. Cult. Herit. 2001, 9, 584993–585015.
  95. Chung, N.; Lee, H.; Kim, J.Y.; Koo, C. The role of augmented reality for experience-influenced environments: The case of cultural heritage tourism in Korea. J. Travel Res. 2018, 57, 627–643.
  96. Deliyiannis, I.; Papaioannou, G. Augmented reality for archaeological environments on mobile devices: A novel open framework. Mediterr. Archaeol. Archaeom. 2014, 14, 1–10.
  97. Pierdicca, R.; Frontoni, E.; Zingaretti, P.; Malinverni, E.S.; Colosi, F.; Orazi, R. Making visible the invisible. augmented reality visualization for 3D reconstructions of archaeological sites. In Proceedings of the Augmented and VR: Second International Conference, AVR 2015, Lecce, Italy, 31 August–3 September 2015; Springer International Publishing: Berlin/Heidelberg, Germany, 2015. Proceedings 2. pp. 25–37.
  98. Panou, C.; Ragia, L.; Dimelli, D.; Mania, K. An architecture for mobile outdoors augmented reality for cultural heritage. ISPRS Int. J. Geo-Inf. 2018, 7, 463.
  99. Fernández-Palacios, B.J.; Nex, F.; Rizzi, A.; Remondino, F. ARCube—The Augmented Reality Cube for Archaeology. Archaeometry 2015, 1, 250–262.
  100. Fernández-Palacios, B.J.; Rizzi, A.; Nex, F. Augmented reality for archaeological finds. In Proceedings of the Cultural Heritage Preservation: 4th International Conference, EuroMed 2012, Limassol, Cyprus, 29 October–3 November 2012; Springer: Berlin/Heidelberg, Germany, 2012. Proceedings 4. pp. 181–190.
  101. The Historical Figures AR. Available online: https://play.google.com/store/apps/details?id=ca.altkey.thehistoricalfiguresar (accessed on 31 October 2022).
  102. Carre, A.L.; Dubois, A.; Partarakis, N.; Zabulis, X.; Patsiouras, N.; Mantinaki, E.; Zidianakis, E.; Cadi, N.; Baka, E.; Thalmann, N.M.; et al. Mixed-reality demonstration and training of glassblowing. Heritage 2022, 5, 103–128.
  103. Gedenryd, H. How Designers Work—Making Sense of Authentic Cognitive Activities. Ph.D. Thesis, Lund University, Lund, Sweden, 1998.
  104. Keller, C.M.; Keller, J.D. Imagery in cultural tradition and innovation. Mind Cult. Act. 1999, 6, 3–32.
  105. Di Nuovo, A.; De La Cruz, V.M.; Marocco, D. Special issue on artificial mental imagery in cognitive systems and robotics. Adapt. Behav. 2013, 21, 217–221.
  106. Di Nuovo, A.; Marocco, D.; Di Nuovo, S.; Cangelosi, A. Embodied mental imagery in cognitive robots. In Springer Handbook of Model-Based Science; Springer: Berlin/Heidelberg, Germany, 2017; pp. 619–637.
  107. Zabulis, X.; Meghini, C.; Dubois, A.; Doulgeraki, P.; Partarakis, N.; Adami, I.; Karuzaki, E.; Carre, A.; Patsiouras, N.; Kaplanidi, D.; et al. Digitisation of traditional craft processes. J. Comput. Cult. Herit. 2022, 15, 1–24.
  108. Aktas, B.; Mäkelä, M.; Laamanen, T.K. Material connections in craft making: The case of felting. In Proceedings of the Design Research Society International Conference, Brisbane, Australia, 11–14 August 2020; Design Research Society: Brisbane, Australia; pp. 2326–2343.
  109. Stefanidi, E.; Partarakis, N.; Zabulis, X.; Zikas, P.; Papagiannakis, G.; Magnenat Thalmann, N. TooltY: An approach for the combination of motion capture and 3D reconstruction to present tool usage in 3D environments. In Intelligent Scene Modeling and Human-Computer Interaction; Springer International Publishing: Cham, Switzerland, 2021; pp. 165–180.
  110. Stefanidi, E.; Partarakis, N.; Zabulis, X.; Papagiannakis, G. An approach for the visualization of crafts and machine usage in virtual environments. In Proceedings of the 13th International Conference on Advances in Computer-Human Interactions, Valencia, Spain, 21–25 November 2020; pp. 21–25.
  111. Bouloukakis, M.; Partarakis, N.; Drossis, I.; Kalaitzakis, M.; Stephanidis, C. Virtual reality for smart city visualization and monitoring. In Mediterranean Cities and Island Communities: Smart, Sustainable, Inclusive and Resilient; Springer: Cham, Switzerland, 2019; pp. 1–18.
  112. Rossau, I.G.; Skovfoged, M.M.; Czapla, J.J.; Sokolov, M.K.; Rodil, K. Dovetailing: Safeguarding traditional craftsmanship using virtual reality. Int. J. Intang. Herit. 2019, 14, 104–120.
  113. Irregularcorporation. Available online: https://theirregularcorporation.com/ (accessed on 27 November 2023).
  114. Woodwork Simulator. Available online: https://www.igdb.com/games/woodwork-simulator (accessed on 27 November 2023).
  115. Murray, J.; Sawyer, W. Virtual Crafting Simulator: Teaching Heritage through Simulation. In Proceedings of EDULEARN15; University of Lincoln: Lincoln, UK, 2015; pp. 7668–7675.
  116. Iyobe, M.; Ishida, T.; Miyakawa, A.; Shibata, Y. Kansei retrieval method by principal component analysis of Japanese traditional crafts. In Proceedings of the 23rd International Symposium on Artificial Life and Robotics, Beppu, Japan, 18–20 January 2018; pp. 588–591.
  117. Iyobe, M.; Ishida, T.; Miyakawa, A.; Sugita, K.; Uchida, N.; Shibata, Y. Development of a mobile virtual traditional crafting presentation system using augmented reality technology. Int. J. Space-Based Situated Comput. (IJSSC) 2017, 6, 239–251.
  118. Iyobe, M.; Ishida, T.; Miyakawa, A.; Shibata, Y. Implementation of a mobile traditional crafting application using kansei retrieval method. IT CoNvergence PRAct. (INPRA) 2017, 5, 15–44.
  119. Kaplan, A.D.; Cruit, J.; Endsley, M.; Beers, S.M.; Sawyer, B.D.; Hancock, P.A. The effects of virtual reality, augmented reality, and mixed reality as training enhancement methods: A meta-analysis. Hum. Factors 2021, 63, 706–726.
  120. Gonzalez-Franco, M.; Pizarro, R.; Cermeron, J.; Li, K.; Thorn, J.; Hutabarat, W.; Tiwari, A.; Bermell-Garcia, P. Immersive mixed reality for manufacturing training. Front. Robot. AI 2017, 4, 3.
  121. Verhey, J.T.; Haglin, J.M.; Verhey, E.M.; Hartigan, D.E. Virtual, augmented, and mixed reality applications in orthopedic surgery. Int. J. Med. Robot. Comput. Assist. Surg. 2020, 16, e2067.
  122. Alnagrat, A.J.A. Virtual Transformations in Human Learning Environment: An Extended Reality Approach. J. Hum. Centered Technol. 2022, 1, 116–124.
  123. Pretolesi, D.; Zechner, O. Persuasive XR Training: Improving Training with AI and Dashboards. In Proceedings of the 18th International Conference on Persuasive Technology (PERSUASIVE 2023), Eindhoven, The Netherlands, 19–21 April 2023; p. 8.
  124. Doolani, S.; Wessels, C.; Kanal, V.; Sevastopoulos, C.; Jaiswal, A.; Nambiappan, H.; Makedon, F. A review of extended reality (xr) technologies for manufacturing training. Technologies 2020, 8, 77.
  125. Moeslund, T.B.; Granum, E. A survey of computer vision-based human motion capture. Comput. Vis. Image Underst. 2001, 81, 231–268.
  126. Moeslund, T.B.; Hilton, A.; Krüger, V. A survey of advances in vision-based human motion capture and analysis. Comput. Vis. Image Underst. 2006, 104, 90–126.
  127. Nansense. Available online: https://www.nansense.com (accessed on 7 December 2023).
  128. Rokoko. Available online: https://www.rokoko.com (accessed on 7 December 2023).
  129. Mehta, D.; Sotnychenko, O.; Mueller, F.; Xu, W.; Elgharib, M.; Fua, P.; Seidel, H.-P.; Rhodin, H.; Pons-Moll, G.; Theobalt, C. XNect: Real-time multi-person 3D motion capture with a single RGB camera. ACM Trans. Graph. (TOG) 2020, 39, 82:1–82:17.
  130. Laraba, S.; Brahimi, M.; Tilmanne, J.; Dutoit, T. 3D skeleton-based action recognition by representing motion capture sequences as 2D-RGB images. Comput. Animat. Virtual Worlds 2017, 28, e1782.
  131. Qammaz, A.; Argyros, A.A. MocapNET: Ensemble of SNN Encoders for 3D Human Pose Estimation in RGB Images. In Proceedings of the BMVC, Cardiff, UK, 9–12 September 2019; p. 46.
  132. Qammaz, A.; Argyros, A. Occlusion-tolerant and personalized 3D human pose estimation in RGB images. In Proceedings of the 2020 25th International Conference on Pattern Recognition (ICPR), IEEE, Milan, Italy, 10–15 January 2021; pp. 6904–6911.
  133. Qammaz, A.; Argyros, A.A. A Unified Approach for Occlusion Tolerant 3D Facial Pose Capture and Gaze Estimation Using MocapNETs. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–6 October 2023; pp. 3178–3188.
  134. D’Apuzzo, N. Overview of 3D surface digitization technologies in Europe. In Three-Dimensional Image Capture and Applications VII; SPIE: Wallisellen, Switzerland, 2006; Volume 6056, pp. 42–54.
  135. Durou, J.D.; Falcone, M.; Quéau, Y.; Tozza, S. (Eds.) Advances in Photometric 3d-Reconstruction; Springer International Publishing: Cham, Switzerland, 2020; pp. 1–29.
  136. Daneshmand, M.; Helmi, A.; Avots, E.; Noroozi, F.; Alisinanoglu, F.; Arslan, H.S.; Gorbova, J.; Haamer, R.E.; Ozcinar, C.; Anbarjafari, G. 3d scanning: A comprehensive survey. arXiv 2018, preprint. arXiv:1801.08863.
  137. Xiong, Z.; Zhang, Y.; Wu, F.; Zeng, W. Computational depth sensing: Toward high-performance commodity depth cameras. IEEE Signal Process. Mag. 2017, 34, 55–68.
  138. Zhang, T.; Nakamura, Y. Hrpslam: A benchmark for rgb-d dynamic slam and humanoid vision. In Proceedings of the 2019 Third IEEE International Conference on Robotic Computing (IRC), IEEE, Naples, Italy, 25–27 February 2019; pp. 110–116.
  139. Aguilar, W.G.; Rodríguez, G.A.; Álvarez, L.; Sandoval, S.; Quisaguano, F.; Limaico, A. Visual SLAM with a RGB-D camera on a quadrotor UAV using on-board processing. In Proceedings of the Advances in Computational Intelligence: 14th International Work-Conference on Artificial Neural Networks, IWANN 2017, Cadiz, Spain, 14–16 June 2017; Proceedings, Part II 14. Springer International Publishing: Berlin/Heidelberg, Germany, 2017; pp. 596–606.
  140. Benko, H.; Holz, C.; Sinclair, M.; Ofek, E. Normaltouch and texturetouch: High-fidelity 3d haptic shape rendering on handheld virtual reality controllers. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology, Tokyo, Japan, 16–19 October 2016; pp. 717–728.
  141. Choi, I.; Ofek, E.; Benko, H.; Sinclair, M.; Holz, C. Claw: A multifunctional handheld haptic controller for grasping, touching, and triggering in virtual reality. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018; pp. 1–13.
  142. Whitmire, E.; Benko, H.; Holz, C.; Ofek, E.; Sinclair, M. Haptic revolver: Touch, shear, texture, and shape rendering on a reconfigurable virtual reality controller. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018; pp. 1–12.
  143. Coelho, C.; Tichon, J.; Hine, T.J.; Wallis, G.; Riva, G. Media presence and inner presence: The sense of presence in virtual reality technologies. In From Communication to Presence: Cognition, Emotions and Culture towards the Ultimate Communicative Experience; IOS Press: Amsterdam, The Netherlands, 2006; Volume 11, pp. 25–45.
  144. North, M.M.; North, S.M. A comparative study of sense of presence of traditional virtual reality and immersive environments. Australas. J. Inf. Syst. 2016, 20, 1–15.
  145. Argelaguet, F.; Hoyet, L.; Trico, M.; Lécuyer, A. The role of interaction in virtual embodiment: Effects of the virtual hand representation. In Proceedings of the 2016 IEEE Virtual Reality (VR), IEEE, Greenville, SC, USA, 19–23 March 2016; pp. 3–10.
  146. Kilteni, K.; Groten, R.; Slater, M. The sense of embodiment in virtual reality. Presence Teleoperators Virtual Environ. 2012, 21, 373–387.
  147. Genay, A.; Lécuyer, A.; Hachet, M. Being an avatar “for real”: A survey on virtual embodiment in augmented reality. IEEE Trans. Vis. Comput. Graph. 2021, 28, 5071–5090.
  148. Hassan, R. Digitality, virtual reality and the ‘empathy machine’. Digit. Journal. 2020, 8, 195–212.
  149. Kenanidis, E.; Boutos, P.; Voulgaris, G.; Zgouridou, A.; Gkoura, E.; Gamie, Z.; Papagiannakis, G.; Tsiridis, E. Effectiveness of virtual reality compared to video training on acetabular cup and femoral stem implantation accuracy in total hip arthroplasty among medical students: A randomised controlled trial. Int. Orthop. 2023, 1–9.
  150. Zikas, P.; Protopsaltis, A.; Lydatakis, N.; Kentros, M.; Geronikolakis, S.; Kateros, S.; Kamarianakis, M.; Evangelou, G.; Filippidis, A.; Grigoriou, E.; et al. MAGES 4.0: Accelerating the world’s transition to VR training and democratizing the authoring of the medical metaverse. IEEE Comput. Graph. Appl. 2023, 43, 43–56.
  151. Saccoccio, S. Towards Enabling Storyliving Experiences: How XR Technologies Can Enhance Brand Storytelling. Master Thesis, University of Milan, Milan, Italy, 2022.
  152. Miller, C.H. Digital Storytelling 4e: A Creator’s Guide to Interactive Entertainment; CRC Press: Boca Raton, FL, USA, 2019.
  153. Raybourn, E.M. A new paradigm for serious games: Transmedia learning for more effective training and education. J. Comput. Sci. 2014, 5, 471–481.
  154. Streicher, A.; Smeddinck, J.D. Personalized and adaptive serious games. In Proceedings of the Entertainment Computing and Serious Games: International GI-Dagstuhl Seminar 15283, Dagstuhl Castle, Germany, 5–10 July 2015; Revised Selected Papers. Springer International Publishing: Berlin/Heidelberg, Germany, 2016; pp. 332–377.
  155. Williams-Bell, F.M.; Kapralos, B.; Hogue, A.; Murphy, B.M.; Weckman, E.J. Using serious games and virtual simulation for training in the fire service: A review. Fire Technol. 2015, 51, 553–584.
  156. Arnab, S. (Ed.) Serious Games for Healthcare: Applications and Implications; IGI Global: Hershey, PA, USA, 2012.
  157. Chittaro, L.; Buttussi, F. Assessing knowledge retention of an immersive serious game vs. a traditional education method in aviation safety. IEEE Trans. Vis. Comput. Graph. 2015, 21, 529–538.
  158. Chittaro, L.; Sioni, R. Serious games for emergency preparedness: Evaluation of an interactive vs. a non-interactive simulation of a terror attack. Comput. Hum. Behav. 2015, 50, 508–519.
  159. Mystakidis, S.; Besharat, J.; Papantzikos, G.; Christopoulos, A.; Stylios, C.; Agorgianitis, S.; Tselentis, D. Design, development, and evaluation of a virtual reality serious game for school fire preparedness training. Educ. Sci. 2022, 12, 281.
  160. Checa, D.; Miguel-Alonso, I.; Bustillo, A. Immersive virtual-reality computer-assembly serious game to enhance autonomous learning. Virtual Real. 2021, 27, 3301–3318.
  161. Rebolledo-Mendez, G.; Avramides, K.; De Freitas, S.; Memarzia, K. Societal impact of a serious game on raising public awareness: The case of FloodSim. In Proceedings of the 2009 ACM SIGGRAPH Symposium on Video Games, New Orleans, LA, USA, 3–7 August 2009; pp. 15–22.
  162. De Jans, S.; Van Geit, K.; Cauberghe, V.; Hudders, L.; De Veirman, M. Using games to raise awareness: How to co-design serious mini-games? Comput. Educ. 2017, 110, 77–87.
  163. Zarzuela, M.M.; Pernas, F.J.D.; Martínez, L.B.; Ortega, D.G.; Rodríguez, M.A. Mobile serious game using augmented reality for supporting children’s learning about animals. Procedia Comput. Sci. 2013, 25, 375–381.
  164. Checa, D.; Bustillo, A. A review of immersive virtual reality serious games to enhance learning and training. Multimed. Tools Appl. 2020, 79, 5501–5527.
  165. Avola, D.; Cinque, L.; Foresti, G.L.; Marini, M.R. An interactive and low-cost full body rehabilitation framework based on 3D immersive serious games. J. Biomed. Inform. 2019, 89, 81–100.
  166. Schmidt, A.; Mayer, S.; Buschek, D. Introduction to Intelligent User Interfaces. In Proceedings of the Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, New York, NY, USA, 8–13 May 2021; pp. 1–4.
  167. Hitzler, P.; Bianchi, F.; Ebrahimi, M.; Sarker, M.K. Neural-symbolic integration and the semantic web. Semant. Web 2020, 11, 3–11.