Technologies and AI for Human-Centered Digital Experiences: History

Human–computer interaction (HCI) and information and communication technologies (ICT) are experiencing a transition due to the emergence of innovative design paradigms that are affecting the way we interact with digital information. Immersive technologies, such as augmented reality (AR), virtual reality (VR), and mixed reality (MR), are considered the building blocks of modern digital experiences. Knowledge representation in a machine-interpretable format supports reasoning on knowledge sources to shape intelligent, user-centric information systems. From a human-centered perspective, these systems have the potential to dynamically adapt to the diverse and evolving needs of users. Simultaneously, the integration of artificial intelligence (AI) into ICT creates the foundation for new interaction technologies, information processing, and information visualization.

  • human–computer interaction
  • interaction technologies
  • user interface design
  • human-centered design
  • web-based information systems
  • semantic knowledge representation
  • X-reality applications
  • human motion
  • 3D digitization
  • serious games

1. Advances in User Interface Design, Development, and Evaluation, including New Approaches for Explicit and Implicit Interaction

Advances in UI design, development, and evaluation have enabled the adoption of new technologies and supported a deeper understanding of user behavior. This progress has been manifested through a combination of explicit and implicit interaction approaches.
Explicit interaction can be perceived as a form of communication with a computing device where explicit user input is directly connected to the issuing of a command. Traditionally, explicit interaction involved direct user inputs, such as with a mouse, a keyboard, or touch gestures. However, recent advances have elevated explicit interaction by introducing gesture-based interfaces (e.g., [53,54,55,56,57,58]) and kinesthetic interaction paradigms (e.g., [59,60]). These forms use technologies such as computer vision, depth sensing, infrared sensing, and tracking devices (RGB-D sensors, RGB cameras, infrared cameras) and enable users to communicate with systems through intuitive hand movements [61,62]. The advances discussed in these works include tracking hand movements instead of just static poses [53], optimizing gesture recognition from a monocular camera to support gaming in public spaces [55], using specialized wearable devices for gesture recognition [57], and optimizing traditional RGB-D approaches with AI [58,61]. These novel forms of interaction allow for more immersive experiences and support natural interaction. Especially in VR, new interaction paradigms can replace classic VR controllers and support more natural interaction through gestures and real-time hand tracking, augmenting the feeling of presence in a virtual space (e.g., [63,64]). Similar effects can be achieved in AR by enhancing interaction with digitally augmented physical objects (e.g., [65,66]).
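To make this concrete, the sketch below shows a minimal camera-based hand-tracking loop that maps a pinch gesture to an explicit command, using OpenCV and MediaPipe Hands. The pinch-distance threshold and the SELECT command are illustrative assumptions, not a method from the cited works.

```python
# Minimal sketch: vision-based hand tracking mapped to an explicit command.
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.7)
cap = cv2.VideoCapture(0)  # default webcam

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB input; OpenCV captures BGR frames.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        lm = results.multi_hand_landmarks[0].landmark
        thumb, index = lm[4], lm[8]  # thumb tip and index fingertip
        # Treat a small thumb-index distance as a "pinch" (select) gesture.
        dist = ((thumb.x - index.x) ** 2 + (thumb.y - index.y) ** 2) ** 0.5
        if dist < 0.05:  # normalized-coordinate threshold (assumed)
            print("pinch detected -> issue SELECT command")
    cv2.imshow("hand tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```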
Voice-activated interfaces [67,68] represent another advancement in explicit interaction, implemented through the integration of advanced natural language processing (NLP) algorithms and thus supporting more than traditional voice command recognition [69]. Today, intelligent voice assistants are capable of much more, such as comprehending context, discerning user intent, and assisting with typical daily tasks (e.g., [70,71]), making digital systems more inclusive for users with diverse abilities.
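As a simplified illustration of intent recognition beyond exact command matching, the sketch below extracts an intent and slots from a transcribed utterance. The intents and patterns are illustrative assumptions; a production assistant would use statistical NLP models rather than regular expressions.

```python
# Minimal sketch: intent and slot extraction from a speech-to-text transcript.
import re

INTENTS = {
    "set_timer": re.compile(r"\b(set|start)\b.*\btimer\b.*?(\d+)\s*(minute|second)s?"),
    "play_media": re.compile(r"\bplay\b\s+(.+)"),
}

def parse(utterance: str):
    text = utterance.lower().strip()
    for intent, pattern in INTENTS.items():
        m = pattern.search(text)
        if m:
            return intent, m.groups()
    return "unknown", ()

# Example: transcripts from an upstream speech-to-text component.
print(parse("please set a timer for 10 minutes"))  # ('set_timer', ('set', '10', 'minute'))
print(parse("play some jazz"))                     # ('play_media', ('some jazz',))
```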
Implicit interaction is defined as “an action performed by the user that is not primarily aimed to interact with a computerized system but which such a system understands as input” [72]. In this domain, machine learning algorithms empower systems to discern user preferences, anticipate actions, and adapt interfaces and robots in real time [73,74,75]. Predictive modeling, driven by user behavior analysis, enables interfaces to become more personalized, offering a tailored experience that aligns with individual needs and preferences [76,77].
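A minimal sketch of such predictive modeling is given below: a classifier trained on passively logged behavior estimates whether a user will engage with a suggested item, and the interface adapts accordingly. The feature names and toy data are illustrative assumptions.

```python
# Minimal sketch: implicit interaction as a prediction problem over usage logs.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [dwell_time_s, scroll_depth_fraction, prior_clicks_on_topic]
X = np.array([[12.0, 0.9, 5], [1.5, 0.1, 0], [8.0, 0.7, 3], [0.8, 0.05, 0]])
y = np.array([1, 0, 1, 0])  # 1 = user engaged with the suggested item

model = LogisticRegression().fit(X, y)

# Adapt the interface: promote the suggestion only when engagement is likely.
p = model.predict_proba([[6.0, 0.6, 2]])[0, 1]
print(f"predicted engagement probability: {p:.2f}")
if p > 0.5:
    print("-> surface the recommendation prominently")
```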
The fusion of explicit and implicit interaction is evident in the rise of anticipatory design [78]. Interfaces are becoming increasingly adept at predicting user actions, streamlining workflows, and minimizing decision fatigue [79]. Through the seamless integration of explicit inputs and implicit learning, systems can offer a more fluid and intuitive user experience.
As UI paradigms evolve, so too must the methods for evaluating their effectiveness. Traditional usability testing [80,81] and heuristic evaluations [82] are now complemented by sophisticated analytics and user feedback mechanisms [83,84,85,86]. A holistic understanding of user experience requires a multidimensional approach that considers not only task completion efficiency but also emotional engagement, accessibility, and inclusivity [87]. Eye-tracking technology and neuroscientific methods are emerging as powerful tools for evaluating implicit interactions [88]. By examining gaze patterns and neural responses, designers gain insights into user attention, emotional responses, and cognitive load, providing valuable feedback for refining UI designs [89,90].
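For example, one basic implicit-interaction metric derivable from eye-tracking data is dwell time per area of interest (AOI), sketched below; the sample format and AOI rectangles are illustrative assumptions.

```python
# Minimal sketch: total gaze dwell time per screen region (area of interest).
AOIS = {"navigation": (0, 0, 200, 600), "content": (200, 0, 800, 600)}

def dwell_times(samples, aois=AOIS):
    """samples: list of (t_seconds, x, y) gaze points in screen coordinates."""
    totals = {name: 0.0 for name in aois}
    for (t0, x, y), (t1, _, _) in zip(samples, samples[1:]):
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x < x1 and y0 <= y < y1:
                totals[name] += t1 - t0  # credit the interval to this AOI
    return totals

samples = [(0.00, 50, 100), (0.02, 55, 110), (0.04, 400, 300), (0.06, 410, 320)]
print(dwell_times(samples))  # e.g. {'navigation': 0.04, 'content': 0.02}
```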

2. Human-Centered Web-Based Information Systems

Today, web-based information systems try to integrate the principles of human-centered design [91]. To this end, a combination of ICT advances is being integrated into such systems, including knowledge representation approaches, data visualization paradigms, data mining methodologies, and big data analytics technologies [92,93]. This evolution supports more dynamic information delivery on the one hand and user-centric experiences that generate insights from large datasets on the other. The main objective is to enhance the capacity to visualize vast amounts of information, rendering large datasets more readable for humans to understand and work with. At the core of such approaches is the deployment of knowledge representation techniques [94,95]. Semantic web technologies, ontologies, and graph databases organize and structure information in a manner that is both human- and machine-interpretable. At the same time, these advancements pose the need to introduce cognitive approaches in the semantic web, enabling systems not only to store and retrieve data but also to infer relationships, fostering a more accurate understanding of context [96,97]. Visual interfaces should help users locate information based on meaning while keeping the complexity of the semantic implementation hidden. For example, using similarities and comparisons can make it easier to navigate through large volumes of information. Instead of fixed representations of data, cognitive approaches should support selecting information based on user needs.
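To illustrate machine-interpretable knowledge representation, the sketch below builds a tiny RDF graph with rdflib and answers a meaning-based question with SPARQL; the example vocabulary is an illustrative assumption.

```python
# Minimal sketch: a small semantic graph queried by meaning, not keywords.
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.monaLisa, RDF.type, EX.Painting))
g.add((EX.monaLisa, EX.createdBy, EX.daVinci))
g.add((EX.daVinci, EX.name, Literal("Leonardo da Vinci")))

# SPARQL retrieves the names of artists who created paintings.
query = """
SELECT ?name WHERE {
    ?work a <http://example.org/Painting> ;
          <http://example.org/createdBy> ?artist .
    ?artist <http://example.org/name> ?name .
}"""
for row in g.query(query):
    print(row.name)  # -> Leonardo da Vinci
```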
Such approaches are a prerequisite for reducing the threat of information overload, which effective data visualization also addresses. Modern web-based systems deploy interactive and immersive visualizations to present complex datasets via usable representations [98]. From interactive charts and graphs to VR-enhanced visualizations, the emphasis is on empowering users to explore and understand information intuitively, enhancing the overall user experience [99,100].
To achieve intuitive big data visualization, sophisticated tools for analysis and interpretation are needed. Data mining techniques augment web-based information systems, facilitating the discovery of patterns, trends, and anomalies within large datasets [101]. Machine learning algorithms enable systems to autonomously uncover hidden knowledge [102], providing users with recommendations and insights. Human-centered web-based systems are thus capable of employing distributed computing frameworks and cloud technologies to process datasets in real time. The synergy of big data analysis and visualization empowers users to promptly gain meaningful insights, fostering informed decision-making [103].
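As a concrete example of such pattern and anomaly discovery, the sketch below applies scikit-learn's IsolationForest to synthetic usage metrics; the data and contamination rate are illustrative assumptions.

```python
# Minimal sketch: unsupervised anomaly discovery in a dataset of usage metrics.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=100, scale=10, size=(500, 2))   # typical sessions
outliers = np.array([[300.0, 5.0], [5.0, 280.0]])       # anomalous sessions
X = np.vstack([normal, outliers])

detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
labels = detector.predict(X)  # -1 marks anomalies, 1 marks inliers
print(f"flagged {np.sum(labels == -1)} anomalous records out of {len(X)}")
```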
From a design perspective today, information systems’ interfaces are crafted with a deep understanding of user needs, preferences, and cognitive processes [104,105]. Personalization algorithms, informed by user interactions and feedback, ensure that the information presented is not only relevant but also delivered in a format that resonates with the user’s mental model. At the same time, continuous evaluation and iterative design are integral components of human-centered web-based information systems [106,107]. Analytics tools track user interactions, enabling designers to refine interfaces and functionalities based on real-world usage patterns [108,109,110]. This iterative process ensures that systems remain adaptive, responsive, and aligned with evolving user expectations [111].

3. Semantic Knowledge to Enhance User Interaction with Information, User Participation in Information Processing, and User Experience

Semantic knowledge representation provides meaning to the data processed by an information system and can thus support a more intelligent and intuitive interaction between users and information [112,113]. In this context, semantic technologies are used to represent knowledge in a machine-understandable format. This format can make various information systems semantically interoperable [114]. Ontologies, linked data, and semantic graphs provide a rich framework for expressing relationships between concepts [115], allowing systems to infer and connect pieces of data, creating a web of contextual relevance. Semantic knowledge representation lays the groundwork for a more intuitive and context-aware service provision [116]. NLP algorithms, powered by semantic models, enable systems to comprehend user queries in a more human-like manner [117]. Conversational interfaces, driven by semantic understanding, facilitate seamless interactions, allowing users to communicate with systems more naturally and dynamically [118]. A key advancement is the empowerment of users in the information processing chain. Collaborative knowledge creation and annotation, supported by semantic frameworks, enable users to contribute to the refinement and enrichment of data. This participatory approach not only enhances the accuracy of information but also fosters a sense of ownership and engagement among users in the information ecosystem [119,120].
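The inference capability mentioned above can be illustrated with a minimal sketch: from asserted subclass facts, the system derives class memberships that were never stated explicitly. The vocabulary is an illustrative assumption, and real systems would use an ontology reasoner rather than this hand-rolled traversal.

```python
# Minimal sketch: inferring unstated facts from a class hierarchy.
SUBCLASS_OF = {
    "OilPainting": "Painting",
    "Painting": "Artwork",
    "Sculpture": "Artwork",
}

def is_a(cls, ancestor):
    """Infer class membership by walking the subclass chain."""
    while cls is not None:
        if cls == ancestor:
            return True
        cls = SUBCLASS_OF.get(cls)
    return False

# 'OilPainting is an Artwork' is inferred, not stored.
print(is_a("OilPainting", "Artwork"))  # True
print(is_a("Sculpture", "Painting"))   # False
```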
Beyond representation, the presentation of information is an important part of user experience. Semantic technologies should influence how information is visually and contextually communicated to users [121,122], ensuring that users receive information in a format that aligns with their cognitive processes and preferences. In the same context, personalization algorithms, leveraging semantic understanding, deliver content that is not only relevant but also anticipates user needs [123,124]. The seamless integration of diverse datasets, facilitated by semantic frameworks, has the potential to provide a more coherent and holistic user experience, reducing information overload and enhancing overall satisfaction. The iterative nature of semantic knowledge representation ensures continuous improvement through feedback loops, driven by user interactions and system analytics, to enable adaptive learning and refinement.

4. X-Reality Applications (AR, VR, MR) for Immersive Human-Centered Experiences

4.1. X-Reality Applications in Cultural Heritage

The utilization of virtual reality (VR) in cultural heritage (CH) is not a novel concept. Initial approaches, such as CAVE-based VR, integrated immersive presentations and haptic-based manipulations of heritage objects [125,126]. A synergy of 3D reconstruction technologies with VR then emerged, creating realistic digital replicas of CH objects [127,128]. In earlier approaches, when digitization was constrained by the immaturity of the technology, scenes from archaeological sites were manually modeled in 3D [129,130]. While this resulted in lower-quality models, it allowed researchers to digitally restore monuments by complementing structural remains with digitally manufactured structures [131,132]. Advancements extended to simulating weather and daily life in ancient CH sites through graphics-based rendering of nature and autonomous virtual humans.
The evolution of VR devices, particularly commercial VR headsets and controllers [133,134], simplified the implementation of VR-based experiences. Concurrently, the advent of 360° photography and video enabled a different VR approach with inexpensive headsets, augmenting experiences with information points and interactive spots [135,136,137]. Studies have addressed resource-demanding tasks such as streaming 360° videos to these headsets [138,139]. From a sustainability perspective, VR has been proposed as a means of diverting visits from endangered CH sites to digital media [140].
In the domain of augmented reality (AR) and cultural heritage, ongoing research has shown its potential to enhance learning by providing a more comprehensive educational experience [141]. AR applications have been explored in school subjects like chemistry and cultural heritage sites [142,143]. Stakeholder studies indicate perceived value dimensions of AR in cultural heritage tourism, encompassing economic, experiential, social, epistemic, historical, cultural, and educational aspects [144].
Mobile AR research began with feature extraction in mobile phones for image acquisition [145]. More advanced mobile devices incorporated virtual humans [146], while modern mobile phones empowered various AR forms, such as the augmentation of camera images with information [147,148] and the integration of 3D digitizations with camera input [149]. Some approaches replace physical remains with digitally enhanced versions from the time of creation [150], and physical objects aid visualization and interaction with archaeological artifacts in AR [151,152].
The fusion of augmented and virtual reality has given rise to “AR Portals”. Mobile devices, supporting larger AR scenes, allow users to spawn portals to alternate worlds [152]. “The Historical Figures AR” application exemplifies this, enabling users to walk through portals to historically themed sites [153]. Other approaches augment physical places with digital information, supporting alternative interactions through the manipulation of physical objects [154].

4.2. X-Reality Applications in Vocational Training and Education

Vocational training and education are challenging research topics because the subjects to be taught integrate several aspects of human perception and are closely bound to the senses and to skillful interaction with tools and materials. This is highlighted through mapping sequences and networks of physical and cognitive activities and working materials in design and workmanship, involving stages of perception, problem understanding, thinking, acting, planning, executing plans, and reflecting upon collected experiences [155]. In process training, part of thinking and planning is implemented by the mind, using mental simulation that produces mental imagery [156]. This modeling approach is also found in cognitive robotics (e.g., [157,158]). In [159], crafting processes are modeled as having schemas or plans, and their execution is modeled as individual events. Studies on the negotiation between the maker and the material have provided interesting data regarding how makers think between things, people, space, and time and develop their practices accordingly [160].
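The schema/event distinction described above can be sketched as a simple data structure: a crafting process is represented as a reusable plan of steps, while each execution is recorded as timestamped events. The field names are illustrative assumptions, not the representation used in [159].

```python
# Minimal sketch: a crafting process as a plan (schema) plus execution events.
from dataclasses import dataclass, field

@dataclass
class Step:
    action: str           # e.g. "shape", "join"
    tool: str             # e.g. "gouge"
    material: str         # e.g. "oak blank"

@dataclass
class ProcessSchema:
    name: str
    steps: list[Step] = field(default_factory=list)

@dataclass
class ExecutionEvent:
    step: Step
    started_at: float     # seconds since the start of the session
    duration_s: float
    outcome: str          # e.g. "ok", "rework needed"

chair_leg = ProcessSchema("turn chair leg", [
    Step("mount", "lathe", "oak blank"),
    Step("shape", "gouge", "oak blank"),
])
log = [ExecutionEvent(chair_leg.steps[0], 0.0, 42.5, "ok")]
print(log[0].step.action, "->", log[0].outcome)
```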
Given this nature, X-reality (XR) applications have emerged in vocational education and training as transformative tools, redefining the way individuals acquire and apply skills, since these technologies can mimic, enhance, or alter the physical environment, seamlessly integrating the physical and digital realms [161,162]. At the same time, these technologies can engage with the complex cognitive aspects described above by simulating reality and integrating cognitive cues into process training.
VR in training provides the capability of immersing learners in simulated environments that are digital twins of real-world environments, offering training scenarios in controlled, risk-free conditions [163,164]. VR-based vocational training programs have been employed in domains ranging from pilot training to surgical procedures. The ability to practice and refine skills in VR enhances muscle memory and improves confidence. Examples of such training environments include woodworking and blacksmithing simulators [165,166,167].
AR in vocational education takes the form of overlaying digital information onto the physical environment, offering learners real-time, context-sensitive aid. From hands-on equipment maintenance simulations to interactive instructional overlays, AR facilitates learning by doing, enriching the educational experience. By extending reality with informative digital overlays, AR provides a bridge between theoretical knowledge and practical application, unifying the physical and digital learning domains. For example, a mobile traditional craft presentation system using AR technology has been proposed to superimpose 3D models of traditional crafts on the real room space captured by the camera of a mobile device [168,169,170].
MR blends virtual and physical elements, allowing learners to interact with digital and real-world objects simultaneously [171]. This is beneficial in vocational education, where practical, hands-on experience is important [172]. MR can be employed to integrate virtual equipment into physical training spaces, providing a hybrid learning experience [173]. The learner interacts with digital elements as an extension of their physical surroundings, bridging the gap between the two realities.
The added value of XR applications for vocational education lies in their ability to create immersive, human-centered learning experiences. By simulating authentic workplace scenarios, XR technologies engage learners on a deeper level, promoting active participation and knowledge retention [174]. XR applications also introduce adaptive learning environments, tailoring experiences to individual learner needs. Machine learning algorithms analyze user interactions and performance, allowing the system to dynamically adjust the difficulty and content of simulations [175]. This personalized approach ensures that learners progress at their own pace, addressing diverse skill levels and learning styles while seamlessly blending physical and digital realities.
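A minimal sketch of such dynamic difficulty adjustment is given below: the simulation difficulty is updated from a running success rate. The thresholds and update rule are illustrative assumptions rather than a specific published algorithm.

```python
# Minimal sketch: adaptive difficulty in a training simulation.
class AdaptiveTrainer:
    def __init__(self, difficulty=1, window=5):
        self.difficulty = difficulty
        self.window = window
        self.results = []  # 1 = task succeeded, 0 = failed

    def record(self, success: bool):
        self.results.append(1 if success else 0)
        recent = self.results[-self.window:]
        if len(recent) < self.window:
            return  # not enough evidence yet
        rate = sum(recent) / len(recent)
        if rate > 0.8:          # learner is coasting: raise difficulty
            self.difficulty += 1
        elif rate < 0.4:        # learner is struggling: ease off
            self.difficulty = max(1, self.difficulty - 1)

trainer = AdaptiveTrainer()
for outcome in [True, True, True, True, True]:
    trainer.record(outcome)
print("difficulty after a strong run:", trainer.difficulty)  # 2
```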
Assessment in vocational education extends beyond traditional exams to immersive, performance-based evaluations. XR applications, by extending reality, enable instructors to assess not just theoretical knowledge but also the application of skills in realistic scenarios. Real-time feedback mechanisms enhance the learning loop, providing constructive insights to learners and facilitating continuous improvement within the extended reality of vocational training [32].

5. Human Motion and 3D Digitization for Enhanced Interactive Digital Experiences

Human motion capture and 3D digitization are reshaping how users interact with and perceive digital content in digital environments. More specifically, advancements in human motion capture technologies have altered the way digital systems interpret and respond to user movements [35,36]. High-fidelity sensors, wearable devices (e.g., [176,177]), computer vision, and AI enable the precise tracking of body movements, transforming them into explicit or implicit input (e.g., [178,179,180,181,182]). This has wide applicability in fields such as gaming, VR, and AR, where users can navigate, control, and manipulate digital environments through natural movements.
At the same time, sophisticated 3D digitization techniques make possible the seamless transition between real and digital world objects and spaces. From 3D scanning technologies that capture intricate details of physical objects [183,184,185] to depth-sensing cameras that create digital representations of physical spaces, the process of digitization extends beyond traditional boundaries [186,187,188]. This capability lays the foundation for more immersive and realistic digital experiences. In VR environments, for example, users can not only see but also physically interact with digital objects by leveraging natural hand gestures, body movements, and haptic interfaces [189,190,191]. This enhanced level of interaction fosters a deeper sense of presence and engagement, making digital experiences more lifelike and compelling [192,193].
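As an illustration of depth-based digitization, the sketch below back-projects a depth image into a 3D point cloud using the pinhole camera model; the camera intrinsics are illustrative assumptions for the sensor.

```python
# Minimal sketch: converting a depth image into a 3D point cloud.
import numpy as np

def depth_to_points(depth, fx=525.0, fy=525.0, cx=None, cy=None):
    """depth: (H, W) array of metric depth values; returns (H*W, 3) XYZ points."""
    h, w = depth.shape
    cx = w / 2 if cx is None else cx
    cy = h / 2 if cy is None else cy
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx   # back-project pixel columns
    y = (v - cy) * z / fy   # back-project pixel rows
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

depth = np.full((480, 640), 1.5)   # a flat wall 1.5 m from the camera
points = depth_to_points(depth)
print(points.shape)                # (307200, 3)
```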
Additionally, human motion and 3D digitization contribute to the emergence of novel interaction metaphors. Gesture-based interfaces, where a wave of the hand or a nod of the head translates into meaningful digital commands, exemplify the shift towards more intuitive interactions. This departure from conventional input methods introduces a new language of interaction, bridging the gap between the physical and digital worlds and giving rise to the concept of virtual embodiment [194,195]. Users can now project themselves into digital avatars or representations that mimic their real-world movements [196]. This not only adds a layer of personalization to digital interactions but also enables a more immersive and empathetic form of virtual presence [197].
The discussed impact may extend to various industries. In healthcare, for instance, surgeons can practice complex procedures in a VR setting that replicates real-world conditions [198,199]. In education, students can engage with historical artifacts through detailed 3D models. In the entertainment industry, these technologies can be used to create interactive storytelling experiences, blurring the lines between the observer and the observed [200,201].

6. Serious Game Design and Development

Serious games are transforming learning into an immersive and interactive experience [202]. Virtual scenarios, historical recreations, and problem-solving challenges become dynamic lessons, allowing students to explore, experiment, and learn through experience. The introduction of adaptability into these games caters to diverse learning styles, making education more accessible and engaging [203]. For occupational training, serious games offer a dynamic platform for skill development and scenario-based learning [204]. Simulations designed for various industries, such as healthcare, aviation, and emergency response, provide trainees with realistic environments in which to test their skills [205,206,207,208,209]. The interactive nature of these games fosters hands-on experience, allowing individuals to practice and refine their abilities in a controlled, risk-free setting.
Serious games extend their influence beyond traditional education and training contexts, addressing broader societal challenges. Games designed for public awareness campaigns, health promotion, and social issues provide a unique avenue for communication [210,211]. These games leverage storytelling, empathy-building narratives, and decision-making scenarios to raise awareness and prompt action on critical societal topics such as environmental conservation, public health, and social justice.
The design and development of serious games often involve collaboration across disciplines: educational psychologists, game designers, subject matter experts, and technologists work together to create holistic learning experiences. This interdisciplinary approach ensures that serious games not only convey information effectively but also align with pedagogical principles, maximizing their educational impact.
The landscape of serious game design has been significantly shaped by technological advancements. VR, AR, and advancements in graphics rendering have elevated the level of immersion and realism in these games [212,213,214]. This not only enhances the overall gaming experience but also contributes to the effectiveness of the learning and training outcomes. The integration of adaptive learning algorithms and analytics further personalizes the experience, tailoring content to individual needs and tracking progress.
These games, often delivered through digital platforms, have the potential to reach a global audience, making education and training more accessible across geographical boundaries. This democratization of learning resources addresses disparities in educational opportunities and ensures that individuals, regardless of their location, can benefit from engaging and purposeful learning experiences.

7. AI Approaches in User Interfaces, Information Processing, and Information Visualization

In the rapidly evolving landscape of technology, the integration of AI is manifested through intelligent systems that, in the future, will be able to understand user intent and enhance the extraction and presentation of valuable insights from vast datasets. AI-driven user interfaces have redefined the way individuals interact with digital systems. Language model-based AI and NLP technologies enable interfaces to comprehend and respond to user inputs in a more human-like fashion [215]. Chatbots and virtual assistants leverage these advancements, providing users with intuitive and conversational interactions. Intelligent user interfaces in the AI era promise personalized user experiences based on historical interactions, preferences, and context [216].
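A minimal sketch of such a conversational interface is given below: per-user context is kept so replies can be personalized. The generate_reply function is a hypothetical stand-in for whichever language-model API is used, not a real library call.

```python
# Minimal sketch: a context-keeping conversational assistant.
def generate_reply(history: list[dict]) -> str:
    # Placeholder: a deployed system would call a language model here.
    last = history[-1]["content"]
    return f"(model reply conditioned on {len(history)} turns; last input: {last!r})"

class Assistant:
    def __init__(self, user_preferences: dict):
        # Seed the context with user preferences so replies can be personalized.
        self.history = [{"role": "system",
                         "content": f"User preferences: {user_preferences}"}]

    def ask(self, text: str) -> str:
        self.history.append({"role": "user", "content": text})
        reply = generate_reply(self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply

bot = Assistant({"language": "en", "verbosity": "brief"})
print(bot.ask("What can you do?"))
```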

This entry is adapted from the peer-reviewed paper https://doi.org/10.3390/electronics13020269.
