Augmented Reality for Vehicle-Driver Communication

Capabilities for automated driving system (ADS)-equipped vehicles have been expanding over the past decade. Research has explored integrating augmented reality (AR) interfaces in ADS-equipped vehicles to improve drivers’ situational awareness, performance, and trust. The researchers reviewed AR visualizations for in-vehicle vehicle-driver communication from 2012 to 2022. The researchers first identified meta-data and methodological trends before aggregating findings from distinct AR interfaces and corresponding subjective and objective measures. Prominent subjective measures included acceptance, trust, and user experience; objective measures comprised various driving behavior or eye-tracking metrics. Research most often evaluated simulated AR interfaces, presented through windshields, that communicated object detection or intended maneuvers in level 2 ADSs. For object detection, key visualizations included bounding shapes, highlighting, or symbols. For intended routes, mixed results were found for world-fixed versus screen-fixed arrows. Regardless of the AR design, communicating the ADS’s actions or environmental elements was beneficial to drivers, though presenting clear, relevant information was more favorable. Gaps in the literature that have yet to be addressed include longitudinal effects, impaired visibility, contextual user needs, system reliability, and, most notably, inclusive design.

Keywords: autonomous vehicles; automated driving systems; augmented reality

1. Introduction

The development of automated driving system (ADS) capabilities and technologies equipped in vehicles has been expanding over the past decade, with driving control shifting towards the ADS. Recently, SAE International [1] updated its taxonomy for the six automation levels, ranging from level 0 (no driving automation) to level 5 (full driving automation). At levels 2 and 3, the automation features assist in lateral and longitudinal control (i.e., lane keeping and adaptive cruise control, respectively). When something goes wrong, such as road conditions exceeding the ADS’s capabilities, vehicle operation falls back to the human driver. In many cases, the ADS will issue a take-over request (TOR), whereby the ADS alerts the driver to fully resume manual control within a short time span. When level 3 and above ADS features are engaged, there is reduced need for constant driver monitoring of the road environment, until the ADS is capable of full driving automation (level 5). Given the reduced need for driver oversight, the driver may engage in non-driving related tasks, which can lead to decreased driving performance [2][3][4] and increased crash risk [5]. When a driver shifts visual attention away from the road environment and toward a non-driving related task, they lose situation awareness of critical road cues needed to update their dynamic mental model of the driving context [6]. Reduced situation awareness during TORs places the driver in potentially hazardous situations, whereby delayed or inappropriate reactions while discerning the driving scene can lead to dangerous outcomes [7][8][9]. However, at higher levels where TORs are less prevalent (i.e., levels 4 and 5), driving performance is a less crucial factor; rather, drivers’ trust and acceptance of the ADS-equipped vehicle are more important factors for the adoption of vehicles equipped with high or full driving automation features [10].
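To make the take-over logic concrete, below is a minimal, hypothetical sketch of how an ADS might decide to issue a TOR. The level values follow the SAE J3016 taxonomy [1], but the `DrivingContext` fields, the lead time, and the decision rule are illustrative assumptions, not taken from any reviewed system.

```python
# Hypothetical sketch: deciding when an ADS should issue a take-over request.
from dataclasses import dataclass
from enum import IntEnum


class SAELevel(IntEnum):
    """SAE J3016 driving automation levels."""
    NO_AUTOMATION = 0
    DRIVER_ASSISTANCE = 1
    PARTIAL_AUTOMATION = 2
    CONDITIONAL_AUTOMATION = 3
    HIGH_AUTOMATION = 4
    FULL_AUTOMATION = 5


@dataclass
class DrivingContext:
    level: SAELevel
    within_odd: bool            # are conditions inside the operational design domain?
    seconds_to_boundary: float  # estimated time before the ADS can no longer operate


def should_issue_tor(ctx: DrivingContext, lead_time_s: float = 7.0) -> bool:
    """Issue a TOR if the ADS holds control (level >= 3) but is about to
    leave its operational design domain within the alert lead time."""
    if ctx.level < SAELevel.CONDITIONAL_AUTOMATION:
        return False  # at levels 0-2 the driver is already monitoring
    return (not ctx.within_odd) or ctx.seconds_to_boundary <= lead_time_s


# Level 3 automation, 5 s before leaving the ODD -> alert the driver:
print(should_issue_tor(DrivingContext(SAELevel.CONDITIONAL_AUTOMATION, True, 5.0)))  # True
```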
One facet of ADSs that can mitigate reduced situation awareness and improve perceptions of ADS-equipped vehicles is transparent vehicle-human communication. Past research suggests that holding appropriate expectations of the system’s capabilities, as well as understanding how the system performs and predicts future behavior, can improve trust [11][12]. Ref. [13] found that drivers desired vehicle interfaces that communicate information relevant to the ADS’s situation awareness of the road environment (what the system perceives) and behavioral awareness (what actions the system will take). Similar desires have been found for ADSs that clearly convey information about oncoming critical situations, the ADS’s decision making, and its actions [14][15][16][17][18].
Previous research has evaluated various strategies for communicating the ADS’s detection of potential hazards or its intended actions. More specifically, communication strategies have included visual [19][20], audible [21][22][23][24], olfactory [25], haptic [26][27][28], and multimodal [7][29] interfaces. Additionally, researchers have evaluated the communication strategies of embodied agents, such as a NAO robot with speech features [30][31] or the directional eye-gaze of three social robots that alert drivers to potential critical cues in the driving environment [32][33]. However, many of these communication avenues are ambiguous or allocate visual attention outside of the road environment, which can lead to potentially fatal outcomes. Instead, augmented reality (AR) can be utilized to communicate road elements and ADS actions without drawing visual attention away from the driving environment.
AR is a component of mixed reality, in which the virtual and real worlds are merged [34]. More specifically, virtual images are superimposed on the real world, enriching an individual’s sensory perception of reality [35]. Currently, AR applications are widely utilized in many areas of the automotive industry, including vehicle assembly, design, maintenance, and manufacturing [36]. Additionally, in-car AR systems are utilized to communicate road cues to the driver through head-up displays (HUDs). HUDs convey visual information (e.g., road cues including pedestrians, vehicles, and signs) in the driver’s field of view. Currently, two main modalities are used to present AR visualizations: optical see-through HUDs (e.g., [37][38]), which are transparent devices that occupy a small area of the driving field of view, and windshield displays, in which AR visualizations can occur anywhere in the driver’s forward field of view (e.g., [39][40]). Typically, information is communicated to the driver by highlighting certain road cues already present in the environment or by displaying additional information onto the environment [41].
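The geometry behind placing such world-fixed cues can be sketched with a simple pinhole projection: a detected object’s 3D position in vehicle coordinates is mapped to 2D display coordinates and clipped to the display extent (a narrower display, as with an optical see-through HUD, clips more cues). The focal length and display dimensions below are hypothetical; real systems also calibrate for driver head position and HUD optics.

```python
# Minimal sketch of projecting a road object onto a HUD plane (pinhole model).
import numpy as np


def project_to_hud(point_vehicle, focal_px=800.0, display_wh=(1920, 640)):
    """Map a 3D point (x right, y up, z forward, in meters) to HUD pixel
    coordinates; return None if it falls outside the display."""
    x, y, z = point_vehicle
    if z <= 0:  # behind the projection plane
        return None
    u = focal_px * x / z + display_wh[0] / 2
    v = display_wh[1] / 2 - focal_px * y / z
    if 0 <= u < display_wh[0] and 0 <= v < display_wh[1]:
        return (u, v)
    return None  # outside the display's field of view


# A pedestrian 20 m ahead and 2 m to the left lands near the display center:
print(project_to_hud(np.array([-2.0, 0.0, 20.0])))  # (880.0, 320.0)
```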
Through AR visualizations, the ADS can communicate which road elements it detects and convey its future actions. Accordingly, communicating transparent driving-related information can improve individuals’ situation awareness of the driving environment (i.e., allocation of visual attention and driving performance) [37][38][42]. Furthermore, communicating to drivers what the ADS “perceives” can improve overall trust and acceptance [18][43] while dynamically calibrating appropriate expectations of the ADS, which in turn can foster better adoption of ADSs. Currently, there are various in-vehicle AR designs that communicate a broad range of information; however, these diverse designs are generally evaluated independently of other visualizations, making it difficult for researchers to integrate or compare results. Therefore, current AR designs should be systematically reviewed to identify which visualizations are more prominent in AV applications for information communication and to understand potential gaps in the literature for future directions.

2. High-Level Descriptives

The researchers found that more articles have been published within the past five years, coinciding with the increased growth of technology in this area. Within these last five years, more conference articles were published, which could be explained by their generally shorter length and the shorter time required for peer review and revisions in comparison to journal articles. Articles originated mainly from Germany and the United States, which is in line with these countries being two of the leading supporters of ADS-equipped vehicles [44].
Most of the research occurred in safe, controlled, laboratory settings using simulations of some kind. Although similar patterns of driving behavior are seen between driving simulators and naturalistic settings (e.g., [45][46]), ref. [47] did identify a different pattern of results when implementing the AR design in real-world footage as compared to simulated footage. However, differing patterns of results between the two settings were identified more often when using optical see-through HUDs rather than windshield displays. This distinction could be due to the optical display communicating all information in an isolated area, possibly increasing visual clutter as the road environment becomes more complex. Nonetheless, more naturalistic, or at least controlled, outdoor research is required to evaluate the real benefits of AR communication, as only three articles were conducted in more natural settings and only eight simulator or online studies presented real-world footage.
Regarding participant information, most articles reported gender distribution and the mean age of participants. Approximately half the articles reported the source of recruitment, yet only one article reported participants’ ethnicity. Collectively, participants tended to be young, healthy males, which is not generalizable to the whole population. Only one article reported participants who did not self-identify as male or female. Two articles recruited individuals from a vulnerable population (i.e., elderly individuals), who showed different driving patterns from younger individuals when interacting with AR displays. Additionally, although many AR visualizations use color coding schemes, no article mentioned accessibility issues for individuals with color blindness, though one article did specifically exclude any individual with self-reported color blindness. Therefore, greater transparency is recommended when reporting participant demographics, as is the recruitment of diverse individuals, such as individuals who identify as non-binary, neurodivergent individuals, or individuals from vulnerable populations, so that future designs are accessible across a more representative, inclusive sample of the population.

3. AR Designs

Overall, there is a clear trend that communicating environmental elements and the ADS’s actions is beneficial to drivers. Typically, the more favorable designs were those that presented clear information relevant to the given context. In contrast, ambiguous or excessive information led to worse driving or situation awareness performance (see [48][49][50]). However, distinct design differences may play a weaker role than the sole feature of presenting crucial information. Furthermore, the articles consistently found more favorable outcomes for AR displays than for tablet or head-down displays. Research has yet to compare optical see-through HUDs and windshield displays directly. However, there is some suggestion that optical HUDs may have a threshold whereby too much visual clutter negates any decision-making or situation awareness benefits [48]. There are also apparent differences across automation levels in why AR displays are needed. For instance, across all level features, presenting information may improve trust and acceptance of the ADS-equipped vehicle and dynamically calibrate appropriate expectations about the ADS’s capabilities; however, for features operated at levels 2 and 3, there is an additional focus on enhancing drivers’ situation awareness to improve takeover response times and address safety concerns. At higher automation levels, situation awareness is less crucial due to the reduced need, or lack thereof, to resume manual control of the vehicle, and displays can focus more on novel interactions and passenger experiences.
For object detection, bounding shapes and highlighting target cues tended to be more prominent across the research. Bounding shapes tended to be more limited in scope than highlighting: bounding visualizations enclosed pedestrians and vehicles, whereas highlighting encompassed pedestrians and their predicted paths, vehicles, road signs, and the ADS’s predicted path. However, ref. [39] found that participants preferred bounding visualizations over highlighting for object detection. Across the board, researchers found that displaying bounding shapes was better for communicating the ADS’s detection of pedestrians than of vehicles. Vehicles were considered highly salient in the environment, and thus much easier to see regardless of the AR, whereas pedestrians and other targets (e.g., signs), as well as vehicles outside the central field of road view, were less salient and may be better focal points for displays.
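As an illustration of the two styles, the hedged sketch below draws a bounding shape (outline) and a highlight (translucent fill) for one detected pedestrian using OpenCV; the frame, detection coordinates, and colors are made-up stand-ins, not taken from any reviewed study.

```python
# Illustrative sketch: bounding shape vs. highlighting for a detected object.
import cv2
import numpy as np

frame = np.zeros((480, 854, 3), dtype=np.uint8)               # stand-in camera frame
detection = {"label": "pedestrian", "box": (400, 180, 460, 330)}  # x1, y1, x2, y2

# Bounding shape: outline only, leaving the object itself unobscured.
x1, y1, x2, y2 = detection["box"]
cv2.rectangle(frame, (x1, y1), (x2, y2), color=(0, 255, 255), thickness=2)
cv2.putText(frame, detection["label"], (x1, y1 - 6),
            cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 255), 1)

# Highlighting: blend a translucent fill over the object region instead.
overlay = frame.copy()
cv2.rectangle(overlay, (x1, y1), (x2, y2), color=(0, 255, 255), thickness=-1)
frame = cv2.addWeighted(overlay, 0.35, frame, 0.65, 0)
```

The design trade-off mirrors the review’s finding: an outline adds minimal clutter but conveys less, while a fill (or path highlight) can encode more information at the cost of occluding the scene.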
Accordingly, to improve driving behavior and the allocation of crucial visual attention, AR displays should communicate relevant information rather than being overly general. Furthermore, some researchers raised concerns that continuous presentation of information may become detrimental as drivers grow familiar with it. Therefore, presenting information only as relevant cues enter the driver’s visual field may mitigate these potential detrimental effects. One article suggested an AR system capable of dynamically alerting drivers to road hazards only when the ADS detects that the driver is not already aware of them [51] (a minimal sketch of this idea follows below). Alternatively, displays could present only information that requires a clear action by drivers, such as intended ADS maneuvers resulting from an upcoming construction site or a system failure.
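The following sketch approximates that gaze-contingent idea: suppress the AR hazard cue when recent gaze samples show the driver has already fixated near the hazard. The angular threshold and dwell window are illustrative assumptions, not values from [51].

```python
# Hypothetical sketch: gaze-contingent suppression of AR hazard cues.
import math
import time


def angle_between_deg(gaze_dir, hazard_dir):
    """Angle in degrees between two 2D direction vectors (yaw plane)."""
    dot = gaze_dir[0] * hazard_dir[0] + gaze_dir[1] * hazard_dir[1]
    norm = math.hypot(*gaze_dir) * math.hypot(*hazard_dir)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))


def driver_aware(gaze_samples, hazard_dir, fov_deg=5.0, window_s=2.0):
    """True if any gaze sample in the last window_s seconds fell within
    fov_deg of the hazard direction."""
    now = time.time()
    return any(
        now - t <= window_s and angle_between_deg(g, hazard_dir) <= fov_deg
        for t, g in gaze_samples
    )


def maybe_alert(gaze_samples, hazard_dir):
    if not driver_aware(gaze_samples, hazard_dir):
        print("render AR hazard cue")  # driver has not fixated the hazard yet
    else:
        print("suppress cue")          # avoid redundant visual clutter


samples = [(time.time(), (0.0, 1.0))]        # driver looking straight ahead
maybe_alert(samples, hazard_dir=(0.7, 0.7))  # hazard ~45 deg off-axis -> cue renders
```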
The articles that evaluated multiple AR designs against a control group generally did not find significant differences in visual attention or driving performance between the AR groups. This lack of differences across AR design complexity indicates that more complex displays do not lead to greater situation awareness; therefore, it is not necessary to pursue more eye-catching forms of AR displays. Rather, the advantages of AR communication could be due simply to presenting relevant road information, which supplements drivers’ decision-making or expectations of the ADS’s capabilities.
AR cues can provide transparent communication regarding the reliability and confidence of the ADS, calibrating drivers’ expectations and trust in the ADS’s capabilities. Unfortunately, only two studies specifically included ADS reliability, though [52] communicated reliability as part of an aggregated display; these displays presented reliability as a percentage (e.g., 85% reliable). Ref. [53] visualized reliability through blue transparent lane markings that communicated the ADS’s reliability for navigating upcoming maneuvers. Although not displaying ADS reliability, ref. [54] focused on participants’ performance when presented with inappropriate ADS maneuvers due to system error (i.e., misperceiving stop signs or objects). Additionally, ref. [55] indirectly evaluated reliability through icons that communicated pedestrian intention: reliability was conveyed through the “intention unclear” icon, shown when the ADS could not confidently perceive a pedestrian’s intention. Both lane markings and icons have initial support for communicating reliability for different actions, with individuals identifying maneuver errors more quickly when presented with world-fixed arrows. Further research is required to garner greater support.
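As a hedged illustration of the two reliability encodings mentioned above, the sketch below formats a numeric percentage (as in [52]) and maps reliability to the opacity of a lane-marking overlay (loosely after [53]); the linear alpha mapping and its bounds are illustrative assumptions.

```python
# Illustrative sketch: two ways to encode ADS reliability for the driver.
def reliability_label(reliability: float) -> str:
    """Format system reliability as the percentage text shown to the driver."""
    return f"{reliability * 100:.0f}% reliable"


def lane_marking_alpha(reliability: float,
                       alpha_min: float = 0.15,
                       alpha_max: float = 0.9) -> float:
    """Linearly map reliability in [0, 1] to overlay opacity, so a confident
    ADS draws a solid lane cue and an uncertain one a faint cue."""
    r = max(0.0, min(1.0, reliability))
    return alpha_min + r * (alpha_max - alpha_min)


print(reliability_label(0.85))   # "85% reliable"
print(lane_marking_alpha(0.85))  # ~0.79 overlay opacity
```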

References

  1. SAE International. Taxonomy and Definitions for Terms Related to Driving Automation Systems for on-Road Motor Vehicles (J3016_202104); SAE International: Warrendale, PA, USA, 2021.
  2. Drews, F.A.; Yazdani, H.; Godfrey, C.N.; Cooper, J.M.; Strayer, D. Text Messaging During Simulated Driving. Hum. Factors 2009, 51, 762–770.
  3. Strayer, D.L.; Cooper, J.M.; Goethe, R.M.; Mccarty, M.M.; Getty, D.J.; Biondi, F. Assessing the visual and cognitive demands of in-vehicle information systems. Cogn. Res. Princ. Implic. 2019, 4, 18.
  4. Turrill, J.; Coleman, J.R.; Hopman, R.; Cooper, J.M.; Strayer, D.L. The Residual Costs of Multitasking: Causing Trouble down the Road. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2016, 60, 1967–1970.
  5. NHTSA. Visual-Manual NHTSA Driver Distraction Guidelines for In-Vehicle Electronic Devices. Federal Register. 2012. Available online: https://www.federalregister.gov/documents/2012/02/24/2012-4017/visual-manual-nhtsa-driver-distraction-guidelines-for-in-vehicle-electronic-devices (accessed on 15 April 2022).
  6. Strayer, D.L.; Fisher, D.L. SPIDER: A Framework for Understanding Driver Distraction. Hum. Factors 2016, 58, 5–12.
  7. Petermeijer, S.; Bazilinskyy, P.; Bengler, K.; de Winter, J. Take-over again: Investigating multimodal and directional TORs to get the driver back into the loop. Appl. Ergon. 2017, 62, 204–215.
  8. Vlakveld, W.; van Nes, N.; de Bruin, J.; Vissers, L.; van der Kroft, M. Situation awareness increases when drivers have more time to take over the wheel in a Level 3 automated car: A simulator study. Transp. Res. Part F Traffic Psychol. Behav. 2018, 58, 917–929.
  9. Zeeb, K.; Buchner, A.; Schrauf, M. Is take-over time all that matters? The impact of visual-cognitive load on driver take-over quality after conditionally automated driving. Accid. Anal. Prev. 2016, 92, 230–239.
  10. Nordhoff, S.; de Winter, J.; Kyriakidis, M.; van Arem, B.; Happee, R. Acceptance of Driverless Vehicles: Results from a Large Cross-National Questionnaire Study. J. Adv. Transp. 2018, 2018, e5382192.
  11. Choi, J.K.; Ji, Y.G. Investigating the Importance of Trust on Adopting an Autonomous Vehicle. Int. J. Hum. Comput. Interact. 2015, 31, 692–702.
  12. Hoff, K.A.; Bashir, M. Trust in Automation: Integrating Empirical Evidence on Factors That Influence Trust. Hum. Factors 2014, 57, 407–434.
  13. Diels, C.; Thompson, S. Information Expectations in Highly and Fully Automated Vehicles. In International Conference on Applied Human Factors and Ergonomics; Springer: Cham, Switzerland, 2018; pp. 742–748.
  14. Beggiato, M.; Hartwich, F.; Schleinitz, K.; Krems, J.; Othersen, I.; Petermann-Stock, I. What would drivers like to know during automated driving? Information needs at different levels of automation. In Proceedings of the 7th Conference on Driver Assistance, Munich, Germany, 25 January 2015; p. 6.
  15. Wintersberger, P.; Nicklas, H.; Martlbauer, T.; Hammer, S.; Riener, A. Explainable Automation: Personalized and Adaptive UIs to Foster Trust and Understanding of Driving Automation Systems. In Proceedings of the 12th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Washington, DC, USA, 21–22 September 2020.
  16. Politis, I.; Langdon, P.; Adebayo, D.; Bradley, M.; Clarkson, P.J.; Skrypchuk, L.; Mouzakitis, A.; Eriksson, A.; Brown, J.W.H.; Revell, K.; et al. An Evaluation of Inclusive Dialogue-Based Interfaces for the Takeover of Control in Autonomous Cars. In Proceedings of the 23rd International Conference on Intelligent User Interfaces, Tokyo, Japan, 7–11 March 2018; pp. 601–606.
  17. Large, D.R.; Burnett, G.; Clark, L. Lessons from Oz: Design guidelines for automotive conversational user interfaces. In Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications: Adjunct Proceedings, Utrecht, The Netherlands, 22–25 September 2019; pp. 335–340.
  18. Du, N.; Zhou, F.; Tilbury, D.; Robert, L.P.; Yang, X.J. Designing Alert Systems in Takeover Transitions: The Effects of Display Information and Modality. In Proceedings of the 13th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Leeds, UK, 9–14 September 2021.
  19. Yang, Y.; Götze, M.; Laqua, A.; Dominioni, G.C.; Kawabe, K.; Bengler, K. A method to improve driver’s situation awareness in automated driving. In Proceedings of the Human Factors and Ergonomics Society Europe Chapter 2017 Annual Conference, Rome, Italy, 28–30 September 2017; p. 20.
  20. Wiegand, G.; Schmidmaier, M.; Weber, T.; Liu, Y.; Hussmann, H. I Drive—You Trust: Explaining Driving Behavior Of Autonomous Cars. In Proceedings of the Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, New York, NY, USA, 4–9 May 2019; pp. 1–6.
  21. Koo, J.; Kwac, J.; Ju, W.; Steinert, M.; Leifer, L.; Nass, C. Why did my car just do that? Explaining semi-autonomous driving actions to improve driver understanding, trust, and performance. Int. J. Interact. Des. Manuf. 2015, 9, 269–275.
  22. Wong, P.N.Y.; Brumby, D.P.; Babu, H.V.R.; Kobayashi, K. “Watch Out!”: Semi-Autonomous Vehicles Using Assertive Voices to Grab Distracted Drivers’ Attention. In Proceedings of the Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, New York, NY, USA, 4–9 May 2019.
  23. Large, D.R.; Burnett, G.; Anyasodo, B.; Skrypchuk, L. Assessing Cognitive Demand during Natural Language Interactions with a Digital Driving Assistant. In Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Ann Arbor, MI, USA, 24–26 October 2016; pp. 67–74.
  24. Waytz, A.; Heafner, J.; Epley, N. The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle. J. Exp. Soc. Psychol. 2014, 52, 113–117.
  25. Wintersberger, P.; Dmitrenko, D.; Schartmüller, C.; Frison, A.-K.; Maggioni, E.; Obrist, M.; Riener, A. S(C)ENTINEL: Monitoring automated vehicles with olfactory reliability displays. In Proceedings of the 24th International Conference on Intelligent User Interfaces, New York, NY, USA, 16–20 March 2019; pp. 538–546.
  26. Ma, Z.; Liu, Y.; Ye, D.; Zhao, L. Vibrotactile Wristband for Warning and Guiding in Automated Vehicles. In Proceedings of the Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, New York, NY, USA, 4–9 May 2019.
  27. Cohen-Lazry, G.; Katzman, N.; Borowsky, A.; Oron-Gilad, T. Directional tactile alerts for take-over requests in highly-automated driving. Transp. Res. Part F Traffic Psychol. Behav. 2019, 65, 217–226.
  28. Petermeijer, S.; Cieler, S.; de Winter, J. Comparing spatially static and dynamic vibrotactile take-over requests in the driver seat. Accid. Anal. Prev. 2017, 99, 218–227.
  29. Geitner, C.; Biondi, F.; Skrypchuk, L.; Jennings, P.; Birrell, S. The comparison of auditory, tactile, and multimodal warnings for the effective communication of unexpected events during an automated driving scenario. Transp. Res. Part F Traffic Psychol. Behav. 2019, 65, 23–33.
  30. Dong, J.; Lawson, E.; Olsen, J.; Jeon, M. Female Voice Agents in Fully Autonomous Vehicles Are Not Only More Likeable and Comfortable, But Also More Competent. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2020, 64, 1033–1037.
  31. Lee, S.C.; Sanghavi, H.; Ko, S.; Jeon, M. Autonomous driving with an agent: Speech style and embodiment. In Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications: Adjunct Proceedings, Utrecht, The Netherlands, 22–25 September 2019; pp. 209–214.
  32. Karatas, N.; Yoshikawa, S.; Tamura, S.; Otaki, S.; Funayama, R.; Okada, M. NAMIDA: Sociable driving agents to maintain driver’s attention in autonomous driving. In Proceedings of the 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Lisbon, Portugal, 28–31 August 2017; pp. 143–149.
  33. Tamura, S.; Ohshima, N.; Hasegawa, K.; Okada, M. Design and Evaluation of Attention Guidance Through Eye Gazing of “NAMIDA” Driving Agent. J. Robot. Mechatron. 2021, 33, 24–32.
  34. Milgram, P.; Kishino, F. A Taxonomy of Mixed Reality Visual Displays. IEICE Trans. Inf. Syst. 1994, 77, 1321–1329. Available online: https://www.semanticscholar.org/paper/A-Taxonomy-of-Mixed-Reality-Visual-Displays-Milgram-Kishino/f78a31be8874eda176a5244c645289be9f1d4317 (accessed on 2 August 2022).
  35. Daponte, P.; De Vito, L.; Picariello, F.; Riccio, M. State of the art and future developments of the Augmented Reality for measurement applications. Measurement 2014, 57, 53–70.
  36. Boboc, R.G.; Gîrbacia, F.; Butilă, E.V. The Application of Augmented Reality in the Automotive Industry: A Systematic Literature Review. Appl. Sci. 2020, 10, 4259.
  37. Gabbard, J.L.; Smith, M.; Tanous, K.; Kim, H.; Jonas, B. AR DriveSim: An Immersive Driving Simulator for Augmented Reality Head-Up Display Research. Front. Robot. AI 2019, 6, 98.
  38. Kim, H.; Gabbard, J.L.; Anon, A.M.; Misu, T. Driver Behavior and Performance with Augmented Reality Pedestrian Collision Warning: An Outdoor User Study. IEEE Trans. Vis. Comput. Graph. 2018, 24, 1515–1524.
  39. Colley, M.; Krauss, S.; Lanzer, M.; Rukzio, E. How Should Automated Vehicles Communicate Critical Situations?: A Comparative Analysis of Visualization Concepts. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2021, 5, 1–23.
  40. Lindemann, P.; Muller, N.; Rigolll, G. Exploring the Use of Augmented Reality Interfaces for Driver Assistance in Short-Notice Takeovers. In Proceedings of the 2019 IEEE Intelligent Vehicles Symposium (IV), Paris, France, 9–12 June 2019; pp. 804–809.
  41. Eyraud, R.; Zibetti, E.; Baccino, T. Allocation of visual attention while driving with simulated augmented reality. Transp. Res. Part F Traffic Psychol. Behav. 2015, 32, 46–55.
  42. Jing, C.; Shang, C.; Yu, D.; Chen, Y.; Zhi, J. The impact of different AR-HUD virtual warning interfaces on the takeover performance and visual characteristics of autonomous vehicles. Traffic Inj. Prev. 2022, 23, 277–282.
  43. Wintersberger, P.; Frison, A.-K.; Riener, A.; von Sawitzky, T. Fostering User Acceptance and Trust in Fully Automated Vehicles: Evaluating the Potential of Augmented Reality. Presence Virtual Augment. Real. 2018, 27, 46–62.
  44. Doll, S. The top five best-equipped countries to support autonomous vehicles—Who’s leading the self-driving revolution? Electrek, 4 March 2022. Available online: https://electrek.co/2022/03/04/the-top-five-best-equipped-countries-to-support-autonomous-vehicles-whos-leading-the-self-driving-revolution/ (accessed on 3 August 2022).
  45. Klüver, M.; Herrigel, C.; Heinrich, C.; Schöner, H.-P.; Hecht, H. The behavioral validity of dual-task driving performance in fixed and moving base driving simulators. Transp. Res. Part F Traffic Psychol. Behav. 2016, 37, 78–96.
  46. Mullen, N.; Charlton, J.; Devlin, A.; Bedard, M. Simulator validity: Behaviours observed on the simulator and on the road. In Handbook of Driving Simulation for Engineering, Medicine and Psychology; CRC Press: Boca Raton, FL, USA, 2011; pp. 1–18.
  47. Colley, M.; Eder, B.; Rixen, J.O.; Rukzio, E. Effects of Semantic Segmentation Visualization on Trust, Situation Awareness, and Cognitive Load in Highly Automated Vehicles. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; pp. 1–11.
  48. Currano, R.; Park, S.Y.; Moore, D.J.; Lyons, K.; Sirkin, D. Little Road Driving HUD: Heads-Up Display Complexity Influences Drivers’ Perceptions of Automated Vehicles. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; pp. 1–15.
  49. Hwang, Y.; Park, B.-J.; Kim, K.-H. The Effects of Augmented-Reality Head-Up Display System Usage on Drivers’ Risk Perception and Psychological Change. ETRI J. 2016, 38, 757–766.
  50. Schewe, F.; Vollrath, M. Visualizing the autonomous vehicle’s maneuvers—Does an ecological interface help to increase the hedonic quality and safety? Transp. Res. Part F Traffic Psychol. Behav. 2021, 79, 11–22.
  51. Faria, N.D.O.; Merenda, C.; Greatbatch, R.; Tanous, K.; Suga, C.; Akash, K.; Misu, T.; Gabbard, J. The Effect of Augmented Reality Cues on Glance Behavior and Driver-Initiated Takeover on SAE Level 2 Automated-Driving. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2021, 65, 1342–1346.
  52. Lindemann, P.; Lee, T.-Y.; Rigoll, G. Catch My Drift: Elevating Situation Awareness for Highly Automated Driving with an Explanatory Windshield Display User Interface. Multimodal Technol. Interact. 2018, 2, 71.
  53. Ebnali, M.; Fathi, R.; Lamb, R.; Pourfalatoun, S.; Motamedi, S. Using Augmented Holographic UIs to Communicate Automation Reliability in Partially Automated Driving. In Proceedings of the CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020.
  54. Detjen, H.; Salini, M.; Kronenberger, J.; Geisler, S.; Schneegass, S. Towards Transparent Behavior of Automated Vehicles: Design and Evaluation of HUD Concepts to Support System Predictability Through Motion Intent Communication. In Proceedings of the 23rd International Conference on Mobile Human-Computer Interaction, Toulouse & Virtual, France, 27 September 2021; pp. 1–12.
  55. Colley, M.; Bräuner, C.; Lanzer, M.; Walch, M.; Baumann, M.; Rukzio, E. Effect of Visualization of Pedestrian Intention Recognition on Trust and Cognitive Load. In Proceedings of the 12th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Washington, DC, USA, 21–22 September 2020.