Cite
Alonso, F.; Faus, M.; Riera, J.V.; Fernandez-Marin, M.; Useche, S.A. Effectiveness of Driving Simulators for Drivers’ Training. Encyclopedia. Available online: https://encyclopedia.pub/entry/43574 (accessed on 30 July 2024).
Effectiveness of Driving Simulators for Drivers’ Training

Although driving simulators are commonly assumed to be highly useful technological resources for instructing both novice and experienced drivers under risk-controlled settings, the evidence addressing their actual effectiveness remains substantially limited. Researchers therefore aimed to analyze the existing original literature on driving simulators as a tool for driver training/instruction, considering study features, study quality, and the established degree of effectiveness of simulators for these purposes. Among the considerably reduced set of original research studies assessing the effectiveness of driving simulators for training purposes, most sources provided reasonably good insights into their value for improving human-based road safety under risk-controlled settings. On the other hand, common limitations stand out, such as very small research samples, infrequent follow-up of training outcomes, and scarce information about the limitations targeted during simulator-based training processes. Despite these shortcomings, studies have commonly provided empirical support for the training value of simulators and endorsed the need for further evaluations of their effectiveness.

Keywords: driving simulators; drivers; technology; risk; training; road safety

1. Simulators in Context

Since their appearance in the 1940s, simulators have gained ground and consolidated themselves as "efficient" and risk-respectful tools for driver training under controlled conditions. In the beginning, given the large outlay involved, only public administrations could commission the development of simulators, and the United States Army was one of the first sponsors.
In the 1960s and 1970s, with the advent of the first digital computers and computer graphics, the U.S. Army commissioned the development of the first “full mission” simulator in history [1]. It was an aircraft flight simulator for training pilots that reproduced both the cockpit and a virtual scenario in which the aircraft flew. It was qualified as “full mission” because it allowed pilots to be trained from takeoff to landing.
Over the years, with great advances in the technological field (microprocessors, screens, projection, controllers, etc.), costs decreased and, in addition to public administrations, large companies began to develop their own simulators. This encouraged the appearance of simulators for all types of vehicles: aircraft [2], ships [3], submarines [4][5], cars [6][7], trucks [8][9], etc. As an example of this expansion, by the 1970s, 28 driving simulators had already been developed worldwide [10].
Initially, simulators were used only for training in areas or situations that were either too dangerous for users to practice in real life or were much more expensive to reproduce in real life than the investment needed to develop a simulator. Today, this concept has expanded, with entertainment being one of the main niches in the field of simulation. On the other hand, there are also simulators intended exclusively for scientific use.

2. Simulator Classification

Currently, there are so many companies dedicated to the design, development, and manufacture of driving simulators that there are very different criteria used to classify them [11]. Some of these criteria are:
According to their purpose [12], simulators can be classified into three groups: (i) training, when educational objectives are pursued, typically related to the prevention of risks and traffic accidents (these, in turn, can be divided into professional and amateur simulators); (ii) research, when the purpose of the simulator is to investigate a certain area of knowledge; and (iii) entertainment, when the purpose of the simulator is to amuse and entertain.
Based on their physical characteristics, simulators are usually classified according to their visualization system (e.g., field of view, projection system) [13]; pixel resolution (yielding a better or worse sense of user involvement and immersion) [13]; cockpit (whether or not they have a sensorized driving cab, which allows driving as if in a real vehicle) [14]; sound system (e.g., two-channel stereo, surround, or "8.1" systems) [15]; force feedback (small motors providing users with haptic feedback) [16]; and motion platform (allowing the simulator to reproduce the accelerations that would be felt in a real vehicle) [17].
In addition, according to their software-related characteristics, the most common simulator types are procedural simulation (used for formative/training purposes) [18]; full-mission simulation (used for professional driving training) [19]; games (generally intended for entertainment) [20]; and configurable settings (where situations can be edited live) [18].
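The classification criteria above can be captured in a small data model. The following Python sketch is purely illustrative; every class and field name is our own assumption rather than part of any cited taxonomy standard:

```python
from dataclasses import dataclass
from enum import Enum


class Purpose(Enum):
    """Purpose-based classification (training / research / entertainment)."""
    TRAINING = "training"
    RESEARCH = "research"
    ENTERTAINMENT = "entertainment"


class SoftwareType(Enum):
    """Software-based classification (procedural / full-mission / game / configurable)."""
    PROCEDURAL = "procedural"
    FULL_MISSION = "full-mission"
    GAME = "game"
    CONFIGURABLE = "configurable"


@dataclass
class SimulatorProfile:
    """One simulator described along the three classification axes."""
    name: str
    purpose: Purpose
    software: SoftwareType
    field_of_view_deg: int      # visualization system
    has_cockpit: bool           # sensorized driving cab
    has_force_feedback: bool    # haptic motors on controls
    has_motion_platform: bool   # reproduces accelerations


# Hypothetical full-mission professional training simulator
sim = SimulatorProfile(
    name="demo-truck-sim",
    purpose=Purpose.TRAINING,
    software=SoftwareType.FULL_MISSION,
    field_of_view_deg=180,
    has_cockpit=True,
    has_force_feedback=True,
    has_motion_platform=True,
)
```

A profile like this makes the classification explicit and machine-checkable, which is convenient when comparing simulators across studies.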

3. Simulators for Driving Training

This research covers the specific case of simulators intended to train drivers, which has an extensive background principally grounded in the concept of edutainment, understood as "the combination of education and entertainment in a learning process" [21][22]. Buckingham and Scanlon [23] previously stated that "edutainment is based on attracting and maintaining the attention of learners by using screens or animations to make learning fun"; in other words, it is based on the use of simulators. In fact, it has been shown [24][25][26] that, thanks to these new technologies, which often combine stimuli such as images, sounds, and videos, students are more likely to pay attention to the content and transfer it from short-term to long-term memory, thus consolidating their knowledge. This is especially relevant for in-vehicle driving simulators, since, on the one hand, general skills are taught and, on the other hand, participants are trained in situations that rarely occur in real life but for which one must be prepared.
Driving simulators are so widely used today that in some countries individuals are even required to pass certain tests on simulators in order to obtain a driver's license [27][28]. These countries include the United Kingdom, the Netherlands, Singapore, and Finland [27][29][30]. One great advantage of simulators is that they can measure, in an analytical way, everything that happens with the simulated vehicle and whether the user reacts correctly to the situations that arise. Another is replicability: the same situation can be recreated reliably over and over again, which makes it possible, on the one hand, to practice complex situations without putting any user at risk and, on the other, to make the requirements for passing a test more objective, since all users face exactly the same situation. In other countries, such as Saudi Arabia, where women were recently allowed to apply for driving licenses, a large number of requests for new driver training were received in a short period of time; had it not been for the use of simulators, it probably would not have been possible to meet this high demand [31].
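The replicability advantage can be illustrated with a short sketch: generating the hazard sequence from a fixed random seed guarantees that every trainee faces the identical scenario, while the simulator scores each reaction against an objective criterion. The function name, the seed, and the 1.5 s reaction threshold are all hypothetical choices for illustration:

```python
import random


def run_scenario(seed: int, reaction_times: list, threshold: float = 1.5) -> dict:
    """Replay a hazard scenario deterministically and score the trainee.

    Because the hazard onsets are drawn from a seeded generator, the same
    seed always yields the same scenario (replicability), so pass criteria
    are objective across trainees.
    """
    rng = random.Random(seed)                                # same seed -> same hazards
    hazards = [rng.uniform(5, 60) for _ in reaction_times]   # hazard onset times (s)
    passed = [rt <= threshold for rt in reaction_times]      # did the trainee react in time?
    return {
        "hazard_onsets_s": hazards,
        "reactions_ok": passed,
        "score": sum(passed) / len(passed),
    }


# Two trainees face the identical scenario (seed=42); only their reactions differ.
a = run_scenario(42, [0.9, 1.2, 2.1])
b = run_scenario(42, [1.1, 1.4, 1.3])
assert a["hazard_onsets_s"] == b["hazard_onsets_s"]  # same situation for both
```

Fixing the seed is what makes the test fair: any difference in score reflects the trainee, not the scenario.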

4. Theories of Performance and Implications for Simulation-Based Training (SBT) Measurement

Salas et al. (2009) [32] reviewed the state of the science on performance measurement systems for simulation-based training (SBT). They state that training based on traditional methods, such as lectures or conferences, is insufficient to meet the demands of many modern work environments, which is why organizations are resorting to simulators to transfer knowledge that is usually highly practical and oriented to situations participants will encounter in the real world.
The effectiveness of all the aforementioned types of simulators requires a set of generic, standardized actions, such as a guided training plan and continuous measurement of trainee performance, so that aptitude can be evaluated and the training adjusted on the basis of these measurements. If a performance measurement is not correctly tailored to individual or team use, it will almost certainly waste time and money for both the trainee and the organization providing the training.
There is currently a wide variety of theories on how performance should be measured and its implications in SBT-based systems, for both individual and collective learning (teamwork).
For the measurement of individual performance, the work of Campbell et al. [33] stands out; they state that performance depends on three variables: declarative knowledge, i.e., the facts and knowledge necessary to complete a task (understanding the task requirements); procedural knowledge and skills, i.e., the combination of knowing what to do and how to do it correctly; and, finally, motivation, i.e., the combination of the expenditure, level, and persistence of effort required for learning.
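As a toy illustration of how Campbell et al.'s three determinants might feed a composite score, the sketch below multiplies the three variables. The multiplicative aggregation is our assumption, chosen only to reflect the intuition that performance collapses if any determinant is absent; it is not prescribed by the original theory:

```python
def predicted_performance(declarative: float, procedural: float, motivation: float) -> float:
    """Toy composite of Campbell et al.'s three determinants [33].

    Each input is scored on a 0-1 scale:
      declarative -- facts and knowledge needed to understand the task
      procedural  -- knowing what to do and how to do it correctly
      motivation  -- expenditure, level, and persistence of effort

    The multiplicative form (an illustrative assumption) means a zero on
    any determinant predicts zero performance overall.
    """
    for v in (declarative, procedural, motivation):
        if not 0.0 <= v <= 1.0:
            raise ValueError("scores must lie in [0, 1]")
    return declarative * procedural * motivation


# A trainee who knows the rules (0.9) and how to apply them (0.8)
# but has no motivation (0.0) is predicted not to perform at all.
assert predicted_performance(0.9, 0.8, 0.0) == 0.0
```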
Regarding the measurement of team performance, there is a wide discussion [34][35] on the definitions of performance and its effectiveness. This has resulted in the formation of different frameworks depending on the specific learning context. Training a flight crew is not the same as training a marketing or human resources team. Among the different existing theoretical frameworks for measuring team performance in SBT are input–process–output (IPO) [36][37][38], shared mental models [39][40][41], adaptability [42][43], the “big five” of teamwork [43][44][45][46][47], and macrocognition/team cognition [48].
These theories require certain methods for their application in measuring SBT performance and feedback. Salas classifies these methods according to whether they are qualitative or quantitative. Those of the former type are used to define the simulation system to be developed and the measurements to be made. These include protocol analysis [49], the critical incident technique [50], and conceptual maps [51]. The latter type consists of those which, based on the former, quantify the developed processes and provide feedback to the system to correct what is necessary. These include behaviorally anchored rating scales, or BARS [52]; behavioral observation scales, or BOS [53]; communication analyses [54]; event-based measurement, or EBAT [55]; structural behavioral assessment [39]; self-report measures; and, finally, automated performance recording and measurement [55].
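Among the quantitative methods above, a behaviorally anchored rating scale (BARS) ties numeric ratings to concrete observable behaviors. The following hypothetical scale for a lane-change maneuver shows the idea; the anchors, behavior labels, and mapping logic are invented for illustration:

```python
# Hypothetical BARS anchors for a lane-change maneuver (illustrative only).
BARS_LANE_CHANGE = {
    1: "changes lane without mirror check or signal",
    3: "signals but checks mirrors late or not at all",
    5: "checks mirrors, signals, and adjusts speed before changing lane",
}


def rate(observed_behaviors: set) -> int:
    """Map a set of observed behaviors to the nearest BARS anchor.

    The behavior labels ('mirror_check', 'signal', 'speed_adjust') and the
    mapping rules are hypothetical; a real scale is built by subject-matter
    experts and validated empirically.
    """
    if {"mirror_check", "signal", "speed_adjust"} <= observed_behaviors:
        return 5
    if "signal" in observed_behaviors:
        return 3
    return 1


assert rate({"mirror_check", "signal", "speed_adjust"}) == 5
assert rate({"signal"}) == 3
```

Because each rating is anchored to an observable behavior, two raters watching the same simulator run should converge on the same score, which is exactly what makes BARS useful for SBT feedback.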
Simulation is a field increasingly incorporated into our daily lives, and in recent years new methods have emerged to measure simulation-based learning. Papakostas et al. [56] proposed a model to measure the user experience and usability of a simulation-based training application. It combines perceived usefulness, perceived ease of use, behavioral intention to use the system, and two additional external variables to estimate user acceptance of the simulation system. Some years earlier, the same authors had proposed a similar method based on the evaluation of external variables considered to be "strong predictors" [57].
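A TAM-style acceptance measure of the kind described by Papakostas et al. aggregates several rated constructs into one score. The sketch below uses a simple weighted average over the constructs named in the text; the weights and the linear aggregation are illustrative assumptions, not the authors' validated structural model:

```python
def acceptance_score(perceived_usefulness: float,
                     perceived_ease_of_use: float,
                     behavioral_intention: float,
                     external: tuple,
                     weights=(0.3, 0.2, 0.3, 0.1, 0.1)) -> float:
    """Weighted aggregate of TAM-style constructs, each rated on a 1-7 scale.

    The construct names mirror those listed by Papakostas et al. [56]; the
    weights (which sum to 1.0) and the linear form are our own assumptions
    made purely to illustrate how such scores can be combined.
    """
    factors = (perceived_usefulness, perceived_ease_of_use,
               behavioral_intention, *external)
    if len(factors) != len(weights):
        raise ValueError("one weight per construct is required")
    return sum(w * f for w, f in zip(weights, factors))


# Uniformly high ratings on every construct yield an equally high score.
score = acceptance_score(6.0, 6.0, 6.0, (6.0, 6.0))
assert abs(score - 6.0) < 1e-9
```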
As can be seen, the variety of performance measurement types for simulation-based training is enormous. This methodological entry therefore does not focus on any single one, but rather presents the different studies published to date, reporting the results obtained and the methods used in each. INTRAS (Institute of Traffic and Road Safety) and IRTIC (Institute of Robotics and Information and Communication Technologies) of the University of Valencia have worked together for many years in the area of driving simulation. They have developed dozens of simulators and campaigns aimed at both professional and novice drivers, always attempting to objectify the results each simulator transmits to its users [58][59]. Likewise, the scientific community has spent many years studying the impact of each type of simulator on each type of training, producing a large number of publications, some of which contradict the results of others and question the effectiveness of certain simulators [14].
Therefore, it is considered necessary to systematically review what has been published to date regarding the effectiveness of simulation systems for driver training. The following sections cover this need.

References

  1. Woodruff, R.; Smith, J.F.; Fuller, J.R.; Weyer, D.C. Full Mission Simulation in Undergraduate Pilot Training: An Exploratory Study; Defense Technical Information Center, Department of Defense: Fort Belvoir, VA, USA, 1976.
  2. Rolfe, J.M.; Staples, K.J. Flight Simulation; No. 1; Cambridge University Press: Cambridge, UK, 1988.
  3. Spencer, S.N. Proceedings of the 14th ACM SIGGRAPH International Conference on Virtual Reality Continuum and Its Applications in Industry, Kobe, Japan, 30 October–1 November 2015; ACM: New York, NY, USA, 2015.
  4. Huang, H.M.; Hira, R.; Feldman, P. A Submarine Simulator Driven by a Hierarchical Real-Time Control System Architecture; National Institute of Standards and Technology: Gaithersburg, MD, USA, 1992; NIST IR 4875.
  5. Lin, Z.; Feng, S.; Ying, L. The design of a submarine voyage training simulator. In Proceedings of the SMC’98 Conference Proceedings, Proceedings 1998 IEEE International Conference on Systems, Man, and Cybernetics (Cat. No. 98CH36218), San Diego, CA, USA, 14 October 1998; IEEE: Piscataway, NJ, USA, 1998; Volume 4, pp. 3720–3724.
  6. Allen, R.W.; Jex, H.R.; McRuer, D.T.; DiMarco, R.J. Alcohol effects on driving behavior and performance in a car simulator. IEEE Trans. Syst. Man Cybern. 1975, SMC-5, 498–505.
  7. Wymann, B.; Espié, E.; Guionneau, C.; Dimitrakakis, C.; Coulom, R.; Sumner, A. Torcs, the Open Racing Car Simulator. 2000. Available online: http://torcs.sourceforge.net (accessed on 7 January 2023).
  8. Robin, J.L.; Knipling, R.R.; Tidwell, S.A.; Derrickson, L.; Antonik, C.; McFann, J. Truck Simulator Validation (“SimVal”) Training Effectiveness Study; International Truck and Bus Safety and Security Symposium: Alexandria, VA, USA, 2005; Available online: https://trid.trb.org/view/1156437 (accessed on 9 December 2022).
  9. Gillberg, M.; Kecklund, G.; Åkerstedt, T. Sleepiness and performance of professional drivers in a truck simulator—Comparisons between day and night driving. J. Sleep Res. 1996, 5, 12–15.
  10. Hulbert, S.; Wojcik, C. Driving Task Simulation. Human Factors in Highway Traffic Safety Research; Michigan State University: East Lansing, MI, USA, 1972.
  11. Eryilmaz, U.; Tokmak, H.S.; Cagiltay, K.; Isler, V.; Eryilmaz, N.O. A novel classification method for driving simulators based on existing flight simulator classification standards. Transp. Res. Part C Emerg. Technol. 2014, 42, 132–146.
  12. Galloway, R.T. Model validation topics for real time simulator design courses. In Proceedings of the Society for Modeling and Simulation International, 2001 Summer Computer Simulation Conference, Orlando, FL, USA, 15–19 July 2001.
  13. Li, N.; Sun, N.; Cao, C.; Hou, S.; Gong, Y. Review on visualization technology in simulation training system for major natural disasters. Nat. Hazards 2022, 112, 1851–1882.
  14. Ng, T.S. Robotic Vehicles: Systems and Technology; Springer: Berlin/Heidelberg, Germany, 2021.
  15. Ploner-Bernard, H.; Sontacchi, A.; Lichtenegger, G.; Vössner, S.; Braunstingl, R. Sound-system design for a professional full-flight simulator. In Proceedings of the 8th International Conference on Digital Audio Effects (DAFx-05), Madrid, Spain, 20–22 September 2005; pp. 36–41.
  16. Mohellebi, H.; Kheddar, A.; Espié, S. Adaptive haptic feedback steering wheel for driving simulators. IEEE Trans. Veh. Technol. 2008, 58, 1654–1666.
  17. Mohellebi, H.; Espié, S.; Arioui, H.; Amouri, A.; Kheddar, A. Low cost motion platform for driving simulator. In Proceedings of the ICMA: International Conference on Machine Automation, Japan, November 2004; Volume 5, pp. 271–277.
  18. Seymour, N.E. VR to OR: A review of the evidence that virtual reality simulation improves operating room performance. World J. Surg. 2008, 32, 182–188.
  19. Parkes, A.M.; Reed, N.; Ride, N.M. Fuel efficiency training in a full-mission truck simulator. In Behavioural Research in Road Safety 2005: Fifteenth Seminar; Department for Transport: London, UK, 2005; pp. 135–146.
  20. Narayanasamy, V.; Wong, K.W.; Fung, C.C.; Rai, S. Distinguishing games and simulation games from simulators. Comput. Entertain. 2006, 4, 9-es.
  21. Corona, F.; Perrotta, F.; Polcini, E.T.; Cozzarelli, C. The new frontiers of edutainment: The development of an educational and socio-cultural phenomenon over time of globalization. J. Soc. Sci. 2011, 7, 408.
  22. Corona, F.; Cozzarelli, C.; Palumbo, C.; Sibilio, M. Information technology and edutainment: Education and entertainment in the age of interactivity. Int. J. Digit. Lit. Digit. Competence 2013, 4, 12–18.
  23. Buckingham, D.; Scanlon, M. Edutainment. J. Early Child. Lit. 2001, 1, 281–299.
  24. Backhaus, K.; Liff, J.P. Cognitive styles and approaches to studying in management education. J. Manag. Educ. 2007, 31, 445–466.
  25. Faus, M.; Alonso, F.; Esteban, C.; Useche, S. Are Adult Driver Education Programs Effective? A Systematic Review of Evaluations of Accident Prevention Training Courses. Int. J. Educ. Psychol. 2023, 12, 62–91.
  26. Valero-Mora, P.M.; Zacarés, J.J.; Sánchez-García, M.; Tormo-Lancero, M.T.; Faus, M. Conspiracy beliefs are related to the use of smartphones behind the wheel. Int. J. Environ. Health Res. 2021, 18, 7725.
  27. Straus, S.H. New, Improved, Comprehensive, and Automated Driver’s License Test and Vision Screening System; No. FHWA-AZ-04-559(1); Arizona Department of Transportation: Phoenix, AZ, USA, 2005.
  28. Sætren, G.B.; Pedersen, P.A.; Robertsen, R.; Haukeberg, P.; Rasmussen, M.; Lindheim, C. Simulator training in driver education—Potential gains and challenges. In Safety and Reliability–Safe Societies in a Changing World; CRC Press: Boca Raton, FL, USA, 2018; pp. 2045–2049.
  29. Upahita, D.P.; Wong, Y.D.; Lum, K.M. Effect of driving experience and driving inactivity on young driver’s hazard mitigation skills. Transp. Res. F Traffic Psychol. Behav. 2018, 59, 286–297.
  30. Bro, T.; Lindblom, B. Strain out a gnat and swallow a camel?—Vision and driving in the Nordic countries. Acta Ophthalmol. 2018, 96, 623–630.
  31. Al-Garawi, N.; Dalhat, M.A.; Aga, O. Assessing the road traffic crashes among novice female drivers in Saudi Arabia. Sustainability 2021, 13, 8613.
  32. Salas, E.; Rosen, M.A.; Held, J.D.; Weissmuller, J.J. Performance measurement in simulation-based training: A review and best practices. Simul. Gaming 2009, 40, 328–376.
  33. Campbell, J.P.; McCloy, R.A.; Oppler, S.H.; Sager, E. A theory of performance. In Personnel Selection in Organizations; Schmitt, N., Borman, W., Eds.; Jossey-Bass: San Francisco, CA, USA, 1993; pp. 35–70.
  34. Kendall, D.L.; Salas, E. Measuring team performance: Review of current methods and consideration of future needs. In The Science and Simulation of Human Performance; Ness, J.W., Tepe, V., Ritzer, D., Eds.; Elsevier: Boston, MA, USA, 2004; pp. 307–326.
  35. MacBryde, J.; Mendibil, K. Designing performance measurement systems for teams: Theory and practice. Manag. Decis. 2003, 41, 722–733.
  36. Salas, E.; Stagl, K.C.; Burke, C.S. 25 years of team effectiveness in organizations: Research themes and emerging needs. Int. Rev. Ind. Organ. Psychol. 2004, 19, 47–91.
  37. Salas, E.; Priest, H.A.; Burke, C.S. Teamwork and team performance measurement. In Evaluation of Human Work, 3rd ed.; Wilson, J.R., Corlett, N., Eds.; Taylor & Francis: Boca Raton, FL, USA, 2005; pp. 793–808.
  38. Marks, M.A.; Mathieu, J.E.; Zaccaro, S.J. A temporally based framework and taxonomy of team processes. Acad. Manag. Rev. 2001, 26, 356–376.
  39. Klimoski, R.; Mohammed, S. Team mental model: Construct or metaphor? J. Manag. 1994, 20, 403–437.
  40. Cannon-Bowers, J.A.; Salas, E.; Converse, S. Shared mental models in expert team decision making. In Individual and Group Decision Making; Castellan, N.J.J., Ed.; Lawrence Erlbaum: Hillsdale, NJ, USA, 1993; pp. 221–246.
  41. Mathieu, J.E.; Heffner, T.S.; Goodwin, G.F.; Salas, E.; Cannon-Bowers, J. The influence of shared mental models on team process and performance. J. Appl. Psychol. 2000, 85, 273–283.
  42. Morgan, B.B.; Salas, E., Jr.; Glickman, A.S. An analysis of team evolution and maturation. J. Gen. Psychol. 1993, 120, 277–291.
  43. Burke, C.S.; Stagl, K.C.; Salas, E.; Pierce, L.; Kendall, D. Understanding team adaptation: A conceptual analysis & model. J. Appl. Psychol. 2006, 91, 1189–1207.
  44. Zaccaro, S.J.; Rittman, A.L.; Marks, M.A. Team leadership. Leadersh. Q. 2001, 12, 451–483.
  45. Burke, C.S.; Fiore, S.M.; Salas, E. The role of shared cognition in enabling shared leadership and team adaptability. In Shared Leadership: Reframing the Hows and Whys of Leadership; Pearce, C.L., Conger, J.A., Eds.; Sage: Thousand Oaks, CA, USA, 2004; pp. 103–121.
  46. Porter, C.O.; Hollenbeck, J.R.; Ilgen, D.R.; Ellis, A.P.; West, B.J.; Moon, H. Backing up behaviors in teams: The role of personality and legitimacy of need. J. Appl. Psychol. 2003, 88, 391–403.
  47. Bandow, D. Time to create sound teamwork. J. Qual. Particip. 2001, 24, 41–47.
  48. Mohammed, S.; Klimoski, R.; Rentsch, J.R. The measurement of team mental models: We have no shared schema. Organ. Res. Methods 2000, 3, 123–165.
  49. Shadbolt, N. Eliciting expertise. In Evaluation of Human Work; Wilson, J.R., Corlett, N., Eds.; Taylor & Francis: Boca Raton, FL, USA, 2005; pp. 185–218.
  50. Flanagan, J.C. The critical incident technique. Psychol. Bull. 1954, 51, 327–358.
  51. Cooke, N.J.; Salas, E.; Kiekel, P.A.; Bell, B. Advances in measuring team cognition. In Team Cognition: Understanding the Factors that Drive Process and Performance; Salas, E., Fiore, S.M., Eds.; American Psychological Association: Washington, DC, USA, 2004; pp. 83–106.
  52. Smith, P.C.; Kendall, L.M. Retranslation of expectations: An approach to the construction of unambiguous anchors for rating scales. J. Appl. Psychol. 1963, 47, 149–155.
  53. Baddeley, A.D.; Hitch, G. The recency effect: Implicit learning with explicit retrieval? Mem. Cogn. 1993, 21, 146–155.
  54. Keikel, P.A.; Cooke, N.J.; Foltz, P.W.; Shope, S.M. Automating measurement of team cognition through analysis of communication data. In Usability Evaluation and Interface Design; Smith, M.J., Salvendy, G., Harris, D., Koubek, R.J., Eds.; Lawrence Erlbaum: Mahwah, NJ, USA, 2001; pp. 1382–1386.
  55. Rosen, M.A.; Salas, E.; Silvestri, S.; Wu, T.; Lazzara, E.H. A measurement tool for simulation-based training in emergency medicine: The Simulation Module for Assessment of Resident Targeted Event Responses (SMARTER) approach. Simul. Healthc. 2008, 3, 170–179.
  56. Papakostas, C.; Troussas, C.; Krouska, A.; Sgouropoulou, C. Measuring User Experience, Usability and Interactivity of a Personalized Mobile Augmented Reality Training System. Sensors 2021, 21, 3888.
  57. Papakostas, C.; Troussas, C.; Krouska, A.; Sgouropoulou, C. User acceptance of augmented reality welding simulator in engineering training. Educ. Inf. Technol. 2021, 27, 791–817.
  58. Faus, M.; Alonso, F.; Javadinejad, A.; Useche, S.A. Are social networks effective in promoting healthy behaviors? A systematic review of evaluations of public health campaigns broadcast on Twitter. Front. Public Health 2022, 10, 1045645.
  59. Alonso, F.; Faus, M.; Tormo, M.T.; Useche, S.A. Could technology and intelligent transport systems help improve mobility in an emerging country? Challenges, opportunities, gaps and other evidence from the caribbean. Appl. Sci. 2022, 12, 4759.
Subjects: Robotics
Update Date: 28 Apr 2023