Enjalbert, S. Human–Machine Interface in Transport Systems. Encyclopedia. Available online: (accessed on 20 April 2024).
Human–Machine Interface in Transport Systems


Keywords: transport; railway; vehicles

1. Systems Developed by Manufacturers of No-Rail Surface Vehicles

1.1. Trucks

MAN PLATOONING by MAN and DB Schenker is a technical system that turns multiple vehicles into a convoy of trucks (Figure 1). The technology relies on driver assistance and control systems. The driver always keeps his/her hands on the wheel, even though the trucks following behind react directly to the lead vehicle, without active driver intervention. A specially designed display provides drivers with additional information about the convoy, while camera and radar systems continuously scan the surroundings.

Figure 1. Concept of truck platooning[1].
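Spacing control of the kind platooning systems rely on is often formalized as a constant time-gap policy: each follower keeps a gap equal to a standstill distance plus a time gap times its own speed. The sketch below is a generic illustration with assumed gains and parameter names, not MAN's proprietary controller.

```python
# Constant time-gap spacing policy for a following truck.
# All gains and limits are illustrative assumptions.

def spacing_error(gap_m, follower_speed_mps, standstill_gap_m=5.0,
                  time_gap_s=0.5):
    """Error between the actual gap and the desired gap d0 + h*v."""
    desired = standstill_gap_m + time_gap_s * follower_speed_mps
    return gap_m - desired

def follower_accel(gap_m, follower_speed_mps, leader_speed_mps,
                   kp=0.2, kd=0.7, a_max=2.0):
    """PD-style acceleration command, saturated to comfortable limits."""
    e = spacing_error(gap_m, follower_speed_mps)
    e_dot = leader_speed_mps - follower_speed_mps  # rate at which the gap closes
    a = kp * e + kd * e_dot
    return max(-a_max, min(a_max, a))
```

For example, at 25 m/s the desired gap is 5 + 0.5 × 25 = 17.5 m; with a 20 m actual gap and matched speeds, the follower accelerates gently to close the surplus.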

HIGHWAY PILOT by Daimler[2] is an autopilot mechanism for trucks. It consists of assistance and connectivity systems enabling autonomous driving on highways by adapting the speed of trucks to traffic density. Overtaking maneuvers, lane changes and exiting the highway remain prerogatives of the driver. The user interface continuously informs the driver about the activation status, and the driver can issue instructions, including activation and deactivation, meaning that the system can be overridden at any time. In addition, control is returned to the driver whenever the onboard system is no longer able to detect safety-relevant aspects, as in cases of roadworks, extreme weather conditions or an absence of lane markings.
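The handover behavior described above (driver override always wins, and control falls back to the driver whenever conditions can no longer be handled) can be sketched as a tiny mode-transition function. This is a hypothetical illustration, not Daimler's implementation; the state and condition names are assumptions.

```python
# Mode-transition sketch for an autonomy handover policy.
# Modes and inputs are illustrative assumptions.

def next_mode(mode, driver_override, conditions_ok):
    """mode is 'MANUAL' or 'AUTONOMOUS'; returns the next mode.

    Any driver override, or any loss of safety-relevant detection
    (roadworks, weather, missing lane markings), forces MANUAL.
    """
    if driver_override or not conditions_ok:
        return "MANUAL"
    return mode
```

Note the asymmetry: the function never switches into autonomous mode by itself; activation remains an explicit driver action.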

1.2. Cars

AUTOPILOT 2.5 by Tesla[3] equipment includes:

  • 8 × 360° cameras;

  • 12 ultrasonic sensors for the detection of hard and soft objects at a distance, with double the accuracy of previous systems;

  • A forward-facing radar system, with high processing capabilities, providing additional data on the surrounding environment on different wavelengths in order to counteract the effects of heavy rain, fog, dust and other cars.

The autopilot function suggests when to change lanes in order to optimize the route, and makes adjustments to avoid remaining behind slow vehicles. It also automatically guides the vehicle through junctions and onto highway exits based on the selected destination.

iNEXT-COPILOT by BMW (Figure 2) may be activated for autonomous driving or deactivated for traditional driving. The interior of the vehicle is readjusted according to touch or voice commands:

Figure 2. iNEXT-COPILOT system[4].

  • The steering wheel retracts, providing more room for passengers;

  • The pedals retract, creating a flat surface on the footwall;

  • The driver and front passenger can turn back towards other passengers in the rear seats;

  • Various displays provide information about the surrounding area.

1.3. Ships

THE FALCO by Finferries and Rolls-Royce allows vessels to detect objects through a fusion of sensors and artificial intelligence for collision avoidance. It is the culmination of studies launched in 2008 (Figure 3)[5]; since then, its realism has improved greatly thanks to advanced sensors which provide real-time images of the surroundings with a level of accuracy higher than that of the human eye. Based on this, the vessel is able to alter course and speed automatically during the approach to a quay and dock in a fully automated maneuver without human intervention. A unified bridge provides the operator with a functional HMI comprising control levers, touch screens for calls and control, and logically presented information on system status. During maneuvers outside congested harbor areas, the operator can control operations remotely using a joystick or supervise them via onboard situation awareness systems. Various autonomy levels can operate selectively or in combination, depending on the vessel's situation and external conditions.

Figure 3. Autopilot system concept based on dynamic positioning[5].

2. Simulators of Transport Systems

2.1. Rail Simulators

The PSCHITT-RAIL simulator[6] (Figure 4) was designed as an additional support for research and experimentation in the field of transport, funded by the European Union through the European Regional Development Fund (ERDF) and the Hauts-de-France Region. It aims to integrate new modular equipment through the study of driver behavior in risky situations. Its main functionalities are:

Figure 4. PSCHITT-RAIL simulator interface.

  • Immersion in a sonic and visual environment;

  • Integration between real components and a simulated environment;

  • Management of driver information.

The equipment includes an Alstom Citadis Dualis desk, a six-degree-of-freedom motion system, OKSimRail software, five screens providing a 225° view, three eye trackers, cameras, a 6.1 audio system and scripting tools, synthesized into a dynamic platform by means of measurement devices such as motion-capture systems, physiological sensors, etc.

SPICA RAIL, by the University of Technology of Compiegne[6], is a supervision platform that recreates real accident scenarios in the laboratory in order to analyze human behaviors and decisions in such situations. It is able to analyze and simulate the entire network by integrating traffic control functions. Moreover, to simulate crises, personnel can start from an automatic control level and progressively insert disturbances into the network.

OKTAL SYDAC simulators[7] cover solutions for trams, light rail, metros, freight (complex systems), high-speed trains, trucks, buses and airports. The full cab or desk is an exact replica of a real cab. The compact desk simulators offer a training solution where space is limited.

The IFFSTAR-RAIL by IFFSTAR[8] is a platform designed to simulate rail traffic for the assessment of some European Rail Traffic Management System (ERTMS) components. It includes three subsystems:

  • A driving simulator desk used in combination with a 3D representation of tracks;

  • A traffic simulator, acting both as a single train management tool and for railway traffic control;

  • A test bench connected with onboard ERTMS equipment, in compliance with specifications and rules.

2.2. Car Simulators

IFFSTAR TS2, located in Bron and Salon-de-Provence, is a simulator of internal and external factors influencing driver behavior and of the human–machine interface. It is capable of analyzing driver behavior relative to internal (e.g., anger, sadness) or external environmental factors and of studying driver–vehicle interactions. The instrumentation includes:

  • Sensors for the control, communication and processing of dashboard information;

  • Images of road scenes projected on five frontal screens in a visual field covering 200° × 40°;

  • A device providing rear-view images;

  • Quadraphonic sound reproducing internal (motor, rolling, starter) and external traffic noises.

NVIDIA DRIVE[9] is an open platform providing an interface that integrates environmental, vehicle and sensor models with traffic scenario and data managers. It includes two servers: a simulator, which generates output from virtual car sensors, and a computer, which contains the DRIVE AGX Pegasus AI car computer running the complete, binary-compatible autonomous vehicle software stack. The latter processes the simulated data as if it were coming from the sensors of a car actually driving on the road: the car computer receives the sensor data, makes decisions and sends vehicle control commands back to the simulator in a closed-loop process, enabling bit-accurate, timing-accurate hardware-in-the-loop testing. The kit enables the development of artificial intelligence assistants for both drivers and passengers. The HMI uses data from sensors tracking the driver and the surrounding environment to keep drivers alert, anticipate passenger needs and provide insightful visualizations of journeys. The system uses deep learning networks to track head movements and gaze, and can communicate verbally via advanced speech recognition, lip reading and natural language understanding.
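The closed loop described above (simulator produces sensor data, the car computer decides, commands return to the simulator) can be sketched in a few lines. All class names, the message format and the toy lane-centering policy are illustrative assumptions, not the actual NVIDIA DRIVE API.

```python
# Minimal hardware-in-the-loop skeleton: simulator <-> car-computer stub.
# Names and dynamics are illustrative assumptions only.

class VirtualSimulator:
    """Generates synthetic sensor frames and applies control commands."""
    def __init__(self):
        self.speed = 20.0        # m/s
        self.lane_offset = 1.0   # m from lane center

    def sensor_frame(self):
        return {"speed": self.speed, "lane_offset": self.lane_offset}

    def apply(self, command, dt=0.1):
        self.speed += command["accel"] * dt
        self.lane_offset += command["steer"] * dt

class CarComputerStub:
    """Stands in for the AV software stack: sensor frame in, command out."""
    def decide(self, frame):
        # Trivial proportional lane centering; holds speed constant.
        return {"accel": 0.0, "steer": -0.5 * frame["lane_offset"]}

sim, ecu = VirtualSimulator(), CarComputerStub()
for _ in range(100):  # fixed-step loop, deterministic and repeatable
    sim.apply(ecu.decide(sim.sensor_frame()))
```

Because the step size and ordering are fixed, every run is bit-identical, which is the property that makes this loop structure suitable for repeatable regression testing.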

VRX-2019 by OPTIS[10][11] is a dynamic simulator whose interior proportions, shapes, placement and surfaces emphasize ergonomics and passenger comfort. From the integration of new technologies, such as driving assistance and autonomous driving, to the validation of ergonomics for future drivers and passengers, each step of the interior design has been analyzed and validated. It reproduces the feel of a cockpit HMI. It is very useful for virtual tests and for the integration of next-generation sensors before their actual release, helping to eliminate expensive real-world tests and reduce time-to-market. Through virtual displays, actuators, visual simulations, eye and finger tracking and haptic feedback, it provides a tool for full HMI evaluation. Moreover, the user can validate safety and comfort improvements for drivers and pedestrians in dangerous scenarios. Key interface features are:

  • A finger-tracking system;

  • Tactile displays and dynamic content;

  • Windshield or glasshouse reflection studies based on physically accurate reflection simulations;

  • Testing and validation of head up displays, specifying and improving optical performance and the quality of the content.

2.3. Aviation Simulators

The CAE 7000XR Series Full-Flight Simulator (FFS) surpasses the operator requirements of Level D regulations. It provides integrated access to advanced functions, such as:

  • An intuitive lesson plan builder;

  • A 3D map of flight paths with event markers;

  • Increased information density;

  • Ergonomic redesign of interiors (Figure 5).

Figure 5. Integrated CAE 7000XR flight simulator[12].

The CAE 3000 Series Helicopter flight and mission Simulator[13] is helicopter-specific for training in severe conditions, e.g., offshore, emergency services, high-altitude, etc., based on the following features:

  • A visual system with high-definition commercial projectors;

  • Up to 220° × 80° field-of-view direct projection dome, with full chin window coverage tailored to helicopter training operations.

The EXCALIBUR MP521 Simulator includes a capsule with a six-axis motion system, visual and instrument displays, touch control panels and hardware for flight control. The user can enter parameters to define an aircraft through a graphical user interface (GUI). The graphics also feature airports and reconfigurable scenic elements, such as runway lighting, approach aids and surroundings, in order to meet the requirements of flight training, with the possibility of large wall-screen viewing for group lessons.

The ALSIM AL250 simulator[14] includes a visual system equipped with a high-quality compact panoramic display (minimum frame rate of 60 images/s) with a 250° × 49° field of view, high-definition rendering for better weather depiction and ultra-realism, a new map display, a positioning/repositioning system, weather condition adjustment, a failure menu, and position and weather presets. Optional upset recovery training and a touch screen with a wider instructor surface and adjustable seats complete the setup.

AIRLINER is a multipurpose hybrid simulator able to cover the following training scenarios: Multi Crew Cooperation (MCC), Advanced Pilot Standard (APS), airline selection, preparation and skills tests, aircraft complex systems operation, Line Oriented Flight Training (LOFT), type-rating preparation and familiarization, and Upset Prevention Training (UPT). It is based on a hybrid Airbus A320/Boeing B737 philosophy, with the versatility to adapt flight training and an interchangeable cockpit configuration.

i-VISION by OPTIS is an immersive virtual environment for the design and validation of human-centered aircraft cockpits. It is the result of a project developed and applied to accelerate the design, validation and training processes of prototype aircraft cockpits. The applied methods are also exportable to cars, trucks, boats and trains. i-VISION includes three main technological components, which were integrated and validated in collaboration with relevant industrial partners:

  • Human cockpit operations analysis module, with human factor methods demonstrated in the prototype of the project;

  • Semantic virtual cockpit, with semantic virtual scene-graph and knowledge-based reasoning of objects and intelligent querying functions, providing a semantic-based scene-graph and human task data processing and management engine;

  • Virtual cockpit design environment, with a virtual environment provided by human ergonomics evaluation software based upon the Airbus flight simulator, to develop a new general user interface for cockpits.

2.4. Integrated Simulators

MISSRAIL® and INNORAIL are tools developed entirely by the Université Polytechnique Hauts-de-France in Valenciennes[15][16][17]. They include four main modules (Figure 6): (1) a railway infrastructure design module, (2) a route design module, (3) a driving cab simulation module and (4) a control-command room simulation module. A version with an immersive helmet is also available.

Figure 6. MISSRAIL® interface for rail simulation.

It is a Client/Server application which is capable of connecting different users at the same time. Several configurations are possible: wired, wireless, portable, fixed. Moreover, MISSRAIL® is able to design various automated driving assistance tools (e.g., eco-driving assistance, collision avoidance system, cruise control and vigilance control system) and accident scenarios combining pedestrians, trains and cars (Figure 7)[17].

Figure 7. Train, car and pedestrian control simulation on MISSRAIL®.

3. Support Tools for Driver Assistance

3.1. Human Factors and Their Limits

Human control engineering can use several technological means to monitor human factors such as workload, attention or vigilance; however, controversies exist about some of them[18]. First, studies have highlighted that, independently of technology, vigilance can be improved by dieting or fasting[19], or even by chewing gum[20][21]. Second, two kinds of technological support can influence the human cognitive state: passive tools, i.e., human–machine interaction supports, and active ones, which are capable of making decisions and acting accordingly. For example, listening to music may improve concentration[22][23], while a dedicated decision support system can decrease workload by improving performance[24]. Moreover, due to disengagement from the driving situation under monotonous conditions, automation might lead operators to become more fatigued than they would during manual driving[25][26].

A great deal of research has reported on the utility of using physiological, behavioral, auditory and vehicle data to detect the mental state of the driver, such as presence/sleep, drowsiness or cognitive workload, with considerable accuracy[27][28][29]. Diverse parameters can provide information on targeted mental states: physiological measures include signals such as the ElectroEncephaloGram (EEG), ElectroDermal Activity (EDA), and heart rate and heart rate variability. Behavioral measures include aspects such as head direction and movement, gaze direction and dispersion, pose of the upper body, blinking, saccades, PERCLOS, pupil size, eyelid movement, postural adjustment and non-self-centered gestures. Such data may be combined in multimodal approaches with information on vehicle activity and auditory information. However, existing approaches still present clear limitations: EEG, for example, is hardly usable in real contexts due to possible discomfort and unproven performance, as well as, in some cases, the high computational cost of calculations, which constrains implementation in real environments[30]. Note also that more traditional behavioral measures used in experimental psychology, such as the secondary task paradigm, have been shown to be quite useful in workload studies[31].
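As an example of the behavioral measures listed above, PERCLOS can be computed directly from per-frame eyelid-closure estimates. The sketch below follows the common P80 definition (proportion of time the eye is at least 80% closed over a window); the input format, one closure fraction per video frame, is an assumption.

```python
# PERCLOS (P80): fraction of frames with the eye >= 80% closed.
# Input format is an assumption: one eyelid-closure fraction per frame.

def perclos(closure_per_frame, threshold=0.8):
    """Return the fraction of frames in which the eye is nearly closed."""
    if not closure_per_frame:
        return 0.0
    closed = sum(1 for c in closure_per_frame if c >= threshold)
    return closed / len(closure_per_frame)

# Example window: 2 of 8 frames nearly closed.
frames = [0.1, 0.2, 0.9, 0.95, 0.1, 0.0, 0.3, 0.2]
```

In practice the window slides over time (e.g., one minute of video) so that a rising PERCLOS value can trigger a drowsiness warning.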

Results from studies on human factors in driving automation based on these techniques often concern questions such as how users tackle automated driving and transitions between manual and automated control. Most such studies were motivated by the increasing prevalence of automated control in commercial and public transport vehicles, as well as by increases in the degree of automation. While automated driving significantly reduces workload, this is not the case for Adaptive Cruise Control (ACC)[32]. For instance, a study using a driving simulator and a vehicle with eye-tracking measures showed that the time required to resume control of a car is about 15 s, and up to 40 s to stabilize it[33].

Alarms comprising beeps are safer than those comprising sounds with positive or negative emotional connotations[34], and human performance can differ according to whether the interaction involves hearing or sight[35]. Moreover, interactions with visual or audio-visual displays are more efficient than those with auditory displays only[36]. In this sense, research on multimodal perception is particularly relevant when studying the human factors of driver aid systems[37][38].

Other studies have not observed significant impacts of noise or music on human performance[39] and have even concluded that silence is able to increase attention during human disorder recovery conditions[40].

Moreover, the use of decision support systems can generate ambiguous results by leading to dissonances, affordances, contradictions or interferences with safety-critical behavior[41][42][43], potentially increasing hypo-vigilance and extending human response time[44]. As an example, the well-known Head-Up Display (HUD) is useful for presenting important information to drivers without requiring them to move their gaze in several directions, but it is also a means of focusing attention on a reduced control area[45]. It is, therefore, a tool to prevent accidents, but it can also cause problems of focused attention.

Neuropsychological studies generally use sensors connected to the brain to assess neural activities related to cognitive processes, such as perception or problem solving. In this context, eye trackers have been demonstrated to be useful for the study of visual attention or workload via parameters such as closure percentage, blink frequency, fixation duration, etc.[46][47][48][49]. Indeed, pupil diameter increases with increasing task demand and cognitive load[50], while an increase in physical demand has the opposite effect[51], as do external factors such as variations in ambient light, use of drugs or strong emotions. Facial recognition is also incapable of detecting dissonances between expressed and felt emotions. Moreover, eye blink frequency decreases as workload increases[52][53], but it increases when a secondary task is required[54][55].

Eye-trackers can be useful to analyze overt or covert attention: when a subject looks at a point on a scene, the analysis of the corresponding eye movement supposes that the attention is focused on this point, whereas attention can also focus on other points without any eye movement[56].

Variations in heart rate frequently correspond to variations in workload, stress or emotion[57][58][59][60], but a new hypothesis considers that perceptive ability can depend on the synchronization between the frequency of dynamic events and heartbeats. Recent studies have demonstrated that flashing alarms synchronized with the heart rate can reduce the solicitation of the insula, i.e., the part of the brain dedicated to perception, and thus the ability to detect such alarms correctly[61][62]. This performance-shaping factor based on the synchronization of dynamic events with heartbeats is relevant for human error analysis.
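The heart-rate-variability measures referred to above can be illustrated with two standard indices, SDNN and RMSSD, computed from a series of RR intervals. A minimal stdlib sketch; the input format (RR intervals in milliseconds) is an assumption.

```python
# Two standard HRV indices from RR intervals (milliseconds).
# Input format is an assumption for illustration.

import math
import statistics

def sdnn(rr_ms):
    """Standard deviation of RR intervals (overall variability)."""
    return statistics.stdev(rr_ms)

def rmssd(rr_ms):
    """Root mean square of successive RR differences (short-term HRV)."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))
```

Falling short-term variability (RMSSD) under a constant task is one of the patterns such studies associate with rising workload or stress.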

The development of future smart tools to support driving tasks has to consider extended abilities, such as the ability:

  • To cooperate with and learn from others[63][64][65];

  • To explain results in a pedagogical way[17];

  • To discover and control dissonances between users and support tools[41][42].

Significant advances in the prediction of driver drowsiness and workload have been made through the use of more sophisticated features of physiological signals, as well as through the application of increasingly sophisticated machine learning models, although extrapolation to the context of commercial pilots has not yet been attempted. Some approaches have been based on EDA signal decomposition into tonic and phasic components[66], the extraction of features in the time, frequency and time-frequency (wavelet-based) domains[67], or the use of signal-entropy-related features[68].
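The tonic/phasic decomposition cited above[66] is normally performed with dedicated deconvolution methods; purely as a rough illustration of the idea, a moving-average baseline can stand in for the tonic component, with the residual taken as the phasic component. Function and parameter names are assumptions.

```python
# Crude EDA decomposition sketch: moving-average baseline = "tonic",
# residual = "phasic". Real work uses deconvolution (e.g., cvxEDA-style
# methods); this window-based split is for illustration only.

def decompose_eda(signal, window=5):
    """Split an EDA trace into (tonic, phasic) lists of equal length."""
    half = window // 2
    tonic = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        tonic.append(sum(signal[lo:hi]) / (hi - lo))  # local average
    phasic = [s - t for s, t in zip(signal, tonic)]
    return tonic, phasic
```

By construction the two components sum back to the original signal, which is the defining property any decomposition of this kind must satisfy.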

Moreover, regarding machine-learning models, while the most widely used approach is the support vector machine, artificial neural networks, such as convolutional neural networks, seem to provide better performance for the detection of drowsiness and workload[69][70][71].

The combination of such approaches with multimodal data fusion has been shown to provide a very high degree of accuracy for drowsiness detection[72].

Such approaches are applicable to overcoming some of the current limitations in the detection of drowsiness and mental workload in pilots. For instance, the high accuracy achieved with only a subset of the signals suggests that various predictive models of drowsiness and workload could be trained on different subsets of features, thereby keeping the system useful even when some specific features are momentarily unavailable (e.g., due to occlusion of the eyes or head). Recent advances can also help in the implementation of detection systems with lower computational cost, such as efficient techniques for signal filtering[73] and feature-selection methods that reduce model dimensionality and complexity[74].
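The feature-selection step mentioned above can be illustrated with the simplest possible criterion: rank features by variance across samples and keep the top k. Real pipelines use richer criteria (e.g., mutual information or wrapper methods); the function and parameter names here are assumptions.

```python
# Minimal feature-selection sketch: keep the k highest-variance features.
# Illustrative only; real pipelines use stronger selection criteria.

import statistics

def top_k_features(samples, k):
    """samples: list of equal-length feature vectors.
    Returns the (sorted) indices of the k highest-variance features."""
    n_features = len(samples[0])
    variances = [statistics.pvariance([row[j] for row in samples])
                 for j in range(n_features)]
    ranked = sorted(range(n_features), key=lambda j: variances[j],
                    reverse=True)
    return sorted(ranked[:k])
```

A constant feature (zero variance) carries no discriminative information, so a criterion like this discards it first, shrinking the model's input dimensionality at negligible cost.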

3.2. Gesture Control Technology

Many technologies to control devices by gestures are already on the market. An extended, though not comprehensive, summary of them is presented below.

DEPTHSENSE CARLIB, by Sony, aims to control infotainment by hand movement[75].

EYEDRIVE GESTURE CONTROL by EyeLights is an infrared motion sensor that recognizes simple hand gestures while driving in order to control in-vehicle devices[76].

HAPTIX by Haptix Touch is a webcam-based environment to recognize any classical hand movement and build a virtual mouse to control screen interface[77].

KINECT by Microsoft is a web-cam based device that can capture motion and control devices with body or hand movements[78][79].

LEAP MOTION by Leap Motion Inc. (now UltraHaptics) is an environment for hand movement recognition dedicated to virtual reality. Movements are detected via infrared light, while micro-cameras detect the hands or other objects in 3D[80].

MYO BRACELET by Thalmic Labs proposes an armband to control interfaces with hand or finger movement detected via the electrical activities of activated muscles[74][81][82].

SOLI by Google comprises a mini-radar which is capable of identifying movements, from fingers to the whole body[83][84].

SWIPE by FIBARO is dedicated to home automation; it is controlled via hand motions in front of a simple, contactless tablet[85].

XPERIA TOUCH DEVICE by Sony is a smartphone application for gesture control which is capable of tracking proximate hand gesture via the phone camera[86].

Table 1 summarizes a Strengths, Weaknesses, Opportunities and Threats (SWOT) analysis of three of the systems described above (KINECT, LEAP MOTION and MYO BRACELET), developed starting from the results of similar studies[87].

Table 1. SWOT analysis of three gesture control technologies.

KINECT
  • Strengths: body motion identification; development kit
  • Weaknesses: operational difficulties in limited spaces; possible interference between movements and detection sensor
  • Opportunities: combining gesture control with facial or voice recognition for security purposes
  • Threats: undefined gesture control intrusion recovery process

LEAP MOTION
  • Strengths: hand and finger movement identification; low price; lightweight device; development kit
  • Weaknesses: deep training required; possible interference between movements and detection sensor
  • Opportunities: combining use of infrared light with cameras for security purposes
  • Threats: undefined gesture control intrusion recovery process

MYO BRACELET
  • Strengths: gesture detection only for the person wearing the bracelet; lightweight device; development kit
  • Weaknesses: limited number of identified movements; deep training required
  • Opportunities: combining physiological detection (e.g., heartbeat) with gesture control for security purposes
  • Threats: undefined gesture control intrusion recovery process


  1. Deng, Q. A General Simulation Framework for Modeling and Analysis of Heavy-Duty Vehicle Platooning. IEEE Trans. Intell. Transp. Syst. 2016, 17, 3252–3262.
  2. Daimler. Highway Pilot. The Autopilot for Trucks. 2020. Available online: (accessed on 28 January 2021).
  3. Tesla. Il Futuro Della Guida. 2020. Available online: (accessed on 28 January 2021).
  4. Maximilan, J. 2019. Available online: (accessed on 28 January 2021).
  5. Rekdalsbakken, W.; Styve, A. Simulation of Intelligent Ship Autopilots. In Proceedings of the 22nd European Conference on Modelling and Simulation, Nicosia, Cyprus, 3–6 June 2008.
  6. PSCHITT-Rail Collaborative. Hybrid, Intermodal Simulation Platform in Land Transport—Rail. Available online: (accessed on 28 January 2021).
  7. OKTAL SYDAC. Conception. 2020. Available online: (accessed on 28 January 2021).
  8. IFFSTAR. Institut Français des Sciences et Technologies des Transports, de l’Aménagement et des Réseaux. 2020. Available online: (accessed on 28 January 2021).
  9. NVIDIA DRIVE. Scalable AI Platform for Autonomous Driving. 2019. Available online: (accessed on 28 January 2021).
  10. Ansys. VRX Dynamic Driving Experience. 2020. Available online: (accessed on 28 January 2021).
  11. Ansys. Ansys VRXPERIENCE HMI. 2020. Available online: (accessed on 28 January 2021).
  12. Epagnoux, S. CAE Flight Simulator. 2020. Available online: (accessed on 28 January 2021).
  13. CAE. CAE 3000 Series Flight Simulator. 2020. Available online: (accessed on 28 January 2021).
  14. Alsim. Alsim Flight Training Solutions. Alsim Simulators & Technology. 2020. Available online: (accessed on 28 January 2021).
  15. Vanderhaegen, F.; Richard, P. MissRail: A platform dedicated to training and research in railway systems. In Proceedings of the International Conference HCII, Heraklion, Greece, 22–27 June 2014; pp. 544–549.
  16. Vanderhaegen, F. MissRail® and Innorail. 2015. Available online: (accessed on 28 January 2021).
  17. Vanderhaegen, F. Pedagogical learning supports based on human–systems inclusion applied to rail flow control. Cogn. Technol. Work 2019.
  18. Vanderhaegen, F.; Jimenez, V. The amazing human factors and their dissonances for autonomous Cyber-Physical & Human Systems. In Proceedings of the First IEEE Conference on Industrial Cyber-Physical Systems, Saint-Petersburg, Russia, 15–18 May 2018; pp. 597–602.
  19. Fond, G.; MacGregor, A.; Leboyer, M.; Michalsen, A. Fasting in mood disorders: Neurobiology and effectiveness. A review of the literature. Psychiatry Res. 2013, 209, 253–258.
  20. Smith, A. Effects of chewing gum on cognitive function, mood and physiology in stressed and non-stressed volunteers. Nutr. Neurosci. 2010, 13, 7–16.
  21. Onyper, S.V.; Carr, T.L.; Farrar, J.S.; Floyd, B.R. Cognitive advantages of chewing gum. Now you see them, now you don’t. Appetite 2011, 57, 321–328.
  22. Mori, F.; Naghsh, F.A.; Tezuka, T. The effect of music on the level of mental concentration and its temporal change. In Proceedings of the 6th International Conference on Computer Supported Education, Barcelona, Spain, 1–3 April 2014; pp. 34–42.
  23. Chtouroua, H.; Briki, W.; Aloui, A.; Driss, T.; Souissi, N.; Chaouachi, A. Relationship between music and sport performance: Toward a complex and dynamical perspective. Sci. Sports 2015, 30, 119–125.
  24. Stanton, N.A.; Young, M.S. Driver behaviour with adaptive cruise control. Ergonomics 2005, 48, 1294–1313.
  25. Schömig, N.; Hargutt, V.; Neukum, A.; Petermann Stock, I.; Othersen, I. The interaction between highly automated driving and the development of drowsiness. Procedia Manuf. 2015, 3, 6652–6659.
  26. Vogelpohl, T.; Kühn, M.; Hummel, T.; Vollrath, M. Asleep at the automated wheel: Sleepiness and fatigue during highly automated driving. Accid. Anal. Prev. 2019, 126, 70–84.
  27. Borghini, G.; Astolfi, L.; Vecchiato, G.; Mattia, D.; Babiloni, F. Measuring neurophysiological signals in aircraft pilots and car drivers for the assessment of mental workload, fatigue and drowsiness. Neurosci. Biobehav. Rev. 2014, 44, 58–75.
  28. Thomas, L.C.; Gast, C.; Grube, R.; Craig, K. Fatigue detection in commercial flight operations: Results using physiological measures. Procedia Manuf. 2015, 3, 2357–2364.
  29. Wanyan, X.; Zhuang, D.; Zhang, H. Improving pilot mental workload evaluation with combined measures. BioMed Mater. Eng. 2014, 24, 2283–2290.
Update Date: 04 Mar 2021