Technology facilitates human activity, improves productivity and leads to a better quality of life. Technological developments and automation in vehicular networks will lead to better road safety and lower congestion in present urban areas, where the traditional transport system is becoming increasingly disorganised and inefficient. Therefore, the concept of intelligent transport systems (ITS) has been proposed, with the aim of improving traffic safety and providing different services to road users. There has been considerable research in ITS, resulting in significant contributions.
1. System Architecture for Autonomous Vehicles
An ordinary vehicle can be converted into an autonomous one by adding components, including sensors, that allow the vehicle to make its own decisions by sensing the environment and controlling its mobility [1][2][3][4].
Figure 14 illustrates the overall communication process/protocol in AVs and also lists the sensors, actuators, hardware and software control required. The protocol architecture, explained below, is composed of four main stages and enables a Level 5 fully autonomous vehicle in which all users are passengers.
Figure 14.
System Architecture for AVs.
-
Perception: This stage involves sensing the AV's surroundings through various sensors and detecting its own position with respect to those surroundings. Some of the sensors used by the AV in this stage are RADAR, LiDAR, cameras, real-time kinematic (RTK) positioning, etc. The information from these sensors is then passed to the recognition modules, which process it. Generally, the AV consists of an adaptive detection and recognition framework (ADAF), a control system, a lane departure warning system (LDWS), traffic sign recognition (TSR), unknown obstacles recognition (UOR), a vehicle positioning and localisation (VPL) module, etc. This processed information is fused and passed to the decision and planning stage.
-
Decision and Planning: Utilising the data gathered in the perception process, this stage decides, plans and controls the motion and behaviour of the AV. This stage is analogous to the brain and makes decisions such as path planning, action prediction, obstacle avoidance, etc. Decisions are made based on current as well as past information, including real-time map information, traffic details and patterns, information supplied by the user, etc. There may also be a data log module that records errors and information for future reference.
-
Control: The control module receives information from the decision and planning module and performs functions/actions related to the physical control of the AV, such as steering, braking, accelerating, etc.
-
Chassis: The final stage includes the interface with the mechanical components mounted on the chassis such as the accelerator pedal motor, brake pedal motor, steering wheel motor and gear motor. All these components are signalled to and controlled by the control module.
After discussing the overall communication and sensor architecture of an AV, we discuss the design, functionality and utilisation of some main sensors.
1.1. Ultrasonic Sensors
These sensors use ultrasonic waves and operate in the range of 20–40 kHz [2]. The waves are generated by a magneto-resistive membrane and used to measure the distance to an object, calculated from the time-of-flight (ToF) between the emitted wave and the echoed signal. Ultrasonic sensors have a very limited range, generally less than 3 m [5]. The sensor output is updated only every 20 ms [5], which does not comply with the strict QoS constraints of an ITS. These sensors are directional and provide a very narrow beam detection range [2]; therefore, multiple sensors are needed to get a full-field view. However, multiple sensors influence each other and can cause extreme ranging errors [6]. The general solution is to provide a unique signature or identification code, which is used to discard the echoes of other ultrasonic sensors operating in the nearby range [7]. In AVs, these sensors are utilised to measure short distances at low speeds; for example, they are used for SPA and LDWS [8]. Moreover, these sensors work satisfactorily with any material (independent of color), in bad weather conditions and even in dusty environments.
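The ToF ranging principle described above can be sketched in a few lines; the 343 m/s speed of sound is an assumed room-temperature value and the function name is illustrative:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C (assumed operating condition)

def ultrasonic_distance(tof_s: float) -> float:
    """Distance to an obstacle from the round-trip time-of-flight of the echo."""
    # The pulse travels to the object and back, so the one-way distance is half the path.
    return SPEED_OF_SOUND * tof_s / 2.0

# A ~17.5 ms round trip corresponds to ~3 m, the upper end of the usable range.
d = ultrasonic_distance(0.0175)
```

The 20 ms update interval quoted above also follows from this physics: the sensor must wait for the farthest plausible echo before emitting the next pulse.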
1.2. RADAR: Radio Detection and Ranging
In AVs, RADARs are used to scan the surroundings to detect the presence and location of vehicles and objects. RADARs operate in the millimetre-wave (mm-Wave) spectrum and are typically used in military and civil applications such as airports or meteorological systems [9]. In modern vehicles, different frequency bands such as 24, 60, 77 and 79 GHz are employed, and they can measure a range from 5 to 200 m [10]. The distance between the AV and the object is calculated by measuring the ToF between the emitted signal and the received echo. In AVs, RADARs use an array of micro-antennas that generate a set of lobes to improve the range resolution as well as the detection of multiple targets [11]. mm-Wave RADAR has higher penetrability and a wider bandwidth, and it can accurately measure short-range targets in any direction utilising the variation in Doppler shift [9][10][11][12]. Due to their longer wavelength, mm-Wave radars have an anti-blocking and anti-pollution capability that allows them to cope with rain, snow, fog and low light. Furthermore, mm-Wave radars can measure relative velocity using the Doppler shift [13]. This ability makes them suitable for extensive AV applications such as obstacle detection [14], pedestrian recognition [15] and vehicle recognition [16]. Some applications of RADARs in AVs are forward cross traffic alert (FCTA), lane change assistance (LCA), blind spot detection (BSD), rear cross traffic alert (RCTA), etc. mm-Wave RADAR also has some disadvantages, such as a reduced field-of-view (FoV), lower precision and more false alarms caused by emitted signals bouncing off the surroundings [12].
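The Doppler-based relative-velocity measurement mentioned above follows from the monostatic radar relation f_d = 2·v·f_c/c. A minimal sketch, assuming a 77 GHz carrier (one of the bands listed above) and an illustrative function name:

```python
C = 3.0e8  # speed of light in m/s (free-space approximation)

def doppler_velocity(doppler_shift_hz: float, carrier_hz: float = 77e9) -> float:
    """Relative radial velocity of a target from the Doppler shift of its echo.

    For a monostatic radar the echo frequency is shifted by f_d = 2 * v * f_c / c,
    so the radial velocity is recovered as v = f_d * c / (2 * f_c).
    """
    return doppler_shift_hz * C / (2.0 * carrier_hz)

# A 10 kHz shift at 77 GHz corresponds to roughly 19.5 m/s (~70 km/h) closing speed.
v = doppler_velocity(10e3)
```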
1.3. LiDAR: Light Detection and Ranging
LiDAR utilises the 905 and 1550 nm spectra [17]. The 905 nm spectrum may cause retinal damage to the human eye; therefore, modern LiDAR operates in the 1550 nm spectrum to minimise retinal damage [18]. The maximum working distance of LiDAR is up to 200 m [13]. LiDAR can be categorised into 2D, 3D and solid-state LiDAR [19]. A 2D LiDAR uses a single laser beam diffused over a mirror that rotates at high speed. A 3D LiDAR can obtain a 3D image of the surroundings by locating multiple lasers on the pod [20]. At present, 3D LiDAR can produce reliable results with an accuracy of a few centimetres by integrating 4–128 lasers with a horizontal movement of 360 degrees and a vertical movement of 20–45 degrees [21]. Solid-state LiDAR uses a micro-electromechanical system (MEMS) circuit with micro-mirrors to synchronise the laser beam to scan the horizontal FoV several times. The laser light is diffused with the help of a micro-mirror to create the vertical projection of the object. The received signal is captured by a photo-detector, and the process repeats until the complete image of the object is created. LiDAR is used for positioning, obstacle detection and environmental reconstruction [13]. 3D LiDAR sensors are playing an increasingly significant role in the AV system [22]. As a result, LiDARs can be used for ACC, 2D or 3D maps and object identification and avoidance. A roadside LiDAR system has been shown to reduce vehicle-to-pedestrian (V2P) crashes both at intersections and in non-intersection areas [23]. In [23], a 16-line real-time computationally efficient LiDAR system is employed, and a deep auto-encoder artificial neural network (DA-ANN) is proposed which achieves an accuracy of 95% within a range of 30 m. In [24], a 64-line 3D LiDAR utilising a support vector machine (SVM)-based algorithm is shown to improve the detection of pedestrians.
Although LiDAR is superior to a mm-Wave radar in measurement accuracy and 3D perception, its performance suffers under severe weather conditions such as fog, snow and rain [25]. In addition, its detection range depends on the reflectivity of the object [26].
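To make the 2D LiDAR scanning principle above concrete, the sketch below converts one revolution of range readings (polar) into planar Cartesian points; the 0.25° angular step is an assumed resolution, not a value from the text:

```python
import math

def scan_to_points(ranges, angle_min=0.0, angle_increment=math.radians(0.25)):
    """Convert one sweep of 2D LiDAR range readings into (x, y) points.

    Each beam i is fired at angle_min + i * angle_increment; the rotating
    mirror makes the beam sweep the horizontal field of view.
    """
    points = []
    for i, r in enumerate(ranges):
        theta = angle_min + i * angle_increment
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# Three consecutive beams, each seeing an object 10 m away.
pts = scan_to_points([10.0, 10.0, 10.0])
```

A 3D LiDAR performs the same conversion per laser line, adding the elevation angle of each of its 4–128 lasers.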
1.4. Cameras
Cameras in AVs can be classified as either visible-light-based or infrared-based, depending upon the wavelength of the device. The camera uses image sensors built with one of two technologies: the charge-coupled device (CCD) and the complementary metal-oxide-semiconductor (CMOS) [18]. The maximum range of a camera is around 250 m, depending on the quality of the lens [13]. Visible-light (VIS) cameras use the same wavelength range as the human eye, i.e., 400–780 nm, which is divided into three bands: red, green and blue (RGB). To obtain stereoscopic vision, two VIS cameras with a known focal length are combined to generate a new channel with depth (D) information. Such a feature allows the camera (RGBD) to obtain a 3D image of the scene around the vehicle [27].
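The depth channel obtained from two VIS cameras rests on the standard stereo relation Z = f·B/d (focal length times baseline over pixel disparity). The sketch below applies it with hypothetical calibration values; the function name and numbers are illustrative:

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point seen by a calibrated stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        # Zero disparity means the point is at infinity (or matching failed).
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 700 px focal length, 0.5 m baseline, 35 px disparity -> 10 m depth.
z = stereo_depth(700.0, 0.5, 35.0)
```

Note the inverse relationship: distant points produce small disparities, which is why stereo depth accuracy degrades with range.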
The infrared (IR) camera uses passive sensors with a wavelength between 780 nm and 1 mm. IR sensors in AVs provide vision control under peak illumination. This camera assists AVs in BSD, side view control, accident recording and object identification [28]. Nevertheless, camera performance degrades in bad weather conditions such as snow, fog and momentary light variation [13].
The main advantages of a camera are that it can accurately gather and record the texture, color distribution and contour of the surroundings [13]. However, the angle of observation is limited due to the narrow view of the camera lens [29]. Therefore, multiple cameras have been adopted in AVs to monitor the surrounding environment [30][31]. A three-stage RGBD architecture using deep learning and convolutional neural networks was proposed by Ferraz et al. for vehicle and pedestrian detection [32]. However, this requires the AV to process a huge amount of data [13]. Currently, AVs do not possess such computational resources; therefore, computational offloading may be an appropriate solution [33].
Table 4 summarises the challenges of the discussed sensor technologies. It can be observed that the detection capability and reliability of the various sensors is limited in different environments. This limitation can be overcome, and the accuracy of target detection along with the reliability improved, through multi-sensor fusion. Radar–camera (RC) [15][34], camera–LiDAR (CL) [20][35], radar–LiDAR (RL) [36] and radar–camera–LiDAR (RCL) [37][38] combinations have been proposed, where different sensors are combined to improve the perception of the environment. Furthermore, in [2], three different sensor plans are developed based on range, cost and a balance function, each combining several different sensors. In Plan A, four cameras, a mm-Wave RADAR, 32- and 4-layer LiDARs and a GPS+IMU are employed. In Plan B, four cameras, three mm-Wave RADARs, a four-layer LiDAR and a GPS+IMU are utilised. Finally, in Plan C, two regular cameras, three mm-Wave RADARs, a surrounding camera and twelve ultrasonic sensor units are utilised.
Table 4.
Comparison of sensors and their challenges.
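As a minimal illustration of how multi-sensor fusion improves accuracy, independent range estimates (e.g., one from RADAR and one from LiDAR) can be combined by inverse-variance weighting, so the more reliable sensor dominates the fused value. All numbers below are hypothetical:

```python
def fuse_measurements(values, variances):
    """Inverse-variance weighted fusion of independent estimates of one quantity.

    Returns the fused value and its variance; the fused variance is always
    smaller than any single sensor's variance.
    """
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    fused = sum(w * x for w, x in zip(weights, values)) / total
    return fused, 1.0 / total

# RADAR reads 50.0 m (variance 0.25); LiDAR reads 50.4 m (variance 0.01).
# The fused estimate sits close to the more precise LiDAR reading.
fused, var = fuse_measurements([50.0, 50.4], [0.25, 0.01])
```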
1.5. GNSS, GPS and IMU: Global Navigation Satellite System, Global Positioning System and Inertial Measurement Unit
This technology determines the exact position of the AV and helps it navigate [39]. GNSS utilises a set of satellites orbiting the earth for localisation [40]. The system provides the AV's position, speed and the exact time [40]. It operates by calculating the ToF between the satellite-emitted signal and the receiver [41]. The AV position is usually extracted from Global Positioning System (GPS) coordinates. The coordinates extracted by GPS are not always accurate and usually introduce a position error with a mean value of 3 m and a standard deviation of 1 m [42]. The performance is further degraded in urban environments, where the position error can increase up to 20 m [43] and, in some extreme cases, to around 100 m [44]. In addition, the RTK system can be used in AVs to precisely calculate the position of the vehicle [45]. Furthermore, dead reckoning (DR) and inertial positioning can be used in AVs to determine the position and direction of the vehicle [46]. A technique known as odometry can measure the position of the vehicle by fixing rotary sensors to the wheels [39]. To make the AV capable of detecting slippage or lateral movements, the inertial measurement unit (IMU) is used; it detects these using accelerometer, gyroscope and magnetometer data. The IMU combined with all these units can rectify errors and increase the sampling speed of the measuring system. Although the IMU cannot bound the position error unless it is accompanied by a GNSS system, AVs can obtain information from different sources such as RADAR, LiDAR, IMU, GNSS, UWB and cameras to minimise the possibility of error and perform reliable position measurement [18]. GPS can be combined with IMU techniques such as DR and inertial positioning to confirm and improve the position estimate of the AV [47].
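The dead-reckoning technique mentioned above can be sketched as a single planar update step driven by wheel-odometry speed and gyroscope yaw rate. This is an illustrative simplification (constant speed and heading over the step, no slippage), not a production localiser:

```python
import math

def dead_reckon(x, y, heading_rad, speed_mps, yaw_rate_rps, dt):
    """One dead-reckoning step: integrate odometry speed and gyro yaw rate.

    Position is advanced along the current heading, then the heading is
    updated from the gyroscope's yaw rate. Errors accumulate over time,
    which is why DR must be periodically corrected by GNSS.
    """
    x += speed_mps * dt * math.cos(heading_rad)
    y += speed_mps * dt * math.sin(heading_rad)
    heading_rad += yaw_rate_rps * dt
    return x, y, heading_rad

# Drive straight east at 10 m/s for 1 s: x advances by 10 m.
x, y, h = dead_reckon(0.0, 0.0, 0.0, 10.0, 0.0, 1.0)
```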
1.6. Sensor Fusion
Real-time and accurate knowledge of vehicle position, state and other vehicle parameters such as weight, stability, velocity, etc. is important for vehicle handling and safety and thus needs to be acquired by AVs using various sensors [48]. Sensor fusion is the process of obtaining coherent information by combining the data obtained from different sensors [13]. It allows the synthesis of raw data obtained from complementary sources [49][50]. Therefore, sensor fusion allows the AV to precisely understand its surroundings by combining all the beneficial information obtained from different sensors [51]. The fusion process in AVs is carried out using different types of algorithms, such as Kalman filters and Bayesian filters. The Kalman filter is considered very important for a vehicle to drive independently because it is utilised in applications such as RADAR tracking, satellite navigation systems and visual odometry [52].
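As a sketch of why the Kalman filter suits such fusion tasks, the scalar measurement update below blends a predicted position with a GPS reading according to their variances; the numbers are illustrative, and a real filter would track a multi-dimensional state with a prediction step as well:

```python
def kalman_update(x_est, p_est, z, r):
    """Measurement update of a one-dimensional Kalman filter.

    x_est, p_est: predicted state and its variance; z, r: measurement and
    its variance. The gain weighs the two sources by their uncertainties.
    """
    k = p_est / (p_est + r)          # Kalman gain in [0, 1]
    x_new = x_est + k * (z - x_est)  # blend prediction and measurement
    p_new = (1.0 - k) * p_est        # variance shrinks after every update
    return x_new, p_new

# Prediction says 100.0 m (variance 4.0); GPS reads 103.0 m (variance 1.0).
# The update trusts the more certain GPS more: result 102.4 m, variance 0.8.
x, p = kalman_update(100.0, 4.0, 103.0, 1.0)
```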
2. Vehicular Ad-Hoc Networks (VANETs)
VANETs are an emerging sub-class of mobile ad-hoc networks capable of spontaneously creating a network of mobile devices/vehicles [53]. VANETs can be used for vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication [54][55]. The main purpose of this technology is to improve safety on the roads; for example, during hazardous conditions such as accidents and traffic jams, vehicles can communicate with each other and with the network to share vital information [56][57]. The main components of VANET technology are:
-
On-board unit (OBU): A GPS-based tracking device embedded in every vehicle, enabling vehicles to communicate with each other and with roadside units (RSUs) [55][57]. To retrieve vital information, the OBU is equipped with many electronic components, such as a resource command processor (RCP), sensor devices and user interfaces. Its main goal is to communicate with different RSUs and OBUs via a wireless link [54].
-
Roadside Unit (RSU): A computing unit fixed at specific locations on roads, parking areas and intersections [58]. Its main goal is to provide connectivity between autonomous vehicles and the infrastructure; it also assists in vehicle localisation [54][58]. It can also connect a vehicle with other RSUs using different network topologies [54]. RSUs have also been powered using ambient energy sources such as solar power [59].
-
Trusted Authority (TA): The authority that manages the entire VANET process, so that only valid RSUs and vehicle OBUs can register and communicate [60]. It provides security by verifying the OBU ID and authenticating the vehicle. It also detects malicious messages and suspicious behaviour [54].
VANETs have some unique properties which are very different from other ad-hoc technologies.
Vehicular communication, utilising VANETs, includes V2V communication, V2I communication and V2X communication, as illustrated in Figure 25. The details are given below.
Figure 25.
Vehicular communication (VC) system.
2.1. Vehicle-To-Vehicle (V2V) Communication
Also called inter-vehicle communication (IVC), it allows vehicles to communicate with each other and share necessary information about traffic congestion, accidents and speed limits [64]. V2V communication can form a network by connecting different nodes (vehicles) using a mesh (partial or full) topology [65]. Depending upon the number of hops used for inter-vehicle communication, systems are classified as single-hop (SIVC) or multi-hop (MIVC) [66]. SIVC can be used for short-range applications such as lane merging, ACC, etc., whereas MIVC can be used for long-range communication such as traffic monitoring. V2V communication enables several features such as BSD, FCWS, automatic emergency braking (AEB) and LDWS [64].
2.2. Vehicle-To-Infrastructure (V2I) Communication
Also known as roadside-to-vehicle communication (RVC), it allows vehicles to interact with RSUs. It helps to detect traffic lights, cameras, lane markers and parking meters [67]. The communication between vehicles and the infrastructure is ad-hoc, wireless and bidirectional [68]. The data collected from the infrastructure are used for traffic supervision and management, for example to set variable speed limits, allowing vehicles to maximise fuel efficiency as well as control the traffic flow [64]. Depending on the infrastructure, RVC systems can be divided into sparse RVC (SRVC) and ubiquitous RVC (URVC) [69]. An SRVC system provides communication services at hotspots only, for example to detect available parking spaces or gas stations, whereas a URVC system provides coverage throughout the road, even at high speeds. Therefore, the URVC system requires a large investment to ensure network coverage [69].
2.3. Vehicle-To-Everything (V2X) Communication
V2X communication allows the vehicle to communicate with other entities such as pedestrians (V2P), the roadside (V2R), devices (V2D) and the grid (V2G) [70]. This communication is used to prevent road accidents involving vulnerable pedestrians, cyclists and motorcyclists [71]. V2X communication enables the pedestrian collision warning (PCW) mechanism to alert the pedestrian before a serious accident takes place. The PCW can access the Bluetooth or Near Field Communication (NFC) interface of a smartphone and may use beacon stuffing to deliver critical messages to the pedestrian [64].
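A pedestrian collision warning of the kind described can be sketched as a time-to-collision (TTC) check on the V2P range and closing speed; the 4 s threshold and the function name are assumptions for illustration, not values from any standard:

```python
def should_warn(distance_m: float, closing_speed_mps: float,
                ttc_threshold_s: float = 4.0) -> bool:
    """Trigger a PCW alert when the time-to-collision drops below a threshold.

    TTC = distance / closing speed. A non-positive closing speed means the
    vehicle and pedestrian are not converging, so no warning is needed.
    """
    if closing_speed_mps <= 0:
        return False
    return distance_m / closing_speed_mps < ttc_threshold_s

# Pedestrian 30 m ahead while closing at 10 m/s -> TTC = 3 s, below 4 s: warn.
alert = should_warn(30.0, 10.0)
```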