With the concept of the Internet of Things, autonomous vehicles can provide higher driving efficiency, traffic safety, and freedom for the driver to perform other tasks. This paper first covers the enabling technologies involved as a vehicle moves out of parking, travels on the road, and parks at the destination. The development of autonomous vehicles relies on the data collected for deployment in actual road conditions. Research gaps and recommendations for autonomous intelligent vehicles are also included.
The Internet of Things (IoT) enables billions of intelligent devices with processing, sensing, and actuating capabilities to connect to the internet and facilitate the sharing and collaboration of data [1][2][3]. Such platforms can be applied to smart homes, warning systems, smart cities, threat identification systems, the automotive field, and mobility [1][2][4]. In the automotive field, IoT allows the development of a vehicle capable of driving autonomously to provide higher driving efficiency, traffic safety, and freedom for the driver to perform other tasks [5][6][7][8]. The driving process can be described as a series of acceleration changes, direction changes, lane changes, and light changes [9]. To drive without human intervention, an autonomous vehicle should consider the overall situation [10], which requires five primary functions: localization, perception, planning, vehicle control, and system management [11].
The localization module is responsible for estimating vehicle position, and the perception module creates a model of the driving environment from fused multisensor information. Based on localization and perception information, the planning module then determines the vehicle’s maneuvers for safe navigation. The vehicle control module follows the planning module’s desired commands by controlling the steering, acceleration, and braking. Finally, the system management module supervises the overall autonomous driving system [11]. However, the process becomes much more sophisticated when there are more elements to consider, such as other vehicles, pedestrians, or cyclists on the road. Therefore, to enable communication between an autonomous vehicle and other road elements, a vehicle communication system becomes an indispensable and critical component of an autonomous vehicle. Such a communication system is commonly known as vehicle-to-everything (V2X) communication, which includes several scenarios such as vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), vehicle-to-pedestrian (V2P), and vehicle-to-network (V2N) [12][13].
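To make the division of responsibilities concrete, the sketch below wires the five modules into a single driving cycle. It is a minimal illustration under simplified assumptions: all class, function, and field names (Pose, localization, the frame dictionary, and so on) are hypothetical placeholders and do not refer to any particular system’s API.

```python
# Minimal sketch of the five-module pipeline; names and data formats are illustrative.
from dataclasses import dataclass

@dataclass
class Pose:
    x: float        # m, in a map frame
    y: float        # m
    heading: float  # rad

def localization(sensor_data: dict) -> Pose:
    # Placeholder: estimate vehicle position from GNSS/IMU/LiDAR data.
    return Pose(*sensor_data.get("gnss", (0.0, 0.0, 0.0)))

def perception(sensor_data: dict) -> list:
    # Placeholder: fuse multisensor data into a list of detected objects.
    return sensor_data.get("objects", [])

def planning(pose: Pose, objects: list) -> dict:
    # Placeholder: choose a maneuver based on the environment model.
    return {"target_speed": 0.0 if objects else 13.9, "steering": 0.0}

def vehicle_control(plan: dict) -> dict:
    # Placeholder: translate the plan into actuator commands.
    return {"throttle": 0.1 if plan["target_speed"] > 0 else 0.0,
            "brake": 0.0 if plan["target_speed"] > 0 else 1.0,
            "steering": plan["steering"]}

def system_management(commands: dict) -> None:
    # Placeholder: supervise the overall system (logging, health checks).
    print("commands:", commands)

# One cycle of the driving loop.
frame = {"gnss": (10.0, 5.0, 0.02), "objects": []}
system_management(vehicle_control(planning(localization(frame), perception(frame))))
```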
V2V consists of multiple vehicles in which any two vehicles can communicate with each other [14][15], allowing the driver to be aware of other cars’ speed and acceleration or deceleration so that accidents such as collisions and road departures can be avoided [16]. In contrast, V2I communication allows the vehicle to access the roadside infrastructure for vast-area information dissemination [17]. Services include infotainment delivery, as well as safety-related information such as speed limits, safe distance warnings, lane-keeping support, intersection safety, traffic jam warnings, and accident warnings [18]. The idea of V2P is the exchange of information between a vehicle and a pedestrian through sensors and smart devices to avoid collisions [19][20][21]. Finally, V2N puts vehicular user equipment in communication with a server that provides centralized control and traffic, road, and service information [22]. Therefore, the use of V2X communications coupled with existing vehicle-sensing capabilities provides the fundamentals for advanced applications targeted toward road safety, passenger infotainment, manufacturer services, and vehicle traffic optimization [23][24].
The development of such systems will ultimately rely on the data collected from actual interaction if they are to be effective when deployed in real-life situations [25]. For example, machine vision uses image processing to monitor rear vehicles [26] and to analyze the trajectories of surrounding vehicles [27]. Furthermore, historical data are used to determine the optimal control parameters for maximal fuel efficiency and savings [28]. Without full vehicle autonomy, data collected by in-car sensors can also be used to analyze driver behavior, which reduces the likelihood of impaired or drowsy driving [29].
Several works have surveyed V2X communication with a focus on network connectivity and security [30][31][32][33][34]. Reviews that focus on other areas related to autonomous vehicles have also been presented. Siegel et al. [29] summarized the state of the art of connected vehicles in terms of the need for vehicle data, applications, enabling technologies, and challenges. Chen and Englund [35] reviewed cooperative intersection management, where road users, infrastructure, and traffic control centers can communicate and coordinate traffic safely and efficiently. Their paper includes methods for both signalized and nonsignalized intersections, with emphasis on nonsignalized intersections. Dixit et al. [36] provided a review in the area of autonomous overtaking. The authors showed that the two essential aspects of high-speed overtaking are vehicle dynamics and environmental constraints, as well as accurate knowledge of the environment and surrounding obstacles. Bresson et al. [37] provided a survey on localization techniques using onboard sensing systems and their combinations with V2V and V2I systems for autonomous vehicles and investigated their applicability. The authors of [38][39][40] published reviews on vehicular cloud computing, which is an extension of mobile cloud computing based on vehicular networks, and also reviewed several proposed cloud computing schemes [41][42][43][44][45]. Bousselham et al. [38] focused on applications, cloud formations, key management, intercloud communication systems, and aspects of privacy and security issues. Mekki et al. [39] focused on the challenges of vehicular cloud networks. Finally, Boukerche and De Grande [40] described solutions for vehicular clouds, featuring applications, services, and traffic models that can enable vehicular clouds in a more dynamic environment.
Prior to fully autonomous driving, the technology of advanced driver-assistance systems (ADASs) is briefly discussed. ADASs can help with monitoring, braking, warning, and parking-assistance tasks to enhance safety conditions on the road. Connected technologies such as V2X and V2I, streetlights, and traffic information, together with ADASs, can create a safer road for drivers and pedestrians. As the benefits of ADASs continue to grow, governments may soon require vehicles to be equipped with essential ADAS components. It is necessary to highlight that the ADASs discussed here refer to technology that assists the driver during driving rather than an autonomously driven car. Current driver-assistance systems are gradually being equipped with more advanced technology. Most systems aim to provide parking assistance, forward collision warnings, lane-departure warnings, adaptive cruise control, and driver drowsiness detection [46].
Parking assistance systems aim to provide safe and comfortable backward parking. These systems work by reading data such as the magnitude of steering wheel rotation, speed, and lateral acceleration from the vehicle’s electronic stability control system to generate the backward parking trajectory. The rear camera then captures the vehicle’s rear view, and the trajectory is integrated into the view and displayed on the in-car monitor. Such a reference shows the driver where the vehicle is heading and helps prevent a crash while reversing.
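As an illustration of how the overlaid guidance trajectory can be generated, the sketch below predicts the reversing path implied by the current steering wheel position using a kinematic bicycle model; the function name, the assumed wheelbase, and the mapping from steering angle to path are illustrative assumptions rather than a production algorithm.

```python
import math

def reverse_path(steering_angle_deg, wheelbase=2.7, step=0.1, length=5.0):
    """Predict the reversing path implied by the current steering angle,
    using a kinematic bicycle model (assumed here for illustration).
    Returns (x, y) points in the vehicle frame, with +x pointing backward."""
    delta = math.radians(steering_angle_deg)
    x = y = heading = 0.0
    points = [(x, y)]
    for _ in range(int(length / step)):
        x += step * math.cos(heading)
        y += step * math.sin(heading)
        heading += step * math.tan(delta) / wheelbase  # constant-curvature arc
        points.append((x, y))
    return points

# The resulting points would be projected onto the rear-camera image and drawn
# as the guidance trajectory; projection details depend on camera calibration.
print(reverse_path(15.0)[:3])
```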
Forward collision avoidance systems are designed to provide the driver with a visual and audible warning when they are too close to a vehicle ahead [47]. In general, such systems monitor the vehicle’s speed and the speed of the vehicle in front of it while measuring the distance between the two vehicles to analyze if there is a risk of collision [48]. Monitoring can be achieved using vision-based sensors, GPS, radar, or LiDAR [49].
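The risk analysis described above can be sketched as a time-to-collision check, assuming the speeds and the inter-vehicle gap have already been measured by radar, LiDAR, GPS, or vision. The function name and the 2.5 s threshold below are illustrative assumptions, not standardized values.

```python
def forward_collision_warning(ego_speed, lead_speed, gap, ttc_threshold=2.5):
    """Return True if a warning should be raised.
    ego_speed and lead_speed in m/s; gap in m. The 2.5 s time-to-collision
    threshold is an illustrative assumption."""
    closing_speed = ego_speed - lead_speed
    if closing_speed <= 0:          # not closing in on the lead vehicle
        return False
    time_to_collision = gap / closing_speed
    return time_to_collision < ttc_threshold

print(forward_collision_warning(ego_speed=25.0, lead_speed=20.0, gap=10.0))  # True
```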
Current technology for lane changing focuses primarily on blind-spot identification and warning [50]. The lane-change assistant recognizes vehicles in the blind spot and warns the driver when changing lanes [51]. A lane-departure warning system usually estimates the vehicle’s relative position on the road using a camera to track road markings. The system then provides an alert to prevent an unintended lane departure using audible, visual, or haptic steering wheel feedback [52][53][54]. Such systems have already been rolled out in several commercial vehicles by Volvo, Mercedes, Audi, BMW, Nissan, and Honda [52][53].
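A minimal lane-departure warning rule, assuming the camera-based lane tracker already reports the vehicle’s lateral offset and drift rate relative to the lane center, might look as follows; the one-second prediction horizon, lane half-width, and turn-signal suppression are illustrative assumptions.

```python
def lane_departure_warning(lateral_offset, lateral_rate, half_lane_width=1.75,
                           time_horizon=1.0, turn_signal_on=False):
    """Warn when the predicted lateral position crosses the lane boundary.
    lateral_offset (m, + toward the left marking) and lateral_rate (m/s) are
    assumed to come from camera-based lane-marking tracking; all thresholds
    are illustrative assumptions."""
    if turn_signal_on:                      # intentional lane change: no warning
        return False
    predicted_offset = lateral_offset + lateral_rate * time_horizon
    return abs(predicted_offset) > half_lane_width

print(lane_departure_warning(lateral_offset=1.5, lateral_rate=0.4))  # True
```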
Adaptive cruise control (ACC) is an enhancement of the traditional cruise control (CC) system that improves driver convenience, reduces driver workload, and has the potential to improve vehicle safety [55]. Current ACC systems are intelligent systems that control a vehicle’s acceleration and deceleration to maintain pace with the preceding car or travel at the desired speed [55][56]. Such systems are achieved with the data collected from onboard sensors such as an infrared laser, radar, and video sensors [57].
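The following sketch illustrates a simple ACC policy of this kind: track the set speed when no preceding car is detected, otherwise regulate a constant time gap to it. The gains, the 1.8 s time gap, and the acceleration limits are illustrative assumptions, not values from any production system.

```python
def acc_acceleration(ego_speed, set_speed, gap=None, lead_speed=None,
                     time_gap=1.8, k_gap=0.3, k_speed=0.5, a_max=2.0):
    """Commanded acceleration (m/s^2). Track the set speed on a free road,
    otherwise hold a constant time gap to the preceding car. Gains and
    limits are illustrative assumptions."""
    if gap is None or lead_speed is None:          # free road: plain cruise control
        accel = k_speed * (set_speed - ego_speed)
    else:                                          # follow mode: constant time-gap policy
        desired_gap = time_gap * ego_speed
        accel = k_gap * (gap - desired_gap) + k_speed * (lead_speed - ego_speed)
    return max(-a_max, min(a_max, accel))

print(acc_acceleration(ego_speed=27.0, set_speed=30.0))                              # cruise
print(acc_acceleration(ego_speed=27.0, set_speed=30.0, gap=30.0, lead_speed=25.0))   # follow/brake
```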
Abnormal driving is usually caused by fatigue, recklessness, and/or drunkenness [58]. A driver with any of these conditions usually exhibits a specific change in behavior or body movement. When drivers are drowsy, they will usually perform actions such as rapid and constant blinking, nodding or swinging their head, and frequent yawning [59]. On the other hand, a driver intoxicated by alcohol usually exhibits sudden acceleration or deceleration and delayed responses. Reckless driving is similar to drunk driving to a certain degree: the driver may be awake but affected by emotional factors, thereby exhibiting sudden acceleration or deceleration and violating the speed limit [58]. Therefore, a driver monitoring system can monitor the driver directly or indirectly. Direct driver monitoring includes monitoring heart rate and driver body movements using different sensors. Indirect driver monitoring includes analyzing pedal and steering activities and reactions to certain events [60][61]. Upon detecting such abnormal behavior, a warning system is activated.
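As a sketch of direct monitoring, the function below flags possible drowsiness from a per-frame eye-closure signal (a PERCLOS-style fraction of closed-eye frames) and a yawning rate; the input format and both thresholds are illustrative assumptions.

```python
def drowsiness_detected(eye_closed_flags, yawns_per_minute,
                        perclos_threshold=0.3, yawn_threshold=3):
    """Flag possible drowsiness from direct monitoring signals.
    eye_closed_flags is a per-frame list of booleans from a camera-based eye
    tracker (assumed input); PERCLOS is the fraction of frames with closed
    eyes. Thresholds are illustrative assumptions."""
    perclos = sum(eye_closed_flags) / max(len(eye_closed_flags), 1)
    return perclos > perclos_threshold or yawns_per_minute >= yawn_threshold

# Many recent closed-eye frames plus frequent yawning -> warn the driver.
print(drowsiness_detected([True, True, False, True, False] * 20, yawns_per_minute=4))
```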
The designated purpose of a vehicle is to provide transportation from Point A to Point B. Therefore, our review will follow the same sequence of a vehicle moving out of a parking lot/garage, traveling on the road, and parking at the destination.
In the Internet of Vehicles environment, the problem of finding an available parking lot can be solved [62]. Such a futuristic parking system provides a car park recommendation for the driver while taking the driver’s preferences into account [63][64][65]. Such preferences may include parking fees, the distance from the car park to the destination, the driving time from the current location to the destination, and reservation reliability [64]. Supporting such a system requires a cooperative network of vehicles, parking lots, and a central server. The car park first needs to provide the correct occupancy status with the precise location and then update it on the central server. Once a driver makes a reservation in a particular car park, the information must be updated on the central server. Another possible solution is to book the parking lot in advance through the short message service (SMS) [66] or an Android application [67].
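A minimal version of the recommendation step, assuming the central server already supplies each car park’s occupancy, fee, distances, and reservation reliability, could rank candidates with a weighted cost such as the one below; all field names and weights are illustrative assumptions.

```python
def recommend_car_park(car_parks, weights=(0.4, 0.3, 0.2, 0.1)):
    """Rank candidate car parks by a weighted cost over the driver preferences
    mentioned above (fee, walking distance, driving time, reservation
    reliability). Field names and weights are illustrative assumptions; live
    occupancy and reservations would come from the central server."""
    w_fee, w_walk, w_drive, w_rel = weights

    def cost(p):
        return (w_fee * p["fee"] + w_walk * p["walk_distance_km"]
                + w_drive * p["drive_time_min"] - w_rel * p["reliability"])

    available = [p for p in car_parks if p["free_lots"] > 0]
    return min(available, key=cost) if available else None

car_parks = [
    {"name": "A", "fee": 3.0, "walk_distance_km": 0.2, "drive_time_min": 5,
     "reliability": 0.9, "free_lots": 12},
    {"name": "B", "fee": 1.5, "walk_distance_km": 0.8, "drive_time_min": 8,
     "reliability": 0.7, "free_lots": 0},
]
print(recommend_car_park(car_parks)["name"])  # "A"
```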
For an accurate occupancy status, several authors [68][69][70][71][72] have proposed using image processing techniques as opposed to counter-based or sensor-based techniques, which are relatively higher in cost [73]. Sensors deployed to detect occupancy include passive and active infrared sensors, ultrasonic sensors, magnetometers, and microwave sensors [74]; they are usually embedded in parking lots to detect the presence of a vehicle. The idea of the image processing technique is to first capture an image of the parking lot using a camera installed in the car park, then use an image processing algorithm to extract or enhance features in the image, and finally use a classifier to determine the occupancy status. Such a system brings the benefits of saving fuel and time, as cruising while waiting for an empty lot is reduced, improving the traffic flow in the car park [75][76]. Furthermore, conflicts between drivers over the rights to a parking lot can be reduced [77].
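The capture-extract-classify pipeline can be sketched as follows. A simple gradient-energy feature with a fixed threshold stands in for the feature-extraction and classifier stages; a deployed system would typically train a proper classifier, and the region-of-interest format and threshold here are illustrative assumptions.

```python
import numpy as np

def occupancy_status(frame, lot_rois, edge_threshold=12.0):
    """Classify each parking-lot ROI as occupied or empty from a grayscale
    car-park image. Mean gradient magnitude is used as a stand-in feature:
    vehicles add edges and texture compared with flat asphalt."""
    status = {}
    for lot_id, (r0, r1, c0, c1) in lot_rois.items():
        patch = frame[r0:r1, c0:c1].astype(float)
        gy, gx = np.gradient(patch)
        energy = float(np.mean(np.hypot(gx, gy)))
        status[lot_id] = "occupied" if energy > edge_threshold else "empty"
    return status

# Synthetic example: lot 1 is textured (a "vehicle"), lot 2 is flat asphalt.
frame = np.zeros((100, 200))
frame[10:50, 10:90] = np.random.randint(0, 255, (40, 80))
print(occupancy_status(frame, {1: (10, 50, 10, 90), 2: (10, 50, 110, 190)}))
```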
While traveling on the road, there are several actions that an autonomous vehicle must perform, such as lane keeping, lane changing, overtaking, and obeying traffic rules. For fully autonomous driving, lane keeping is the evolution of the lane-departure warning system. While the idea of detecting whether a vehicle has drifted into another lane unintentionally is the same for both systems, the lane-keeping system corrects the vehicle’s direction to keep it within its lane [78][79]. Early systems corrected the vehicle’s direction by differential braking, whereas current systems actively control steering to maintain lane centering [78], taking the dynamic and kinematic models of the vehicle into consideration to control its lateral motion [79][80][81][82][83][84]. Such a system is a great tool to prevent off-the-road crashes [85]. Studies have shown a significant reduction in driver injury crashes for cars equipped with such a system compared to cars without it.
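A minimal lane-centering correction, assuming camera-derived lateral offset and heading error as inputs, is sketched below; the proportional gains, sign conventions, and steering limit are illustrative assumptions rather than a tuned production controller.

```python
def lane_keeping_steering(lateral_offset, heading_error, k_offset=0.2,
                          k_heading=0.8, max_steer=0.17):
    """Steering correction (rad, + = left) for lane centering.
    Assumed inputs from camera-based lane tracking: lateral_offset in m
    (+ when the vehicle sits left of the lane center) and heading_error in rad
    (+ when the vehicle points left of the lane direction). Gains and the
    ~10 deg steering limit are illustrative assumptions."""
    steer = -(k_offset * lateral_offset + k_heading * heading_error)
    return max(-max_steer, min(max_steer, steer))

# Vehicle drifting left of center and angled slightly left -> steer right (negative).
print(lane_keeping_steering(lateral_offset=0.4, heading_error=0.03))
```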
A lane change is described as a maneuver that involves a deliberate and substantial shift in lateral position while traveling in the same direction, and it is associated with simple lane change, merge, exit, pass, and weave maneuvers [86]. Such actions require a suitable time gap to avoid a frontal collision [87]. A lane change is usually carried out for one of two reasons: (1) to travel in a faster lane or (2) to avoid an obstacle. An autonomous vehicle, other than assessing the lane change risk by checking surrounding vehicles, should control the vehicle to complete the intended lane change if there is no risk involved [88]. Information such as the position, velocity, and acceleration of surrounding vehicles needed for the risk analysis can be obtained from computer vision systems, such as those used in lane-departure warning systems, from a precise measurement system (inertial navigation aided by a global navigation satellite system (GNSS) in conjunction with high-resolution maps) [89], or from a V2V communication system [90]. For normal lane changing, the collision avoidance problem is formulated as an optimization problem in which the time for completing the lane-changing maneuver is minimized by selecting the appropriate lateral and longitudinal control inputs. After collecting this information, the vehicle computes the trajectory of each neighboring vehicle and then compares it with its own trajectory to determine whether any adjacent vehicle poses a safety risk [89]. In an emergency lane-changing situation such as obstacle avoidance, the lateral and longitudinal control inputs are calculated to minimize the longitudinal distance between the vehicle and the obstacle [91]. In such a scenario, the geometric characteristics of obstacles are also taken into consideration when planning the trajectory [92]. Finally, steering control performs the maneuver by following the planned course as closely as possible [92][93]. Such a system can potentially reduce human errors such as inaccurate estimation of the surrounding traffic or illegal maneuvers.
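The trajectory-comparison step of the risk analysis can be sketched as a gap-acceptance check against the lead and following vehicles in the target lane, using constant-acceleration prediction over a short horizon. The state format, horizon, and safety margin below are illustrative assumptions.

```python
def lane_change_is_safe(ego, lead_target, follow_target, horizon=5.0, margin=10.0):
    """Check whether a lane change is safe against the lead and following
    vehicles in the target lane. Each vehicle is (position m, speed m/s,
    acceleration m/s^2) along the road; the 5 s horizon and 10 m margin are
    illustrative assumptions. Constant-acceleration prediction stands in for
    the trajectory comparison described above."""
    def position(state, t):
        s, v, a = state
        return s + v * t + 0.5 * a * t * t

    for t in (0.0, horizon / 2, horizon):
        ego_s = position(ego, t)
        if position(lead_target, t) - ego_s < margin:      # too close to the lead
            return False
        if ego_s - position(follow_target, t) < margin:    # too close to the follower
            return False
    return True

# Ego at 0 m doing 25 m/s; target-lane lead 40 m ahead at 27 m/s; follower 30 m behind at 24 m/s.
print(lane_change_is_safe((0, 25, 0), (40, 27, 0), (-30, 24, 0)))  # True
```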
The autonomous overtaking procedure consists of a combination of lane keeping and lane changing. Three consecutive maneuvers, which have to be planned and coordinated, could begin with a lane change, followed by traveling on a straight path (lane keeping) parallel to the vehicle to be overtaken, and then another lane change [94]. This procedure can be divided into two stages. The procedure begins with checking whether overtaking can be carried out [95]. Essential factors to consider when performing the maneuvers include the following: a safe distance to the vehicle to be overtaken; an adequate period for each lane-change maneuver, accounting for varying road widths; a smooth and comfortable lane-change trajectory; and safely returning to the original lane or maintaining a safe distance from a vehicle ahead when the overtaking maneuver cannot be executed [96]. Information about surrounding vehicles can be obtained as discussed earlier. The second stage is performing the maneuver. The steering control can be achieved through an infrastructure-supported or autonomous approach. The infrastructure-supported system is based on physically or virtually marked trajectories, usually together with V2V communication. In the autonomous overtaking process, only onboard sensors are used to determine the relative position and orientation between vehicles. The vehicle steering control is determined according to the relative position and orientation with respect to the vehicle being overtaken. Thus, the overtaking vehicle accomplishes the maneuver with respect to the overtaken vehicle instead of the road. Such a system can reduce numerous fatal crashes caused by unsafe diversion space from the original lane, poor visibility when passing a vehicle, or erroneous judgment in returning to the lane [96].
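The first stage, deciding whether overtaking can be started, can be sketched as a comparison between the time the ego vehicle would spend in the opposite lane and the time until an oncoming vehicle arrives; the pass distance, safety margin, and scenario values below are illustrative assumptions for a two-lane road.

```python
def overtaking_is_feasible(ego_speed, slow_speed, oncoming_gap, oncoming_speed,
                           pass_distance=60.0, safety_margin=3.0):
    """First stage of the overtaking procedure: decide whether the maneuver
    can be started. pass_distance approximates the road length needed for the
    two lane changes plus the parallel lane-keeping segment; all numbers are
    illustrative assumptions."""
    relative_speed = ego_speed - slow_speed
    if relative_speed <= 0:
        return False                                   # cannot pass a faster vehicle
    time_to_pass = pass_distance / relative_speed      # time spent in the opposite lane
    # Time until the oncoming vehicle reaches the ego vehicle's position.
    time_of_oncoming = oncoming_gap / (ego_speed + oncoming_speed)
    return time_of_oncoming > time_to_pass + safety_margin

# 25 m/s ego passing a 20 m/s truck with an oncoming car 900 m away at 25 m/s.
print(overtaking_is_feasible(25.0, 20.0, 900.0, 25.0))  # True
```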
By using V2I communication and positioning technology, the traffic controller/roadside infrastructure has real-time awareness of the number of vehicles, their positions, and their speeds [97]. The autonomous vehicle communicates with the traffic controller/roadside infrastructure, which provides right-of-way information [97] and allows the planning of a safe trajectory across the intersection [98]. A traffic controller communicates with individual vehicles and assigns specific time slots to pass the intersection after real-time information processing [99]. The controller functions as a virtual traffic light that can change at an arbitrarily high frequency [99], enabling intelligent traffic signal phase setting, unlike the current traffic light setting. With the current setting, vehicles are not allowed to cross the intersection if the light corresponding to their lane is red, even in the absence of conflicting vehicles [100]. This causes delay, fuel wastage, tailpipe emissions, and passenger frustration [101][102]. At nonsignalized intersections, where there is no traffic light or other controlling facility, vehicles first communicate with each other through V2V communication and negotiate intersection passing. Two categories of solutions can be adopted to manage such intersections: cooperative resource reservation and trajectory planning approaches. The former focuses on scheduling the space tiles and time slots requested by vehicles intending to cross the intersection, whereas the latter focuses on the relative motion between vehicles to determine a safe crossing sequence [103].
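A minimal version of the time-slot assignment performed by such a controller is sketched below using a first-come-first-served policy over conflict zones; the request format, the scheduling policy, and the 2 s slot length are illustrative assumptions rather than any standardized protocol.

```python
def assign_crossing_slots(requests, slot_length=2.0):
    """First-come-first-served time-slot assignment by an intersection
    controller. Each request is (vehicle_id, earliest_arrival_s, conflict_zone);
    vehicles sharing a conflict zone must use disjoint slots. The policy and
    the 2 s slot length are illustrative assumptions."""
    zone_free_at = {}            # conflict zone -> time it becomes free
    schedule = {}
    for vehicle_id, arrival, zone in sorted(requests, key=lambda r: r[1]):
        start = max(arrival, zone_free_at.get(zone, 0.0))
        schedule[vehicle_id] = (start, start + slot_length)
        zone_free_at[zone] = start + slot_length
    return schedule

requests = [("car1", 4.0, "N-S"), ("car2", 4.5, "N-S"), ("car3", 4.2, "E-W")]
print(assign_crossing_slots(requests))
# car1 crosses 4.0-6.0 s, car2 waits until 6.0 s, car3 crosses 4.2-6.2 s in parallel.
```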
Several of the functions discussed earlier require an accurate localization system on the autonomous vehicle. As classified by Kuutti et al. [6], localization techniques can be categorized into the mapping approach, the sensor-based approach, and cooperative localization techniques. The first method localizes the vehicle with respect to a reference global or local map. The sensor-based approach utilizes onboard vehicle sensors to find the global position of the vehicle in a specified coordinate system; the primary sensors involved are GPS, inertial measurement units (IMUs), cameras, radar, light detection and ranging (LiDAR), and ultrasonic sensors. In the last method, localization is accomplished by vehicles broadcasting information about their current state. Through V2V and/or V2I communications, the vehicle can know the exact locations of surrounding vehicles.
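As a sketch of the sensor-based approach, the function below blends a dead-reckoned IMU position with a GNSS fix in a complementary-filter style; real systems typically run a Kalman filter over a full vehicle state, and the fixed blending weight here is an illustrative assumption.

```python
def fuse_position(imu_position, gnss_position, gnss_weight=0.2):
    """One step of a complementary-filter style fusion of a dead-reckoned IMU
    position with a GNSS fix, as a stand-in for the sensor-based localization
    approach above. The fixed blending weight is an illustrative assumption."""
    return tuple((1 - gnss_weight) * p_imu + gnss_weight * p_gnss
                 for p_imu, p_gnss in zip(imu_position, gnss_position))

# Dead reckoning has drifted ~1.5 m east of the latest GNSS fix.
print(fuse_position(imu_position=(101.5, 50.2), gnss_position=(100.0, 50.0)))
```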
If a reservation is not made in advance, the autonomous vehicle assists by using its installed sensors to scan for an empty parking lot, preventing the driver from driving past one. Barnes et al. [104] categorized empty-lot detection into seven methods: (1) ultrasonic sensor-based, (2) short-range radar sensor-based, (3) image processing, (4) motion stereo-based, (5) binocular stereo vision-based, (6) light stripe projection-based, and (7) scanning laser radar-based. Ultrasonic and radar sensors work in the same way: they measure the distance between the sensor and the target object and use the collected data to determine the occupancy status [104][105][106][107]. The concept of image processing is similar to the car park monitoring discussed earlier; the only difference is that the camera is installed on the vehicle body. Stereo-based methods first construct a 3D map [108][109] and then designate the target position. The light stripe projection-based method recovers 3D information by analyzing the light stripe produced by a light plane projector and reflected from objects [110].
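The ultrasonic (or short-range radar) method can be sketched as scanning side-facing range readings collected while driving past a row of perpendicular lots and flagging spans that are both deep and wide enough to be empty; all thresholds and the sampling assumption below are illustrative.

```python
def detect_empty_lot(range_readings, lot_depth=4.5, min_clear=0.8, min_width_samples=5):
    """Scan side-facing ultrasonic range readings (m) taken while driving past
    a row of perpendicular lots and return (start, end) index spans that look
    like empty lots. Thresholds and the sampling density (roughly one reading
    per ~0.5 m of travel) are illustrative assumptions."""
    empty_spans, run = [], 0
    for i, r in enumerate(range_readings):
        if r >= lot_depth - min_clear:       # nothing parked within the lot depth
            run += 1
        else:
            if run >= min_width_samples:
                empty_spans.append((i - run, i - 1))
            run = 0
    if run >= min_width_samples:
        empty_spans.append((len(range_readings) - run, len(range_readings) - 1))
    return empty_spans

# Readings: a parked car (~1 m), an empty lot (~5 m for 6 samples), another car.
print(detect_empty_lot([1.1, 1.0, 5.0, 5.1, 4.9, 5.2, 5.0, 5.1, 0.9, 1.0]))  # [(2, 7)]
```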
Finally, after identifying an empty parking lot, the autonomous vehicle completes the final task by maneuvering itself into the lot. Although there are several different types of parking lots, such as parallel, perpendicular, and fishbone parking [111][112], autonomous parking usually consists of two steps: (1) optimal path planning and (2) path following/tracking [112][113][114]. Using data collected from a combination of sensors, the optimal path planning algorithm begins by generating a suitable collision-free path from a given starting point to the required position within the parking lot that satisfies all kinematic constraints. The parking control scheme then adopts a step-by-step control strategy that compares the current position with the required position to choose the steering action at the current position [115].
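The path-following step can be sketched as repeatedly comparing the current pose with the next required pose on the planned path and choosing a steering angle; a pure-pursuit-style rule is assumed here purely for illustration, and the wheelbase, steering limit, and pose format are illustrative assumptions.

```python
import math

def parking_steering_step(current, target, wheelbase=2.7, max_steer=0.6):
    """One step of the step-by-step parking control: compare the current pose
    with the next required pose on the planned path and choose a steering
    angle (a pure-pursuit-style rule is assumed here for illustration).
    Poses are (x m, y m, heading rad); limits are illustrative assumptions."""
    x, y, heading = current
    tx, ty, _ = target
    # Angle of the target point relative to the vehicle's heading.
    alpha = math.atan2(ty - y, tx - x) - heading
    lookahead = math.hypot(tx - x, ty - y)
    steer = math.atan2(2.0 * wheelbase * math.sin(alpha), max(lookahead, 0.1))
    return max(-max_steer, min(max_steer, steer))

# Vehicle at the bay entrance, next waypoint 2 m ahead and 0.5 m to the left.
print(parking_steering_step(current=(0.0, 0.0, 0.0), target=(2.0, 0.5, 0.1)))
```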
Maneuvering a vehicle into a parking spot with limited space is often challenging, especially for novice drivers, and carries the risk of expensive damage [116][117]. Because of this, novice drivers are reluctant to attempt parking in smaller lots and instead cruise around for another empty lot, which contributes to additional air pollution, fuel consumption, and congestion [114].
This entry is adapted from the peer-reviewed paper 10.3390/electronics10091021