Lane detection and tracking are key features of advanced driver assistance systems (ADAS). Lane detection is the process of locating lane markings (typically painted white lines) on the road, while lane tracking assists the vehicle in remaining on the desired path by updating a motion model with previously detected lane markers.
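As a concrete illustration of the detection step described above, the following sketch runs a classical feature-based pipeline (greyscale conversion, Canny edge detection and a probabilistic Hough transform) on a single road image. It is a minimal example assuming OpenCV and NumPy; the input file name and threshold values are placeholder assumptions and it does not reproduce any specific method surveyed below.

```python
# Minimal feature-based lane-marking detection sketch (illustrative only).
# Assumes OpenCV and NumPy; thresholds and the input file are example values.
import cv2
import numpy as np

def detect_lane_segments(bgr_image):
    """Return candidate lane-line segments as (x1, y1, x2, y2) tuples."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)          # suppress sensor noise
    edges = cv2.Canny(blurred, 50, 150)                  # gradient-based edge map

    # Keep only the lower half of the image, where lane markings normally appear.
    mask = np.zeros_like(edges)
    h, w = edges.shape
    mask[h // 2:, :] = 255
    edges = cv2.bitwise_and(edges, mask)

    # Probabilistic Hough transform groups edge pixels into line segments.
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                            minLineLength=40, maxLineGap=20)
    segments = []
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            # Discard near-horizontal segments, which are unlikely to be lane markings.
            if abs(y2 - y1) > 0.3 * abs(x2 - x1):
                segments.append((x1, y1, x2, y2))
    return segments

if __name__ == "__main__":
    frame = cv2.imread("road.jpg")                        # hypothetical input frame
    if frame is not None:
        print(detect_lane_segments(frame)[:5])
```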
The surveyed approaches fall broadly into feature-based, learning-based and model-based methods, as summarized below.

| Methods Classification | Methods/Steps | Tool Used | Data Used | Remarks |
|---|---|---|---|---|
| Feature-based approach | ---- | ---- | Sensor values | Frequent calibration is required for accurate decision making in a complex environment. |
| Learning-based approach | Predictive controller for lane detection and control | Machine learning techniques (e.g., neural networks) | Data obtained from the controller | Reinforcement learning with a model predictive controller could be a better choice to avoid false lane detection. |
| Model-based approach | Based on robust lane detection model algorithms | ---- | Real-time data | Provides better results in different environmental conditions; camera quality plays an important role in determining lane markings. |
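Several of the image-based methods in the table below (e.g., [24] and [26]) first warp the camera view into a bird's-eye view via inverse perspective mapping, which makes the lane markings roughly parallel and simplifies curvature estimation. The sketch below shows the usual homography-based formulation with OpenCV; the source trapezoid is calibration dependent, and the point values and file names here are placeholder assumptions rather than figures from the surveyed papers.

```python
# Inverse perspective mapping (bird's-eye view) sketch.
# The source trapezoid must come from camera calibration; values below are placeholders.
import cv2
import numpy as np

def birds_eye_view(bgr_image, src_points, out_size=(400, 600)):
    """Warp a forward-facing road image into a top-down (bird's-eye) view.

    src_points: four image points (top-left, top-right, bottom-right, bottom-left)
    outlining the road region to be rectified.
    """
    w, h = out_size
    dst_points = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    homography = cv2.getPerspectiveTransform(np.float32(src_points), dst_points)
    return cv2.warpPerspective(bgr_image, homography, (w, h))

if __name__ == "__main__":
    frame = cv2.imread("road.jpg")                    # hypothetical input frame
    # Placeholder trapezoid for a 1280x720 camera; real values require calibration.
    src = [(550, 450), (730, 450), (1150, 700), (130, 700)]
    if frame is not None:
        top_down = birds_eye_view(frame, src)
        cv2.imwrite("birds_eye.jpg", top_down)
```

In practice the four source points are obtained from camera calibration or chosen manually on a straight, flat stretch of road.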
The image- and sensor-based lane detection and tracking methods surveyed are summarized below.

| Sources | Simulation | Real | Method Used | Advantages | Drawbacks | Results | Tools Used | Future Prospects | Dataset | Reason for Drawbacks |
|---|---|---|---|---|---|---|---|---|---|---|
| [24], [20] |  | Y | Inverse perspective mapping is applied to convert the camera image to a bird's-eye view. | Minimal error and quick detection of the lane. | Performance drops when driving in a tunnel because of fluctuating lighting conditions. | The lane detection error is 5%, the cross-track error is 25% and the lane detection time is 11 ms. | Fisheye dashcam, inertial measurement unit and an ARM processor-based computer. | Enhancing the algorithm for complex road scenarios and low-light conditions. | Data obtained using a model car running at a speed of 1 m/s. | Performance drops when the vehicle drives through a tunnel or on roads without proper lighting; the complex environment creates unnecessary tilt, causing some inaccuracy in lane detection. |
| [25], [21] | Y |  | Kinematic motion model to determine the lane with minimal vehicle parameters. | No need to parameterize the vehicle with variables such as cornering stiffness and inertia; the lane can be predicted even in the absence of camera input for around 3 s. | The suitability of the algorithm for different environmental situations has not been considered. | Lateral error of 0.15 m in the absence of the camera image. | Mobileye camera, CarSim, MATLAB/Simulink and an AutoBox from dSPACE. | Trying the fault-tolerant model in a real vehicle. | Test vehicle. | ---- |
| [26], [22] |  | Y | Inverse perspective mapping is used to create a bird's-eye view of the environment. | Improved lane detection accuracy in the range of 86% to 96% for different road types. | Performance under different vehicle speeds and inclement weather conditions is not considered. | The algorithm requires 0.8 s to process a frame; accuracy is higher when more than 59% of the lane markers are visible. | FireWire color camera and MATLAB. | Real-time implementation of the work. | Highways and streets around Atlanta. | ---- |
| [27], [23] | Y | Y | Hough transform to extract line segments, with a convolutional neural network-based classifier to determine the confidence of each line segment. | Tolerant to noise. | Performance on the custom dataset drops compared with the Caltech dataset. | For the urban scenario the algorithm provides accuracy greater than 95%; accuracy on the custom setup is 72% to 86%. | OV10650 camera and an Epson G320 IMU. | Performance improvement is a future consideration. | Caltech dataset and a custom dataset. | Device specification and calibration play an important role in capturing the lane; difficulty in image capture caused false detections, and more training or the inclusion of sensors for live dataset collection would help mitigate this. |
| [28], [24] |  | Y | Feature-line pairs (FLPs) combined with a Kalman filter for road detection. | Faster detection of lanes; suitable for a real-time environment. | ---- | Around 4 ms to detect the edge pixels, 80 ms to detect all FLPs and 1 ms to determine the exact road model with Kalman filter tracking. | C++; camera and a Matrox Meteor RGB/PPB digitizer. | Robust tracking and improved performance in dense urban traffic. | Test robot. | ---- |
| [29], [25] |  | Y | Dual-thresholding algorithm for pre-processing, edge detection with a single-direction gradient operator and a noise filter to remove noise. | The lane detection algorithm is insensitive to headlights, rear lights, cars and road contour signs. | Detects only straight lanes. | The algorithm detects straight lanes during the night. | Camera with RGB channels. | The suitability of the algorithm for different road types at night is to be studied. | Custom dataset. | ---- |
| [30], [26] |  | Y | Determination of the region of interest and conversion to a binary image via an adaptive threshold. | Better accuracy. | The algorithm needs changes to check its suitability for daytime lane detection. | 90% accuracy at night on isolated highways. | FireWire S400 camera and MATLAB. | Geometric transformation of the image to increase accuracy, and intensity normalization. | Custom dataset. | The constraints and assumptions considered do not suit daytime conditions. |
| [31], [27] | Y |  | Canny edge detector to detect the edges of the lanes. | The Hough transform improves the output of the lane tracker. | ---- | Performance of the proposed system is better. | Raspberry Pi-based robot with a camera and sensors. | Simulation of the proposed method using a Raspberry Pi-based robot with a monocular camera and radar-based sensors to determine the distance to neighbouring vehicles. | Custom data. | ---- |
| [32], [28] | Y |  | Video processing technique to determine the lane illumination change within the region of interest. | ---- | ---- | Robust performance. | Vision-based vehicle. | Determining the lane illumination changes in the region of interest for curved roads. | Simulator. | ---- |
| [33], [29] | Y | Y | A colour-based lane detection and representative line extraction algorithm. | Better accuracy in the daytime. | The algorithm needs changes before it can be tested in different scenarios. | The lane detection rate is more than 93%. | MATLAB | There is scope to test the algorithm at night. | Custom data. | Unwanted noise reduces the performance of the algorithm. |
| [34], [30] |  | Y | Hardware architecture for detecting straight lane lines using the Hough transform. | Better accuracy under occlusion and poor line painting. | Computational complexity and high cost of the Hough transform (HT). | Tested under various road conditions such as urban streets and highways, with a detection rate of 92%. | Virtex-5 ML505 platform. | The algorithm needs to be tested under different weather conditions. | Custom. | ---- |
| [35], [31] |  | Y | Lane detection based on a circular-arc or parabolic geometric road model. | The video sensor improves the performance of lane-marking detection. | Performance drops in lane detection when entering a tunnel region. | Experiments performed on different road scenes provided better results. | Maps, video sensors and GPS. | The proposed method can be tested with previously available data. | Custom. | Low illumination. |
| [36], [32] | Y |  | A hierarchical lane detection system to detect lanes on structured and unstructured roads. | Quick detection of lanes. | ---- | The system achieves an accuracy of 97% in lane detection. | MATLAB | The algorithm can be tested on isolated highways and urban roads. | ---- | ---- |
| [37], [33] |  | Y | LIDAR sensor-based boundary detection and tracking method for structured and unstructured roads. | Accurate lane boundary detection regardless of road type. | Difficult to track lane boundaries on unstructured roads because of low contrast and arbitrary road shapes. | Road boundary detection accuracy of 95% for structured roads and 92% for unstructured roads. | Test vehicle with LIDAR, GPS and IMU. | The algorithm needs to be tested with RADAR-based and vision-based sensors. | Custom data. | Low contrast and arbitrary road shapes. |
| [38], [34] |  | Y | A method to detect pedestrian lanes with no lane markings under different illumination conditions. | Robust pedestrian lane detection in unstructured environments. | Indoor and outdoor environments remain challenging. | Lane detection accuracy of 95%. | MATLAB | There is scope for structured roads with different speed limits. | New dataset of 2000 images (custom). | Complex environment. |
| [39], [35] | Y | Y | An improved Hough transform that pre-processes road images of different light intensities and converts them to a polar-angle constraint area. | Robust performance on a campus road without lane markings. | Performance drops due to low light intensity. | ---- | Test vehicle and MATLAB. | ---- | Custom data. | Low illumination. |
| [40], [36] | Y |  | A lane detection algorithm based on camera and 2D LIDAR input data. | Computational and experimental results show that the method significantly increases accuracy. | ---- | The proposed approach shows better accuracy than traditional methods for distances of less than 9 m. | Software-based analysis and MATLAB. | The proposed method needs to be tested with RADAR and vision-based sensor data. | Fusion of camera and 2D LIDAR data. | ---- |
| [41], [37] |  | Y | A deep learning-based approach for detecting lanes, objects and free space. | The NVIDIA platform comes with an SDK (software development kit) with inbuilt options for object detection, lane detection and free-space detection. | A monocular camera with an advanced driver assistance system is costly. | The time taken to determine the lane falls between 6 and 9 ms. | C++ and NVIDIA's Drive PX2 platform. | Complex road scenarios with different high light intensities. | ---- | ---- |
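Several of the surveyed works couple detection with a tracking filter, for example the kinematic-model approach of [25], which keeps predicting the lane for about 3 s when the camera input is lost, and the feature-line-pair method of [28], which tracks the road model with a Kalman filter. The sketch below shows a much simpler stand-in, a constant-velocity Kalman filter over a single lane boundary's lateral offset; the state choice, sampling period and noise covariances are illustrative assumptions and not parameters from those papers.

```python
# Constant-velocity Kalman filter over a lane boundary's lateral offset (sketch).
# Noise covariances and the sampling period are illustrative assumptions.
import numpy as np

class LaneOffsetTracker:
    def __init__(self, dt=0.05, process_var=0.1, meas_var=0.5):
        self.F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition (offset, rate)
        self.H = np.array([[1.0, 0.0]])                # only the offset is measured
        self.Q = process_var * np.eye(2)               # process noise covariance
        self.R = np.array([[meas_var]])                # measurement noise covariance
        self.x = np.zeros((2, 1))                      # state estimate
        self.P = np.eye(2)                             # estimate covariance

    def predict(self):
        """Propagate the state; usable on its own when the camera drops out."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return float(self.x[0, 0])

    def update(self, measured_offset):
        """Correct the prediction with a detected lane-marker offset (metres)."""
        y = np.array([[measured_offset]]) - self.H @ self.x      # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)                 # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return float(self.x[0, 0])

if __name__ == "__main__":
    tracker = LaneOffsetTracker()
    for z in [0.10, 0.12, 0.15, None, None, 0.22]:     # None = missed detection
        predicted = tracker.predict()
        print(tracker.update(z) if z is not None else predicted)
```

The prediction step alone mirrors, in spirit, how a motion model can bridge short camera dropouts, although the surveyed works use richer vehicle models than this toy state.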
Robust lane detection and tracking approaches are summarized below.

| Sources | Simulation | Real | Method Used | Advantages | Drawbacks | Results | Tools Used | Future Prospects | Dataset | Reason for Drawbacks |
|---|---|---|---|---|---|---|---|---|---|---|
| [42], [38] |  | Y | Gradient cue, colour cue and line clustering are used to verify the lane markings. | The method works well under different weather conditions such as rainy and snowy environments. | The suitability of the algorithm for multi-lane detection and for lane curvature is still to be studied. | Except for rainy daytime conditions, the proposed system provides better results. | C++ and OpenCV on the Ubuntu operating system; hardware: dual ARM Cortex-A9 processors. | IMU sensors could be incorporated to avoid false detection of lanes. | 48 video clips from the USA and Korea. | Because the road environment may not be predictable, false detections occur. |
| [43], [39] |  | Y | Lanes are extracted from the captured image and a random sample consensus (RANSAC) algorithm is used to eradicate errors in lane detection. | Multi-lane detection even with poor lane markings; no prior knowledge about the lane is required. | Urban driving scenario quality has to be improved on the Cordova 2 dataset, since the curb of the sidewalk is perceived as a lane. | On the Cordova 2 dataset the false-detection value is higher, at around 38%. | MATLAB | Real-time implementation of the proposed algorithm. | Caltech lane datasets covering four urban driving scenarios (Cordova 1, Cordova 2, Washington 1 and Washington 2), with a total of 1224 frames containing 4172 lane markings. | ---- |
| [44], [40] | Y |  | Reinforcement learning-based approach for decision making using a Q-function approximator. | The decision-making process involves a reward function comprising yaw rate, yaw acceleration and lane-changing time. | More testing is needed to check the efficiency of the approximator function under different real-time conditions. | The reward functions are used to learn the lane change in a better way. | Custom-made simulator. | Testing the feasibility of reinforcement learning with fuzzy logic for image input and controller actions based on the current situation; more parameters could be considered for the reward function. | Custom. | ---- |
| [46], [42] |  | Y | A sharp-curve lane is detected from the input image based on hyperbola fitting; the input image is converted to greyscale and the features, namely the left edge, right edge and extreme points of the lanes, are calculated. | Better accuracy for sharp-curve lanes. | The suitability of the algorithm for different road geometries is yet to be studied. | Lane detection accuracy is around 97% and the average time taken to detect the lane is 20 ms. | C/C++ and Visual Studio. | Testing the algorithm's suitability under different environmental conditions. | Custom data. | ---- |
| [47], [43] |  | Y | Usage of a pixel hierarchy for the occurrence of lane markings, detection of the lane markings using a boosting algorithm and tracking of lanes using a particle filter. | Detection of the lane without prior knowledge of the road model or vehicle speed. | ---- | Works on approximately 240 × 320 images at 15 frames per second. | Machine with a 4-GHz processor. | Usage of the vehicle's inertial sensors, GPS information and a geometry model to further improve performance under different environmental conditions; improved performance by using support vector machines and artificial neural networks on the image; testing the efficiency of the algorithm with a Kalman filter. | Custom data. | Calibration of the sensors needs to be maintained. |
| [48], [44] |  | Y | Vanishing point detection method for unstructured roads. | Accurate and robust performance for unstructured roads. | It is difficult to obtain a robust vanishing point for lane detection in unstructured scenes. | The accuracy of vanishing point detection ranges between 80.9% and 93.6% for different scenarios. | Unmanned ground vehicle and mobile robot. | Future scope for structured roads with different scenarios. | Custom data. | Complex background interference and unclear road markings. |
| [49], [45] |  | Y | Lane detection using Gaussian-distribution random sample consensus (G-RANSAC); a ridge detector extracts lane-point features and an adaptive neural network removes noise. | Provides better results in the presence of vehicle shadows and minimal illumination. | ---- | Tested under four illumination conditions ranging from normal and intense to poor, with lane detection accuracies of 95%, 92%, 91% and 90%, respectively. | Software-based analysis. | The proposed method needs to be tested at various times, such as day and night. | Test vehicle. | ---- |
| [50], [46] | Y |  | Deep reinforcement learning is used for decision making in the lane changeover; the reward is based on parameters such as traffic efficiency. | Cooperative decision making involving a reward function that compares the delay of a vehicle and of the traffic. | Validation is still expected to check the accuracy of the lane-changing algorithm in heterogeneous environments. | Performance is fine-tuned based on cooperation for both accident and non-accident scenarios. | Custom-made simulator. | Dynamic selection of the cooperation coefficient under different traffic scenarios. | Newell car-following model. | ---- |
| [51], [47] | Y | Y | A rectangular detection region is formed on the image, edge points of the lane are extracted using a threshold algorithm and a modified Bresenham line voting space is used to detect lane segments. | Robust lane detection with a monocular camera on roads with proper lane markings. | Performance drops when the road is not flat. | The algorithm shows better performance under different road geometries such as straight, curved, polyline and complex. | Software-based performance analysis on the Caltech dataset for different urban driving scenarios; hardware implementation on the Tuyou autonomous vehicle. | Testing the efficiency of the proposed approach under different road geometries and traffic conditions. | Caltech and a custom-made dataset. | ---- |
| [52], [48] |  | Y | Vanishing points are detected from a voting map, the distinct property of lane colour is used to obtain illumination-invariant lane markers and the main lane is found using clustering methods. | The overall method processes a frame within 33 ms. | The computational complexity needs to be reduced by using the vanishing point and an adaptive region of interest for every frame. | Under various illumination conditions the lane detection rate of the algorithm averages 93%. | Software-based analysis. | Testing the algorithm in the daytime with inclement weather conditions. | South Korea road data and the Caltech dataset. | ---- |
| [53], [49] | Y | Y | Deterministic and probabilistic prediction of the traffic of other vehicles to improve robustness in complex driving scenarios. | Robust decision making compared with the deterministic method; lower probability of collision. | Analysing the efficiency of the system under real-time noise is challenging. | ---- | MATLAB/Simulink and CarSim; real-time setup: Hyundai-Kia Motors K7, Mobileye camera system, MicroAutoBox II, Delphi radars and an IBEO laser scanner. | Testing under different scenarios; the algorithm is to be modified for suitability for real-time monitoring. | Custom dataset (real-time data collected using the test vehicle). | ---- |
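Both tables include RANSAC-style robust fitting, such as the random sample consensus step of [43] and the Gaussian-distribution RANSAC (G-RANSAC) approach in the second table. The sketch below shows only the plain RANSAC loop for a straight-line lane model, not the Gaussian-weighted variant; the iteration count, inlier tolerance and synthetic data are illustrative assumptions.

```python
# Plain RANSAC straight-line fit over candidate lane pixels (sketch).
# This is the standard RANSAC loop, not the G-RANSAC variant reviewed above;
# the iteration count and inlier threshold are illustrative assumptions.
import numpy as np

def ransac_line(points, n_iters=200, inlier_tol=2.0, rng=None):
    """Fit y = a*x + b to 2-D points, returning (a, b) and the inlier mask."""
    rng = np.random.default_rng() if rng is None else rng
    pts = np.asarray(points, dtype=float)
    best_inliers = np.zeros(len(pts), dtype=bool)
    best_model = (0.0, 0.0)
    for _ in range(n_iters):
        i, j = rng.choice(len(pts), size=2, replace=False)
        (x1, y1), (x2, y2) = pts[i], pts[j]
        if abs(x2 - x1) < 1e-9:                 # skip degenerate vertical samples
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        residuals = np.abs(pts[:, 1] - (a * pts[:, 0] + b))
        inliers = residuals < inlier_tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (a, b)
    # Refine with a least-squares fit over the inlier set.
    if best_inliers.sum() >= 2:
        a, b = np.polyfit(pts[best_inliers, 0], pts[best_inliers, 1], deg=1)
        best_model = (float(a), float(b))
    return best_model, best_inliers

if __name__ == "__main__":
    xs = np.linspace(0, 100, 50)
    ys = 0.5 * xs + 10 + np.random.normal(0, 1, 50)      # noisy lane-pixel band
    ys[::10] += 40                                        # a few outliers (e.g., curb points)
    print(ransac_line(np.column_stack([xs, ys]))[0])
```

The random sampling makes the fit insensitive to outliers such as curb or shadow pixels, which is the property the surveyed methods exploit to suppress false lane detections.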