Vision-Based Autonomous Vehicle Systems

Autonomous vehicle systems (AVS) have advanced at an exponential rate, particularly due to improvements in artificial intelligence, which have had a significant impact on society, road safety and the future of transportation systems. Deep learning is fast becoming a successful alternative approach for perception-based AVS, as it reduces both cost and dependency on sensor fusion.

Keywords: autonomous controlling; deep learning; decision making

1. Introduction

Recently, the autonomous vehicle system (AVS) has become one of the most trending research domains, focusing on driverless intelligent transport for better safety and reliability on roads [1]. One of the main motives for advancing AVS development is its ability to overcome human driving mistakes, including distraction, discomfort and lack of experience, which cause nearly 94% of accidents according to a statistical survey by the National Highway Traffic Safety Administration (NHTSA) [2]. In addition, almost 50 million people are severely injured in road collisions, and over 1.25 million people worldwide are killed annually in highway accidents. Possible causes include insufficient driver education and behavior guidance, poorly developed driver training procedures, fatigue while driving and visual complexity, that is, human error, which could potentially be addressed by adopting highly efficient self-driving vehicles [3][4]. The NHTSA and the U.S. Department of Transportation adopted the SAE International levels of driving automation, classifying autonomous vehicles (AV) from Level 0 to Level 5 [5], where Levels 3 to 5 are considered fully autonomous. However, as of 2019, only Level 1 to 3 vehicle systems had reached manufacturing, while Level 4 systems were in the testing phase [6]. Moreover, it is highly anticipated that autonomous vehicles will support people in need of mobility, reduce the costs and times of transport systems and assist people who cannot drive [7][8]. In the past couple of years, not only academic institutions working on autonomous driving but also giant tech companies such as Google, Baidu, Uber and Nvidia have shown great interest [9][10][11], and vehicle manufacturers such as Toyota, BMW and Tesla are already working on launching AVSs within the first half of this decade [12].

Although conventional AVS use different sensors such as radar, lidar, geodimetric sensors, computer vision, Kinect and GPS to perceive the environment [13][14][15][16][17], equipping vehicles with these sensors is expensive, and their high cost often limits deployment in on-road vehicles [18]. Table 1 compares three major vision sensors across nine factors. While the concept of driverless vehicles has existed for decades, exorbitant costs have inhibited development toward large-scale deployment [19]. To resolve this issue and build a cost-efficient system with high accuracy, deep-learning-based vision systems that use an RGB camera as the only perception sensor are becoming more popular. Recent developments in deep learning have accelerated its potential for solving complex real-world challenges [20].
Table 1. Comparison of vision sensors.
| VS | VR | FoV | Cost | PT | DA | AAD | FE | LLP | AWP |
|----|----|-----|------|----|----|-----|----|-----|-----|
| Camera | High | High | Low | Medium | Medium | High | High | Medium | Medium |
| Lidar | High | Medium | High | Medium | High | Medium | Medium | High | Medium |
| Radar | Medium | Low | Medium | High | High | Low | Low | High | Low |
VS = Vision Sensor, VR = Visibility Range, FoV = Field of View, PT = Processing Time, DA = Distance Accuracy, AAD = AI Algorithm Deployment, FE = Feature Engineering, LLP = Low-Light Performance, AWP = All-Weather Performance.

2. Vision-Based Autonomous Vehicle Systems Based on Deep Learning

In addition, considerable attention has been given to developing safe AVS for pedestrian detection. Multiple deep learning approaches, such as DNN, CNN, YOLOv3-Tiny, DeepSORT R-CNN, single-shot late-fusion CNN, Faster R-CNN, an R-CNN combined ACF model, dark channel prior-based SVM and attention-guided encoder–decoder CNN, outperformed the baselines on the applied datasets, providing faster warning areas by bounding each pedestrian in real time [21], detection in crowded environments and in dim-lighting or haze scenarios [22][23], position estimation [23] and reduced computational cost while outperforming state-of-the-art methods [24]. These approaches can offer an ideal pedestrian detection method once their technical challenges are overcome, for example, dependency on preliminary bounding boxes during detection, the assumption of constant depth in the input image and the miss rate in complex environments.
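As a concrete illustration of the vision-only detection pipeline these works build on, the following minimal sketch runs an off-the-shelf Faster R-CNN detector from torchvision on a single camera frame and keeps only "person" detections. The model choice, score threshold and input file name are illustrative assumptions, not the exact setups of the cited papers.

```python
# Minimal sketch: pedestrian detection with a pretrained Faster R-CNN (torchvision).
import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn

PERSON_CLASS_ID = 1  # "person" in the COCO label map used by torchvision detectors

def detect_pedestrians(image_path: str, score_threshold: float = 0.6):
    model = fasterrcnn_resnet50_fpn(weights="DEFAULT")  # requires torchvision >= 0.13
    model.eval()

    image = read_image(image_path).float() / 255.0  # 3xHxW tensor in [0, 1]
    with torch.no_grad():
        prediction = model([image])[0]  # dict with 'boxes', 'labels', 'scores'

    keep = (prediction["labels"] == PERSON_CLASS_ID) & (prediction["scores"] >= score_threshold)
    return prediction["boxes"][keep]  # Nx4 tensor of [x1, y1, x2, y2] boxes

if __name__ == "__main__":
    boxes = detect_pedestrians("dashcam_frame.jpg")  # hypothetical input frame
    print(f"{len(boxes)} pedestrians above threshold")
```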
Moreover, to estimate steering angle and velocity, control lane keeping and lane changing, overcome slow drifting, act on human weak zones such as blind spots and reduce manual labelling of training data, multiple methods, such as multimodal multitask-based CNN [25], CNN with LSTM [26] and ST-LSTM [27], were studied for the end-to-end control system of AVS.
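The sketch below illustrates the general CNN-plus-LSTM idea behind such end-to-end controllers ([26][27]): a per-frame convolutional encoder followed by a recurrent layer that regresses the steering angle from a short clip. The layer sizes and input resolution are assumptions for illustration, not the architectures reported in the cited papers.

```python
# Illustrative CNN+LSTM steering-angle regressor (simplified, PilotNet-like encoder).
import torch
import torch.nn as nn

class SteeringCNNLSTM(nn.Module):
    def __init__(self, hidden_size: int = 64):
        super().__init__()
        # Per-frame visual encoder
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),
            nn.Linear(48 * 4 * 4, 128), nn.ReLU(),
        )
        # Temporal model over the sequence of frame features
        self.lstm = nn.LSTM(input_size=128, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)  # steering angle (velocity could be a 2nd output)

    def forward(self, frames):  # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])  # predict from the last time step

model = SteeringCNNLSTM()
dummy = torch.randn(2, 5, 3, 66, 200)  # two clips of five 66x200 frames (assumed input size)
print(model(dummy).shape)  # torch.Size([2, 1])
```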
Furthermore, traffic scene analysis, one of the most important segments of AVS, was covered, including understanding scenes from a challenging and crowded moving environment [28], improving performance with more expressive spatial-feature risk prediction [29] and detecting on-road damage [24]. For this purpose, HRNet + contrastive loss [30], multi-stage deep CNN [31], 2D-LSTM with RNN [32], DNN with a Hadamard layer [33], spatial CNN [29], OP-DNN [34] and the other methods listed in Table 2 were reviewed. However, some limitations remain, for instance data dependency or reliance on pre-labelled data and decreased accuracy in challenging traffic or at nighttime.
Table 2. Summary of multiple deep learning methods for traffic scene analysis.
| Ref. | Method | Outcomes | Advantages | Limitations |
|------|--------|----------|------------|-------------|
| [35] | VGG-19 SegNet | Highest 91% classification accuracy. | Efficient in specified scene understanding, reducing manual manipulation. | Showed a false rate due to the lack of a high-resolution labelled dataset. |
| [28] | Markov Chain Monte Carlo | Identified intersections with 90% accuracy. | Identified intersections in challenging and crowded urban scenarios. | Independent tracklets caused unpredictable collisions in complex scenarios. |
| [36] | HRNet | 81.1% mIoU. | Able to perform semantic segmentation at high resolution. | Required a huge memory size. |
| [30] | HRNet + contrastive loss | 82.2% mIoU. | Contrastive loss with pixel-to-pixel dependencies enhanced performance. | Did not show success of contrastive learning in cases with limited labelled data. |
| [37] | DeepLabV3 and ResNet-50 | 79% mIoU with 50% less labelled data. | Reduced dependency on large labelled datasets with softmax fine-tuning. | Still dependent on a labelled dataset. |
| [31] | Multi-stage deep CNN | Highest 92.90% accuracy. | Lower model complexity and three times less time complexity than GoogLeNet. | Not demonstrated on challenging scenes. |
| [38] | Fine- and coarse-resolution CNN | 13.2% error rate. | Applicable at different scales. | Multilabel classification from the scene was missing. |
| [32] | 2D-LSTM with RNN | 78.52% accuracy. | Able to avoid the confusion of ambiguous labels by increasing the contrast. | Suffered in scene segmentation under foggy vision. |
| [39] | CDN | Achieved 80.5% mean IoU. | Fixed image semantic information and outperformed with expressive spatial features. | Unable to focus on each object in low-resolution images. |
| [33] | DNN with Hadamard layer | 0.65 F1 score, 0.67 precision and 0.64 recall. | Foresaw road topology with pixel-dense categorization and less computing cost. | Restrictions from the double-loss function caused difficulties in optimizing the process. |
| [40] | CNN with pyramid pooling | Scored 54.5 mIoU. | Developed a novel image augmentation technique for fisheye images. | Not applicable for a far field of view. |
| [29] | Spatial CNN | 96.53% accuracy and 68.2% mIoU. | Re-architected CNN for long continuous road and traffic scenarios. | Performance dropped significantly in low-light and rainy scenarios. |
| [34] | OP-DNN | 91.1% accuracy after 7000 iterations. | Decreased overfitting on a small-scale training set. | Required re-weighting for improved results but inapplicable in uncertain environments. |
| [41] | CNN and LSTM | 90% accuracy in 3 s. | Predicted risk of accidents at lane merges, tollgates and unsigned intersections. | Slower computational time and tested only on similar kinds of traffic scenes. |
| [42] | DNN | 68.95% accuracy and 77% recall. | Determined risk class from the traffic scene. | Sensitivity analysis was not used for crack detection. |
| [43] | Graph-Q and DeepScene-Q | Obtained a p-value of 0.0011. | Developed dynamic interaction-aware scene understanding for AVS. | Fast-lane results were not shown and agent performance was slow. |
| [44] | PCA with CNN | High accuracy for transverse classification. | Identified damage and cracks in the road without pre-processing. | Required manual labelling, which was time consuming. |
| [45] | CNN | 92.51% and 89.65% recall and F1 score, respectively. | Automatic feature learning; tested on complex backgrounds. | Not evaluated in a real-time driving environment. |
| [24] | SegNet and SqueezedNet | Highest accuracy (98.93%) on the GAPs dataset. | Identified potholes with a texture-reliant approach. | Failure cases due to confusion with the texture of restoration patches. |
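Since many rows in Table 2 report mean intersection-over-union (mIoU), the following minimal sketch shows how this metric is typically computed for semantic segmentation from integer label maps; the toy arrays are assumptions used only to demonstrate the calculation.

```python
# Minimal sketch: mean IoU for semantic segmentation via a confusion matrix.
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    """Mean intersection-over-union across classes present in prediction or ground truth."""
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    # Confusion matrix: rows = ground truth, columns = prediction
    np.add.at(conf, (target.ravel(), pred.ravel()), 1)
    intersection = np.diag(conf)
    union = conf.sum(axis=0) + conf.sum(axis=1) - intersection
    valid = union > 0  # ignore classes absent from both prediction and ground truth
    return float((intersection[valid] / union[valid]).mean())

# Toy example: 2-class segmentation on a 2x2 image
pred = np.array([[0, 1], [1, 1]])
target = np.array([[0, 1], [0, 1]])
print(mean_iou(pred, target, num_classes=2))  # (1/2 + 2/3) / 2 ≈ 0.583
```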
Taking all taxonomies into account as features, the decision-making process for AVS was broadly analyzed, where driving decisions such as overtaking, emergency braking, collision-free lane shifting and safe driving at intersections were addressed by adopting methods such as deep recurrent reinforcement learning [46], actor-critic-based DRL with DDPG [47], double DQN, TD3 and SAC [48], dueling DQN [49], gradient boosting decision trees [50], deep RL using Q-masking and automatically generated curriculum-based DRL [51]. Despite solving most of the tasks required for safe deployment in Level 4 or 5 AVS, challenges remain, such as high training cost, the lack of proper analysis of surrounding vehicles' behavior and unresolved cases in complex scenarios. For better outcomes, some problems remain to be resolved, such as the requirement for larger labelled datasets [52], difficulty classifying under blurry visual conditions [53] and small traffic signs in a far field of view [54], background complexity [55] and detecting two traffic signs rather than one when the proposed regions fall in different locations [56].

Apart from these, one of the most complicated tasks for AVS, vision-only path and motion planning, was analyzed by reviewing approaches such as deep inverse reinforcement learning, the DQN time-to-go method, MPC, Dijkstra with the TEB method, DNN, discrete optimizer-based approaches, artificial potential fields, MPC with LSTM-RNN, the advanced dynamic window approach, 3D-CNN, spatio-temporal LSTM and fuzzy logic. These provided solutions by avoiding hand-crafted cost functions and manual labelling, reducing the limitations of rule-based methods for safe navigation [57], improving path planning at intersections [58], planning motion by analyzing risks and predicting the motions of surrounding vehicles [59], navigating safely based on hazard detection [60], avoiding obstacles for smooth planning in multilane scenarios [61], decreasing computational cost [62] and planning paths by replicating human-like control reasoning in ambiguous circumstances. Nevertheless, these approaches faced challenges such as a lack of live testing, low accuracy over a far prediction horizon, impaired performance in complex situations, limitation to non-rule-based approaches and constrained kinematics, and difficulty in establishing a rule base to handle unstructured conditions.
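To make the value-based decision-making methods cited above ([48][49]) more concrete, the following sketch shows a minimal DQN-style module: a Q-network that maps a perception-derived state vector to discrete driving actions, with epsilon-greedy selection. The state features, action set and layer sizes are illustrative assumptions, not those of the cited papers.

```python
# Minimal sketch: a Q-network over discrete driving actions with epsilon-greedy selection.
import random
import torch
import torch.nn as nn

ACTIONS = ["keep_lane", "change_left", "change_right", "accelerate", "brake"]

class QNetwork(nn.Module):
    def __init__(self, state_dim: int = 16, n_actions: int = len(ACTIONS)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_actions),  # one Q-value per discrete action
        )

    def forward(self, state):
        return self.net(state)

def select_action(q_net: QNetwork, state: torch.Tensor, epsilon: float = 0.1) -> str:
    """Epsilon-greedy action selection over the discrete driving actions."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    with torch.no_grad():
        return ACTIONS[int(q_net(state).argmax())]

q_net = QNetwork()
state = torch.randn(16)  # e.g., ego speed, lane offset, gaps to surrounding vehicles (assumed)
print(select_action(q_net, state))
```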
Finally, to visualize outcomes generated by the previous methods superimposed on the front head-up display or smart windshield, augmented reality (AR)-based approaches combined with deep learning methods were reviewed in the last section. AR-HUD-based solutions include 3D surface reconstruction, object marking, path overlaying, reducing demands on the driver's attention and boosting visualization in hazy or low-light conditions by overlaying lanes, traffic signs and on-road objects to reduce accidents, using deep CNN, RANSAC, TTC methods and so on. However, many challenges to practical deployment remain, such as human adoption of AR-based HUD user interfaces, limited visibility in bright daytime conditions, the overlapping of non-superior objects and visualization delay for fast-moving on-road objects. In summary, this review of vision-based deep learning approaches across 10 taxonomies for AVS, with discussion of outcomes, challenges and limitations, could be a pathway to improving and rapidly developing cost-efficient Level 4 or 5 AVS without depending on expensive and complex sensor fusion.
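As a simple illustration of the AR-HUD overlay concept discussed above, the sketch below blends a semi-transparent drivable-lane highlight and object warning boxes onto a camera frame with OpenCV before it would be projected to a head-up display. The detections here are placeholder coordinates, not the output of any of the reviewed models.

```python
# Minimal sketch: compositing lane and object overlays onto a camera frame (OpenCV).
import cv2
import numpy as np

def overlay_hud(frame: np.ndarray, lane_polygon: np.ndarray, boxes: list) -> np.ndarray:
    """Blend a semi-transparent lane overlay and warning boxes onto the frame."""
    overlay = frame.copy()
    cv2.fillPoly(overlay, [lane_polygon], color=(0, 255, 0))   # drivable-lane highlight
    blended = cv2.addWeighted(overlay, 0.3, frame, 0.7, 0)     # keep the scene visible
    for (x1, y1, x2, y2) in boxes:                             # detected on-road objects
        cv2.rectangle(blended, (x1, y1), (x2, y2), (0, 0, 255), 2)
    return blended

frame = np.zeros((480, 640, 3), dtype=np.uint8)                # stand-in camera frame
lane = np.array([[300, 480], [340, 300], [380, 300], [420, 480]], dtype=np.int32)
hud_frame = overlay_hud(frame, lane, boxes=[(100, 200, 160, 320)])  # placeholder detection
cv2.imwrite("hud_preview.png", hud_frame)
```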

References

  1. Alawadhi, M.; Almazrouie, J.; Kamil, M.; Khalil, K.A. A systematic literature review of the factors influencing the adoption of autonomous driving. Int. J. Syst. Assur. Eng. Manag. 2020, 11, 1065–1082.
  2. Pandey, P.; Shukla, A.; Tiwari, R. Three-dimensional path planning for unmanned aerial vehicles using glowworm swarm optimization algorithm. Int. J. Syst. Assur. Eng. Manag. 2018, 9, 836–852.
  3. Dirsehan, T.; Can, C. Examination of trust and sustainability concerns in autonomous vehicle adoption. Technol. Soc. 2020, 63, 101361.
  4. Khamis, N.K.; Deros, B.M.; Nuawi, M.Z.; Omar, R.B. Driving fatigue among long distance heavy vehicle drivers in Klang Valley, Malaysia. Appl. Mech. Mater. 2014, 663, 567–573.
  5. Naujoks, F.; Wiedemann, K.; Schömig, N.; Hergeth, S.; Keinath, A. Towards guidelines and verification methods for automated vehicle HMIs. Transp. Res. Part F Traffic Psychol. Behav. 2019, 60, 121–136.
  6. Li, D.; Wagner, P. Impacts of gradual automated vehicle penetration on motorway operation: A comprehensive evaluation. Eur. Transp. Res. Rev. 2019, 11, 36.
  7. Mutz, F.; Veronese, L.P.; Oliveira-Santos, T.; De Aguiar, E.; Cheein, F.A.A.; De Souza, A.F. Large-scale mapping in complex field scenarios using an autonomous car. Expert Syst. Appl. 2016, 46, 439–462.
  8. Gandia, R.M.; Antonialli, F.; Cavazza, B.H.; Neto, A.M.; Lima, D.A.d.; Sugano, J.Y.; Nicolai, I.; Zambalde, A.L. Autonomous vehicles: Scientometric and bibliometric review. Transp. Rev. 2019, 39, 9–28.
  9. Maurer, M.; Christian, G.; Lenz, B.; Winner, H. Autonomous Driving: Technical, Legal and Social Aspects; Springer Nature: Berlin/Heidelberg, Germany, 2016.
  10. Levinson, J.; Askeland, J.; Becker, J.; Dolson, J.; Held, D.; Kammel, S.; Kolter, J.Z.; Langer, D.; Pink, O.; Pratt, V. Towards fully autonomous driving: Systems and algorithms. In Proceedings of the 2011 IEEE Intelligent Vehicles Symposium (IV), Baden-Baden, Germany, 5–9 June 2011; pp. 163–168.
  11. Yu, H.; Yang, S.; Gu, W.; Zhang, S. Baidu driving dataset and end-to-end reactive control model. In Proceedings of the 2017 IEEE Intelligent Vehicles Symposium (IV), Los Angeles, CA, USA, 11–14 June 2017; pp. 341–346.
  12. Hashim, H.; Omar, M. Towards autonomous vehicle implementation: Issues and opportunities. J. Soc. Automot. Eng. Malays. 2017, 1, 111–123.
  13. Chen, L.; Li, Q.; Li, M.; Zhang, L.; Mao, Q. Design of a multi-sensor cooperation travel environment perception system for autonomous vehicle. Sensors 2012, 12, 12386–12404.
  14. Li, Q.; Chen, L.; Li, M.; Shaw, S.-L.; Nüchter, A. A sensor-fusion drivable-region and lane-detection system for autonomous vehicle navigation in challenging road scenarios. IEEE Trans. Veh. Technol. 2013, 63, 540–555.
  15. Rahman, A.H.A.; Ariffin, K.A.Z.; Sani, N.S.; Zamzuri, H. Pedestrian Detection using Triple Laser Range Finders. Int. J. Electr. Comput. Eng. 2017, 7, 3037.
  16. Wang, H.; Wang, B.; Liu, B.; Meng, X.; Yang, G. Pedestrian recognition and tracking using 3D LiDAR for autonomous vehicle. Robot. Auton. Syst. 2017, 88, 71–78.
  17. Wang, L.; Zhang, Y.; Wang, J. Map-based localization method for autonomous vehicles using 3D-LIDAR. IFAC-Pap. 2017, 50, 276–281.
  18. Kong, P.-Y. Computation and Sensor Offloading for Cloud-Based Infrastructure-Assisted Autonomous Vehicles. IEEE Syst. J. 2020, 14, 3360–3370.
  19. Zhao, J.; Xu, H.; Liu, H.; Wu, J.; Zheng, Y.; Wu, D. Detection and tracking of pedestrians and vehicles using roadside LiDAR sensors. Transp. Res. Part C Emerg. Technol. 2019, 100, 68–87.
  20. Huval, B.; Wang, T.; Tandon, S.; Kiske, J.; Song, W.; Pazhayampallil, J.; Andriluka, M.; Rajpurkar, P.; Migimatsu, T.; Cheng, Y.R. An empirical evaluation of deep learning on highway driving. arXiv 2015, arXiv:01716.
  21. Zhan, H.; Liu, Y.; Cui, Z.; Cheng, H. Pedestrian Detection and Behavior Recognition Based on Vision. In Proceedings of the 2019 IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand, 27–30 October 2019; pp. 771–776.
  22. Qiu, Z.; Zhao, N.; Zhou, L.; Wang, M.; Yang, L.; Fang, H.; He, Y.; Liu, Y. Vision-based moving obstacle detection and tracking in paddy field using improved yolov3 and deep SORT. Sensors 2020, 20, 4082.
  23. Ding, B.; Liu, Z.; Sun, Y. Pedestrian Detection in Haze Environments Using Dark Channel Prior and Histogram of Oriented Gradient. In Proceedings of the 2018 Eighth International Conference on Instrumentation & Measurement, Computer, Communication and Control (IMCCC), Harbin, China, 19–21 July 2018; pp. 1003–1008.
  24. Anand, S.; Gupta, S.; Darbari, V.; Kohli, S. Crack-pot: Autonomous road crack and pothole detection. In Proceedings of the 2018 Digital Image Computing: Techniques and Applications (DICTA), Canberra, ACT, Australia, 10–13 December 2018; pp. 1–6.
  25. Yang, Z.; Zhang, Y.; Yu, J.; Cai, J.; Luo, J. End-to-end multi-modal multi-task vehicle control for self-driving cars with visual perceptions. In Proceedings of the 2018 24th International Conference on Pattern Recognition (ICPR), Beijing, China, 20–24 August 2018; pp. 2289–2294.
  26. Lee, M.-J.; Ha, Y.-G. Autonomous Driving Control Using End-to-End Deep Learning. In Proceedings of the 2020 IEEE International Conference on Big Data and Smart Computing (BigComp), Busan, Korea, 19–22 February 2020; pp. 470–473.
  27. Chi, L.; Mu, Y. Deep steering: Learning end-to-end driving model from spatial and temporal visual cues. arXiv 2017, arXiv:03798.
  28. Geiger, A.; Lauer, M.; Wojek, C.; Stiller, C.; Urtasun, R. 3D Traffic Scene Understanding From Movable Platforms. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 36, 1012–1025.
  29. Pan, X.; Shi, J.; Luo, P.; Wang, X.; Tang, X. Spatial as deep: Spatial cnn for traffic scene understanding. arXiv 2017, arXiv:06080.
  30. Wang, W.; Zhou, T.; Yu, F.; Dai, J.; Konukoglu, E.; Van Gool, L. Exploring cross-image pixel contrast for semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 7303–7313.
  31. Tang, P.; Wang, H.; Kwong, S. G-MS2F: GoogLeNet based multi-stage feature fusion of deep CNN for scene recognition. Neurocomputing 2017, 225, 188–197.
  32. Byeon, W.; Breuel, T.M.; Raue, F.; Liwicki, M. Scene labeling with lstm recurrent neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3547–3555.
  33. Oeljeklaus, M.; Hoffmann, F.; Bertram, T. A combined recognition and segmentation model for urban traffic scene understanding. In Proceedings of the 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), Yokohama, Japan, 16–19 October 2017; pp. 1–6.
  34. Mou, L.; Xie, H.; Mao, S.; Zhao, P.; Chen, Y. Vision-based vehicle behaviour analysis: A structured learning approach via convolutional neural networks. IET Intell. Transp. Syst. 2020, 14, 792–801.
  35. Liu, X.; Nguyen, M.; Yan, W.Q. Vehicle-related scene understanding using deep learning. In Asian Conference on Pattern Recognition; Springer: Singapore, 2020; pp. 61–73.
  36. Wang, J.; Sun, K.; Cheng, T.; Jiang, B.; Deng, C.; Zhao, Y.; Liu, D.; Mu, Y.; Tan, M.; Wang, X. Deep high-resolution representation learning for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 43, 3349–3364.
  37. Zhao, X.; Vemulapalli, R.; Mansfield, P.A.; Gong, B.; Green, B.; Shapira, L.; Wu, Y. Contrastive Learning for Label Efficient Semantic Segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 10623–10633.
  38. Wang, L.; Guo, S.; Huang, W.; Xiong, Y.; Qiao, Y. Knowledge guided disambiguation for large-scale scene classification with multi-resolution CNNs. IEEE Trans. Image Process. 2017, 26, 2055–2068.
  39. Fu, J.; Liu, J.; Li, Y.; Bao, Y.; Yan, W.; Fang, Z.; Lu, H. Contextual deconvolution network for semantic segmentation. Pattern Recognit. 2020, 101, 107152.
  40. Xue, J.-R.; Fang, J.-W.; Zhang, P. A survey of scene understanding by event reasoning in autonomous driving. Int. J. Autom. Comput. 2018, 15, 249–266.
  41. Jeon, H.-S.; Kum, D.-S.; Jeong, W.-Y. Traffic scene prediction via deep learning: Introduction of multi-channel occupancy grid map as a scene representation. In Proceedings of the 2018 IEEE Intelligent Vehicles Symposium (IV), Changshu, China, 26–30 June 2018; pp. 1496–1501.
  42. Theofilatos, A.; Chen, C.; Antoniou, C. Comparing Machine Learning and Deep Learning Methods for Real-Time Crash Prediction. Transp. Res. Rec. J. Transp. Res. Board 2019, 2673, 169–178.
  43. Huegle, M.; Kalweit, G.; Werling, M.; Boedecker, J. Dynamic interaction-aware scene understanding for reinforcement learning in autonomous driving. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 4329–4335.
  44. Nguyen, N.T.H.; Le, T.H.; Perry, S.; Nguyen, T.T. Pavement crack detection using convolutional neural network. In Proceedings of the Ninth International Symposium on Information and Communication Technology, Sharjah, United Arab Emirates, 18–19 November 2019; pp. 251–256.
  45. Zhang, L.; Yang, F.; Zhang, Y.D.; Zhu, Y.J. Road crack detection using deep convolutional neural network. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 3708–3712.
  46. Hoel, C.-J.; Driggs-Campbell, K.; Wolff, K.; Laine, L.; Kochenderfer, M.J. Combining Planning and Deep Reinforcement Learning in Tactical Decision Making for Autonomous Driving. IEEE Trans. Intell. Veh. 2020, 5, 294–305.
  47. Fu, Y.; Li, C.; Yu, F.R.; Luan, T.H.; Zhang, Y. A Decision-Making Strategy for Vehicle Autonomous Braking in Emergency via Deep Reinforcement Learning. IEEE Trans. Veh. Technol. 2020, 69, 5876–5888.
  48. Munk, J.; Kober, J.; Babuška, R. Learning state representation for deep actor-critic control. In Proceedings of the 2016 IEEE 55th Conference on Decision and Control (CDC), Las Vegas, NV, USA, 12–14 December 2016; pp. 4667–4673.
  49. Liao, J.; Liu, T.; Tang, X.; Mu, X.; Huang, B.; Cao, D. Decision-making Strategy on Highway for Autonomous Vehicles using Deep Reinforcement Learning. IEEE Access 2020, 8, 177804–177814.
  50. Gómez-Huélamo, C.; Egido, J.D.; Bergasa, L.M.; Barea, R.; López-Guillén, E.; Arango, F.; Araluce, J.; López, J. Train here, drive there: Simulating real-world use cases with fully-autonomous driving architecture in carla simulator. In Workshop of Physical Agents; Springer: Cham, Switzerland, 2020; pp. 44–59.
  51. Qiao, Z.; Muelling, K.; Dolan, J.M.; Palanisamy, P.; Mudalige, P. Automatically generated curriculum based reinforcement learning for autonomous vehicles in urban environment. In Proceedings of the 2018 IEEE Intelligent Vehicles Symposium (IV), Changshu, China, 26–30 June 2018; pp. 1233–1238.
  52. Behrendt, K.; Novak, L.; Botros, R. A deep learning approach to traffic lights: Detection, tracking, and classification. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 1370–1377.
  53. Natarajan, S.; Annamraju, A.K.; Baradkar, C.S. Traffic sign recognition using weighted multi-convolutional neural network. IET Intell. Transp. Syst. 2018, 12, 1396–1405.
  54. Zhang, J.; Huang, M.; Jin, X.; Li, X. A real-time chinese traffic sign detection algorithm based on modified YOLOv2. Algorithms 2017, 10, 127.
  55. Cao, J.; Song, C.; Peng, S.; Xiao, F.; Song, S. Improved traffic sign detection and recognition algorithm for intelligent vehicles. Sensors 2019, 19, 4021.
  56. Jung, S.; Lee, U.; Jung, J.; Shim, D.H. Real-time Traffic Sign Recognition system with deep convolutional neural network. In Proceedings of the 2016 13th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI), Xi’an, China, 19–22 August 2016; pp. 31–34.
  57. You, C.; Lu, J.; Filev, D.; Tsiotras, P. Advanced planning for autonomous vehicles using reinforcement learning and deep inverse reinforcement learning. Robot. Auton. Syst. 2019, 114, 1–18.
  58. Isele, D.; Rahimi, R.; Cosgun, A.; Subramanian, K.; Fujimura, K. Navigating occluded intersections with autonomous vehicles using deep reinforcement learning. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 21–25 May 2018; pp. 2034–2039.
  59. Zhang, L.; Xiao, W.; Zhang, Z.; Meng, D. Surrounding Vehicles Motion Prediction for Risk Assessment and Motion Planning of Autonomous Vehicle in Highway Scenarios. IEEE Access 2020, 8, 209356–209376.
  60. Islam, M.; Chowdhury, M.; Li, H.; Hu, H. Vision-Based Navigation of Autonomous Vehicles in Roadway Environments with Unexpected Hazards. Transp. Res. Rec. J. Transp. Res. Board 2019, 2673, 494–507.
  61. Ma, L.; Xue, J.; Kawabata, K.; Zhu, J.; Ma, C.; Zheng, N. Efficient sampling-based motion planning for on-road autonomous driving. IEEE Trans. Intell. Transp. Syst. 2015, 16, 1961–1976.
  62. Gu, T.; Dolan, J.M. On-road motion planning for autonomous vehicles. In Proceedings of the International Conference on Intelligent Robotics and Applications, Montreal, QC, Canada, 3–5 October 2012; pp. 588–597.