Visibility Enhancement and Fog Detection

In mobile systems, fog, rain, snow, haze, and sun glare are natural phenomena that can be very dangerous for drivers. Besides the visibility problem, the driver must also choose a speed appropriate for the conditions. The main effects of fog are a decrease in contrast and a fading of colors. Rain and snow also strongly disturb the driver's perception, while glare caused by the sun or by other traffic participants can be dangerous even when it lasts only a short time. In the field of autonomous vehicles, visibility is of the utmost importance. Researchers have approached this problem with a variety of solutions and methods, so it is useful to survey what the scientific literature of the past ten years has contributed to these concerns.


1. Introduction

Adapting vehicle speed to environmental conditions is the main way to reduce the number of accidents on public roads [1]. Poor visibility caused by weather conditions has proved to be one of the main factors in accidents [1]. Research from the last decade has introduced various features to help drivers, such as redesigned headlights using LED or laser devices and beams whose directivity is adjusted in real time; with these new technologies, the emitted light is closer to natural light [2]. Auto-dimming technologies are a further addition, already installed on most high-end vehicles [3]. In the case of fog, unfortunately, this is not enough, and to date no reliable and robust system has been developed for installation on a commercial vehicle. There have been approaches based on image processing, such as detecting lane markings, traffic signs, or hazards such as obstacles [4], image dehazing and deblurring [5], image segmentation, and machine learning methods [6][7]. Other methods evaluate the optical power of a light source in direct transmission or backscattering by analyzing the scattering and dispersion of the beam [8][9]. There are also approaches that use systems already installed on the vehicle, such as ADAS (Advanced Driver Assistance Systems), LIDAR (LIght Detection And Ranging), radar, cameras, or other sensors [10][11][12], and even geostationary satellite approaches [13]. While imaging sensors produce reliable results in good weather, their efficiency decreases in bad weather conditions such as fog, rain, snow, or sun glare.
The biggest companies around the world are currently working to develop a technology that will completely change driving: the autonomous vehicle [14]. When such vehicles are rolled out on public roads, crashes are expected to decrease considerably. However, consider how an autonomous vehicle will behave in bad weather conditions: loss of adherence, vehicle stability problems, and, perhaps most importantly, reduced or absent visibility: invisible traffic signs and lane markings, unidentifiable pedestrians [15], objects or vehicles in its path [16], loss of visibility due to sun glare [17], etc. There is also the example of the autonomous vehicle developed by Google, which failed its tests in bad weather conditions in 2014. The announced deadlines for rolling out autonomous vehicles are very close; 2020 had already been announced by many companies, so a proper solution for these problems must be found, because these vehicles will make decisions based exclusively on the inputs obtained from cameras and sensors or, in case of doubt, hand control back to the driver.
In the coming decades there will be a transition period in which autonomous vehicles share public roads with vehicles controlled by drivers; since drivers' reactions are unpredictable, these systems will need extremely short evaluation and reaction times to avoid accidents. Based on this reasoning, visibility estimation and the general improvement of visibility remain viable fields of study, so we surveyed the state of the research on papers that use image processing to estimate visibility in fog conditions and thus increase general traffic safety.
Figure 1 presents an overview of the field, starting from the main method families in the state of the art, visibility enhancement (Section 2) and fog detection (Section 3), followed by systems and sensors (Section 4) that use the methods proposed in the first two parts to detect visibility in adverse weather conditions, and ending with the human observer's reactions in such conditions (Section 5).
Figure 1. Overall structure.
In the first category, the methods are based on image processing, while in the second, they are based on optical power measurements or image processing. In the next sections, the best-known and most widely used methods from these two broad categories are detailed. The goal of this work is to present the advantages, but also the weaknesses, of every method in order to identify new directions for improvement. Afterwards, as shown in Figure 1, we propose a mix of methods with the aim of counterbalancing the shortcomings of one method with another. The final step is to check whether the results obtained from such a system are valid for human observers and additionally usable by autonomous vehicles.

2. Visibility Enhancement Methods

In the last decade there has been great interest in improving visibility in bad weather conditions, especially fog. The methods are based on image-processing algorithms and can be split into two categories: methods using a single input image (one of the first approaches was presented by Tarel and Hautière [18]) and methods using multiple images as input [19]. Taking multiple input images of the same scene is usually impractical in real applications, which is why single-image haze removal has recently received much attention, as illustrated by the sketch below.
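To make the single-image idea concrete, here is a minimal sketch of the dark channel prior approach of He et al. [24] (evaluated later in Table 1), assuming Python with NumPy and OpenCV; the function names are ours, and the patch size, omega, and t0 values are common defaults from the original paper rather than prescriptions.

```python
# Minimal sketch of single-image dehazing with the dark channel prior
# (He et al. [24]), inverting the haze model I = J*t + A*(1 - t).
import cv2
import numpy as np

def dark_channel(img, patch=15):
    # Per-pixel minimum over the color channels, then a min-filter
    # (morphological erosion) over a local patch.
    min_rgb = img.min(axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_rgb, kernel)

def estimate_airlight(img, dark):
    # Atmospheric light A: mean color of the brightest 0.1% dark-channel pixels.
    n = max(1, dark.size // 1000)
    idx = np.argsort(dark.ravel())[-n:]
    return img.reshape(-1, 3)[idx].mean(axis=0)

def dehaze(img, omega=0.95, t0=0.1, patch=15):
    """Recover the scene radiance J from a hazy image I."""
    img = img.astype(np.float64) / 255.0
    A = estimate_airlight(img, dark_channel(img, patch))
    # Transmission estimate: t = 1 - omega * dark_channel(I / A).
    t = 1.0 - omega * dark_channel(img / A, patch)
    t = np.clip(t, t0, 1.0)[..., None]   # floor t to keep dense regions stable
    J = (img - A) / t + A                 # invert the atmospheric scattering model
    return np.clip(J * 255.0, 0, 255).astype(np.uint8)

# Usage: result = dehaze(cv2.imread("foggy_road.png"))
```

The t >= t0 floor deliberately leaves a trace of haze in very dense regions, where the transmission estimate becomes unreliable and full inversion would amplify noise.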

3. Fog Detection and Visibility Estimation Methods

In the previous section, we mentioned Hautière and He as pioneers in the field of image dehazing; beyond that, one of the most relevant works on vision in the atmosphere is that of Nayar and Narasimhan [20], which builds on the renowned research of Middleton [21] and McCartney [22].
Most approaches for detecting fog and determining its density for visibility estimation are based on optical power measurements (OPM), but there are also image-processing approaches. The basic principle of the first category is that infrared or visible light pulses emitted into the atmosphere are scattered and absorbed by fog particles and molecules, resulting in an attenuation of the optical power. The attenuation can be detected either by measuring the optical power after the light beam has passed through a layer of fog (direct transmission) or by measuring the reflected light when the beam is backscattered by the fog layer. Figure 2 provides an overview of optical power measurement methods.
Figure 2. Optical power measurement methods.
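As a worked illustration of the direct-transmission principle, the sketch below (our own simplified example, not a specific system from [8][70][71][72]) converts a measured power ratio into an extinction coefficient via the Beer-Lambert law and then into a meteorological visibility via Koschmieder's relation with the conventional 5% contrast threshold.

```python
# Visibility from a direct-transmission optical power measurement.
import math

def extinction_coefficient(p_emitted_mw, p_received_mw, path_m):
    """Beer-Lambert law: P_received = P_emitted * exp(-beta * d)."""
    return -math.log(p_received_mw / p_emitted_mw) / path_m

def visibility_m(beta):
    """Koschmieder's relation at the 5% contrast threshold: V = -ln(0.05)/beta."""
    return -math.log(0.05) / beta   # approximately 3.912 / beta

# Example: 100 mW emitted, 10 mW received across a 50 m fog layer.
beta = extinction_coefficient(100.0, 10.0, 50.0)    # ~0.046 per meter
print(f"visibility = {visibility_m(beta):.0f} m")   # ~85 m
```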
 

4. Sensors and Systems for Fog Detection and Visibility Enhancement

Nowadays, vehicles are equipped with plenty of cameras and sensors designed for specific functionalities that could also be used for fog detection and visibility improvement. For example, the Tesla Model S has, for the autopilot functionality alone, 8 surround cameras, 12 ultrasonic sensors, and a forward-facing radar with enhanced processing capabilities.

5. Conclusions

This paper presented methods and systems from the scientific literature of the past ten years related to fog detection and visibility enhancement in foggy conditions. In the coming period, the main focus of automotive companies will be the development of autonomous vehicles, and visibility requirements in bad weather conditions will be of high importance. The current state-of-the-art methods are based on image processing, optical power measurements, or various sensors, some of which are already available on commercial vehicles but used for other functionalities. The image-processing methods are based on cameras, devices with many advantages, such as the freedom to implement different algorithms, versatility, and low cost; on the other hand, the results obtained from such a system can be erroneous due to blinding by other traffic participants, the environment, or the weather. Methods based on image processing can be applied in light fog; if the fog becomes denser, the system cannot give any valid output. Some methods presented in the literature work only in daytime conditions, making them unusable for automotive applications, which require systems able to offer reliable results in real time and in complex scenarios 24 h/day.
Images are degraded in foggy or hazy conditions, and the degradation depends on the distance, the density of the atmospheric particles, and the wavelength. The authors in [23] tested multiple single-image dehazing algorithms and performed an evaluation based on two strategies: one based on the analysis of state-of-the-art metrics and the other on psychophysical experiments. The results of the study suggest that the higher the wavelength within the visible range, the higher the quality of the dehazed images. The methods tested were the dark channel prior [24], the Tarel method [18], the Meng method [25], the DehazeNet method [26], and the Berman method [27]. The work emphasizes that no method is superior on every single metric; the best algorithm therefore varies according to the selected metric. The subjective analysis revealed that the observers preferred the output of the Berman algorithm. The main conclusion is that it is very important to set correct expectations, which lead to the selection of certain metrics; based on that, a dehazing algorithm can then be chosen.
Systems based on optical power measurement, by direct transmission or backscattering, overcome some of the camera drawbacks described above: the result is not influenced by day or night conditions, very dense fog can also be measured, and the computational complexity is lower than in the previous category, making these systems more responsive to very quick changes in the environment, which is important in real-time applications. The results obtained with such systems can still be erroneous due to environmental conditions (bridges, road curves) or traffic participants; that is why our conclusion, after gathering all these methods and systems in a single paper, is that at least two different systems should be interconnected to validate each other's results.
One big challenge, from our point of view, for the next years in this field is to prove that the results obtained from the systems presented above are valid for a human being. The validity of the results is also a relevant topic for autonomous vehicles, which need to identify the road, objects, other vehicles, and traffic signs in bad weather conditions, and automotive companies must define the visibility limit for these vehicles.
The evaluation of the state-of-the-art methods is presented in Table 1.
Table 1. Evaluation of the state-of-the-art methods.
| Category | Method | Computational Complexity | Availability on Vehicles | Data Processing Speed | Day/Night Use | Real-Time Use | Result Distribution | Reliable | Link to Visual Accuracy |
|---|---|---|---|---|---|---|---|---|---|
| Image dehazing | Koschmieder's law [28][29][30][31][32][33][34][35][36] | Medium/High | Partial (camera) | Medium | Daytime only | Yes | Local for 1 user | No (not for all inputs) | Yes |
| | Dark channel prior [24][37][38][39][40][41][42][43][44][45][46][20][47][48][49][50][51][52] | High | Partial (camera) | Medium | Daytime only | Yes | Local for 1 user | No (not for all inputs) | Yes |
| | Dark channel prior integrated in SIDE [53] | High | Partial (camera) | Medium | Both | Yes | Local for 1 user | Yes | Yes |
| | Image segmentation using single input image [54][55][56][57] | High | Partial (camera) | Low | Daytime only | No | Local for 1 user | No | Yes |
| | Image segmentation using multiple input images [58][59][60] | High | Partial (camera) | Medium | Daytime only | Yes (notify drivers) | Local for many users (highways) | No (not for all cases) | Yes |
| | Learning-based methods I [61][62][63][64] | High | Partial (camera) | Medium | Daytime only | No | Local for many users (highways) | Depends on the training data | No |
| | Learning-based methods II [65] | High | No | Medium | Daytime only | No | Large area | Depends on the training data | Yes |
| | Learning-based methods III [66][67] | High | Partial (camera) | Medium | Daytime only | No | Local for 1 user | Depends on the training data | Yes |
| | Learning-based methods IV [68] | High | Partial (camera + extra hardware) | High | Daytime only | Yes | Local for 1 user | Depends on the training data | Yes |
| | Learning-based methods V [69] | High | Partial (camera) | High | Both | Yes | Local for 1 user | Depends on the training data | Yes |
| Fog detection and visibility estimation | Direct transmission measurement [8][70][71][72] | Low | No | High | Both | Yes | Local for many users (highways) | Yes | No (still to be proved) |
| | Backscattering measurement I [9][10][11][12][73][74] | Low | Partial (LIDAR) | High | Both | Yes | Local for 1 or many users | Yes | No (still to be proved) |
| | Backscattering measurement II [75] | Medium | No | Medium | Both | Yes | Local for 1 or many users | No | Yes |
| | Global feature image-based analysis [76][77][78][79][80][81][82][83][84][85][86] | Medium | Partial (camera) | Low | Both | No | Local for 1 user | No | Yes |
| Sensors and systems | Camera + LIDAR [12] | High | Partial (high-end vehicles) | High | Both | Yes | Local for 1 or many users | Yes | Yes |
| | Learning-based methods + LIDAR [87] | High | Partial (LIDAR) | Medium | Both | Yes | Local for 1 user | Depends on the training data | Yes |
| | Radar [81] | Medium | Partial (high-end vehicles) | High | Both | Yes | Local for 1 or many users | No (to be proved in complex scenarios) | Yes |
| | Highway static system (laser) [88] | Medium | No (static system) | Medium | Both | Yes | Local (can be extended to a larger area) | Yes | No (still to be proved) |
| | Motion detection static system [89] | Medium | No (static system) | Medium | Daytime only | Yes | Local for 1 or many users | No (not for all cases) | Yes |
| | Camera-based static system [90][91][92] | High | No (static system) | Medium | Both | Yes | Local for 1 or many users | Depends on the training data | Yes |
| | Satellite-based system I [93] | High | No (satellite-based system) | Medium | Night | Yes | Large area | Yes | Yes |
| | Satellite-based system II [94] | High | No (satellite-based system) | Medium | Both | Yes | Large area | Yes | Yes |
| | Wireless sensor network [95] | High | No (static system) | Medium | Both | Yes | Large area | No (not tested in real conditions) | No |
| | Visibility meter (camera) [70][71] | Medium | - | Medium | Daytime only | No | Local for many users (highways) | No (not tested in real conditions) | No |
| | Fog sensor (LWC, particle surface, visibility) [72] | Medium | No (PVM-100) | Medium | Both | - | Local for many users (highways) | No (error rate ~20%) | No |
| | Fog sensor (density, temperature, humidity) [9][73] | Medium | No | Low | Both | No | Local for many users (highways) | No | No |
| | Fog sensor (particle size: laser and camera) [96][97] | High | Partial (high-end vehicles) | High | Daytime only | No | Local for many users (highways) | No | No |
Based on the evaluation criteria listed in Table 1, we can conclude that a system able to determine and improve visibility in a foggy environment should include a camera and a device able to make optical measurements in the atmosphere. Both categories have their drawbacks, but by combining them most of the gaps can be covered; each subsystem can work as a backup and can validate the result offered by the other. An example is a system composed of a camera and a LIDAR, as in [12]; both are already available on today's high-end vehicles, offering reliable results in real time, 24 h/day. The results obtained by one vehicle can be shared with other traffic participants in the area, creating a network of systems. Directions for improving such a system would be to increase the detection range of LIDARs and to use infrared cameras, which can offer reliable results at night and validate the results obtained from the LIDAR.
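The cross-validation idea can be sketched as follows; the data structure, thresholds, and fusion rule below are illustrative assumptions of ours, not taken from [12] or any cited system.

```python
# Illustrative cross-validation of two independent visibility estimates:
# readings are fused only when they agree; otherwise the output is flagged.
from dataclasses import dataclass
from typing import Optional

@dataclass
class VisibilityReading:
    meters: float
    confidence: float   # 0..1, as reported by each subsystem

def fuse(camera: Optional[VisibilityReading],
         lidar: Optional[VisibilityReading],
         max_disagreement: float = 0.3) -> Optional[float]:
    """Return a fused visibility in meters, or None if not trustworthy."""
    if camera is None or lidar is None:
        # One subsystem down: fall back to the other only if it is confident.
        backup = camera or lidar
        return backup.meters if backup and backup.confidence > 0.8 else None
    rel_gap = abs(camera.meters - lidar.meters) / max(camera.meters, lidar.meters)
    if rel_gap > max_disagreement:
        return None   # the sensors contradict each other: flag, do not report
    # Confidence-weighted average when both agree.
    w = camera.confidence + lidar.confidence
    return (camera.meters * camera.confidence + lidar.meters * lidar.confidence) / w

# Example: camera reports 120 m (0.7), LIDAR 100 m (0.9) -> fused ~109 m.
print(fuse(VisibilityReading(120, 0.7), VisibilityReading(100, 0.9)))
```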
This synthesis can be a starting point for developing a reliable system for fog detection and visibility improvement: by presenting the weaknesses of the state-of-the-art methods (the referenced articles have more than 30,000 citations in Google Scholar), it can lead to new ideas for improving them. Additionally, we described ways of interconnecting these systems to obtain more robust and reliable results.

References

  1. U.S. Department of Transportation. Traffic Safety Facts—Critical Reasons for Crashes Investigated in the National Motor Vehicle Crash Causation Survey; National Highway Traffic Safety Administration (NHTSA): Washington, DC, USA, 2015.
  2. OSRAM Automotive. Available online: (accessed on 30 June 2020).
  3. The Car Connection. 12 October 2018. Available online: (accessed on 30 June 2020).
  4. Aubert, D.; Boucher, V.; Bremond, R.; Charbonnier, P.; Cord, A.; Dumont, E.; Foucher, P.; Fournela, F.; Greffier, F.; Gruyer, D.; et al. Digital Imaging for Assessing and Improving Highway Visibility; Transport Research Arena: Paris, France, 2014.
  5. Rajagopalan, A.N.; Chellappa, R. (Eds.) Motion Deblurring Algorithms and Systems; Cambridge University Press: Cambridge, UK, 2014.
  6. Palvanov, A.; Giyenko, A.; Cho, Y.I. Development of Visibility Expectation System Based on Machine Learning. In Computer Information Systems and Industrial Management; Springer: Berlin/Heidelberg, Germany, 2018; pp. 140–153.
  7. Yang, L.; Muresan, R.; Al-Dweik, A.; Hadjileontiadis, L.J. Image-Based Visibility Estimation Algorithm for Intelligent Transportation Systems. IEEE Access 2018, 6, 76728–76740.
  8. Ioan, S.; Razvan-Catalin, M.; Florin, A. System for Visibility Distance Estimation in Fog Conditions based on Light Sources and Visual Acuity. In Proceedings of the 2016 IEEE International Conference on Automation, Quality and Testing, Robotics (AQTR), Cluj-Napoca, Romania, 19–21 May 2016.
  9. Ovseník, Ľ.; Turán, J.; Mišenčík, P.; Bitó, J.; Csurgai-Horváth, L. Fog density measuring system. Acta Electrotech. Inf. 2012, 12, 67–71.
  10. Gruyer, D.; Cord, A.; Belaroussi, R. Vehicle detection and tracking by collaborative fusion between laser scanner and camera. In Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; pp. 5207–5214.
  11. Gruyer, D.; Cord, A.; Belaroussi, R. Target-to-track collaborative association combining a laser scanner and a camera. In Proceedings of the 16th International IEEE Conference on Intelligent Transportation Systems (ITSC 2013), The Hague, The Netherlands, 6–9 October 2013.
  12. Dannheim, C.; Icking, C.; Mäder, M.; Sallis, P. Weather Detection in Vehicles by Means of Camera and LIDAR Systems. In Proceedings of the 2014 Sixth International Conference on Computational Intelligence, Communication Systems and Networks, Tetova, Macedonia, 27–29 May 2014.
  13. Chaurasia, S.; Gohil, B.S. Detection of Day Time Fog over India Using INSAT-3D Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 4524–4530.
  14. Levinson, J.; Askeland, J.; Becker, J.; Dolson, J.; Held, D.; Kammel, S.; Kolter, J.Z.; Langer, D.; Pink, O.; Pratt, V.; et al. Towards fully autonomous driving: Systems and algorithms. In Proceedings of the 2011 IEEE Intelligent Vehicles Symposium (IV), Baden-Baden, Germany, 5–9 June 2011; pp. 163–168.
  15. Jegham, I.; Khalifa, A.B. Pedestrian Detection in Poor Weather Conditions Using Moving Camera. In Proceedings of the IEEE/ACS 14th International Conference on Computer Systems and Applications (AICCSA), Hammamet, Tunisia, 30 October–3 November 2017.
  16. Dai, X.; Yuan, X.; Zhang, J.; Zhang, L. Improving the performance of vehicle detection system in bad weathers. In Proceedings of the 2016 IEEE Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC), Xi’an, China, 3–5 October 2016.
  17. Miclea, R.-C.; Silea, I.; Sandru, F. Digital Sunshade Using Head-up Display. In Advances in Intelligent Systems and Computing; Springer: Cham, Switzerland, 2017; Volume 633, pp. 3–11.
  18. Tarel, J.-P.; Hautiere, N. Fast visibility restoration from a single color or gray level image. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 27 September–4 October 2009; pp. 2201–2208.
  19. Narasimhan, S.G.; Nayar, S.K. Contrast restoration of weather degraded images. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 713–724.
  20. Narasimhan, S.G.; Nayar, S.K. Vision and the atmosphere. Int. J. Comput. Vis. 2002, 48, 233–254.
  21. Middleton, W.E.K.; Twersky, V. Vision through the Atmosphere. Phys. Today 1954, 7, 21.
  22. McCartney, E.J.; Hall, F.F. Optics of the Atmosphere: Scattering by Molecules and Particles. Phys. Today 1977, 30, 76–77.
  23. Martínez-Domingo, M.Á.; Valero, E.M.; Nieves, J.L.; Molina-Fuentes, P.J.; Romero, J.; Hernández-Andrés, J. Single Image Dehazing Algorithm Analysis with Hyperspectral Images in the Visible Range. Sensors 2020, 20, 6690.
  24. He, K.; Sun, J.; Tang, X. Single Image Haze Removal Using Dark Channel Prior. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 2341–2353.
  25. Meng, G.; Wang, Y.; Duan, J.; Xiang, S.; Pan, C. Efficient image dehazing with boundary constraint and con-textual regularization. In Proceedings of the 2013 IEEE International Conference on Computer Vision, Sydney, Australia, 1–8 December 2013.
  26. Cai, B.; Xu, X.; Jia, K.; Qing, C.; Tao, D. DehazeNet: An End-to-End System for Single Image Haze Removal. IEEE Trans. Image Process. 2016, 25, 5187–5198.
  27. Berman, D.; Treibitz, T.; Avidan, S. Non-local Image Dehazing. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 1674–1682.
  28. Hautière, N.; Tarel, J.P.; Aubert, D. Towards Fog-Free In-Vehicle Vision Systems through Contrast Restoration. In Proceedings of the 2007 IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 18–23 June 2007.
  29. Tarel, J.-P.; Hautiere, N.; Caraffa, L.; Cord, A.; Halmaoui, H.; Gruyer, D. Vision Enhancement in Homogeneous and Heterogeneous Fog. IEEE Intell. Transp. Syst. Mag. 2012, 4, 6–20.
  30. Hautière, N.; Tarel, J.P.; Halmaoui, H.; Brémond, R.; Aubert, D. Enhanced fog detection and free-space segmentation for car navigation. Mach. Vis. Appl. 2014, 25, 667–679.
  31. Negru, M.; Nedevschi, S. Image based fog detection and visibility estimation for driving assistance systems. In Proceedings of the 2013 IEEE 9th International Conference on Intelligent Computer Communication and Processing (ICCP), Cluj-Napoca, Romania, 5–7 September 2013; pp. 163–168.
  32. Negru, M.; Nedevschi, S. Assisting Navigation in Homogenous Fog. In Proceedings of the 2014 International Conference on Computer Vision Theory and Applications (VISAPP), Lisbon, Portugal, 5–8 January 2014.
  33. Negru, M.; Nedevschi, S.; Peter, R.I. Exponential Contrast Restoration in Fog Conditions for Driving Assistance. IEEE Trans. Intell. Transp. Syst. 2015, 16, 2257–2268.
  34. Abbaspour, M.J.; Yazdi, M.; Masnadi-Shirazi, M. A new fast method for foggy image enhancement. In Proceedings of the 2016 24th Iranian Conference on Electrical Engineering (ICEE), Shiraz, Iran, 10–12 May 2016.
  35. Liao, Y.Y.; Tai, S.C.; Lin, J.S.; Liu, P.J. Degradation of turbid images based on the adaptive logarithmic algorithm. Comput. Math. Appl. 2012, 64, 1259–1269.
  36. Halmaoui, H.; Joulan, K.; Hautière, N.; Cord, A.; Brémond, R. Quantitative model of the driver’s reaction time during daytime fog—Application to a head up display-based advanced driver assistance system. IET Intell. Transp. Syst. 2015, 9, 375–381.
  37. Yeh, C.H.; Kang, L.W.; Lin, C.Y.; Lin, C.Y. Efficient image/video dehazing through haze density analysis based on pixel-based dark channel prior. In Proceedings of the 2012 International Conference on Information Security and Intelligent Control, Yunlin, Taiwan, 14–16 August 2012.
  38. Yeh, C.H.; Kang, L.W.; Lee, M.S.; Lin, C.Y. Haze Effect Removal from Image via Haze Density estimation in Optical Model. Opt. Express 2013, 21, 27127–27141.
  39. Tan, R.T. Visibility in bad weather from a single image. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; pp. 1–8.
  40. Fattal, R. Single image dehazing. ACM Trans. Graph. 2008, 27, 1–9.
  41. Huang, S.-C.; Chen, B.-H.; Wang, W.-J. Visibility Restoration of Single Hazy Images Captured in Real-World Weather Conditions. IEEE Trans. Circuits Syst. Video Technol. 2014, 24, 1814–1824.
  42. Wang, Z.; Feng, Y. Fast single haze image enhancement. Comput. Electr. Eng. 2014, 40, 785–795.
  43. Zhang, Y.-Q.; Ding, Y.; Xiao, J.-S.; Liu, J.; Guo, Z. Visibility enhancement using an image filtering approach. EURASIP J. Adv. Signal Process. 2012, 2012, 220.
  44. Tarel, J.-P.; Hautiere, N.; Cord, A.; Gruyer, D.; Halmaoui, H. Improved visibility of road scene images under heterogeneous fog. In Proceedings of the 2010 IEEE Intelligent Vehicles Symposium, La Jolla, CA, USA, 21–24 June 2010.
  45. Wang, R.; Yang, X. A fast method of foggy image enhancement. In Proceedings of the 2012 International Conference on Measurement, Information and Control, Harbin, China, 18–20 May 2012.
  46. Kim, J.-H.; Jang, W.-D.; Sim, J.-Y.; Kim, C.-S. Optimized contrast enhancement for real-time image and video dehazing. J. Vis. Commun. Image Represent. 2013, 24, 410–425.
  47. Peli, E. Contrast in complex images. J. Opt. Soc. Am. A 1990, 7, 2032–2040.
  48. He, K.; Sun, J.; Tang, X. Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1397–1409.
  49. Gonzalez, R.C.; Woods, R.E. Digital Image Processing, 3rd ed.; Prentice-Hall: Hoboken, NJ, USA, 2007.
  50. Su, C.; Wang, W.; Zhang, X.; Jin, L. Dehazing with Offset Correction and a Weighted Residual Map. Electronics 2020, 9, 1419.
  51. Wu, X.; Wang, K.; Li, Y.; Liu, K.; Huang, B. Accelerating Haze Removal Algorithm Using CUDA. Remote Sens. 2020, 13, 85.
  52. Ngo, D.; Lee, S.; Nguyen, Q.H.; Ngo, T.M.; Lee, G.D.; Kang, B. Single Image Haze Removal from Image Enhancement Perspective for Real-Time Vision-Based Systems. Sensors 2020, 20, 5170.
  53. He, R.; Guo, X.; Shi, Z. SIDE—A Unified Framework for Simultaneously Dehazing and Enhancement of Nighttime Hazy Images. Sensors 2020, 20, 5300.
  54. Zhu, Q.; Mai, J.; Song, Z.; Wu, D.; Wang, J.; Wang, L. Mean shift-based single image dehazing with re-refined transmission map. In Proceedings of the 2014 IEEE International Conference on Systems, Man, and Cybernetics (SMC), San Diego, CA, USA, 5–8 October 2014.
  55. Das, D.; Roy, K.; Basak, S.; Chaudhury, S.S. Visibility Enhancement in a Foggy Road Along with Road Boundary Detection. In Proceedings of the Blockchain Technology and Innovations in Business Processes, New Delhi, India, 8 October 2015; pp. 125–135.
  56. Yuan, H.; Liu, C.; Guo, Z.; Sun, Z. A Region-Wised Medium Transmission Based Image Dehazing Method. IEEE Access 2017, 5, 1735–1742.
  57. Zhu, Y.-B.; Liu, J.-M.; Hao, Y.-G. An single image dehazing algorithm using sky detection and segmentation. In Proceedings of the 2014 7th International Congress on Image and Signal Processing, Hainan, China, 20–23 December 2014; pp. 248–252.
  58. Gangodkar, D.; Kumar, P.; Mittal, A. Robust Segmentation of Moving Vehicles under Complex Outdoor Conditions. IEEE Trans. Intell. Transp. Syst. 2012, 13, 1738–1752.
  59. Yuan, Z.; Xie, X.; Hu, J.; Zhang, Y.; Yao, D. An Effective Method for Fog-degraded Traffic Image Enhancement. In Proceedings of the 2014 IEEE International Conference on Service Operations and Logistics, and Informatics, Qingdao, China, 8–10 October 2014.
  60. Wu, B.-F.; Juang, J.-H. Adaptive Vehicle Detector Approach for Complex Environments. IEEE Trans. Intell. Transp. Syst. 2012, 13, 817–827.
  61. Cireşan, D.; Meier, U.; Masci, J.; Schmidhuber, J. Multi-column deep neural network for traffic sign classification. Neural Netw. 2012, 32, 333–338.
  62. Hussain, F.; Jeong, J. Visibility Enhancement of Scene Images Degraded by Foggy Weather Conditions with Deep Neural Networks. J. Sens. 2015, 2016, 1–9.
  63. Singh, G.; Singh, A. Object Detection in Fog Degraded Images. Int. J. Comput. Sci. Inf. Secur. 2018, 16, 174–182.
  64. Cho, Y.I.; Palvanov, A. A New Machine Learning Algorithm for Weather Visibility and Food Recognition. J. Robot. Netw. Artif. Life 2019, 6, 12.
  65. Hu, A.; Xie, Z.; Xu, Y.; Xie, M.; Wu, L.; Qiu, Q. Unsupervised Haze Removal for High-Resolution Optical Remote-Sensing Images Based on Improved Generative Adversarial Networks. Remote Sens. 2020, 12, 4162.
  66. Ha, E.; Shin, J.; Paik, J. Gated Dehazing Network via Least Square Adversarial Learning. Sensors 2020, 20, 6311.
  67. Chen, J.; Wu, C.; Chen, H.; Cheng, P. Unsupervised Dark-Channel Attention-Guided CycleGAN for Single-Image Dehazing. Sensors 2020, 20, 6000.
  68. Ngo, D.; Lee, S.; Lee, G.-D.; Kang, B.; Ngo, D. Single-Image Visibility Restoration: A Machine Learning Approach and Its 4K-Capable Hardware Accelerator. Sensors 2020, 20, 5795.
  69. Feng, M.; Yu, T.; Jing, M.; Yang, G. Learning a Convolutional Autoencoder for Nighttime Image Dehazing. Information 2020, 11, 424.
  70. Pesek, J.; Fiser, O. Automatically low clouds or fog detection, based on two visibility meters and FSO. In Proceedings of the 2013 Conference on Microwave Techniques (COMITE), Pardubice, Czech Republic, 17–18 April 2013; pp. 83–85.
  71. Brazda, V.; Fiser, O.; Rejfek, L. Development of system for measuring visibility along the free space optical link using digital camera. In Proceedings of the 2014 24th International Conference Radioelektronika, Bratislava, Slovakia, 15–16 April 2014; pp. 1–4.
  72. Brazda, V.; Fiser, O. Estimation of fog drop size distribution based on meteorological measurement. In Proceedings of the 2015 Conference on Microwave Techniques (COMITE), Pardubice, Czech Republic, 23–24 April 2008; pp. 1–4.
  73. Ovseník, Ľ.; Turán, J.; Tatarko, M.; Turan, M.; Vásárhelyi, J. Fog sensor system: Design and measurement. In Proceedings of the 13th International Carpathian Control Conference (ICCC), High Tatras, Slovakia, 28–31 May 2012; pp. 529–532.
  74. Sallis, P.; Dannheim, C.; Icking, C.; Maeder, M. Air Pollution and Fog Detection through Vehicular Sensors. In Proceedings of the 2014 8th Asia Modelling Symposium, Taipei, Taiwan, 23–25 September 2014; pp. 181–186.
  75. Kim, Y.-H.; Moon, S.-H.; Yoon, Y. Detection of Precipitation and Fog Using Machine Learning on Backscatter Data from Lidar Ceilometer. Appl. Sci. 2020, 10, 6452.
  76. Pavlic, M.; Belzner, H.; Rigoll, G.; Ilic, S. Image based fog detection in vehicles. In Proceedings of the 2012 IEEE Intelligent Vehicles Symposium, Madrid, Spain, 3–7 June 2012; pp. 1132–1137.
  77. Pavlic, M.; Rigoll, G.; Ilic, S. Classification of images in fog and fog-free scenes for use in vehicles. In Proceedings of the 2013 IEEE Intelligent Vehicles Symposium (IV), Gold Coast City, Australia, 23–26 June 2013.
  78. Spinneker, R.; Koch, C.; Park, S.B.; Yoon, J.J. Fast Fog Detection for Camera Based Advanced Driver Assistance Systems. In Proceedings of the 17th International IEEE Conference on Intelligent Transportation Systems (ITSC), The Hague, The Netherlands, 24–26 September 2014.
  79. Asery, R.; Sunkaria, R.K.; Sharma, L.D.; Kumar, A. Fog detection using GLCM based features and SVM. In Proceedings of the 2016 Conference on Advances in Signal Processing (CASP), Pune, India, 9–11 June 2016; pp. 72–76.
  80. Zhang, D.; Sullivan, T.; O’Connor, N.E.; Gillespie, R.; Regan, F. Coastal fog detection using visual sensing. In Proceedings of the OCEANS 2015, Genova, Italy, 18–21 May 2015.
  81. Alami, S.; Ezzine, A.; Elhassouni, F. Local Fog Detection Based on Saturation and RGB-Correlation. In Proceedings of the 2016 13th International Conference on Computer Graphics, Imaging and Visualization (CGiV), Beni Mellal, Morocco, 29 March–1 April 2016; pp. 1–5.
  82. Gallen, R.; Cord, A.; Hautière, N.; Aubert, D. Method and Device for Detecting Fog at Night. Versailles. France Patent WO 2 012 042 171 A2, 5 April 2012.
  83. Gallen, R.; Cord, A.; Hautière, N.; Dumont, É.; Aubert, D. Night time visibility analysis and estimation method in the presence of dense fog. IEEE Trans. Intell. Transp. Syst. 2015, 16, 310–320.
  84. Pagani, G.A.; Noteboom, J.W.; Wauben, W. Deep Neural Network Approach for Automatic Fog Detection. In Proceedings of the CIMO TECO, Amsterdam, The Netherlands, 8–16 October 2018.
  85. Li, S.; Fu, H.; Lo, W.-L. Meteorological Visibility Evaluation on Webcam Weather Image Using Deep Learning Features. Int. J. Comput. Theory Eng. 2017, 9, 455–461.
  86. Chaabani, H.; Kamoun, F.; Bargaoui, H.; Outay, F.; Yasar, A.-U.-H. A Neural network approach to visibility range estimation under foggy weather conditions. Procedia Comput. Sci. 2017, 113, 466–471.
  87. Liang, X.; Huang, Z.; Lu, L.; Tao, Z.; Yang, B.; Li, Y. Deep Learning Method on Target Echo Signal Recognition for Obscurant Penetrating Lidar Detection in Degraded Visual Environments. Sensors 2020, 20, 3424.
  88. Miclea, R.-C.; Silea, I. Visibility Detection in Foggy Environment. In Proceedings of the 2015 20th International Conference on Control Systems and Computer Science, Bucharest, Romania, 27–29 May 2015; pp. 959–964.
  89. Kumar, T.S.; Pavya, S. Segmentation of visual images under complex outdoor conditions. In Proceedings of the 2014 International Conference on Communication and Signal Processing, Chennai, India, 3–5 April 2014; pp. 100–104.
  90. Han, Y.; Hu, D. Multispectral Fusion Approach for Traffic Target Detection in Bad Weather. Algorithms 2020, 13, 271.
  91. Ibrahim, M.R.; Haworth, J.; Cheng, T. WeatherNet: Recognising Weather and Visual Conditions from Street-Level Images Using Deep Residual Learning. ISPRS Int. J. Geo-Inf. 2019, 8, 549.
  92. Qin, H.; Qin, H. Image-Based Dedicated Methods of Night Traffic Visibility Estimation. Appl. Sci. 2020, 10, 440.
  93. Weston, M.; Temimi, M. Application of a Nighttime Fog Detection Method Using SEVIRI Over an Arid Environment. Remote Sens. 2020, 12, 2281.
  94. Han, J.-H.; Suh, M.-S.; Yu, H.-Y.; Roh, N.-Y. Development of Fog Detection Algorithm Using GK2A/AMI and Ground Data. Remote Sens. 2020, 12, 3181.
  95. Li, L.; Zhang, H.; Zhao, C.; Ding, X. Radiation fog detection and warning system of highway based on wireless sensor networks. In Proceedings of the 2014 IEEE 7th Joint International Information Technology and Artificial Intelligence Conference, Chongqing, China, 20–21 December 2014; pp. 148–152.
  96. Miclea, R.-C.; Dughir, C.; Alexa, F.; Sandru, F.; Silea, A. Laser and LIDAR in A System for Visibility Distance Estimation in Fog Conditions. Sensors 2020, 20, 6322.
  97. Tóth, J.; Ovseník, Ľ.; Turán, J. Free Space Optics—Monitoring Setup for Experimental Link. Carpathian J. Electron. Comput. Eng. 2015, 8, 27–30.