Major Applications of Machine Learning in WRM: History

With the rapid proliferation of machine learning (ML) and data management, ML applications have expanded to encompass all engineering disciplines. Given the importance of the world’s water supply throughout the rest of this century, much research has concentrated on applying ML strategies to integrated water resources management (WRM).

Keywords: classification; climate change; clustering; machine learning (ML)

1. Introduction

In recent years, machine learning (ML) applications in water resources management (WRM) have garnered significant interest [1]. The advent of big data has substantially enhanced the ability of hydrologists to address existing challenges and has encouraged novel applications of ML. The global data sphere is expected to reach 175 zettabytes by 2025 [2], and the availability of this large amount of data is ushering in a new era in the field of WRM. The next step for the hydrological sciences is determining how to integrate traditional physics-based hydrology with machine-aided techniques that draw information directly from big data. ML techniques now handle an extensive range of problems, from simple decisions to complex scientific questions. Because of the volume, velocity, variety, and veracity of big data, only a machine is capable of fully exploiting it. In recent decades, ML has attracted a great deal of attention from hydrologists and has been applied widely across the field because of its ability to manage complex environments.
In the coming decades, climate change, increasing constraints on water resources, population growth, and natural hazards will force hydrologists worldwide to adapt and to develop strategies that maintain security related to WRM. The Intergovernmental Hydrological Programme (IHP) has recently launched its ninth phase (IHP-IX, 2022–2029), which places hydrologists, scholars, and policymakers on the front lines of action to ensure a water-secure world despite climate change, with the goal of creating a new and sustainable water culture [3]. Moreover, the rapid growth in the availability of hydrologic data repositories, alongside advanced ML models, offers new opportunities for improved assessments in hydrology by taming the complexity of existing problems. For instance, it is now possible to move from traditional single-step prediction to multi-step-ahead prediction, from short-term to long-term prediction, from deterministic models to their probabilistic counterparts, from univariate to multivariate models, from structured data to volumetric and unstructured data, and from spatial to spatio-temporal and even geo-spatiotemporal settings. ML models have also contributed to optimal decision-making in WRM by efficiently modeling the nonlinear, erratic, and stochastic behaviors of natural hydrological phenomena. Furthermore, ML techniques can dramatically reduce the computational cost of solving complicated models, which allows decision-makers to switch from physics-based models to ML models for cumbersome problems. Emerging hydrological crises, such as droughts and floods, can therefore be investigated and mitigated efficiently with the aid of advances in ML algorithms.
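As a concrete illustration of one of these shifts, the following is a minimal sketch of multi-step-ahead prediction using a recursive strategy with lagged features. It assumes scikit-learn and uses a synthetic streamflow series; the lag width, horizon, and choice of a random forest are illustrative assumptions, not methods prescribed by this entry.

```python
# Minimal sketch: recursive multi-step-ahead streamflow prediction with
# lagged features and a random forest. The series is synthetic; in practice
# it would come from a gauging-station record.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
flow = np.sin(np.linspace(0, 20, 1000)) + 0.1 * rng.standard_normal(1000)

LAGS = 7       # hypothetical: one week of lagged flows as predictors
HORIZON = 3    # hypothetical: predict three steps ahead

# Build a supervised (X, y) dataset from the univariate series.
X = np.array([flow[i:i + LAGS] for i in range(len(flow) - LAGS)])
y = flow[LAGS:]

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[:-100], y[:-100])  # hold out the last 100 steps for illustration

# Recursive strategy: feed each one-step prediction back in as a new lag.
window = list(flow[-LAGS:])
forecast = []
for _ in range(HORIZON):
    next_val = model.predict(np.array(window[-LAGS:]).reshape(1, -1))[0]
    forecast.append(next_val)
    window.append(next_val)
print("3-step-ahead forecast:", forecast)
```

A direct (one-model-per-horizon) strategy is a common alternative to this recursive scheme; the recursive form is shown here only because it reuses a single one-step model.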

2. Major Applications of ML in WRM

ML algorithms are typically categorized into three main groups: supervised learning, unsupervised learning, and reinforcement learning (RL) [4]. A comparison of these is summarized in Table 1. Supervised learning algorithms are trained on labeled datasets to classify or predict outputs, where both the input and output values are known beforehand. Unsupervised learning algorithms are trained on unlabeled datasets, typically for clustering; they discover hidden patterns or data groupings without the need for human intervention. RL is the area of ML concerned with how an intelligent agent should take actions in an environment to maximize its cumulative reward. In both supervised learning and RL, inputs are mapped to outputs so that the learner is informed of the best action to take to complete a task; in RL, desirable and undesirable behaviors are signaled through incentives and penalties, respectively. In summary, a machine learns the behavior and characteristics of labeled datasets in supervised learning, detects patterns in unlabeled data in unsupervised learning, and explores an environment without any prior training in RL. Thus, the appropriate category of ML must be selected according to the engineering application. The major ML learning types in WRM are summarized in Figure 1, where the first segment covers the core contents of the research reviewed in the following sections; a toy sketch contrasting the three categories is given after Table 1.
Figure 1. Four major types of machine learning.
Table 1. Comparison of supervised, unsupervised, and reinforcement learning algorithms.
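To make the distinction concrete, the sketch below contrasts the three categories on toy WRM-flavored data. All variable names, thresholds, and the reservoir setting are hypothetical illustrations assumed for this example, not taken from this entry: a supervised regressor maps labeled rainfall to runoff, an unsupervised k-means groups unlabeled gauging stations, and a tabular Q-learning agent learns reservoir releases from rewards and penalties alone.

```python
# Illustrative sketch of the three ML categories in a WRM setting.
# All data are synthetic and the reservoir environment is deliberately toy.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# 1) Supervised: labeled pairs (rainfall -> runoff); inputs and outputs known.
rainfall = rng.uniform(0, 50, size=(200, 1))
runoff = 0.6 * rainfall[:, 0] + rng.normal(0, 2, 200)   # known labels
reg = LinearRegression().fit(rainfall, runoff)
print("Predicted runoff for 30 mm rainfall:", reg.predict([[30.0]])[0])

# 2) Unsupervised: cluster unlabeled stations by (mean flow, flow variance);
#    the algorithm finds the grouping without human-provided labels.
stations = rng.uniform(0, 1, size=(50, 2))
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(stations)
print("Station cluster labels:", labels[:10])

# 3) Reinforcement: a tabular Q-learning agent chooses an action (0 = hold,
#    1 = release) from a discretized storage level; rewards signal staying in
#    a safe band, penalties signal nearing overtopping or emptying.
n_states, n_actions = 10, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1      # learning rate, discount, exploration
state = 5
for step in range(5000):
    action = rng.integers(n_actions) if rng.random() < eps else int(Q[state].argmax())
    inflow = rng.integers(0, 2)                        # random unit inflow
    next_state = int(np.clip(state + inflow - action, 0, n_states - 1))
    reward = 1.0 if 3 <= next_state <= 7 else -1.0     # incentive vs. penalty
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state
print("Learned release decision per storage level:", Q.argmax(axis=1))
```

The three snippets mirror the definitions above: only the first sees labeled outputs, only the second works entirely without labels, and only the third learns by interacting with an environment through reward signals.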

This entry is adapted from the peer-reviewed paper 10.3390/w15040620

References

  1. Razavi, S.; Hannah, D.M.; Elshorbagy, A.; Kumar, S.; Marshall, L.; Solomatine, D.P.; Dezfuli, A.; Sadegh, M.; Famiglietti, J. Coevolution of Machine Learning and Process-Based Modelling to Revolutionize Earth and Environmental Sciences: A Perspective. Hydrol. Process. 2022, 36, e14596.
  2. Reinsel, D.; Gantz, J.; Rydning, J. The Digitization of the World: From Edge to Core; International Data Corporation: Framingham, MA, USA, 2018. Available online: https://www.seagate.com/files/www-content/our-story/trends/files/idc-seagate-dataage-whitepaper.pdf (accessed on 7 October 2022).
  3. UNESCO. IHP-IX: Strategic Plan of the Intergovernmental Hydrological Programme: Science for a Water Secure World in a Changing Environment, Ninth Phase 2022–2029; 2022. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000381318 (accessed on 7 October 2022).
  4. Mosaffa, H.; Sadeghi, M.; Mallakpour, I.; Naghdyzadegan Jahromi, M.; Pourghasemi, H.R. Application of Machine Learning Algorithms in Hydrology. Comput. Earth Environ. Sci. 2022, 585–591.
  5. Mnih, V.; Kavukcuoglu, K.; Silver, D.; Rusu, A.A.; Veness, J.; Bellemare, M.G.; Graves, A.; Riedmiller, M.; Fidjeland, A.K.; Ostrovski, G.; et al. Human-Level Control through Deep Reinforcement Learning. Nature 2015, 518, 529–533.
  6. Géron, A. Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems, 2nd ed.; O’Reilly Media, Inc.: Sebastopol, CA, USA, 2022.
  7. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT press: Cambridge, MA, USA, 2016; Volume 521, ISBN 978-0262035613.
  8. Agha-Hoseinali-Shirazi, M.; Bozorg-Haddad, O.; Laituri, M.; DeAngelis, D. Application of Agent Base Modeling in Water Resources Management and Planning. Springer Water 2021, 177–216.
  9. Tang, T.; Jiao, D.; Chen, T.; Gui, G. Medium- and Long-Term Precipitation Forecasting Method Based on Data Augmentation and Machine Learning Algorithms. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 1000–1011.
  10. Rahman, A.T.M.S.; Kono, Y.; Hosono, T. Self-Organizing Map Improves Understanding on the Hydrochemical Processes in Aquifer Systems. Sci. Total Environ. 2022, 846, 157281.
  11. Jang, B.; Kim, M.; Harerimana, G.; Kim, J.W. Q-Learning Algorithms: A Comprehensive Classification and Applications. IEEE Access 2019, 7, 133653–133667.
  12. Zhu, Z.; Hu, Z.; Chan, K.W.; Bu, S.; Zhou, B.; Xia, S. Reinforcement Learning in Deregulated Energy Market: A Comprehensive Review. Appl. Energy 2023, 329, 120212.
  13. Nurcahyono, A.; Fadhly Jambak, F.; Rohman, A.; Faisal Karim, M.; Neves-Silva, P. Shifting the Water Paradigm from Social Good to Economic Good and the State’s Role in Fulfilling the Right to Water. F1000Research 2022, 11, 490.
  14. Damascene, N.J.; Dithebe, M.; Laryea, A.E.N.; Medina, J.A.M.; Bian, Z.; Masengo, G. Prospective Review of Mining Effects on Hydrology in a Water-Scarce Eco-Environment; North American Academic Research: San Francisco, CA, USA, 2022; Volume 5, pp. 352–365.
  15. Yan, B.; Jiang, H.; Zou, Y.; Liu, Y.; Mu, R.; Wang, H. An Integrated Model for Optimal Water Resources Allocation under “3 Redlines” Water Policy of the Upper Hanjiang River Basin. J. Hydrol. Reg. Stud. 2022, 42, 101167.
  16. Xiao, Y.; Fang, L.; Hipel, K.W. Agent-Based Modeling Approach to Investigating the Impact of Water Demand Management; American Society of Civil Engineers (ASCE): Reston, VA, USA, 2018.
  17. Lin, Z.; Lim, S.H.; Lin, T.; Borders, M. Using Agent-Based Modeling for Water Resources Management in the Bakken Region. J. Water Resour. Plan. Manag. 2020, 146, 05019020.
  18. Berglund, E.Z. Using Agent-Based Modeling for Water Resources Planning and Management. J. Water Resour. Plan. Manag. 2015, 141, 04015025.
  19. Tourigny, A.; Filion, Y. Sensitivity Analysis of an Agent-Based Model Used to Simulate the Spread of Low-Flow Fixtures for Residential Water Conservation and Evaluate Energy Savings in a Canadian Water Distribution System. J. Water Resour. Plan. Manag. 2019, 145, 1.
  20. Giacomoni, M.H.; Berglund, E.Z. Complex Adaptive Modeling Framework for Evaluating Adaptive Demand Management for Urban Water Resources Sustainability. J. Water Resour. Plan. Manag. 2015, 141, 11.
  21. Tensorforce: A TensorFlow Library for Applied Reinforcement Learning—Tensorforce 0.6.5 Documentation. Available online: https://tensorforce.readthedocs.io/en/latest/ (accessed on 26 October 2022).
  22. Plappert, M. keras-rl. “GitHub—Keras-rl/Keras-rl: Deep Reinforcement Learning for Keras.” GitHub Repos. 2019. Available online: https://github.com/keras-rl/keras-rl (accessed on 26 October 2022).
  23. Guadarrama, S.; Korattikara, A.; Ramirez, O.; Castro, P.; Holly, E.; Fishman, S.; Wang, K.; Gonina, E.; Wu, N.; Kokiopoulou, E.; et al. TF-Agents: A Library for Reinforcement Learning in Tensorflow. GitHub Repos. 2018. Available online: https://github.com/tensorflow/agents (accessed on 26 October 2022).
  24. Caspi, I.; Leibovich, G.; Novik, G.; Endrawis, S. Reinforcement Learning Coach, December 2017.
  25. Hoffman, M.W.; Shahriari, B.; Aslanides, J.; Barth-Maron, G.; Momchev, N.; Sinopalnikov, D.; Stańczyk, P.; Ramos, S.; Raichuk, A.; Vincent, D.; et al. Acme: A Research Framework for Distributed Reinforcement Learning. arXiv 2020, arXiv:2006.00979.
  26. Castro, P.S.; Moitra, S.; Gelada, C.; Kumar, S.; Bellemare, M.G.; Brain, G. Dopamine: A Research Framework for Deep Reinforcement Learning. arXiv 2018, arXiv:1812.06110.
  27. Liang, E.; Liaw, R.; Nishihara, R.; Moritz, P.; Fox, R.; Gonzalez, J.; Goldberg, K. Ray RLlib: A Composable and Scalable Reinforcement Learning Library. arXiv 2017, arXiv:1712.09381.
  28. Raffin, A.; Hill, A.; Gleave, A.; Kanervisto, A.; Ernestus, M.; Dormann, N. Stable-Baselines3: Reliable Reinforcement Learning Implementations. J. Mach. Learn. Res. 2021, 22, 12348–12355.
  29. Van Hasselt, H.; Guez, A.; Silver, D. Deep Reinforcement Learning with Double Q-Learning. Proc. AAAI Conf. Artif. Intell. 2016, 30, 2094–2100.
  30. Wang, Z.; Schaul, T.; Hessel, M.; Van Hasselt, H.; Lanctot, M.; De Freitas, N. Dueling Network Architectures for Deep Reinforcement Learning. In Proceedings of the 33rd International Conference on Machine Learning, New York, NY, USA, 19–24 June 2016; pp. 1995–2003.
  31. PFRL, a Deep Reinforcement Learning Library — PFRL 0.3.0 Documentation. Available online: https://pfrl.readthedocs.io/en/latest/ (accessed on 26 October 2022).
  32. Mnih, V.; Badia, A.P.; Mirza, L.; Graves, A.; Harley, T.; Lillicrap, T.P.; Silver, D.; Kavukcuoglu, K. Asynchronous Methods for Deep Reinforcement Learning. arXiv 2016, arXiv:1602.01783v2.
  33. Schulman, J.; Wolski, F.; Dhariwal, P.; Radford, A.; Klimov, O. Proximal Policy Optimization Algorithms. arXiv 2017, arXiv:1707.06347.
  34. Racanière, S.; Weber, T.; Reichert, D.P.; Buesing, L.; Guez, A.; Rezende, D.; Badia, A.P.; Vinyals, O.; Heess, N.; Li, Y.; et al. Imagination-Augmented Agents for Deep Reinforcement Learning. arXiv 2017, arXiv:1707.06203.
  35. Feinberg, V.; Wan, A.; Stoica, I.; Jordan, M.I.; Gonzalez, J.E.; Levine, S. Model-Based Value Estimation for Efficient Model-Free Reinforcement Learning. arXiv 2018, arXiv:1803.00101.
  36. Silver, D.; Hubert, T.; Schrittwieser, J.; Antonoglou, I.; Lai, M.; Guez, A.; Lanctot, M.; Sifre, L.; Kumaran, D.; Graepel, T.; et al. Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm. arXiv 2017, arXiv:1712.01815.
  37. Schulman, J.; Levine, S.; Moritz, P.; Jordan, M.; Abbeel, P. Trust Region Policy Optimization. arXiv 2015, arXiv:1502.05477.
  38. Lillicrap, T.P.; Hunt, J.J.; Pritzel, A.; Heess, N.; Erez, T.; Tassa, Y.; Silver, D.; Wierstra, D. Continuous Control with Deep Reinforcement Learning. arXiv 2015, arXiv:1509.02971v6.
  39. Fujimoto, S.; Van Hoof, H.; Meger, D. Addressing Function Approximation Error in Actor-Critic Methods. arXiv 2018, arXiv:1802.09477.
  40. Haarnoja, T.; Zhou, A.; Abbeel, P.; Levine, S. Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor. arXiv 2018, arXiv:1801.01290.
  41. Mnih, V.; Kavukcuoglu, K.; Silver, D.; Graves, A.; Antonoglou, I.; Wierstra, D.; Riedmiller, M. Playing Atari with Deep Reinforcement Learning. arXiv 2013, arXiv:1312.5602.
  42. Bellemare, M.G.; Dabney, W.; Munos, R. A Distributional Perspective on Reinforcement Learning. arXiv 2017, arXiv:1707.06887.
  43. Dabney, W.; Rowland, M.; Bellemare, M.G.; Munos, R. Distributional Reinforcement Learning with Quantile Regression. arXiv 2017, arXiv:1710.10044.
  44. Andrychowicz, M.; Wolski, F.; Ray, A.; Schneider, J.; Fong, R.; Welinder, P.; McGrew, B.; Tobin, J.; Abbeel, P.; Zaremba, W. Hindsight Experience Replay. arXiv 2017, arXiv:1707.01495.