
HS3.4

Deep learning in hydrological science

Machine learning (ML) is now widely used across hydrology and the broader Earth sciences, and its subfield deep learning (DL) has recently enjoyed particular attention. This session highlights the continued integration of ML and its many variants, including DL, into traditional and emerging hydrology-related workflows. Abstracts are solicited on novel theory development, novel methodology, or practical applications of ML and DL in hydrological modeling. Topics might include, but are not limited to, the following:

(1) Development of novel DL models or modeling workflows.
(2) Integrating DL with process-based models and/or physical understanding.
(3) Improving understanding of the (internal) states/representations of ML/DL models.
(4) Understanding the reliability of ML/DL, including under nonstationarity.
(5) Deriving scaling relationships or process-related insights with ML/DL.
(6) Modeling human behavior and impacts on the hydrological cycle.
(7) Hazard analysis, detection, and mitigation.
(8) Natural Language Processing in support of models and/or modeling workflows.

Co-organized by ESSI1/NP4
Convener: Frederik Kratzert | Co-conveners: Claire Brenner, Pierre Gentine, Daniel Klotz, Grey Nearing


Thu, 29 Apr, 15:30–17:00

Chairpersons: Frederik Kratzert, Daniel Klotz, Grey Nearing

15:30–15:35
5-minute convener introduction

15:35–15:37
|
EGU21-8350
|
ECS
Yassine Belghaddar et al.

Wastewater networks are essential to urbanization. Their management, which includes repair and expansion operations, requires precise information about their underground components, mainly pipes. For hydraulic modelling purposes, the characteristics of the nodes and pipes in the model must be fully known via specific, complete and consistent attribute tables. However, due to years of service and interventions by different actors, information about the attributes and characteristics associated with the various objects constituting a network is not properly tracked and reported. Therefore, databases related to wastewater networks, when available, still suffer from a large amount of missing data.

A wastewater network constitutes a graph composed of nodes and edges. Nodes represent manholes, equipment, repairs, etc., while edges represent pipes. Each node and edge has a set of properties in the form of attributes, such as pipe diameter. In this work, we seek to complete the missing attributes of wastewater networks using machine learning techniques. The main goal is to make use of the graph structure in the learning process, taking into consideration the topology and the relationships between components (nodes and edges) to predict missing attribute values.

Graph Convolutional Network (GCN) models have gained a lot of attention in recent years and have achieved state-of-the-art results in many applications, such as chemistry. These models are applied directly to graphs to perform diverse machine learning tasks. We present here the use of GCN models such as ChebConv to complete the missing attribute values of two datasets (1239 and 754 elements) extracted from the wastewater networks of Montpellier and Angers Metropolis in France. To emphasize the importance of the graph structure in the learning process, and thus for the quality of the predictions, the GCNs' results are benchmarked against non-topological neural networks. The application to diameter value completion indicates that using the structure of the wastewater network in the learning process has a significant impact on the prediction results, especially for minority classes. Indeed, the diameter classes are very heterogeneous in size, with one large majority class and several classes with few elements. Non-topological neural networks consistently fail to predict the minority classes and assign the majority class value to every missing diameter, yielding perfect precision for this class but zero for all the others. By contrast, the ChebConv model's precision is slightly lower (0.93) for the majority class but much higher for the other classes (increasing from 0.3 to 0.81), using only the structure of the graphs. The use of other available information in the learning process may enhance these results.
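As a rough, illustrative sketch of this approach (not the authors' code), a two-layer ChebConv node classifier in PyTorch Geometric might look as follows; the layer width, Chebyshev order K, and training details are assumptions:

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import ChebConv

class DiameterGCN(torch.nn.Module):
    """Classifies the diameter class of every node in the network graph."""
    def __init__(self, num_features, num_classes, k=3):
        super().__init__()
        self.conv1 = ChebConv(num_features, 32, K=k)  # spectral convolution of order K
        self.conv2 = ChebConv(32, num_classes, K=k)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))  # aggregates information from neighbors
        return self.conv2(x, edge_index)       # per-node diameter-class logits

# Training would mask nodes with known diameters as labels and use
# cross-entropy loss; masked-out nodes receive the completed values.
```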

How to cite: Belghaddar, Y., Delenne, C., Chahinian, N., Begdouri, A., and Seriai, A.: Missing data completion in wastewater network databases: the added-value of Graph Convolutional Neural Networks., EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-8350, https://doi.org/10.5194/egusphere-egu21-8350, 2021.

15:37–15:39
|
EGU21-3516
|
ECS
Andrew Bennett and Bart Nijssen

Machine learning (ML), and particularly deep learning (DL), has shown dramatic successes in geophysical research in recent years. However, these models are primarily geared towards better predictive capabilities and are generally treated as black boxes, limiting researchers' ability to interpret and understand how their predictions are made. As these models are incorporated into larger modeling systems and pushed into more application areas, it will be important to build methods that allow us to reason about how they operate. This has implications for scientific discovery and for ensuring that these models are robust and reliable for their respective applications. Recent work in explainable artificial intelligence (XAI) has been used to interpret and explain the behavior of machine-learned models.

Here, we apply new tools from the field of XAI to provide physical interpretations of a system that couples a deep-learning-based parameterization for turbulent heat fluxes to a process-based hydrologic model. To develop this coupling we trained a neural network to predict turbulent heat fluxes using FluxNet data from a large number of hydroclimatically diverse sites. This neural network is coupled to the SUMMA hydrologic model, taking model-derived states as additional inputs to improve predictions. We have shown that this coupled system provides highly accurate simulations of turbulent heat fluxes at 30-minute timesteps, accurately predicts the long-term observed water balance, and reproduces other signatures such as the phase lag with shortwave radiation. Because of these features, the coupled system appears to be learning physically accurate relationships between inputs and outputs.

We probe which input features the network relies on to make predictions during wet and dry conditions to better understand what it has learned. Further, we conduct controlled experiments to understand how the neural networks are able to learn to regionalize between different hydroclimates. By understanding how these neural networks make their predictions, as well as how they learn to make predictions, we can gain scientific insights and use them to further improve our models of the Earth system.
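Integrated gradients is one XAI attribution method suited to this kind of analysis; below is a minimal sketch with Captum (not necessarily the authors' tooling), using a placeholder network and invented input dimensions:

```python
import torch
from captum.attr import IntegratedGradients

# Placeholder for a trained flux network: 8 hypothetical inputs
# (meteorological forcings plus model-derived states) -> 1 heat flux.
flux_net = torch.nn.Sequential(torch.nn.Linear(8, 32), torch.nn.Tanh(),
                               torch.nn.Linear(32, 1))

inputs = torch.randn(256, 8)           # batch of input samples
baseline = torch.zeros_like(inputs)    # reference point for the attribution path

ig = IntegratedGradients(flux_net)
attributions = ig.attribute(inputs, baselines=baseline, target=0)

# Comparing mean |attribution| over wet vs. dry subsets of the batch
# reveals which features drive predictions under each condition.
importance = attributions.abs().mean(dim=0)
```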

How to cite: Bennett, A. and Nijssen, B.: Searching for new physics: Using explainable AI to understand deep learned parameterizations of turbulent heat fluxes, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-3516, https://doi.org/10.5194/egusphere-egu21-3516, 2021.

15:39–15:41
|
EGU21-14781
|
ECS
Claire Brenner et al.

Global land-atmosphere energy and carbon fluxes are key drivers of the Earth’s climate system. Their assessment over a wide range of climates and biomes is therefore essential (i) for a better understanding and characterization of land-atmosphere exchanges and feedbacks and (ii) for examining the effect of climate change on the global water, energy and carbon cycles. 

Large-sample datasets such as the FLUXNET2015 dataset (Pastorello et al., 2020) foster the use of machine learning (ML) techniques as a powerful addition to existing physically-based modelling approaches. Several studies have investigated ML techniques for assessing energy and carbon fluxes, and while across-site variability and the mean seasonal cycle are typically well predicted, deviations from mean seasonal behaviour remain challenging (Tramontana et al., 2016).

In this study we examine the importance of memory effects for predicting energy and carbon fluxes at half-hourly and daily temporal resolutions. To this end, we train a Long Short-Term Memory network (LSTM; Hochreiter and Schmidhuber, 1997), a recurrent neural network with explicit memory that is particularly suited for time series prediction due to its capability to store information over long (time) sequences. We train the LSTM on a large number of FLUXNET sites from the FLUXNET2015 dataset using local meteorological forcings and static site attributes derived from remote sensing and reanalysis data.

We evaluate model performance out-of-sample (leaving out individual sites) in a 10-fold cross-validation. Additionally, we compare results from the LSTM with results from another ML technique, XGBoost (Chen and Guestrin, 2016), which does not contain system memory. By analysing the differences in model performance of the two approaches across various biomes, we investigate under which conditions the inclusion of memory might be beneficial for modelling energy and carbon fluxes.
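A leave-sites-out evaluation of the memory-free baseline could be organized along these lines (a sketch with synthetic stand-in data, not the study's code):

```python
import numpy as np
from sklearn.model_selection import GroupKFold
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 12))             # forcings + static site attributes
y = rng.normal(size=5000)                   # e.g., half-hourly latent heat flux
site_ids = rng.integers(0, 50, size=5000)   # one id per FLUXNET site

scores = []
for train_idx, test_idx in GroupKFold(n_splits=10).split(X, y, groups=site_ids):
    model = XGBRegressor(n_estimators=300, learning_rate=0.05)
    model.fit(X[train_idx], y[train_idx])    # fit on the training sites only
    pred = model.predict(X[test_idx])        # predict fully held-out sites
    scores.append(np.corrcoef(pred, y[test_idx])[0, 1] ** 2)
```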

 

References:

Chen, Tianqi, and Carlos Guestrin. "XGBoost: A scalable tree boosting system." Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2016.

Hochreiter, Sepp, and Jürgen Schmidhuber. "Long short-term memory." Neural computation 9.8 (1997): 1735-1780.

Pastorello, Gilberto, et al. "The FLUXNET2015 dataset and the ONEFlux processing pipeline for eddy covariance data." Scientific Data 7.1 (2020): 1-27.

Tramontana, Gianluca, et al. "Predicting carbon dioxide and energy fluxes across global FLUXNET sites with regression algorithms." Biogeosciences 13.14 (2016): 4291-4313.

How to cite: Brenner, C., Frame, J., Nearing, G., and Schulz, K.: Predicting energy and carbon fluxes using LSTM networks, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-14781, https://doi.org/10.5194/egusphere-egu21-14781, 2021.

15:41–15:43
|
EGU21-2708
Nanée Chahinian et al.

Urbanization has been an increasing trend over the past century (UN, 2018), and city managers have had to constantly extend water access and sanitation services to new peripheral areas. Originally these networks were installed, operated, and repaired by their owners (Rogers et al., 2012). However, as concessions were increasingly granted to private companies and new tenders requested regularly by public authorities, archives were sometimes misplaced and event logs were lost. Thus, part of the networks' operational history was thought to be permanently erased. The advent of Web big data and text-mining techniques may offer the possibility of recovering some of this knowledge by crawling secondary information sources, i.e. documents available on the Web. In this way, insight might be gained into the wastewater collection scheme, the treatment processes, the network's geometry, and events (accidents, shortages) which may have affected these facilities and amenities. The primary aim of the "Megadata, Linked Data and Data Mining for Wastewater Networks" (MeDo) project (http://webmedo.msem.univ-montp2.fr/?page_id=223&lang=en) is to develop resources for text mining and information extraction in the wastewater domain. We developed a specific Natural Language Processing (NLP) pipeline named WEIR-P (WastewatEr InfoRmation extraction Platform) which allows users to retrieve relevant documents for a given network, process them to extract potentially new information, assess this information (also via an interactive visualization), and add it to a pre-existing knowledge base. The system identifies the entities and relations to be extracted from texts, pertaining to network information, wastewater treatment, accidents and works, organizations, spatio-temporal information, measures, and water quality. We present and evaluate the first version of the NLP system. The preliminary results obtained on the Montpellier corpus (1,557 HTML and PDF documents in French) are encouraging and show how a mix of machine learning approaches and rule-based techniques can be used to extract useful information and reconstruct the various phases of the extension of a given wastewater network. While the NLP and Information Extraction (IE) methods used are state of the art, the novelty of our work lies in their adaptation to the domain, and in particular in the wastewater management conceptual model, which defines the relations between entities.
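For illustration only, a rule-based entity layer of the kind such a pipeline might combine with its machine-learned components can be sketched in spaCy; the labels and patterns below are invented, not those of WEIR-P:

```python
import spacy

# Toy French pipeline with a rule-based entity layer for network elements
# and measures; labels and patterns are hypothetical examples.
nlp = spacy.blank("fr")
ruler = nlp.add_pipe("entity_ruler")
ruler.add_patterns([
    {"label": "NETWORK_ELEMENT", "pattern": [{"LOWER": "collecteur"}]},
    {"label": "MEASURE", "pattern": [{"LIKE_NUM": True}, {"LOWER": "mm"}]},
])

doc = nlp("Le collecteur principal de 800 mm a été posé en 1978.")
print([(ent.text, ent.label_) for ent in doc.ents])
# [('collecteur', 'NETWORK_ELEMENT'), ('800 mm', 'MEASURE')]
```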

How to cite: Chahinian, N., Bonnabaud La Bruyère, T., Conrad, S., Delenne, C., Frontini, F., Julien, M., Panckhurst, R., Roche, M., Sautot, L., Deruelle, L., and Teisseire, M.: WEIR-P: An Information Extraction Pipeline for the Wastewater Domain, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-2708, https://doi.org/10.5194/egusphere-egu21-2708, 2021.

15:43–15:45
|
EGU21-1136
|
ECS
Sebastian Drost et al.

The application of deep learning methods to rainfall-runoff modelling has advanced greatly in recent years. In particular, long short-term memory (LSTM) networks have gained attention for time-series prediction. The architecture of this special kind of recurrent neural network is optimized for learning long-term dependencies from large time-series datasets. Different studies have thus proved the applicability of LSTM networks to rainfall-runoff prediction and showed that they are capable of outperforming other types of neural networks (Hu et al., 2018).

Understanding the impact of land-cover changes on rainfall-runoff dynamics is an important task. Such a hydrological modelling problem is typically solved with process-based models by varying model parameters related to land-cover incidents at different points in time. Kratzert et al. (2019) proposed an adaptation of the standard LSTM architecture, called Entity-Aware LSTM (EA-LSTM), which can take static catchment attributes as input features to overcome the regional modelling problem, and which provides a promising approach for similar use cases. Hence, our contribution aims to analyse the suitability of the EA-LSTM for assessing the effects of land-cover changes.

In different experimental setups, we train standard LSTM and EA-LSTM networks for multiple small subbasins of the Wupper region in Germany. Gridded daily precipitation data from the REGNIE dataset (Rauthe et al., 2013), provided by the German Weather Service (DWD), are used as model input to predict the daily discharge of each subbasin. For training the EA-LSTM we use land cover information from the European CORINE Land Cover (CLC) inventory as static input features. The CLC inventory includes Europe-wide time series of land cover in 44 classes, as well as land cover changes for different time periods (Büttner, 2014). The percentage share of each land cover class within a subbasin serves as a static input feature. To evaluate the impact of land cover data on rainfall-runoff prediction, we compare the results of the EA-LSTM with those of the standard LSTM using different statistical measures as well as the Nash–Sutcliffe efficiency (NSE).

In addition, we test the ability of the EA-LSTM to outperform physical process-based models. For this purpose, we utilize existing calibrated hydrological models within the Wupper basin to simulate discharge for each subbasin. Finally, performance metrics of the calibrated models are used as benchmarks for assessing the performance of the EA-LSTM model.
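The core idea of the EA-LSTM can be rendered in a few lines of PyTorch (a conceptual sketch, not the exact implementation of Kratzert et al., 2019): the input gate is computed once from the static attributes, e.g. CLC land-cover fractions, and modulates the dynamic inputs at every time step.

```python
import torch

class EALSTMCell(torch.nn.Module):
    def __init__(self, dyn_size, stat_size, hidden_size):
        super().__init__()
        self.input_gate = torch.nn.Linear(stat_size, hidden_size)              # static -> i
        self.gates = torch.nn.Linear(dyn_size + hidden_size, 3 * hidden_size)  # f, g, o

    def forward(self, x_dyn, x_stat):
        # x_dyn: (batch, time, dyn_size), e.g. REGNIE precipitation
        # x_stat: (batch, stat_size), e.g. CLC class fractions
        i = torch.sigmoid(self.input_gate(x_stat))        # fixed over all time steps
        h = c = torch.zeros(x_dyn.size(0), i.size(-1))
        for t in range(x_dyn.size(1)):
            f, g, o = self.gates(torch.cat([x_dyn[:, t], h], -1)).chunk(3, -1)
            c = torch.sigmoid(f) * c + i * torch.tanh(g)  # static gate scales the input
            h = torch.sigmoid(o) * torch.tanh(c)
        return h                                          # would feed a discharge head
```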

References

Büttner, G. (2014). CORINE Land Cover and Land Cover Change Products. In I. Manakos & M. Braun (Eds.), Land Use and Land Cover Mapping in Europe (Vol. 18, pp. 55–74). Springer Netherlands. https://doi.org/10.1007/978-94-007-7969-3_5

Hu, C., Wu, Q., Li, H., Jian, S., Li, N., & Lou, Z. (2018). Deep Learning with a Long Short-Term Memory Networks Approach for Rainfall-Runoff Simulation. Water, 10(11), 1543. https://doi.org/10.3390/w10111543

Kratzert, F., Klotz, D., Shalev, G., Klambauer, G., Hochreiter, S., & Nearing, G. (2019). Towards learning universal, regional, and local hydrological behaviors via machine learning applied to large-sample datasets. Hydrology and Earth System Sciences, 23(12), 5089–5110. https://doi.org/10.5194/hess-23-5089-2019

Rauthe, M., Steiner, H., Riediger, U., Mazurkiewicz, A., & Gratzki, A. (2013): A Central European precipitation climatology – Part I: Generation and validation of a high-resolution gridded daily data set (HYRAS). Meteorologische Zeitschrift, 22(3), 235–256. https://doi.org/10.1127/0941-2948/2013/0436

How to cite: Drost, S., Netzel, F., Wytzisk-Ahrens, A., and Mudersbach, C.: The Impact of Land Cover Data on Rainfall-Runoff Prediction Using an Entity-Aware-LSTM, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-1136, https://doi.org/10.5194/egusphere-egu21-1136, 2021.

15:45–15:47
|
EGU21-7041
|
ECS
Moritz Feigl et al.

The Function Space Optimization (FSO) method, recently developed by Feigl et al. (2020), automatically estimates the transfer function structure and coefficients needed to parameterize spatially distributed hydrological models. FSO is a symbolic regression method that searches for an optimal transfer function in a continuous optimization space using a text-generating neural network (a variational autoencoder).

We apply our method to the distributed hydrological model mHM (www.ufz.de/mhm), which is based on a priori defined transfer functions. We estimate mHM transfer functions for the parameters “saturated hydraulic conductivity” and “field capacity”, which both influence a range of hydrologic processes, e.g. infiltration and evapotranspiration.

The FSO and standard mHM approaches are compared using data from 229 basins distributed across Germany, comprising 7 large training basins and 222 smaller validation basins. For training, 5 years of data from the 7 gauging stations are used, while up to 35 years (median 32 years) are used for validation. This setup is adopted from a previous study by Zink et al. (2017), which tested mHM in the same basins and serves as a benchmark. Maps of soil properties (sand/clay percentage, bulk density) and topographic properties (aspect, slope, elevation) are used as possible inputs to the transfer functions.

FSO-estimated transfer functions significantly improved mHM performance in the validation catchments compared to the benchmark results, with only a small decrease in performance relative to the training results. These results demonstrate that automatic estimation of parameter transfer functions by FSO is beneficial for the parameterization of distributed hydrological models and allows for a robust parameter transfer to other locations.
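To make the notion of a transfer function concrete, a purely hypothetical example of the kind of expression FSO searches for is shown below; the structure and coefficients are invented, and FSO estimates both from data:

```python
import numpy as np

def ksat_transfer(sand, bulk_density, slope, a=1.2, b=-0.8, c=0.05):
    """Hypothetical transfer function: gridded soil/terrain maps -> a
    distributed field of saturated hydraulic conductivity."""
    return np.exp(a * sand + b * bulk_density + c * np.log1p(slope))
```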

 

Feigl, M., Herrnegger, M., Klotz, D., & Schulz, K. (2020). Function Space Optimization: A symbolic regression method for estimating parameter transfer functions for hydrological models. Water resources research, 56(10), e2020WR027385.

Zink, M., Kumar, R., Cuntz, M., & Samaniego, L. (2017). A high-resolution dataset of water fluxes and states for Germany accounting for parametric uncertainty. Hydrol. Earth Syst. Sci., 21, 1769–1790.

How to cite: Feigl, M., Schweppe, R., Thober, S., Herrnegger, M., Samaniego, L., and Schulz, K.: Catchment to model space mapping – learning transfer functions from data by symbolic regression, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-7041, https://doi.org/10.5194/egusphere-egu21-7041, 2021.

15:47–15:49
|
EGU21-3013
|
ECS
Sascha Flaig et al.

In order to prevent possible negative impacts of water abstraction in an ecologically sensitive moor south of Munich (Germany), a "predictive control" scheme is in place. We design an artificial neural network (ANN) to provide predictions of moor water levels and to separate hydrological from anthropogenic effects. As the moor is a dynamic system, we adopt the "Long Short-Term Memory" (LSTM) architecture.

To find the best LSTM setup, we train, test and compare LSTMs with two different structures: (1) the non-recurrent one-to-one structure, where the series of inputs is accumulated and fed into the LSTM; and (2) the recurrent many-to-many structure, where inputs gradually enter the LSTM (including LSTM forecasts from previous forecast time steps). The outputs of our LSTMs then feed into a readout layer that converts the hidden states into water level predictions. We hypothesize that the recurrent structure is the better one because it more closely resembles the typical structure of differential equations for dynamic systems, as they would usually be used for hydro(geo)logical systems. We evaluate the comparison using the mean squared error as test metric and conclude that the recurrent many-to-many LSTM performs better in the analyzed complex situations. It also produces plausible predictions with reasonable accuracy for a seven-day prediction horizon.
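A sketch of the recurrent many-to-many structure in PyTorch (sizes and names are illustrative, not the study's configuration) shows how each forecast feeds back as an input for the next step:

```python
import torch

n_forcings, hidden = 3, 64   # assumed sizes
lstm = torch.nn.LSTM(input_size=n_forcings + 1, hidden_size=hidden, batch_first=True)
readout = torch.nn.Linear(hidden, 1)   # hidden state -> water level

def forecast(forcings, last_level, horizon=7):
    # forcings: (batch, horizon, n_forcings); last_level: (batch, 1)
    state, level, preds = None, last_level, []
    for t in range(horizon):
        x = torch.cat([forcings[:, t], level], dim=-1).unsqueeze(1)
        out, state = lstm(x, state)    # hidden/cell states carry the memory
        level = readout(out[:, -1])    # the forecast feeds back as an input
        preds.append(level)
    return torch.cat(preds, dim=1)     # (batch, horizon) water levels
```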

Furthermore, we analyze the impact of preprocessing the meteorological data into evapotranspiration data using typical ETA models. Inserting knowledge into the LSTM in the form of ETA models (rather than having the LSTM learn the ETA relations implicitly) leads to superior prediction results. This finding aligns well with current ideas on physically-inspired machine learning.

As an additional validation step, we investigate whether our ANN is able to correctly identify both anthropogenic and natural influences and their interaction. To this end, we investigate two comparable pumping events under different meteorological conditions. Results indicate that all individual and combined influences of the input parameters on water levels can be represented well. The neural networks correctly recognize that the predominant precipitation and lower evapotranspiration during one pumping event lead to a smaller decline of the hydrograph.

To further demonstrate the capability of the trained neural network, scenarios of pumping events are created and simulated.

In conclusion, we show that more robust and accurate predictions of moor water levels can be obtained if available physical knowledge of the modeled system is used to design and train the neural network. The artificial neural network can be a useful instrument to assess the impact of water abstraction by quantifying the anthropogenic influence.

How to cite: Flaig, S., Praditia, T., Kissinger, A., Lang, U., Oladyshkin, S., and Nowak, W.: Prognosis of water levels in a moor groundwater system influenced by hydrology and water extraction using an artificial neural network , EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-3013, https://doi.org/10.5194/egusphere-egu21-3013, 2021.

15:49–15:51
|
EGU21-9714
|
ECS
|
Martin Gauch et al.

Rainfall–runoff predictions are generally evaluated on reanalysis datasets such as the DayMet, Maurer, or NLDAS forcings in the CAMELS dataset. While useful for benchmarking, this does not fully reflect real-world applications. There, meteorological information is much coarser, and fine-grained predictions are at best available until the present. For any prediction of future discharge, we must rely on forecasts, which introduce an additional layer of uncertainty. Thus, the model inputs need to switch from past data to forecast data at some point, which raises several questions: How can we design models that support this transition? How can we design tests that evaluate the performance of the model? Aggravating the challenge, the past and future data products may include different variables or have different temporal resolutions.

We demonstrate how to seamlessly integrate past and future meteorological data in one deep learning model, using the recently proposed Multi-Timescale LSTM (MTS-LSTM, [1]). MTS-LSTMs are based on LSTMs but can generate rainfall–runoff predictions at multiple timescales more efficiently. One MTS-LSTM consists of several LSTMs that are organized in a branched structure. Each LSTM branch processes a part of the input time series at a certain temporal resolution. Then it passes its states to the next LSTM branch—thus sharing information across branches. We generalize this layout to handovers across data products (rather than just timescales) through an additional branch. This way, we can integrate past and future data in one prediction pipeline, yielding more accurate predictions.
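The handover idea can be sketched as follows (a simplified rendering with invented sizes, not the exact MTS-LSTM formulation): the states of the branch that processed the past data initialize the branch that processes the forecast data.

```python
import torch

hindcast = torch.nn.LSTM(input_size=5, hidden_size=64, batch_first=True)  # past product
forecast = torch.nn.LSTM(input_size=3, hidden_size=64, batch_first=True)  # forecast product
handover = torch.nn.Linear(64, 64)   # learned mapping between branch states

past_x = torch.randn(8, 365, 5)      # e.g., one year of reanalysis forcings
future_x = torch.randn(8, 14, 3)     # e.g., a 14-step forecast with fewer variables

_, (h, c) = hindcast(past_x)                              # summarize the past
h, c = torch.tanh(handover(h)), torch.tanh(handover(c))   # hand the states over
out, _ = forecast(future_x, (h, c))                       # predict from forecast data
```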

 

[1] M. Gauch, F. Kratzert, D. Klotz, G. Nearing, J. Lin, and S. Hochreiter. “Rainfall–Runoff Prediction at Multiple Timescales with a Single Long Short-Term Memory Network.” Hydrology and Earth System Sciences Discussions, in review, 2020.

How to cite: Gauch, M., Kratzert, F., Nearing, G., Lin, J., Hochreiter, S., Brandstetter, J., and Klotz, D.: Multi-Timescale LSTM for Rainfall–Runoff Forecasting, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-9714, https://doi.org/10.5194/egusphere-egu21-9714, 2021.

15:51–15:53
|
EGU21-15103
|
ECS
|
Reyhaneh Hashemi et al.

In the field of deep learning, the LSTM belongs to the family of recurrent neural network architectures. The distinctive capability of the LSTM is learning non-linear long-term dependency structures. This makes the LSTM a promising candidate for prediction tasks in non-linear, time-dependent systems, such as the prediction of runoff in a catchment. This work presents a comparative framework between an LSTM model and a proven conceptual model, GR4J. The performance of the two models is studied with respect to the length of the study period, the surface area, and the hydrological regime of 491 gauged French catchments covering a wide range of geographical and hydroclimatic conditions.

Meteorological forcing data (features) include daily time series of catchment-averaged total precipitation, potential evapotranspiration, and air temperature. The hydrometric data consist of daily time series of discharge (the target variable). The length of the study period varies within the sample depending on the availability of complete discharge records and is 15 years on average.

In equivalent experimental scenarios, the features are kept the same in both models and the target variable is predicted for each catchment by both models. Their performance is then evaluated and compared. To do this, the available time series are split into three independent consecutive subsets, namely a training set, a validation set, and an evaluation set, constituting 50%, 20%, and 30% of the study period, respectively. The LSTM model is trained on the training and validation sets and predicts the target on the evaluation set. The four parameters of the GR4J model are calibrated using the training set, and the calibrated model is then used to estimate discharges for the evaluation set.
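The chronological 50/20/30 split can be written compactly (a trivial sketch):

```python
def split_series(ts):
    """Chronological split into training (50%), validation (20%),
    and evaluation (30%) subsets, as described above."""
    i, j = int(0.5 * len(ts)), int(0.7 * len(ts))
    return ts[:i], ts[i:j], ts[j:]
```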

The results suggest that the hydrological regime of the catchment is the main factor behind the performance pattern of the LSTM model. In the Uniform and Nival hydrological regimes, which involve flow regimes with dominant long-term processes, the LSTM model outperforms the GR4J model. However, in the Pluvial-Mediterranean and Pluvial-Nival regimes, characterised by peaks in several seasons, the LSTM model underperforms the GR4J model.

How to cite: Hashemi, R., Brigode, P., Garambois, P.-A., and Javelle, P.: Runoff predictive capability of a simple LSTM model versus a proven conceptual model between diverse hydrological regimes, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-15103, https://doi.org/10.5194/egusphere-egu21-15103, 2021.

15:53–15:55
|
EGU21-2778
|
ECS
Thomas Lees et al.

Techniques from the field of machine learning have shown considerable promise in rainfall-runoff modelling. This research offers three novel contributions to the advancement of this field: a study of the performance of LSTM-based models in a Great Britain (GB) hydrological context; a diagnosis of hydrological processes that data-driven models simulate well but conceptual models struggle with; and an exploration of methods for interpreting the internal cell states of LSTMs.

In this study we train two deep learning models, a Long Short-Term Memory (LSTM) network and an Entity-Aware LSTM (EA-LSTM), to simulate discharge for 518 catchments across Great Britain using a newly published dataset, CAMELS-GB. We demonstrate that the LSTM models are capable of simulating discharge for a large sample of catchments across Great Britain, achieving a mean catchment Nash-Sutcliffe Efficiency (NSE) of 0.88 for the LSTM and 0.86 for the EA-LSTM, with no stations having an NSE < 0. We compare these models against a series of conceptual models which have been externally calibrated and used as a benchmark (Lane et al., 2019).

Alongside robust performance for simulating discharge, we note the potential for data-driven methods to identify hydrological processes that are present in the underlying data but that the FUSE conceptual models are unable to capture. We therefore calculate the relative improvement of the LSTMs over the conceptual models, ∆NSE. We find that the largest improvement of the LSTM models over our benchmark occurs in the summer months and in the South East of Great Britain.

We also demonstrate that the internal “memory” of the LSTM correlates with soil moisture, despite the LSTM not receiving soil moisture as an input. This process of “concept-formation” offers three interesting findings. It provides a novel method for deriving soil moisture estimates. It suggests the LSTM is learning physically realistic representations of hydrological processes. Finally, this process of concept formation offers the potential to explore how the LSTM is able to produce accurate simulations of discharge, and the transformations that are learned from inputs (temperature, precipitation) to outputs (discharge).
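Such a probing analysis can be sketched as follows (illustrative, assuming the cell states have been extracted and an independent soil-moisture series is available):

```python
import numpy as np

def cell_state_correlations(cell_states, soil_moisture):
    """cell_states: (time, hidden_size) array extracted from the LSTM;
    soil_moisture: (time,) independent probe series.
    Returns the correlation of each cell-state unit with the probe."""
    return np.array([np.corrcoef(cell_states[:, k], soil_moisture)[0, 1]
                     for k in range(cell_states.shape[1])])
```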

References:
Lane, R. A., Coxon, G., Freer, J. E., Wagener, T., Johnes, P. J., Bloomfield, J. P., Greene, S., Macleod, C. J., and Reaney, S. M.: Benchmarking the predictive capability of hydrological models for river flow and flood peak predictions across over 1000 catchments in Great Britain, Hydrology and Earth System Sciences, 23, 4011–4032, 2019.

How to cite: Lees, T., Buechel, M., Anderson, B., Slater, L., Reece, S., Coxon, G., and Dadson, S.: Rainfall-Runoff Simulation and Interpretation in Great Britain using LSTMs, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-2778, https://doi.org/10.5194/egusphere-egu21-2778, 2021.

15:55–15:57
|
EGU21-8393
|
ECS
Panagiotis Mavritsakis et al.

Large parts of the world rely on rainfed agriculture for their food security. In Africa, 90% of agricultural yields rely solely on precipitation for their water supply, and approximately 80% of the population's livelihood is highly dependent on local food production. Parts of Ghana are prone to droughts and flood events due to the increasing variability of precipitation. Crop growth is sensitive to wet- and dry-spell phenomena during the rainy season. To support rural communities and smallholder farmers in their efforts to adapt to climate change and natural variability, it is crucial to have good predictions of rainfall and related dry/wet-spell indices.

This research constitutes an attempt to assess the dry-spell patterns in the northern region of Ghana, near Burkina Faso. We aim to develop a model which, by exploiting satellite products, overcomes the poor temporal and spatial coverage of existing ground precipitation measurements. For this purpose, 14 meteorological stations featuring different temporal coverage are used together with satellite-based precipitation and cloud-top temperature products.

We will compare conventional copula models and deep-learning algorithms to establish a link between satellite products and field rainfall data for dry-spell assessment. The deep-learning architecture should combine the feature-extraction capability of convolution (Convolutional Neural Networks) with the ability to capture sequences (Recurrent Neural Networks); for this purpose we use Long Short-Term Memory networks (LSTMs). Regarding the copula modeling, Archimedean, Gaussian, and extreme-value copulas will be examined as modeling options.

Using these models we will attempt to exploit the long temporal coverage of the satellite products in order to overcome the poor temporal and spatial coverage of existing ground precipitation measurements. In doing so, our final objective is to enhance our knowledge of dry-spell characteristics and thus provide more reliable climatic information to the farmers of Northern Ghana.

How to cite: Mavritsakis, P., ten Veldhuis, M.-C., Schleiss, M., and Taormina, R.: Dry-spell assessment through rainfall downscaling comparing deep-learning algorithms and conventional statistical frameworks in a data scarce region: The case of Northern Ghana, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-8393, https://doi.org/10.5194/egusphere-egu21-8393, 2021.

15:57–15:59
|
EGU21-9803
|
ECS
|
Modou Mbaye et al.

Traditional field calibration of cosmic-ray neutron sensors (CRNS) for area-wide soil moisture monitoring is based on time-consuming and often expensive soil sample collection and conventional soil moisture measurement. This calibration requires two field campaigns, one under dry and one under wet soil conditions. Depending on the agro-ecological context, however, more field campaigns may be required, due for instance to interference from crop biomass water. In addition, the current calibration method includes corrections for several parameters influencing neutron counts (the proxy for soil moisture), such as soil lattice water, organic carbon, and biomass, all of which need to be measured.

The main objective of this study is to investigate and develop an alternative to the currently available field calibration method. To this end, a deep learning model built with the TensorFlow machine learning framework is used to calibrate the cosmic-ray sensor.

The deep learning model is built with more than 8 years of CRNS data from Petzenkirchen (Austria) and consists of four hidden layers, each with an activation function followed by batch normalization. Prior to building the model, pertinent variables were selected through multivariate correlation analysis. Of nine candidate features, five were found pertinent and included in the neural network architecture: the raw neutron counts (N1 and N2), humidity (H), air pressure (P4) and temperature (T7).
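The described architecture (five inputs, four hidden layers with batch normalization, one soil-moisture output) might be sketched in TensorFlow as follows; the layer widths and optimizer are assumptions:

```python
import tensorflow as tf

layers = [tf.keras.Input(shape=(5,))]        # N1, N2, H, P4, T7
for _ in range(4):                           # four hidden layers, as described
    layers.append(tf.keras.layers.Dense(64, activation="relu"))
    layers.append(tf.keras.layers.BatchNormalization())
layers.append(tf.keras.layers.Dense(1))      # soil moisture output
model = tf.keras.Sequential(layers)
model.compile(optimizer="adam", loss="mse")
```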

The preliminary results show a linear regression fit with an R² of 0.97, and the model predicted soil moisture with less than 1% error.

These preliminary results are encouraging and show that a machine learning-based method could be a valuable alternative to the current field calibration method for CRNS.

Further investigations will test the model under different agro-ecological conditions, such as in Nebraska, USA. In addition, further input variables will be considered in the development of the machine learning-based models to bring in agro-ecological information related to the CRNS footprint, such as crop cover, growth stage, and precipitation.

How to cite: Mbaye, M., Said, H., Franz, T., Weltin, G., Dercon, G., Heng, L. K., Fulajtar, E., Strauss, P., Rab, G., and Ndiaye, M.: Deep learning approach for calibrating Cosmic-Ray Neutron Sensors (CRNS) in area-wide soil moisture monitoring, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-9803, https://doi.org/10.5194/egusphere-egu21-9803, 2021.

15:59–16:01
|
EGU21-13900
|
ECS
|
Paul Muñoz et al.

Current efforts in deep learning-based modeling address real-world problems with complex or even not fully understood interactions between predictors and target variables. A special artificial neural network, the Long Short-Term Memory (LSTM) network, is a promising data-driven modeling approach for dynamic systems, yet it has been little explored in hydrological applications such as runoff forecasting. An additional challenge to the forecasting task arises from the uncertainties introduced when readily available Remote Sensing (RS) imagery is used to overcome the lack of in-situ data describing the runoff-governing processes. Here, we propose a runoff forecasting framework for a 300 km² mountain catchment located in the tropical Andes of Ecuador. The framework consists of real-time data acquisition, preprocessing and runoff forecasting for lead times between 1 and 12 hours. LSTM models were fed with 18 years of hourly runoff, and with precipitation data from the novel PERSIANN-Dynamic Infrared Rain Rate near real-time (PDIR-Now) product. Model efficiencies according to the NSE metric ranged from 0.959 to 0.554 for the 1- to 12-hour models, respectively. Considering that the concentration time of the catchment is approximately 4 hours, the proposed framework becomes a useful tool for delivering runoff forecasts to decision makers, stakeholders and the public. This study has shown the suitability of using the PDIR-Now product in an LSTM modeling framework for real-time hydrological applications. Future endeavors must focus on improving data representation and data assimilation through feature-engineering strategies.

Keywords: Long Short-Term Memory; PDIR-Now; Hydroinformatics; Runoff forecasting; Tropical Andes

How to cite: Muñoz, P., Muñoz, D. F., Orellana-Alvear, J., Moftakhari, H., Moradkhani, H., and Célleri, R.: Long Short-Term Memory Networks for Real-Time Runoff Forecasting using Remotely Sensed Data, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-13900, https://doi.org/10.5194/egusphere-egu21-13900, 2021.

16:01–16:03
|
EGU21-4400
|
Takeyoshi Nagasato et al.

Nowadays, the convolutional neural network (CNN), a kind of deep neural network, has been shown in previous studies to have high applicability to precipitation downscaling. A CNN has various hyperparameters, which greatly affect its estimation accuracy. In computer science, research on hyperparameter settings has been conducted, especially for image recognition. However, few studies have investigated the sensitivity of CNN-based precipitation downscaling to hyperparameters. Therefore, this study conducted a sensitivity analysis of the hyperparameters of a CNN for precipitation downscaling. Atmospheric reanalysis data were used as the inputs, and daily average precipitation at the basin level was used as the target data. The study focused on the hyperparameters of the CNN that have a great influence on the feature extraction from the input data (such as the kernel size and the number of output channels in the convolutional layers). Considering that the learning process of a CNN involves randomness, the CNN was trained 200 times for each hyperparameter setting and the estimation accuracy was evaluated. The detailed sensitivity analysis showed that the estimation accuracy is not necessarily improved by making the CNN structure deeper. Conversely, it also showed that initial conditions, such as batch selection and bias in the CNN learning process, may have relatively large effects on the learning results.
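The repeated-training sweep could be organized as below; build_cnn and train_and_evaluate are hypothetical placeholders for the study's model construction and training/evaluation routines:

```python
import itertools
import numpy as np

results = {}
for kernel, channels in itertools.product([3, 5, 7], [16, 32, 64]):
    # 200 repetitions per setting to average over training randomness
    scores = [train_and_evaluate(build_cnn(kernel, channels, seed=s))  # hypothetical helpers
              for s in range(200)]
    results[(kernel, channels)] = (np.mean(scores), np.std(scores))
```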

How to cite: Nagasato, T., Ishida, K., Yokoo, K., Kiyama, M., and Amagasaki, M.: Sensitivity Analysis of the Hyperparameters of CNN for Precipitation Downscaling, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-4400, https://doi.org/10.5194/egusphere-egu21-4400, 2021.

16:03–16:05
|
EGU21-9091
|
ECS
Annika Nolte et al.

Groundwater level dynamics are very sensitive to groundwater withdrawal, but the effects and magnitude of withdrawal – especially in combination with natural fluctuations – must often be estimated due to missing or inaccurate information on local pumping activities in an area. This study examines the potential of deep learning applications at large spatial scales to separate the impacts of local withdrawal activities from natural – meteorological and environmental – impacts on groundwater level dynamics. We will use big-data elements from a newly constructed global groundwater database in a single long short-term memory (LSTM) model to examine scale-dependent impacts. The data used in the model consist of continuous groundwater level observations and catchment attributes – spatially heterogeneous but temporally static attributes (e.g. topography) and continuous observations of the meteorological forcing (e.g. precipitation) – from several hundred catchments of shallow coastal aquifers on different continents. Our approach is to use only freely accessible, globally available data sources for the catchment attributes. We will test how relationships between groundwater level dynamics and catchment attributes at different scales can improve the interpretability of groundwater level simulations using deep learning techniques.

How to cite: Nolte, A., Bender, S., Hartmann, J., and Baltruschat, S.: Scale-dependent impacts of natural and anthropogenic drivers on groundwater level dynamics – analysis of shallow coastal aquifers using deep learning, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-9091, https://doi.org/10.5194/egusphere-egu21-9091, 2021.

16:05–16:07
|
EGU21-49
|
ECS
|
Timothy Praditia et al.

Artificial Neural Networks (ANNs) have been widely applied to model hydrological problems with the increasing availability of data and computing power. ANNs are particularly useful for predicting dynamic variables and for learning or discovering constitutive relationships between variables. In hydrology, a specific example of such a relationship takes the form of the governing equations of contaminant transport in porous media flow. Fluid flow in porous media is a spatio-temporal problem, and it requires a certain numerical structure to solve. ANNs, on the other hand, are black-box models that lack interpretability, especially in their structure and predictions. Therefore, the discovery of relationships using ANNs is not straightforward. Recently, a distributed spatio-temporal ANN architecture (DISTANA) was proposed. Its structure consists of transition kernels, which learn the connectivity between a spatial cell and its neighboring cells, and prediction kernels, which transform the transition kernels' output to predict the quantities of interest at the subsequent time step. Another method, the Universal Differential Equation (UDE) for scientific machine learning, was also recently introduced. The UDE solves spatio-temporal problems by using a Convolutional Neural Network (CNN) structure to handle the spatial dependency and then approximating the differential operator with an ANN. This differential operator is solved with Ordinary Differential Equation (ODE) solvers to handle the time dependency. In our work, we combine both methods to design an improved network structure for solving a contaminant transport problem in porous media, governed by the non-linear diffusion-sorption equation. The designed architecture consists of flux kernels and state kernels. Flux kernels are necessary to calculate the connectivity between neighboring cells, and are especially useful for handling different types of boundary conditions (Dirichlet, Neumann, and Cauchy). Furthermore, the state kernels are able to predict both observable states and mass-conserved states (total and dissolved contaminant concentration) separately. Additionally, to discover the constitutive relationship of sorption (i.e. the non-linear retardation factor R), we regularize its training to reflect the known monotonicity of R. As a result, our network is able to approximate R generated with the linear, Freundlich, and Langmuir sorption models, as well as the contaminant concentration, with high accuracy.
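One way to impose such a monotonicity constraint is a gradient penalty on the learned R(c); the sketch below (in PyTorch, with a non-increasing sign convention as an assumption) is illustrative rather than the authors' formulation:

```python
import torch

def monotonicity_penalty(r_net, c_grid):
    """Penalizes positive slopes of the learned retardation factor R(c);
    the penalty is zero only where dR/dc <= 0 on the concentration grid."""
    c = c_grid.clone().requires_grad_(True)
    dr_dc = torch.autograd.grad(r_net(c).sum(), c, create_graph=True)[0]
    return torch.relu(dr_dc).mean()

# Example: a small positive-valued network for R(c), evaluated on [0, 1].
r_net = torch.nn.Sequential(torch.nn.Linear(1, 16), torch.nn.Tanh(),
                            torch.nn.Linear(16, 1), torch.nn.Softplus())
penalty = monotonicity_penalty(r_net, torch.linspace(0, 1, 100).unsqueeze(-1))
```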

How to cite: Praditia, T., Oladyshkin, S., and Nowak, W.: Universal Differential Equation for Diffusion-Sorption Problem in Porous Media Flow, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-49, https://doi.org/10.5194/egusphere-egu21-49, 2020.

16:07–16:09
|
EGU21-16108
Chaopeng Shen et al.

Watersheds around the world are often perceived as unique, each requiring customized study. Models built uniquely for each watershed, in general, cannot be leveraged for other watersheds. It is also customary practice in hydrology and related geoscientific disciplines to divide the whole domain into multiple regimes and study each region separately, in an approach sometimes called regionalization or stratification. However, in the era of big-data machine learning, models can learn across regions and identify commonalities and differences. In this presentation, we first show that machine learning can derive highly functional continental-scale models for streamflow, evapotranspiration, and water quality variables. Next, through two hydrologic examples (soil moisture and streamflow), we argue that unification can often significantly outperform stratification, and we systematically examine an effect we call data synergy, where the results of DL models improve when data are pooled together from characteristically different regions and variables. In fact, the performance of the DL models benefited from some diversity in the training data even at similar data quantity. Moreover, allowing heterogeneous training data makes much larger training datasets eligible, which is an inherent advantage of DL. We also share our recent developments in advancing hydrologic deep learning and machine-learning-driven parameterization.

How to cite: Shen, C., Rahmani, F., Fang, K., Wei, Z., and Tsai, W.-P.: On the data synergy effect of large-sample multi-physics catchment modeling with machine learning, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-16108, https://doi.org/10.5194/egusphere-egu21-16108, 2021.

16:09–16:11
|
EGU21-10266
|
ECS
Radosław Szostak et al.

Unmanned Aerial Vehicles (UAVs) can be an excellent tool for environmental measurements due to their ability to reach inaccessible places and their fast data acquisition over large areas. In particular, drones have potential applications in hydrology, as they can be used to create photogrammetric digital elevation models (DEMs) of the terrain, providing the high-resolution spatial distribution of the water level in a river to be fed into hydrological models. Nevertheless, photogrammetric algorithms generate distortions in the DEM over water bodies. This is due to light penetration below the water surface and the lack of static characteristic points on the water surface that can be distinguished by the photogrammetric algorithm. These disturbances could be corrected by applying deep learning methods. For this purpose, it is necessary to build a training dataset containing DEMs before and after water-surface denoising. A method has been developed to prepare such a dataset. It is divided into several stages. In the first step, photogrammetric surveys and geodetic water level measurements are performed. The second includes the generation of DEMs and orthomosaics using photogrammetric software. In the last step, the measured water levels are interpolated to obtain a water-surface plane, which is applied to the DEMs to correct the distortions. The resulting dataset was used to train a deep learning model based on convolutional neural networks. The proposed method has been validated on observation data covering part of the Kocinka river catchment in central Poland.

This research has been partly supported by the Ministry of Science and Higher Education Project “Initiative for Excellence – Research University” and Ministry of Science and Higher Education subsidy, project no. 16.16.220.842-B02 / 16.16.150.545.

How to cite: Szostak, R., Wachniew, P., Zimnoch, M., Ćwiąkała, P., Puniach, E., and Pietroń, M.: Denoising of river surface photogrammetric DEMs using deep learning, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-10266, https://doi.org/10.5194/egusphere-egu21-10266, 2021.

16:11–16:13
|
EGU21-9145
|
ECS
|
Sadegh Sadeghi Tabas and Vidya Samadi

Deep learning (DL) is becoming an increasingly important tool for producing accurate streamflow predictions across a wide range of spatial and temporal scales. However, classical DL networks do not provide uncertainty information but only return a point prediction. The Monte Carlo Dropout (MC-Dropout) approach offers a mathematically grounded framework for reasoning about DL uncertainty; it was used here in the form of random diagonal matrices to introduce randomness into the streamflow prediction process. This study employed Recurrent Neural Networks (RNNs) to simulate daily streamflow records across a coastal-plain drainage system, the Northeast Cape Fear River Basin, North Carolina, USA. We combined the MC-Dropout approach with the DL algorithm to make the streamflow simulation more robust to potential overfitting by introducing random perturbations during the training period. Daily streamflow was calibrated over 2000-2010 and validated over 2010-2014. Our results provide unique and strong evidence that variational sampling via MC-Dropout acts as a dissimilarity detector. The MC-Dropout method successfully captured the predictive error after tuning a hyperparameter on a representative training dataset. This approach mitigates the problem of representing model uncertainty in DL simulations without sacrificing computational complexity or accuracy metrics, and it can be used for training any DL-based streamflow (time-series) model with dropout.
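The inference side of MC-Dropout is simple to sketch (a generic PyTorch rendering, not the study's code): dropout stays active at prediction time, and repeated stochastic forward passes form a predictive distribution.

```python
import torch

def mc_dropout_predict(model, x, n_samples=100):
    model.train()                         # keeps Dropout layers stochastic
    with torch.no_grad():
        draws = torch.stack([model(x) for _ in range(n_samples)])
    return draws.mean(dim=0), draws.std(dim=0)   # point forecast and spread
```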

How to cite: Sadeghi Tabas, S. and Samadi, V.: Model Uncertainty in Deep Learning Simulation of Daily Streamflow with Monte Carlo Dropout, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-9145, https://doi.org/10.5194/egusphere-egu21-9145, 2021.

16:13–16:15
|
EGU21-8638
|
ECS
Timothy Tiggeloven et al.

In order to better understand current coastal flood risk, it is critical to be able to predict the characteristics of non-tidal residuals (hereafter referred to as surges), such as their temporal variation and the influence of coastal complexities on the magnitude of storm surge levels. In this study, we use an ensemble of deep learning (DL) models to predict hourly surge levels using four different types of neural networks and evaluate their performance. Among DL models, artificial neural networks (ANNs) have been popular for surge level prediction, but other DL model types have not been investigated yet. In this contribution, we use three DL approaches – a CNN, an LSTM, and a combined CNN-LSTM model – to capture temporal, spatial and spatio-temporal dependencies between atmospheric conditions and surges for 736 tide gauge locations. Using the high temporal and spatial resolution atmospheric reanalysis dataset ERA5 from ECMWF as predictors, we train, validate and test surge models based on observed hourly surge levels derived from the GESLA-2 dataset. We benchmark our DL results against a simple probabilistic reference model based on climatology. This study shows promising results for predicting the temporal evolution of surges with DL approaches, and gives insight into the skill that can be gained using DL approaches with different architectures for surge prediction. We therefore foresee a wide range of advantages in using DL models for coastal applications: probabilistic coastal flood hazard assessment, rapid prediction of storm surge estimates, and future predictions of surge levels.
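A combined CNN-LSTM of the kind described can be sketched as follows (all sizes, and the choice of three atmospheric fields per hour, are illustrative): a CNN encodes each hourly field, and an LSTM models the temporal evolution of the encodings.

```python
import torch

class CNNLSTMSurge(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = torch.nn.Sequential(          # spatial encoder per time step
            torch.nn.Conv2d(3, 16, 3, padding=1), torch.nn.ReLU(),
            torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten())
        self.lstm = torch.nn.LSTM(16, 32, batch_first=True)  # temporal model
        self.head = torch.nn.Linear(32, 1)

    def forward(self, fields):                       # (batch, time, 3, H, W)
        b, t = fields.shape[:2]
        z = self.encoder(fields.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(z)
        return self.head(out[:, -1])                 # surge level at the last hour
```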

How to cite: Tiggeloven, T., Couasnon, A., van Straaten, C., Muis, S., and Ward, P.: Exploring deep learning approaches to predict hourly evolution of surge levels, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-8638, https://doi.org/10.5194/egusphere-egu21-8638, 2021.

16:15–16:17
|
EGU21-4398
|
ECS
|
Kazuki yokoo et al.

In recent years, deep learning has been applied to various problems in natural science, including hydrology, and the results show its high applicability. Several studies have performed rainfall-runoff modeling by means of a deep learning method, LSTM (Long Short-Term Memory). The LSTM is a kind of RNN (Recurrent Neural Network) that is suitable for modeling time series data with long-term dependence, and these studies showed the capability of the LSTM for rainfall-runoff modeling. However, few studies have investigated the effects of the input variables on the estimation accuracy. Therefore, this study investigated the effects of the selection of input variables on the accuracy of a rainfall-runoff model based on the LSTM. As the study watershed, we selected a snow-dominated watershed, the Ishikari River basin in the Hokkaido region of Japan. The flow discharge was obtained at a gauging station near the outlet of the river as the target data. As input data to the model, meteorological variables were obtained from an atmospheric reanalysis dataset, ERA5, in addition to a gridded precipitation dataset. The selected meteorological variables were air temperature, evaporation, longwave radiation, shortwave radiation, and mean sea level pressure. The rainfall-runoff model was then trained with several combinations of the input variables, and the model accuracy was compared among the combinations. The use of meteorological variables in addition to precipitation and air temperature as input improved the model accuracy. In some cases, however, the model accuracy worsened when more variables were used as input. The results indicate the importance of selecting adequate input variables for rainfall-runoff modeling with an LSTM.
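The compared input sets can be enumerated mechanically; the sketch below, which always includes precipitation and air temperature (an assumption consistent with the description), is illustrative:

```python
import itertools

base = ("precipitation", "air_temperature")
extras = ["evaporation", "longwave_radiation", "shortwave_radiation",
          "mean_sea_level_pressure"]
combos = [base + subset
          for k in range(len(extras) + 1)
          for subset in itertools.combinations(extras, k)]
print(len(combos))   # 16 candidate input sets to train and compare
```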

How to cite: yokoo, K., ishida, K., nagasato, T., and Ercan, A.: Effect of input variables on rainfall-runoff modeling using a deep learning method, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-4398, https://doi.org/10.5194/egusphere-egu21-4398, 2021.

16:17–17:00
Meet the authors in their breakout text chats
