It took a couple of years, but I finally have the two cornerstones of my research in place: first, a definition of what resilience means for wastewater infrastructure and how to measure it; and second, a calibrated and validated model of a full-scale treatment plant, complex enough to model resilience against certain types of stressors. The details of this research can be accessed through two publications: here, and in the proceedings here (soon to be published).
So, we are all set! Now that we have the capabilities, you are probably wondering which stressors I looked into. The answer came out of a very successful training course in Birmingham, where several water utilities were invited to learn more about TreatRec and to give feedback on our research. I have been studying the effects of stormwater, power outages and flooding on wastewater resilience.
Since it is a little early to disclose all the findings of the modelling study, I will instead focus on a related topic: how to ensure the robustness of the simulations so that valid conclusions can be drawn. For example, a critical part of a wastewater treatment plant model is the settling sub-model. These models, whether you use the simplest approach or a 10-layer Takács model with biological processes, are not predictive. That means that, regardless of the calibration, any change in settleability needs to be entered manually. To verify that our hypothesis holds over a range of conditions, we decided to introduce a sensitivity analysis of the Sludge Volume Index (SVI, an indicator of sludge settleability). In other words, each scenario must be modelled for a range of SVI values.
However, one of the strategies we are testing to mitigate sludge washout during storms is varying the external recirculation of sludge during the storm, for which we are already running a sensitivity analysis on a range of RAS (return activated sludge) ratios. That means we now have to run a simulation for each RAS-SVI pair, for each scenario we want to try. In other words, lots of simulations that can take days to finish.
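As a rough sketch of how such a two-parameter sweep can be set up in Python (the SVI and RAS values below are purely illustrative, and `run_simulation` is a hypothetical stand-in for whatever call actually drives the plant simulator):

```python
from itertools import product

# Hypothetical parameter ranges; the real values depend on the plant studied.
svi_values = [80, 120, 160, 200]     # Sludge Volume Index, mL/g
ras_ratios = [0.5, 0.75, 1.0, 1.25]  # return sludge flow / influent flow

def run_simulation(svi, ras):
    """Placeholder for a call into the treatment plant simulator."""
    return {"svi": svi, "ras": ras}

# One simulation per RAS-SVI pair, repeated for every scenario of interest.
results = [run_simulation(svi, ras) for svi, ras in product(svi_values, ras_ratios)]
```

Even this modest 4 x 4 grid means 16 runs per scenario, which is where the days of computing time come from.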
Apart from the time needed to compute the simulations and other caveats, an important challenge is how to analyse the output data. Firstly, a single simulation can produce hundreds of megabytes of data if we record everything, so we need to decide which data to extract beforehand. In this case it was important to extract the suspended solids concentration in the clarifier layers, and the production of N2 gas, which might indicate settling problems. This requires foresight (and usually repeating simulations too).
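Picking the variables up front can be as simple as filtering the exported columns. A toy sketch with pandas (the column names here are invented; real simulator exports will differ):

```python
import pandas as pd

# Toy stand-in for one sheet of simulator output; real exports carry
# many more columns (flows, concentrations per layer, gas production...).
raw = pd.DataFrame({
    "time_h": [0, 1, 2],
    "TSS_layer_1_mg_L": [3200, 3400, 3600],
    "TSS_layer_10_mg_L": [15, 22, 40],
    "N2_offgas_m3_d": [0.0, 0.1, 0.4],
    "pH": [7.1, 7.0, 7.0],  # recorded by the simulator but not needed here
})

# Keep only the variables chosen beforehand: clarifier solids and N2 gas.
keep = [c for c in raw.columns if c.startswith(("time", "TSS", "N2"))]
subset = raw[keep]
```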
So, after a few days you end up with 20 Excel files, each with 10 tabs holding dozens of columns of time-series data. Now it is time to bring out the Python skills and analyse the output: compute some statistics, run an algorithm to work out compliance time, recovery time, costs, and so on, and then do some data crunching to plot the results in a meaningful way.
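To give an idea of what such an algorithm might look like, here is a minimal sketch of computing non-compliance and recovery times from an effluent time series, assuming the data have been loaded into a pandas Series indexed by time. The discharge limit, the "end of storm" timestamp and the toy numbers are all hypothetical, not values from the actual study:

```python
import pandas as pd

def compliance_and_recovery(effluent, limit, stress_end):
    """Non-compliance time and recovery time for a time-indexed
    effluent concentration series (regular sampling assumed).

    effluent   : pd.Series indexed by timestamp (e.g. effluent TSS, mg/L)
    limit      : discharge limit, same units as the series
    stress_end : timestamp when the stressor (e.g. the storm) ends
    """
    step = effluent.index[1] - effluent.index[0]
    # Total time spent above the discharge limit.
    non_compliance = (effluent > limit).sum() * step
    # Recovery time: from the end of the stressor until the plant
    # is back below the limit for good.
    after = effluent[effluent.index >= stress_end]
    breaches = after[after > limit]
    recovery = (breaches.index[-1] + step - stress_end) if len(breaches) else step * 0
    return non_compliance, recovery

# Toy example with hourly data: a storm peak pushing TSS over a 35 mg/L limit
idx = pd.date_range("2017-06-01", periods=6, freq="h")
tss = pd.Series([20, 45, 60, 40, 25, 20], index=idx)  # mg/L
nc, rec = compliance_and_recovery(tss, limit=35, stress_end=idx[2])
```

In this toy series the plant is over the limit for three hours in total and needs two more hours after the storm peak to get back under it; the real analysis runs the same kind of logic over every RAS-SVI combination.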
Overall, the modelling process is quite complex and involves a great deal of data analysis, statistics and expert knowledge to ask the right questions and interpret the answers.