Author Archives: tonyladson

About tonyladson

I work as a hydrologist. I'm a director of a specialist consulting company, Moroka Pty Ltd, and have teaching and research appointments at Monash University, the University of Melbourne and the University of Adelaide.

Log-normal flood frequency analysis

Although the log-normal distribution is not usually used for flood frequency estimation in Australia, it can be useful for comparing results with more common approaches.

The general equation for flood quantile estimation (from Chow, 1951) is:

x_t = \mu+K_t\sigma      (1)

where K_t is the frequency factor, which depends on the distribution of the flood data and the probability of interest, and \mu and \sigma are the mean and standard deviation.

Frequency factors

The classical frequency factor for the log-normal distribution is:

K_t = \left( \frac{1}{C_v} \right) \left[ \exp \left\{ Z\left( \log \left(1 + C_v^2 \right) \right)^\frac{1}{2} - 0.5\left( \log \left( 1 + C_v^2 \right) \right) \right\} -1\right]      (2)

where Z is a standard normal deviate and C_v is the coefficient of variation, \frac{\sigma}{\mu}.  The standard normal deviate can be calculated, for example, using the qnorm function in R.  So for the 100-year flood, with exceedance probability 1/100, Z = 2.326.

An R function to calculate this log-normal frequency factor is:

# Frequency factor for the log-normal distribution
# m - mean
# s - standard deviation
# p - exceedance probability

FF_LogNormal <- function(m, s, p) {
  cv <- s/m
  z <- qnorm(1 - p)
  (1/cv) * (exp(z * sqrt(log(1 + cv^2)) - 0.5 * log(1 + cv^2)) - 1)
}

 

Example 1

Kite (2004) provides an example we can use for testing.

Annual maximum discharges of the Saint John River at Fort Kent, New Brunswick, for the 37 years from 1927 to 1963 have a mean of 81,000 cfs and a standard deviation of 22,800 cfs. Estimate the 100-year flood.

As calculated below, Q100 = 148,000 cfs.

m <- 81000
s <- 22800
p <- 0.01

Kt <- FF_LogNormal(m, s, p) # 2.948

(Q100 <- m + Kt * s)
#[1] 148221.3

If flood data are log-normally distributed, that means the logarithms of the flood data are normally distributed.  This suggests there are two ways of calculating the flood quantiles.  We can use the data as-is along with the log-normal frequency factor (as in the above example), or take the logs of the data and use the frequency factor from the normal distribution.

Example 2

Continuing with the example from Kite, the mean and standard deviation of the logarithms of the 37 events from the Saint John River are 11.263 and 0.284 (working in cfs).

The frequency factor for the 100-year flood, based on the normal distribution, will be 2.326, so the 100-year flood estimate is 150,800 cfs.  This is similar to the previous estimate but not exactly the same.

m.log <- 11.263
s.log <- 0.284

Kt <- qnorm(1 - 0.01) #2.326348

(Q100 <- exp(m.log + Kt*s.log))
#[1] 150795.9

Example 3

Let's repeat this analysis using some Australian data.  An annual flood series for the Hunter River at Singleton is available here.

The two estimates for the 100-year flood are 10,500 cumec using data as-is and 13,800 cumec when calculating in the log domain.

# Hunter River at Singleton
library(repmis)
my.url <- 'https://dl.dropboxusercontent.com/u/10963448/Singleton.csv'

singleton <- source_data(my.url)
str(singleton)

m <- mean(singleton$`Peak (cumec)`) # 1401.7
s <- sd(singleton$`Peak (cumec)`)   # 2312.9
Kt <- FF_LogNormal(m, s, 0.01) # 3.917
(Q100 <- m + Kt * s)
# 10460.5

m.log <- mean(log(singleton$`Peak (cumec)`)) # 6.425
s.log <- sd(log(singleton$`Peak (cumec)`))   # 1.336
Kt <- qnorm(1 - 0.01) # 2.326
(Q100 <- exp(m.log + Kt*s.log))
# 13818.9

Bayesian frequency factor

Kuczera (1999) derives a frequency factor for the log-normal distribution that takes account of the uncertainty in the parameters (the mean and standard deviation), which must be estimated from the flood data.  Assuming nothing is known about these parameters other than what we learn from the flood measurements (i.e. the Bayesian prior is noninformative), the frequency factor is:

K_t = t_{n-1} \times \sqrt{1 + \frac{1}{n}}

where t_{n-1} is the 1-p quantile of Student's t distribution with n-1 degrees of freedom (p being the exceedance probability) and n is the number of data points.

Using this frequency factor will result in a more conservative estimate of flood quantiles. As the sample size increases, the Bayesian frequency factor approaches the frequency factor based on the normal distribution.

An R function to calculate this Bayesian log-normal frequency factor is:

# Frequency factor for the log-normal distribution
# n - number of data points
# p - exceedance probability

FF_LogNormal_Bayes <- function(n, p) {
  qt(1 - p, n - 1) * sqrt(1 + 1/n)
}
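As a quick check of this convergence, we can compare the Bayesian frequency factor with the normal frequency factor for increasing sample sizes (the values in the comments are approximate, rounded results rather than output from the original post):

# The Bayesian frequency factor approaches the normal frequency factor
# as the sample size increases
FF_LogNormal_Bayes(10, 0.01)   # approx 2.96
FF_LogNormal_Bayes(100, 0.01)  # approx 2.38
FF_LogNormal_Bayes(1e6, 0.01)  # approx 2.33
qnorm(1 - 0.01)                # 2.326348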

Example 4

Calculate the Bayesian estimate for the 100-year flood quantile using the data for the Hunter River at Singleton.

Calculations are below.  The estimate is 17,350 cumecs.

library(repmis)
my.url <- 'https://dl.dropboxusercontent.com/u/10963448/Singleton.csv'

singleton <- source_data(my.url)

m.log <- mean(log(singleton$`Peak (cumec)`)) # 6.425
s.log <- sd(log(singleton$`Peak (cumec)`)) # 1.336
Kt <- FF_LogNormal_Bayes(nrow(singleton), 0.01) # 2.4966

(Q100 <- exp(m.log + Kt*s.log))
# 17348

Testing

So we have three flood quantile estimators for log-normally distributed data:

  1. Take logs and use the frequency factor from the normal distribution.
  2. Use the data as-is and calculate the frequency factor using equation 2.
  3. Take logs and use the Bayesian frequency factor.

If the data set is large, all three estimators give essentially the same result.

set.seed(2016)
my.flood <- rlnorm(1e6, 6, 1)  # generate 1 million flood values from a log-normal distribution

# True value

exp(6 + 1*qnorm(1 - 0.01)) # 4131.302

# Direct calculation of quantile
quantile(my.flood, probs = 0.99, type = 8 ) # 4114.213

# Taking logs and using normal distribution frequency factor
Q100.log <- mean(log(my.flood)) + sd(log(my.flood))*qnorm(1 - 0.01)
exp(Q100.log) # 4130.316

# Use frequency factor from equation 2
Q100 <- mean(my.flood) + sd(my.flood) * FF_LogNormal(mean(my.flood), sd(my.flood), 0.01)
Q100 # 4127.146

# Bayes frequency factor
Q100.Bayes <- mean(log(my.flood)) + sd(log(my.flood))*FF_LogNormal_Bayes(length(my.flood), 0.01)
exp(Q100.Bayes) #4130.336

For small datasets, the estimates vary substantially.

First, let’s make a function that simulates 30 years of log-normal flood peaks and calculates the 100-year quantile using each of the three methods.

Test_f <- function(i, p) {
  # i - dummy variable so the function can be called repeatedly via lapply
  # p - exceedance probability
  my.flood <- rlnorm(30, 6, 1) # generate 30 years of flood data
  Q100.normal <- mean(log(my.flood)) + sd(log(my.flood)) * qnorm(1 - p)
  Q100.eq2 <- mean(my.flood) + sd(my.flood) * FF_LogNormal(mean(my.flood), sd(my.flood), p)
  Q100.Bayes <- mean(log(my.flood)) + sd(log(my.flood)) * FF_LogNormal_Bayes(length(my.flood), p)
  data.frame(Q100.normal = exp(Q100.normal), Q100.eq2 = Q100.eq2, Q100.Bayes = exp(Q100.Bayes))
}

Now, we’ll call this multiple times and compare the average quantiles for the three methods with the true quantile.

set.seed(5)
out <- lapply(1:10000, Test_f, p = 0.01)
out.df <- do.call(rbind, out)
colMeans(out.df)

Q100.normal    Q100.eq2  Q100.Bayes
   4334.727    3678.353    5204.641

# True quantile 4131

So, for this data set, on average, the quantile based on taking logs and using the frequency factor from the normal distribution is about right.  The quantile based on equation 2 is too low, and the quantile using the Bayesian frequency factor is too high.

In another post, we’ll look at quantifying the uncertainty in quantile estimates.

References

Chow, V. T. (1951) A general formula for hydrologic frequency analysis.  Transactions, American Geophysical Union 32:231-237.

Kite, G. W. (2004) Frequency and risk analysis in hydrology.  Water Resources Publications, LLC.

Kuczera, G. (1999) Comprehensive at-site flood frequency analysis using Monte Carlo Bayesian inference.  Water Resources Research 35(5): 1551-1557.

Graphing a water balance

The water balance for an urban catchment equates the change in storage during a certain period with the difference between water inputs (precipitation and mains water) and water outputs (actual evapotranspiration, stormwater runoff and wastewater discharge).

\Delta s = (P+I) - (E_a + R_s + R_w)
where:

\Delta s  change in catchment storage
P  precipitation
I  imported water
E_a  actual evapotranspiration
R_s  stormwater runoff
R_w  wastewater discharge

Mitchell et al. (2003) provide data on the water balance for Curtin, ACT for 1979 to 1996.  The water balances for the average, wettest and driest years are shown in the table below.

Table: Water balance for Curtin, ACT for the average, wettest and driest years (Mitchell et al., 2003)

When presenting financial statements, a common approach is to use a waterfall chart, which shows how the components of a financial balance contribute to an overall result.  Here I've used waterfall charts to show the water balance for Curtin in the driest and wettest years, as reported by Mitchell et al. (2003).


Figure 1: Water balance for Curtin, ACT in (A) the driest and (B) the wettest years, as estimated by Mitchell et al. (2003).

Does this approach to visualising a water balance help understanding?  A few things stand out:

  • In the driest year, more water was input from the mains than from rainfall
  • In the driest year, actual evapotranspiration was larger than rainfall and mains inputs.
  • Evapotranspiration and stormwater change with climate, with large variation between the wet and dry years.  Wastewater doesn’t change all that much.
  • Precipitation is highly variable, ranging from 247 mm to 914 mm.

There is a guide to making a waterfall chart in Excel here.  The R code to produce the graphs shown in this post is available as a gist, which draws on this blog.
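For anyone who wants to experiment without the gist, a minimal waterfall-chart sketch in ggplot2 is below.  Apart from the 247 mm of precipitation for the driest year mentioned above, the component values are placeholders rather than the figures from Mitchell et al. (2003).

# Minimal waterfall chart sketch using ggplot2
# Inputs are positive, outputs negative; values are illustrative only (mm)
library(ggplot2)

wb <- data.frame(
  component = c("Precipitation", "Mains water", "Evapotranspiration",
                "Stormwater", "Wastewater"),
  value     = c(247, 300, -400, -60, -120)
)

wb$id    <- seq_len(nrow(wb))
wb$end   <- cumsum(wb$value)   # running total gives the top of each floating bar
wb$start <- wb$end - wb$value  # previous running total gives the bottom

ggplot(wb) +
  geom_rect(aes(xmin = id - 0.4, xmax = id + 0.4,
                ymin = start, ymax = end, fill = value > 0)) +
  geom_hline(yintercept = 0, linetype = "dashed") +
  scale_x_continuous(breaks = wb$id, labels = wb$component) +
  labs(x = NULL, y = "Cumulative water balance (mm)") +
  guides(fill = "none") +
  theme_bw()

The final running total corresponds to the change in storage, \Delta s, over the year.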

References

Mitchell, V. G., T. A. McMahon and R. G. Mein (2003) Components of the Total Water Balance of an Urban Catchment. Environmental Management 32(6): 735-746.

Munging rating tables

The Victorian water monitoring site includes rating tables for stream gauges, but they are in a format that is not easy to work with.  An example is shown in Figure 1 below.


Figure 1: Extract of rating table

The following steps can be used to extract and convert the data into a useable format.

1. Download and save the rating table.  Click the button shown to get the rating table as a text file.


Figure 2: Save the rating table

2. Re-format the data to create columns of levels and flows.  You’ll need to use your favourite tool for this munging step.  An example using R is available as a gist.

3. Plot and compare with the online version


Figure 3: Rating plot from the online system (source: data.water.vic.gov.au/monitoring.htm?ppbm=404216&rs&1&rscf_org)


Figure 4: Rating plot using data from the rating table

4. Save as a csv file for further use.
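The gist linked below does the work properly.  As a rough illustration of steps 2 to 4, a base-R sketch is shown here; it assumes the downloaded table has been saved as rating_table.txt (a made-up name) and that each data row starts with a gauge level followed by the corresponding flow, so the real file layout and units may well differ.

# Sketch only: extract level/flow pairs from the saved rating table text file
raw <- readLines("rating_table.txt")

# keep lines that start with a number, i.e. likely data rows
data_lines <- grep("^\\s*[0-9]", raw, value = TRUE)

# pull out all numbers on each line; take the first two as level and flow
nums <- regmatches(data_lines, gregexpr("-?[0-9]+\\.?[0-9]*", data_lines))
rating <- data.frame(level = as.numeric(sapply(nums, `[`, 1)),
                     flow  = as.numeric(sapply(nums, `[`, 2)))

# plot to compare with the online rating curve (adjust labels/units to suit)
plot(rating$level, rating$flow, type = "l", xlab = "Level", ylab = "Flow")

# save as a csv for further use
write.csv(rating, "rating_table.csv", row.names = FALSE)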

R code is available here.


Converting between EY, AEP and ARI

The latest version of Australian Rainfall and Runoff (ARR2016) proposes new terminology for flood risk (see Book 1, Chapter 2.2.5).  The preferred terminology is provided in Figure 1.2.1, which is reproduced below.

Definitions:

  • EY – Number of exceedances per year
  • AEP – Annual exceedance probability
  • AEP (1 in x) – 1/AEP
  • ARI – Average Recurrence Interval (years)

Australian Rainfall and Runoff preferred terminology

For floods rarer than 5% AEP, the relationships between the various frequency descriptors can be approximated by the following straightforward equations.

\mathrm{EY} = \frac{1}{\mathrm{ARI}}
\mathrm{EY} = \mathrm{AEP}
\mathrm{AEP(1\; in\; x \;Years)} = \frac{1}{\mathrm{AEP}}
\mathrm{ARI} = \mathrm{AEP(1\; in \; x \; Years)}
\mathrm{AEP} = \frac{1}{\mathrm{ARI}}

For common events, more complex equations are required (these will also work for any frequency):

\mathrm{EY} = \frac{1}{\mathrm{ARI}}
\mathrm{AEP(1\; in\; x \;Years)} = \frac{1}{\mathrm{AEP}}
\mathrm{AEP(1\; in\; x \;Years)} = \frac{\exp(\mathrm{EY})}{\left( \exp(\mathrm{EY}) - 1 \right)}
\mathrm{ARI} = \frac{1}{-\log_e(1-\mathrm{AEP})}
\mathrm{AEP} = \frac{\exp(\frac{1}{\mathrm{ARI}}) - 1}{\exp(\frac{1}{\mathrm{ARI}})}

A key result is that we can’t use the simple relationship ARI = 1/AEP for frequent events.  So, for example, the 50% AEP event is not the same as the 2-year ARI event.

Example calculations

For an ARI of 5 years, what is the AEP?

\mathrm{AEP} = \frac{\exp(\frac{1}{\mathrm{5}}) - 1}{\exp(\frac{1}{\mathrm{5}})} = 0.1813

For an AEP of 50%, what is the ARI?

\mathrm{ARI} =\frac{1}{-\log_e(1-0.5)} = 1.443

R functions and example calculation available as a gist.
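The gist isn't reproduced here, but minimal conversion functions written directly from the equations above could look like the sketch below (the function names are just for illustration, and AEP is expressed as a probability, e.g. 0.5 for 50%).

# Conversions between flood frequency descriptors
# (sketch based on the equations above, not the gist)
AEP_from_ARI <- function(ARI) 1 - exp(-1/ARI)    # AEP from ARI, any frequency
ARI_from_AEP <- function(AEP) 1/(-log(1 - AEP))  # ARI from AEP, any frequency
EY_from_ARI  <- function(ARI) 1/ARI              # exceedances per year
AEP_1_in_x   <- function(AEP) 1/AEP              # AEP expressed as 1 in x years

# Example calculations from the text
AEP_from_ARI(5)    # 0.1813
ARI_from_AEP(0.5)  # 1.443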

 

Highlights from ARR Book 7

Book 7 of Australian Rainfall and Runoff is titled Application of Catchment Modelling Systems.  It has been written by experienced practitioners and contains some great information. A few paraphrased highlights follow.

  • It's often challenging to get good calibrations for all the available historical events, and there may be good reasons why.

Difficulties in calibrating a model to observed flood events of different magnitude should be taken as an indication of the changing role of processes.

In many cases a significant change occurs between floods that are mostly contained within the stream channel and floods in which floodplain storage plays an important role in the routing process.

If the model has only been calibrated to in-bank floods, confidence in its ability to represent larger floods will be lower.

  • Calibration needs to focus on what the model is to be used for, not just ensuring past events are well represented.

The focus of model calibration is not just to develop a model that is well calibrated to the available flood data.  Application of the model to the design requirements must be the primary focus.

It is often the case that calibration floods are relatively frequent while design applications require much rarer floods.  In this case, work in refining the model calibration to the frequent floods may not be justified.

Parameter values should account for the expected future design conditions, rather than an unrepresentative calibration event.

Calibration usually works with historic flood events while the design requirements are for probabilistic events.  The parameters calculated for the historic events may not be applicable to the design flood events.

  • On using all available data.

Even if the data is of poor quality or incomplete, it is important that the model calibration be at least consistent with the available information.

Even poor quality observations may be sufficient to apply a ‘common sense test’.

…at least ensure that model performance is consistent with minimal data [available]…

  • On inconsistent data

Effort should be concentrated on resolving the source of the inconsistency rather than pursuing further calibration.

  • Dealing with poor calibration.

It is far more important to understand why a model may not be calibrating well at a particular location than to use unrealistic parameter values to ‘force’ the model to calibrate.

  • Don’t expect your model to provide a good fit to all data.

It is extremely unlikely that your simple model perfectly represents the complex real world, that all your data have been collected without error, or that the data are unaffected by local factors.

  • The appearance of great calibrations may mean:

The model has been overfitted to the data with unrealistic parameter values, or

Some of the data that does not fit well has been ignored or not presented.

  • Checking adopted parameters.

Calibration events should be re-run with adopted parameters and results should show at least reasonable performance for all of the calibration events.

  • Confirming model suitability for design events

Model performance, for design events, should be confirmed using Flood Frequency Analysis results, if available, or regional flood frequency information.

Book 7 also has worthwhile guidance on uncertainty analysis, model checking and reporting.

Flood frequency plots using ggplot

This post provides a recipe for making plots like the one below using ggplot2 in R.  Although it looks simple, there are a few tricky aspects:

  • Superscripts in y-axis labels
  • Probability scale on x-axis
  • Labelling points on the x-axis that are different to the plotted values, i.e. we are plotting the normal quantile values but labelling them as percentages
  • Adding a title to the legend
  • Adding labels to the legend
  • Positioning the legend on the plot
  • Choosing colours for the lines
  • Using commas as a thousand separator.

Figure: Example flood frequency plot produced with ggplot2

 

Code is available as a gist, which also shows how to:

  • Enter data using the tribble function, which is convenient for small data sets
  • Change the format of data to one observation per row using the tidyr::gather function.
  • Use a log scale on the y-axis
  • Plot a secondary axis showing the AEP as 1 in X years
  • Use the Probit transformation for the AEP values
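The gist has the full details.  As a taster, a minimal sketch covering several of the tricky aspects listed above is shown below; the data are made up for illustration, and the colours, breaks and legend position are just placeholders rather than the settings used for the plot in this post.

# Sketch of a flood frequency plot with a probability-scale x-axis
# (illustrative data only)
library(ggplot2)
library(scales)

aep <- c(0.5, 0.2, 0.1, 0.05, 0.02, 0.01)          # annual exceedance probabilities
ffa <- data.frame(
  z        = qnorm(1 - aep),                       # plot at normal quantiles...
  fitted   = c(300, 700, 1100, 1600, 2400, 3200),  # hypothetical fitted quantiles
  observed = c(280, 760, 1000, 1700, 2300, 3400)   # hypothetical observed peaks
)

ggplot(ffa, aes(x = z)) +
  geom_line(aes(y = fitted, colour = "Fitted")) +
  geom_point(aes(y = observed, colour = "Observed")) +
  scale_x_continuous(name = "Annual exceedance probability",
                     breaks = qnorm(1 - aep),      # ...but label them as percentages
                     labels = paste0(aep * 100, "%")) +
  scale_y_log10(name = expression(Peak~discharge~(m^3*s^{-1})),  # superscript in label
                labels = comma) +                  # commas as thousand separators
  scale_colour_manual(name = "Estimate",           # legend title and labels
                      values = c(Fitted = "blue", Observed = "black")) +
  theme_bw() +
  theme(legend.position = c(0.85, 0.2))            # position the legend on the plot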


Climate change and flood investigations

One surprising finding from the review of the state of hydrologic practice in Victoria is that climate change impacts on flooding are not being widely considered. Only half the studies reviewed (10 of 20) mention climate change.  Similar findings are reported in other work, which shows that some Victorian flood managers are not keeping up with their national and international colleagues in considering the additional flood risk predicted with a change in climate.

There is already evidence that rainfall intensity for short duration storms is increasing, which could lead to more frequent and larger flash floods.  This is a particular issue in towns and cities because small urban catchments are especially vulnerable.

In the corporate world, consideration of climate change is being taken seriously.   The recent Hutley opinion found that many climate change risks “would be regarded by a Court as being foreseeable at the present time” and that Australian company directors “who fail to consider ‘climate change risks’ now, could be found liable for breaching their duty of care and diligence in the future”.

The Task Force on Climate Related Financial Disclosures (TCFD), chaired by Michael Bloomberg, has recently released recommendations on how companies should report on climate change risks.  This includes the need to report on risks of “Increased severity of extreme weather events such as cyclones and floods” and “Changes in precipitation patterns and extreme weather variability”.

In the Australian flood scene, the latest Handbook 7, Managing the floodplain: a guide to best practice in flood risk management in Australia, provides advice on assessing and reporting on climate change risk.  But the accompanying project brief template and guide describe climate change aspects of a flood investigation as optional.  The latest version of Australian Rainfall and Runoff provides recommended approaches to assessing climate change impacts on flooding, but recent research argues these methods are too conservative.

On a positive note for Victoria, the Floodplain Management Strategy does encourage consideration of climate change (Policy 9A):

Flood studies prepared with government financial assistance will consider a range of floods of different probabilities, and the rarer flood events will be used to help determine the location’s sensitivity to climate change. Further climate change scenarios may be considered where this sensitivity is significant.

 


Figure 1: Flooding in Creswick 4 Aug 2010 (link to source)

Flood investigations lead on to decisions about land use zoning and design of mitigation works.  Are climate change risks to these measures foreseeable at the present time?  If so, then they should be considered and reported on.

Clearly this is an area where knowledge and ideas are changing rapidly. Practising hydrologists need to keep up with the latest methods, and managers and boards of floodplain management authorities need to be aware of the latest thinking on governance, risk management, and disclosure.