Losses for a catchment are an important but variable quantity. In hydrologic modelling we often choose constant, “typical”, values for initial and continuing loss (IL and CL) but it is worth considering how they vary and how this variation can be incorporated into models using Monte Carlo or ensemble approaches.

Australian Rainfall and Runoff provides an empirical distribution of loss. The generic version is shown in Figure 1, with specific values for standardised initial and continuing loss provided in ARR Table 5.3.13 (reproduced in Table 1 below). The original source of this data is ARR Project 6 (Hill et al., 2014). Note that the initial losses referred to here are for storms rather than bursts. For a discussion of the difference between burst and storm losses see this blog.

Percentage of time loss is exceeded | Standardised Initial Loss | Standardised Continuing Loss
---|---|---
0 | 3.19 | 3.85
10 | 2.26 | 2.48
20 | 1.71 | 1.88
30 | 1.40 | 1.50
40 | 1.20 | 1.24
50 | 1.00 | 1.00
60 | 0.85 | 0.79
70 | 0.68 | 0.61
80 | 0.53 | 0.48
90 | 0.39 | 0.35
100 | 0.14 | 0.15

*Table 1: Standardised initial and continuing loss (from ARR Table 5.3.13)*

A couple of points about the data in Table 1:

- The losses are standardised by dividing the measured loss (initial or continuing) by the median of the measured loss.
- I’ve used “Percentage of time loss is exceeded” as the heading of column 1. In ARR this is referred to as “Percentile”. Calling this a percentile is potentially confusing because percentiles usually work the other way around, i.e. a large percentile refers to large values (if you scored at the 95th percentile in your year 12 exam, that’s a high score). See the USGS definitions of percentile and percent exceeded here.
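The relationship between the two conventions is a simple complement. A minimal sketch in Python, using the initial loss values from Table 1:

```python
# Convert "percentage of time loss is exceeded" (Table 1 convention)
# to the usual percentile convention: percentile = 100 - percent exceeded.
pct_exceeded = [0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
std_init_loss = [3.19, 2.26, 1.71, 1.40, 1.20, 1.00, 0.85, 0.68, 0.53, 0.39, 0.14]

for exc, il in zip(pct_exceeded, std_init_loss):
    percentile = 100 - exc  # now a large percentile means a large loss
    print(f"percentile {percentile:3d}: standardised IL = {il}")
```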

The plot in Figure 1 is like a flow duration curve. We can also plot the data in Table 1 as an empirical cumulative distribution function (percentile v standardised initial loss) (Figure 2). Here, percentile is taken as 1 − percent of time the loss is exceeded.

Randomly generating data from an empirical distribution is discussed in the ARR supporting document, Monte Carlo Simulation Techniques (see Section 7.2). We can apply these methods to the data in Table 1. A histogram of 10,000 initial loss values is shown in Figure 3. The median loss is 1, as expected; the mean is about 1.15. Also as expected, the data are right skewed (long tail to the right). IL can’t be less than zero, so there is a hard boundary on the left, but IL can be large if a catchment is very dry. Most of the values are clustered around 1, with a few large values; only 27% of values are greater than 1.5.
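The sampling can be sketched in a few lines of NumPy using inverse-transform sampling: treat the Table 1 values as an empirical CDF, draw uniform random numbers, and interpolate. Linear interpolation between the tabulated points is my assumption here; the ARR Monte Carlo paper discusses more refined options.

```python
import numpy as np

# Table 1 re-expressed as an empirical CDF:
# percentile = 1 - (percent of time loss is exceeded)
percentile = np.linspace(0, 1, 11)
std_il = np.array([0.14, 0.39, 0.53, 0.68, 0.85,
                   1.00, 1.20, 1.40, 1.71, 2.26, 3.19])

rng = np.random.default_rng(42)             # fixed seed for reproducibility
u = rng.uniform(size=10_000)                # uniform draws on (0, 1)
samples = np.interp(u, percentile, std_il)  # inverse-CDF (quantile) lookup

print(f"median: {np.median(samples):.2f}")
print(f"mean:   {samples.mean():.2f}")
print(f"fraction > 1.5: {(samples > 1.5).mean():.2f}")
```

The median comes out at 1 and the mean close to the value quoted above; the exact figures depend on the interpolation scheme and the random draws.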

The population of IL values is probably more highly skewed than this sample, and the empirical distribution in Table 1, suggest. To get large initial loss values, there would need to be large rainfalls on dry catchments. Both of these things are rare, so observing both at the same time is rarer still. It is unlikely that many such situations occurred during the study that generated the data for Table 1.

Consider a real example. Toomuc Creek is a site where losses have been estimated (Hill et al., 2014); the median losses are IL = 29 mm and CL = 2.5 mm/h for events selected by 24-hour bursts. The largest standardised initial loss is 3.19, which corresponds to an initial loss of 3.19 × 29 ≈ 93 mm. Observing an initial loss of 93 mm means there would need to be at least 93 mm of rain on a dry catchment. At Toomuc Creek, 93 mm of rain is about a 5% AEP event for a 24-hour duration, and that would need to be combined with unusually dry conditions.

The distribution of continuing loss values is similar but more highly skewed. The median is about 1 and the mean is 1.24 (Figure 4). As with initial loss, it is challenging to observe large continuing loss values. At Toomuc Creek, the median continuing loss is 2.5 mm/h and the largest standardised loss value is 3.85, corresponding to a continuing loss of 9.62 mm/h. A 24-hour storm at this rainfall rate is much larger than a 1% AEP event. The losses report states that the “storm durations used in the analysis were typically a few days”. For storms of this length, high continuing losses require an unusually large amount of rain combined with conditions conducive to most of it being lost. In combination, this is a very rare occurrence.
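The scaling from standardised to site values is just multiplication by the site median. A quick check of the Toomuc Creek continuing-loss figures (site median from Hill et al., 2014):

```python
# Toomuc Creek median continuing loss (Hill et al., 2014)
median_cl = 2.5               # mm/h
max_std_cl = 3.85             # largest standardised CL in Table 1

cl = max_std_cl * median_cl   # continuing loss in mm/h
depth_24h = cl * 24           # rain needed to sustain that loss for 24 hours

print(f"CL = {cl:.2f} mm/h, 24-hour depth = {depth_24h:.0f} mm")
```

Sustaining a 9.62 mm/h loss for 24 hours implies around 231 mm of rain being lost, which gives a feel for how rare such an observation would be.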

The losses report does have a section on loss sensitivity to burst duration (Section 6.3, p. 37), but this doesn’t consider the duration of the complete storms used to derive the loss values; instead, it looks at the duration of the most intense bursts used to select the storms.

Another artefact leading to underestimation of the highest loss values is that storms in the data set that yielded little or no runoff were not used (Hill et al., 2014, Section 5.3, p. 27). Events were excluded if runoff was low (typically less than 1 m³/s) and the fitted value of continuing loss was abnormally high (typically greater than 20 mm/h). Both of these conditions serve to bias the loss estimates towards smaller values.
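That screening can be expressed as a simple filter. The thresholds below are the “typical” values quoted in the report; the event records are hypothetical:

```python
# Screening described in Hill et al. (2014), Section 5.3: exclude events
# where runoff was low AND the fitted continuing loss was abnormally high.
MIN_FLOW = 1.0    # m3/s, typical low-runoff threshold
MAX_CL = 20.0     # mm/h, typical abnormally-high CL threshold

events = [                                   # hypothetical event records
    {"peak_flow": 0.4, "fitted_cl": 25.0},   # low flow, high CL: excluded
    {"peak_flow": 12.0, "fitted_cl": 3.1},   # ordinary event: kept
    {"peak_flow": 0.8, "fitted_cl": 22.0},   # low flow, high CL: excluded
]

kept = [e for e in events
        if not (e["peak_flow"] < MIN_FLOW and e["fitted_cl"] > MAX_CL)]
print(f"{len(kept)} of {len(events)} events retained")
```

Dropping exactly the events where the fitted losses were largest is what biases the upper tail of the empirical distribution downwards.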

The upshot is that, because of the difficulty of observing rare events with large losses, the empirical distributions of initial and continuing loss probably understate the most extreme values; the largest entries in Table 1 and in the figures are likely too small. This could mean flood estimates based on these values are biased high, but that depends on the other model inputs, such as rainfall depths and spatial and temporal patterns. It would be challenging to improve the analysis, so a conservative bias is reasonable.

Work by WMA Water suggests that the ARR loss values available from the data hub are too high when flood estimates are compared with flood frequency analysis, but that could be for other reasons.

Analysis and graphs are available as a gist.

**References**

Nathan, R. J. and Weinmann, P. E. (2013) Monte Carlo Simulation Techniques. Australian Rainfall and Runoff Discussion Paper (link).

Hill, P., Graszkiewicz, Z., Taylor, M. and Nathan, R. (2014) Project 6: Loss models for catchment simulation: Phase 4 analysis of rural catchments. Australian Rainfall and Runoff Revision Projects. Engineers Australia (link).

**Comments**

**Rob Swan:** Interestingly Tony, my analysis of the 2011 Toomuc Creek flood event provided an initial loss of around 70 mm in the catchment with continuing losses of 1.5 mm/h. This was combined with a rainfall event of 194 mm over effectively 24 hours, which was in excess of the 0.2% AEP event for both the 1-hour and 24-hour durations. The model analysis and the calibration flows measured were in the order of a 0.5% AEP event. At the neighbouring Cardinia Creek in the same event, the initial loss was estimated at 100 mm with the continuing loss at 1.5 mm/h.

I’m reflecting at this point that perhaps we need to be very careful assuming independence of these variables. There had been more than 100 mm of evaporation and no rain in the weeks preceding this major storm event. How much of the loss was a result of surface storages being empty?

Interestingly, in the calibration data at Toomuc Creek for two earlier storm events (1984 and 1996), the IL was 0 mm and the CL was 1.6 and 1.08 mm/h respectively. These events were in the order of a 3% AEP and a 6% AEP event.

Always worth considering some real data in these assessments!

**tonyladson (post author):** Thanks Rob, you are right. It is always important to check the data. It’s also important to take account of the distinction between design losses and the losses that are measured during real floods. Generally, losses during flood events will provide values that are smaller than are appropriate to use for design, but not always.