Using expert judgement and uncertainty analysis for biodiversity management

The following summary of the presentation given by Professor Mark Burgman, Director of the Australian Centre of Excellence for Risk Analysis (ACERA), at the EIANZ Conference on 31 October 2008 was produced by Simon Mustoe. Download the presentation PowerPoint as a PDF (1.9MB).

Decisions based only on quantitative methods such as Monte Carlo simulation do not deal with sources of non-statistical uncertainty. Examples include outright lack of knowledge, and linguistic uncertainties, where what we say can be construed in many different ways (vagueness, ambiguity, generality, or failure to express a context for what is said). Ignoring these creates an illusion of precision.
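
A minimal Monte Carlo sketch in Python makes the distinction concrete (all parameters below are hypothetical). It propagates statistical uncertainty in an annual growth rate perfectly well, yet says nothing about whether the model itself is right, or whether everyone means the same thing by its inputs.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical example: propagate statistical uncertainty in an annual
# growth rate through a 20-year population projection.
n_sims, years, n0 = 10_000, 20, 500
growth = rng.normal(loc=1.02, scale=0.05, size=(n_sims, years))

trajectories = n0 * growth.cumprod(axis=1)   # one row per simulation
final = trajectories[:, -1]

print(f"mean final size:      {final.mean():.0f}")
print(f"P(decline below 250): {(final < 250).mean():.3f}")

# What this does NOT capture: whether the growth model is right at all,
# vagueness in how "population" was defined in the field, or systematic
# observer bias -- the non-statistical uncertainties described above.
```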

Yet biodiversity management is inherently uncertain, so how do we deal with the problem of expert opinion in the face of that uncertainty?

First, the nature of uncertainty is well understood, even if not always well presented. There is an existing 'taxonomy' of uncertainty, and a better understanding of it will help us make better decisions. We should not be afraid of uncertainty: it is always going to be there. The following example shows how the right approach can reveal more certainty than expected, despite gaps in knowledge or variation in the data.

Sindh Ibex [read full article online at Google Books]

Population Viability Analysis (PVA) was used to evaluate the risks faced by the Sindh Ibex in a remote region of Pakistan. One of the questions the authorities wished to explore was whether removing 30 animals per year for trophy shooting could be sustained in the long term.

Image: Sindh Wildlife Department

Information about the ecology of the species was scarce, but basic biology and a dialogue with locals yielded plausible estimates for the key model parameters. In this case, the experts were people living within the community who had observed the animals in the wild for many years.

Conventional modelling, for example using Monte Carlo simulation, might depend on choosing a single distribution, e.g. the mean and variance of a normal distribution. In this case, the PVA modelled a broad range of alternative assumptions about the species' biology and its interactions with the environment. The result revealed that, despite a very broad range of uncertainty, the likelihood of the population falling to 1,000 animals (the threshold for "vulnerable" under the IUCN threat categories) would remain extremely low (P < 0.002).
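
The flavour of that approach can be sketched in a few lines of Python. The 1,000-animal threshold and the harvest of 30 per year come from the text above; everything else (the parameter bounds, the growth model, the time horizon) is a hypothetical placeholder, not the published Sindh Ibex model. The point is the structure: sample whole parameter sets from wide, expert-elicited bounds and ask how often the threshold is crossed.

```python
import numpy as np

rng = np.random.default_rng(7)

# Sample whole parameter sets from wide bounds, rather than fitting one
# distribution, and count how often the outcome crosses the threshold.
n_sets, years, harvest = 5_000, 50, 30
n0 = rng.uniform(8_000, 15_000, n_sets)   # initial abundance (placeholder)
r = rng.uniform(0.00, 0.08, n_sets)       # mean growth rate (placeholder)
sd = rng.uniform(0.05, 0.25, n_sets)      # environmental noise (placeholder)

below_threshold = 0
for n, g, s in zip(n0, r, sd):
    for _ in range(years):
        n = max(n * np.exp(rng.normal(g, s)) - harvest, 0.0)
        if n < 1_000:                     # IUCN "vulnerable" threshold
            below_threshold += 1
            break

print(f"P(falling below 1,000 animals): {below_threshold / n_sets:.4f}")
```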

The Problem With Expert Evidence

Irrespective of the situation, the type of expert, and the information gathered, human nature is to be overconfident in one's own opinion. Ecologists have the same tendencies in this regard as experts in other fields such as engineering and medicine, and they tend to over-estimate confidence based on small samples.

Image: Experts decide how many quadrats are needed to adequately survey forest in Australia (Baran 2000)

In one study, engineers were asked to estimate the height to failure of an earth embankment with 90% confidence. Comparison of the results showed that none of the experts' estimates overlapped with the true value (Hynes and Vanmarcke 1977). Baran (2000) asked experts at a national Australian meeting to estimate the number of 0.1 ha quadrats required to be 90% certain of encountering 95% of woody plants in a 40 ha area. Most experts estimated the number to be 7 or fewer, far below the correct value of 58 quadrats.
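
Interval calibration is easy to score: a well-calibrated 90% interval should contain the truth about 90% of the time. A minimal sketch in Python, where the true value (58) is from Baran (2000) but the five expert intervals are invented to mimic the reported pattern:

```python
import numpy as np

# If experts' 90% intervals were well calibrated, about 90% of them
# should contain the true value. The intervals below are invented.
true_value = 58
intervals = np.array([[3, 7], [2, 5], [4, 10], [5, 12], [1, 6]])

hits = (intervals[:, 0] <= true_value) & (true_value <= intervals[:, 1])
print(f"hit rate: {hits.mean():.0%} (nominal 90%)")   # 0%: overconfident
```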

Similarly, Regan et al. (2005) gave numerous experts the same information for a set of species and asked them, according to the criteria used by the IUCN to determine conservation status, to assign each species a category from Least Concern to Critically Endangered. For almost half of the species, the assignments spanned two or more categories. Reasons for the spread included unwillingness to draw inferences (some experts feel bound by the letter of the criteria, even if they feel a species should be ranked differently); parameter estimates spanning thresholds (some parameter estimates vary naturally or due to stochastic uncertainty); and variation in language interpretation (the words of the criteria are read differently by different people).
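
The threshold-spanning problem is mechanical, and a toy example shows it. The cut-offs below follow the IUCN's population-size criterion (criterion D); the uncertain survey estimate that straddles one of them is hypothetical.

```python
# Defensible readings of the same uncertain survey can land in
# different categories when the estimate straddles a threshold.
def category(pop_size: float) -> str:
    if pop_size < 50:
        return "Critically Endangered"
    if pop_size < 250:
        return "Endangered"
    if pop_size < 1_000:
        return "Vulnerable"
    return "Least Concern"

# Three readings of the same survey report, "200-300 individuals":
for estimate in (200, 250, 300):
    print(estimate, "->", category(estimate))
# 200 -> Endangered; 250 and 300 -> Vulnerable
```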

Conclusion

The essential message from all these studies is that expert evidence is vital, but we need to rethink our approach and find ways to better understand its inherent problems, so that we can make better decisions for the environment.

Overconfidence describes a situation in which an expert's confidence in their judgement or estimate does not correspond to the accuracy of that estimate. One question is: how do we as 'experts' close this gap between our own opinion and reality?

Models are one particularly useful tool: they can be used to develop simplified abstractions that capture the essential elements of a system. However, we do not always use them properly. We may rely on only one type of model when we should be using many; we may fail to capture variation in the data and rely only on the mean estimate; or we may over-emphasise the uncertainty that cannot be measured statistically.
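
The second failure, relying only on the mean, can be shown in a few lines: a deterministic projection at the mean growth rate hides a risk of decline that the full distribution reveals. All parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Same average growth rate, projected with and without year-to-year
# variation. Parameters are illustrative only.
n0, years, mean_r, sd = 1_000, 30, 0.0, 0.3

mean_only = n0 * np.exp(mean_r * years)            # deterministic: 1000
log_growth = rng.normal(mean_r, sd, size=(10_000, years)).sum(axis=1)
ensemble = n0 * np.exp(log_growth)

print(f"mean-only final size: {mean_only:.0f}")
print(f"P(final size < 500):  {(ensemble < 500).mean():.2f}")
# The deterministic run shows no decline at all; roughly a third of the
# stochastic trajectories end below half the starting population.
```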

Particularly when applied to risk assessments, our estimates of uncertainty are very important and integral to producing a defensible and, in a very real sense, meaningful decision.
