About the Author
Dr. Sara Walker is currently an Associate Director and Co-Investigator for the £20m National Centre for Energy Systems Integration (EP/P001173/1). Dr. Walker’s research focuses on renewable energy technology and transitions to low-carbon systems, with a particular emphasis on policy and building-scale solutions.
Dr. Walker has experience of multi-disciplinary projects. For example, her PhD analysed the renewable energy sector with regard to the impact of electricity industry deregulation. This multi-disciplinary study involved analysis of willingness to pay for renewable electricity products, an economic model of the UK to estimate future scenarios of electricity demand growth, and a technical renewable resource assessment.
Dr. Walker was recently a Co-Investigator for a European Interreg-funded project led by the University of Hamburg. As part of this €6.5m project, she delivered a review of European-funded electric vehicle projects and was involved in evaluating the technical viability of vehicle-to-grid. Dr. Walker has also delivered research to the third-sector client Gentoo on the Retrofit Reality project, funded by the Housing Corporation, evaluating the impact of energy efficiency retrofit on behaviour and the performance of solar thermal systems. The client went on to use the findings to inform its Pay As You Save and Green Deal offerings.
Expert opinion on energy system models
CESI plans to undertake co-evolution cycles: a process, currently only loosely defined, of bringing together sectors and energy vectors to consider how they interact. Part of the co-evolution cycle involves consulting experts to review the work done by the partnership. So how can expert opinion benefit our consideration of energy systems?
What makes an ‘expert’?
Krueger et al. (2012) summarise definitions of an expert as “someone having specialist knowledge acquired through practice (also called training), study (also called education) or experience”. The temptation with energy system models is to ask other energy system modellers for their expert view. However, others with local experience and practice, such as the system operator or system user, can also make a valuable contribution. Equally, some stakeholders may be considered non-expert yet be significantly affected by the outcomes of models, and be able to contribute an opinion on the impact of model findings.
Why use the term expert ‘opinion’?
In the modelling literature, expert knowledge, expert opinion, expert judgement and expert elicitation are all commonly used to describe some process of obtaining feedback from an expert on shared information. I have deliberately used the term ‘opinion’ since, in many cases, we are asking an expert to express a view on a relationship, process or parameter which is unknown, to a greater or lesser extent, and which has an associated uncertainty. This is particularly the case when developing scenarios of the future: what value of population should we use, for example, and how confident are we in that value? Modelling involves many choices around the model structure and the processes represented within it, the parameter values, and the boundary conditions. Often, the model developer, as the expert, makes choices which are implicit and undocumented. This is something we are seeking to avoid in CESI through the process of gathering expert opinion.
How can expert opinion be collected and collated?
There are a number of methods available for gathering expert opinion. Sadly, none is perfect. A key choice is between eliciting opinion from individual experts and eliciting opinion in a group context.
The Delphi method, for example, is a common way to reach consensus: opinions are shared and revised over several iterations until consensus is reached. However, “a Delphi is extremely efficient in obtaining consensus, but this consensus is not based on genuine agreement; rather, it is the result of the same strong group pressure to conformity” (Woudenberg, 1991).
| Individual: advantages | Individual: disadvantages | Group: advantages | Group: disadvantages |
|---|---|---|---|
| Can allow more targeted questioning, explanation and feedback | Time cost | Sharing knowledge can help discount unnecessary information | Can be dominated by a small number of people |
| Enables a range of views to be expressed individually | Interviewer influence on expert | Can help with aggregation of opinions | Can over-emphasise consensus |
| | No individual learning or challenge to the expert | | Interviewer influence on group |
| | | | Group motivation to reach quick agreement |
| | | | Group tendency to be more overconfident and overestimate |
The framing of the questions is very important when collecting expert opinion. Research has shown that expert opinion contains bias based on two cognitive heuristics: ‘availability’ and ‘anchoring and adjustment’. ‘Availability’ is when an opinion on the likelihood of something is formed by recalling personal experience of such instances. ‘Anchoring and adjustment’ is when a value (the anchor) is presented as a first guess at a parameter and the expert is asked to adjust it; experts typically do not adjust enough, so the original anchor influences the decision on the final value (Morgan, 2014).
When discussing expert opinion, we should also consider eliciting each expert’s uncertainty in their opinion, which can be more easily expressed for quantitative than for qualitative or conceptual issues. Uncertainty is sometimes expressed using the NUSAP approach: Numeral (the best-guess value); Unit (the units of the parameter); Spread (a description of the uncertainty around the numeral); Assessment (a qualitative judgement of the reliability of that information); and Pedigree (a subjective evaluation of confidence in the evidence which supports the assumptions within the other elements) (Morgan, 2014; Krueger et al., 2012). Consensus is one method of aggregating both opinion and uncertainty. Where expert opinions differ, mathematical aggregation may be appropriate in order to represent a range of views. Sometimes it is best not to aggregate at all, and instead to take the diverse views as different case study values to produce a range of scenarios. In this way it is possible to identify how much differences in expert opinion can affect the outcome of the model.
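As an illustration of the mathematical aggregation mentioned above, the sketch below implements a simple weighted linear opinion pool in Python. This is a generic illustration rather than a CESI method: the expert values, the weights and the `pool_estimates` function are all hypothetical, and averaging the spreads is a deliberate simplification (a full linear pool of probability distributions would generally produce a wider spread).

```python
# Illustrative sketch only (not a CESI tool): weighted linear opinion pooling
# of expert estimates for a single scenario parameter, e.g. annual electricity
# demand growth in %. All names and numbers here are hypothetical.

def pool_estimates(estimates, weights=None):
    """Aggregate expert opinions by a weighted linear pool.

    estimates: list of (best_guess, spread) tuples, one per expert,
               where spread is a +/- uncertainty around the best guess.
    weights:   optional non-negative weights, e.g. derived from a NUSAP
               pedigree score; defaults to equal weighting.
    Returns (pooled_best_guess, pooled_spread).
    """
    n = len(estimates)
    if weights is None:
        weights = [1.0] * n
    total = sum(weights)
    # Weighted average of the best guesses.
    best = sum(w * guess for w, (guess, _) in zip(weights, estimates)) / total
    # Weighted average of the spreads -- a simplification; see note above.
    spread = sum(w * s for w, (_, s) in zip(weights, estimates)) / total
    return best, spread

# Three hypothetical experts: (best guess %, +/- spread %).
experts = [(1.0, 0.3), (1.4, 0.5), (0.8, 0.2)]
pooled_best, pooled_spread = pool_estimates(experts)
```

Where aggregation is not appropriate, the same structure supports the scenario approach described above: rather than pooling, each expert’s `(best_guess, spread)` pair can be fed into the model as a separate case study, so the sensitivity of the model outcome to differences in expert opinion is made explicit.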
How many experts does it take to check a model?
Sample sizes for expert opinion gathering vary but are typically 3-12. Above this, there is very little gain in the robustness of findings (i.e. no new knowledge); below this, it becomes difficult to obtain a representative sample. In identifying experts, a suggested process is to: “follow a formal nomination and selection process; ensure diversity of opinion, credibility and result reliability; minimise redundancy of information; and have a balanced and broad spectrum of viewpoints, expertise, technical points of view and organisational representation” (Krueger et al., 2012).