Sustainable Development Goal Indicators are technical, but also political

This is the first of a blog series from Newcastle University Societal Challenge Theme Institutes on the UN Sustainable Development Goals (SDGs), exploring the targets and indicators of sustainable development that have been mapped out for the United Nations post-2015. The Theme Institutes are well placed to contribute to the SDGs, which aim to address the social, economic and environmental aspects of sustainable development. Dr Graham Long is Senior Lecturer in Politics, in the School of Geography, Politics and Sociology at Newcastle University, and he introduces the series, hosted by the Institute for Sustainability, with a political context for the SDGs, arguing that the goals in practice may differ from what has been set out on paper.


SDG indicators: the technical track

The sustainable development goals (SDGs) currently under negotiation at the UN have reached the ‘science bit’. A dedicated technical track is in place to decide upon the indicators to accompany the goals and targets – that is, what will be (and indeed what can be) measured. This exercise will extend into March 2016. The UN Statistical Commission (UNSC) and National Statistical Commissions are charged with arriving at an account of how progress towards the goals will be measured. Alongside a set of global indicators, particular national and even regional indicators might also emerge. When experts such as David Hulme – a leading international authority on the Millennium Development Goals – call for academic engagement, the coming months may be a decisive moment for just that.

This is all good, technical stuff on which academics have the knowledge and the mindset to engage – assessing weighty issues of methodology and measurability, science and statistics, proxies and paradigms, disaggregation and ‘data revolution’. Via the Sustainable Development Solutions Network (SDSN) and the Scientific and Technological UN Major Group, as well as other expert groups and networks, academics have already had input into this process. Indeed, the Scientific and Technological Major Group’s core role is to facilitate the participation of the scientific community on matters of sustainable development. The UNSC, SDSN and the Independent Expert Advisory Group (IEAG) on the Data Revolution, have all recently run open consultations on these kinds of technical questions. As new drafts of the indicators are prepared, we can expect opportunities for input to continue.

However, just because this is a ‘technical’ exercise doesn’t mean that it’s not also political – that is, fundamentally about “who gets what, when and how”[1]. Indeed, the UNSC states that it expects “broad political guidance” from states on questions of indicators. States (and other actors) involved in negotiating the SDGs are acutely aware of how important the indicators are for the framework that results. Given very broad goal areas, and targets (currently) of varying quality and effectiveness, the indicators will bear much of the burden of the SDG framework. They can, in effect, ‘make or break’ the agreement that results. What we choose to measure will dictate where states’ activities are directed – as states are keen on saying, ‘what gets measured gets done’. The concrete indicators will be taken to indicate, amongst other things, what these broad and aspirational goals were really driving at in the first place.

Indicators and the review process

Accurate data – and the right data – will be important for the review and follow up framework for the goals. Data is indeed “the raw material of accountability”, as the IEAG proclaims. However, there is a lot more to accountability than just data – notably, the responsiveness of actors and the presence of standards and sanctions. The SDG agenda is not even really about accountability – even though it will be accompanied by a monitoring mechanism of some stripe. These are “aspirational” and “voluntary” goals, and their complexity tells against attempts to allocate responsibilities to particular actors.

Even if we are speaking of ‘monitoring’ or ‘follow-up’ rather than accountability in a strict sense, indicators are only the raw materials of a process. They have to be assessed in appropriate structures and forums. Whilst the indicators themselves are technical, the arenas in which they will be used are decidedly not. And without institutions that allow for scrutiny, all the scientifically valid indicators and successful measurement in the world will not give us effective review or monitoring, let alone accountability. This framework for monitoring and review is up for discussion at the next set of intergovernmental negotiations in May. Received wisdom indicates that national, regional and global institutions will each have a role, alongside the recently established ‘High Level Political Forum’. However, much of how this will operate is still to be decided.


Reflecting goals and targets

On the one hand, a broad and complex agenda to apply to every country suggests that comprehensive coverage would require a large number of indicators. On the other, there is a clear limit on the number that will be practicable. In the context of these conflicting imperatives, which indicators are finally chosen is a question with great political significance for the goals. It looks important to select indicators that at least reflect the spirit, intent or guiding idea of each goal area. Indicators must strive for technical rigour. But if they do not accurately capture the key aspirations for each goal, then the goal in practice – come March 2016 – will not reflect the goal on paper in September 2015. Again, this demonstrates how important the formulation and selection of indicators will be. States, through negotiation, will decide on the essence of the goals and exercise final control over how this judgement will be made, something that will surely prove to be difficult and controversial.

The limit to the scope for “technical” assessment is clearly indicated by the way that, even as the indicator process was confirmed as technical, many states vigorously rejected technical proofing of the targets, even though the targets are very mixed in quality and just as crucial. For some states, evidently, the targets are too political to be technical. Other states invoked technical inputs precisely to make the opposite political point. When the Scientific and Technological Major Group – offering “the science perspective” – reported that only 29% of the targets are “well-formulated and based on latest scientific evidence”[2], this finding was widely invoked in favour of proofing and pruning of targets.

No escape from politics

We should be cautious of any assumption that the indicator debate, by virtue of being “technical” or “scientific”, is not also political. For those stepping into such issues, ‘forewarned is forearmed’. But the SDGs also offer a much broader agenda for study by almost every branch of the sciences and social sciences – from assessments of their ultimate ends and assumptions, or their place in a wider history of ‘development’ initiatives, down to the detailed content of every indicator. The SDGs need expert scrutiny, root and branch – not only where such academic input would be welcomed by states, but precisely where it might not be.

[1] To adapt Harold Lasswell’s phrase from his book Politics: who gets what, when, how (New York: Whittlesey House. 1936).

[2] http://www.icsu.org/publications/reports-and-reviews/review-of-targets-for-the-sustainable-development-goals-the-science-perspective-2015/SDG-Report.pdf

These are the author’s personal views, and do not necessarily reflect the position of any larger organisation. (Contact graham.long@newcastle.ac.uk to find out more)


Freeing public service to perform

Dr Toby Lowe from the Centre for Knowledge, Innovation, Technology and Enterprise (Newcastle University Business School) presents his Idea for an Incoming Government: make public services more effective. He urges us to move away from a ‘Payment by Results’ approach, and suggests alternatives that would cope better with the messiness of the problems we face in our society.

What is the problem?

Governments have attempted to make public services more accountable for producing desirable social outcomes. From reducing reoffending to helping the long-term unemployed find work, increasing numbers of programmes are commissioned using a ‘Payment by Results’ (PbR) approach.

The rationale behind PbR seems compelling – we should only pay for work that is effective in solving the problems society has identified. Unfortunately, the evidence suggests that PbR creates a paradox: programmes commissioned on this basis produce worse results, particularly for those with the most complex needs.

There are two reasons why this is the case:

Firstly, real life is complex and messy, but PbR programmes need life to be simple and measurable. They require that desired ‘outcomes’ can be easily measured – because payments are triggered by these measures. Unfortunately, the complex social issues which social interventions most often deal with are frequently those that are most difficult to measure. Take, for example, tackling obesity. Body Mass Index (BMI) is the measure most frequently used for obesity. It is used as a measure of obesity because it is easy to calculate: a simple measure of weight in proportion to height. Anyone with a BMI of more than 25 is overweight; anyone with a BMI of more than 30 is obese.
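The point about ease of measurement can be seen in how little it takes to compute: a minimal sketch in Python, using the standard formula (weight in kilograms divided by height in metres squared) and the thresholds quoted above. The function names and example figures are illustrative, not from the original post.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body Mass Index: weight (kg) divided by the square of height (m)."""
    return weight_kg / height_m ** 2

def classify(b: float) -> str:
    """Classify a BMI value using the thresholds quoted in the text."""
    if b > 30:
        return "obese"
    if b > 25:
        return "overweight"
    return "not overweight"

# Illustrative example: 85 kg at 1.80 m tall.
print(round(bmi(85, 1.80), 1))    # 26.2
print(classify(bmi(85, 1.80)))    # overweight
```

Two numbers in, one label out – exactly the kind of cheap, unambiguous measure a payment trigger demands, and exactly why it abstracts away the messiness the post goes on to describe.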

Unfortunately, real obesity is much more complicated than that. BMI doesn’t effectively measure obesity for children, for individuals with different body shapes or exercise regimes, or for those with certain medical conditions. According to BMI measures, Anita Albrecht, a personal trainer, is very overweight, and is only one BMI point short of being clinically obese.

[Image: Anita Albrecht]

So what happens if you use what is measurable as the mechanism to pay by results? You end up wasting time and money targeting obesity programmes at people like Anita – because that is what simple targets, abstracted from the intrinsic messiness and complexity of life, make people do.

The second reason that PbR makes it more difficult to create good outcomes is that it relies on very simple cause-and-effect logic: if you are going to pay a person or organisation for producing a result, then you need to know that it was that person or that organisation which did it. How else do you know whom to pay?

Unfortunately, the messiness of real life gets in the way once more. Real outcomes are emergent properties of complex systems. Look at the complex system which lies behind obesity as an issue:

[Image: Causes of obesity]

This is the reality of what causes obesity. If you pay an organisation to undertake obesity work, it can influence only a small part of this system. Whether people actually end up obese or not is the result of the interaction of a hundred other factors.

If you pay people or organisations on the basis of whether they achieve particular results, you are asking them to be accountable for things they don’t control. As a result, they learn to manage what they can control – the production of data. The evidence shows what people actually do: they reclassify what counts as success (for example, counting trolleys as beds in hospitals in order to meet waiting-time targets). They work only with clients who they know will provide the desired results, and ignore the more difficult ones. They ‘teach to the test’ – doing only things which relate to what is measured, and ignoring people’s needs that don’t fit into the simple measurement framework. And if all else fails, they simply make up data.

They do this because Payment by Results is nothing of the sort. Payment by Results should really be called ‘Payment for Data Production’. It changes the purpose of people’s job from helping those in need to producing the data which gets them paid.

All this is an enormous waste of resources. We end up paying huge fees to organisations who can play the data production game well, rather than those who are good at helping people. We waste resources paying organisations not to help those most in need.

The solution

The evidence is clear: if you want to achieve good outcomes, don’t pay by results. The following alternative approaches have been shown to be successful:

1) Use systems thinking and invest in relationships. Design systems around people’s needs. Invest in organisations that build relationships with clients, and so who understand their needs authentically. Commission locally, so that the organisations have a connection with the people they serve.

2) Promote horizontal accountability. Make practitioners accountable to one another for the quality of their work. Create mechanisms for peer-based critical reflection, such as Learning Communities.

3) Create positive error cultures. Create cultures in which people talk honestly about uncertainty and mistakes – because this is how people learn and improve.