Efficiency savings: are they simply cuts in disguise?
The pressure on local bodies to run services better and save money is constant, and growing. But when councils and other agencies start changing services to achieve so-called 'efficiency savings', how can they be sure they are doing the right thing? Experts are increasingly worried that many programmes have little evidence to suggest they will work, and are not being assessed in the right way. As a result, they warn, changes may simply result in worse services and fewer people being helped.

Eilis Lawlor, a researcher at the New Economics Foundation (Nef), wrote a report in 2008 looking at the pressure on childcare providers to work more efficiently. "What seemed to be happening," she says, "is that this really translates into cuts.

"The idea is that they will reduce back-office functions and let go of administrative staff, but in reality, what people [at councils] were experiencing was greater pressure on unit costs – the cost of services they were commissioning – and passing on the cuts to providers."

Little systematic checking

The heart of the problem, she says, is that there is little systematic checking, after the fact, of which changes have really made services more efficient and which have simply cut them back.

Cheryl Hopkins, Birmingham city council's commissioner of children's services, agrees. "A lot of the government initiatives that are rolled out, number one aren't evidence-based, and number two, have no cost-benefit analysis assigned to them," she says.

For a different point of view, Hopkins and her colleagues are looking overseas – in particular, to Steve Aos, who works for the Washington state government in the US. He helps run the state's institute for public policy, an agency dedicated to testing the effectiveness of supposedly 'evidence-based' programmes.

Aos takes a tough line on the claims made for many schemes. Taking crime reduction as an example, he says he has no time for studies that simply look at how much crime individuals commit before and after they go through a particular programme. "Too many other things can cause changes in someone's offending behaviour. We throw out that kind of study," he says. Unless an evaluation compares one programme against another, replicating the scientific standard of a randomised controlled trial, "we don't even consider that real research ... we don't go there."

Aos is only slightly more enthusiastic about research carried out in the "rarefied" setting of academia. Such programmes tend to be run by highly motivated individuals with a level of ability that can't be replicated "in the normal labour market" – and so he discounts their predicted benefits by 50%. (That does make some researchers "angry". But he has a thick skin, he says.)

Programmes have to be monitored once underway, too. Aos cites a Washington state programme that aimed to reduce juvenile violence through therapy. The therapists following the programme's instructions "by the book" got the expected results, he says. "But the ones doing something else were not getting the expected effect. When you looked at the whole thing, the effects cancelled out and it looked like it was achieving nothing."

Lawlor says the UK has no equivalent of Aos's work. "Some [agencies] do, but it's more sporadic and it is not done in a systematic way." She cautions, however, that his work is "quite a narrow" cost-benefit analysis, focused on whether programmes save money.
Measuring outputs

"The problem with measuring outputs is [often] you are measuring something that's irrelevant or perverse," Lawlor says. For instance, if GP workloads fall, it is often seen as a sign that people are becoming healthier. "Actually, it could mean they have completely disengaged from the system and have become homeless and that's why they are not accessing the GP services. It [measuring outputs] is telling you things have changed. It doesn't tell you things have improved."

In response, Nef has developed a measure called the social return on investment, which rates programmes on harder-to-define outcomes, such as the health benefit of keeping people in work. Charities use it, but as yet few councils do.

Hopkins says Birmingham is now carrying out its own randomised tests on, for instance, community-based justice programmes. The council wants to set up a dedicated unit, modelled on Aos's, to carry out such scrutiny. There is even a chance it could host a national centre for evaluating public programmes, which the Conservative social justice spokesman, Iain Duncan Smith, has indicated he would like to set up if the Tories win power at the next election.

Unless these moves towards better measurement become more widespread, efficiency programmes remain "a blunt instrument", Lawlor says. "There's a lot of public services that don't work, but at the moment we don't have a clear enough sense of which ones do and which ones don't. Any cuts will just be removing the good with the bad."