This report covers a study commissioned by DFID entitled ‘Broadening the Range of Designs and Methods for Impact Evaluations’.
Impact Evaluation (IE) aims to demonstrate that development programmes lead to development results: that the intervention, as cause, has an effect. Accountability for expenditure and development results is central to IE. At the same time, because policy makers often wish to replicate, generalise and scale up interventions, they also need to accumulate lessons for the future. Explanatory analysis, which answers the ‘hows’ and ‘whys’ of programme effectiveness, is therefore central to policy learning.
IE must also fit the contemporary development architecture, which, post-Paris Declaration, is decentralised, works through partnerships and expects developing countries to take the lead. These normative principles have practical implications for IE. For example, working through partners leads to multi-stage, indirect causal chains that IE has to analyse; and using a country’s own systems can limit access to certain kinds of data.
Until now, most investment in IE has gone into a narrow range of mainly experimental and statistical designs and methods which, according to the study’s Terms of Reference, DFID has found applicable to only a small proportion of its current programme portfolio. This study is intended to broaden that range and open up complex, difficult-to-evaluate programmes to the possibility of IE.
The study has considered existing IE practice, reviewed methodological literatures and assessed how state-of-the-art evaluation designs and methods might be applied to contemporary development programmes.