An extract from the forthcoming ethics workshop report:

The Centre for Development Impact’s recent panel discussion, ‘Inclusion, Ethics and Evaluation’, was supported by the Institute of Development Studies’ (IDS) UK Department for International Development (DFID) Accountable Grant, with a view to continuing a dialogue around the use and application of ethics in impact evaluation. CDI’s working assumption is that all practice – whether evaluations or development interventions – is underpinned by particular value systems. In recent years, the field of impact evaluation within international development has become largely driven by methodology and empiricism. To some extent, this has meant that it has lost touch with the ‘value’ dimension of evaluation, with values being primarily understood in relation to rigour: ‘the scientific generation of facts or truths which are assumed to be self-evident and universally valid’ (see Munslow and Barnett, 2015).

The conduct of evaluators is often presumed to be guided by moral and ethical principles and guidelines. Yet who these moral principles relate to, and whether they go far enough, is subject to much debate. Questions still remain around what ethics is, where evaluators draw their ethical principles from, and what the challenges are moving forwards. The panel makes a modest contribution to these debates, drawing on the perspectives of five diverse actors in the field of evaluation research: Rob D. van den Berg, Leslie Groves, Laura Camfield, Chris Barnett and John Gaventa.

Ethics has not been well established within impact evaluation and we don’t fully understand how ethics are applied.

Chair of the meeting, Rob D. van den Berg (President of the International Development Evaluation Association and Visiting Fellow at CDI), outlined the findings from a recent review of 31 evaluation ethics guidelines, principles and standards. One important finding was that ethics guidance for evaluation often draws on clinical research, where there is huge concern for protecting human subjects from harm. Guidance on broader dimensions – such as social inclusion and other societal-level issues – was lacking in the principles and standards. The emerging picture is a scattered one, and more needs to be done to understand how guiding principles are applied in practice in different situations.

Leslie Groves (Independent Consultant) shared the findings from a piece of research[1] conducted for the UK Department for International Development (DFID), reviewing principles and guidance on ethics in evaluation and research. Her research was based on a review of publicly available ethics guidelines, protocols, policies and practice documents across sectors. She prompted participants to ask: how, in practice, do we check that ethics are respected throughout the evaluation and research cycle? How do we hold those responsible for ethics accountable?

There is a complicated balance between empirical evidence and value judgements: how should we weigh up different forms of evidence?

Chris Barnett (CDI’s Director) shared reflections from his experience in leading large, multi-stakeholder evaluations. He discussed complex structural issues between the evaluator and commissioner, including how different interests, organisational incentives and information asymmetries often distort the evaluation process. He concluded by noting that, to address this challenge, there is a trade-off to be had between rigour (i.e., evaluators achieving certainty through evidence) and inclusion (i.e., evaluators legitimising voice and different perspectives which might lead to change).

Laura Camfield (University of East Anglia Fellow and member of CDI) asked: as evaluators practising inclusion, how do we both engage and maintain critical distance? Anthropologists often need to spend at least a year in a setting to be sure they understand what’s going on – why do we think we can generate a similar understanding of power dynamics in days or weeks? Where does rigour fit into all of this? Inclusion can be part of qualitative understandings of rigour, such as Lincoln and Guba’s principle of authenticity. Are these rejected in favour of other principles that resonate more with quantitative notions of rigour, such as transferability, credibility and dependability?

Knowledge for whom?

John Gaventa (Director of Research at IDS) reflected on larger questions about knowledge and power. He maintained that these types of questions can’t easily be answered by looking at the guidelines. They are fundamental questions about knowledge and power, including how one respects the knowledge of those often left out of formal knowledge processes. Gaventa turned the debate around and asked: what are the ethics of non-inclusion? What is the ethical justification for exclusion? Building on long-standing themes at the Institute of Development Studies, including the work of Robert Chambers, he also asked: whose knowledge counts? Who counts reality?

Watch a recording of the session.

The ethics workshop report will shortly be published on the CDI website. 

FURTHER READING

Barnett, C. and Munslow, T. (2014) Workshop Report: Framing Ethics in Impact Evaluation: Where are we? Which route should we take?, IDS: Brighton – see more at http://www.ids.ac.uk/publication/workshop-report-framing-ethics-in-impact-evaluation-where-are-we-which-route-should-we-take

Groves, L. (2016) Review of Ethics Principles and Guidance in Evaluation Research, DFID: London – see more at https://ethicsinevaluationandresearch.files.wordpress.com/2016/02/ethics-principles_report_-final1.pdf

Munslow, T. and Barnett, C. (2015) Event Report: Right or Wrong? What Values Inform Modern Impact Evaluation?, IDS: Brighton – see more at http://www.ids.ac.uk/publication/background-report-right-or-wrong-what-values-inform-modern-impact-evaluation

Munslow, T. and Hale, K. (2015) Background Report: Right or Wrong? What Values Inform Modern Impact Evaluation?, IDS: Brighton – see more at http://www.ids.ac.uk/publication/background-report-right-or-wrong-what-values-inform-modern-impact-evaluation