By Katharina Welle

There are quite a few ‘new kids on the block’ among the impact evaluation designs and methods in Elliot Stern’s Impact Evaluation guide for Commissioners and Managers. One of them is qualitative comparative analysis (QCA), a research method originally developed in the 1980s in political science and sociology to carry out complex comparisons between different countries or societies. In essence, QCA applies systematic comparison to case study research, and will be of interest to any evaluator who wants to better understand the factors that influenced a particular outcome of an intervention implemented in different settings. In her lecture, Wendy Olsen explains the basics of QCA in about 30 minutes.

In a new CDI Practice Paper, we explore the benefits and pitfalls of applying QCA in an impact evaluation setting, based on three applications of the method in Itad-led evaluations and research.

Below, we share three benefits of, and three caveats to, using QCA in impact evaluations. Do consult the CDI Practice Paper for our full set of lessons.

Three benefits of using QCA

  • QCA works well in areas of impact evaluation where you can’t apply a counterfactual and where you want to compare the factors that affected outcomes across a number of different interventions. In such cases, QCA adds rigour via systematic comparative analysis. Yet QCA is not the only option: in his monitoring and evaluation blog, Rick Davies discusses alternative methods such as decision-tree models, and he suggests further approaches in a recent video on ‘working with “loose” theories of change’.
  • QCA allows comparison of a small to large number of cases (in theory, from just a handful of cases to well over 100).
  • The method pushes the evaluator to apply a very transparent approach to the evaluation by making all assumptions and choices explicit. Barbara Befani and Carrie Baptist wrote a brief step-by-step guide to applying QCA for assessing impact. Barbara Befani will also be publishing a more detailed guide later this year on the Swedish Expert Group for Aid Studies website.

Three caveats to using QCA

  • Make sure your evaluation team is well-versed in QCA: without going into the technical details, it is important that the evaluation team leader has a good grasp of this complex approach before the evaluation starts, to ensure that the design is appropriately framed. For example, if the variables of interest are not robustly defined, the analysis rests on shaky ground and the findings will not be credible.
  • Don’t attempt QCA if there is no clear Theory of Change and related data to test it: QCA is a theory-based approach to evaluation, and if you are not able to establish key factors for success or outcomes of the project interventions to be evaluated, then the method cannot be effectively used. Similarly, if you can’t get comparable data on the key factors affecting change across your case studies, it may well be that case studies need to be excluded from the analysis – an issue that you don’t want to crop up during the evaluation!
  • Finally, clarify client expectations when suggesting QCA as the evaluation design: QCA identifies packages of conditions or factors that are associated with a particular outcome, but it does not measure the net effect of an intervention or explain the nuanced mechanisms at play leading from an intervention to an outcome. If your evaluation commissioner expects this, then alternative methods will need to complement QCA.
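To make the ‘packages of conditions’ idea above more concrete, here is a minimal sketch, in Python, of the truth-table step at the heart of crisp-set QCA: each case is coded 1/0 on a few conditions, cases sharing the same configuration are grouped, and each configuration's consistency with the outcome is computed. The case data and condition names are invented for illustration, and real QCA software additionally performs Boolean minimisation, which this sketch omits.

```python
from collections import defaultdict

# Invented codings for six project cases.
# Conditions: strong local partner (P), adequate funding (F);
# outcome: improved service delivery (O).
cases = [
    {"P": 1, "F": 1, "O": 1},
    {"P": 1, "F": 1, "O": 1},
    {"P": 1, "F": 0, "O": 0},
    {"P": 0, "F": 1, "O": 0},
    {"P": 0, "F": 0, "O": 0},
    {"P": 1, "F": 1, "O": 1},
]

def truth_table(cases, conditions, outcome):
    """Group cases by configuration of conditions and report,
    for each configuration, the number of cases and the share
    of them showing the outcome (its 'consistency')."""
    rows = defaultdict(lambda: {"n": 0, "positive": 0})
    for case in cases:
        config = tuple(case[c] for c in conditions)
        rows[config]["n"] += 1
        rows[config]["positive"] += case[outcome]
    return {
        config: {"n": r["n"], "consistency": r["positive"] / r["n"]}
        for config, r in rows.items()
    }

table = truth_table(cases, ["P", "F"], "O")
for config, row in sorted(table.items(), reverse=True):
    print(config, row)
```

In this toy data, only the configuration combining a strong local partner with adequate funding is consistently associated with the outcome, which is the kind of 'package' statement QCA produces rather than a net-effect estimate.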

Should you add QCA to your impact evaluation toolbox? Yes, provided you don’t underestimate the challenges of applying the approach. While it is worth keeping an eye on alternatives for evaluating theories of change (see, for example, Rick Davies’ recent presentation on the topic), QCA is definitely a valuable addition to impact evaluation design options, in particular when combined with other approaches.

Image: World Bank Photo Collection, cc on Flickr

Partner(s): Itad