Realist evaluation
Realist evaluation aims to identify the underlying generative causal mechanisms that explain how outcomes were caused and how context influences these.
Overview
Realist evaluation focuses on answering the question: “What works, for whom, in what respects, to what extent, in what contexts, and how?”.
Pawson and Tilley (1997), who initially developed the realist evaluation approach, argued that in order to be useful for decision makers, evaluations need to identify ‘what works in which circumstances and for whom?’, rather than simply asking ‘does this intervention work?’ Pawson and Tilley proposed that a realist evaluation should produce ‘context-mechanism-outcome’ (CMO) statements along the lines of “In this context, that particular mechanism fired for these actors, generating those outcomes. In that context, this other mechanism fired, generating these different outcomes.” Others using realist evaluation have incorporated features of the intervention as an additional element of the CMO configurations, giving the formulation C+I+M=O (Punton & Vogel, 2020), or ICAMO (Intervention-Context-Actor-Mechanism-Outcome) (Mukumbang et al., 2018).
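To make the structure of these configurations concrete, here is a minimal sketch of a CMO (and its extended C+I+M=O / ICAMO variants) represented as structured data, drawing on the parenting-programme example discussed in the next section. The class and field names are illustrative assumptions, not a standard notation from the realist literature.

```python
# A minimal, illustrative sketch (not a standard from the realist
# literature): one way to hold a CMO configuration, with the optional
# intervention (I) and actor (A) elements used in the C+I+M=O and
# ICAMO variants.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CMOConfiguration:
    context: str                         # C: circumstances in which the mechanism fires
    mechanism: str                       # M: the reasoning/driver that generates change
    outcome: str                         # O: the pattern of results generated
    intervention: Optional[str] = None   # I: resources/opportunities offered (C+I+M=O)
    actor: Optional[str] = None          # A: who responds (ICAMO)

# Hypothetical example based on the parenting skills programme below.
example = CMOConfiguration(
    context="social norms that frame childcare as primarily mothers' work",
    mechanism="mothers reason that the programme affirms and supports their role",
    outcome="greater uptake of new parenting practices among mothers than fathers",
    intervention="group-based parenting skills training",
    actor="mothers",
)
print(example)
```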
The realist understanding of how programmes work
Realist philosophy (Pawson and Tilley use the term ‘scientific realism’) considers that an intervention works (or not) because actors make particular decisions in response to the intervention (or not). The ‘reasoning’ of the actors in response to the resources or opportunities provided by the intervention is what causes the outcomes.
Strictly speaking, the term ‘generative mechanism’ refers to the underlying social or psychological drivers that ‘cause’ the reasoning of actors. For example, a parenting skills programme may have achieved different outcomes for fathers and mothers. The mechanism generating different ‘reasoning’ by mothers and fathers may relate to dominant social norms about the roles and responsibilities of mothers and fathers. Additional mechanisms may be situated in psychological, social or other spheres.
Context matters: firstly, it influences ‘reasoning’ and, secondly, generative mechanisms can only work if the circumstances are right. Going back to our example, there may be different social beliefs about the roles and responsibilities of mothers and fathers in different cultures, which may affect how parents respond to the parenting programme. Whether parents can put their new learning into practice will depend on a range of factors – perhaps the time they have available, their own beliefs about parenting, or their mental health. Finally, the context may provide alternative explanations of the observed outcomes, which need to be taken into account during the analysis.
Undertaking a realist evaluation
Developing the initial programme theory
Realist evaluation starts with theory and ends with theory. In other words, the purpose of a realist evaluation is as much to test and refine the programme theory as it is to determine whether and how the programme worked in a particular setting.
The programme theory describes how the intervention is expected to lead to its effects and in which conditions it should do so. The initial programme theory may be based on previous research, knowledge, experience, and the assumptions of the intervention designers about how the intervention will work. The difference between realist and other programme theory-based evaluation approaches is that a realist programme theory specifies what mechanisms will generate the outcomes and what features of the context will affect whether or not those mechanisms operate. Ideally, these elements (mechanisms, outcome, context) are made explicit at the evaluation design stage, as it enables the design of the data collection to focus on testing the different elements of the programme theory.
Choosing the evaluation methods
Realist evaluation is method-neutral (i.e., it does not impose the use of particular methods).
As with any evaluation, the choice of data collection and analysis methods and tools should be guided by the types of data needed to answer the evaluation questions or, more specifically, to test the initial programme theory in all its dimensions.
Usually, both quantitative and qualitative data are collected in a realist evaluation, often with quantitative data being focused on context and outcomes and qualitative data on generative mechanisms. Because the realist analysis uses mainly intra-programme comparisons (i.e., comparisons across different groups involved in the same programme) to test the initial theory, a realist evaluation design does not need to construct comparison groups. Instead, the refined programme theory will be tested in a different context in a later study. The case study design is often used, whereby case selection is typically purposive, as the cases should enable ‘testing’ of the initial programme theory in all its dimensions.
Using a realist data analysis approach
Realist data analysis is driven by the principles of realism: realist evaluation explains change brought about by an intervention by referring to the actors who act and change (or not) a situation under specific conditions and under the influence of external events (including the intervention itself). The actors and the interventions are considered to be embedded in a social reality that influences how the intervention is implemented and how actors respond to it (or not). The context-mechanism-outcome (CMO) configuration is used as the main structure for realist analysis.
In the first phase of analysis, data are organised in relation to the initial programme theory – whether the data relate to what was done (the intervention activities) or to context, mechanism, outcome and (groups of) actors. Qualitative data are coded and appropriate methods for analysing quantitative data are applied. The data on outcomes are disaggregated by sub-groups (which were selected based on the programme theory).
Once patterns of outcomes are identified, the mechanisms generating those outcomes can be analysed, provided the right kinds of data are available. The contexts in which particular mechanisms did or did not ‘fire’ can then be determined. Contexts may relate to the sub-groups for whom outcomes were generated and to other stakeholders, implementation processes and organisational, socio-economic, cultural and political conditions.
The analytic process is not necessarily sequential but should result in a set of ‘context-mechanism-outcome’ (CMO) statements: “In this context, that particular mechanism fired for these actors, generating those outcomes. In that context, this other mechanism fired, generating these different outcomes.”
The last phase of the analysis consists of determining which CMO configuration(s) offers the most robust and plausible explanation of the observed pattern of outcomes. This resulting CMO configuration is then compared with the initial programme theory, which is modified (or not) in light of the evaluation findings.
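As a concrete illustration of these analysis phases, the sketch below assumes a small set of hypothetical coded interview excerpts (all group, context, and mechanism codes are invented): outcomes are first disaggregated by theory-relevant sub-groups, and the evidential support for candidate context-mechanism combinations is then tallied as raw material for CMO statements.

```python
# Illustrative sketch only, using invented coded data: after excerpts
# have been coded against the programme theory, outcomes can be
# disaggregated by sub-group and support for candidate
# context-mechanism pairings tallied.
import pandas as pd

# One row per coded excerpt (codes and groups are hypothetical).
coded = pd.DataFrame({
    "actor_group":     ["mothers", "mothers", "fathers", "fathers", "fathers"],
    "context_code":    ["supportive norms", "supportive norms",
                        "restrictive norms", "restrictive norms", "supportive norms"],
    "mechanism_code":  ["role affirmation", "role affirmation",
                        "fear of judgement", "fear of judgement", "role affirmation"],
    "outcome_achieved": [True, True, False, False, True],
})

# First phase: disaggregate outcomes by sub-group.
print(coded.groupby("actor_group")["outcome_achieved"].mean())

# Later phases: tally how often each context-mechanism pairing is
# associated with the outcome, as raw evidence for CMO statements.
support = (coded
           .groupby(["context_code", "mechanism_code"])["outcome_achieved"]
           .agg(["count", "mean"]))
print(support)
```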
Using the findings from realist evaluation
Both generative mechanisms and programme theories can be considered at different levels of abstraction, from very specific (particular individuals within particular programmes) to quite abstract (across different kinds of programmes). Pawson and Tilley argued that ‘middle range theories’* (MRT) are most useful. MRTs are specific enough to generate particular propositions to test and general enough to apply across different situations. Typically, MRTs develop over time, based on the accumulation of insights acquired through a series of studies allowing gradual specification of the realist findings. All kinds of theory in realist evaluation – programme theory, theories about particular mechanisms, CMOs, and formal theory – are most useful if developed at a middle level of abstraction.
Because realist evaluation uses the idea of generative causality (i.e. mechanisms only fire when the context is conducive), realists are modest in their claims, stating that an evaluation cannot produce universally applicable findings. At best, evaluation can make sense of the complex processes underlying programmes by formulating plausible explanations ex post. It can indicate the conditions in which an intervention works (or not) and how it does so. This realist specification allows decision makers to assess whether interventions that proved successful in one setting may be so in another setting and assists programme planners in adapting interventions to suit specific contexts.
*A middle range theory is understood as a “theory that lies between the minor but necessary working hypotheses …and the all-inclusive systematic efforts to develop a unified theory that will explain all the observed uniformities of social behavior, social organization and social change” (Merton, 1968, p 39). In essence, “middle range” refers to the degree of abstraction and can refer to programme theory, generative mechanisms, CMOs, formal theories etc.
Features consistent with a causal pathways perspective
A causal pathways perspective on evaluation focuses on understanding how, why, and under what conditions change happens or has happened. It is used to understand the interconnected chains of causal links which lead to a range of outcomes and impacts. These causal pathways are likely to involve multiple actors, contributing factors, events and actions, not only the activities associated with the program, project or policy being evaluated or its stated objectives.
Realist Evaluation can be used in ways which incorporate the following features of a causal pathways perspective:
- Valuing actors’ narratives: Realist evaluation explicitly tests hypothesised theories of change by collecting information from respondents about contexts, mechanisms and/or outcomes and the program theory.
- Addressing power and inclusion: Realist evaluation has the potential to address power and inclusion by ensuring that the sampling strategy is designed to involve disenfranchised communities and seek information about context and mechanisms leading to change.
- Articulating detailed and nuanced change pathways: Realist evaluation provides a detailed analysis of the outcomes that result from changes in actors' reasoning in response to an intervention, as well as the underlying generative mechanisms (psychological or social drivers) that influence changes in reasoning.
- Paying attention to a range of outcomes and impacts: Realist evaluation seeks to explain intended and unintended as well as positive and negative outcomes.
- Using an iterative, bricolage approach to evaluation design: The data analysis is iterative, initial program theories are tested, and earlier stages of analysis are used to refine the evaluation design for later stages and/or the program theory.
- Helping to understand contextual variation: Understanding context is central to realist evaluation, which asks, “What works, for whom, in what respects, to what extent, in what contexts, and how?”. Context influences causal mechanisms, and therefore, outcomes can only be achieved if the circumstances are right.
- Drawing on a range of causal inference strategies: Realist evaluations can use many causal inference methods, including process tracing, pattern matching and comparing to predictive models.
- Taking a complexity-appropriate approach to evaluation quality and rigour: Clear descriptions and justifications of how the analysis has used data to develop, confirm, refute or refine one or more programme theories contribute to credibility and rigour. The findings of realist evaluations about why interventions work (or don’t work) in specific contexts support decisions about transferring a program to a different setting.
Example
Realist evaluation of the Building Capacity to Use Research Evidence (BCURE) programme
The programme
The Building Capacity to Use Research Evidence (BCURE) programme was a £15.7 million initiative funded by the UK Department for International Development (DFID) from 2013 to 2017. Six implementing partners worked with governments in 12 low- and middle-income countries in Africa and Asia to strengthen the capacity of civil servants and parliamentarians to use research evidence in decision-making, by building the skills, incentives, and systems required to access, appraise and apply evidence.
BCURE used a range of interventions designed and combined in different ways – including training, mentoring, policy dialogues, and technical support to develop evidence tools and guidelines. Projects ranged in scope and scale, some based in single ministries and others spanning whole government systems.
The evaluation
DFID commissioned an evaluation to run in parallel with the programme, conducted by an independent team from Itad from 2014 to 2017. The evaluation aimed to understand whether BCURE worked and how and why capacity building can contribute to the increased use of evidence in policymaking in the very different contexts in which the programme operated, informing programming decisions within and beyond DFID.
This emphasis on ‘how and why’ BCURE worked implied the need for an evaluation approach that made use of theory. Realist evaluation was selected over other theory-based approaches because it is particularly useful in pilot programmes where it is not clear exactly how and why the intervention will work, where programmes are implemented across multiple settings and are likely to lead to different outcomes for different groups in diverse contexts, and where commissioners are interested in learning lessons about how to scale up or roll out an intervention.
How we applied a realist approach
The BCURE evaluation was conceptualised in terms of three broad stages: developing theory, testing theory, and refining theory. These stages were iterative rather than linear, with the theory developed, tested, refined, and tested again as knowledge accumulated.
Developing theory
A draft programme theory was developed at the beginning of the evaluation, drawing on the team’s existing knowledge and professional hunches about the nature of capacity building and how it can contribute to evidence use in policymaking. These insights shaped the research questions for a realist literature review, which identified additional theories and possible mechanisms in the wider literature. These were used to develop a more detailed programme theory and the first iteration of CMO configurations. Features of the intervention were incorporated as an additional element to the CMO configurations to give the formulation C+I+M=O. This helped to separate intervention factors that BCURE controlled (such as training design or length) from contextual factors that it did not (such as trainees' attitudes).
Testing theory
Data were collected in six of the twelve BCURE countries – Bangladesh, Kenya, Pakistan, Sierra Leone, South Africa, and Zimbabwe. The evaluation drew primarily on qualitative data in the form of semi-structured interviews with 567 stakeholders over three years, using tailored topic guides to test the latest set of CIMOs. The evaluation also drew on monitoring data, programme documentation, and, where possible, government documents, such as tools and policies developed in collaboration with BCURE partners.
Refining theory
CIMO configurations were revised through an iterative process. A customised Microsoft Excel database was used to analyse data at a country level, with rows representing the outcomes referenced by interview respondents and columns capturing stakeholders’ descriptions of how and why the outcome did or did not come about, as intervention, context, and mechanism factors. Each year, evidence from the country level was synthesised to explore how and why different interventions contributed to different patterns of outcomes in different contexts. The synthesis involved the whole evaluation team via analysis workshops to identify patterns, concepts, and metaphors that applied across the cases and interrogate differences. This process produced an evidence-based set of refined CIMOs.
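The country-level database described above can be pictured as a simple table. The sketch below is a rough, hypothetical approximation of that structure (the column names and example rows are ours, not the evaluation team's actual Excel template): rows are outcomes referenced by respondents, and columns capture accounts of intervention (I), context (C) and mechanism (M) factors.

```python
# Hypothetical approximation of the country-level analysis database:
# rows are outcomes mentioned by respondents; columns capture
# respondents' accounts of intervention, context and mechanism factors.
# Column names and rows are invented for illustration.
import pandas as pd

cimo_db = pd.DataFrame([
    {
        "outcome": "evidence tool adopted by a ministry",
        "intervention_factors": "co-produced tool; flexible, responsive technical support",
        "context_factors": "tool addressed a recognised problem",
        "mechanism_factors": "officials saw the tool helped them do their jobs better",
        "respondent": "civil servant (hypothetical)",
    },
    {
        "outcome": "training content not applied in daily work",
        "intervention_factors": "one-off training workshop",
        "context_factors": "little senior support for evidence use",
        "mechanism_factors": "no perceived incentive to change routine practice",
        "respondent": "policy analyst (hypothetical)",
    },
])

# Synthesis step: group the evidence by outcome to compare which
# intervention, context and mechanism factors recur across respondents.
for outcome, rows in cimo_db.groupby("outcome"):
    print(f"{outcome}: {len(rows)} piece(s) of evidence")
```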
Throughout the evaluation, the CIMOs evolved significantly through multiple rounds of testing and refinement. The literature review identified 19 CIMOs, refined to 15 following the first year of data collection. In the second year, some were dropped, others merged, and new ones were added, with 14 CIMOs tested. In the third year, the team realised the need to focus on fewer CIMOs to support a more in-depth investigation. A subset was prioritised, guided by DFID’s interests, and a final set of five tested CIMOs was presented in the final report. See below for an example:
Showcasing
Where technical support is provided to incorporate evidence within a policy process or develop a tool to improve evidence access, appraisal or use (I), this can generate high-quality policies or products (O) that showcase the value of evidence for quality, performance and delivery (M) and lead to adoption (O) and diffusion (O) of the procedure or tool. This is more likely where external actors ‘accompany’ government partners to co-produce policies or tools in a flexible, responsive and collaborative way (I), where policies are high priority or tools address a recognised problem (C), and where tools are intuitive and interactive (I) and genuinely facilitate officials to make decisions and do their jobs better and more efficiently (M).
Use and utility of the evaluation
The realist evaluation of BCURE influenced policy and practice within and beyond DFID. The approach added value in two main ways:
- It offered a robust way to analyse causality. Developing and testing realist CIMO configurations added substantial precision and analytical depth to the findings. The iterative development, testing, and refinement of these theories helped ensure they were grounded in the experience of implementing partners and broader insights from the wider literature, which enhanced relevance, empirical weight, and portability to other settings.
- It helped generate useful, policy-relevant findings that informed follow-up programmes. This was partly because theories were iteratively refined over three years, informed by stakeholder perspectives and pragmatic considerations about how to frame theories in the most operational way. By pinpointing the contexts and intervention factors necessary for mechanisms to ‘fire,’ the evaluation was also able to generate intuitive and accessible lessons for future projects. This was helped by framing mechanisms in everyday labels and carefully considering how to communicate CIMOs in the final report to minimise realist jargon (discussed further in this practice paper).
All evaluation reports and other outputs are available on the Itad website.
Advice for CHOOSING this approach (tips and traps)
A realist evaluation design is well suited to assess how interventions in complex situations work because it allows the evaluator to deconstruct the causal web of conditions underlying such interventions.
A realist evaluation yields information that indicates how the intervention works (i.e., generative mechanism) and the conditions that are needed for a particular mechanism to work (i.e., specification of contexts) and, thus, it is likely to be more useful to policymakers than other types of evaluation.
As with any evaluation, the scope of the realist evaluation needs to be set within the boundaries of available time and resources. Using a realist approach to evaluation is not necessarily more resource or time-intensive than other theory-based evaluations, but it can be more expensive than a simple pre-post evaluation design.
It is generally not possible to explore every aspect of an intervention, especially a large and complex one, using a realist approach. Choices must be made about which parts of an intervention/programme theory to focus on. The needs and interests of the commissioner should guide these.
Advice for USING this approach (tips and traps)
Larger scale or more complicated realist evaluations are ideally carried out by interdisciplinary teams, as this usually allows for a broader consideration of likely mechanisms. However, it is also possible for a single practitioner to undertake a realist evaluation, including in small-scale evaluations.
If the programme theory/MRT is made explicit together with the main actors, it can lead to a better, shared understanding of the intervention. This in turn could improve ownership and lead to more context-appropriate interventions.
Developing the causal theory may also contribute to a better definition of what needs to be evaluated and, thus, what the key evaluation questions are.
Allow sufficient time for assessing the interactions between intervention, actors and context.
Resources
The following books are recommended reading on the philosophical background and the general development of this approach:
- Realistic evaluation
Pawson, R. & Tilley, N. (1997). Realistic Evaluation. London: Sage.
- The science of evaluation: A realist manifesto
Pawson, R. (2013). The Science of Evaluation: A Realist Manifesto. London: SAGE Publications.
Guides
- Quality standards for realist evaluation for evaluators and peer-reviewers: The RAMESES II project (PDF)
Wong, G., Westhorp, G., Greenhalgh, J., Manzano, A., Jagosh, J., Pawson, R. & Greenhalgh, T. (2017).
The quality standards cover:
1. The evaluation purpose
2. Understanding and applying a realist principle of generative causation in realist evaluations
3. Constructing and refining a realist programme theory or theories
4. Evaluation design
5. Data collection methods
6. Sample recruitment strategy
7. Data analysis
8. Reporting
- Magenta book supplementary guide – Realist evaluation (PDF)
This brief guide covers:
1. Realist Evaluation: What it is and when to use it
2. Introducing Realist Evaluation
3. Developing realist theory: Context, mechanism and Outcome
4. A theoretical example
5. Applying the realist frame to evaluation
6. Further examples of Realist Evaluation
7. References and further resources
This introduction will help evaluators and commissioners of evaluations to decide whether a realist approach is appropriate for evaluating the impact of a particular policy or programme.
Discussion Papers
- Realistic evaluation bloodlines
Pawson, R. & Tilley, N. (2001). Realistic Evaluation Bloodlines. American Journal of Evaluation, 22, 317-324.
This article analyses six social science inquiries from around the globe that use a variety of methods and strategies consistent with realistic evaluation. The authors conclude with six key lessons for realist evaluation drawn from these studies.
- A realist diagnostic workshop
Pawson, R. & Manzano-Santaella, A. (2012). A realist diagnostic workshop. Evaluation 18:176-191.
This article explores how to conduct realist inquiry through a constructive critique of recently published ‘realist evaluations’.
- Is realist evaluation keeping its promise? A literature review of methodological practice in health systems research
Marchal, B., Van Belle, S., Van Olmen, J., Hoerée, T. & Kegels, G. (2012). Is realist evaluation keeping its promise? A literature review of methodological practice in health systems research. Evaluation, 18, 192-212.
This literature review identifies how realist evaluation concepts are applied in health systems research and the methodological problems encountered. The paper discusses the need for methodological guidance and more clarity concerning definitions of mechanisms and context and how the configuration of context, mechanism and intervention can be described and assessed.
- Unpacking Black Boxes: Mechanisms and Theory Building in Evaluation
Astbury, B. & Leeuw, F. (2010). Unpacking Black Boxes: Mechanisms and Theory Building in Evaluation. American Journal of Evaluation 31(3):363-381.
This article clarifies what is meant by mechanisms in the context of program evaluation. Three main characteristics of mechanisms are identified, and a possible typology of mechanisms is outlined. Theoretical and practical implications for evaluators are discussed, along with some precautions to consider when investigating mechanisms that might plausibly account for program outcomes.
- Understanding ‘context’ in realist evaluation and synthesis
Greenhalgh, J. & Manzano, A. (2021).
This paper reviewed 40 purposively selected studies to explore how context is conceptualised in realist reviews and evaluations, examining how context is defined and operationalised in their findings. Two context narratives were identified: context as observable features that set a chain of events in motion, and context that operates in a dynamic, emergent way over time. The different implications of each conceptualisation of context for the design, goals and impact of realist reviews and evaluations are discussed.
- Reality bites: Making realist evaluation useful in the real world
Punton, M., Vogel, I., Leavy, J., Michaelis, C. & Boydell, E. (2020). CDI Practice Paper 22, Brighton, IDS.
“This practice paper presents lessons from four large, multi-country realist evaluations of complex interventions conducted by Itad since 2013. It argues that realist evaluation can add value by enhancing the clarity, depth, and portability of findings, helping evaluators deal with context and complexity in pragmatic ways, and providing helpful tools and lenses for implementers to critically appraise their programmes and generate learning. However, novice realist evaluators face a number of potential pitfalls, especially in large-scale evaluations. This paper shares lessons on how Itad has navigated these challenges, which may be helpful to others working in similar contexts in international development and beyond.”
- Evaluating complex public health interventions: Theory, methods and scope of realist enquiry
Connelly, J. (2007). Evaluating complex public health interventions: theory, methods and scope of realist enquiry. Journal of Evaluation in Clinical Practice 13(6):935-941.
"This article argues that standard models used to study complex health interventions are inadequate and that critical realism “posits a substantive causal theory, an end to fact-value dualism, and a coherent and emancipatory model of social action; all of these features amount to a systematic and compelling account of public health practice and a coherent approach to evaluation of complex public health interventions”.
- The evaluation of health and social care partnerships: An analysis of approaches and synthesis for the future
Dickinson, H. (2006). The evaluation of health and social care partnerships: an analysis of approaches and synthesis for the future. Health Soc Care Community 14:375-383.
This paper suggests that theories of change and realistic evaluation approaches fulfil different and complementary roles in the evaluation of social care partnerships and that the practical limitations of these approaches may be addressed by interpreting these approaches through a framework of critical realism.
Websites
- RAMESES (Realist And Meta-narrative Evidence Syntheses: Evolving Standards)
The RAMESES (Realist And Meta-narrative Evidence Syntheses: Evolving Standards) project produced both reporting and quality standards for realist evaluations, as well as resources and training materials for realist evaluation that cover:
Frequently Asked Questions about Realist Evaluation
Realist evaluation, realist synthesis, realist research - what's in a name?
What is a mechanism? What is a programme mechanism?
What realists mean by context
Philosophies and evaluation design
Protocols and realist evaluation
A realist understanding of programme fidelity
Realist evaluation and ethical considerations
Theory in realist evaluation
Developing realist programme theories
Retroduction in realist evaluation
The realist interview
Realist evaluation interviewing - a 'starter set' of questions
Working with a librarian on a realist review
- Theory-driven Inquiry (archived link)
The Institute of Tropical Medicine, Antwerp (Belgium) hosts the Theory-driven Inquiry website, which includes a methodological overview of realist evaluation, realist synthesis and theory-driven evaluation, as well as links to relevant publications.
Examples of realist studies in health care
- Benefits of realist evaluation for rapidly changing health service delivery
Jagosh, J., Stott, H., Halls, S., Thomas, R., Liddiard, C., Cupples, M., Cramp, F., Kersten, P., Foster, D. & Walsh, N.E. (2022).
This paper reflected on a realist evaluation of first-contact physiotherapy in primary care to identify five important considerations for improving the quality of realist evaluation outputs when studying rapidly changing health service delivery.
- Using Realistic Evaluation to Evaluate a Practice-level Intervention to Improve Primary Healthcare for Patients with Long-term Mental Illness
Byng, R., Norman, I. and Redfern, S. (2005). Using realistic evaluation to evaluate a practice-level intervention to improve primary healthcare for patients with long-term mental illness. Evaluation, 11, 69-93
Byng et al. (2005) applied realist evaluation principles to evaluate a mental health programme in the NHS.
- Exposing the key functions of a complex intervention for shared care in mental health: case study of a process evaluation
Byng, R., Norman, I., Redfern, S. and Jones, R. (2008). Exposing the key functions of a complex intervention for shared care in mental health: case study of a process evaluation. BMC Health Serv Res 8: 274.
Byng et al. (2008) applied realist evaluation principles to evaluate a mental health programme in the NHS.
“Realistic Evaluation provided a useful theoretical framework for this process evaluation, by allowing a specific focus on context. Although complex interventions might appear 'out of control', due to their varied manifestation in different situations, context sensitive process evaluations can help identify the intervention's key functions.”
- Tackling health inequalities through partnership working: learning from a realist evaluation
Evans, D. and Killoran, A. (2000). Tackling health inequalities through partnership working: learning from a realist evaluation. Critical Public Health 10:125-140.
Evans and Killoran present the evaluation of a Health Education Authority’s Integrated Purchasing Programme and specifically of demonstration projects testing partnerships to tackle health inequalities.
- Unearthing how, why, for whom and under what health system conditions the antiretroviral treatment adherence club intervention in South Africa works
This paper discusses an evaluation that used the ICAMO (Intervention-Context-Actor-Mechanism-Outcome) framework to evaluate a programme supporting adherence to antiretroviral treatment (Mukumbang et al., 2018).
- Keeping it real: using mechanisms to promote use in the realist evaluation of the Building Capacity to Use Research Evidence program
This paper discusses an evaluation that used the C+I+M=O framework, adding features of the intervention to the identification of contexts, mechanisms and outcomes, in a realist evaluation of a programme to build organisational capacity to use research evidence (Punton & Vogel, 2020).
Sources
Merton, R.K. (1968). Social theory and social structure. New York: The Free Press.
Mukumbang, F.C., Marchal, B., Van Belle, S. & van Wyk, B. (2018). Unearthing how, why, for whom and under what health system conditions the antiretroviral treatment adherence club intervention in South Africa works: A realist theory refining approach. BMC Health Serv Res 18, 343.
Punton, M. & Vogel, I. (2020). Keeping it real: using mechanisms to promote use in the realist evaluation of the building capacity to use research evidence program. New Directions for Evaluation 2020(167), 87-100.