Can an exhaustive list of impact evaluation designs be developed, or is my mission on this futile?

I have set out on a mission, as part of outcomes theory, to attempt to develop an exhaustive list of impact/outcome evaluation designs – evaluation designs which claim that changes in high-level outcomes can be attributed to a particular intervention. If we could pull off developing such a list that most people are happy with, it would be very powerful. First, it could be used in evaluation planning to work out whether all of the possible impact evaluation designs have been assessed for their appropriateness, feasibility and/or affordability. At the moment I think that almost every evaluation planner walks around wondering if there is some sort of impact evaluation design they have not considered.
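As a very rough sketch of how such a list could be used in planning, the snippet below checks which candidate designs have not yet been assessed against each criterion. The design names are placeholders, not the exhaustive list itself; only the three criteria come from the paragraph above.

```python
# Minimal, illustrative sketch of a design-coverage checklist for evaluation planning.
# The candidate design names are placeholders, not an authoritative list.

CRITERIA = ("appropriateness", "feasibility", "affordability")

candidate_designs = [
    "randomized controlled trial",
    "regression discontinuity",
    "difference-in-differences",
    "expert judgement of attribution",
]

def unassessed(assessments):
    """Return, for each design, the criteria it has not yet been assessed against."""
    return {
        design: set(CRITERIA) - assessments.get(design, set())
        for design in candidate_designs
        if set(CRITERIA) - assessments.get(design, set())
    }

# Example: the planner has only worked through two designs so far.
assessments = {
    "randomized controlled trial": {"appropriateness", "feasibility"},
    "difference-in-differences": set(CRITERIA),
}
print(unassessed(assessments))
```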
Continue reading

Untangling evaluation terms – discussing evaluation 'types' with clients often more useful than evaluation 'approaches'

I have just put up an outcomes theory article, based on a book chapter I wrote some time ago, dividing the terminology used in evaluation into five groups of terms about five different ‘aspects’ of evaluation. These aspects are: evaluation approaches; evaluation types (based on the purpose of the evaluation); evaluation methods; evaluation information analysis techniques; and evaluation designs. Approaches tend to combine a range of different elements, including general stances on evaluation, philosophy-of-science views and, for instance, quasi-political perspectives on the relationship between empowered and disempowered groups. Conceptually, evaluation approaches are often not mutually exclusive of one another. Evaluation approaches include such things as Scriven’s Goal Free Evaluation, Patton’s Utilization Focused Evaluation and Fetterman’s Empowerment Evaluation. While I find these very interesting for stimulating my own thinking about evaluation, I often (but not always) do not find them very useful when talking to a client about a specific evaluation.
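To make the five groups concrete, here is an illustrative (not authoritative) sketch that files example terms under each aspect. Apart from the three approaches named above, the example terms are my own placeholders.

```python
# Illustrative sketch only: the five groups of evaluation terms described above,
# expressed as a simple lookup. Apart from the three named approaches, the
# example terms under each aspect are placeholders, not a definitive catalogue.

EVALUATION_ASPECTS = {
    "approaches": [
        "Goal Free Evaluation",            # Scriven
        "Utilization Focused Evaluation",  # Patton
        "Empowerment Evaluation",          # Fetterman
    ],
    "types (purpose of the evaluation)": ["formative", "process", "impact/outcome"],
    "methods": ["interviews", "surveys"],                                   # placeholders
    "information analysis techniques": ["thematic analysis", "regression"], # placeholders
    "designs": ["randomized controlled trial", "comparison group"],         # placeholders
}

def aspect_of(term):
    """Look up which of the five aspects a given term is filed under, if any."""
    for aspect, terms in EVALUATION_ASPECTS.items():
        if term in terms:
            return aspect
    return None

print(aspect_of("Empowerment Evaluation"))  # -> approaches
```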
Continue reading

Formative evaluation versus impact/outcome evaluation

In response to a posting on one of my outcomes theory articles by Marcus Pilgrim, who ran the recent YEN Evaluation Clinic in Damascus, I have worked up an article on the difference between formative, process and impact/outcome evaluation. As Marcus points out in his posting, the term formative (or developmental) evaluation is not widely known in all sectors. Formative evaluation is directed at optimizing program implementation. Process evaluation attempts to describe the course and context of a program. Impact/outcome evaluation looks at the intended and unintended, positive and negative outcomes of a program and at whether they can be attributed to the program.
Continue reading

Mapping indicators onto a logic model is obvious – but why haven't we always done it?

I was running a workshop today teaching policy analysts the basics of my approach to program evaluation (Easy Outcomes). When I talked about the importance of always mapping indicators back onto a visual model, one of the participants commented that once you do it, it is so obviously the right approach that you can’t understand why we haven’t been doing it for years.

The idea behind this approach is a reaction to the way we almost always do indicator work: eyeballing a list or table of indicators and asking a group of busy people sitting around a table, ‘Does this list of indicators look any good?’
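A minimal sketch of the alternative, assuming a very simple representation of an outcomes model: each box carries the indicators mapped onto it, so unmeasured boxes stand out immediately instead of being hidden in a flat list. All box and indicator names are invented for illustration.

```python
# Illustrative sketch: indicators mapped onto the boxes of a simple outcomes model.
# Mapping them onto the model (rather than eyeballing a flat list) makes it obvious
# which parts of the model have no indicators at all.

from dataclasses import dataclass, field

@dataclass
class Box:
    name: str
    indicators: list = field(default_factory=list)
    feeds_into: list = field(default_factory=list)  # names of higher-level boxes

model = [
    Box("training delivered", ["number of sessions run"], ["skills improved"]),
    Box("skills improved", [], ["employment gained"]),   # no indicator mapped yet
    Box("employment gained", ["employment rate at 6 months"]),
]

unmeasured = [box.name for box in model if not box.indicators]
print("Boxes with no indicators:", unmeasured)  # -> ['skills improved']
```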
Continue reading

Randomistas Rule

I have just read and commented on an interesting article referred to on the 3IE site – a site dedicated to improving evidence about what works in international development. The article, by Martin Ravallion, is about the rise of the Randomistas in international development economics. Randomistas are those who promote much greater use of randomized trials to work out what works in international development. It is a good article, and it points out that randomized trials are not feasible for many important types of development intervention. The same debate is occurring in many sectors at the moment and has been argued on and off in the evaluation field for many years. My take on it is that we need to develop some underlying principle which we can debate and generally agree on, so that we do not have to keep having this debate endlessly without seemingly making much progress on it.
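For readers outside economics, here is a bare-bones, purely illustrative sketch of what a randomized trial estimates (it is not drawn from the Ravallion article): units are randomly assigned to treatment or control, and the difference in mean outcomes is read as the intervention’s impact. The outcome numbers are simulated just so the example runs.

```python
# Illustrative sketch only: the core logic of a randomized trial.
# The outcome data are simulated placeholders, not real program results.

import random
import statistics

random.seed(0)
units = list(range(200))
random.shuffle(units)
treatment, control = set(units[:100]), set(units[100:])

def observed_outcome(unit):
    # Placeholder outcome: baseline noise plus a small effect if treated.
    return random.gauss(50, 10) + (5 if unit in treatment else 0)

outcomes = {u: observed_outcome(u) for u in units}
effect = (statistics.mean(outcomes[u] for u in treatment)
          - statistics.mean(outcomes[u] for u in control))
print(f"Estimated impact: {effect:.1f}")
```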
Continue reading

Flow of causality in outcomes models and feedback loops

A quick technical post here. Fellow evaluator Rick Davies pointed out, in a post on one of my outcomes theory articles (on how best to represent causal models), that strictly visualizing causality as flowing in one direction within an outcomes model (logic model, results map, logframe, theory of change etc.) could be seen as preventing the representation of feedback loops. This is because if, as I usually do, you represent causality as flowing from bottom to top within a model (others do it left to right), then any feedback loop will, of necessity, have to flow back down the model against the direction in which causality is being represented.
Continue reading
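A small sketch of the point, assuming a toy model in which each box has a vertical level and causality is drawn flowing upwards: any edge from a higher-level box back to a lower-level (or same-level) one is a feedback loop drawn against the flow. Box names and levels are invented examples.

```python
# Illustrative sketch: causality drawn bottom-to-top, with feedback loops detected
# as edges that point back down (or sideways) against the upward flow.

levels = {                # vertical position of each box in the drawn model
    "training delivered": 0,
    "skills improved": 1,
    "employment gained": 2,
}

edges = [                 # (cause, effect) pairs
    ("training delivered", "skills improved"),
    ("skills improved", "employment gained"),
    ("employment gained", "training delivered"),   # e.g. employed graduates fund more training
]

feedback_edges = [(a, b) for a, b in edges if levels[b] <= levels[a]]
print("Feedback loops (drawn against the upward flow):", feedback_edges)
```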

Damascus – YEN Evaluation Clinic

Apologies for not blogging for a while; I’ve been involved in considerable travel and lots of other work – but that’s really no excuse. Maybe I just got all blogged out. What with Knolling, blogging here and Twittering, maybe it all just got too much. Anyway, I’m back in the saddle now, as they say! Last month I was fortunate to be an evaluation expert at the YEN Evaluation Clinic in Damascus. YEN is the Youth Employment Network – an International Labour Organization, World Bank and United Nations collaboration. A site for the evaluation clinic has been set up at yenclinic.groupsite.co.

The Evaluation Clinic took two examples of youth employment programs and worked through designing an impact evaluation for them. It was a fascinating experience. I’ll blog another day about what it was like being the sole psychologist evaluator working with five economist evaluation specialists (from the ILO and the World Bank)!
Continue reading