The importance of 'looking behind the numbers' in performance management systems

A colleague, Stan Capela, recently highlighted the importance of ‘looking behind the numbers’ in performance management systems, pointing out that, if this is not done, false conclusions can be drawn from such systems. I think that most people would agree with this sentiment. The key issue for me is what the most effective way is of ‘looking behind the numbers’ when measuring the performance of people, projects or organizations. Continue reading

Moving past the debate about randomized experiments

A colleague, Bob Williams, recently drew attention to articles in the New Yorker about the use of randomized experiments, and particularly to one from an economist advocating their widespread use in a range of program areas.

I’ve been involved in a number of seemingly endless discussions and presentations about the pros and cons of randomized experiments and the rise of what are being called the Randomistas – those advocating much wider use of randomized experiments. In this post I want to talk about how we can move beyond these circular debates. Continue reading

Is it the role of an evaluator to always 'value' what they are evaluating?

I’ve had occasion recently to think about whether the notion of ‘valuing’ something is always an essential part of evaluation. To question this may seem a heresy to some evaluators who see valuing as the defining aspect of evaluation (as opposed, for instance, to ‘research’, where they don’t see such valuing as needing to take place). I’m not settled in my thoughts on this issue. Below I just want to float an argument which has been rattling around in my head for a while and which I have not yet had a chance to get down in writing, to see if it can be shot down – in which case I will change my mind. Continue reading

Can an exhaustive list of impact evaluation designs be developed, or is my mission on this futile?

I have set out on a mission, as part of outcomes theory, to attempt to develop an exhaustive list of impact/outcome evaluation designs – evaluation designs which make a claim that changes in high-level outcomes can be attributed to a particular intervention. If we could pull off developing such a list that most people are happy with, it would be very powerful. First, it could be used in evaluation planning to work out whether all of the possible impact evaluation designs had been assessed for their appropriateness, feasibility and/or affordability. At the moment, I suspect that almost every evaluation planner walks around wondering if there is some sort of impact evaluation design they have not considered.
Continue reading

Untangling evaluation terms – discussing evaluation 'types' with clients often more useful than evaluation 'approaches'

I have just put up an outcomes theory article, based on a book chapter I wrote some time ago, dividing the terminology used in evaluation into five groups of terms about five different ‘aspects’ of evaluation. These aspects are: evaluation approaches; evaluation types (based on the purpose of the evaluation); evaluation methods; evaluation information analysis techniques; and evaluation designs. Approaches tend to combine a range of different elements, including general approaches to evaluation, philosophy-of-science views and, for instance, quasi-political perspectives on the relationship between empowered and disempowered groups. Evaluation approaches are often not conceptually mutually exclusive from each other. They include such things as Scriven’s Goal Free Evaluation, Patton’s Utilization Focused Evaluation and Fetterman’s Empowerment Evaluation. While I find these very interesting for stimulating my thinking about evaluation, I often (but not always) do not find them very useful when talking to a client about a specific evaluation. To make the grouping concrete, a rough sketch of it appears below.
Continue reading
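
Here is a minimal Python sketch of the five-aspect grouping as a simple data structure. Only the approaches (Scriven, Patton, Fetterman) and the purpose-based types (formative, process, impact/outcome – discussed in the next post) come from the text; the example methods, analysis techniques and designs are illustrative assumptions of mine, not the article’s own lists.

```python
# A minimal sketch (not from the article itself) of the five 'aspects'
# of evaluation terminology. Entries marked 'assumed' are illustrative
# examples I have added, not the article's own lists.
EVALUATION_ASPECTS = {
    "approaches": [
        "Goal Free Evaluation (Scriven)",
        "Utilization Focused Evaluation (Patton)",
        "Empowerment Evaluation (Fetterman)",
    ],
    "types (by purpose)": [
        "formative evaluation",
        "process evaluation",
        "impact/outcome evaluation",
    ],
    "methods": ["interviews", "surveys"],           # assumed examples
    "analysis techniques": ["thematic analysis"],   # assumed example
    "designs": ["randomized experiment"],           # assumed example
}

def aspect_of(term: str) -> str:
    """Return the aspect a given evaluation term falls under."""
    for aspect, terms in EVALUATION_ASPECTS.items():
        if any(term.lower() in t.lower() for t in terms):
            return aspect
    return "unknown"

print(aspect_of("Empowerment Evaluation"))  # -> approaches
print(aspect_of("process evaluation"))      # -> types (by purpose)
```

The practical point of keeping the groups separate is that a conversation with a client can stay at the ‘types’ level (why are we evaluating?) without getting tangled in approaches or designs.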

Formative evaluation versus impact/outcome evaluation

In response to a posting on one of my outcomes theory articles by Marcus Pilgrim, who ran the recent YEN Evaluation Clinic in Damascus, I have worked up an article on the difference between formative, process and impact/outcome evaluation. As Marcus points out in his posting, the term formative (or developmental) evaluation is not widely known in all sectors. Formative evaluation is directed at optimizing program implementation. Process evaluation attempts to describe the course and context of a program. Impact/outcome evaluation looks at the intended and unintended, positive and negative outcomes of a program and whether they can be attributed to the program. Continue reading

Randomistas Rule

Just read and commented on an interesting article referred to on the 3IE site – a site dedicated to improving evidence about what works in international development. The article, by Martin Ravallion, was about the rise of the Randomistas in international development economics. Randomistas are those who promote much wider use of randomized trials to work out what works in international development. It is a good article, pointing out that randomized trials are not feasible for many important types of development intervention. The same debate is occurring in many sectors at the moment, and it has surfaced on and off in the evaluation field for many years. My take is that we need to develop some underlying principle which we can debate and generally agree on, so that we do not have to keep having this argument endlessly without seemingly making much progress. 
Continue reading

Reliability versus validity – read on, it's important!

Now that Easter is over (and the yard gate has been built to keep in the dog that my wife and the kids have their hearts set on getting), I’m back blogging. Today I want to talk about the difference between reliability and validity. It sounds technical, but read on – it’s really important in a lot of results and outcomes areas. In psychology, where I come from, they spend a lot of time drumming this distinction into you. Reliability is whether measurements taken at different times and by different people will give you the same result. Validity is whether you are measuring the right thing. Continue reading
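
The distinction is easy to see in a small simulation. The sketch below is my own minimal illustration, not from the post; the variable names and noise levels are assumptions chosen to make the contrast obvious. One measure is highly reliable but invalid (it precisely and repeatably tracks the wrong quantity); the other is valid but less reliable (a noisy measure of the right quantity). Reliability is estimated as the test-retest correlation between two measurement occasions; validity as the correlation with the quantity we actually care about.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000

true_score = rng.normal(size=n)    # the thing we want to measure
wrong_thing = rng.normal(size=n)   # an unrelated quantity

# Reliable but not valid: two occasions of a precise measure of the wrong thing.
reliable_invalid_t1 = wrong_thing + rng.normal(scale=0.1, size=n)
reliable_invalid_t2 = wrong_thing + rng.normal(scale=0.1, size=n)

# Valid but less reliable: two occasions of a noisy measure of the right thing.
valid_unreliable_t1 = true_score + rng.normal(scale=1.0, size=n)
valid_unreliable_t2 = true_score + rng.normal(scale=1.0, size=n)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

# Reliability: agreement between repeated measurements of the same thing.
print("reliability (wrong thing):", corr(reliable_invalid_t1, reliable_invalid_t2))  # high, ~0.99
print("reliability (right thing):", corr(valid_unreliable_t1, valid_unreliable_t2))  # moderate, ~0.5

# Validity: agreement with what we actually want to measure.
print("validity (wrong thing):", corr(reliable_invalid_t1, true_score))  # near 0
print("validity (right thing):", corr(valid_unreliable_t1, true_score))  # ~0.7
```

The trap for performance measurement is the first case: an indicator can agree with itself perfectly every time it is collected and still tell you nothing about the outcome that matters.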

What exactly is 'best practice'?

Identifying and communicating best practice is widely recommended in many sectors and disciplines. But as I’ve sagely recommended in a serious voice, ‘I think that we should use an approach based on identifying and implementing best practice here’, I’ve sometimes wondered exactly what best practice is. I think that doing it is often a good idea, and I can work out how to identify it and share it (I will blog about that tomorrow), but what I’m not clear on is exactly how we define ‘best’ in the term ‘best practice’. It’s not clear whether best practice consists of: 1) practices that practitioners, from their own experience, believe to be feasible and ‘useful’ to implement; or 2) practices which have been proven to improve high-level outcomes (through making a strong outcome/impact evaluation claim of some sort, such as is made using some of the types of designs listed here). Continue reading

Extraordinary circumstances and Dick Cheney's 'stuff happens'

In my last blog posting I commented on Jon Stewart’s critique of ex-Vice President Dick Cheney’s claim that the Bush administration should not be held accountable for the U.S. economic meltdown because a number of things happened during its term which affected the economy. The Vice President summarized this by saying that ‘stuff happens’, and this ‘stuff’ unexpectedly blew their budget. The ‘stuff’ included the wars in Afghanistan and Iraq, and Hurricane Katrina. In technical outcomes theory terms, the Vice President was mounting an ‘extraordinary factors’ argument to reduce his administration’s accountability for the economic meltdown. Continue reading