Seamlessly moving from evaluation to strategy and back again

I’m currently in a discussion on the American Evaluation Association’s LinkedIn page about the relationship between monitoring, evaluation and strategic planning. While different consultants may be involved in doing different aspects of these for a client, from a client’s point of view they’re all just parts of their organization’s work which they somehow need to integrate and align.

When working with clients, it really helps to have an approach which lets you move from doing monitoring and evaluation planning, for instance, back to strategic planning. You can then simply track the organization’s focus wherever it is at any moment. From the client’s point of view, it means that monitoring, evaluation, etc. are seamlessly aligned with strategic planning and other organizational functions.

For instance, working with a client yesterday, using our approach and software, we were building a DoView Visual M&E plan with them (http://doview.com/plan/evaluation.html). These plans are based on a DoView Visual Outcomes Model (http://doview.com/plan/draw.html). The client then said, ‘it’s great what we’ve just done about measurement, but we also need to work out what we’re going to say to our funders about what we want to do next – i.e. our forward strategy’.

So we immediately and seamlessly moved to doing this task for them within the same meeting. We just took the DoView Visual Outcomes Model we had already built with them for monitoring and evaluation planning purposes and went through it, marking up their priorities for future action. The next step will be to map their planned projects onto the DoView and check for ‘line-of-sight’ alignment between their priorities and their planned actions. (see http://doview.com/plan).
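(For readers who like to think of this in data-structure terms rather than pictures, here is a minimal sketch of the kind of ‘line-of-sight’ check described above. This is not DoView code and not part of the DoView product – the steps, projects and the simple coverage rule are all invented purely for illustration.)

```python
# Hypothetical sketch of a 'line-of-sight' alignment check.
# Not DoView code; steps, projects and priorities are invented for illustration.

# An outcomes model as a set of steps, some of which are marked as priorities.
outcomes_model = {
    "improved client wellbeing": {"priority": True},
    "clients access services earlier": {"priority": True},
    "staff trained in new protocol": {"priority": False},
    "community aware of the program": {"priority": True},
}

# Planned projects, each mapped onto the steps in the model it is meant to influence.
projects = {
    "Early-intervention pilot": ["clients access services earlier"],
    "Staff training rollout": ["staff trained in new protocol"],
}

def line_of_sight_check(model, projects):
    """Report priority steps that no planned project maps onto."""
    covered = {step for steps in projects.values() for step in steps}
    return [step for step, info in model.items()
            if info["priority"] and step not in covered]

if __name__ == "__main__":
    for gap in line_of_sight_check(outcomes_model, projects):
        print(f"Priority not addressed by any planned project: {gap}")
```

Running it flags the priority steps no planned project maps onto – the same gaps that a visual check on the model itself makes obvious.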

It’s great to have the flexibility to move in any direction along the strategy – priority setting – project alignment – indicator monitoring – evaluation – outcomes-focused contracting spectrum, and to have a tool and approach that lets you immediately go wherever the client wants you to go. This is achieved by using the one visual model (a DoView Visual Outcomes Model drawn according to the 13 rules for drawing DoViews) to underpin all of these activities (http://doview.com/plan/draw.html).

Paul Duignan, PhD OutcomesBlog.org, Twitter.com/paulduignan, OutcomesCentral.org, DoView.com.

Putting the Planning back into M&E – PME or PM&E, what’s the acronym going to be?

In a posting on LinkedIn, Leslie Ayre-Jaschke talked about the growth of PME – or maybe it will end up being called PM&E, or something else. Regardless of the acronym, it’s the movement to put planning back into monitoring and evaluation. ‘Putting the P back into M&E’ was the subtitle of a workshop I ran in South Africa for UNFPA several years ago. I think it’s a concept that’s going to get a lot more traction over the next few years.

It’s consistent with what evaluators like Michael Patton, and many of us in the evaluation community, have been talking about for years. We’ve been talking up the key role of formative evaluation – evaluation aimed at making sure that programs are optimized. And formative evaluation is all about making sure that programs are well planned.

The point of this approach within evaluation is that it’s often pointless to evaluate a badly planned program. Evaluation resources are better spent on improving the program’s planning than on documenting that it will often fail to achieve its outcomes because the planning was poor.

The new PM&E movement is not just about evaluators and evaluation; it is much broader than that, taking in people from a range of disciplines. The integrated approach which is emerging needs an underlying theory that will appeal to all of the different disciplines involved – strategic planners, performance managers, evaluators, contract managers, policy analysts, etc. The work I’ve been doing in outcomes theory has been designed to meet this need.

The purpose of outcomes theory is to provide an integrated conceptual basis for PM&E-type approaches. A common conceptual basis is needed if people across the different disciplines and sectors are going to be able to share conceptual insights about how they identify, measure, attribute and hold parties to account for outcomes when doing planning, monitoring and evaluation. Good theory is needed to help them quickly sort out the type of conceptual confusion that currently characterizes much of the discussion of outcomes-related issues. As the famous social scientist Kurt Lewin said, ‘there is nothing so practical as a good theory’.

This aspiration of outcomes theory is summarized in the diagram below showing how it’s a meso-level theory reaching across strategic planning, monitoring, evaluation etc.

[Diagram: outcomes theory as a meso-level theory reaching across strategic planning, monitoring, evaluation and related disciplines]
(see http://www.outcomescentral.org/outcomestheory.html for more on this)

For people working out in the field who don’t need to know much theory, outcomes theory principles have been hard-wired into the DoView Visual Planning, Monitoring and Evaluation approach (http://doview.com/plan). Using the approach means that they will avoid many of the technical problems highlighted by outcomes theory.

Large-scale visual models of a program (drawn in the correct way, for instance as ‘DoViews’) provide the ideal foundation for the new fully integrated approach to planning, monitoring and evaluation which many are now seeking. http://doview.com/plan/draw.html.

Moving past the debate about randomized experiments

A colleague, Bob Williams, recently drew attention to articles in the New Yorker about the use of randomized experiments, particularly one by an economist advocating their widespread use across a range of program areas.

I’ve been involved in a number of seemingly endless discussions and presentations about the pros and cons of randomized experiments and the rise of what are being called the Randomistas – those advocating much wider use of randomized experiments. In this post I want to talk about how we can move beyond these discussions. Continue reading

Developing an M&E plan using a visual approach

On various lists I am on, I often see requests from people wanting to develop what is called an M&E plan. This terminology is often used in the international development area and refers to a Monitoring and Evaluation Plan. The way these requests are made makes me think about how greatly the way you should monitor and evaluate different projects varies. Continue reading

Can an exhaustive list of impact evaluation designs be developed, or is my mission on this futile?

I have set out on a mission, as part of outcomes theory, to attempt to develop an exhaustive list of impact/outcome evaluation designs – evaluation designs which make a claim that changes in high-level outcomes can be attributed to a particular intervention. If we could pull off developing such a list that most people are happy with, it would be very powerful. First, it could be used in evaluation planning to work out whether all of the possible impact evaluation designs had been assessed for their appropriateness, feasibility and/or affordability. At the moment I think that almost every evaluation planner walks around wondering if there is some sort of impact evaluation design they have not considered.
Continue reading
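To make that planning use concrete, here is a rough sketch of how such a list might be used once it exists. The design names below are a few well-known examples chosen purely for illustration – they are emphatically not the exhaustive list this post is about – and the assessment fields are my own assumption about how a planner might record their thinking.

```python
# Illustrative only: a planning checklist built over a (hypothetical) master list
# of impact/outcome evaluation designs. The designs named here are examples,
# not the exhaustive list discussed in the post.

candidate_designs = [
    "randomized controlled experiment",
    "regression discontinuity",
    "difference-in-differences with comparison group",
    "interrupted time series",
    "expert judgement of attribution",
]

# For each design the planner has looked at, record whether it has been assessed
# for appropriateness, feasibility and affordability in this particular evaluation.
assessments = {
    "randomized controlled experiment": {"appropriate": False, "feasible": False, "affordable": False},
    "interrupted time series": {"appropriate": True, "feasible": True, "affordable": True},
}

def unassessed_designs(designs, assessments):
    """Designs on the master list that the evaluation plan has not yet considered."""
    return [d for d in designs if d not in assessments]

print("Still to consider:", unassessed_designs(candidate_designs, assessments))
```

The value of an agreed exhaustive list is exactly this: the ‘still to consider’ set would eventually come back empty, and the planner could stop wondering what they might have missed.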

Untangling evaluation terms – discussing evaluation 'types' with clients often more useful than evaluation 'approaches'

I have just put up an outcomes theory article based on a book chapter I wrote some time ago dividing the terminology used in evaluation into five groups of terms about five different ‘aspects’ of evaluation. These aspects are: evaluation approaches; evaluation types (based on the purpose of the evaluation); evaluation methods; evaluation information analysis techniques; and evaluation designs. Approaches tend to combine a range of different elements, including general stances toward evaluation, philosophy-of-science views and, for instance, quasi-political perspectives on the relationship between empowered and disempowered groups. Evaluation approaches are often not mutually exclusive from a conceptual point of view. They include such things as Scriven’s Goal Free Evaluation, Patton’s Utilization Focused Evaluation and Fetterman’s Empowerment Evaluation. While I find these very interesting from the point of view of stimulating my thinking about evaluation, I often (but not always) do not find them very useful when talking to a client about a specific evaluation.
Continue reading

Formative evaluation versus impact/outcome evaluation

In response to a posting on one of my outcomes theory articles by Marcus Pilgrim, who ran the recent YEN Evaluation Clinic in Damascus, I have worked up an article on the difference between formative, process and impact/outcome evaluation. As Marcus points out in his posting, the term formative (or developmental) evaluation is not widely known in all sectors. Formative evaluation is directed at optimizing program implementation. Process evaluation attempts to describe the course and context of a program. Impact/outcome evaluation looks at the intended and unintended, positive and negative outcomes of a program and whether they can be attributed to the program. Continue reading

Damascus – YEN Evaluation Clinic

Apologies for not blogging for a while; I’ve been involved in considerable travel and lots of other work – but that’s really no excuse. Maybe I just got all blogged out. What with Knolling, blogging here and Twittering, maybe it all just got too much. Anyway, I’m back in the saddle now, as they say! Last month I was fortunate to be an evaluation expert at the YEN Evaluation Clinic in Damascus. YEN is the Youth Employment Network – an International Labour Organization, World Bank and United Nations collaboration. A site for the evaluation clinic has been set up at yenclinic.groupsite.co.

The Evaluation Clinic took two examples of youth employment programs and worked through designing an impact evaluation for them. It was a fascinating experience. I’ll blog about what it was like being the sole psychologist evaluator working with five economist evaluation specialists (from the ILO and the World Bank) another day! Continue reading

Tracking jobs created under the U.S. Recovery Act – when should the attempt at measurement be abandoned?

The default expectation in at least some sections of the U.S. public sector seems to be that it should always be feasible and affordable both to measure and to attribute the results of interventions. Here I am using the term attribution to mean being able to actually demonstrate that a change in an outcome has been caused by a particular intervention rather than being the result of other factors (see here for more on attribution). The recent U.S. Recovery Act is a case in point. While it’s reasonable to start from the position that you should routinely assess the possibility of measuring and attributing changes in outcomes of particular interventions, you can’t start by just assuming that it will always be feasible or affordable to do this. Clinging to such an assumption where it is untrue can result in you either measuring an outcome when the data you are collecting is not accurate, or acting as though what you are measuring (even if it is an accurate measurement of a change in an outcome) is demonstrably attributable to a particular program, when in fact it may not be. Continue reading
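The decision logic in the paragraph above can be sketched roughly as follows. This is just my hedged reading of the argument – the questions, wording and recommendations are illustrative assumptions on my part, not anything drawn from the Recovery Act reporting guidance.

```python
# A rough sketch of the decision implied above: before promising outcome figures,
# check whether measurement and attribution are actually feasible and affordable.
# The questions and recommendations are illustrative assumptions.

def measurement_recommendation(data_likely_accurate: bool,
                               attribution_design_feasible: bool,
                               cost_is_affordable: bool) -> str:
    if not cost_is_affordable:
        return "Abandon routine measurement; consider a one-off evaluation instead."
    if not data_likely_accurate:
        return "Do not report the numbers as outcomes; the data would mislead."
    if not attribution_design_feasible:
        return "Report the measured change, but do not claim it was caused by the program."
    return "Measure, and report the change as attributable to the program."

print(measurement_recommendation(data_likely_accurate=True,
                                 attribution_design_feasible=False,
                                 cost_is_affordable=True))
```

The point is simply that the third branch exists: measurement without attribution is a legitimate, and sometimes the only honest, position to report from.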

Over-simplifications in outcomes, monitoring and evaluation

An evaluation colleague, Patricia Rogers, commented on an earlier blog posting of mine in which I claimed that what I am trying to do is to make outcomes, monitoring and evaluation work ‘easier’. She challenged me on that idea and pointed out that often what we have to deal with is over-simplification in the way people work with outcomes, monitoring and evaluation. Her comment inspired me to work up an article on over-simplification in outcomes and evaluation, and once underway I realized just how many different ways people approach outcomes, monitoring and evaluation with over-simplified approaches, and the problems these cause. Continue reading