Seamlessly moving from evaluation to strategy and back again

I’m currently in a discussion on the American Evaluation Association’s LinkedIn page about the relationship between monitoring, evaluation and strategic planning. While different consultants may be involved in different aspects of these for a client, from the client’s point of view they’re all just parts of the organization’s work which somehow need to be integrated and aligned.

When working with clients, it really helps to have an approach which lets you move, for instance, from monitoring and evaluation planning back to strategic planning. You can then follow whatever the organization’s focus is at any given moment. From the client’s point of view, it means that monitoring, evaluation and the rest are seamlessly aligned with strategic planning and other organizational functions.

For instance, while working with a client yesterday using our approach and software, we were building a DoView Visual M&E Plan with them (http://doview.com/plan/evaluation.html). These plans are based on a DoView Visual Outcomes Model (http://doview.com/plan/draw.html). The client then said, ‘it’s great what we’ve just done about measurement, but we also need to work out what we’re going to say to our funders about what we want to do next – i.e. our forward strategy’.

So we immediately and seamlessly moved to this task within the same meeting. We took the DoView Visual Outcomes Model we had already built with them for monitoring and evaluation planning purposes and went through it, marking up their priorities for future action. The next step will be to map their planned projects onto the DoView and check for ‘line-of-sight’ alignment between their priorities and their planned actions (see http://doview.com/plan).
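To make the ‘line-of-sight’ idea concrete: if you think of an outcomes model as a graph in which each step contributes to one or more higher-level outcomes, checking alignment amounts to asking whether every priority outcome has at least one planned project mapped to a step that flows up to it. Here is a minimal sketch in Python of what such a check might look like. The step names, projects and data structures are all invented for illustration – DoView is a visual tool, and this is not its actual file format.

```python
# Hypothetical sketch of a 'line-of-sight' alignment check.
# An outcomes model is represented as a directed graph: each step
# points to the higher-level outcome(s) it contributes to.
# None of this reflects DoView's actual file format.

contributes_to = {
    "train staff": ["services delivered well"],
    "new IT system": ["services delivered well"],
    "services delivered well": ["client wellbeing improved"],
}

priorities = {"client wellbeing improved"}   # steps marked up as priorities
projects = {                                 # planned projects mapped to steps
    "Project A": "train staff",
    "Project B": "new IT system",
}

def reaches(step, target, graph):
    """True if `step` contributes (directly or indirectly) to `target`."""
    if step == target:
        return True
    return any(reaches(parent, target, graph)
               for parent in graph.get(step, []))

# A priority has 'line of sight' if at least one planned project
# maps onto a step that flows up to it.
for priority in priorities:
    aligned = [name for name, step in projects.items()
               if reaches(step, priority, contributes_to)]
    status = "aligned" if aligned else "NO project supports this priority"
    print(f"{priority}: {status} {aligned}")
```

Any priority that prints with no supporting projects is a gap between the stated strategy and the planned actions – exactly what the mark-up-and-map exercise is meant to surface.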

It’s great to have the flexibility to move in any direction along the strategy – priority setting – project alignment – indicator monitoring – evaluation – outcomes-focused contracting spectrum, and to have a tool and approach that lets you immediately go wherever the client wants you to go. This is achieved by using one visual model (a DoView Visual Outcomes Model drawn according to the 13 rules for drawing DoViews) to underpin all of these activities (http://doview.com/plan/draw.html).

Paul Duignan, PhD. OutcomesBlog.org, Twitter.com/paulduignan, OutcomesCentral.org, DoView.com.

Putting the Planning back into M&E – PME or PM&E, what’s the acronym going to be?

In a posting on LinkedIn, Leslie Ayre-Jaschke talked about the growth of PME – or maybe it will end up being called PM&E, or something else. Regardless of the acronym, it’s the movement to put planning back into monitoring and evaluation. ‘Putting the P back into M&E’ was the subtitle of a workshop I ran in South Africa for UNFPA several years ago. I think it’s a concept that’s going to get a lot more traction over the next few years.

It’s consistent with what evaluators like Michael Patton, and many of us in the evaluation community, have been saying for years about the key role of formative evaluation – evaluation aimed at making sure that programs are optimized. And formative evaluation is all about making sure that programs are well planned.

The rationale for this approach is that it’s often pointless to evaluate a badly planned program. Evaluation resources are better spent on improving the program’s planning than on documenting that it failed to achieve its outcomes because the planning was poor.

The new PM&E movement is not just about evaluators and evaluation; it is much broader than that, taking in people from a range of disciplines. This new integrated approach needs an underlying theory which will appeal to all of the different disciplines involved – strategic planners, performance managers, evaluators, contract managers, policy analysts etc. The work I’ve been doing in outcomes theory has been designed to meet this need.

The purpose of outcomes theory is to provide an integrated conceptual basis for PM&E-type approaches. A common conceptual basis is needed if people across the different disciplines and sectors are to be able to share conceptual insights about how they identify, measure, attribute and hold parties to account for outcomes when doing planning, monitoring and evaluation. Good theory is needed to help them quickly sort out the type of conceptual confusion that currently characterizes much of the discussion of outcomes-related issues. As the famous social scientist Kurt Lewin said, ‘there’s nothing so practical as a good theory’.

This aspiration of outcomes theory is summarized in the diagram below, which shows how it’s a meso-level theory reaching across strategic planning, monitoring, evaluation and related disciplines.

[Diagram: outcomes theory as a meso-level theory spanning strategic planning, monitoring, evaluation and related disciplines]
(see http://www.outcomescentral.org/outcomestheory.html for more on this)

For people just working out in the field, who don’t need to know much theory, outcomes theory principles have been hard-wired into the DoView Visual Planning, Monitoring and Evaluation approach (http://doview.com/plan). Using the approach means that they will avoid many of the technical problems highlighted by outcomes theory.

Large-scale visual models of a program (drawn in the correct way, for instance as ‘DoViews’) provide the ideal foundation for the fully integrated approach to planning, monitoring and evaluation which many are now seeking (http://doview.com/plan/draw.html).

Does Monitoring and Evaluation (M&E) planning have to be so cumbersome and painful? Just finished my Bangkok conference presentation


I was invited to give a presentation to the 1st Pan Asia-Africa Monitoring and Evaluation (M&E) Forum – Results-Based Management & Evaluation (RBM&E) and Beyond: Increasing M&E Effectiveness – held in Bangkok. I’ve just finished my presentation, which was called ‘Anyone Else Think the Way We Do Our M&E Work is Too Cumbersome and Painful?’

I’ve had to review many Monitoring and Evaluation Plans in the past and I’ve generally found them to be long and tedious documents. I’ve also had to write them myself and realize that the tedium is not only on the part of the reader! It’s usually very hard to get a quick overview of what an M&E Plan is going to measure and of the evaluation questions that are going to be asked.

Normally, once the plan has been used to secure funding for the M&E work, it’s just put in a desk drawer, and other documentation is used to control the implementation of the M&E Plan and to make presentations on it.

In the presentation, I outlined the new DoView Visual M&E Planning approach. This approach takes the pain out of writing (and reading) M&E plans and creates major efficiencies.

It takes half the time to create an M&E plan; it’s entirely visually based, which makes it easy to see what is, and (just as important) what’s not, being measured; the same DoView file can be used to control the implementation of the M&E work; all presentations can be made using just the DoView M&E Plan (you don’t need to create additional PowerPoints); and you can, if you wish, fully integrate project strategic planning into M&E planning (the Holy Grail of putting the ‘P’ – ‘Planning’ – back into ‘M&E’).
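To illustrate the ‘seeing what is, and what’s not, being measured’ point: imagine attaching indicators and evaluation questions directly to the steps of an outcomes model, so that any step with nothing attached stands out immediately as a coverage gap. The sketch below does this in a few lines of Python. The steps, indicators and questions are invented for illustration, and this is in no way DoView’s actual file format – it simply shows why anchoring measurement to a single model makes gaps visible.

```python
# Hypothetical sketch: attaching indicators and evaluation questions
# directly to the steps of an outcomes model, so coverage gaps are
# visible at a glance. Illustrative only; not DoView's actual format.

steps = [
    "train staff",
    "services delivered well",
    "client wellbeing improved",
]

indicators = {
    "train staff": ["% staff completing training"],
    "client wellbeing improved": ["wellbeing survey score"],
}

evaluation_questions = {
    "services delivered well": ["Are services reaching the target group?"],
}

# Walk the model step by step and flag anything with no measurement
# or evaluation question attached to it.
for step in steps:
    measured = indicators.get(step, [])
    questions = evaluation_questions.get(step, [])
    flag = "" if (measured or questions) else "  <-- not measured"
    print(f"{step}: indicators={measured}, questions={questions}{flag}")
```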

The virtual presentation was in the form of three short videos (about 6–7 minutes each) and a Skype question-and-answer session afterwards.

Check out the three short videos of the presentation here. The first video describes why we should move away from the traditional approach, and the second and third videos show you how to use the new DoView paradigm. If you want the resource page on the DoView website which shows you how to build a DoView Visual M&E Plan and gives an example you can download, it’s here.

Paul Duignan, PhD. Blogs at OutcomesBlog.org and is at Twitter.com/PaulDuignan. You are welcome to participate in the DoView Community of Practice on LinkedIn. Download a DoView trial at DoView.com.

The importance of 'looking behind the numbers' in performance management systems

A colleague, Stan Capela, recently highlighted the importance of ‘looking behind the numbers’ in performance management systems, pointing out that, if this is not done, false conclusions can be drawn from such systems. I think most people would agree with this sentiment. The key issue for me is what is the most effective way of ‘looking behind the numbers’ when measuring the performance of people, projects or organizations. Continue reading

Moving past the debate about randomized experiments

A colleague, Bob Williams, recently drew attention to articles in the New Yorker about the use of randomized experiments, and particularly one from an economist advocating their widespread use in a range of program areas.

I’ve been involved in a number of seemingly endless discussions and presentations about the pros and cons of randomized experiments and the rise of what are being called the Randomistas – those advocating a much wider use of randomized experiments. In this post I want to talk about how we can move beyond these discussions. Continue reading

New How-To Guides on DoView Site – What's an outcomes (results) model?

I have not been blogging for a while as I’ve been caught up in preparing multiple resources on outcomes models and also actually developing many outcomes models for clients. I now have many great examples which I want to share with you in the coming months. It’s only now that a number of these projects are coming to a conclusion and clients are becoming willing to share them with others. So watch this space.

In the meantime, some new How-To Guides are starting to go up on the DoView site. The first is on What’s a DoView Outcomes (Results) Model and Why Should I Use One? This is in response to requests from DoView enthusiasts who want to be able to refer people to a quick article about what an outcomes model is and why people should use one for all of their project and organizational planning. Continue reading

Developing an M&E plan using a visual approach

On various lists I am on, I often see requests from people wanting to develop what is called an M&E plan. This terminology is often used in the international development area and refers to a Monitoring and Evaluation Plan. The way these requests are made makes me think that the way you should monitor and evaluate different projects varies a great deal. Continue reading

Getting outcomes creds and saving time!

Public sector organizations these days have two important imperatives: establishing that they are truly ‘results and outcomes-focused’ while also becoming more efficient in their internal organizational activity. The really good news in the outcomes area is that by using a central tool of outcomes work – an outcomes model (a particular type of visual model of all of the high-level outcomes the organization is seeking to achieve and the steps it is taking to do so) – organizations and programs can do both at the same time. Continue reading

Using an outcomes modeling approach to action research

Will get back to blogging on the Australasian Evaluation Society Conference when I get a moment (may not be for a few days). In the meantime, I had to prepare an article about using outcomes modeling as a basic tool within an action research approach. Because outcomes modeling – developing visual outcomes models (a type of logic model or theory of change model) according to the outcomes theory set of standards for building such models – is a generic process, such models can be used for a wide range of purposes. They can, for instance, be used within an action research approach. Action research attempts to work in cycles of research/action/research. It has the great virtue of ensuring that research is connected to action and action is connected to research.
Continue reading

The Taxi Driver and 'why don't you just measure outcomes' – on the way to the AES conference

On my way to the Australasian Evaluation Society Conference in Canberra, my taxi driver in from the airport asked me what I do. When I explained that I ‘measure whether programs, often government programs, work or not so the taxpayer gets value for money’, he was right into the concept – although I think he thought I was overcomplicating things a little. He said: ‘shouldn’t it just be a matter of using statistics to measure if things about a program are getting better or not?’ What he was talking about was one aspect of monitoring and evaluation – an important piece – but just one of the Five Building Blocks I see lying behind all monitoring and evaluation systems (outcomes systems). Continue reading