The importance of 'looking behind the numbers' in performance management systems

My colleague Stan Capela recently highlighted the importance of ‘looking behind the numbers’ in performance management systems, pointing out that, if this is not done, false conclusions can be drawn from such systems. I think most people would agree with this sentiment. The key issue for me is what is the most effective way of ‘looking behind the numbers’ when measuring the performance of people, projects, or organizations. Continue reading

Moving past the debate about randomized experiments

My colleague Bob Williams recently drew attention to articles in the New Yorker about the use of randomized experiments, particularly one from an economist advocating their widespread use across a range of program areas.

I’ve been involved in a number of seemingly endless discussions and presentations about the pros and cons of randomized experiments and the rise of what are being called the Randomistas – those advocating a much wider use of randomized experiments. In this post I want to talk about how we can move beyond these discussions. Continue reading

New How-To Guides on DoView Site – What's an Outcomes (Results) Model?

I have not been blogging for a while as I’ve been caught up in preparing multiple resources on outcomes models, and in actually developing many outcomes models for clients. I now have many great examples which I want to share with you in the coming months. It’s only now that a number of these projects are coming to a conclusion and clients are becoming willing to share them with others. So watch this space.

In the meantime, new How-To Guides are starting to go up on the DoView site. The first is What’s a DoView Outcomes (Results) Model and Why Should I Use One? It responds to requests from DoView enthusiasts who want to be able to refer people to a quick article on what an outcomes model is and why they should use one for all of their project and organizational planning. Continue reading

Theory of Change Versus Theory of Action

What’s the difference between a Theory of Change and a Theory of Action? I’m just clarifying my thoughts on this issue and how it relates to my work on how we conceptualize outcomes models (logic models) within outcomes theory. In summary, at the moment – apart from a Theory of Action just being an outcomes model drawn at a lower level – I can’t see a major difference. However, I’m happy to be contradicted on this and will change my view if there are convincing arguments for making the distinction. My current thinking is set out below. Continue reading

The evolution of the logic model

I’ve just posted an article on the evolution of the logic model within evaluation. Over the last couple of decades, increasing numbers of evaluators have started using logic models. For those not familiar with them – logic models are simply tabular or visual representations of all of the lower-level steps needed to achieve high-level outcomes for a program, organization or other intervention. They go by different names, for instance: program logics, intervention logics, results maps, theories of change, program theories, results hierarchies, strategy maps, end-means diagrams, etc. A traditional way of drawing logic models has evolved – known as the inputs, outputs, intermediate outcomes, final outcomes structured logic model – which often attempts to restrict a logic model to a single page. However, many evaluators are now breaking away from the constraints of this traditional format and exploring various alternative ways of representing logic models. Continue reading
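For readers who like something concrete, the traditional inputs, outputs, intermediate outcomes, final outcomes structure could be sketched as a simple data structure. This is purely a hypothetical illustration – the program, step names, and wording below are invented for the example, and real logic models are usually drawn visually rather than coded:

```python
# Hypothetical sketch of a traditional structured logic model:
# each level lists steps that are meant to contribute to the level above it.
logic_model = {
    "inputs": ["funding", "trained staff"],
    "outputs": ["workshops delivered", "materials distributed"],
    "intermediate_outcomes": ["participants adopt new practices"],
    "final_outcomes": ["improved community wellbeing"],
}

# Print the model from the bottom up, the order in which it is usually read.
for level in ["inputs", "outputs", "intermediate_outcomes", "final_outcomes"]:
    print(f"{level}: {', '.join(logic_model[level])}")
```

The single-page constraint of the traditional format corresponds to keeping each of these lists very short; the newer approaches the article discusses effectively allow each step to expand into its own sub-model.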

How many evaluators does it take to change a light bulb?

In response to a series of ‘How many evaluators does it take to change a light bulb?’ jokes on the evaluators’ list EVALTALK, I whipped up an outcomes model (logic model) for a Changing Light Bulbs Project (some days one does wonder if this is what evaluators do for fun – it must be some sort of illness!).

Anyway, here it is:

Paul Duignan, PhD. (Follow me on my Outcomes Blog; Twitter; or via my E-Newsletter).

Developing an M&E plan using a visual approach

On various lists I am on, I often see requests from people wanting to develop what is called an M&E plan – a Monitoring and Evaluation Plan, terminology often used in the international development area. The way these requests are made makes me think that the way you should monitor and evaluate different projects varies a great deal. Continue reading

Getting outcomes creds and saving time!

Public sector organizations these days have two important imperatives: establishing that they are truly ‘results and outcomes-focused’ while also becoming more efficient in their internal organizational activity. The really good news in the outcomes area is that, by using a central tool of outcomes work – outcomes models (a particular type of visual model of all of the high-level outcomes an organization is seeking to achieve and the steps it is taking to do so) – organizations and programs can do both at the same time. Continue reading

Using an outcomes modeling approach to action research

I’ll get back to blogging on the Australasian Evaluation Society Conference when I get a moment (may not be for a few days). In the meantime, I had to prepare an article about using outcomes modeling as a basic tool within an action research approach. Because outcomes modeling – developing visual outcomes models (a type of logic model or theory of change model) according to the outcomes theory standards for building such models – is a generic process, such models can be used for a wide range of purposes. They can, for instance, be used within an action research approach. Action research attempts to work in cycles of research/action/research. It has the great virtue of ensuring that research is connected to action and action is connected to research. Continue reading

The Taxi Driver and 'why don't you just measure outcomes' – on the way to AES conference

On my way to the Australasian Evaluation Society Conference in Canberra, my taxi driver in from the airport asked me what I do. When I explained that I ‘measure whether programs, often government programs, work or not, so the taxpayer gets value for money’, he was right into the concept, although I think he thought I was overcomplicating things a little. He said: ‘Shouldn’t it just be a matter of using statistics to measure whether things about a program are getting better or not?’ What he was talking about was one aspect of monitoring and evaluation – an important piece, but just one of the Five Building Blocks I see lying behind all monitoring and evaluation systems (outcomes systems). Continue reading