More fun with the accountants – this time Integrated Reporting Standards

Today I’ve been at another iteration of my gig with the road-show accountants’ professional development conference in another city (see my last blog on what my presentation has been about – Key Performance Indicators (KPIs)). I’ll try to blog later in more detail about KPIs for those who are obsessed with their technical characteristics like I am.

In the meantime, another really engrossing topic (for people like me) – Integrated Reporting Standards. I heard a presentation by Mark Hucklesby on the newly developed Integrated Reporting Standards which are currently out for consultation until July.

A quick aside on standards. One of the great things about accountants is that they’re obsessed with standard setting. They have standards for everything and technical committees meeting all the time figuring out new standards.

Standards are great because they bring about consistency. They also get the best minds in the business focused on the technical trade-offs which come up in reporting and how these are best dealt with.

In the broader area of outcomes systems – the way we identify and measure outcomes of any type in any sector –  I really wish there was a parallel structure to the various official and unofficial standard setting that goes on in accounting. Instead of the order the accountants have in their world, our area of broader outcomes reporting is really like the Wild West. Of course the accountants have had about 500 years to get their area sorted while we’ve only been focusing on outcomes in the modern sense of the term for maybe 30 years or so.

The Integrated Reporting Standards are a new initiative which can be seen as a sort of reinvented Triple Bottom Line (economic, social and environmental). More information on the initiative at

They have come up with a set of six ‘capitals’:

  • Financial
  • Manufactured
  • Intellectual
  • Human
  • Social and relationship
  • Natural

I think that calling them ‘capitals’ is maybe a bit obscure for the average person. I would see them as ‘outcome areas’ or something. However, I can see how they ended up using the term capital. They wanted to have the concept that companies take aspects of these six capitals and add value to them. The concept is set out in the diagram below from their draft standards document.

I raised two issues with Mark in the discussion time. The first was whether there had been any consideration of distinguishing between controllable and not-necessarily controllable indicators in the integrated reporting framework. This is a crucial distinction I draw in my outcomes theory work.

The purpose of integrated reporting is to give investors and others a crystal clear picture of the risks and opportunities a company is involved in. Confusing controllable with not-necessarily controllable indicators lies at the heart of many of the problems arising from misunderstanding of the true underlying risk profile one is exposed to in both the private and public sector. Mark agreed with the importance of the controllability issue. My second point was whether the standards would allow for a range of reporting approaches. He said that the standards did not stipulate any one way of actually presenting an integrated report. This is good news for someone like myself who thinks that the only way of reporting these days is to use a visual approach because of its clear advantages.
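To make the controllability distinction concrete, here is a minimal sketch of how a reporting framework could tag each indicator by controllability so the two kinds are never mixed together in one list. The indicator names, and the idea of encoding this as a data structure at all, are my own invention for illustration; the draft standards don't prescribe anything like this.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    capital: str        # one of the six 'capitals'
    controllable: bool  # can the organization directly control this number?

# Hypothetical indicators, for illustration only
indicators = [
    Indicator("Staff trained in safety procedures", "Human", True),
    Indicator("Regional employment rate", "Social and relationship", False),
    Indicator("Factory CO2 emissions", "Natural", True),
    Indicator("River water quality downstream", "Natural", False),
]

# Report the two groups separately, so a reader never mistakes a
# not-necessarily-controllable trend for organizational performance.
for label, flag in [("Controllable", True), ("Not-necessarily controllable", False)]:
    print(label + ":")
    for ind in indicators:
        if ind.controllable == flag:
            print(f"  [{ind.capital}] {ind.name}")
```

The design point is simply that controllability is carried as an explicit attribute of each indicator rather than being left implicit, which is what allows the reporting layer to keep the risk picture honest.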

Anyway, sometime when I’m wanting a little light reading I’ll delve into the standards and report back in this blog what’s interesting from the point of view of those of us interested in outcomes theory, measurement and strategy.


Find out more at http://About.Me/PaulDuignan

Accountants, KPIs and dry topics

I’ve just got back from doing a presentation to an accountants’ professional development conference. I’m on a gig where I do several of the same presentation in different cities. The conference organizers gave the presentation the rather mind-numbing title of Using KPI* Reports to Enhance Organizational Performance.

Someone once told me that the way I get on in life is that I’m prepared to spend my time thinking about things (he was actually referring to analyzing KPI lists at the time) which most normal human beings would find painfully boring.

Now, the great thing about accountants is that they’re a bit like that too –  you can’t scare them with a dry little title like the one above, so I had plenty of people turn up to my session.

The fact is that KPI lists (in various forms) are the central mechanism by which we translate our ideas about what should happen in the world into what actually happens on the ground. They’re a major determinant of the way the world turns out in the end. The accountants are right on the money with this one: sparing 50 minutes or so to talk about how to get KPI lists right is time well spent.

I started off my presentation by critiquing two of the most popular sayings in the KPI world – ‘what gets measured is what gets done’ and ‘organizational objectives should always be SMART – Specific, Measurable, Achievable, Relevant and Timebound’.

The problem with the first is that it results in: ‘what doesn’t get measured ends up being absent from strategic discussions’. And the second (SMART) can lead to a nasty organizational problem – PM, or Premature Measurement: moving to measurement too fast, before you’ve defined your strategy.

The take away points from my presentation were: 1) we need to identify our strategy before we focus just on measurement; 2) the best way to talk about strategy is to do it visually; and, 3) once we’ve developed a visual version of our strategy, we can then simply map our indicators (KPIs) directly back onto this map. This ensures that we have alignment between what we’re measuring and the priorities we’re trying to achieve.
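The third take-away point – mapping KPIs directly back onto a visual model of the strategy – amounts to a simple coverage check, which can be sketched as follows. The step and KPI names below are invented for illustration; DoView itself is a visual tool, not a code library, and this is just one way of expressing the alignment idea.

```python
# Steps in a (hypothetical) visual strategy model.
strategy_steps = [
    "Recruit qualified staff",
    "Deliver training workshops",
    "Participants apply new skills",
    "Improved service quality",
]

# KPIs mapped onto the step of the model each one measures.
kpi_map = {
    "Vacancy fill rate": "Recruit qualified staff",
    "Workshops delivered per quarter": "Deliver training workshops",
    "Client satisfaction score": "Improved service quality",
}

# Alignment check: which steps have no KPI, and which KPIs point nowhere?
measured = set(kpi_map.values())
unmeasured_steps = [s for s in strategy_steps if s not in measured]
orphan_kpis = [k for k, s in kpi_map.items() if s not in strategy_steps]

print("Steps with no KPI:", unmeasured_steps)
print("KPIs not on the model:", orphan_kpis)
```

Run on this toy data, the check surfaces ‘Participants apply new skills’ as a step nobody is measuring – exactly the kind of gap that a measurement-first (SMART-first) approach tends to hide.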

One of the participants asked a key question: ‘what is the best way of working out which indicators, out of the mass of indicators we might have, we should track?’

The simple answer is that the indicators we select should focus on our priorities. Working the way I suggested in my presentation is an ideal way of ensuring this. However, there are some very interesting complexities around the question of indicator selection which I’ll try to get time to blog about in a few days’ time.

I’ll post the KPI presentation after I tweak it and do the next presentation.

*Key Performance Indicators, if any of the uninitiated are reading this blog.

What could the politician at the party claim credit for?

I was at a party the other night talking with a group of people about what I do in the outcomes area. The normal reaction I get when I tell them that I’m a psychologist is straightforward. However, when I tell them that I’m an outcomes strategist I usually get the following reaction – they look at me, gesticulate, roll their eyes and say, ‘Oh, it’s so hard to prove that what you did changed high-level outcomes’. Of course, this is what happens in the capital city where I work because just about everyone here is either a policy wonk, or in a relationship with one. And we all know that the whole international wonkery is obsessed with measuring outcomes.

In the rest of the country I usually get blank stares and people tend to quickly move on to the next guest to talk about something that makes sense. But sometimes I get people who just don’t perceive that there’s any problem to be solved in measuring outcomes. It’s always a little disturbing to have someone implying that there’s no real basis for a whole area of work you’re involved in. I got this some time ago from a taxi driver on the way to an evaluation conference. I got it again the other night at the party.

A guest, who I later found out was a local government politician, heard me talking about being an outcomes strategist. He launched into something along the lines of: ‘I would have thought it was very easy, just measure the dollars’. Initially presuming he worked in the private sector, I gave my usual spiel about the private sector and outcomes. In comparison to the public sector, it has the huge advantage that its outcomes are always measured (well, the ones that people mostly focus on) and the measure is a common one (the dollar) which is used right across the entire sector, regardless of the type of work people are involved in. There’s also some more complicated stuff about the sector tending to have a more relaxed attitude towards attribution (proving exactly what caused what) than the public sector. I’ll blog about that second part sometime in the future.

When I introduced the point that non-financial outcomes, rather than financial outcomes, are at the heart of what’s done in the public sector, he then said something like: ‘well you just measure all that in surveys’. He thought that the whole problem of outcomes was simply solved by tracking outcomes over time. I pointed out that whether things were getting better in the district where he was in charge  said nothing about whether this was caused by his work. Things might be getting better in every city in the world because of general positive trends affecting everyone.

Up until this point, in my view, he was simply committing the basic outcomes fallacy of thinking that measuring a not-necessarily controllable indicator somehow shows that one has improved it (see Duignan’s Six Types of Evidence That a Program Works diagram).

When I told him as politely as I could that I thought he was not actually proving anything about what he was personally making happen, he introduced a more sophisticated argument which cannot be dismissed so easily. This argument was that he ‘hears from the people all the time’ and that he gets feedback from the different encounters he has with the people who live in his district. He also added that ultimately they would tell him if he wasn’t doing a good job.

Our conversation got interrupted about this time so I didn’t  get to continue talking to him. However, thinking in formal outcomes theory terms, in this second part of the conversation, he could have been making two somewhat different arguments. One is that his immersion in the nitty-gritty of working with the people in his district brought him into direct contact with the lower-levels of the outcomes model he was seeking to achieve (the model of the steps needed to achieve high-level outcomes – which can be operationalized in the form of a visual DoView). Being able to directly ‘see’ that the lower-level steps were being put in place (e.g. new environmental regulations), and having a sound logic of the intervention at hand (environmental regulation leading to a better environment), plus a measure that environmental issues were improving,  it was reasonable for him to claim that he had established he was having an impact. In Duignan’s Types of Impact Evaluation Designs, this is the seventh type of design: Intervention Logic (Program Theory/Theory of Change) Based Designs. It can be accepted as a credible impact design by stakeholders in some situations. Of course there’s always the question of who the observer is who is making the claim that lower-level steps have been achieved. But presumably we could get some independent assessment as to whether the lower-level steps were, as he was claiming, happening, so the logic of the design makes theoretical sense as a way of attempting to prove impact.

An alternative argument he could have been mounting, if he wanted to be very pragmatic, is that the fact that he keeps getting re-elected is what ‘hearing from the people all the time’ means in practice. Looking at it this way, he would be defining his outcome not as changing things in his community (which he may well wish to do) but just as a matter of getting re-elected. If this is the case, then the fact that he is regularly re-elected means that, by definition, he is achieving his ‘outcome’. And this outcome could be translated into something like ‘keeping the people satisfied’. The argument then would be that keeping the people satisfied is the best way of achieving outcomes for the community within a democracy. I think that this is an example of pulling the ‘outcome’ you’re trying to show you changed back down the outcomes model to some lower level where it’s easier to prove attribution.

So while, in my view, his initial claims about it being easy to figure out what is causing outcomes were weak and did not actually establish anything about his having an effect on outcomes, his second round of argument had more substance to it.

Want to know more? http://About.Me/PaulDuignan

Throwing people in jail because they won’t give us the information we want? – The price of indicator collection

As I’m writing this, we have an interviewer from our Statistics Department sitting in the other room asking detailed questions about our income and expenditure. It’s part of a nation-wide Household Economic Survey collecting information on household expenditure and income. It’s her second visit to us and she’s been here this time for just on two hours – the first visit took about the same time. Over the last two weeks I’ve been filling in an expenditure diary where I’ve had to record all my daily expenditure. Fortunately our interviewer knows what she’s doing and she has stepped us efficiently through the complex questionnaire – but it’s still a lot of work.

We actually don’t have any choice but to put several hours of our time to one side and fill in the questionnaire with her – participation is not voluntary. It’s mandatory, required by the same legal provisions that demand we fill in our Census forms (something that we, coincidentally, also had no choice but to do just a few weeks ago!). Presumably the penalty would be a fine rather than being thrown in jail (though if you refused to pay the fine, I imagine you could end up in jail in the fullness of time).

This is a little personal example of the cost and infrastructure needed to collect indicator information. Being an outcomes wonk I don’t begrudge putting the time aside because I understand how crucial it is to collect information which can be used for indicator and other types of outcomes work. But the cost is something which is often lost on people who blithely demand that programs and organizations – ‘collect comprehensive outcomes indicator information’ – without any thought to how much it’s going to cost to do so.

It also illustrates the point that collecting accurate information can require more than just spending money – it can involve having to use the power of the State to make sure that such indicator information is collected from the people it needs to be collected from. One of the most dramatic examples we have of this is in the road safety area where drivers can be forced to give a sample measuring their blood alcohol level. Again as an outcomes wonk, I love this sort of data. But there are serious limits to any exercise of State power. A mandatory requirement to collect information needs to be used very carefully to avoid serious push-back from those who have to give their time to fill in the information (for example  discussions like the one here about someone complaining about having to fill in a mandatory survey).

Of course the types of examples that people hold up as providing best practice indicator and outcomes data collection tend to be ones where there’s a large data collection infrastructure and often mandatory data collection  (e.g. road safety, recidivism data in the corrections area). They then expect us to come up with similar information about trends in outcomes and causality in areas where we have much less ability to collect information and can’t turn to the backing of the law to force people to provide information.

So the next time someone demands that you collect more indicator information on your program, it’s reasonable to ask the question: 1) how much are they willing to spend (or do they want you to spend) on collecting this information; and, 2) if required, are they prepared to support making the collection of information on the indicators which are relevant to your program a mandatory legal requirement?

Follow me on Twitter. Discuss outcomes issues on the DoView Linkedin Community of Practice.

Does avoiding regulatory enforcement represent a success or failure? A Chameleon Indicator.

There’s no doubt, some indicators are a lot more fun than others (although, I must note that people in my trade have a fairly low threshold for ‘fun’). I particularly enjoy ones which can be interpreted in any way you like. They can be called Chameleon indicators.

When developing outcomes DoViews (visual outcomes models) and performance management frameworks for organizations, I often run into a particularly ambiguous type of indicator – the number of regulatory interventions being undertaken by an organization. At first sight, what better indicator for an organization which includes a regulatory outcome as part of its mandate? But there are problems in interpreting these sorts of indicators reflected in a media exchange I heard this morning.

Our national department of conservation is currently embroiled in media controversy over reductions in staff and budgets. As part of the media spotlight focused on it, I heard a discussion this morning about a reduction in the number of times it involves itself in a regulatory process – the number of times it makes legal representations to Conservation Resource Management Consent Hearings.

A media interviewer, interpreting the reduction in the number of regulatory interventions as a failure of the department to achieve one of its outcomes – the result of not pursuing it aggressively because of staff shortages – asked the department’s head: ‘…on the face of it, is it a lower priority?’ [the regulatory intervention – the department getting involved in Conservation Resource Management Consent Hearings]. The department’s chief (interpreting the drop in the measure in the opposite way) replied:

‘What you are falling into is the trap of judging and measuring our success by the number of cases we take regardless of the outcome. We see [Conservation Resource Management Hearings] [the regulatory intervention] as a last resort. We would rather sit down without spending money on lawyers and work out issues if we can and confine the [Conservation Resource Management Hearings] issues to ones that we really can’t reach agreement on’.*

This is a classic example of the outcomes theory principle: Ambiguity in interpreting outcomes or performance measures/indicators of regulatory intervention when also seeking prevention.

Not having looked into this particular issue, I don’t want to come down on one side or another. I think that both sides are making reasonable ‘face value’ interpretations of the change in the indicator.

How can people setting up performance management systems deal with these regulatory intervention Chameleon Indicators? While people will continue to take different positions in interpreting them in the cut and thrust of media debate, there is a technical approach to the problem which is suggested by outcomes theory. In order to actually interpret what’s going on with this indicator, we would need to have further information about other indicators – for instance, whether there has been an increase in departmental activity focused on getting the parties together prior to potential Resource Management Consent Hearings. I’ve DoViewed it (built an outcomes model) below so that we can get a clearer picture of what’s going on. We would need to get information about the indicator in red in order to be able to interpret the regulatory intervention indicator in black. Even then we could not be certain, just from the indicator in red, that the department had been successful in reducing the number of contentious issues going to hearings (which is what this DoView aspires to). So we would really need to answer the evaluation question which also appears within the DoView.
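The core logic here – that the hearings indicator only becomes interpretable alongside a companion indicator – can be sketched as a simple decision rule. This is a toy illustration only; the function name, thresholds and categories are invented, and in practice the evaluation question in the outcomes model would still need answering.

```python
def interpret_hearings_drop(hearings_change: float, engagement_change: float) -> str:
    """Interpret a change in regulatory hearings against a companion indicator.

    Arguments are proportional changes, e.g. -0.3 means down 30%.
    On its own, a drop in hearings is a Chameleon Indicator: it could mean
    success (issues resolved earlier) or failure (under-resourced enforcement).
    """
    if hearings_change >= 0:
        return "No drop in hearings to interpret"
    if engagement_change > 0:
        # Consistent with prevention: more early engagement, fewer hearings.
        # Still not proof of impact - an evaluation question remains.
        return "Consistent with success (more early engagement)"
    # Fewer hearings AND less early engagement suggests reduced activity overall.
    return "Consistent with failure (less activity overall)"

print(interpret_hearings_drop(-0.3, +0.2))  # department's interpretation
print(interpret_hearings_drop(-0.3, -0.1))  # interviewer's interpretation
```

The point of the sketch is that the same first argument (hearings down 30%) yields opposite readings depending on the second indicator – which is exactly why the indicator cannot be interpreted on its own.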

So the technical answer to dealing with Chameleon Regulatory Intervention Indicators is to always interpret them against the underlying outcomes model (e.g. DoView) of the logic of what the organization is trying to do.  For the theory on showing whether an organization is achieving its outcomes see Duignan’s Types of Evidence That a Program ‘Works’ Diagram and for a practical visual approach see here.

So, the lesson from all this is that we should never just look at a Chameleon Indicator like the number of regulatory interventions on its own. We should always visualize it in the context of the logic of what the intervention consists of and see what surrounding indicators we need to measure and what impact evaluation questions we need to answer in order to really clearly understand whether or not an organization is achieving its outcomes.

*Reference to the interview can be found in the outcomes theory article linked above.

Are expert and key informant judgment evaluation designs types of ‘impact evaluation’?

Up on the American Evaluation Association Linkedin group, I’ve started a discussion about the range of evaluation designs which can be regarded as impact evaluation designs.

I have a typology of seven major impact evaluation design types used in Duignan’s Impact Evaluation Feasibility Check.

At least two of those design types – expert judgment and key informant judgment design types – are not seen by some as being appropriate to be called ‘impact evaluation’ designs. Some want to restrict the definition of impact evaluation designs to types such as Randomized Controlled Trials. Key informant designs are where groups of people ‘in the know’ about a program are asked questions about the program.

My definition of an impact evaluation design is one where someone is making a claim that they believe a program has changed high-level outcomes. In my Types of Evidence That a Program ‘Works’ Diagram, impact evaluation is conceptually distinguished from implementation evaluation on the basis of it making such a claim.

In contrast, non-impact, implementation evaluation (where you do evaluation for program improvement even in situations where you cannot measure impact) is not trying to make such a claim. I am not saying here that every type of key informant or expert design is impact evaluation, just ones where a question is asked along the lines of: ‘In your opinion, did the program improve high-level outcomes?’

I think that if this question is asked, then the evaluation is trying to ‘make a claim about whether a program changed high-level outcomes’. The question of whether particular stakeholders believe this to be a credible claim in a particular situation is a conceptually different question. And there are many stakeholders who would not regard it as such. However, this does not detract from the conceptual point that, if you can find stakeholders who in some situations would regard key informant or expert judgment designs as sufficiently credible for their purposes, then these designs can be regarded as a type of impact evaluation.

My broader purpose with this thinking within outcomes theory is to get the full list of possible impact evaluation designs considered in the case of any program so that we don’t just get obsessed with a limited range of impact evaluation designs, useful though things like Randomized Controlled Trials (RCTs) may be in some circumstances.

Seamlessly moving from evaluation to strategy and back again

I’m currently in a discussion on the American Evaluation Association’s Linkedin page about the relationship between monitoring, evaluation and strategic planning. While different consultants may be involved in doing different aspects of these for a client, from a client’s point of view they’re all just parts of their organization’s work which they somehow need to integrate and align.

When working with clients, it really helps to have an approach which lets you move from doing monitoring and evaluation planning, for instance, back to strategic planning. You can then just track whatever their organizational focus is at any moment. From their point of view, it means that monitoring, evaluation etc are seamlessly aligned with strategic planning and other organizational functions.

For instance, working with a client yesterday, using our approach and software, we were building a DoView Visual M&E plan with them. These plans are based on a DoView Visual Outcomes Model. The client then said, ‘it’s great what we’ve just done about measurement, but we also need to work out what we’re going to say to our funders about what we want to do next – i.e. our forward strategy’.

So we immediately and seamlessly moved to doing this task for them within the same meeting. We just took the DoView Visual Outcomes Model we had already built with them for monitoring and evaluation planning purposes and went through it, marking up their priorities for future action. The next step will be to map their planned projects onto the DoView and check for ‘line-of-sight’ alignment between their priorities and their planned actions.

It’s great to have the flexibility to move in any direction along the strategy – priority setting – project alignment – indicator monitoring – evaluation – outcomes-focused contracting spectrum, and to have a tool and approach that lets you immediately go wherever the client wants you to go. This is achieved by using the one visual model (a DoView Visual Outcomes Model drawn according to the 13 rules for drawing DoViews) to underpin all of these activities.

Paul Duignan, PhD

Putting the Planning back into M&E – PME or PM&E, what’s the acronym going to be?

In a posting on Linkedin, Leslie Ayre-Jaschke talked about the growth of PME – or maybe it will end up being called PM&E, or something else. Regardless of the acronym, it’s the movement to put planning back into monitoring and evaluation. ‘Putting the P back into M&E’ was the subtitle of a workshop I ran in South Africa for UNFPA several years ago. I think that it’s a concept that’s going to get a lot more traction over the next few years.

It’s consistent with what evaluators like Michael Patton, and many of us in the evaluation community, have been talking about for years. We’ve been talking up the key role of formative evaluation – evaluation aimed at making sure that programs are optimized. And formative evaluation is all about making sure that programs are well planned.

The point of this approach within evaluation is that it’s often pointless to evaluate a badly planned program. Evaluation resources would be better spent on making sure that the program is well planned than on measuring the fact that it will often fail to achieve its outcomes because planning was poor.

The new PM&E movement is not just about evaluators and evaluation; it is much broader than that, taking in people from a range of disciplines. This new integrated approach which is emerging needs an underlying theory which will appeal to all of the different disciplines involved – strategic planners, performance managers, evaluators, contract managers, policy analysts etc. The work I’ve been doing in outcomes theory has been designed to meet this need.

The purpose of outcomes theory is to provide an integrated conceptual basis for PM&E-type approaches. A common conceptual basis is needed if people across the different disciplines and sectors are going to be able to share conceptual insights about how they identify, measure, attribute and hold parties to account for outcomes when doing planning, monitoring and evaluation. Good theory is needed to help them quickly sort out the type of conceptual confusion that currently characterizes much of the discussion of outcomes-related issues. As the famous social scientist Kurt Lewin said – ‘there’s nothing so practical as a good theory’.

This aspiration of outcomes theory is summarized in the diagram below showing how it’s a meso-level theory reaching across strategic planning, monitoring, evaluation etc.

(see for more on this)

For people just working out in the field, who don’t need to know much theory, outcomes theory principles have been hard-wired into the DoView Visual Planning, Monitoring and Evaluation approach. Using the approach means that they will avoid many of the technical problems which are highlighted by outcomes theory.

Large-scale visual models of a program (drawn in the correct way, for instance as ‘DoViews’) provide the ideal foundation for the new fully integrated approach to planning, monitoring and evaluation which many are now seeking.

Does Monitoring and Evaluation M&E Planning have to be so cumbersome and painful? Just finished Bangkok Conference Presentation

Bangkok Conference

I was invited to give a presentation to the 1st Pan Asia-Africa Monitoring and Evaluation (M&E) Forum: Results-Based Management & Evaluation (RBM&E) and Beyond: Increasing M&E Effectiveness held in Bangkok. I’ve just finished my presentation which was called: ‘Anyone Else Think the Way We Do Our M&E Work is Too Cumbersome and Painful?’

I’ve had to review many Monitoring and Evaluation Plans in the past and I’ve generally found them long and tedious documents. I’ve also had to write them myself and realize that the tedium is not only on the part of the reader! It’s usually really hard to quickly overview what the M&E Plan is going to measure and the evaluation questions that are going to be asked.

Normally once the plan has been used to get funding for the M&E work, it’s just put in a desk drawer and other documentation is used to control the implementation of the M&E Plan and make presentations on it.

In the presentation, I outlined the new DoView Visual M&E Planning approach. This approach takes the pain out of writing (and reading) M&E plans and creates major efficiencies.

It takes half the time to create an M&E plan; it’s entirely visually based, which makes it easy to see what is, and (just as important) what’s not, being measured; the same DoView file can be used to control the implementation of the M&E work; all presentations can be made just using the DoView M&E Plan (you don’t need to create additional Powerpoints); and you can, if you wish, fully integrate project strategic planning into M&E planning (the Holy Grail of putting the ‘P’ – ‘Planning’ – back into ‘M&E’).

The virtual presentation was in the form of three short videos (about 6-7 minutes each) and a Skype question and answer session afterwards.

Check out the three short videos of the presentation here. The first video describes the reason we should move from the traditional approach, and the second and third videos show you how to use the new DoView paradigm. If you want the resource page on the DoView website which shows you how to build a DoView Visual M&E Plan and gives an example you can download, it’s here.

Paul Duignan PhD. Blogs at, is at, You are welcome to participate in the DoView Community of Practice on Linkedin. Download a DoView trial at