I’ve just posted an article on the two paradigms for impact/outcome evaluation and full program roll-out. The distinction is between designing an evaluation that can provide impact/outcome information about the full program roll-out, versus a paradigm where you do impact/outcome evaluation only on piloting and then, for the full roll-out, simply make sure that best practice is implemented.

I was once involved in the evaluation of an overall program which had over 900 component programs. The way we went about evaluating it was, in my view, wrong. The attempt was being made to ‘evaluate the program’, which translated into an attempt to evaluate the impact of the whole roll-out of the 900 or so programs. Given the level of resources available, the evaluation was not really able to say much about the impact of those 900 programs. A better approach might have been to adopt what my article describes as the second paradigm – impact/outcome evaluation only on piloting, followed by monitoring of best practice implementation during full program roll-out.
This is a much more pragmatic approach and less likely to result in what I call pseudo-outcome evaluations. These are evaluations which claim to establish that a program has improved high-level outcomes when they have not been able to establish that at all, because doing so was not appropriate, feasible or affordable. This is all part of my push for evaluation to be more sector-centric rather than program-centric.
Anyway, check out the article and leave any comments at the bottom of it or here on the blog.
Duignan, P. (2009). Full roll-out impact/outcome evaluation versus piloting impact/outcome evaluation plus best practice monitoring. Outcomes Theory Knowledge Base Article No. 248. http://knol.google.com/k/paul-duignan-phd/full-roll-out-impactoutcome-evaluation/2m7zd68aaz774/104
Paul Duignan, PhD
Outcomes and Evaluation Blog (OutcomesBlog.org)