While writing this blog post I thought how nice it was to feel the work cogs turning again and to read something other than parenting 101 blogs. Not recommended, by the way; they have a frightening habit of making you feel like you’re doing it all wrong! In my short life as a mum (all nine months of it) I’ve learned that there is no perfect way to do anything, but as long as you and baby are happy and thriving – you’re doing it right!
Anyway, to get my work groove on I thought I’d go back to basics.
At its core, monitoring and evaluation is about linking past, present and future actions through proper planning, goal setting, consistent data collection, analysis and effective reporting. And repeat.
If you follow this fairly straightforward formula, you have everything you need for robust storytelling and feedback into the project, program or organisation.
In theory that is.
In the well over a decade I’ve worked in evaluation, I’ve felt that there has been a lack of focus on effective data collection. Which is crazy, because this is the core of everything. We would see projects winding up and needing to conduct an evaluation, but unable to properly demonstrate their impact because the right information hadn’t been collected in any meaningful way. It was either not planned well, not carried out, or collected in such a haphazard manner that making head or tail of it was almost impossible. A Tough Mudder for evaluators!
But there is change in the air.
What’s exciting is a distinct shift towards effective planning and implementation of data collection methods from the very start of a project (or continuously at the organisational level).
Action is being taken on a growing awareness that data needs to be robust and targeted.
Collecting every possible piece of information is not only impractical but incredibly overwhelming when it comes to reporting on what matters.
Pressure is always increasing from funders, investors and regulators to prove that what you are doing, and what they are throwing money at, is actually having an impact. They want to know whether practices are changing, whether natural resources are being managed better, whether there has been a return on investment, and so on.
As evaluators, it’s our job to tell your story, or to help you tell it in the best possible way. And I’m sure you’ll agree that the richer and more targeted the data sources, the better.