Amy's Posts

Are you proving the value of your comms program?

  • Published: by Amy Samson | Permalink

Many rural, NRM and agricultural organisations invest significant amounts in communication programs, which we all agree can play a vital role in growing next- and end-user knowledge, awareness and understanding. Why, then, are these clearly important activities not always measured as rigorously as research, adoption and extension programs, with communication outcomes clearly linked to and measured against higher-level organisational objectives?

An excellent journal article arrived in my inbox this morning from the latest Public Relations Journal by Alexis Bajalia (volume 13, issue 2) – Where Are We Now? Public Relations Professionals Discuss Measurement and Evaluation. Alexis looks at the current state of measurement and evaluation in the PR (and, by extension, the communications) industry by reviewing excellence and role theories in the literature and interviewing PR professionals across different organisations in the US about how they “view measurement and evaluation and the extent to which they use measurement and evaluation findings to influence organizational effectiveness”. This article has some great takeaways for any comms professional.

I like how Alexis notes that output-level metrics (e.g. impressions, circulation, number of articles, etc.) are a necessary first step, but the important part for continuing to advance the PR profession is to link outputs and outtakes (e.g. reaction, tone, click-throughs, engagement, followers) with outcomes (e.g. awareness, impact, attitudes, advocacy). Executive-level PR professionals she spoke to expressed a high level of awareness of the need to make these connections, particularly noting the management function of PR and the importance of outcome-level metrics, building relationships, and satisfying stakeholder needs.

It’s also interesting how Alexis found through her interviews that PR professionals at junior and executive levels were more likely to engage with and advocate for measurement and evaluation if they had strong relationships with higher level management.

The article is worth a read in full, particularly if you are in comms/PR or interested in measurement and evaluation at all, but I wanted to pull out a few points worth pondering.

  • Professionals (executive and junior) who reported using Advertising Value Equivalency or AVE (a metric which equates media coverage to equivalent advertising spend) said they do not consider the metric strong or even valid.
  • Executives interviewed identified key measurement and evaluation challenges: the need for staff trained in data analytics; the need to develop stronger, more creative ways to measure (beyond outputs); and the need to integrate measurement and evaluation.
  • The study recommends these steps for PR/comms professionals who want to improve their measurement and evaluation:
    • determining what success looks like up front by initially defining objectives and key performance indicators;
    • increased planning for and implementation of evaluation research at the formative, process, and summative levels;
    • integration of measurement among departments and platforms to gain clearer, deeper insight about audiences;
    • determining correlations between outputs and outcomes, rather than relying on anecdotal measures or inferences;
    • and a willingness to be critical of what has been done in the past, making room for stronger ways to measure and evaluate public relations.

While this article is based on a relatively small, US-based sample, Alexis points out that her findings line up with decades of literature and contribute new knowledge about the current state of PR (and comms) measurement. And I agree that this study has a lot to offer and makes good suggestions for continuous improvement, which is what we all should be doing!

The International Association for the Measurement and Evaluation of Communication (AMEC) has developed an integrated evaluation framework which is a great comms planning tool and extremely helpful for developing meaningful, impact-based reporting at the organisational level (e.g. KPIs and objectives).

If you are outcomes- and objectives-focused when planning your comms program, integrate data collection activities across your organisation, and deliver impact-based reports, you are well on your way. If you’re not, and want to be able to better plan for and prove the value of your comms program, we’re more than happy to have a chat.

5 key evaluation steps to consider before you hit crisis mode

  • Published: by Amy Samson | Permalink

Unless you’ve had your head buried in the sand, you will know about the food tampering crisis that recently engulfed the Queensland strawberry industry (in particular) and will have seen other industries affected as well. This has been, and remains, a crisis situation until it is resolved and the perpetrator/s are discovered.

From an evaluation perspective, going through a crisis, by its very nature, reveals a whole lot of key learnings that can be taken forward and applied at all levels of the organisation/industry. So how do we capture these and use them moving forward?

Alan Williams (2011) proposed a framework that can be applied to help assess crisis management initiatives and is intended to help better understand the difficulties and complexities of ‘success’ and ‘failure’ in crisis management. He recognises that there are shades of grey along the spectrum between what could be considered a successful outcome or a failure, and offers a systematic approach to understanding and evaluating crisis episodes.

The crisis management success/failure spectrum.

His framework is based around three dimensions of crisis management – processes, decisions and policy. Williams then provides criteria around what could be considered a success/failure in each of these as well as the types of evidence that could be considered to assess the outcomes.

This approach makes a lot of sense, and I would think that the ideas of what a successful outcome or a failure looks like (and their key indicators) in an organisation or industry should be workshopped as part of the crisis management planning phase, and then revisited as part of the evaluation process.

I’m a strong believer in planning your evaluation from the start including your data collection processes. So as you are developing your crisis management plan, build in your crisis evaluation plan. Of course it depends on the nature and type of issue you are dealing with, but if you’ve thought through different scenarios, you’ll have a head start on collecting useful information that you can learn from.

Here then are the key evaluation steps that could be taken when developing a crisis management plan:

  1. Map out data collection opportunities and how/when to implement around planned response activities (e.g. media monitoring and analysis, social media monitoring and analysis, website analytics, online polls, interviews, structured observation, impact on sales, records and details of related incoming calls/emails, records of actions taken)
  2. Develop key evaluation questions around identified crisis scenarios and planned approaches (e.g. process related, decisions made, policies enacted/changed/impacted)
  3. Workshop your success and failure criteria. These will be different for individual situations.
  4. Develop post crisis debrief process – e.g. debrief workshop with key players / industry/organisation/stakeholder; broader survey (telephone or online) of relevant stakeholders (if relevant) on perception of crisis handling and impacts; case study based report (to be widely shared) of what happened, how it was handled and what has been learned; ongoing media/social media analysis of public sentiment / attitudes / key message delivery etc.
  5. Review and update crisis management plan

The above should also consider the organic response to the issue, that is, the broader public response driven by influencers outside of your immediate control. Social media (and traditional media) is a major driver of this and can often sway public mood one way or the other, sometimes within the space of an hour, simply on the trending of a hashtag.

In the strawberry industry’s case, social media (and the traditional media, particularly ABC Radio) really influenced public sentiment and actions in a positive way, seemingly without too much input from the strawberry industry itself. Things don’t always work out this well, but that’s another article, about social media strategies for crisis management. Google should give you plenty of good reading on the subject.

Desktop reviews don’t have to be boring

  • Published: by Amy Samson | Permalink

Many evaluations I’ve worked on have either been focused around or included a desktop review. This could be of existing documents and/or trawling the web for other sources of information.

Trying to find relevant information is not always easy and can be very overwhelming, particularly when you have a big pile of documents to go through! It’s easy to fall into the trap of including too much information (just in case) in the report, which then ends up huge and essentially just repeats what is already out there. This isn’t a very useful or analytical approach.

So I thought I’d share a few strategies which I have found helpful when having to trawl through a lot of information.

  1. Understand the end game. Is the review a main part of the evaluation or a contributing data source? What needs to come out of this report? An analytical conclusion or a summation of data?
  2. What are the key evaluation questions? Write them down. These guide your data collection and help keep you focussed on what information is important. Think of them as your interview questions and the data sources as your subject.
  3. Develop a report structure. This I find is the most important – it is your roadmap. By working out report headings (focused around the evaluation objectives) you’ll have a clear understanding of what information you need which means you won’t miss things and you won’t get lost. You should be able to intelligently read through your sources and address your headings. No one wants to have to go back over reports again and again re-checking data. But remember too that new information may emerge so you might need to add new headings as you go along – don’t become so focussed that you miss the little unintended outcome gems.
  4. Interrogate methodically. Go through your data sources methodically, one at a time, using your headings as a guide to pulling out the relevant information for your report. Don’t forget to attribute quotes and information (I find footnotes very useful for this) and include a list of all the data sources. Whether you state this up front or include in an appendix is up to you.
  5. Data presentation. Try to be as succinct as possible, use your own words and don’t data dump. By this I mean just cutting and pasting big sections of text from other reports etc. This isn’t useful and most likely, your audience has already seen this text many times and doesn’t need to see it again in your report! There may be different ways you can present the data. Could a table or graph be useful? Maybe an infographic could tell a great story?

While not exhaustive, hopefully these ideas give a little food for thought, help you to craft an interesting story and enjoy what is often seen as a hard slog!

Is doing some evaluation better than nothing?

  • Published: by Amy Samson | Permalink

Majorca Holiday

I was recently on holidays in Majorca (easy to do when living in London – not so easy from Australia!) where we stayed overlooking a lovely Mediterranean beach called Cala Esmerelda.

Now this is where I say something along the lines of: while I was on holiday I was thinking about evaluation and had a ‘light bulb moment’ – except that didn’t happen at all! I was thinking about when I was going to have my next ice cream, when the next swim would be, and whether I wanted red or white wine with dinner.

That being said, the day we were leaving, a survey was left on the table for us to fill out about our experience at the resort. Being a sympathetic evaluator, I was compelled to fill it out, although I can imagine that many people intent on squeezing the last bit out of their holiday wouldn’t take the time. I wondered how many forms were filled out, and whether the resort took action on the feedback it received. Was it worth their while?

Now I don’t know what other type of evaluation this resort does, but if this is it, at least they are doing something to find out what the end-user (us) thinks and hopefully acting on it. To me it demonstrates a frame of mind and it means that the organisation (hopefully) is thinking about improving.

In my opinion having some sort of evaluation is better than having none. Of course nothing beats a well-designed evaluation program feeding directly into organisational goals (and this may well be the case at the resort we visited), but being in the right headspace and seeking some sort of feedback is where it all starts.


Keeping your ear to the ground – using social media as a listening post

  • Published: by Amy Samson | Permalink

As well as using social media to engage and build relationships with your audiences, it can be very useful as a listening post. While it might be interesting for project/program managers to understand what is being said online about relevant topics, this is probably more important for larger organisations and bodies, who should be on top of what is being talked about online – particularly those interested in, or impacted by, sensitive subjects.

To give you an example, in 2012/2013 (when a Senior Consultant with Gorkana) I worked with the Horizon Scanning and Futures team at the Department for Environment, Food and Rural Affairs (DEFRA) in England on their award-winning social media horizon scanning program.

Among other things (including influencing policy decisions), this program looked to create an early warning system of emerging issues (e.g. environmental, wildlife etc) before the media gained interest. The aim was to pinpoint the conversations, understand who the influencers were and use this to inform the digital engagement strategy.

Using a social media scanning tool, and in constant consultation with the DEFRA team, we used keywords and phrases to monitor relevant issues and online conversations. This information was available live using online dashboards (allowing immediate action for evolving issues with daily/hourly reporting if necessary) and a written monthly report contributing to internal reporting to the highest levels of government. You can read an in-depth case study about it here.

Having some sort of social media listening post, even if it’s using free tools, is becoming more and more important. Organisations used to primarily monitor issues in the press using a clippings service (and they still do), but now the press is sourcing many of its stories from online.

You’re behind, potentially ambushed and in reactive mode already, if you don’t know the issues and how they are being talked about in social media.
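
To make the listening-post idea concrete, here is a minimal, hypothetical sketch in Python (no real monitoring tool or platform API involved): count daily keyword mentions in a batch of posts and flag any day that jumps above a simple threshold.

```python
from collections import Counter

# Hypothetical collected posts as (date, text) pairs. In practice these
# would come from a monitoring tool or a platform's API.
posts = [
    ("2019-05-01", "Concerns about river pollution near the estuary"),
    ("2019-05-01", "Lovely weather for the show today"),
    ("2019-05-02", "More pollution reports from the estuary this morning"),
    ("2019-05-02", "Pollution levels in the river look worse than last week"),
    ("2019-05-02", "Estuary pollution is trending - who is responsible?"),
]

keyword = "pollution"
mentions_per_day = Counter(
    date for date, text in posts if keyword in text.lower()
)

# Flag any day where mentions exceed a simple threshold - a crude
# early-warning signal that an issue may be gaining momentum.
THRESHOLD = 2
for date, count in sorted(mentions_per_day.items()):
    flag = "  <-- spike" if count > THRESHOLD else ""
    print(f"{date}: {count} mention(s){flag}")
```

A real listening post would pull posts continuously, track many keywords and compare against a rolling baseline rather than a fixed threshold, but the underlying logic is the same: count, compare, flag.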

This area is only growing in influence, and policy makers are having to pay attention as social movements gain momentum.

Should you be listening to social media?

Using Pinterest to collect and collate observational data

  • Published: by Amy Samson | Permalink

There are so many ways to collect data in today’s online world that simply weren’t available even 10 years ago. Information can now be collated in many different ways from even the remotest locations.

Observation is a data collection method that has been around for a while. One example of this methodology is the use of photos taken at regular intervals to record changes in land use over time, a particularly useful method for Natural Resource Management (NRM) projects focusing on land rehabilitation.

Digital photography already makes this data easier to collect, but sending large files via email can be restrictive (bouncing emails, inbox limits etc) and not everyone finds it easy to access online file sharing services.

And this is where Pinterest could be helpful. The site is all about sharing images, which makes it an ideal way to collate (using boards), sort and share photography.

Taking the above example of an NRM project, say a catchment area is being rehabilitated and farmers participating in the project are asked to capture observations (with a set of guidelines) of change over time. The project officer could set up a Pinterest profile and create a series of boards which farmers add photos to (including short written notes).

You would then be able to very easily see the changes over time in one space. It would also be compelling evidence for funders showing changes made and goals met.  And I’m sure those participating would enjoy seeing the results of their efforts so well represented.

The photos could be collated in a number of different ways depending on the reporting needs of the project including:

  • Monthly/seasonal boards – useful for grouping change over time and comparing across farms
  • A board per farmer recording their observations
  • Boards focused around practices (e.g. tree planting)
  • Boards focused around plant types (e.g. grasses, shrubs)
  • Boards focused around benefits and outcomes

The same idea could be applied to programs focused around increasing production. Photos could be taken before, during and after the new/changed practice has been implemented. Ideally benefits and outcomes would be captured – for example decreased pesticide use or more available fodder for livestock.

Let your imagination go and be creative with data collection (of course while maintaining rigour). It doesn’t have to be that boring job that everyone procrastinates over.

Towards solving the social media evaluation mystery

  • Published: by Amy Samson | Permalink

In my experience social media as a communication channel can still feel overwhelming to many people. So taking the next step and thinking about how to meaningfully evaluate the use of it can almost feel too much of a mystery – which often means it doesn’t happen!

However, the more I’ve worked with social media evaluation, the less mysterious I think it is. It’s easy to get caught up in the numbers that get thrown around, like “we reached 20 million people with our tweet”. To that I would say: great, but what does that number mean, and so what?! (And there’s a whole other conversation to be had around how reach numbers are derived and what reach actually means.)

It’s no different to any other evaluation where you need to work out what success looks like for you. 

Understand where your social media activities fit into your overall objectives or Key Result Areas (KRAs) and build your measurements around this.  

Here’s a really quick example. You may have an overall KRA focused on encouraging on-farm practice change in one particular area. One strategy might be to use social media as a way to increase farmer and consultant engagement with the website and use the available information to inform practice change. Here are some ideas of what you could look at depending on your goals:

  1. Website traffic numbers – Do they increase when you include links in social media? Are people clicking through? Look at the traffic sources to see where people are coming from. 
  2. Quality of time spent on website – Are people spending longer on your site and are they spending it on the relevant pages? Has this time increased since you’ve been using social media?
  3. Engagement with content – Is the content on your site being shared through social media? Are your tweets being retweeted? Are you being directly tweeted at, and are you tweeting back? Are conversations happening on your Facebook page (if you have one)?
  4. Publication downloads – Are your publications being downloaded if you are promoting them? Are they being downloaded anyway? Who is downloading them and how are they being used (potential survey question)?
  5. Social media followers – Who are they? Are they the people you need to reach? Are more of the right people following you? Are you using the right channels? This could be something to ask in surveys/interviews. 
  6. Website awareness, use and contribution to practice change – Establish a baseline. Ask your target audiences (survey, interviews) whether they are aware of your website and how they currently use it. Ask again during and after your campaign. Find out whether the information available contributed to any changes they’ve made, and whether consultants are using it as a reference. TIP: Hijack other evaluations that might be going on and pop a couple of relevant questions into surveys, workshops, interviews etc.
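
As a small, hypothetical illustration of points 1 and 3, here is how a couple of these numbers could be pulled together from an analytics export (all figures invented):

```python
# Hypothetical monthly analytics export: sessions by traffic source,
# as you might pull from a web analytics tool.
sessions_by_source = {
    "organic_search": 1800,
    "direct": 950,
    "social": 420,
    "referral": 230,
}

total = sum(sessions_by_source.values())
social_share = sessions_by_source["social"] / total

# Hypothetical campaign figures for a click-through rate (point 1):
link_clicks = 126        # clicks on links shared via social media
link_impressions = 5200  # times those posts were shown

ctr = link_clicks / link_impressions

print(f"Social media share of website traffic: {social_share:.1%}")
print(f"Click-through rate on social posts: {ctr:.1%}")
```

Numbers like these only become meaningful when tracked over time and tied back to the KRA, which is exactly why point 6 matters most at the reporting stage.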

In terms of understanding your social media strategy’s contribution to your KRA, point 6 is the most important particularly at the reporting stage. 

Every successful evaluation begins at the planning stage. This ensures that you collect meaningful data along the way. 

Plan to evaluate and you’ll plan to succeed. 

Musing about the on-farm influence of social media

  • Published: by Amy Samson | Permalink

There are a few questions running around my head about social media and the rural space – particularly in relation to on-farm practice change.

What is its level of influence?

As a part of this I need to better understand how social media is being used by farmers, agencies, organisations, communities and government and for what purpose. And how is it being evaluated?

A recent ABC story on farmers using social media to their advantage talked about it being digital ‘word of mouth’ (WOM). As a communication channel, WOM has always been one of the most important and most trusted.

So is this trust translated online?

Social media is also being effectively used to build online communities. Take for example AgChatOZ, which aims to raise the profile of Australian agriculture by shining a light on the leading issues that affect the industry and the wider community. If you’ve been on Twitter at all and followed agricultural discussions, you will have come across its hashtag (#AgChatOZ) at one time or another. Again, as an evaluator I would be really interested to see how its conversation has grown over time and what its impact has been.

I’ve also seen words such as empowering and engaging used in relation to social media and rural and remote Australia.

Its ability to connect the isolated and allow their voices to be heard is a big drawcard.

There’s a lot to talk about and I can see many blog topics ahead of me to address the questions that keep popping up and making thought bubbles over my head!

I’m really interested to know what is out there already, so please do point me in the direction of any stories, studies, papers etc covering this or related topics. My email is: or find me on Twitter: @amyrsamson

Back to basics and telling stories

  • Published: by Amy Samson | Permalink

While writing this blog I thought how nice it was to feel the work cogs turning again and to read something other than parenting 101 blogs. (Not recommended, by the way; they have a frightening habit of making you feel like you’re doing it all wrong!) In my short life as a mum (all nine months of it) I’ve learned that there is no perfect way to do anything, but as long as you and baby are happy and thriving, you’re doing it right!

Anyway, to get my work groove on I thought I’d go back to the basics.

At its core, monitoring and evaluation as a practice is essentially about linking past, present and future actions through proper planning, goal setting, consistent data collection, analysis and effective reporting. And repeat.

If you follow this fairly straightforward formula, you have everything you need for robust storytelling and feedback into the project, program or organisation.

In theory that is.

In the well over a decade I’ve worked in evaluation, I’ve felt that there has been a lack of focus on effective data collection. Which is crazy, because this is the core of everything. We would see projects winding up, needing to conduct an evaluation and being unable to prove their impact properly because the right information hadn’t been collected in any meaningful way. It was either not planned well, not carried out, or collected in such a haphazard manner that making head or tail of it was almost impossible. Tough mudder for evaluators!

But there is change in the air.

What’s exciting is a distinct shift towards effective planning and implementation of data collection methods from the very start of a project (or continuously at the organisational level).

Action is being taken on a growing awareness that data needs to be robust and it needs to be targeted.

Collecting every possible piece of information is not only impractical but incredibly overwhelming when it comes to reporting on what matters.

Pressure is always increasing from funders, investors and regulators to prove that what you are doing – and what they are throwing money at – is actually having an impact. They want to know whether practices are changing, whether natural resources are being managed better, whether there has been a return on investment, and so on.

As evaluators, it’s our job to tell your story, or to help you tell yours in the best possible way. And I’m sure you’ll agree that the richer and more targeted the data sources, the better.