Media campaigns are intended to affect their audiences – such as convincing people not to text and drive, or persuading people to purchase a product or vote for a particular candidate. There are various touchpoints where media can help lead an audience from awareness to action on an issue. Being able to track these touchpoints – and the audiences engaged – enables organizations and media distributors to more efficiently and effectively test their content as well as intelligently move audiences from awareness to action at a faster, more effective rate.
Referencing the four key stages in a media campaign, this post focuses on the second stage: content development. Informed by the information and process of the design phase, a content developer works with an evaluator to test ideas and content before significant investment is made in a polished set of products and broadcasting. This is where the individual pieces of content that support the larger campaign are designed and created.
But this is an ideal scenario.
Some producers do not work with an evaluator. Evaluation can be expensive, and many funders assume that the producer already knows what will work. Often, the content is reviewed only by an internal audience – perhaps including the funder. Including an evaluator is thus a frequently overlooked part of the process, one that deserves focus and budgetary support.
A prominent example of poorly tested content is the anti-drug PSA, a genre that has been studied repeatedly and shown to be ineffective. Even Cracked (a popular humor website) has its list of the “5 Most Ineffective Anti-Drug PSAs of All Time.” In most cases, one of two critical steps was missed: either the content was not tested at all, or it was tested with the wrong audience.
There are three key points in the content development process where content can be tested:
1. Early in development (through storyboards or similar methods)
2. After draft content is developed
3. Finalized content, just prior to broadcast
Assessing the content at all three points significantly increases its likelihood of success. “Success” is determined by the goals outlined in the design phase, with the role of the content identified at the beginning of the process.
1. Early in development
At this stage, the evaluation focuses on two key questions: What is the purpose of the content (e.g., raising awareness), and do the story and facts of the content support that purpose? Simple storyboards or even draft written content will allow reviewers to determine whether the content is on the right track. This can be done in a focus group or in more informal meetings.
It is also important to consider the audience. The funder can and perhaps should review the content (as should the production team), but the most important group to engage is the target audience. The content might sound like a great idea, and the facts might be right, but if the story and/or tone are off, the audience will not receive, much less internalize, the message.
2. After draft content is developed
Once rough content is completed, engaging the potential audience again is the best way to assess whether the content is achieving its goals. Does the audience receive the message the content intends? Are they suitably emotionally engaged? Focus groups are particularly effective at this stage, allowing participants to share their observations as a group. Individual surveys or interviews are also useful if the topic is sensitive and you want to better understand the content’s effects on individuals.
3. Just prior to broadcast
The content is developed and ready to go – so why one more assessment? First, this is an opportunity to take one last look at the individual effects of the content on the audience. It is also an opportunity to string content together – that is, to assess the potential additive effects of the content. Up until now, each piece has been reviewed in isolation; now combinations can be explored to determine what additive effects they may have. Pulling together another focus group to review the content overall, as well as in its order of presentation, is a good way to evaluate the content one final time.
Identifying and engaging the correct audiences to support the evaluation is critical. Several years ago, I recall sitting with a group of people who were designing an “innovative show” for the 20-something audience. Only one person in the room was under 30, and the majority skewed into their 40s and 50s. While the show was eventually produced by that under-30 individual, they were heavily influenced and directed by colleagues in their 40s and 50s. Content decisions for the show were also made by the older group. The show lasted a while, but it never attracted its intended audience.
What would have happened if – instead of the older group trying to guess what appealed to the younger audience – they had more deeply engaged 20-somethings in the development process? I leave that question for you to consider.
Stay tuned for my next post in the media series – distribution – where I will discuss how evaluation can help guide and tailor your distribution efforts to have improved effects on your audience.