In addition to extending warm winter wishes (and warmer than we get in February in New England!) to all of you reading this, we’d like to open today’s newsletter with a hearty greeting to those of you who may be receiving this for the first time. We’ve attended a number of conventions and meetings over recent months, where so many people who stopped by our booth made the effort to sign up for this publication. Thank you for stopping by, and thank you for reading!
Today’s newsletter explains the limitations shared by the voting process for the revered Oscar statuette and much of market research. Even if you aren’t a movie (or Oscar) lover, we hope you share our point of view on why “why” is so important!
Julie Brown, President
Mark Palmerino, Executive Vice President
The One Thing An Oscar Winner Will Never Know
With the Academy Awards just weeks away, we’re already taking bets around here regarding the likely winners. Will “Best Picture” go to Lincoln, the favorite? Or will Silver Linings Playbook sneak up from behind?
How about Best Actor? Daniel Day-Lewis?… Joaquin Phoenix?… Bradley Cooper?
Whatever happens, one thing is certain: None of us will ever know why the winners won. Or, for that matter, how close any of the losers came to taking home a statue instead.
The fact is, from a market research perspective (is there any other?), the Oscars are the ultimate closed-ended survey. It’s pass/fail, with one pass and many fails for each category. When the smoke finally clears, there will be nary a clue about why the chips fell as they did that night.
All of which got us thinking back to 2005 and the very first edition of this newsletter (Crash won Best Picture that year, by the way). It was called “On a Scale of 1 to 5, How Are You?”, and it addressed this very topic: the challenges of closed-ended surveys.
Much has changed since then, but the article’s three main points regarding the limitations of this approach are still valid:
Limitation #1: The information isn’t as useful. As the Academy Awards example illustrates so nicely, there’s no “why” provided, and therefore no deeper insight and no guidance regarding possible areas of improvement.
Limitation #2: You need to know the answers going in. You can’t create a closed-ended survey without presuming the answers, a constraint which introduces bias by forcing participants to work within the range of options provided.
Limitation #3: The conversation is likely to get cut short. Closed-ended surveys leave little room for thought and zero room for conversation, two constraints which lead to limited understanding and frustrated participants.
Today, eight years later, the world has continued to evolve in many ways, three of which have important implications for those of us involved in research:
Much qualitative and nearly all quantitative research has shifted from the phone to the web.
Imagine a survey in which participants are asked to rate their level of satisfaction with a financial services product, using a 1-5 scale. Those who select anything less than five are then asked: “Please tell us how we can improve.” In a phone survey, participants can elaborate and explain, offer nuance, and respond to follow-up questions. On the web, they’re simply asked to type their thoughts into a box.
The problem is that people are not as fast or as willing to type as they are to speak. They may have a lot to say, but when asked to use a keyboard, they nearly always offer truncated answers, if they answer at all. In addition, absent a live, listening person on the other end of the line, there’s no opportunity for further elaboration when it’s needed.
In a nutshell, even when asking an inherently open-ended question like “why,” the shift to the web and the restrictiveness of the survey vehicle itself dampen both the depth and the quantity of responses.
We’ve all become “Twitterized.”
“Twitterization” – the tendency to communicate in hyper-brief, broken sentences – has found its way into survey responses as well. Typing/texting has become a “many times a day” event for most of us, a shift that has led to survey responses that are often cryptic, if not downright incomprehensible.
Consider the following typed answer, from a recent web survey conducted for one of our clients, to a question about deciding among our client and two competitors: “Had an opportunity to hire and didn’t suggest them. Regret now. They didn’t know what they were talking about.”
Which they? In what way did they not know what they were talking about? Whom, if anyone, did you retain instead and now regret hiring? Making sense of results like this becomes a guessing game, one filled with ambiguity and with little opportunity, therefore, for informed, deliberate action.
Attention spans are believed to be shorter.
This third overall change – and the resulting conclusion that people are no longer willing to spend time as participants in thoughtful, in-depth market research studies – is one we don’t buy into.
Yes, executives are all busier than ever before. That said, when it comes to important decisions and significant purchases, we’ve continued to have great success identifying and engaging high quality participants in our studies. In one recent project, for example, we were able to conduct interviews with 60% of the population in question, with an average interview time of 20 minutes each.
If the topic is important to the participants, and if the interview is well-conceived and professionally conducted, the opportunity to engage is as strong as ever.
So what does it all mean, these persistent limitations from the past and new implications for the present?
For us, the answer is as clear as it was when we published our first newsletter, eight years ago: In-depth, open-ended conversations are the best way – perhaps the only way – to understand what your target audience members really think and to develop fact-based, actionable responses to a rapidly changing marketplace.
Take these ideas to heart and maybe next year, when the envelope is opened, the winner will be… You!
Mixology (Putting Research into Practice)
Given our suggestion that you engage your customers and prospects in conversations, rather than limiting them to rating scales, here are three recommendations for today’s rapidly shifting research landscape:
Go in a qualitative direction wherever possible. When dealing with high-value audiences in particular, we favor open-ended, conversational telephone interviews.
Segment your customer base. Yes, SurveyMonkey and other online tools are quick and cheap. But as we argue above, these approaches don’t come without a cost. If budget constraints prevent you from speaking in depth with all populations, remember that it doesn’t have to be all or nothing.
By segmenting your most important clients and customers from the pack, or by rotating which groups are interviewed in depth each year, you can strike a balance between the information you need and the costs involved in obtaining it.
Take action on what you learn and share it across your organization. One of our clients presents the results of its client satisfaction research in person, across all of its regional offices, highlighting the findings and putting plans in place to improve for the following year.
By closing the loop on the measuring process, they ensure that their research dollars lead to action and not simply a collection of easily ignored reports.
The Center for Strategy Research, Inc. (CSR) is a research firm. The “Twist” to what we offer is this: We combine open-ended questioning with our proprietary technology to create quantifiable data. As a result, our clients gain more actionable and valuable insights from their research efforts.
The Center for Strategy Research, Inc. 101 Federal Street · Suite 1900 Boston, MA 02110