The Center for Strategy Research, Inc. Vol 4 Issue 9   October 2008


Hello!

This month we take a look at the concept of “likelihood to buy,” and the impact it can have on your research. If you’re not taking this into account, you may be generating accurate – but flawed – results.

As always, please click here to send us your thoughts and comments.


Julie Brown
President

Mark Palmerino
Executive Vice President



Don’t Ignore “Likelihood to Buy” When Developing Your Conclusions

I came home from a meeting yesterday evening to find my 16-year-old son, Tim, and his friend, Dave, sitting on the couch watching TV. I got the usual, teenage, grunted greeting from the two of them and made my way past to the kitchen.

A few minutes later, and now eager for something to eat, they emerged from the family room to forage through the refrigerator. That’s when I noticed the T-shirt that Dave was wearing: It was red, and said “OBAMA” in big white letters.

“Nice to see a kid with a political point of view,” I thought. At least he’s not among the seven percent of Americans still mysteriously undecided. (What on Earth are they waiting for?) It did strike me, however, that regardless of whom this young man chooses to support, at 16, it doesn’t really matter — he doesn’t get a vote in the election.

Indeed, as one of our readers observed as a follow-up to our last newsletter on the topic of polling pitfalls (Research, Like Election Polls, Demands Repetition):

“A poll gives you the % of people who favor one candidate over the other, but it does not necessarily tell you who will actually win because some groups (e.g., older voters) are more likely to actually vote than others (e.g., younger voters).”

It’s an excellent point, and highlights the concept of “sample representativeness” — the degree to which the individuals in a given study are representative of the population you wish to measure.

In a presidential poll, for example, much attention is paid to ensuring that those surveyed are demographically representative of the group in play, based on factors such as age, sex, race, religion, location, economic status and more. There’s little point, for example, in polling white female Protestants in Massachusetts as a way to predict the voting behavior of Hispanic male Catholics in Florida, and those who conduct such surveys pay careful attention to these important elements.

What’s often missing from the polling, however, is likelihood to vote. Nursing home residents with limited mobility; single parents who can’t arrange childcare; citizens in understaffed precincts where the lines run out the door… these are all examples of people who may have strong opinions, but who may not have an opportunity to voice them come election day.

Indeed, in a typical presidential election, only about half of those eligible to vote actually do so. Simply asking the intentions of everyone technically qualified, therefore, may not accurately predict the outcome, and polls that ignore this fact are far less reliable.
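For readers who like to see the arithmetic, here is a small sketch (in Python, with entirely hypothetical numbers — these are not drawn from any actual poll) of how turnout likelihood can flip a poll’s conclusion:

```python
# Illustrative arithmetic only: every number below is hypothetical.

def likely_voter_share(groups):
    """Candidate A's share among *likely* voters: each group's preference
    is weighted by its share of the eligible population and its turnout."""
    votes_cast = sum(g["pop_share"] * g["turnout"] for g in groups)
    votes_for_a = sum(
        g["pop_share"] * g["turnout"] * g["prefers_a"] for g in groups
    )
    return votes_for_a / votes_cast

groups = [
    # Younger voters: favor candidate A, but turn out less often.
    {"pop_share": 0.5, "turnout": 0.4, "prefers_a": 0.6},
    # Older voters: favor candidate B, and turn out more often.
    {"pop_share": 0.5, "turnout": 0.7, "prefers_a": 0.4},
]

# Raw poll of all eligible voters: a dead heat, 50% for candidate A.
raw_share = sum(g["pop_share"] * g["prefers_a"] for g in groups)

# Turnout-adjusted share: candidate A now trails, at about 47%.
adjusted = likely_voter_share(groups)
```

Same respondents, same stated preferences — yet weighting by who actually shows up turns a tie into a loss.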

Unfortunately, this type of sample representativeness error is also common in the market research world. Whether due to the constant pressure of trying to work faster and more efficiently, or simply out of neglect, many otherwise well-constructed studies are damaged by researchers who take their eyes off the likelihood ball.

Examples:
  • Asking a recently unemployed executive what he thinks of the new Mercedes. This person may once have been in a “likely to purchase” group, but probably isn’t at the time of the survey. The interview may uncover some interesting observations, but without confirming this person’s “likelihood to buy” relative to the luxury car market right now, his point of view won’t be of much short-term help in fine-tuning product features.


  • Asking homebuyers who intend to finance less than 80% of the purchase price of a home to weigh in on a private mortgage insurance (PMI) product under development. In this example, you may have an individual who perfectly matches the demographic of the typical PMI buyer, absent one critical factor… the need to buy it in the first place.


  • Ignoring cell phone-exclusive users. As more and more (mostly young and/or less affluent) people choose cell phone subscriptions instead of landlines, traditional approaches to phone research are in danger of leaving these groups out of the mix.

    In this case, unlike the two above, groups of people who are likely to participate — whether in voting or in buying — are left out of the equation, potentially skewing the results as well. (See more suggestions for working with cell phone users in Mixology below.)

The point here is a simple one: step one in conducting meaningful market research is making sure that you’re talking to the right people. And while most researchers are (rightly) focused on carefully matching sample demographics to those of the larger population, without also zeroing in on likelihood to be in the market in the first place, you’re in danger of gathering plenty of statistically accurate — but completely useless — information.

— Mark

Click here to share this newsletter with a colleague.

Mixology (Putting research into practice)

According to a recent article in the Boston Globe, “People with only cell phones may differ enough from those with landline telephones that excluding the growing population of cell-only users from public opinion polls may slightly skew the results…”

This is already happening with regard to presidential polling and in our experience, with market research as well. Ignoring this group is no longer an option.

Some suggestions for handling the “cell phone-exclusive” issue:

  • Employ random digit dialing. Although individual cell phone numbers are not listed, their exchanges are typically regionalized, either matching landline area codes or grouped into state-specific buckets. Dialing randomly within those exchanges gives you regional access to cell phone users.


  • Develop a proxy. If you know some things about the cell phone users in the population you are studying (e.g., they’re younger, less affluent, more likely to be unmarried), you can compensate by including more members of this demographic in your study. So, for example, if the sample you’ve assembled is older and more affluent than the population in question, add in more people — as a proxy — who match the “cell phone demographic,” to bring your sample in line.


  • Apply multiple methodologies. One way to avoid excluding the younger cohort of cell-phone-only users is to use a methodology that may be more appealing to them. One option is to conduct internet surveys of this group at the same time as telephone surveys of other groups, although this introduces the possibility of methodology bias (i.e., differences in responses attributable to the different survey methods themselves).

    Another option is to recruit telephone survey participants among younger cell-phone-only users through social media such as LinkedIn, MySpace and Facebook. This approach uses the internet as a recruiting tool, but not as the survey capture mechanism, thus lowering the likelihood of methodology bias. Both alternatives, however, carry an obvious limitation: they will miss the growing number of cell-phone-only users who lack landlines due to financial pressures or logistical considerations (the Roger Millers of this world).

No methodology or solution is perfect, and the risks associated with leaving some groups out of any methodology are worth a careful look.
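The “proxy” idea above amounts to classic survey reweighting. Here is a minimal sketch in Python — the group names and percentages are hypothetical, chosen only to show the mechanics:

```python
# A minimal sketch of the "proxy" idea: reweight respondents so the
# sample's demographic mix matches the population's. Group names and
# percentages below are hypothetical.

population_mix = {"under_35": 0.40, "35_and_over": 0.60}  # known target mix
sample_mix = {"under_35": 0.20, "35_and_over": 0.80}      # who you reached

# Each respondent is weighted by (population share / sample share), so the
# under-represented group counts more heavily.
weights = {g: population_mix[g] / sample_mix[g] for g in population_mix}
# under_35 -> 2.0 (each counts double); 35_and_over -> 0.75

def weighted_mean(responses):
    """responses: (group, score) pairs, e.g. purchase intent on a 1-5 scale."""
    total = sum(weights[g] for g, _ in responses)
    return sum(weights[g] * score for g, score in responses) / total
```

The same adjustment works whether the under-represented group is cell-phone-only users, younger buyers, or any other cohort your recruiting method tends to miss — provided you actually know the population mix you are targeting.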

 


In November 2007, we announced that our client Prudential had just published a landmark report, “Study of Employee Benefits: 2007 & Beyond,” based on CSR’s research of over 1,400 employee benefits decision-makers and over 1,000 employees.

We are pleased to announce that 2008’s report, based on our research of over 1,700 employee benefits decision-makers and over 1,800 employees, is now available. Download your own copy here.



In discussing the shirt Dave was wearing, I forgot to mention the hat he had on:

[photo]

Problems? Click here to send us an email with your request.
The Center for Strategy Research, Inc. (CSR) is a research firm. We combine open-ended questioning with our proprietary technology to create quantifiable data.

 
Understanding What People Really Think

