The Center for Strategy Research, Inc. Vol 3 Issue 2   February 2007


Hello and Welcome!

The weather may be cold, but excellent market research is as hot as ever (okay, we’re biased).

And speaking of bias, this month we take a look at survey participant selection, and how to make sure your research results are not tainted by three subtle — but all too common — mistakes in recruiting qualified and willing participants.

As always, please click here to send us your thoughts and comments.


Julie Brown
President

Mark Palmerino
Executive Vice President



Runaway Jury?

If you’ve been reading this newsletter for a while, you know that I’m a big fan of TV police shows — “Crime Dramas” as they’re usually called. Well, I’ve got a new favorite… a brand new series called “Shark,” starring James Woods as “a charismatic, supremely self-confident defense attorney.” (sigh)

The other night, an episode aired that involved jury selection. I’m no attorney (although I’ve seen one on TV), but from what I can tell, good jury selection is a vital step in the process. Its objective is to stack the deck in one’s favor by choosing individuals who are predisposed to thinking, behaving, and ultimately deciding in a certain way.

No surprise there. What I realized while watching, however, is that this is exactly the opposite of the way research participants should be chosen.

In other words, while a good trial lawyer seeks to introduce bias at the outset (by finding jurors who are most likely to decide in his or her favor — regardless of the facts), a good researcher works to eliminate bias in the process of fielding survey participants.

In practice, of course, this is sometimes easier said than done, and inexperienced researchers may unknowingly stack the deck — in either direction — by overlooking some subtle, but critical, principles.

Here are three of the most common mistakes that we see in jury, I mean, participant, selection:

  1. Assuming “higher is better.”

    There’s prestige associated with interviewing the CEO, and there’s a widely held (if not always spoken) belief that the higher-ranking your participants, the better the information you’re likely to capture. We don’t always look at it that way.

    Instead, we begin by asking, “Who has the knowledge?” If you’re selling copy machines to Fortune 500 companies, for example, the CEO probably knows little about what makes for a good product and vendor. The administrative assistants, on the other hand — the people who live and die with all the copier problems each day — will give you much better insights into the truth you’re seeking.

    In addition, and generally speaking, the higher you reach in an organization, the more you complicate the research process. In contrast to their subordinates, C-Suite members are harder to schedule, more pressed for time, and more likely to take the conversation off in another direction… none of which adds up to research efficiency. As we like to say at CSR on the subject of sourcing participants, “reach as high as you need to, but no higher.”

  2. Lessening the power of a random sample.

    As we mentioned in the August issue of this newsletter (“Summer Sadistics: Mitigating Factors”), there’s an underlying assumption of “representative and random” at the heart of statistical research. The reason a national election involving millions of voters can be predicted by surveying just 1,200 people, for example, isn’t because the 1,200 are clairvoyant. It’s because the researchers have chosen a representative and random sample of the population at large.

    By contrast, consider this example. Suppose you sell financial products and want to learn more about how investors make decisions. If you call people at home during the workday, you’ll likely speak with many stay-at-home moms and retirees. That’s fine if that group matches the target audience for your financial products. If not, you’ve just introduced sampling bias into your survey.

    Sampling bias can be introduced by any number of factors, including geography (e.g. your focus groups took place in Boston but your market is worldwide), tenure (e.g. you only surveyed junior staff in trying to assess overall employee satisfaction), or even comfort with technology (e.g. the survey was only available online).

    The fact is, all surveys have a built-in sampling bias, in that you only get to talk with the people who are willing to talk. As a researcher, therefore, you need to make sure you’ve controlled for as much of this as possible.

  3. Not starting with a large enough and complete enough group of potential respondents.

    People tend to underestimate how big a list is needed to get the results they want. In a consumer study, for example, we find that we typically need 15–30 potential names for every successfully completed survey (our “field and tab” colleagues often seek 50 or more). And while the numbers can be better when sourcing businesspeople, you’ll seldom see completion rates anywhere near 80% without heroic efforts, and at some point, you’ll reach a cap. (The short sketch after this list works through the arithmetic.)

    As a rule, the harder it is to find survey participants, the more your results (and budget) will suffer. You’ll spend more time and money working the list, and — because you may need to alter your selection standards in order to meet your completion goals — you may impact randomness, as described in item #2 above.
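
For readers who like to see the numbers, here is a minimal sketch, in Python, of the arithmetic behind points 2 and 3. The target of 400 completed surveys and the yield of one complete per 20 names are illustrative assumptions, not figures from any particular study.

    # Minimal sketch of the arithmetic behind points 2 and 3 above.
    import math

    def margin_of_error(n, p=0.5, z=1.96):
        # 95% margin of error for a proportion estimated from a simple
        # random sample of size n (worst case, p = 0.5).
        return z * math.sqrt(p * (1 - p) / n)

    # Point 2: why ~1,200 randomly chosen voters can call a national election.
    print(f"n = 1,200 -> +/- {margin_of_error(1200):.1%}")  # about +/- 2.8 points

    # Point 3: how large a starting list is needed at a given yield
    # (hypothetical target; yield taken from the 15-30 rule of thumb above).
    completes_needed = 400
    names_per_complete = 20
    print(f"Starting list: about {completes_needed * names_per_complete:,} names")

The first calculation only holds if the sample is representative and random; a biased sample of 12,000 can tell you less than a random sample of 1,200.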

In summary, watching shows like “Shark,” and seeing how much bias is deliberately introduced into jury trials, makes me hope I’m never in the position of being judged by a dozen “impartial peers.” By the same token, when the success of your business is at stake, take care to ensure that the research you depend on is conducted as diligently, and with as much validity, as possible. Court adjourned!

Click here to share this newsletter with a colleague.

Mixology (Putting research into practice)

It’s all well and good to suggest (as we do above!) that you start with a large list of well-suited potential research participants. But what does one do if the list isn’t available, or isn’t performing?

Some suggestions:

  • Cold-call companies. If you don’t have names of specific individuals, you may need to spend time calling companies in search of individuals with particular job titles.
  • Rent or purchase sample lists from reputable third parties.
  • Use third-party services to fill in the missing data (i.e. phone numbers and/or e-mail addresses) on the names you’ve already got.
  • Ask respondents, “Who else should we speak with?” Those who’ve completed a survey are often an excellent source for leads to peers and co-workers.
  • Increase incentives. Raising the benefit of participating will increase the number of people who agree to take part, and help you get maximum value from the pool of names you do have.
  • Offer more flexibility. Night and weekend calling or appointment-setting, for example, will allow you to reach a broader group of people who might not be available (or willing to speak) during standard business hours.
  • If at first you don’t succeed… While you’ll never get everyone on your list to participate, the longer and more frequently you work a list, the better the results will be.

Naturally, each of these tactics has the potential to raise cost and extend project duration, and it’s up to you to determine how these outcomes trade off against your research goals.

 

Twist and Shout


Last month, we announced that CSR continues to grow, and this month we’re pleased to herald the recent arrival of Scott Robinson.

Scott is a recent import to the Boston area, having moved here from New Jersey, where he worked in financial services, most recently as a Strategic Marketing Analyst with PHH Mortgage. In that capacity, Scott provided a variety of marketing leadership and support services, including sales support and collateral development, the development and implementation of strategic marketing initiatives, and the analysis and reporting of research results.

At CSR, Scott is responsible for interviewing and coding on several of our ongoing client projects, as well as managing interviewers. He’s also been involved with other important elements of our research processes, including analysis and reporting.

In addition to his role at CSR, Scott is an Adjunct Professor teaching Accelerated Basic Mathematics. He is a recent graduate of Fairleigh Dickinson University, where he earned a B.S. in Business Management and an M.B.A. in Marketing.



“It is better to be roughly right than precisely wrong.”

— John Maynard Keynes



About Us
The Center for Strategy Research, Inc. (CSR) is a research firm. We combine open-ended questioning with our proprietary technology to create quantifiable data.

 

(617) 451-9500
Understanding What People Really Think

