The Center for Strategy Research, Inc. Vol 3 Issue 10   November 2007


Welcome!

This month we focus on focus groups. More specifically, on how an inherent weakness in focus group structure can sometimes bring more confusion than clarity to your research results.

As always, please click here to send us your thoughts and comments.


Julie Brown
President

Mark Palmerino
Executive Vice President



Is Your Open-Ended Approach Truly Open?

It’s only November, but with our local New England Patriots off to an incredible start (11 wins and no losses), people here in Boston are already talking about the Super Bowl this coming February.

In addition to football, of course, America’s biggest annual sporting event is also famous for its advertisements… some of which have been more successful than others. You may, in fact, recall one ad in particular from the 2005 event. This ad — promoting the then just-released 2006 Lincoln Mark LT pick-up truck — was most notable for the fact that it never appeared at all.

The 30-second ad featured a priest “lusting” after the pick-up truck in question, while a little girl looked on. Unfortunately, when word got out regarding the topic, it caused an uproar — particularly here in Boston, the epicenter of the clergy abuse scandal — after victim support groups and others raised objections to its underlying message.

Given the cost of running the ad (the airtime alone ran $2.4M), it’s no surprise that Ford tested it with consumers, who (according to Ford) described it with words such as “fun” and “humanity.” What was surprising, however, was that the research missed the other, much less favorable point of view entirely.

Similarly, when it comes to the common practice of using qualitative research — and in particular, focus groups — to help design closed-ended, quantitative surveys, what doesn’t get included is often just as important as what does.

Let’s say, for example, that you want to conduct a closed-ended survey of 1,200 people on a particular topic. In order to do this effectively, and for your research results to have value, you must first develop a set of appropriate questions and associated answers regarding the topic at hand. If you don’t offer choices that reflect the range of most likely answers, you’ll bias the research from the very start and, as happened to Ford, may miss out on some important insights altogether.

Assembling the questions and answers may be relatively straightforward for a topic that’s well understood. When this is not the case — for new products, new concepts or even an unfamiliar population of respondents — a typical step often involves running four or more focus groups, as a means of learning more and generating ideas. Using the insights and information gained, the questions and answers for the closed-ended study are then designed.

Unfortunately, relying on focus groups for this purpose can lead to incomplete and biased answers. It’s another reason why we recommend in-depth, one-on-one surveys instead. Here’s why…

  • Focus groups are often biased towards the few.

    Part of the focus group moderator’s job is to involve all participants. In practice, however, it’s tedious to ask every single person what he or she thinks on every single issue. The louder, more energetic, more confident voices tend to carry the day, and those whose opinions might be just as valid (but may be less conventional or less well-expressed) tend to be marginalized.

    If, on the other hand, each of those participants were to be interviewed individually (at roughly the same cost, incidentally, as conducting several focus groups), a much wider range of points of view would emerge. As the survey developer, you’d therefore stand a much better chance of creating a list of questions and associated answers that truly reflect how the population at large views the topic.

  • Focus groups occur in a physical location.

    To develop a closed-ended study on the basis of qualitative research, running six 12-participant focus groups in six different cities is certainly better than conducting them all in the same place. That said, they won’t give you nearly the same diversity of opinion as would conducting the equivalent 72 open-ended phone interviews with participants from around the country.

    (Who knows how much sooner Ford would have been tipped off to potential problems with the pick-up truck ad, had it sourced a geographically wider range of opinions.)

  • Focus groups make the ranking of responses difficult.

    As stated earlier, one of the keys to creating an effective closed-ended survey is to offer answer options that resonate with participants; you want to give people choices that reflect their most likely answers.

    Get this wrong, and respondents are forced to either select an inadequate choice from among those offered, or to simply choose “other.” In either case, you’ll end up with incomplete or bad data, possibly leading to incomplete or bad decisions (see Mixology below for more on this).

    But what do you do when your focus groups uncover dozens of potential answers to each question? You can’t offer dozens of answer options in a closed-ended survey, and without the benefit of an objective ranking system (something a focus group doesn’t provide), you’re forced to subjectively pare these down to a manageable list.

    If you survey people individually, on the other hand, you can quantify which ideas come up most frequently (and, by the way, without the bias of other group members). You can then objectively decide which answer options are most likely to reflect the true perspectives of your closed-ended survey takers.
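To make the tallying idea concrete, here is a minimal sketch, using hypothetical coded themes rather than data from any actual study, of how responses from individual interviews might be counted so the most frequent ones become the answer options in a closed-ended survey:

```python
from collections import Counter

# Hypothetical coded themes from 72 one-on-one interviews; in practice,
# each free-text answer would first be coded into a theme by an analyst,
# and one interview may surface several themes.
themes = (
    ["too much travel"] * 20
    + ["happy in current role"] * 17
    + ["positions appeared temporary"] * 13
    + ["bad past application experience"] * 11
    + ["undesirable locations"] * 6
)

counts = Counter(themes)

# Keep only the most frequently mentioned themes as closed-ended answer
# options; the long tail can reasonably fall under "Other."
top_options = [theme for theme, _ in counts.most_common(4)]
```

Because every mention is counted, the cut-off between a named answer option and “Other” is an objective frequency threshold rather than a subjective judgment call.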

In conclusion, it’s important to keep in mind that even when used appropriately, a closed-ended survey can only be as good as the questions and associated answers it provides. Focus groups, while offering many of the characteristics of open-endedness required for survey design, can sometimes cover up as much as they reveal. And when that happens, your research may be thrown for a loss!

— Julie

Click here to share this newsletter with a colleague.

Mixology (Putting research into practice)

Here is a recent example of a closed-ended survey where the selection of possible answers was overly constraining to respondents:

Had you known of these job postings, would you have applied for them? If no, why not? (The question was part of a series relating to employee interest in applying for various internal positions. Note that multiple answers were permitted and captured, so the answers below sum to more than 100%.)

Reasons (respondent) would not have applied:

  • The role(s) did not appeal to me — 18%
  • Too new in current role, and did not feel that I was/am ready — 15%
  • Not interested in working in a field role — 12%
  • Terms and conditions — 6%
  • Did apply, in fact — 5%
  • Advised not to by Line Manager — 3%
  • Don’t know — 3%
  • Other — 53%

The large “other,” of course, is a red flag. When we dig deeper and look at the range within this catchall, we see:

  • Too much travel — 20%
  • Happy in current role, no interest in other roles — 17%
  • They appeared to be temporary positions — 13%
  • Bad experience when applying for posted opportunities in past — 11%
  • Locations of posted jobs are undesirable — 6%
  • Personal/family issues/conflict — 6%

Many of these responses are more popular than the selections offered in the survey. In addition, the phrase “terms and conditions,” which could have captured some of the “other” ideas (such as too much travel or temporary positions), was so vague that it did not appeal to many respondents as a selection.

Had in-depth interviews been conducted in advance of the closed-ended survey, the selection of answers would have been more robust, and the feedback from respondents more clear.

 


Twist and Shout


Some of you may recall recent announcements of clients retaining CSR to help design, execute, and publish results of thought leadership studies.

We’re pleased to share another recent White Paper, entitled “Study of Employee Benefits: 2007 & Beyond,” published by our client Prudential and based on our research of over 1,400 employee benefits decision-makers and over 1,000 employees working for an employer with at least 50 employees.

Follow this link to read the White Paper or to download your own copy.



“Men occasionally stumble over the truth, but most of them pick themselves up and hurry off as if nothing had happened.”

— Winston Churchill



Enter your email here to subscribe to “Research with a Twist.”


Problems? Click here to send us an email with your request.
About Us
The Center for Strategy Research, Inc. (CSR) is a research firm. We combine open-ended questioning with our proprietary technology to create quantifiable data.

 

(617) 451-9500
Understanding What People Really Think

