The Center for Strategy Research, Inc. Vol 5 Issue 4   May 2009


Hello!

In addition to managing market research engagements, we’re involved in a fair number of them ourselves as study participants. Julie’s recent — and frustrating — experience with a quantitative, online survey reminded us not to confuse poor quality surveys with poor quality participants.


Julie Brown
President

Mark Palmerino
Executive Vice President



Luxury in the Eyes of the B(MW)-Holder
Have you ever been frustrated by the wording or sequencing of a closed-ended survey? If so, then, as Alice Roosevelt Longworth would say, “Come sit next to me.”

It’s not that I enjoy complaining. However, a recent experience with an online, quantitative survey left me feeling both confined (by my inability to express myself within the constraints of the survey) and concerned (that the end client may not be getting the most accurate information).

The survey was purportedly about “luxury” automobiles and indeed, in one of the first questions, I was asked if I “currently drive a luxury automobile.”

The answer choices available to me were “Yes” and “No.” I was immediately flummoxed.

What constitutes a luxury automobile? I don’t consider my nine-year-old, base-model, 100,000-plus-mile BMW a luxury automobile (although I certainly recognize that some might). So, after some consideration, I checked “No.”

Next question: “Would you consider buying an automobile in 2009?” Again, I was given the exceedingly unsatisfying options of “Yes” or “No.” (I was starting to sense a pattern.)

Now, to be honest, I really wouldn’t mind breaking in a new car this year (hey, who wouldn’t?), and my husband and I talk about a replacement for my aging roadster on at least a monthly basis (usually about the time a new repair is required). But I’m just not sure that the family budget will support this as a priority before the end of the year. So I checked “Yes,” focusing on the word “consider” in the question.

Next question: “Would you consider buying a luxury automobile in 2009?” Again with the luxury automobile vagueness. And the answer choices? You guessed it… “Yes” and “No.”

OK, so now, I’m thinking, even if I would consider a car in 2009, this wouldn’t be the year for that SLK I’ve been eyeing. So, my answer: “No.”

And then, the kicker: “Please tick all the brands you would consider purchasing in 2009.”

This was accompanied by a comprehensive list of auto manufacturers, one of which was BMW. Now, I’ve been pretty happy with my wheels, so I checked BMW, along with at least four other names.

Wrong! The survey immediately shut down, and a warning note flashed that I had given inconsistent answers. The survey was terminated. Goodbye.

Let me just stop right there and tell you that I really do have better things to do with my weekend mornings than worry about a survey cutting me short. But I was pretty annoyed. So I emailed the administrator, made clear that I actually had been paying attention, and explained my predicament.

Now, several days later, I’m still waiting for an answer. And while I’m beginning to suspect that I’ll never hear back from these folks, the experience did give me a renewed appreciation for why research is so often criticized for not delivering what it should.

This particular survey had (at least) three significant problems in the way it was constructed:
  1. A lack of definition regarding what constitutes a “luxury brand.” Had the survey started with a clarifying question such as, “Which of the following brands do you consider luxury brands?” or, “What do you consider a luxury automobile?” (or, worst case, had the questionnaire provided a list of which brands “should be considered luxury for purposes of the survey”), the survey-taker and the survey administrator would have been on the same page.

    Instead, the survey writers assumed that their definition of “luxury” was the same as that of the participants, an assumption that got us off on the wrong foot almost immediately. Furthermore, the limited, “Yes or No” option prevented me from expressing my own uncertainty.

    (Note: This highlights one of the benefits of conducting qualitative research as a first step, prior to fielding a large-scale, quantitative study. This type of definition ambiguity would almost certainly have been brought to light through open-ended questioning.)
  2. An overzealousness in weeding out “bad survey takers.” We certainly understand the market researcher’s desire to weed out “low quality” survey participants from the sample base. These include the speeders (those who move too quickly through surveys), the cheaters (those who check boxes at random) and the repeaters (those who take surveys many times over in exchange for the stipend).

    But in our zeal to uncover the bad guys, we do ourselves a disservice by throwing out well-meaning folks who are victims of the survey itself or of their own understandable, human inconsistencies.

    If, instead, the administrators had allowed me to continue, and had programmed in an opportunity for me to explain the apparent inconsistency (an open-ended comment box at the end, for example), they might not have had to toss out my responses entirely. (A sketch of what that branching might look like follows this list.)
  3. An abrupt termination at the first perceived inconsistency. The other problem with the way the survey ended is that it was just bad form. The message I got — loud and clear — was that there was something wrong with my answers. While we all understand the reluctance to reward survey respondents who don’t take their participation seriously, we also need to appreciate that not all inconsistencies in responses are the participant’s “fault.”

    Even putting aside what are just the basics of common courtesy, if one hopes to enlist participants in future studies, it’s never a good idea to “slam the door” (or, in this case, the termination screen) so abruptly in the face of a survey taker.
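For the technically inclined, here is a minimal sketch, in Python, of the branching suggested in point 2 above. Everything specific in it is hypothetical: the field names (consider_luxury_2009, brands_considered), the luxury-brand list, and the wording of the follow-up are all invented for illustration. The point is simply that an apparent conflict can route a respondent to an open-ended follow-up and a review flag, rather than to a termination screen.

def check_consistency(responses):
    # Hypothetical rule: the respondent said they would not consider a
    # luxury automobile in 2009, yet ticked a brand the survey's authors
    # classify as luxury. Returns a description of the conflict, or None.
    LUXURY_BRANDS = {"BMW", "Mercedes-Benz", "Lexus"}  # assumed definition
    if responses.get("consider_luxury_2009") == "No":
        ticked = set(responses.get("brands_considered", [])) & LUXURY_BRANDS
        if ticked:
            return "Answered 'No' on luxury, but ticked: " + ", ".join(sorted(ticked))
    return None

def next_step(responses):
    # Instead of a hard terminate, flag the case for later review and ask
    # the respondent to explain in their own words.
    conflict = check_consistency(responses)
    if conflict:
        responses["_flag_for_review"] = conflict
        return {"action": "show_question",
                "prompt": "Some of your answers seem to point in different "
                          "directions. Could you tell us more about your thinking?"}
    return {"action": "continue"}

# Example: answers like Julie's get a follow-up question, not a goodbye.
answers = {"consider_luxury_2009": "No", "brands_considered": ["BMW", "Honda"]}
print(next_step(answers)["action"])   # -> show_question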
Here’s the Twist: It’s easy to mistake poor quality surveys for poor quality participants. And while it certainly makes sense to weed out those who, for whatever reason, may bias the data, we need to be equally vigilant about not improperly and/or rudely dismissing the well-meaning majority of participants.

Just as we would never simply hang up the phone on a telephone survey participant whose answers were believed to be inconsistent, online survey takers should be treated with care and respect. When we consider how important the cooperation of survey participants is to our profession, not to mention the increasing difficulty of getting people to participate in the first place, it’s in our best interests to treat people as the “luxury” participants that they are.

— Julie

Mixology (Putting research into practice)

While terminating a research participant based on one possible inconsistency is not a great tactic, we do realize the importance of identifying and removing those who are legitimately questionable. To this end, we recommend a “three strikes and you’re out” approach.

Specifically, we build into our surveys several “trap” questions — questions whose purpose is to uncover those participants who are not giving us quality information. Typically, we’ll include five to seven of these, with each mistake constituting a “strike.” (Completing the survey too fast — i.e., “speeding” — is also counted as a strike.)

Those who reach three strikes are removed from the dataset. For those with one or two strikes, we review their pattern of responses on a case-by-case basis to determine if there are any other oddities to suggest removal.
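As a rough illustration, here is a minimal sketch, in Python, of how that tally might work. The trap questions, expected answers, and time floor are all invented for the example (and abbreviated to two traps rather than the five to seven we would actually field); only the scoring logic follows the approach described above.

from dataclasses import dataclass, field

TRAP_ANSWERS = {  # hypothetical traps and their expected answers
    "trap_select_third_option": "C",
    "trap_fake_brand_awareness": "Never heard of it",
}
MIN_SECONDS = 240  # assumed floor; finishing faster earns a "speeder" strike

@dataclass
class Respondent:
    rid: str
    answers: dict
    seconds_to_complete: float
    strikes: int = 0
    notes: list = field(default_factory=list)

def score(r: Respondent) -> str:
    # Each failed trap question is one strike.
    for question, expected in TRAP_ANSWERS.items():
        if r.answers.get(question) != expected:
            r.strikes += 1
            r.notes.append("failed trap: " + question)
    # Speeding is also one strike.
    if r.seconds_to_complete < MIN_SECONDS:
        r.strikes += 1
        r.notes.append("speeder")
    # Three strikes: out. One or two: case-by-case review. Zero: keep.
    if r.strikes >= 3:
        return "remove"
    return "review" if r.strikes else "keep"

# Example: one failed trap plus speeding is two strikes, so this respondent
# is routed to case-by-case review rather than removed outright.
r = Respondent("p017",
               {"trap_select_third_option": "C",
                "trap_fake_brand_awareness": "I love it"},
               seconds_to_complete=190.0)
print(score(r), r.strikes)   # -> review 2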

For (much) more detail on this approach, check out our earlier issue, “Who Let the Dogs In,” from May of 2007.

 



We’re delighted to announce that Mark will be speaking at LIMRA’s Group Benefits Leadership Conference at Boston’s Marriott Long Wharf in September.

Together with one of our clients, Mark will discuss how our innovative approach to a compelling research issue helped this client better understand the challenges that members of key sales and distribution teams face.



If you haven’t got anything nice to say about anybody, come sit next to me.


— Alice Roosevelt Longworth



The Center for Strategy Research, Inc. (CSR) is a research firm. The “Twist” to what we offer is this: We combine open-ended questioning with our proprietary technology to create quantifiable data. As a result, our clients gain more actionable and valuable insights from their research efforts.

 
Understanding What People Really Think

