The Center for Strategy Research, Inc. Vol 3 Issue 5   May 2007

Funny thing. It turns out that the two of us aren’t really people. No, we’re both actually dogs. We know, we’ve had that fake photo of the two of us at the top of this newsletter for a couple of years now, but today, we thought we’d include a real picture and finally fess up.

Okay, we’re kidding. But the truth is, when you conduct Internet surveys, you can’t always be sure who’s filling them out. That’s why this month’s edition of Research With a Twist focuses on how to improve the validity of Internet-based surveys!

As always, please click here to send us your thoughts and comments (no barking).

Julie Brown

Mark Palmerino
Executive Vice President

Who Let The Dogs In?

If you’ve ever lived with a dog, I don’t think the following anecdote will come as a surprise to you…

Our 4-year-old Cavalier King Charles spaniel, affectionately known as Dickens, isn’t allowed to eat off of our china. He’s permitted to use his own doggie dish, but we try to draw the line at slurping dog tongues on food and dishes meant for human use.

How well does it work? Well, you decide: When we are around, he is the perfect little gentleman. If we have occasion to leave a dish unguarded, even for a minute, it’s a different story. How do we know? First, because we’ll sometimes notice that things are missing from the dish after we return, and there will be Dickens, wearing one of those guilty dogfaces that we pet owners know all too well. Second, and much more incriminatingly, we sometimes catch him up on a chair, happily wolfing down our food!

Clearly, dogs behave differently when they think nobody is watching. Guess what? So do people.

CSR recently completed a research study in which we conducted 63 in-depth interviews with market research executives to better understand the decision-making process around various market research initiatives (see the Twist and Shout sidebar to download your own complimentary copy).

Among the many findings was this: Researchers are highly satisfied with Internet-based research. (Just over half indicated that they are very satisfied with this methodology, and no one expressed dissatisfaction with it.) Specifically, respondents cited the Internet’s cost effectiveness (28%), ease of reaching respondents (22%) and, most importantly, its fast implementation or access to results (41%) as reasons for their high satisfaction.

There is, however, one significant drawback with Internet-based research, which many of these research professionals cited: Anonymity. Like a dog alone in the room with enticing food for the taking, survey-takers who answer questions on their own, without the involvement of an interviewer, will often behave differently (i.e. badly).

These differences may include misrepresenting oneself (e.g., claiming decision-making authority or job responsibility beyond what one actually has), racing through a survey, and choosing survey answers indiscriminately, among other behaviors. All of these add up to reduced survey validity.

Internet-based surveys, therefore (and by the way, the same could have been said of paper-based surveys twenty years ago), require careful development, as well as vigilance in analysis, to ensure that the results truly reflect the views of the population in question.

Consequently, we employ a number of tactics when using Internet-based surveys to identify and (possibly) discard surveys that are less than ideal:

  • Time Checking. “Speeders” is an industry term for survey-takers who move too quickly through a survey to have read (much less considered) all of the questions. By assigning a minimum threshold for survey completion time (e.g., five minutes) and flagging responses that fall below it, we can remove these from the mix.
  • Mid-Stream Verification. By including a “question” such as, “When you read this question, check answer number seven,” in the middle of a survey, and flagging those people who don’t check answer number seven as instructed, we uncover individuals who are not being attentive.
  • Repetition. Often, we’ll include a question later in the survey that asks for the same information in a different way. Here as well, internally inconsistent answers are helpful in identifying those survey-takers who are not engaged or not answering truthfully.
  • Pattern Observation. Some survey-takers are known as “straight-liners.” These people pick a particular answer (4 on a 5-point scale, for example) and simply check that answer over and over again. To uncover this behavior, we agree on a reasonable number of identical choices in a row; anything beyond that is flagged.
  • Unreasonable Options. People who misrepresent themselves and/or their qualifications can be hard to uncover and the use of unreasonable options is one way of getting at this. The idea here is to offer choices that no truly knowledgeable person within the population being surveyed would select. If one of these answers is chosen, it’s a good sign that the survey-taker is unqualified or not engaged.
  • Logical Inconsistencies. Certain types of answers logically go hand in hand, and when they don’t appear that way, we’ve got another red flag. For example, if you state in one part of a survey that your company has 5,000 employees, but then check a box later on that says your company is too small to qualify for insurance, there may be an issue.
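Several of these checks lend themselves to automation once the survey data is exported. The sketch below is our own illustration, not CSR's actual system; the field names, thresholds, and trap-answer values are all assumptions chosen to mirror the examples above:

```python
# Illustrative survey-validity checks. Field names, thresholds, and
# trap values are hypothetical examples, not a real implementation.

MIN_SECONDS = 300          # Time Checking: e.g., a five-minute minimum
TRAP_ANSWER = 7            # Mid-Stream Verification: "check answer number seven"
MAX_STRAIGHT_RUN = 8       # Pattern Observation: longest tolerated identical run

def flag_response(resp):
    """Return the list of validity flags raised by one completed survey."""
    flags = []

    # Time Checking: completions faster than the threshold are "speeders".
    if resp["completion_seconds"] < MIN_SECONDS:
        flags.append("speeder")

    # Mid-Stream Verification: the trap question has exactly one right answer.
    if resp["trap_question"] != TRAP_ANSWER:
        flags.append("inattentive")

    # Pattern Observation: find the longest run of identical scale answers.
    answers = resp["scale_answers"]
    run = longest = 1
    for prev, cur in zip(answers, answers[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    if longest > MAX_STRAIGHT_RUN:
        flags.append("straight_liner")

    # Logical Inconsistencies: e.g., 5,000 employees yet "too small" for insurance.
    if resp["employee_count"] >= 5000 and resp["too_small_for_insurance"]:
        flags.append("inconsistent")

    return flags
```

A respondent who finishes in two minutes, straight-lines every scale question, and contradicts their own company size would come back with several flags, while a careful respondent returns an empty list.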

Of course, none of these is absolute, and part of the challenge is deciding how strictly to apply these rules. The most conservative approach, naturally, would be to exclude anyone who had any error. In doing so, however, we might also exclude valid respondents, who may have simply made an honest mistake or misread a question.

Keep in mind, as well, that a “one strike and you’re out” policy toward errors could result in the elimination of a large percentage of completed surveys (e.g., 30%, 40%, 50% or more), which would significantly raise the cost and time involved in completing the research.

In the end, and whatever you decide to do, the important thing to remember is that your survey results — and by extension, the critical business decisions you make based on these results — are only as good as the individuals that you allow to participate and the answers they provide. As the old joke goes, “On the Internet, nobody knows you’re a dog.” Woof, woof!

Click here to share this newsletter with a colleague.

Mixology (Putting research into practice)

But is it really worth doing?

In other words, how do we know it’s worth the added time and expense of building in survey checks, analyzing survey results for possible invalidations, discarding potentially suspect results, and identifying additional survey respondents to achieve the required number of valid surveys?

Good question. The answer is that when we compare the survey results of those who make no “errors” with those who do, we generally see differences. Further, the differences between the two groups are often staggering, with the mean responses to many questions differing in statistically significant ways.
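One generic way to gauge such a difference in means is a two-sample (Welch's) t statistic computed per question; large absolute values suggest the two groups really do answer differently. The function below is a textbook sketch, not CSR's analysis code:

```python
import math
import statistics

def welch_t(group_a, group_b):
    """Welch's two-sample t statistic comparing the mean responses of two
    groups (e.g., respondents with no errors vs. respondents with errors).
    Assumes each group has at least two responses and nonzero variance."""
    var_a = statistics.variance(group_a)
    var_b = statistics.variance(group_b)
    standard_error = math.sqrt(var_a / len(group_a) + var_b / len(group_b))
    return (statistics.mean(group_a) - statistics.mean(group_b)) / standard_error
```

Identical groups yield a statistic of zero; the further the result is from zero, the stronger the evidence that the error-free and error-making groups are answering differently.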

In addition, analyzing the statistical differences between those who make no errors (where errors are defined as the kinds of checks we describe above in the main article), versus those who make one error, versus those who make two errors, and so on, can help us decide who we will exclude (i.e. where we draw the line).

We often find, for example, that there are few or only small differences between respondents who make no errors and those who make just one. We may therefore decide to keep those who make one error and exclude those who make two or more.
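That line-drawing step reduces to tallying each respondent's error count, inspecting the distribution, and excluding everyone above a tolerated maximum. A minimal sketch (the default threshold of one error is an assumption taken from the example above):

```python
from collections import Counter

def error_distribution(error_counts):
    """Map each error count to the number of respondents with that count,
    so the analyst can see where a natural cutoff might fall."""
    return dict(Counter(error_counts))

def draw_the_line(error_counts, max_errors=1):
    """Split respondent indices into kept and excluded groups, keeping
    anyone with at most `max_errors` flagged errors."""
    kept = [i for i, n in enumerate(error_counts) if n <= max_errors]
    excluded = [i for i, n in enumerate(error_counts) if n > max_errors]
    return kept, excluded
```

With counts of `[0, 1, 0, 3, 2, 0, 1]`, for instance, the distribution shows three clean respondents and two with one error, and a one-error tolerance keeps five of the seven surveys.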

Given our experience, we recommend designing several questions into the survey, with the explicit purpose of identifying respondents who are not paying the appropriate level of attention or who may be misrepresenting themselves. In this way, you can identify and exclude them prior to data analysis.



Twist and Shout


The results are in! “Getting What We Need From Market Research,” CSR’s exclusive report on the results of interviews with market research executives, is now available! Download your free copy to understand what your colleagues really think about:

  • The challenges with conducting and using market research;
  • The decision factors and process by which companies retain market research firms;
  • The use of, satisfaction with, and reasons for satisfaction with various research methodologies (online surveys, qualitative and quantitative phone research, focus groups, and one-on-one interviews);
  • CSR’s research methodology.

Follow this link to obtain your complimentary copy of “Getting What We Need From Market Research.”

“Always tell the truth.
It’s much easier
to remember.”

— Mark Twain

Enter your email here to subscribe to “Research with a Twist.”

Problems? Click here to send us an email with your request.
About Us
The Center for Strategy Research, Inc. (CSR) is a research firm. We combine open-ended questioning with our proprietary technology to create quantifiable data.


(617) 451-9500
Understanding What People Really Think
