The Center for Strategy Research, Inc. Vol 4 Issue 3   March 2008


Welcome!

A cartoon in the Sunday Boston Globe a few weeks back had us howling — with both laughter and frustration. It was a perfect example of something called “instrument bias,” the subject of this month’s issue of Research with a Twist.

As always, please click here to send us your thoughts and comments.


Julie Brown
President

Mark Palmerino
Executive Vice President



Is Your Instrument Biased?
[Cartoon: “Non Sequitur”]

I have a confession to make: When I open the Sunday paper, the first thing I read is the comics.

I don’t know if it’s a coping mechanism for bracing myself against the “real” news or simply a holdover from my childhood when the comic section accounted for the sum total of my Sunday newspaper reading. Either way, that’s my starting point.

Interestingly, in just eight panels (sometimes fewer), the best cartoons often do a better job of summing up an issue than the rest of the paper does in dozens of pages.

Such was the case a few weeks ago, when a comic called “Non Sequitur” skewered the inanity of the political polling process. You can follow this link to see the cartoon for yourself, but in a nutshell, it depicted a door-to-door survey-taker asking a homeowner a series of short, “check the box” questions about his political orientation. Thanks to a series of limiting, poorly conceived questions and answer choices, the results were laughable.

This type of structural limitation in a survey device or process is what’s called “instrument bias.” Every survey has some: Web-based surveys exclude those without computer access, door-to-door surveys exclude the homeless, and so on. That’s understood going in, and the challenge for any organization that designs a survey is to minimize the bias.

Closed-ended surveys like the one shown in the comic are particularly vulnerable to instrument bias. Because both questions and answers are preconceived, there’s no room for dialogue, clarification or shades of grey as the survey unfolds — walking away with a handful of checked boxes does not ensure accurate or useful information.

And, while the cartoon is certainly (hopefully!) an exaggeration, let’s take a look at what went wrong in it:

  • The questions asked were limiting. Asking whether you’re a Democrat or a Republican presupposes that those are the only options. Asked that way, the question leaves no room for Independents, the Green Party, the Bull Moose Party, or whatever else might be possible. A better way to ask would have been, “Are you a member of a political party, and if so, which one?”
  • The answers offered were incomplete. It’s impossible to offer participants every conceivable answer; even if you knew them all ahead of time, it would be unwieldy to list them. And yet including some choices (and therefore excluding others) is a necessary step in developing any closed-ended device.

    Focus groups are often used as a means of solving this dilemma. For example, if you were planning to conduct a Web-based, closed-ended, worldwide survey of IT hiring managers, you might first field several focus groups of 10 IT managers each and ask them about the problems they have in hiring staff. At the end of your sessions, you might have a list of 20 or 30 IT manager hiring woes.

    That’s when the fun (i.e., instrument bias) begins. You’ve now got to decide which five or six of these to include as choices in your survey. Unfortunately, your focus groups offer no help in this regard: they tell you that an idea came up, but give no guidance in determining which answers are the most likely or important, and would therefore appeal to the greatest number of people.

    A better approach to developing survey questions (WARNING: shameless CSR self-promotion ahead) is one that combines open-ended questioning with the ability to quantify the results. If you can both uncover new ideas and responses and rank how frequently they occur, you’ll be better equipped to develop a set of answer choices for your closed-ended questions that resonates with participants. A rough sketch of this coding-and-counting idea appears just after this list.

  • The choices and labels used were judgmental. The standard catch-all “other” is a better option than the “Trouble Maker” or “Satan Worshipper” categories the cartoon used to label answers that didn’t fit neatly into the survey-taker’s instrument, but the example reveals an important point: poorly worded choices, whether the result of honest ignorance or of deliberate intent to push a particular agenda, can quite easily influence results.
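To make the coding-and-counting step concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the response codes, the sample data, and the TOP_N cutoff are invented for illustration, and it says nothing about CSR’s actual methodology or technology. The idea is simply that each open-ended answer gets tagged with one or more codes, the codes are tallied, and the most frequent ones become the answer choices for the closed-ended survey (plus a catch-all “other”).

    from collections import Counter

    # Hypothetical example: each open-ended interview answer has already
    # been tagged ("coded") with one or more response codes. Both the
    # codes and the data below are invented for illustration.
    coded_answers = [
        ["skills shortage", "salary expectations"],
        ["skills shortage"],
        ["salary expectations", "visa/work permits"],
        ["skills shortage", "staff retention"],
        ["salary expectations"],
    ]

    # Tally how often each coded idea came up across all interviews.
    frequency = Counter(code for answer in coded_answers for code in answer)

    # Keep the N most frequent ideas as answer choices for the
    # closed-ended survey, plus a catch-all "other".
    TOP_N = 3
    choices = [code for code, _ in frequency.most_common(TOP_N)] + ["other"]

    print(choices)
    # e.g. ['skills shortage', 'salary expectations', 'visa/work permits', 'other']

Because the choices are ranked by how often participants actually raised them, rather than by what happened to come up in a focus group or what the survey designer guessed, the resulting answer set is less likely to leave out the options respondents care about most.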

Robert Orben famously asked, “Do you ever get the feeling that the only reason we have elections is to find out if the polls were right?” In market research, unlike politics, election day never comes: our research and its tools are all we have for uncovering the true beliefs of the populations we study. Start off on the right foot by keeping instrument bias to a minimum!

— Mark


Mixology (Putting research into practice)

What you see is what you get.

The presidential election season seems to offer a nearly unlimited supply of research and data sampling predicaments. One of the hottest, of course, concerns the Democratic race and the question of delegates and how they’re chosen.

The Obama camp, ahead to date in the popular vote, would like the assignment of delegates (and therefore the nomination) to follow the popular vote.

The Clinton campaign, on the other hand, argues that the purpose of the primaries is to select the strongest candidate for the general election, and that this doesn’t necessarily correlate with the popular vote.

The answer to “Who’s right?” depends, of course, entirely on which side you’re on. Everyone seems to slice the data to their own advantage, and you never hear anybody argue against their own best interests.

This natural, human bias is important to keep in mind when developing any survey or research approach. Competing agendas — or simply well-meaning, but nonetheless biased points of view — can play a significant role in influencing the outcome.

 

Twist and Shout

Instead of conducting focus groups of Information Technology managers to inform the construction of a closed-ended Web survey, our client CompTIA retained CSR to conduct in-depth interviews worldwide (follow this link for more about the project).

By coding and quantifying the results, we were able to help our client create a closed-ended survey that was later administered to nearly 3,600 IT managers in 14 countries. Because the in-depth interviews were conducted by telephone, we were able to avoid the geographic constraints of focus groups and thus ensure that the selection of responses incorporated the feedback of managers in several countries.

The White Paper resulting from this research is now available for purchase from CompTIA. Follow this link for a summary.



“Whenever you find that you are on the side of the majority, it is time to pause and reflect.”

— Mark Twain,
Notebook, 1904





About Us
The Center for Strategy Research, Inc. (CSR) is a research firm. We combine open-ended questioning with our proprietary technology to create quantifiable data.

 

(617) 451-9500
Understanding What People Really Think

