The Center for Strategy Research, Inc. Vol 2 Issue 8   August 2006


The sun continues to shine and we hope you are able to take some time off and enjoy the summer while it lasts!

We continue this month with Part Two of “Dr. Palmerino’s (not too complicated) Summer Course in Sadistics.”

Please click here to send us your thoughts and comments.

Julie Brown

Mark Palmerino
Executive Vice President

Mitigating Factors

Last month’s issue of Research With a Twist addressed the counterintuitive, and all too often misunderstood, concept of sample size. Specifically, we explained why “for a given group, on a single question, as long as you correctly and randomly survey 30 or more people, you’ll get quite close to accurately estimating the underlying ‘truth’ that’s out there.”

Every bit of what we said is true; however, you may have noticed that in order to assert so plainly that 30 is the "magic number," we first had to go to considerable lengths to qualify the statement ("for a given group, on a single question, as long as you correctly and randomly survey…").

Unfortunately (or maybe, fortunately, for those of us in the research trade!), real life is rarely quite so tidy.

And with that in mind, today's newsletter (as promised) takes a closer look at the factors that affect the sample size actually needed. In other words, we'll explain why it's often necessary to survey many more than just 30 people to get the answers you want. (Keep in mind that here as well, there is some oversimplification at work in our eagerness to condense such a complicated topic into a 900-word article!)

In general, there are three primary factors that affect the necessary sample size:

Factor #1: To what level of accuracy are you trying to predict?

Market research is predictive; it’s not the same as asking everybody. And because we don’t ask everybody, there is always some level of statistical error hidden in our results. As a general rule, the more accuracy (i.e. less error) you want, the more people you need to sample.

Accuracy in statistics is typically expressed in terms of “confidence level” and “confidence interval.”

“Confidence level” refers to the likelihood that if you repeated your research, you’d get comparable results. When you hear people talk about 95% confidence, for example, what they’re really saying is that if you ran the same survey 100 times, the results would fall within the stated margin of error in 95 of them, and outside it in the other 5 (95/100 = 95%).

“Confidence interval” on the other hand (often referred to as “margin of error”), relates to the range within which you believe the true answer lies (i.e. the answer you’d get if you sampled 100% of the population).

Let’s consider a real-life example… When you watch the news and they tell you that the polls show Candidate X is expected to get 60% of the vote, they will also tell you the confidence level (e.g. “95% likelihood”) and the confidence interval (“This survey is accurate plus or minus 4%”). Applying our definitions above, that means that if the poll were conducted 100 times, the results would fall between 56% and 64% (i.e. “plus or minus 4%”) in 95 of those surveys.

The key idea is this: the more people you sample, the greater the accuracy of your results (i.e. higher confidence level; narrower confidence interval).
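To make that relationship concrete, here is a quick back-of-the-envelope sketch. (This is an illustrative calculation, not part of our original article; it uses the standard 95% z-value of 1.96 and the worst-case assumption of a 50/50 split.)

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate margin of error for a sample proportion.

    z=1.96 corresponds to a 95% confidence level; p=0.5 is the
    worst case (it gives the widest possible interval).
    """
    return z * math.sqrt(p * (1 - p) / n)

for n in (30, 100, 600, 2400):
    print(f"n={n:5d}  margin of error = +/-{margin_of_error(n):.1%}")
```

Note how the error shrinks slowly: to cut the margin of error in half, you must roughly quadruple the sample. A sample of about 600 yields the familiar "plus or minus 4%" from our polling example.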

Factor #2: How much of a difference are you trying to detect?

Simply put, the smaller the actual difference in the population, the more people you need to survey in order to draw an accurate conclusion.

Taking our election example again, if you’ve only got two candidates in the race, and after surveying 30 people you find that one has 70% of the support and the other 30%, you don’t need to talk to any more people to predict the election. The difference is so wide that more surveys won’t shed any more light on the question.

If, on the other hand, you survey 30 people and your results are split 53/47, you’ve got more work to do. At that point, the actual difference appears to be so small that only by surveying many more people can you make a prediction.
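A rough way to see this (again, an illustrative sketch rather than a formal hypothesis test) is to check whether the 95% confidence interval around the observed share still includes a 50/50 tie:

```python
import math

def conclusive(p_hat, n, z=1.96):
    """True if the 95% confidence interval around the observed
    share p_hat lies entirely above 50% -- i.e., the lead looks real."""
    moe = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - moe > 0.5

print(conclusive(0.70, 30))    # True: a 70/30 lead shows up even at n=30
print(conclusive(0.53, 30))    # False: at n=30, 53/47 still straddles 50%
print(conclusive(0.53, 1100))  # True: the narrow lead needs roughly n >= 1100
```

The narrower the real gap, the larger the sample required before the interval stops straddling the tie.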

It’s worth noting as well that the sample of people selected must always be “representative and random.” For example, if the 30 people surveyed all came from the same state, or, if they were all people you knew, then the sample would be neither representative nor random. At that point, the number of people being surveyed would be irrelevant, since there would be an inherent bias built into the process from the start.

Factor #3: How many subgroups are involved?

When we say that 30 is the magic number, we’re talking about surveying a given population. If, as companies often do, you want to subdivide the population and make additional, statistically valid statements about those subgroups (e.g. “We surveyed our customers and this is what the men in subgroup #1 think…”), each subgroup will need its own minimum of 30.

The more subgroups you have, the more people you will need.
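There is a hidden multiplier here: a plain random sample yields respondents in proportion to each subgroup's share of the population, so the smallest subgroup drives the total. A small sketch (the four subgroup shares below are hypothetical, purely for illustration):

```python
import math

def total_sample_needed(subgroup_shares, min_per_group=30):
    """Minimum random sample size so that each subgroup, on average,
    contributes at least `min_per_group` respondents."""
    smallest = min(subgroup_shares)
    return math.ceil(min_per_group / smallest)

# Four subgroups making up 40%, 30%, 20%, and 10% of customers:
print(total_sample_needed([0.40, 0.30, 0.20, 0.10]))  # 300
```

To get 30 respondents from a subgroup that is only 10% of the population, you need a random sample of about 300 overall (or you must deliberately oversample that subgroup).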

Whew… Is the summer heat getting to me, or has all that “sadistics” talk just gotten me sweating?

Either way, to ensure success on your next market research study, make sure you’ve considered questions of accuracy, the size of the differences involved, and the subgroups needed before getting underway. As we’ve explained above, all of these factors can have a significant impact on the sample size you’ll ultimately need.

— Mark

A Presentation With a Twist: Working with and presenting data can be a challenge, and yes, we admit it, even sometimes a little boring. It can be brought to life, however.

If you have a few minutes, we guarantee you’ll enjoy a humorous and instructive video from Hans Rosling, professor of international health at Sweden’s world-renowned Karolinska Institute, and founder of Gapminder, a non-profit that brings vital global data to life.

Follow this link to watch!

Click here to share this newsletter with a colleague.

Mixology (Putting research into practice)

Budget has nothing to do with the statistical basis of market research, and in classroom discussions of research methodology, it is rarely mentioned. In the real world, however, tight budgets often present practical obstacles on the way to achieving research objectives.

For example, you may have 1,000 clients and an interest in better understanding how they feel about your products and services. Rather than treating them all as equally valuable and conducting research that pulls in everyone’s point of view, however, you may be better off first singling out the most important clients in the group.

In other words, since the 80/20 rule is often at work (i.e. 20% of your clients represent 80% of your profits), the best use of your research dollar may be to do in-depth interviews with individuals who are representative of your 200 “best clients,” as opposed to more cursory surveys of a larger group culled from the entire client population.

The critical decision-factor — as in all research — is to first determine what it is you need to know. Only with that knowledge in hand can you make informed research design decisions.


Twist and Shout


We’re delighted to announce that Jennifer Lacy is joining us this month as a Client Relations Director, responsible for developing new business and working closely with CSR clients to design and deliver the quality research results for which we are known.

Those of you with very long memories may remember her, as she worked for CSR several years ago. Jennifer began her career in research with us as an Interviewer and Coder, then was promoted to Project Manager.

Most recently, she worked in The New York Times advertising department, initiating and designing strategic and tactical research studies to serve the needs of advertising teams across 38 industries. Before that, she worked as a Research Director specializing in financial services for Total Research Corporation (now Harris Interactive).

Jennifer has over ten years’ experience as a senior leader on both the client and supplier sides of market research. She is ABD (all but dissertation) in political science at Boston College, has a B.A. from Rutgers University, and can quote Lincoln and Shakespeare with the best of them.

“Statistics: The only science that enables different experts using the same figures to draw different conclusions.”

— Evan Esar

Enter your email here to subscribe to “Research with a Twist”

Problems? Click here to send us an email with your request.
About Us
The Center for Strategy Research, Inc. (CSR) is a research firm. We combine open-ended questioning with our proprietary technology to create quantifiable data.


(617) 451-9500
Understanding What People Really Think
