CSR’s President, Julie Brown, will be attending the LIMRA Marketing and Research conference in Orlando June 1st through the 3rd. Just reply to this email if you’d like to plan coffee, cocktails, or a wine bath together!
There are three kinds of lies: lies, damned lies and statistics.
– Mark Twain
This month’s edition of Research with a Twist, titled “The P-Hacks of Life,” was inspired by the new late-night comedy phenom, the British-born John Oliver.
The P-Hacks of Life
If I had to live in a different country for some reason (looking at you, Donald Trump), I would pick England. Dry as dust humor, the Queen’s English, and civilized teatime. I’d be as thrilled as Prince Charles at a game of footie to live there.
Maybe that’s why I love the cheeky John Oliver, host of the HBO show “Last Week Tonight” and former writer and actor on “The Daily Show,” so deeply. I think of him as our new Jon Stewart, whom I miss quite a bit during this election season, but with that worldly, European point of view (complete with the unnecessary “u”: rumour, glamour).
John Oliver’s recent takedown of the way in which morning talk shows in the U.S. mistreat scientific findings was a case in point. It’s well worth watching the entire segment if you haven’t already. From badly misinterpreting results of scientific studies to completely ignoring methodology, he cited numerous examples to demonstrate that it’s become common practice to butcher the objective, narrowly defined results that scientists work so hard to attain.
When I saw this segment of “Last Week Tonight”, I thought about market research (of course). While we researchers aren’t scientists, we do design and communicate about research for a living. Here are some thoughts on his commentary regarding the rubbish that’s being passed along as research these days:
P-hacking is science-speak for “cherry picking.” In formal scientific studies, scientists first put forth a hypothesis, which the research study then supports or refutes. This practice promotes objectivity by setting parameters and maintaining them regardless of what outcomes the researcher sees. Collecting data and testing for correlations among all of the variables without the initial hypothesis in mind, and/or re-setting the original testing parameters, is called “p-hacking” (named after the p-value that only stat-heads truly understand) and is considered bad behaviour, scientifically (and British-ally) speaking.
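For the stat-heads among you, the danger is easy to demonstrate. Here’s a minimal sketch (using only Python’s standard library) of what happens when you test lots of variables with no hypothesis in mind: even when nothing real is going on, roughly 5% of tests will come up “significant” at p < 0.05 by pure chance. The 100-variable setup and the normal-approximation z-test are our own illustrative assumptions, not anything from the Oliver segment.

```python
import math
import random

def z_test_p(successes, n, p0=0.5):
    """Two-sided p-value for an observed proportion vs. p0,
    using the normal approximation to the binomial."""
    se = math.sqrt(p0 * (1 - p0) / n)
    z = (successes / n - p0) / se
    # Normal CDF built from math.erf
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(42)
n, trials = 200, 100  # 100 hypothetical "variables", none with a real effect
p_values = [
    z_test_p(sum(random.random() < 0.5 for _ in range(n)), n)
    for _ in range(trials)
]
false_hits = sum(p < 0.05 for p in p_values)
print(f"{false_hits} of {trials} null tests look 'significant' at p < 0.05")
```

A p-hacker reports only those lucky hits as headlines; an honest researcher reports that 100 tests were run and that a handful of chance “findings” is exactly what the math predicts.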
As John Oliver so brilliantly illustrated, morning talk shows are serial “p-hackers” – pulling out data nuggets that will titillate the greatest number of viewers without reference to the initial hypothesis, study design or context of the research. One such “p-hacking” incident was the dodgy headline, picked up by many media outlets, announcing that scientists have determined that “a glass of red wine is equivalent to an hour at the gym.”
If you google red wine and gym, you’ll see many, many stories that repeat this scientific “finding.” Meanwhile, you’d have to look quite a bit more closely to find an interview with the lead scientist on the study, who said, “We didn’t use any red wine in our study nor did we recommend not going to the gym.”
The study did conclude that resveratrol, a compound found in red wine, could help maximize exercise benefits for people with restricted exercise capacity, like heart failure patients. However, “to get the same amount that we’re giving patients or rodents you’d have to drink anywhere from 100 to 1,000 bottles a day.”
While of course we agree that getting snackered is far more fun than going to the gym, there’s nothing scientific about doing this. And just to make it clear, we REALLY don’t recommend drinking 100 to 1,000 bottles of wine per day. Swimming in it, maybe (you’re welcome).
Just the p-hacks, ma’am (NOT). P-hacking when conducting exploratory research, which is the bulk of what we do in qualitative market research, is less dishonourable. The goal of exploratory studies is to identify what might be happening – to develop hypotheses rather than to develop scientifically valid theories.
A form of cherry picking that we do see a lot is the tendency, particularly among non-researchers, to home in on one piece of data without keeping the rest of the data from the same study in mind. For example, in a recent research initiative among benefit administrators, we found that the use of our client’s website was declining among customers. During our presentation of results, many in the room seized upon this as a sign that the website was failing customers, and that something was wrong. If all we wanted was a dramatic headline (website use declines!) and a clear action item (improve the website!), we would have left the discussion there.
Instead, digging deeper into the data, we found that 1) those who used the website less frequently were just as satisfied as they’d been with the overall relationship when they used it more often, and that 2) alternative reasons (none threatening to our client) were reducing the need for the website. Yes, it was a fact (and a p-hack) that use of the website declined, but it was not a fact that this was a problem.
Face the p-hacks: Researchers are the guardians of data. What John Oliver proved is that data can be used in ways that undermine its validity and reliability. As people whose careers and livelihood depend on the quality of the information we collect, this affects us deeply. The idea that research can mean whatever someone wants it to could put us all out of jobs if taken too far.
Personally, I love the whole research process – the journey toward understanding, and figuring out the best way to get there. Unfortunately, we researchers often have to take the lead in using and communicating about research responsibly. Or, as the great Brit Winston Churchill put it, “The price of greatness is responsibility.” ☺
Perhaps the price of our particular greatness is that we have to be teachers in addition to being researchers. Many of the best researchers I know (which includes many of you!) relish, and are infinitely patient about, educating non-researchers about the importance of conducting research correctly. While this adds an “unsung” dimension to our jobs, it might also help us keep our jobs in the long run!
Here’s the Twist: Cherry picking is a fine pastime, but not when it comes to communicating research results. As researchers, we have to take responsibility, to the extent possible, for ensuring that data are communicated accurately. Sometimes this means that we have to be teachers, too. Those are the facts of life!
Mixology (Putting Research into Practice)
Here are three recommendations to help avoid cherry picking when reporting the results of research studies:
Easy-to-understand methodology section: As we mentioned in our last newsletter, explaining methodology in page after page of text is a turn-off. However, displaying design and logistics info in as graphic and user-friendly a format as possible will help readers/users remember it and hopefully pass it along if and when the data are shared.
Base sizes on every page: Whether 10 or 1,000 people answer questions in a certain way is critical to assigning significance. Because of this, CSR identifies the number of research participants answering each question when reporting on research. Even if a study has 1,000 overall participants, branching question paths can reduce the number of respondents to a much smaller group – we never want our data to be misleading because sample size was not clear.
Themes, not single data points: CSR’s preferred approach to communicating findings involves organizing reports so that multiple pieces of data contribute to “themes”. This ensures that no single data point dominates, and that the larger context of the study drives the narrative.
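The “base sizes on every page” point above can be made concrete with a quick margin-of-error calculation. Here’s a small sketch (the 95% confidence level and the 50% observed proportion are illustrative assumptions) showing why an answer from 10 respondents deserves far less confidence than the same answer from 1,000:

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """Approximate 95% margin of error for an observed proportion p_hat
    from a simple random sample of size n (normal approximation)."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

for n in (10, 1000):
    moe = margin_of_error(0.5, n)
    print(f"n={n}: 50% +/- {moe:.1%}")
# n=10 gives roughly +/- 31 points; n=1000 gives roughly +/- 3 points
```

With 10 respondents, a “50% agree” finding could plausibly be anywhere from about 19% to 81% – which is exactly why we flag the base size every time.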
The Center for Strategy Research, Inc. (CSR) is a research firm. The “Twist” to what we offer is this: We combine open-ended questioning with our proprietary technology to create quantifiable data. As a result, our clients gain more actionable and valuable insights from their research efforts.
The Center for Strategy Research, Inc. 101 Federal Street · Suite 1900 Boston, MA 02110