The Center for Strategy Research, Inc. Vol 2 Issue 2   February 2006



Welcome!

Football season may be over (at least here in America!), but the lessons of this year’s Super Bowl — as they apply to conducting effective research projects — still remain.

Read on for our thoughts regarding how to involve all members of a research project, particularly those who arrive late on the scene.

As always, your thoughts and comments are appreciated.


Julie Brown
President

Mark Palmerino
Executive Vice President



In Research, As In Football, Watching Doesn’t Necessarily Mean Understanding

I must admit that, unlike many of my fellow New Englanders, I’m not a huge football fan. I do, however, enjoy the Super Bowl, with all its hype, overblown advertisements and sheer grandiosity.

Watching as an occasional fan, however, does pose one big problem: I don’t know all the subtleties of the game.

For example, when “pass interference” is called, and the replay is shown in slow motion over and over again, I sit there wondering what exactly I’m supposed to be looking for. Or, when a player gets tackled one yard short of a touchdown, and the coaches start agonizing over where precisely the ball should be placed, I can’t help but question why it’s all that important.

Why can’t the people who know football (i.e., the announcers) teach me what I need to know to be a better “user” of the football experience?

I posed this question to my brother during an earlier playoff game — the one where Doug Flutie performed “the first drop kick in the NFL since football was created” (or whatever). After reminding me that really, no one is as ignorant about football as I, he said, “Well, of course, they assume that people who watch the game know what football is about.”

I mention this today because in conducting research projects, we often find a similar situation: New “viewers” come on the scene after the initial kickoff, and those already on the project assume (or at least behave as if they assume) that the recent arrivals know what the project is all about. It’s a bad assumption.

Whether due to job changes, transfers, reorganizations, or the like, the probability of new arrivals — particularly with ongoing projects (such as annual satisfaction or attrition studies) or multi-stage studies lasting six months or more — is quite high. And, just as I was not up to speed as a football fan, these folks would also benefit from tools and information that are geared towards bringing them into the loop.

The fact is, staff turnover is so common that we now consider planning for it a key component of project success. Here’s what we recommend:

  1. Create a project portfolio. Early on in the project, team members and research vendors should collect all project data — objectives, costs, timelines, accomplishments, resources and risks — in a central repository. This allows everyone involved to easily access all information and to reallocate resources, adjust priorities and modify timelines when turnover occurs.
  2. Document problems and solutions. We have had the good fortune of working on several large annual research projects. In the first year of one such project, the client suggested that an “Issues Document” be created that would capture any project problems and the decisions that had been made to address them in one place. This has proven invaluable over time, not only to our client but to us as well. Of course, research companies experience turnover too, and it’s in any research firm’s best interest to ensure a seamless transition when it occurs.
  3. Take time for a full debriefing. Due to the speed and manner in which personnel changes take place, there often isn’t a chance for a full debriefing between incoming and outgoing team members about a research effort. And while we’re very grateful for the conscientiousness of those with whom we work — there’s always an introduction and a warm “handoff” — this isn’t the same as a “full debriefing.”

    The idea is to bring the team and research vendor together to recap the original objectives and how they may have changed during the project so far; provide background on key decisions made to date, particularly budgetary ones, and the reasons for them; and reach agreement on upcoming expectations. Setting a thorough context for new research team members ensures that they come up to speed quickly and that the project stays on track.

In the same way that I’d get more out of my football watching experience if John Madden would explain why it matters where the ball is when it goes out of bounds, new research team members will get more from their experience if the time is taken to document important decisions and to share detailed project background information.

— Julie


Mixology (Putting research into practice)

Getting the wrong answer time and again, or…

There are two concepts that are critical for the successful design and execution of good research. One is reliability; the other is validity. And while both are necessary, validity is both more important and less understood.

Let’s tackle the easier one first. Reliability is simply the ability to get the same answer when measuring something at two different times or by two different people. An example of a fairly reliable measure would be using a metal ruler to measure the length of a piece of paper. If I did this at 11AM and you did it at 3PM, it’s likely that we’d both come up with the same (or extremely close) answer. Measuring the length of a piece of paper with a metal ruler is very reliable.

Validity, on the other hand, refers to whether the measurement device itself is actually measuring what one intends to measure.

In the example above, it would be less valid (perhaps even absurd) to use a ruler to measure someone’s intelligence (e.g., by measuring the distance between a person’s shoulder blades). Although it might be reliable (i.e., you and I could reach the same conclusion at different times), we would still be getting answers that were just plain wrong.
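To make the distinction concrete, here is a minimal sketch in Python, using invented numbers, of how one might quantify each idea: reliability as the agreement (Pearson correlation) between two measurement occasions, and validity as the agreement between a measure and an accepted indicator of what it claims to capture. The data are illustrative assumptions, not actual results.

    # Purely illustrative sketch (all numbers invented) of the distinction:
    # reliability = do two measurement occasions agree with each other?
    # validity    = does the measure relate to what it is supposed to capture?
    from statistics import correlation  # Pearson's r; requires Python 3.10+

    # Ten sheets of paper measured with a metal ruler at 11AM and again at 3PM (inches).
    ruler_11am = [11.0, 11.0, 10.9, 11.1, 11.0, 11.2, 10.9, 11.0, 11.1, 11.0]
    ruler_3pm  = [11.0, 11.0, 10.9, 11.1, 11.1, 11.2, 10.9, 11.0, 11.1, 11.0]

    # The absurd "shoulder-blade" measure of intelligence (cm), alongside an
    # accepted intelligence score for the same ten (hypothetical) people.
    shoulder_cm = [38, 41, 36, 44, 39, 42, 37, 40, 43, 38]
    iq_score    = [110, 115, 100, 100, 90, 95, 110, 120, 110, 100]

    # Close to 1: the two ruler readings agree, so the measure is reliable.
    print("Reliability (11AM vs. 3PM ruler):",
          round(correlation(ruler_11am, ruler_3pm), 2))

    # Near zero: shoulder width tells us nothing about intelligence, so the
    # measure is not valid, however reliably we can take it.
    print("Validity (shoulder width vs. intelligence):",
          round(correlation(shoulder_cm, iq_score), 2))

Run under Python 3.10 or later, the first correlation comes out close to 1 and the second near zero, which is exactly the ruler story: highly repeatable, yet telling us nothing about intelligence.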

In practice, this plays out in a number of ways. For example, while it is relatively easy to construct reliable closed-ended questions, it is much more difficult to be sure that the answers we obtain are valid. (It has always struck me how much time and effort academics spend on scale construction compared to how quickly researchers in the business world will put together a closed-ended survey and field it. Academics spend months, even years, testing measurement devices to ensure that they are both reliable and valid.)

Interestingly, it is easier to come up with valid open-ended questions. One reason for this is that we do not constrain what the respondent can tell us. We’ve all had the experience of responding to a closed-ended question with the frustration of “…but none of those options fully captures my thoughts, feelings or motivations.” Often, the only course of action available to respondents is to pick an option that is at best “not quite right” or at worst misleading.

Because of the above considerations, we feel it is more important to make sure that the results we obtain from research are as valid as possible. In other words, it is better to have valid answers, even if they are not quite as reliable, than to have very reliable answers that are wrong or misleading. In this respect, we would rather talk to 100 or 200 people with an essentially open-ended approach (which we then quantify through content coding) than obtain answers from 600 or 1,200 respondents with a closed-ended instrument of questionable validity.
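As a rough illustration of what “quantify through content coding” can look like, here is a minimal sketch with invented responses and a hypothetical keyword codebook; in real projects the coding scheme is developed and applied by trained analysts rather than by simple keyword matching.

    # Illustrative sketch only: invented responses and a hypothetical keyword
    # codebook. In practice, content coding is done by trained analysts against
    # a carefully developed coding scheme, not by simple keyword matching.
    from collections import Counter

    responses = [
        "The reporting tools are clunky and the interface feels dated.",
        "Support took three days to get back to me.",
        "Pricing is fair, but I wish support answered faster.",
        "I mostly use the reporting features and they work well enough.",
    ]

    # Hypothetical codebook mapping keywords to broader themes.
    codebook = {
        "reporting": "Product functionality",
        "interface": "Product functionality",
        "support": "Service responsiveness",
        "pricing": "Price/value",
    }

    theme_counts = Counter()
    for response in responses:
        text = response.lower()
        # Code each response into themes, counting a theme once per response.
        themes = {theme for keyword, theme in codebook.items() if keyword in text}
        theme_counts.update(themes)

    # The open-ended answers are now quantifiable counts by theme.
    for theme, count in theme_counts.most_common():
        print(f"{theme}: mentioned in {count} of {len(responses)} responses")

The point is simply that open-ended answers, once coded, yield counts and percentages that can be reported just like closed-ended data.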

— Mark

 


Twist and Shout



CSR President Julie Brown will be speaking at the Second Annual Managing Retirement Income conference on Tuesday, February 28, in Cambridge, Massachusetts. For more information on the event, please click here.



“Nothing will ever be attempted if all possible objections must be first overcome.”

— Samuel Johnson





About Us
The Center for Strategy Research, Inc. (CSR) is a research firm. We combine open-ended questioning with our proprietary technology to create quantifiable data.

 

(617) 451-9500
Understanding What People Really Think

