Taking data at face value? I am not a fan. Caveat emptor, as the saying goes. It behooves all of us to look more deeply into the relevance of any statistics, not to mention something of the study design behind them.
Oh, I’m not suggesting we become social scientists or statisticians to determine if numbers offered up on the altar of proving a point are valid. But couldn’t we use our heads to consider the basics?
For example, common sense tells us that research comprising a few dozen college students is unlikely to extrapolate reliably to millions of older adults with families.
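For readers who like to see the numbers, here is a minimal sketch of why small samples wobble. It is purely illustrative – the 40 percent "true rate" and the sample sizes are hypothetical, invented for this example – but it shows how much a 30-person study can swing even when the sampling itself is perfectly fair.

# Purely hypothetical illustration: how much a small study's estimate
# can swing, even with perfectly fair sampling.
import random

random.seed(42)

TRUE_RATE = 0.40  # assumed "true" rate in the population (hypothetical)

def sample_estimate(n: int) -> float:
    """Simulate surveying n people; return the observed rate."""
    hits = sum(1 for _ in range(n) if random.random() < TRUE_RATE)
    return hits / n

# Run the same small study (n = 30) a thousand times over.
estimates = [sample_estimate(30) for _ in range(1000)]

print(f"true rate:   {TRUE_RATE:.0%}")
print(f"n=30 range:  {min(estimates):.0%} to {max(estimates):.0%}")
print(f"n=3000 once: {sample_estimate(3000):.0%}")

On a typical run, the 30-person estimates scatter widely around the true 40 percent, while a single 3,000-person sample lands within a couple of points of it.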
Then there are the myriad studies selling recommendations to eat this and drink that – or not.
Health Studies – Too Quick, Too Few, Too Many?
Coffee as “healthful” comes to mind immediately – once maligned, now approved; just be sure to read the fine print. And given the contradictions and the speed with which the media make pronouncements, I can’t help but feel that all health briefs in the news need to be taken with a grain of salt.
Better yet, skip the sodium and go with Two Aspirin and Text Me in the Morning.
For myself, I admit to a love-hate relationship with data. I’m a believer in it, but only insofar as it’s legitimate, applicable, and reported with conclusions set in a context that makes sense. Unfortunately, there are generally gaps in the data we so blithely toss about as we make sweeping statements about human experience. Consequently, I retain my healthy skepticism for the process, the results, and even more so, any interpretation that nags at me.
Research Design
I’ve written on the topic of interpreting data (and considering sources) before, irked by the prevalence of numbers cherry-picked and molded into any shape that suits one’s purpose. That is all the more reason to think for ourselves about surveys and studies – how they’re designed, their (various) agendas, who stands to gain, and… drum roll please… who foots the bill.
The New York Times offers an interesting and specific twist on this topic: “Psychology Research Control,” a piece on the challenges of reproducing results.
Dr. Sally L. Satel, scholar and lecturer in psychiatry at the Yale School of Medicine, writes:
… in a variety of fields, subtle differences in protocols between the original study and the replication attempt may cause discrepant findings; even little tweaks in research design could matter a lot.
Job Security: Publish or Perish
Dr. Satel goes on to clarify the real-world dilemma of the pressure to publish:
… a publish-or-perish world offers little reward for researchers who spend precious time reproducing their own work or that of others. This is a problem for many fields, but particularly worrisome for psychology.
So how often do we accept conclusions drawn from findings too modest to support them?
As a culture, we love the broad brushstroke: sweeping statements and expert pronouncements. But we don’t necessarily vet our experts any more than we reflect on the agenda behind the data, much less the results we are offered.
Hello, Reality Check?
I’m not claiming to be more objective than anyone else – though I like to think I try. But then I’m not in the business of being objective; rather, I am “reflective,” observing, reading, thinking – and asking questions.
Reproducible Results
Dr. Satel also states that “a failure to replicate is not confined to psychology.” She references
… Stanford biostatistician John P. A. Ioannidis… his much-discussed 2005 article “Why Most Published Research Findings Are False.” The cancer researchers C. Glenn Begley and Lee M. Ellis could replicate the findings of only 6 of 53 seminal publications from reputable oncology labs.
Six of 53 – a replication rate of roughly 11 percent. Given the potential scope of the problem, Dr. Satel mentions this, which I take to be something of a study on studies:
… last year a group of psychologists established the Reproducibility Project, which aims to replicate the first 30 studies published in three high-profile psychology journals in the year 2008.
And I chase my tail right back to my original point: I believe in research, I believe in data, I believe there are many excellent sources that work diligently to move us forward in our understanding of ourselves, our minds, our bodies, our social systems.
Question Everything
I consider it sensible to question assumptions – not to mention people, institutions, and findings – in any and all arenas. There are always agendas (some evident, some hidden). There are always constraints (some obvious, others not). And there will always be human error, even when intentions are above reproach.
At the very least, we, the public, need to take some responsibility in how we respond to what we hear and read.
As for a study on studies, why not?
Then again, as with a great deal of research, it is more than the design and interpretation that concerns me. It’s who’s underwriting the expense.
teamgloria says
data is for people who fear instinctual genius.
BigLittleWolf says
Ah, tg… what a wonderfully rebellious perspective.
Shelley says
My first question is, who funded the study? I suspect it’s all too complicated for most folks to figure out, but I do wish we were a bit more skeptical about ‘studies’. If you’ve not read it, the book Bad Science by Ben Goldacre is excellent at illuminating some of the key questions. He also has a website by that name.
BigLittleWolf says
I have not read it. Thank you for the recommendation, Shelley. I will check out the website.
Lisa Fischer says
Call me paranoid, but I always wonder if “data” has been manipulated to reinforce a particular agenda. Especially when it comes from an organization that would definitely benefit from the information. Guess I’m just a jaded skeptic in my old age.
BigLittleWolf says
I don’t call it paranoid, Lisa. I call it smart.
Linda D'Ae-Smith says
Yep, spot on, I agree!
Scott Behson says
In my day job, I’m an academic researcher.
I can tell you that most scientific and social science research is conducted properly and rigorously. And most academics understand how to resist over-interpreting the findings of any single study.
… but once the media gets hold of one paper from a university researcher (whose university has a good PR department), they will extrapolate to the high heavens.
This is why we get a new “miracle food” every month – because the media finds a study suggesting that some chemical is good for nutrition and notes that, say, pomegranates are particularly high in that chemical. All of a sudden, pomegranate stuff explodes in the media and quickly takes over our supermarket shelves (and then acai berries, etc.).
All this despite the nutrition researcher never making the strong claim in the first place, and fully understanding that their study is one data point in a much larger pool of information.
In short, “journalists” don’t understand how science works, and we are all dumber for it.
(Notice that this is a hot-button topic for me?)
D. A. Wolf says
I’m glad it’s a hot button, Scott. It is frequently the interpretation of data that drives us batty. We don’t know what to believe.