Taking data at face value? I am not a fan. Caveat emptor, as the saying goes. It behooves all of us to look more deeply into the relevance of any statistics, not to mention something of the study design behind them.
For example, common sense tells us that research comprising a few dozen college students is unlikely to extrapolate reliably to millions of older adults with families.
Then there are the myriad studies selling recommendations to eat this and drink that – or not.
Health Studies – Too Quick, Too Few, Too Many?
Coffee as “healthful” comes to mind immediately – once maligned, now approved; just be sure to read the fine print. And, given the contradictions and speed with which media make pronouncements, I can’t help but feel that all health briefs in the news need to be taken with a grain of salt.
Better yet, skip the sodium and go with Two Aspirin and Text Me in the Morning.
For myself, I admit to a love-hate relationship with data. I’m a believer in it, but only insofar as it’s legitimate, applicable, and reported in a context that makes sense. Unfortunately, there are generally gaps in the data we so blithely toss about as we make sweeping statements about human experience. Consequently, I retain my healthy skepticism for the process, the results, and even more so, any interpretation that nags at me.
I’ve written on the topic of interpreting data (and considering sources) before, petulant at the prevalence of numbers cherry-picked and molded into any shape to suit one’s purpose. This is all the more reason that we need to think for ourselves about surveys and studies – how they’re designed, their (various) agendas, who stands to gain, and… drum roll please… who foots the bill.
The New York Times features an interesting and specific twist on this topic. “Psychology Research Control” relates the challenges of reproducing results.
Dr. Sally L. Satel, scholar and lecturer in psychiatry at the Yale School of Medicine, writes:
… in a variety of fields, subtle differences in protocols between the original study and the replication attempt may cause discrepant findings; even little tweaks in research design could matter a lot.
Job Security: Publish or Perish
Dr. Satel goes on to clarify the real-world dilemma of pressure to publish:
… a publish-or-perish world offers little reward for researchers who spend precious time reproducing their own work or that of others. This is a problem for many fields, but particularly worrisome for psychology.
So how often do we accept conclusions drawn from modest findings that may be insufficient to support them?
As a culture, we love the broad brushstroke: sweeping statements and expert pronouncements. But we don’t necessarily vet our experts any more than we reflect on the agenda behind the data, much less the results we are offered.
Hello, Reality Check?
I’m not claiming to be more objective than anyone else – though I like to think I try. But then I’m not in the business of being objective; rather, I am “reflective,” observing, reading, thinking – and asking questions.
Dr. Satel also states that “a failure to replicate is not confined to psychology.” She references
… Stanford biostatistician John P. A. Ioannidis… his much-discussed 2005 article “Why Most Published Research Findings Are False.” The cancer researchers C. Glenn Begley and Lee M. Ellis could replicate the findings of only 6 of 53 seminal publications from reputable oncology labs.
Given the potential scope of the problem, Dr. Satel mentions this, which I take to be something of a study on studies:
… last year a group of psychologists established the Reproducibility Project, which aims to replicate the first 30 studies published in three high-profile psychology journals in the year 2008.
And I chase my tail right back to my original point: I believe in research, I believe in data, I believe there are many excellent sources that work diligently to move us forward in our understanding of ourselves, our minds, our bodies, our social systems.
I consider it sensible to question assumptions – not to mention people, institutions, and findings – in any and all arenas. There are always agendas (some evident, some hidden). There are always constraints (some obvious, others not). There will always be human error, even when intentions are above reproach.
At the very least, we, the public, need to take some responsibility in how we respond to what we hear and read.
As for a study on studies, why not?
Then again, as with a great deal of research, it is more than the design and interpretation that concerns me. It’s who’s underwriting the expense.