Bunk Science

If Peer Review Is Working, Why All the Retractions?

British author and retired psychiatrist Theodore Dalrymple recently flagged an article in the prestigious medical journal The Lancet, in which researchers calculated that an average of 92 minutes per week of exercise reduced subjects' "all-cause rate of mortality" by 14 percent. They also claimed that "every additional 15 minutes of daily exercise beyond the minimum of 15 minutes per day further reduced all-cause mortality by 4 percent."

Later, The Lancet received a letter pointing out that, if the researchers' findings were correct, a man who exercised for six hours every day would reduce his mortality rate to zero, thus becoming immortal. Dalrymple comments, "In my opinion, life would not actually go on forever; it would merely seem as if it did, in the sense of being boring and pointless."1
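
To see where the letter-writer's jest comes from, take the paper's two figures at face value and extrapolate linearly, a back-of-the-envelope reading rather than anything in the study's own statistical model. Six hours a day is 360 minutes, or (360 - 15)/15 = 23 additional 15-minute blocks beyond the 15-minute minimum, so the claimed reduction in mortality would be 14 percent + (23 × 4 percent) = 106 percent. That sails past 100 percent; on this naive reading, the mortality rate hits zero a little short of the six-hour mark.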

But how did this blooper get past peer review in the first place?

Peer review—colleagues' prior approval of journal papers—is, say science popularizers, the "gold standard" of science.2 Actually, today it is more like a troubled currency, of fluctuating and generally diminishing value.

Bias Toward the Positive

One well-recognized problem is a deforming bias toward publishing positive findings. As Daniel Sarewitz put it in Nature, when positive findings are published, "scientists are rewarded both intellectually and professionally, science administrators are empowered and the public desire for a better world is answered." He also notes that "the lack of incentives to report negative results, replicate experiments or recognize inconsistencies, ambiguities and uncertainties is widely appreciated—but the necessary cultural change is incredibly difficult to achieve."3

It is difficult to achieve because disproving a discipline's nostrums (through failed replication) is a high-risk activity. Take, for instance, a recent replication attempt that failed to support the classic 1948 Bateman study underpinning Darwinian sexual selection (males benefit from promiscuity, females from monogamy).4 Will Darwinists thank the authors for this news, when their theory is under fire elsewhere?

Conformism is more rewarding. Professor of medicine Fred Southwick recently complained in The Scientist that "many who succeed in advancing to leadership positions in academia have been cautious, making few enemies and stirring little controversy. But such a strategy fails to generate the insights that drive scientific fields of research forward."5

It's no surprise, then, that in PLoS Medicine, John P. A. Ioannidis was bold enough to explain, in 2005, "Why Most Published Research Findings Are False." He posited that, "for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias"6—what dissident biologist Lynn Margulis (1938–2011) described to science writer Suzan Mazur in 2008 as a "cycle of submission."7

Retractions Up, Repeatability Down

Meanwhile, plagiarism is said to have "skyrocketed" over the past decade.8 Retractions have boomed, too. As science writer Carl Zimmer tells us (2012), "the journal Nature reported that published retractions had increased tenfold over the past decade, while the number of published papers had increased by just 44 percent."9

Daniel Kennefick, a cosmologist at the University of Arkansas, reveals that "many authors are nowadays determined to achieve publication for publication's sake, in an effort to secure an academic position, and are not particularly swayed by the argument that it is in their own interests not to publish an incorrect article."10 One factor in this attitude may be that repeatability in medical studies falls over time; Jonah Lehrer told us in The New Yorker in 2010 that "it's as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable."11

C. Glenn Begley, former head of cancer research at the biotechnology firm Amgen, offers a sobering example: when Amgen redid the published experiments behind 53 "landmark" cancer studies, it could not replicate the original results in 47 of them. As reported in a Reuters news article:

Part way through his project to reproduce promising studies, Begley met for breakfast at a cancer conference with the lead scientist of one of the problematic studies.

"We went through the paper line by line, figure by figure," said Begley. "I explained that we re-did their experiment 50 times and never got their result. He said they'd done it six times and got this result once, but put it in the paper because it made the best story. It's very disillusioning."12

Not only that, but some study authors made the Amgen scientists sign a confidentiality agreement preventing them from disclosing data that contradicted the study authors' original findings. So we may never know which ones couldn't be replicated.

Peer review is not much use against problems at this level. These people are the peers.

Sexy Findings for the Media

The social sciences have been rocked by some juicy scandals in recent years. Diederik Stapel, a recently suspended professor at Tilburg University in the Netherlands, was famous for edgy findings published in prestigious journals and widely reported in the press. For example, as James Barham explains at the blog site TheBestSchools.org:

The first paper upended a gender stereotype (alpha-female politicos philander, too?!), while the second linked the physical world to the psychological one in a striking manner (a messy desk leads to racist thoughts!?). Both received extensive news coverage.13

Trouble was, Stapel had made up or manipulated the data behind dozens of papers over nearly a decade.14

Then there's Harvard researcher Marc D. Hauser, author of Moral Minds and other books, who made dramatic, unsupported claims about monkeys' mental abilities, and was forced to resign in 2011. Darwinian philosopher Michael Ruse told the Chronicle of Higher Education that the case "really makes me mad." Detractors, he worries, "seize on issues that supposedly discredit evolution and parade them publicly as the norm and the reason to reject modern science." Thus, "one man's mistakes rebound on every evolutionist."15

But is it really just "one man's mistakes"? In one study publicized by the field's own professional association, half the social scientists polled admitted reporting only desired results.16 A New York Times article admits that "self-serving statistical sloppiness" is common.17 Ed Yong noted in an article in Nature that "it has become common practice, for example, to tweak experimental designs in ways that practically guarantee positive results."18 And when statistician Theodore Sterling analyzed four major psychology journals in 1995, he found that 97 percent of the studies published in them reported positive results. Interestingly, that was the same percentage he found when he first analyzed the journals in 1959.19

Some sources attribute the unusually high rate of questionable findings in the social sciences to "physics envy." As philosopher Gary Gutting explains, "most social science research falls far short of the natural sciences' standard of controlled experiments."20 Political scientists Kevin A. Clarke and David M. Primo urge social scientists to just get over it and "embrace the fact that they [the social sciences] are mature disciplines with no need to emulate other sciences."21

But are they mature? Methodological expert Eric-Jan Wagenmakers told the Chronicle of Higher Education that psychology "has become addicted to surprising, counterintuitive findings that catch the news media's eye, and that trend is warping the field."22 Actually, Mr. Wagenmakers is mistaken on one key point: Stapel's and Hauser's misrepresentations were popular because they did confirm the worldview of the discipline, and of the mainstream media as well, who like to think that Top People all philander, flyover country is racist, and monkeys 'r' us.

Suggested Fixes

Various fixes for the problem are on offer. In the Chronicle of Higher Education, Tom Bartlett reports on an effort called the Reproducibility Project, part of the Open Science Framework, which focuses on the reproducibility of published findings. "This," Bartlett writes, "is a polite way of saying, 'We want to see how much of what gets published turns out to be bunk.'"23 But what if the problems run deeper than exposing bunk?

For example, Dutch psychologist Jelte M. Wicherts notes that most social psychologists "simply fail to document their data in a way that allows others to quickly and easily check their work."24 He advocates sharing data, which sounds like a fine idea, but if this failure has gone unaddressed for decades, how do we know what data isn't bunk?

More usefully, astrophysics postdoc Andrew Pontzen asks us to look at what has changed. "Peer-review offered a quality-control filter in an age where each printed page cost a significant amount of money," he points out, but today, physicists download papers from arxiv.org irrespective of their peer-review status. And they form their own opinions, without the gatekeeper.

So Pontzen suggests an alternative, better suited to the internet age:

Imagine a future where the role of a journal editor is to curate a comment stream for each paper; where the content of the paper is the preserve of the authors, but is followed by short responses from named referees, opening a discussion which anyone can contribute to. Everyone could access one or more expert views on the new work; that's a luxury currently only available to those inside privileged institutions.25

At Physics World, Stefan Thurner adds a "scouts" model, where journal editors could trawl the preprint servers for suitable papers: "Papers that no-one wants to publish remain on the server and are open to everyone—but without the 'prestigious' quality stamp of a journal."26 Thus, the information in ignored papers would still be available, eliminating the problem of absolute censorship.

These suggested fixes recognize the reality that in today's science publishing, "getting published" is much less the issue than gaining attention and credibility in a trillion-word stream. But there may be a deeper, underlying issue as well.

Fudging Truth for the Cause

One wonders why the "skeptic" Michael Shermer isn't embarrassed by his praise of peer review in Scientific American, "The Believing Brain: Why Science Is the Only Way Out of Belief-Dependent Realism" (2011).27 He sounds so astonishingly naive. As is so often the case when a troubled currency's value is diminishing, the underlying crisis is philosophical.

Cognitive psychologist Steven Pinker insists, "Our brains are shaped for fitness, not for truth; sometimes the truth is adaptive, sometimes it is not."28 And Darwinian philosophers like Michael Ruse hold that ethics is an illusion.29 When scientists accept these accounts of ethics, a temptation arises: Materialistic atheism is supposed to accurately account for all events. All contrary results will eventually yield to that overriding fact. Then why not fudge a little in the short term, so that the triumph will not be delayed by present-day inconveniences such as stubborn disbelief? Well-known atheist science writer John Horgan explicitly endorses lying for science in such cases as the effort to fight global warming: "It's a war, and when people are waging war, they always lie for their cause."30

A Different Standard for Christians

Christians face this temptation, too. But for Christians, truth is a Person, a Person who never asked them to lie for him, but who, on the contrary, threatens to consign liars to the fire (Rev. 21:8), not to tenure.

Thus—to the extent that scientists accept materialistic atheism—we will likely see plenty of smoke, noise, and mirrors around reform, but no real reform. Real reform means deciding that ethics is not an illusion but a correct relationship with reality. And discovering truth is what a brain is fit for. Science used to be like that. •


Peer Review & ID Facts

Critics of intelligent design (ID) make a twofold argument based upon peer review. The first element is theoretical, holding that peer review is the be-all and end-all of truth. The second element is factual, charging that ID hasn't been published in any peer-reviewed journals. The beauty of the argument is that critics can conclude that ID should be ignored, rejected, mocked, and ridiculed without addressing anything ID says.

Denyse O'Leary powerfully deconstructs the critics' first argument by showing that the peer-review system often publishes claims that are false. But another flaw in the peer-review system is that it often rejects claims that are true.

Historian of science Juan Miguel Campanario has documented numerous instances in which top journals have rejected significant scientific papers. In one case, Nature rejected research that later earned the Nobel Prize for Physiology or Medicine.1 It's no wonder the U.S. Supreme Court stated that peer review "does not necessarily correlate with reliability, and in some instances well-grounded but innovative theories will not have been published."2

Regarding the second part of their argument, in Salvos 16 and 17 we covered multiple pro-ID peer-reviewed papers published by ID proponents in the mainstream scientific literature. In fact, last year the ID movement published its fiftieth peer-reviewed scientific paper.3

There may be good arguments against ID, but the claim that it hasn't been published in peer-reviewed journals isn't one of them. The argument fails on both theory and the facts.

—Casey Luskin

Endnotes
1. Juan Miguel Campanario, "On Influential Books and Journal Articles Initially Rejected Because of Negative Referees' Evaluations," Science Communication, vol. 16(3):304–325 (March 1995); Juan Miguel Campanario, "Not in Our Nature," Nature, vol. 361:488 (Feb. 11, 1993).
2. Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579, 593–594 (1993).
3. See www.discovery.org/a/2640.


Endnotes
1. Theodore Dalrymple, "On Exercise and Laugh Expectancy," PJ Media (5/21/12): http://tinyurl.com/7h9cwvz.
2. Editorial, The New Atlantis (Winter 2004): http://tinyurl.com/7p6d4j4.
3. Daniel Sarewitz, "Beware the creeping cracks of bias," Nature (5/9/12): http://tinyurl.com/79x2qga.
4. Kim DeRose, "UCLA biologists reveal potential 'fatal flaw' in iconic sexual selection study," Phys.org (6/26/12): http://tinyurl.com/8626f2n.
5. Fred Southwick, "Opinion: Academia Suppresses Creativity," The Scientist (5/9/12): http://tinyurl.com/7e7rl9y.
6. John P. A. Ioannidis, "Why Most Published Research Findings Are False," PLoS Medicine 2(8): e124 (8/30/05): doi:10.1371/journal.pmed.0020124.
7. Suzan Mazur, "Margulis: Peer Review or 'Cycle of Submission'?" Scoop (1/5/10): http://tinyurl.com/cg2yrlv.
8. Bob Grant, "Misconduct on the Rise," The Scientist (5/21/12): http://tinyurl.com/6r85583.
9. Carl Zimmer, "A Sharp Rise in Retractions Prompts Calls for Reform," New York Times (4/16/12): http://tinyurl.com/7lhf35a.
10. James Dacey, "Peer review highly sensitive to poor refereeing, claim researchers," PhysicsWorld.com (9/9/10): http://tinyurl.com/cbu7x3f.
11. Jonah Lehrer, "The Truth Wears Off: Is there something wrong with the scientific method?" New Yorker (12/13/10): http://tinyurl.com/6ptz2uw.
12. Sharon Begley, "In cancer science, many 'discoveries' don't hold up," Reuters (3/28/12): http://tinyurl.com/7dt6kpt. 
13. James Barham, "More Scientists Behaving Badly," TheBestSchools.org (11/17/11): http://tinyurl.com/7pwtujx.
14. Gretchen Vogel, "Dutch 'Lord of the Data' Forged Dozens of Studies," Science Insider (10/31/11): http://tinyurl.com/6h4ukjv.
15. Heather Horn, "Harvard Professor Found to Have Used False Data About Monkeys," Atlantic Wire (8/25/10): http://tinyurl.com/72ch6cm.
16. "Questionable Research Practices Surprisingly Common," Association for Psychological Science (5/24/12): http://tinyurl.com/83ry5nb.
17. Benedict Carey, "Fraud Case Seen as a Red Flag for Psychology Research," New York Times (11/2/11): http://tinyurl.com/67kx9j2.
18. Ed Yong, "Replication studies: Bad copy," Nature (5/16/12): http://tinyurl.com/bs4l697.
19. Ibid.
20. "How Reliable Are the Social Sciences?" New York Times (5/17/12): http://tinyurl.com/d3g8yhs.
21. Kevin A. Clarke and David M. Primo, "Overcoming 'Physics Envy,'" New York Times (3/30/12): http://tinyurl.com/747b773.
22. Christopher Shea, "Fraud Scandal Fuels Debate over Practices of Social Psychology," Chronicle of Higher Education (11/13/11): http://tinyurl.com/86qbuql.
23. Tom Bartlett, "Is Psychology About to Come Undone?" Chronicle of Higher Education blog (4/17/12): http://tinyurl.com/85wuhty.
24. Jelte M. Wicherts, "Psychology must learn a lesson from fraud case," Nature (11/30/11): http://tinyurl.com/c6sr48r.
25. Andrew Pontzen, "Time to review peer review," New Scientist (6/21/12): http://tinyurl.com/6qd2rc8.
26. James Dacey, op. cit., note 10.
27. Michael Shermer, "The Believing Brain: Why Science Is the Only Way Out of Belief-Dependent Realism," Scientific American (7/5/11): http://tinyurl.com/6fvkpks.
28. Steven Pinker, How the Mind Works (W.W. Norton, 1997), p. 305.
29. Michael Ruse, "Evolution and Ethics," in Bruce L. Gordon and William A. Dembski, eds., The Nature of Nature: Examining the Role of Naturalism in Science (ISI Books, 2011), p. 861.
30. John Horgan, "Should Global-Warming Activists Lie to Defend Their Cause?" Scientific American (2/24/12): http://tinyurl.com/8yrj59t.


From Salvo 22 (Fall 2012)

Denyse O'Leary is a Canadian journalist, author, and blogger. She blogs at Blazing Cat Fur, Evolution News & Views, MercatorNet, Salvo, and Uncommon Descent.
