The Meaning of Life Type Stuff

Towards a Socialist Model of Scientific Publishing — Setting the Stage

An Experiment on a Bird in the Air Pump. Joseph Wright ‘of Derby.’ The proverbial canary in the coal mine is dead.

By Daniel Tarade

Our model of scientific publishing is broken. Perhaps irreparably so. The problem is compounded by the fact that the individuals with the most power to fix how we publish science are also those with the most to lose. Young trainees and early-career professors could, en masse, publish via radically different modalities and foster a more open, collaborative, honest science. Many conversations with colleagues have centred on our collective grief with scientific publishing. I have prescriptive ideas for how science ought to be disseminated. Those ideas will be the focus of later essays. But, first, let’s set the stage. What is the problem?

At the fore of the crisis is the fact that scientific journals are largely for-profit entities run by massive corporations. I contend that any for-profit corporation prioritizes profit above all else. Not exactly a controversial viewpoint. Compounding the issue, however, is that career advancement in the sciences requires publishing your work in prestigious journals, which are invariably for-profit. This system emerged about fifty years ago, and before scientists realized the maelstrom on the horizon, they were already mired in it. Below I outline one of the most perverse business models ever conceived:

One, scientists conduct a study that takes multiple years to complete. This work is (usually) government funded.
Two, scientists submit their work to journals for publication. This is the expected route of scientific dissemination.
Three, if the journal’s editors consider the work to be of interest, it is sent out for peer review.
Four, a panel of peer reviewers volunteers, free of charge, to judge the quality of the work.
Five, if the work is deemed to be of high quality, it is accepted for publication.
Six, the authors give up the copyright to their work and pay to have it published.
Seven, university libraries pay massive subscription fees to academic publishers for access to these journals.

The majority of published scientific work moves through this system, with academic publishers making money coming and going. Government-funded work is given freely to a journal that then charges universities to access that same work. In the middle steps, expert editing and review are provided for free. My supervisor likes to share an anecdote in which his father-in-law, a Manhattan lawyer, compares his consultation fee, which starts at $1,000 an hour, with the free consultation scientists provide on emerging scientific matters. No other group of experts is expected to consult for free, particularly to help a for-profit corporation make billions a year. Elsevier, a giant of academic publishing, puts out 16% of all scientific literature and boasts annual revenues of over $10 billion CAD with profit margins of 36%. Elsevier profits more each year than Canada invests in science annually.

Why do scientists put up with this exploitative system? Over the decades, articles have emerged as the currency of science. Although technological advances have made it possible for me to release weekly updates on my research through a personal website, to do so would be to commit career suicide. One must keep up with the Joneses. Of course, prestige in scientific publishing was pushed by the academic publishers themselves. In the seventies, certain journals began branding themselves as purveyors of scientific breakthroughs and became incredibly selective, not on the basis of scientific quality, but of impact. These journals rejected most submitted manuscripts, and the bar for publication required years of study in hot areas of research. In 1974, when the journal Cell pioneered this approach, the relationship between journals and scientists fundamentally changed. No longer were journals merely avenues for distributing research; they became gatekeepers of what counts as important science. Journals now influence what scientists study and how they perform their studies.

The fact that scientists put up with such a ridiculous financial model is just one frustration. It is the meddling of journals in the scientific process that is insidious and, more often than not, escapes notice. We all live in a state of perpetual cognitive dissonance. The issue lies with subjective notions of impact. Prestigious, for-profit journals like Nature, Science, and Cell are meant to publish the most important, far-reaching scientific studies, but how do editors determine impact? They rely on metrics like impact factor, which quantifies the number of citations an average paper in the journal receives over a two- or five-year period. Like all models, impact factor merely approximates impact, and poorly at that. And with jobs and awards on the line, intimately linked as they now are to publication record, scientists, ever clever, have learned how to play the game. Impact factor does not measure the robustness, rigour, or reproducibility of science, only how often a paper is cited. The easiest way to become highly cited is to publish in a hot field. A hot field means more articles being published on the topic, meaning more citations. Rather than freely pursuing their own research interests or embarking on entirely unknown avenues of research, scientists tend to be trapped in eddies, swirling in vortices of what is popular. Fast fashion, meet fast science.
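To make the metric concrete: the standard two-year impact factor is just a ratio of citations to citable items. A minimal sketch, with the journal figures invented for illustration:

```python
# Minimal sketch of the standard two-year journal impact factor.
# The figures below are invented for illustration, not real journal data.

def impact_factor(citations_this_year: int, citable_items_prev_two_years: int) -> float:
    """Two-year impact factor: citations received in year Y to articles
    published in years Y-1 and Y-2, divided by the number of citable
    items published in those two years."""
    return citations_this_year / citable_items_prev_two_years

# A hypothetical journal: 400 citable items over two years,
# drawing 2,000 citations to them this year.
print(impact_factor(2_000, 400))  # 5.0
```

Note what the ratio cannot see: whether any of those citations point to work that replicates, or to work that was later retracted.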

The opposite approach to publishing in prestigious journals also works. Rather than rushing out studies in a hot field, some scientists hoard data for years to build the most comprehensive study possible. This may sound perfectly fine prima facie, but consider that data hoarding delays other scientists from building on those conclusions. In a publish-or-perish system, I have seen many professors refuse to present unpublished work at conferences or to correspond with other scientists because they have been developing a story for years. The fear of being scooped, of having others publish a finding first, stifles science, delays progress, and turns potential collaborators into competitors. And because reproduction studies are cited less often, prestigious journals give such important work little priority, often leaving those with the first say the final word. The bias extends beyond reproduction studies to negative findings, where interesting hypotheses are rejected. Even among costly clinical trials, half of all studies are never published. Unsurprisingly, the unreported trials are most often those in which experimental drugs were shown to be ineffective. Ultimately, the scientific community has ceded to journal editors control over what is deemed important. Editors working for for-profit journals do not prioritize negative findings or reproduction studies because these lower the impact factor and prestige of the journal, hurting the bottom line.

Incentives to participate in for-profit science run even deeper. Countries around the world now pay research groups for every article published in a prestigious journal, as determined by impact factor. Most prominent among these countries is China, where pay-out formulas include impact factor as a key variable. Perhaps as expected, there has been a steady increase in the rate of retraction of scientific papers, most pronounced among high-impact-factor journals. Beyond retractions, a bona fide reproducibility crisis has emerged. In one analysis, only 1 in 10 ‘landmark’ studies in cancer biology could be reproduced. In another survey, 70% of scientists reported having failed to reproduce the findings of another lab. Consciously or unconsciously, scientists are compelled to overstate conclusions and cut corners in an effort to establish their names and achieve tenure.
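The structure of such a pay-out scheme is worth spelling out. The tiers and amounts in the sketch below are entirely my own invention, not any country’s actual schedule, but they capture the shape of the incentive: the journal’s metric, not the study’s content, sets the reward.

```python
# Hypothetical sketch of an impact-factor-keyed publication bonus.
# Tiers and amounts are invented for illustration only; they are not
# any institution's actual pay schedule.

def publication_bonus(journal_impact_factor: float) -> int:
    """Return a per-paper cash bonus that scales with the impact factor
    of the journal, not with the quality of the work itself."""
    if journal_impact_factor >= 30:  # Nature/Science/Cell territory
        return 100_000
    if journal_impact_factor >= 10:
        return 20_000
    if journal_impact_factor >= 3:
        return 5_000
    return 0

print(publication_bonus(41.6))  # 100000
print(publication_bonus(2.5))   # 0
```

Under a scheme shaped like this, a single paper in a glamour journal can be worth more than years of careful, negative, or confirmatory work.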

If you can’t afford to play in a hot field, other for-profit journals have popped up to meet market demand.[i] These journals offer to publish any and all research articles with little to no peer review. All it takes is money. A lot of money. Rightfully, these journals are derided as predatory, taking advantage of desperate scientists who might not recognize the scam. Recent anecdotes, equal parts hilarious and terrifying, lay bare the machinations at work. For example, a paper entitled “Get Me Off Your Fucking Mailing List” was accepted for publication in the International Journal of Advanced Computer Technology. Indeed, one way to spot a predatory journal is the use of “International” or “Global” in its name. Although the advent of internet publishing has led to an explosion of predatory journals, the first journals to meet contemporary definitions were established by Robert Maxwell, founder of Pergamon Press (later bought by Elsevier), who established the model for for-profit publishing in the mid-1950s. His original strategy involved attending conferences, offering to publish any work presented there, and having scientists sign exclusive editing contracts. Ironically, Maxwell also had a penchant for prefixing his new journals with “The International Journal of...”. Although Elsevier journals now provide peer review, I contend that any for-profit, closed-access journal is predatory and exploitative.

Further, despite the scrutiny scientists routinely apply during voluntary peer review, they display a remarkable shallowness in the hiring and promotion of professors. One professor, who sits on several hiring and promotion committees, remarked to me that only in recent years have they started actually reading the articles published by candidates. Before, they would merely tally up the number of articles and where they were published. More disheartening are my peers, other graduate students and postdoctoral fellows, who, upon hearing about a recently submitted or published manuscript, will often first ask where. If the journal is unfamiliar, the follow-up question almost always concerns the impact factor. Try googling the name of any journal. Inevitably, Google will autocomplete to “[journal name] impact factor,” even for journals, like PLoS One, that have openly condemned the metric. Rather than asking what the conclusions were or how the study was designed, we have become addicted to metrics spoon-fed to us by for-profit journals. These ought not be the ideals we grasp onto as we begin our academic careers.

As a young trainee, I have received conflicting advice as to where my focus should lie. Some professors, including my own supervisor, argue that the focus should be on developing a knack for crafting interesting hypotheses and learning to design well-controlled studies that cut to the heart of the question. But from all sides, from department chairs, collaborators, and fellow students, comes mounting pressure to publish high and often. More often than not, this pressure means less focus on learning sound scientific principles. We are discouraged from spending extended periods learning our system of study or perfecting our model. Many students are not given entry-level projects through which to learn the basic principles of science. Instead, graduate students are often thrown into the deep end, where the risk-reward calculation looks particularly bright for supervisors: if a student fails to publish in a prestigious journal, it mainly affects the student, but if they succeed, the supervisor benefits through large grants and increased name recognition.

Last year, our department’s administrative staff accidentally released internal rankings of all graduate students. Not only was private information disseminated, but we all caught a glimpse of the ranking algorithm that decides who is awarded scholarships. Unsurprisingly, the publication record of every student was listed, and alongside it, the corresponding impact factors. Clearly, from the department’s perspective, the best way to identify the most promising scientist is to tally up the average number of citations a paper receives in the journals where they happened to publish, and this despite the fact that we present our work orally twice during our degrees and annually in poster format at departmental seminars and conferences. But much as for-profit journals care about promoting their prestige, so do universities. It is less about developing competent scientists and more about putting the university on the map and in international headlines. As scientists, we can and ought to do better.
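For illustration, a ranking built this way reduces to a weighted sum of journal metrics. The weighting and numbers below are my own invention, not the department’s actual formula, which was never published:

```python
# Hypothetical sketch of a metric-driven student ranking of the kind
# described above. Weights and data are invented for illustration;
# the department's actual formula is not public.

def ranking_score(publications: list[dict]) -> float:
    """Score a student by summing the impact factors of the journals
    they published in, weighted by authorship position."""
    score = 0.0
    for pub in publications:
        weight = 1.0 if pub["first_author"] else 0.5
        score += weight * pub["journal_impact_factor"]
    return score

# Note what such a score omits entirely: study design, rigour, and
# whether the findings were ever reproduced.
student = [
    {"first_author": True, "journal_impact_factor": 28.0},
    {"first_author": False, "journal_impact_factor": 4.1},
]
print(ranking_score(student))  # 30.05
```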

[i] Vinny, P. W., Vishnu, V. Y., & Lal, V. (2016). Trends in scientific publishing: Dark clouds loom large. Journal of the Neurological Sciences, 363, 119–120.