Publish or Perish!
Research must be made accessible for the world to benefit from the discoveries made. Research is largely supported by grants paid for through tax money by society at large, so it is only fair that research findings are given back to that same society. Publishing research findings also benefits the research community - it ensures that researchers do not unnecessarily repeat the experiments of others, creating a learning community far greater than any single laboratory. Publishing one's research has become so ingrained in scientific society that the quality of a researcher or academic is often judged by the number of publications.
Publishing one's work was not always the norm. Scientists were cautious about publishing their works for fear that other scientists would claim priority over them. This attitude changed when Henry Oldenburg, a German theologian, diplomat and natural philosopher, became the founding editor of the Philosophical Transactions of the Royal Society - a publication of the Royal Society which, in 1665, became the first journal exclusively devoted to science. Oldenburg convinced the scientists of the time that their work would be published rapidly and that the Society would offer its support should an author's priority be questioned. Oldenburg also had the idea of ensuring the quality of the publication by sending each manuscript to a select group of experts for review prior to publication - hence the beginnings of the peer-review process as we know it.
Oldenburg is rightly credited as the founder of the modern-day scientific publication process.
What is the path to getting one's work published?
Once research has matured to the point where experimental data has been collected and analysed, a researcher is in a position to write a paper detailing the approach, data collection, data analysis and results. This is the research article. The researcher then has two possibilities: to submit the work to a conference or to a journal. In general, journal papers require work that is more mature, with more extensive experimentation and results, while conference papers can document work that, while still ongoing, has reached a certain level of maturity.
Once the article is submitted to either the journal editor or the conference programme chair, they make a preliminary decision about the paper, determining whether it fits within the scope of the journal or conference and whether its quality appears, at first glance, to be of the expected standard.
The editor/programme chair is then responsible for sending the paper to reviewers - usually three other scientists considered experts in the field - whose role is to read the paper and assess whether the method described is sound and novel, whether the conclusions drawn reflect the results obtained, and whether the results could be replicated by others. The reviewers submit their reports to the editor independently of each other.
The editor/programme chair must then decide whether the paper should be accepted for publication. In journal submissions, authors whose articles are not accepted outright may be asked to revise them according to the reviewer recommendations and go through a second round of review, giving the authors the opportunity to improve the paper's quality. In conferences, where the turn-around is by nature faster, this is not possible and a paper is either accepted or rejected.
Authors must, therefore, consider whether their work has reached a level of maturity worth publishing and then choose the best avenue for submitting it.
How do journals come to exist?
Publishing wasn't always easy. Besides the process being a long one, publishing houses made little profit, the quality of publications was poor, and each house could only publish a few articles. This started to change when business-minded individuals like Robert Maxwell, who with others formed Pergamon Press, began to shake up the publication process. Maxwell pursued scientists at conferences, recruiting them to publish their papers. At a time when science was booming, Pergamon Press recruited scientists to publish in new journals and then convinced universities to purchase subscriptions to those journals, channelling money back to the publishing house. While this model was tenable when the number of journals was relatively small, as journals multiplied, libraries simply could not keep up with the cost of subscribing to them all.
With the popularity of the internet and digital access, publishing houses moved to digital libraries, offering universities access to all journals as digital assets. However, with an ever-increasing number of new journals, the cost remains substantial.
Research is thus placed behind a paywall, reducing its accessibility, particularly for those outside universities or without access to limitless funds. For the science enthusiast outside academia, it is easier and cheaper to access misinformation than actual scientific publications.
What is the cost of publishing?
It would be easy to criticise publishing houses and say that all journal articles should be made freely available. Publishing, however, is costly. At minimum, the publishing body provides an archiving service, the platform for authors to submit their papers, and the platform for reviewers to submit their reviews. It ensures that articles, once accepted for publication, are typeset to adhere to the journal style. The publishing body is also responsible for ensuring the technical quality of the articles in its journals. And just like Oldenburg's initial reassurance, the publishing body also protects its authors and their submitted material from academic misconduct far better than any sole author could.
The issue is that the publishing process relies on the goodwill and the time of the scientific community. Authors write articles and submit them to the journal; if an article is published and receives attention from the research community, generating revenue for the publishing house, the authors receive no royalties for their efforts. The peer-review process, through which the quality of articles is assessed, also relies on other scientists who review on a voluntary basis, under the expectation that others will return the favour when they submit their own scholarly works. Moreover, with many science journals adopting LaTeX, authors very often submit manuscripts that require minimal editing before publication.
In view of this, one naturally asks why publishing bodies charge high fees for access to scholarly works that were submitted and reviewed for free. To put the figures in context, a single article from the Elsevier publisher costs $27.95. While this may not seem much, in 2019 Elsevier had 3,000 journals, published nearly half a million articles per year and reported a revenue of $9.8 billion, with profit margins surpassing 35%. Considering that its raw material - the articles - is given freely, this is a hefty profit.
The open access model
Conscious of the fact that research grants are funded through tax-payer money, funding bodies are increasingly requesting that scholarly works be published under the Open Access model. There are various levels of open access, the three most common being the Gold, Green and Hybrid models.
In Gold Open Access, the publisher grants access to all articles and related materials immediately upon publication, and the articles are licensed for sharing and reuse under a Creative Commons licence. Green Open Access allows authors to self-archive their works on websites controlled by the author or the institution funding the research, or in a central open repository. In most cases, the article posted under Green Open Access is a pre-print, that is, the author's own copy before the final publication typesetting. Hybrid models are similar to the Gold model, with the exception that the author is given the choice of whether or not to publish as open access.
Given the obvious benefits of open access, why then would anyone not jump at the opportunity? In reality, open access shifts the financial burden onto the author: the fee for open access can easily reach $1000 per article, and authors with limited funding will find this model unsustainable.
Publish or perish!
The irony is that this conundrum has been fuelled mainly by academic institutions, which view the number of publications as a benchmark for academic progress. This has led to an ever-increasing variety of journals, some in fields so niche and narrow that it becomes difficult to keep track of them all and for libraries to gain access to all the content. To retain a competitive edge, publishing bodies and authors look for a fast turn-around for papers. This in turn places pressure on reviewers, who are now being asked to review papers in as little as five working days, resulting in poor reviews and a decline in quality.
The need to publish and to boost publication counts also places an unnecessary burden on researchers, pushing them to publish, preferably in high-impact journals, and diluting research fields with works that are not necessarily repeatable. The long, slow work that was the modus operandi of the 20th-century scientist is no longer a viable career option. Consider, for example, Frederick Sanger, who was awarded the Nobel Prize in Chemistry twice and is considered the father of genetic sequencing. He published 70 papers throughout his career - which makes one cheekily wonder whether he would have found himself out of a job by today's standards!
Is there a metric to assess publication quality?
With so many journals to choose from, institutions and authors are starting to give more importance to the quality of the journal. One way to gauge this is the journal's impact factor: a number reflecting the average number of citations received in a given year by the articles the journal published over the previous two years. It is worked out as:

\[
\mathrm{IF}_y = \frac{C_y}{N_{y-1} + N_{y-2}}
\]

where $C_y$ is the number of citations received in year $y$ by articles published in years $y-1$ and $y-2$, and $N_{y-1}$ and $N_{y-2}$ are the numbers of citable items published in those two years.
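As a purely illustrative example (with invented numbers, not those of any real journal): a journal that published 40 citable articles in 2022 and 60 in 2023, and whose articles from those two years were cited 250 times during 2024, would have a 2024 impact factor of

\[
\mathrm{IF}_{2024} = \frac{250}{40 + 60} = 2.5
\]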
One may think that to truly make an impact, one would need to publish in journals that have a high impact factor. Indeed, the impact factor was introduced by Eugene Garfield with the aim of assisting librarians in determining which journals were worth archiving.
The impact factor, however, is not without flaws. Publishers may manipulate what they count as published material - that is, the denominator of the equation - allowing for drastic changes in the metric. A well-known example of such manipulation occurred in 1988 when, by eliminating meeting abstracts from its list of published items, the FASEB Journal boosted its impact factor from 0.24 to 18.3!
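To see how shrinking the denominator inflates the metric, consider a hypothetical sketch (the figures below are invented for illustration and are not FASEB's actual counts). With the number of citations unchanged, reclassifying most items as non-citable turns a negligible impact factor into an enormous one:

\[
\frac{1200 \text{ citations}}{5000 \text{ items}} = 0.24
\qquad\longrightarrow\qquad
\frac{1200 \text{ citations}}{66 \text{ articles}} \approx 18.2
\]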
Moreover, editors can be picky about the articles they choose to publish. Review papers, in general, attract more citations than other papers, so accepting more review papers is another way of increasing the impact factor.
The number of citations a journal receives is also proportional to the size of the field it covers, so transactions and journals with a broad catchment area are likely to have higher impact factors than more specialised journals. This makes the metric difficult to compare across different fields or disciplines.
Finally, the impact factor is no measure of the quality of a given article. This is because the impact factor is a mean number of citations, and the mean is easily skewed by outliers. For example, in 2004, around a quarter of the articles in Nature contributed 90% of its impact factor, meaning that the actual number of citations for a typical article in the journal is most likely much lower than the mean. And Nature is certainly not a unique example.
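A toy example (with invented numbers) shows how easily this happens: if one article in a set of ten attracts 91 citations while the other nine attract one citation each, then

\[
\text{mean} = \frac{91 + 9 \times 1}{10} = 10, \qquad \text{median} = 1
\]

so a journal with such a citation distribution reports an impact factor ten times higher than what the typical article achieves.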
Using the impact factor as a measure of quality may give the impression that a poorly cited paper in a high impact factor journal is of more importance than a better cited one in a lower impact factor journal. The consequence, unfortunately, is that authors attempt to submit to high impact factor journals even when these are less than ideal matches for the paper's content, which often results in frustration at lengthy publication processes.
The San Francisco Declaration on Research Assessment made several recommendations to authors, publishers and funding bodies along three main themes:
Eliminate journal-based metrics from appointment and promotion considerations
Assess research based on its own merits rather than the journal in which it is published
Capitalise on opportunities provided by online publications
In view of this, the selection of where to publish should be determined not just by the impact factor but also by other aspects such as the target audience, the quality of the papers, the kind of works published, the cost of publication and the open access policies of the journal.
At the other end of the spectrum from chasing high impact factor journals, authors may eschew the peer-reviewed publishing route altogether and submit articles to community-led archiving systems such as arXiv. Inspired by Joanne Cohn and put into action by Paul Ginsparg, arXiv is an open access repository of electronic pre-prints and post-prints which are approved for posting after moderation but are not peer-reviewed. Authors must be endorsed by other established authors, although endorsement is automatic for authors from recognised institutions. Most papers submitted to arXiv are subsequently submitted to mainstream journals, although in some instances influential works were only ever posted on arXiv and never published elsewhere.
A closer look at the peer review process
The peer review process is the tried and tested method for evaluating the quality of a submitted paper. The role of the reviewer is to read the submitted work and objectively determine its technical soundness and novelty, suggest additional experiments the authors may have overlooked, or challenge the interpretation of the data.
Peer review typically occurs in two modalities: single-blind or double-blind. In both modalities, the reviewers' names are not revealed to the authors; in double-blind reviews, the authors' names are also withheld from the reviewers. The anonymity of the reviewers ensures that they can make their comments without concern of repercussions. However, that same anonymity may give reviewers licence to make comments that are less than kind. Single-blind reviews can also introduce seniority and regional biases, and for this reason double-blind reviews are generally perceived as the better option.
Although intended as a gatekeeper for quality control, peer review is not without flaws. Valid papers may be wrongly rejected because reviewers miss the contribution of the paper or fail to appreciate the soundness and beauty of an approach that breaks the mould a little too much. Perhaps worse, sub-par papers may be accepted because reviewers fail to notice flaws in the research. The best-known example of this is Andrew Wakefield's infamous, and since retracted, MMR paper. More baffling still are experiments in which deliberately bogus papers made it through the review process.
However, new models for conducting reviews are on the rise - ones that recognise that the review process need not end the moment a paper is published. This is the concept behind post-publication review, where a paper is open to comments and observations from everyone in the field. This extends the period of scrutiny, creates a dialogue between authors and commenters, and allows manuscripts to be revised for a better exposition of the research problem being discussed.
In this utopian approach, papers are initially vetted only for clarity and scope; it is then up to the scientific field at large to determine the quality and worth of the article. A good article gathers traction, much as a traditionally published article gathers citations. The difference is that the initial gatekeeping is more lenient.
Is post-publication review the solution? Humans are susceptible to prejudice, and this system, although more democratic on paper, may still suffer from those prejudices. A top researcher from a prestigious university is still more likely to attract followers than an early-stage researcher at a fledgling university. A double-blind review, with all its drawbacks, would at least give everyone a level playing field irrespective of their background. Some middle ground between the double-blind review and the post-publication review needs to be found.
Looking ahead
Going forward, I think it is time to re-introduce the ideal that it is the quality, not the quantity, of publications that contributes value to science. It is also essential that institutions educate young researchers in the art of performing a good review, so that the scientific community may benefit from a larger pool of reviewers who provide more than a #sixwordpeerreview. It is also important for reviewers to demand longer time frames where necessary to provide a good review. Finally, in selecting journals, authors may want to consider how much the publication values open access over a potentially doctored impact factor, to ensure that research is truly accessible to all.
What are your opinions? Do share your views in the comments below.
Further reading
Stephen Buranyi (2017) Is the staggeringly profitable business of scientific publishing bad for science?, The Guardian, 27th June 2017.
Jessica Borger (2018) Peer review has some problems – but the science community is working on it, The Conversation, 12th July 2018.
Bradley Allf (2020) I published a fake paper in a 'peer-reviewed' journal, Undark, 26th July 2020.
Elaine Devine (2015) Why peer review needs a good going over, The Guardian, 28th October 2015.
Julia Belluz and Steven Hoffman (2015) Let's stop pretending peer review works, Vox, 7th December 2015.
Rashmi Tandon (2016) Making an impact: Pros and cons of the impact factor, BiteSizeBio, 9th July 2016.
Toni Feder (2021) Joanne Cohn and the email list that led to arXiv, Physics Today, 8th November 2021.