As a new journal as yet unbranded by an impact factor (IF), we can address this important element of science without suffering either the anguish of those journals that languish in the lower divisions or the insouciance of those with double-digit IFs. This number, abstract as it is, has become the pervasive arbiter of scientific careers for those who apply for positions and fellowships. Just as The New York Times theatre critic determines which plays on Broadway flourish and which shut down, IFs can boost or destroy scientific careers. All of us who have had to turn down an applicant know the dreaded words of execution: ‘...the publications are in low-impact journals.’ Indeed, I suspect that some selection committees, deprived of strong scientific input, simply add up the IFs of a candidate's publications and are comfortable with this seemingly objective manner of ranking. This is quite at odds with repeated studies showing that a journal's IF depends on its research field: an IF of 3 in one area is an Everest among hillocks, whereas in other areas the same IF is a mere Mont Blanc.
Then there is the gamesmanship associated with pushing up the IF of a journal. A journal's IF for a given year is the number of citations received that year by the articles it published in the previous two years, divided by the number of citable articles published in those two years. It is an inevitable consequence of the deluge of papers published that we get far more of our information from reviews than from original papers. The limits that journals impose on space for references also lead to a higher level of citation of reviews. Consequently, journals that publish many reviews have an advantage in the IF league over those that publish only primary research papers. Furthermore, journals that are very selective, or even restrictive, can reduce the number of papers per issue and limit them to topics that are currently fashionable and therefore more likely to be cited. Again, the consequence is positive for the journal's IF without necessarily reflecting the quality of the science presented. It does not take great insight to know that a paper on p53 will be cited more often than one on plasmid replication, even if the scientific quality is equal. Now look at some of the journals that most scientists would like to publish in and you will find that the commercially sensible analysis outlined above is being aggressively applied. Commercial is the appropriate word here, as a high IF brings higher circulation and higher advertising revenue. A practical, obvious and fair alternative would be to exclude reviews from the IF calculation. But the commercially owned ISI, which establishes the IF, has no plans to do so.
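The arithmetic behind the two-year calculation, and the effect of excluding reviews from it, can be sketched in a few lines. The counts below are invented for illustration, and the simple drop-the-reviews correction is the alternative suggested above, not ISI's actual procedure.

```python
def impact_factor(citations, citable_items):
    """Two-year impact factor: citations received this year to articles
    published in the previous two years, divided by the number of
    citable items published in those two years."""
    return citations / citable_items

# Hypothetical journal: 200 primary papers plus 20 heavily cited reviews
# published over the previous two years.
primary_citations, primary_papers = 400, 200
review_citations, review_papers = 160, 20

with_reviews = impact_factor(primary_citations + review_citations,
                             primary_papers + review_papers)
without_reviews = impact_factor(primary_citations, primary_papers)

print(round(with_reviews, 2))     # 2.55 — reviews lift the average
print(round(without_reviews, 2))  # 2.0
```

Even a modest number of review articles, cited several times more often than primary papers, raises the headline figure appreciably, which is exactly the incentive described above.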
Nonetheless, the IF ranking provides a wonderful guide to what should be read. It stratifies the journals and, in general, guides scientists to those where the best papers are most likely to appear. Referees for the high‐impact journals can be the most critical and demand the most convincing combination of proofs in order to extract a superb paper. This raises standards and expectations, which serves both the author and the reader. Occasionally, voices are raised against the implicit elitism of high‐impact journals. But to argue for an egalitarian system is to deny the obvious fact that papers of very different quality are published in different journals. We are well served by a system that focuses our attention on the tip of the iceberg.
A fairer assessment of a scientist's career, however, could come from greater use of citation indices (CIs). Here, each paper is judged individually by how often it is cited in other papers and reviews. Thus, the brilliant results that you could not publish in a high-impact journal—because the editors did not consider your paper appropriate—can be given their true value, irrespective of the journal's title. But the CI depends on visibility. An outstanding paper deposited in an obscure location has the same problem becoming upwardly mobile as people who start their lives in a bad neighbourhood. Furthermore, the bean-counting decision-makers on selection committees are not attuned to CIs and do not know what counts as a good score in this system.
But all this may be about to change. With electronic versions of journals becoming the inevitable norm and ‘mouse clicking’ winning over ‘page flicking’, we increasingly find articles that we would otherwise never have seen. In the future, we may not even care which journal a paper appeared in, so long as its title captures our attention. We will show our interest by clicking to read the abstract, and perhaps by downloading the full text of selected papers. Thus, the subset of scientific papers that is of true influence ends up in our reference lists. Would it not be more equitable to have a more complete ‘impact’ system, one that monitors all three aspects, namely the scanning of abstracts, the downloading of full texts and the frequency of citation, and gives each a different weighting to determine the real value of a paper? Developing such a system is timely, and the tools to do so are in place on each journal's electronic site. The original citation system started before the need for an impact factor was recognised. Now that it is an integral part of our scientific lives, it would be surprising if a better version did not come into being in the near future. Watch out for www.citation.com!
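The three-component scheme proposed above amounts to a weighted sum per paper. Since no such system yet exists, the weights and the counts in this sketch are arbitrary assumptions, chosen only so that citations count for more than downloads, and downloads for more than abstract scans.

```python
def paper_score(abstract_views, downloads, citations,
                weights=(0.1, 0.3, 0.6)):
    """Hypothetical composite 'impact' of a single paper: a weighted sum
    of abstract scans, full-text downloads and citations. The weights
    are illustrative, with citations counting most heavily."""
    w_view, w_down, w_cite = weights
    return w_view * abstract_views + w_down * downloads + w_cite * citations

# Two hypothetical papers: one widely browsed, one heavily cited.
browsed = paper_score(abstract_views=500, downloads=120, citations=4)
cited = paper_score(abstract_views=80, downloads=40, citations=60)
print(browsed, cited)
```

Under these weightings the widely browsed paper actually outscores the heavily cited one, which shows why the choice of weights would itself become a matter of debate in any real system.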
- Copyright © 2000 European Molecular Biology Organization