Impact Factor Distortions (对影响因子的歪曲)

Published: 2013-05-17

Bruce Alberts, Editor-in-Chief of Science.

The impact factor, a number calculated annually for each scientific journal based on the average number of times its articles have been referenced in other articles, was never intended to be used to evaluate individual scientists, but rather as a measure of journal quality. However, it has been increasingly misused in this way, with scientists now being ranked by weighting each of their publications according to the impact factor of the journal in which it appeared. For this reason, I have seen curricula vitae in which a scientist annotates each of his or her publications with its journal impact factor listed to three significant decimal places (for example, 11.345). And in some nations, publication in a journal with an impact factor below 5.0 is officially of zero value. As frequently pointed out by leading scientists, this impact factor mania makes no sense.
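The calculation the editorial alludes to is the standard two-year journal impact factor: citations received in a given year to a journal's articles from the previous two years, divided by the number of citable items it published in those two years. A minimal sketch, with hypothetical citation counts (the numbers below are illustrative, not data from any real journal):

```python
def impact_factor(citations_to_prev_two_years: int,
                  citable_items_prev_two_years: int) -> float:
    """Two-year journal impact factor for year Y:
    citations in Y to articles from years Y-1 and Y-2,
    divided by the count of citable items published in Y-1 and Y-2."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 2268 citations to 200 citable items
# yields the kind of three-decimal figure scientists quote (11.340).
print(round(impact_factor(2268, 200), 3))
```

Note that the metric is a property of the journal's whole citation distribution, which is typically highly skewed: a few heavily cited papers dominate the average, so the figure says little about any individual article.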

The misuse of the journal impact factor is highly destructive, inviting a gaming of the metric that can bias journals against publishing important papers in fields (such as social sciences and ecology) that are much less cited than others (such as biomedicine). And it wastes the time of scientists by overloading highly cited journals such as Science with inappropriate submissions from researchers who are desperate to gain points from their evaluators.

But perhaps the most destructive result of any automated scoring of a researcher's quality is the "me-too science" that it encourages. Any evaluation system in which the mere number of a researcher's publications increases his or her score creates a strong disincentive to pursue risky and potentially groundbreaking work, because it takes years to create a new approach in a new experimental context, during which no publications should be expected. Such metrics further block innovation because they encourage scientists to work in areas of science that are already highly populated, as it is only in these fields that large numbers of scientists can be expected to reference one's work, no matter how outstanding. Thus, for example, in my own field of cell biology, new tools now allow powerful approaches to understanding how a large single-celled organism such as the ciliate Stentor can precisely pattern its surface, creating organlike features that are presently associated only with multicellular organisms. The answers are likely to bring new insights into how all cells operate, including our own. But only the very bravest of young scientists can be expected to venture into such a poorly populated research area, unless automated numerical evaluations of individuals are eliminated.

The recommendations of DORA (the San Francisco Declaration on Research Assessment) are critical for keeping science healthy. As a bottom line, the leaders of the scientific enterprise must accept full responsibility for thoughtfully analyzing the scientific contributions of other researchers. To do so in a meaningful way requires the actual reading of a small selected set of each researcher's publications, a task that must not be passed by default to journal editors.

Science, 17 May 2013: Vol. 340, No. 6134, p. 787; DOI: 10.1126/science.1240319