WSJ claims STM journals rig impact factors
The Wall Street Journal published an article on Monday claiming that science journals routinely manipulate impact factors by encouraging contributors to heavily cite prior articles from the same journals. (The link goes to a login-free copy of the article as found on the Stay Free! blog.)
Now, I think it’s true that because many journals are highly specialized in the sorts of articles they publish, there will be a natural tendency for authors to use and cite a good number of articles from the same publications. But this article in the Wall Street Journal isn’t simply drawing a conclusion from the number of citations to a journal’s own articles. It is reporting on authors’ experiences with editors who ask for more citations to their journals as a condition of publication. It also includes an editor’s candid description of this phenomenon.
Isn’t it interesting and somehow a little odd that this is coming up in the Wall Street Journal? “Impact factor” is a term you don’t often see in a daily newspaper. Perhaps this problem has been studied elsewhere (please comment with cites if you have them), but this article isn’t just a digest of another study. The article’s author, Sharon Begley, interviewed a good number of authors and scholarly journal editors in preparing her story for WSJ. What she says is important. I would like to see the issue discussed as directly and frankly in the Chronicle of Higher Education, but I’m not holding my breath. (Okay, point me to the citation…)
8 comments on “WSJ claims STM journals rig impact factors”
We discussed this a lot at my library and with our faculty and university administration last fall. There are a good number of articles out there, including in the Chronicle of Higher Ed.
For example: From the issue dated October 14, 2005 –
The Number That’s Devouring Science
The impact factor, once a simple way to rank scientific journals, has become an unyielding yardstick for hiring, tenure, and grants
By RICHARD MONASTERSKY
http://chronicle.com/free/v52/i08/08a01201.htm
I’d like to see something that says what this WSJ article is saying – not that impact factors are heavily used and can distort the research process, but that they are being outright manipulated in the fraudulent way the WSJ article describes.
The WSJ article doesn’t seem to be saying anything that other articles haven’t already said. For example, the CHE article states:
“That influence has also led to a creeping sense of cynicism about the business of science publications. Journal editors have learned how to manipulate the system, sometimes through legitimate editorial choices and other times through deceptive practices that artificially inflate their own rankings. Several ecology journals, for example, routinely ask authors to add citations to previous articles from that same journal, a policy that pushes up its impact factor. Authors who have received such requests say that the practice veers toward extortion and represents a violation of scientific ethics.”
and “But the number continues to be so influential that some who run journals try to manipulate the system. “Publishers have become quite expert in skewing it to their own benefit,” says Vitek Tracz, chairman of Current Science Group, which publishes more than 100 open-access journals.
One well-known method is to publish more review articles — those that give overviews of a topic but don’t usually present new data. They generally attract more citations than do original research articles. So when the editorial board of the Journal of Environmental Quality met in 2003, it resolved to emphasize review articles in order to shore up the journal’s slipping impact factor.
Other tactics exploit gaps in the way ISI calculates the impact factor. When journals publish news articles, editorials, book reviews, and abstracts of meetings, ISI does not count those items as “citable articles”; hence they do not go into the denominator of the impact-factor calculation. But if those uncounted items get cited in the literature, ISI still puts those citations into the numerator, thereby increasing the journal’s impact factor.”
and
“Crooked Citations
Editors defend the changes they have made in their journals, arguing that editorials, book reviews, news sections, and similar features are important and popular with readers. But journal watchers point to other, less scrupulous, ways to raise the citation numbers.
Sometimes journals will run editorials that cite numerous articles from previous issues. In a new study, Jan Reedijk, of Leiden University, and Mr. Moed found that a significant number of journals get a noticeable jump in their impact factors from such self-citations in editorials.
In other cases, research articles in a journal preferentially cite that very journal, with the effect of raising its impact factor. ISI detected a clear example of that practice at the World Journal of Gastroenterology. The company stopped listing that journal this year because 85 percent of the citations to the publication were coming from its own pages. (Despite that censure, the journal’s Web site has a moving banner that still trumpets its 2003 impact factor.)
The gaming has grown so intense that some journal editors are violating ethical standards to draw more citations to their publications, say scientists. John M. Drake, a postdoctoral researcher at the National Center for Ecological Analysis and Synthesis, at the University of California at Santa Barbara, sent a manuscript to the Journal of Applied Ecology and received this e-mail response from an editor: “I should like you to look at some recent issues of the Journal of Applied Ecology and add citations to any relevant papers you might find. This helps our authors by drawing attention to their work, and also adds internal integrity to the Journal’s themes.”
Because the manuscript had not yet been accepted, the request borders on extortion, Mr. Drake says, even if it weren’t meant that way. Authors may feel that they have to comply in order to get their papers published. “That’s an abuse of editorial power,” he says, “because of the apparent potential for extortion.””
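The numerator/denominator gap quoted above is easy to see with some arithmetic. A minimal sketch, using entirely hypothetical numbers (the quoted passage gives none), of how citations to “non-citable” items such as editorials and book reviews inflate the impact factor:

```python
# Sketch of the impact-factor gap described in the CHE excerpt:
# citations to editorials, news items, and book reviews count in the
# numerator, but those items are excluded from the denominator of
# "citable articles". All figures below are hypothetical.

def impact_factor(citations_to_articles, citations_to_front_matter, citable_articles):
    """Two-year impact factor: all citations received, divided by
    the number of citable items published in the same window."""
    return (citations_to_articles + citations_to_front_matter) / citable_articles

# A hypothetical journal publishing 200 research articles over two years:
baseline = impact_factor(400, 0, 200)    # no citations to front matter
inflated = impact_factor(400, 100, 200)  # 100 citations to editorials etc.

print(baseline)  # 2.0
print(inflated)  # 2.5 -- a 25% boost without publishing one more article
```

The same asymmetry explains the self-citing editorials mentioned in the “Crooked Citations” passage: an editorial that cites the journal’s own papers adds to the numerator while adding nothing to the denominator.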
Ah, yes. Thanks.
Back to your point about why this is showing up now in WSJ of all places – I don’t know about intent, but one possible consequence is that it may provide further support to those who want to limit spending on academic, university, *public* science endeavors. It seems that these days there is a lot of criticism of science in conservative publications.
Yes, I agree. The ironic thing is that this phenomenon is so much due to the growing pressure on the scientific community to compete for decreasing public funding and to make compromises in order to accept private funding. It’s a manifestation of the distorting influence of commercial pressure on science, and the agenda of these conservative interests is in part to privatize science to an even greater extent.
It would be good if librarians and tenure committees went back and understood the roots of scientific communication: books like The Double Helix, and papers like Julie Virgo’s “A Statistical Procedure for Evaluating the Importance of Scientific Papers,” Library Quarterly 47:415-430 (1977).
Maybe there could be weight and measure and not just inflated citations among friends.