Tenure and Citation Influence Tracking Tools – Yea or Nay?


A faculty member just asked me about using Web of Science citation tracking to prepare for tenure review. I would still not advise anyone against looking up these figures and making sure they have them in hand, but the situation has become more complicated in recent years. For that reason, I wanted to share a lightly edited portion of my response to this faculty member, in case it is of interest to others.


What we did last time was search Web of Science for your articles and the citations to them, and you helped identify which articles were yours rather than those of someone else with a similar name. You also helped identify citations that were variant or garbled forms of the correct citation but still referred to your articles.

Here are a couple of guides on how to do the sorts of things we tried back in the day.

UMich: Citation Analysis Guide (2013): http://guides.lib.umich.edu/content.php?pid=98218

Tufts: Tools for Tenure-Track Faculty (2011): http://researchguides.library.tufts.edu/content.php?pid=158890&sid=1344528

Since the mid-2000s, more tools have become available for checking this type of information, along with new formulas intended to calculate an author’s influence and impact more accurately. The most important to know about are the h-index, in particular, and altmetrics, in general.

Marnett, Alan. H-Index: What It Is and How to Find Yours. (2010) http://www.benchfly.com/blog/h-index-what-it-is-and-how-to-find-yours/

Altmetrics, a manifesto: http://altmetrics.org/manifesto/
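To make the h-index concrete: an author has an h-index of h when h of their papers have each been cited at least h times. Here is a rough sketch of the calculation in Python (my own illustration, not taken from the guides above):

    def h_index(citation_counts):
        # Sort citation counts from most-cited paper to least-cited
        counts = sorted(citation_counts, reverse=True)
        h = 0
        for rank, cites in enumerate(counts, start=1):
            # h is the largest rank at which the paper in that position
            # still has at least that many citations
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Example: five papers cited 10, 8, 5, 4, and 3 times give an h-index of 4
    print(h_index([10, 8, 5, 4, 3]))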

FYI, there is also an emerging conversation in science to the effect that the “Citations + Impact Factor” approach to tenure review is dysfunctional and causing increasing problems in the practice, quality, and credibility of science. Here are a few interesting pieces I’ve read on this recently, in chronological order.

2000

McGarty, C. (2000) The citation impact factor in social psychology: A bad statistic that encourages bad science? Current Research in Social Psychology, 5 (1). pp. 1-16. http://www.uiowa.edu/~grpproc/crisp/crisp.5.1.htm

SNIP:

“In conclusion, it is worth asking how social psychology found itself to be using a poor measure for assessing a matter that is so important to so many of its practitioners. Haslam and McGarty (1998, in press) have argued that scientific practices can be understood as a process of uncertainty management. In psychology uncertainty is customarily dealt with by measuring statistical uncertainty and reducing methodological. Various other forms of uncertainty are frequently banished from formal consideration in the pages of journals and textbooks. Thus, uncertainty that arises from controversial questions involving political and societal matters which might be embarrassing for the field are frequently swept aside. The impact factor is attractive because its seemingly objective nature and the independent status of the statistic’s author (the Institute for Scientific Information) prevents many doubts from ever being formed (thereby banishing uncertainty). The two year impact factor clearly favors journals which publish work by authors who cite their own forthcoming work and who are geographically situated to make their work readily available in preprint form. The measure punishes journals which publish the work of authors who do not have membership of these invisible colleges and is virtually incapable of detecting genuine impact. It is not just a bad measure it is an invitation to do bad science.”
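For readers unfamiliar with the “two year impact factor” McGarty mentions: a journal’s impact factor for a given year is the number of citations received that year by the items the journal published in the previous two years, divided by the number of citable items it published in those two years. A quick sketch with invented figures, purely to show the arithmetic:

    # Invented numbers, just to illustrate the two-year impact factor arithmetic
    citations_in_2012_to_items_from_2010_2011 = 500
    citable_items_published_2010_2011 = 200
    impact_factor_2012 = citations_in_2012_to_items_from_2010_2011 / citable_items_published_2010_2011
    print(impact_factor_2012)  # 2.5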

2009

Goldacre, Ben. Funding and findings: the impact factor. The Guardian, Friday 13 February 2009. http://www.guardian.co.uk/commentisfree/2009/feb/14/bad-science-medical-research

SNIP:

“But Tom Jefferson and colleagues looked, for the first time, at where studies are published. Academics measure the eminence of a journal by its “impact factor”: an indicator of how commonly, on average, research papers in that journal go on to be referred to by other research papers. The average journal impact factor for the 92 government-funded studies was 3.74; for the 52 studies wholly or partly funded by industry, the average impact factor was 8.78. Studies funded by the pharmaceutical industry are massively more likely to get into the bigger, more respected journals.
That’s interesting, because there is no explanation for it. There was no difference in methodological rigour, or quality, between the government-funded research and the industry-funded research. There was no difference in the size of the samples used in the studies. And there’s no difference in where people submit their articles: everybody wants to get into a big famous journal, and everybody tries their arm at it.”

2010

Werner, Yehudah L. The Aspiration to be Good is Bad: The ‘Impact Factor’ Hurts both Science and Society. International Journal of Science in Society, 2010, 1(1): 99-106. http://ijy.cgpublisher.com/product/pub.187/prod.14

ABSTRACT:

“The fruitful aspiration of researchers to be classified as ‘good’ has been mounting, driven by the quantification of research quality and especially by the impact factor (IF). This paper briefly reviews examples of the many known or hypothetical fringe evils. Many universities now evaluate academics by the IF of the journals in which they publish. Because journals, fighting for their IFs, now select papers for brevity and for forecast of being quoted, this mal-affects science in several ways: (1) Much information remains unpublished. (2) Some projects are published in splinters. (3) Scientists avoid unpopular subjects. (4) Innovations are suppressed. (5) Small research fields are being deserted. (6) Active authors recruit inactive coauthors whose name could land the paper with a higher-IF journal, which generates assorted complications. (7) Journals striving to elevate their IF adorn their advisory boards with dignitaries who do not endeavor to help the journal. The IF also shortchanges society more directly, through the ‘quality’-driven choice of research subjects: (1) Academics concentrate on ideas and theories and avoid publishing facts of potential service to society. Thus biologists discuss how species arise, rather than describe new species to enable their conservation. (2) Professors, fighting for their resumés, regard academic neophytes as paper-manufacturing manpower and hinder their developing intellectual independence. Finally, some potential partial remedies are proposed.”

Neylon, Cameron. Warning: Misusing the journal impact factor can damage your science! 6 September 2010. http://cameronneylon.net/blog/warning-misusing-the-journal-impact-factor-can-damage-your-science/

SNIP:

“It seems bizarre that we are still having this discussion. Thomson-Reuters say that the JIF shouldn’t be used for judging individual researchers, Eugene Garfield, the man who invented the JIF has consistently said it should never be used to judge individual researchers. Even a cursory look at the basic statistics should tell any half-competent scientist with an ounce of quantitative analysis in their bones that the Impact Factor of journals in which a given researcher publishes tells you nothing whatsoever about the quality of their work.”

2011

Grant, Richard P. Bye bye, Impact Factor… Faculty of 1000, 24 October 2011. http://blog.f1000.com/2011/10/24/bye-bye-impact-factor/

SNIP:

The UK Government hands out money to its higher education funding bodies, which distribute that money according to the results of the Research Excellence Framework (REF), which will be completed in 2014. Traditionally, the predecessor of the REF (the Research Assessment Exercise) measured ‘impact’ of research by counting numbers of publications in high impact factor journals. Mr Willetts seems to be saying that the Journal Impact Factor will not play a role in the REF:
“Individual universities may have a different perspective on the journals you should have published in when it comes to promotion and recruitment, but the REF process makes no such judgements.”

2012

Vanclay, Jerome K. Impact Factor: outdated artefact or stepping-stone to journal certification? 15 Jan 2012. http://arxiv.org/abs/1201.3076

SNIP:

“However, there are increasing concerns that the impact factor is being used inappropriately and in ways not originally envisaged (Garfield 1996, Adler et al 2008). These concerns are becoming a crescendo, as the number of papers has increased exponentially (figure 1), reflecting the contradiction that editors celebrate any increase in their index, whilst more thoughtful analyses lament the inadequacies of the impact factor and its failure to fully utilize the potential of modern computing and bibliometric sciences. Although fit-for-purpose in the mid 20th century, the impact factor has outlived its usefulness. Has it become, like phrenology, a pseudo-science from a former time?”

Lozano, George A.; Larivière, Vincent; Gingras, Yves. The weakening relationship between the Impact Factor and papers’ citations in the digital age. 19 May 2012. http://arxiv.org/abs/1205.4328

SNIP:

“Third, and even more troubling, is the 3-step approach of using the IF to infer journal quality, extend it to the papers therein, and then use it to evaluate researchers. Our data shows that the high IF journals are losing their stronghold as the sole repositories of high quality papers, so there is no legitimate basis for extending the IF of a journal to its papers, and much less to individual researchers. This is congruent with the finding that over the past decade in economics, the proportion of papers in the top journals produced by people from the top departments has been decreasing (Ellison, 2011). Moreover, given that researchers can be evaluated using a variety of other criteria and bibliometric indicators (e.g., Averch, 1989; Leydesdorff & Bornmann, 2011; Lozano, 2010; Lundberg, 2007; Põder, 2010), evaluating researchers by simply looking at the IFs of the journals in which they publish is both naive and uninformative.”

Lozano, George. The demise of the Impact Factor: The strength of the relationship between citation rates and IF is down to levels last seen 40 years ago. Impact of Social Sciences, June 8, 2012.

SNIP:

“If the pattern continues, the usefulness of the IF will continue to decline, which will have profound implications for science and science publishing. For instance, in their effort to attract high-quality papers, journals might have to shift their attention away from their IFs and instead focus on other issues, such as increasing online availability, decreasing publication costs while improving post-acceptance production assistance, and ensuring a fast, fair and professional review process.”

Curry, Stephen. Sick of Impact Factors. Reciprocal Space August 13, 2012. http://occamstypewriter.org/scurry/2012/08/13/sick-of-impact-factors/

SNIP:

“But the real problem started when impact factors began to be applied to papers and to people, a development that Garfield never anticipated. I can’t trace the precise origin of the growth but it has become a cancer that can no longer be ignored. The malady seems to particularly afflict researchers in science, technology and medicine who, astonishingly for a group that prizes its intelligence, have acquired a dependency on a valuation system that is grounded in falsity. We spend our lives fretting about how high an impact factor we can attach to our published research because it has become such an important determinant in the award of the grants and promotions needed to advance a career. We submit to time-wasting and demoralising rounds of manuscript rejection, retarding the progress of science in the chase for a false measure of prestige.”

One response to “Tenure and Citation Influence Tracking Tools – Yea or Nay?”

  1. There was another great piece I remember from arXiv but couldn’t find last night. Eric Schnell (@ericschnell) reminded me of it this morning, to my delight.

    Brembs, Björn; Munafò, Marcus. Deep Impact: Unintended consequences of journal rank. 16 Jan 2013. http://arxiv.org/abs/1301.3748

    ABSTRACT:
    “Much has been said about the increasing bureaucracy in science, stifling innovation, hampering the creativity of researchers and incentivizing misconduct, even outright fraud. Many anecdotes have been recounted, observations described and conclusions drawn about the negative impact of impact assessment on scientists and science. However, few of these accounts have drawn their conclusions from data, and those that have typically relied on a few studies. In this review, we present the most recent and pertinent data on the consequences that our current scholarly communication system has had on various measures of scientific quality (such as utility/citations, methodological soundness, expert ratings and retractions). These data confirm previous suspicions: using journal rank as an assessment tool is bad scientific practice. Moreover, the data lead us to argue that any journal rank (not only the currently-favored Impact Factor) would have this negative impact. Therefore, we suggest that abandoning journals altogether, in favor of a library-based scholarly communication system, will ultimately be necessary. This new system will use modern information technology to vastly improve the filter, sort and discovery function of the current journal system.”

