Impact factor

This article is about a measure of journal influence. For other similar metrics, see Citation impact.

The impact factor (IF) of an academic journal is a measure reflecting the yearly average number of citations to recent articles published in that journal. It is frequently used as a proxy for the relative importance of a journal within its field; journals with higher impact factors are often deemed to be more important than those with lower ones. The impact factor was devised by Eugene Garfield, the founder of the Institute for Scientific Information. Impact factors are calculated yearly starting from 1975 for those journals that are listed in the Journal Citation Reports.


Calculation

In any given year, the impact factor of a journal is the number of citations received in that year by articles published in that journal during the two preceding years, divided by the total number of articles published in that journal during the two preceding years.[1] For example, if a journal has an impact factor of 3 in 2008, then its papers published in 2006 and 2007 received 3 citations each on average in 2008.
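
In symbols (notation introduced here for convenience), write C_y for the citations received in year y by items published in years y−1 and y−2, and N_{y−1} and N_{y−2} for the numbers of citable items published in those two years. The definition above then reads:

\[
\mathrm{IF}_y = \frac{C_y}{N_{y-1} + N_{y-2}}
\]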

Note that 2008 impact factors are actually published in 2009; they cannot be calculated until all of the 2008 publications have been processed by the indexing agency.

New journals, which are indexed from their first published issue, will receive an impact factor after two years of indexing; in this case, the citation count and article count for the year prior to Volume 1 are known to be zero. Journals that are indexed starting with a volume other than the first will not receive an impact factor until they have been indexed for three years. Occasionally, Thomson Reuters assigns an impact factor to new journals with fewer than two years of indexing, based on partial citation data.[2][3] The calculation always uses two complete and known years of item counts, but for new titles one of the known counts is zero. Annuals and other irregular publications sometimes publish no items in a particular year, which affects the count.

The impact factor relates to a specific time period; it is possible to calculate it for any desired period, and the Journal Citation Reports (JCR) also includes a five-year impact factor.[4] The JCR shows rankings of journals by impact factor, optionally by discipline, such as organic chemistry or psychiatry.
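
The mechanics, including the new-journal case, can be sketched in a few lines of code (illustrative only: the function name and the numbers are invented here, not Thomson Reuters' actual procedure):

```python
def impact_factor(citations, items_prev_year, items_two_years_ago):
    """Two-year impact factor for JCR year y (a sketch, not TR's code).

    citations:           citations received in year y by items published
                         in years y-1 and y-2
    items_prev_year:     citable items published in year y-1
    items_two_years_ago: citable items published in year y-2
    """
    denominator = items_prev_year + items_two_years_ago
    if denominator == 0:
        raise ValueError("no citable items in the two-year window")
    return citations / denominator

# The 2008 example above: items published in 2006-2007 were cited
# 3 times each on average during 2008 (e.g. 120 citations to 40 items).
print(impact_factor(120, 25, 15))  # 3.0

# A journal whose Volume 1 appeared in 2007: the count for 2006,
# the year prior to Volume 1, is a known zero value.
print(impact_factor(30, 15, 0))    # 2.0
```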


Use

The impact factor is used to compare different journals within a certain field. The Web of Science indexes more than 11,000 science and social science journals.[5][6]

It is possible to examine the impact factor of the journals in which a particular person has published articles. This use is widespread, but controversial. Garfield warns about the "misuse in evaluating individuals" because there is "a wide variation from article to article within a single journal".[7] Impact factors have a large, but controversial, influence on the way published scientific research is perceived and evaluated.

Some companies produce counterfeit impact factors (see Counterfeit impact factors below).[8]


Criticisms

Numerous criticisms have been made regarding the use of impact factors.[9][10] For one thing, the impact factor might not be consistently reproduced in an independent audit.[11] There is also a more general debate on the validity of the impact factor as a measure of journal importance and on the effect of policies that editors may adopt to boost their impact factor (perhaps to the detriment of readers and writers). Other criticism focuses on the effect of the impact factor on the behavior of scholars, editors, and other stakeholders.[12][13] Still others have criticized the impact factor against the institutional background of neoliberal academia, arguing that what is needed is not just its replacement with more sophisticated metrics but a democratic discussion on the social value of research assessment and the growing precariousness of scientific careers.[14][15][16]

Validity as a measure of importance

It has been stated that impact factors and citation analysis in general are affected by field-dependent factors[17] which may invalidate comparisons not only across disciplines but even within different fields of research of one discipline.[18] The percentage of total citations occurring in the first two years after publication also varies widely among disciplines, from 1–3% in the mathematical and physical sciences to 5–8% in the biological sciences.[19] Thus impact factors cannot be used to compare journals across disciplines.

The impact factor is based on the arithmetic mean number of citations per paper, yet citation counts have highly skewed distributions,[20] making the arithmetic mean potentially misleading if used to gauge the typical impact of articles in the journal rather than the overall impact of the journal itself.[21] For example, about 90% of Nature's 2004 impact factor was based on only a quarter of its publications, and thus the actual number of citations for a single article in the journal is in most cases much lower than the mean number of citations across articles.[22] Furthermore, the strength of the relationship between impact factors of journals and the citation rates of the papers therein has been steadily decreasing since articles began to be available digitally.[23]
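
The consequence of the skew is easy to demonstrate with invented numbers (a sketch only; these are not any journal's real citation counts):

```python
import statistics

# Hypothetical citation counts for 20 articles in one journal:
# a handful of highly cited papers dominate the total.
citations = [0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 5, 5, 6, 8, 40, 60, 90]

mean = sum(citations) / len(citations)  # 12.0, what an impact factor reflects
median = statistics.median(citations)   # 3.0, the "typical" article
print(mean, median)
# The three most-cited articles supply 190 of the 240 citations (about 79%),
# so the mean says little about most papers in the journal.
```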

Indeed, impact factors are sometimes used to evaluate not only the journals but the papers therein, thereby devaluing papers in certain subjects.[24] The Higher Education Funding Council for England was urged by the House of Commons Science and Technology Select Committee to remind Research Assessment Exercise panels that they are obliged to assess the quality of the content of individual articles, not the reputation of the journal in which they are published.[25] The effect of outliers can be seen in the case of the article "A short history of SHELX", which included this sentence: "This paper could serve as a general literature citation when one or more of the open-source SHELX programs (and the Bruker AXS version SHELXTL) are employed in the course of a crystal-structure determination". This article received more than 6,600 citations. As a consequence, the impact factor of the journal Acta Crystallographica Section A rose from 2.051 in 2008 to 49.926 in 2009, higher than those of Nature (31.434) and Science (28.103).[26] The second-most cited article in Acta Crystallographica Section A in 2008 had only 28 citations.[27] It is also important to note that the impact factor is a journal metric and should not be used to assess individual researchers or institutions.[28][29]
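
A back-of-the-envelope estimate from the figures quoted above shows the scale of the outlier effect: if the roughly 6,600 SHELX citations account for essentially the whole rise, the journal's two-year denominator must have been on the order of

\[
\frac{6600}{49.926 - 2.051} \approx 138 \text{ citable items},
\]

so a single paper contributed nearly 48 points of impact factor on its own.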

Journal rankings constructed based solely on impact factors only moderately correlate with those compiled from the results of expert surveys.[30]

A.E. Cawkell, sometime Director of Research at the Institute for Scientific Information, remarked that the Science Citation Index (SCI), on which the impact factor is based, "would work perfectly if every author meticulously cited only the earlier work related to his theme; if it covered every scientific journal published anywhere in the world; and if it were free from economic constraints."[31]

Editorial policies that affect the impact factor

A journal can adopt editorial policies to increase its impact factor.[32][33] For example, journals may publish a larger percentage of review articles, which are generally cited more than research reports.[34] Review articles can thus raise a journal's impact factor, and review journals will therefore often have the highest impact factors in their respective fields.[35] Some journal editors set their submission policy to "by invitation only" so as to invite exclusively senior scientists to publish "citable" papers that will increase the journal's impact factor.[35]

Journals may also attempt to limit the number of "citable items"—i.e., the denominator of the impact factor equation—either by declining to publish articles that are unlikely to be cited (such as case reports in medical journals) or by altering articles (e.g., by not allowing an abstract or bibliography, in the hope that Thomson Scientific will not deem them "citable items"). As a result of negotiations over whether items are "citable", variations of more than 300% in the impact factor have been observed.[36] Items considered uncitable—and thus not incorporated in impact factor calculations—can, if cited, still enter the numerator of the equation, despite the ease with which such citations could be excluded. This effect is hard to evaluate, because the distinction between editorial comment and short original articles is not always obvious: letters to the editor, for example, may fall into either class.
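
A hypothetical example illustrates the leverage involved: a journal whose content attracts 900 citations to the two preceding years has an impact factor of 900/600 = 1.5 if 600 of its items are counted as citable, but 900/150 = 6.0 if negotiation reclassifies all but 150 of them as front matter, a fourfold difference from the same citation record.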

Another, less insidious, tactic is for a journal to publish a large portion of its papers, or at least the papers expected to be highly cited, early in the calendar year, giving those papers more time to gather citations. Several methods, not necessarily with nefarious intent, exist for a journal to cite its own articles, which will increase its impact factor.[37][38]

Beyond editorial policies that may skew the impact factor, journals can take overt steps to game the system. For example, in 2007, the specialist journal Folia Phoniatrica et Logopaedica, with an impact factor of 0.66, published an editorial that cited all its articles from 2005 to 2006 in a protest against the "absurd scientific situation in some countries" related to use of the impact factor.[39] The large number of citations meant that the impact factor for that journal increased to 1.44. As a result of the increase, the journal was not included in the 2008 and 2009 Journal Citation Reports.[40]

Coercive citation is a practice in which an editor forces an author to add extraneous citations to an article before the journal will agree to publish it, in order to inflate the journal's impact factor. A survey published in 2012 indicates that coercive citation has been experienced by one in five researchers working in economics, sociology, psychology, and multiple business disciplines, and it is more common in business and in journals with a lower impact factor.[41] However, cases of coercive citation have occasionally been reported for other scientific disciplines.[42]


Because "the impact factor is not always a reliable instrument", in November 2007 the European Association of Science Editors (EASE) issued an official statement recommending "that journal impact factors are used only—and cautiously—for measuring and comparing the influence of entire journals, but not for the assessment of single papers, and certainly not for the assessment of researchers or research programmes".[10]

In July 2008, the International Council for Science (ICSU) Committee on Freedom and Responsibility in the Conduct of Science (CFRS) issued a "statement on publication practices and indices and the role of peer review in research assessment", suggesting many possible solutions—for example, considering only a limited number of publications per year for each scientist, or even penalising scientists for an excessive number of publications per year (e.g., more than 20).[43]

In February 2010, the Deutsche Forschungsgemeinschaft (German Research Foundation) published new guidelines requiring that only articles, and no bibliometric information on candidates, be evaluated in all decisions concerning "performance-based funding allocations, postdoctoral qualifications, appointments, or reviewing funding proposals, [where] increasing importance has been given to numerical indicators such as the h-index and the impact factor".[44] This decision followed similar ones by the National Science Foundation (US) and the Research Assessment Exercise (UK).

In response to growing concerns over the inappropriate use of journal impact factors in evaluating scientific outputs and scientists themselves, the American Society for Cell Biology, together with a group of editors and publishers of scholarly journals, created the San Francisco Declaration on Research Assessment (DORA). Released in May 2013, DORA has garnered support from thousands of individuals and hundreds of institutions,[45] including in March 2015 the League of European Research Universities (a consortium of 21 of the most renowned research universities in Europe),[46] which have endorsed the document on the DORA website.

Université de Montréal, Imperial College London, PLOS, eLife, EMBO Journal, The Royal Society, Nature and Science have proposed citation distribution metrics as an alternative to impact factors.[47][48][49]

Closely related indices

Some related values, also calculated and published by the same organization, include:

- the immediacy index: the number of citations the articles in a journal receive in a given year, divided by the number of articles published in that same year;[50]
- the cited half-life: the median age of the articles in the journal that were cited in the Journal Citation Reports each year.[50]

As with the impact factor, there are some nuances to these measures: for example, ISI excludes certain article types (such as news items, correspondence, and errata) from the denominator.[51][52][53]
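
Using the JCR definition quoted in the references,[53] the immediacy index for year y can be written analogously to the impact factor:

\[
\text{Immediacy Index}_y = \frac{\text{citations received in year } y \text{ to items published in year } y}{\text{items published in year } y}
\]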

Other measures of impact

Main article: Citation metrics
Further information: Scientometrics

Additional journal-level metrics are available from other organizations. The measures above apply only to journals, not to individual scientists, unlike author-level metrics such as the h-index. Article-level metrics measure impact at the article level instead of the journal level. Other, more general alternative metrics, or "altmetrics", may include article views, downloads, or mentions in social media.


Counterfeit impact factors

Fake impact factors are produced by companies not affiliated with Thomson Reuters (TR).[8] These are often used by predatory publishers.[54] Consulting TR's master journal list can confirm whether a publication is indexed by TR, which is a necessary (but not sufficient) condition for obtaining an impact factor.[55]

References

  1. "Journal Citation Reports: Impact Factor". Retrieved 2016-09-12.
  2. "RSC Advances receives its first partial impact factor", June 24, 2013. Retrieved on May 21st 2015.
  3. "Our first (partial) impact factor and our continuing (full) story", July 30th, 2014. Retrieved on May 21st 2015.
  4. "JCR with Eigenfactor". Retrieved 2009-08-26.
  5. "Web of Knowledge > Real Facts > Quality and Quantity". Retrieved 2010-05-05.
  6. "Thomson Reuters Master Journal List". Thomson Reuters. Retrieved 2013-06-20.
  7. Eugene Garfield (June 1998). "The Impact Factor and Using It Correctly". Der Unfallchirurg. 101 (6): 413–414. PMID 9677838.
  8. Jalalian M (2015). "The story of fake impact factor companies and how we detected them". Electronic Physician. 7 (2): 1069–72. doi:10.14661/2015.1069-1072. PMC 4477767. PMID 26120416.
  9. "Time to remodel the journal impact factor". Nature (journal). 535 (466). 2016. doi:10.1038/535466a.
  10. 1 2 "European Association of Science Editors (EASE) Statement on Inappropriate Use of Impact Factors". Retrieved 2012-07-23.
  11. Rossner, M.; Van Epps, H.; Hill, E. (17 December 2007). "Show me the data". Journal of Cell Biology. 179 (6): 1091–2. doi:10.1083/jcb.200711140. PMC 2140038Freely accessible. PMID 18086910.
  12. Wesel, M. van (2016). "Evaluation by Citation: Trends in Publication Behavior, Evaluation Criteria, and the Strive for High Impact Publications". Science and Engineering Ethics. 22 (1): 199–225. doi:10.1007/s11948-015-9638-0.
  13. Moustafa, Khaled (2015). "The disaster of the impact factor". Science and Engineering Ethics. 21 (1): 139–142. doi:10.1007/s11948-014-9517-0. PMID 24469472.
  14. Brembs, B.; Button, K.; Munafò, M. (2013). "Deep impact: Unintended consequences of journal rank". Frontiers in Human Neuroscience. 7 (291): 1–12. doi:10.3389/fnhum.2013.00291.
  15. Kansa, E. (11 December 2013). "It's the neoliberalism, stupid: Why instrumentalist arguments for open access, open data, and open science are not enough".
  16. Cabello, F.; Rascón, M.T. (2015). "The Index and the Moon. Mortgaging Scientific Evaluation". International Journal of Communication. 9: 1880–1887.
  17. Bornmann, L.; Daniel, H. D. (2008). "What do citation counts measure? A review of studies on citing behavior". Journal of Documentation. 64 (1): 45–80. doi:10.1108/00220410810844150.
  18. Anauati, Maria Victoria; Galiani, Sebastian; Gálvez, Ramiro H. (11 November 2014). "Quantifying the Life Cycle of Scholarly Articles Across Fields of Economic Research". Available at SSRN.
  19. Erjen van Nierop (2009). "Why Do Statistics Journals Have Low Impact Factors?". Statistica Neerlandica. 63 (1): 52–62. doi:10.1111/j.1467-9574.2008.00408.x.
  21. Joint Committee on Quantitative Assessment of Research (12 June 2008). "Citation Statistics" (PDF). International Mathematical Union.
  22. "Not-so-deep impact". Nature. 435 (7045): 1003–1004. 23 June 2005. doi:10.1038/4351003b. PMID 15973362.
  23. Lozano, George A.; Larivière, Vincent; Gingras, Yves (2012). "The weakening relationship between the impact factor and papers' citations in the digital age". Journal of the American Society for Information Science and Technology. 63 (11): 2140–2145. doi:10.1002/asi.22731.
  24. John Bohannon (2016). "Hate journal impact factors? New study gives you one more reason". Science. doi:10.1126/science.aag0643.
  25. "House of Commons – Science and Technology – Tenth Report". 2004-07-07. Retrieved 2008-07-28.
  26. Grant, Bob (21 June 2010). "New impact factors yield surprises". The Scientist. Retrieved 31 March 2011.
  27. "What does it mean to be #2 in Impact?", Thomson Reuters Community.
  28. Seglen, P. O. (1997). "Why the impact factor of journals should not be used for evaluating research". BMJ. 314 (7079): 498–502. doi:10.1136/bmj.314.7079.497. PMC 2126010Freely accessible. PMID 9056804.
  29. "EASE Statement on Inappropriate Use of Impact Factors". European Association of Science Editors. November 2007. Retrieved 2013-04-13.
  30. Serenko, A.; Dohan, M. (2011). "Comparing the expert survey and citation impact journal ranking methods: Example from the field of Artificial Intelligence" (PDF). Journal of Informetrics. 5 (4): 629–648. doi:10.1016/j.joi.2011.06.002.
  31. Cawkell, Anthony E. (1977). "Science perceived through the Science Citation Index" (PDF). Endeavour. 1 (2): 57–62.
  32. Monastersky, Richard (14 October 2005). "The Number That's Devouring Science". The Chronicle of Higher Education.
  33. Arnold, Douglas N.; Fowler, Kristine K. (2011). "Nefarious Numbers". Notices of the American Mathematical Society. 58 (3): 434–437. arXiv:1010.0278. Bibcode:2010arXiv1010.0278A.
  34. Garfield, Eugene (20 June 1994). "The Thomson Reuters Impact Factor". Thomson Reuters.
  35. Moustafa, Khaled (2014). "The disaster of the impact factor". Science and Engineering Ethics. 21 (1): 139–142. doi:10.1007/s11948-014-9517-0. PMID 24469472.
  36. PLoS Medicine Editors (6 June 2006). "The Impact Factor Game". PLoS Medicine. 3 (6): e291. doi:10.1371/journal.pmed.0030291. PMC 1475651. PMID 16749869.
  37. Agrawal, A. (2005). "Corruption of Journal Impact Factors" (PDF). Trends in Ecology and Evolution. 20 (4): 157. doi:10.1016/j.tree.2005.02.002. PMID 16701362.
  38. Fassoulaki, A.; Papilas, K.; Paraskeva, A.; Patris, K. (2002). "Impact factor bias and proposed adjustments for its determination". Acta Anaesthesiologica Scandinavica. 46 (7): 902–5. doi:10.1034/j.1399-6576.2002.460723.x. PMID 12139549.
  39. Schutte, H. K.; Svec, J. G. (2007). "Reaction of Folia Phoniatrica et Logopaedica on the Current Trend of Impact Factor Measures". Folia Phoniatrica et Logopaedica. 59 (6): 281–285. doi:10.1159/000108334. PMID 17965570.
  40. "Journal Citation Reports – Notices". Archived from the original on 2010-05-15. Retrieved 2009-09-24.
  41. Wilhite, A. W.; Fong, E. A. (2012). "Coercive Citation in Academic Publishing". Science. 335 (6068): 542–3. Bibcode:2012Sci...335..542W. doi:10.1126/science.1212540. PMID 22301307.
  43. "International Council for Science statement". 2014-05-02. Retrieved 2014-05-18.
  44. DFG press release.
  45. Cabello, F.; Rascón, M.T. (2015). "The Index and the Moon. Mortgaging Scientific Evaluation". International Journal of Communication. 9: 1880–1887.
  46. Original LERU press release.
  47. Veronique Kiermer (2016). "Measuring Up: Impact Factors Do Not Reflect Article Citation Rates". PLOS.
  48. "Ditching Impact Factors for Deeper Data". Retrieved 2016-07-29.
  49. "Scientific publishing observers and practitioners blast the JIF and call for improved metrics.". Retrieved 2016-03-08.
  50. "Impact Factor, Immediacy Index, Cited Half-life". Swedish University of Agricultural Sciences. Archived from the original on 23 May 2008. Retrieved 30 October 2016.
  51. "Bibliometrics (journal measures)". Elsevier. Retrieved 2012-07-09. a measure of the speed at which content in a particular journal is picked up and referred to
  52. "Glossary of Thomson Scientific Terminology". Thomson Reuters. Retrieved 2012-07-09.
  53. "Journal Citation Reports Contents -- Immediacy Index" ((online)). Thomson Reuters. Retrieved 2012-07-09. The Immediacy Index is the average number of times an article is cited in the year it is published. The journal Immediacy Index indicates how quickly articles in a journal are cited. The aggregate Immediacy Index indicates how quickly articles in a subject category are cited.
  54. Jeffrey Beall. "Scholarly Open-Access - Fake impact factors".
  55. "Thomson Reuters Interllectual Property & Science Master Journal List".
