
Measuring Research Impact


LibGuide for Faculty

 


Universities and institutes of higher education measure the impact of their research when evaluating applications for recruitment, promotion, and tenure; when seeking research grants; and when preparing renewal and progress reports for current research projects. Universities also measure impact for accreditation and ranking agencies, and to assess the impact of research on government and the social sector. INFLIBNET (a university libraries network headquartered at Ahmedabad and sponsored by MHRD) maintains a portal, IRINS, to collate citations of publications by faculty and the research community: https://irins.org/irins/
 

 

 

Authors choose journals for manuscript submission by evaluating a journal's reputation using several metrics, described below.

 

  1. Impact Factor: The impact factor (IF) is a measure of the frequency with which the average article in a journal has been cited in a particular year. It is also used to rank journals by the number of times their articles are cited.
  2. 5-Year Impact Factor: The average number of times articles from the journal published in the past five years have been cited.
  3. Eigenfactor: Eigenfactor scores are intended to measure how likely a journal is to be used, and are thought to reflect how frequently an average researcher would access content from that journal.
  4. SNIP (Source Normalized Impact per Paper): SNIP measures contextual citation impact by weighting citations based on the total number of citations in a subject field.
  5. SJR (SCImago Journal & Country Rank): The SCImago Journal & Country Rank is a portal that includes the journal and country scientific indicators developed from the information contained in the Scopus database.
  6. Immediacy Index: The Immediacy Index is the average number of times an article is cited in its publication year.



 

 


The following journal-level metrics are used to measure journal impact in Web of Science.

 

  1. Impact Factor: The Impact Factor was developed in the 1960s by Eugene Garfield and is commonly used as a measure of a journal's quality. It can be defined as "the average number of times articles from the journal published over a defined period have been cited."
    It is calculated as the number of citations in the current year to articles published in the journal in the previous n years, divided by the total number of articles published in the journal in those n years. The Impact Factor is available in the Journal Citation Reports (JCR), which provide quantitative tools for ranking, evaluating, categorizing, and comparing journals; the Journal Impact Factor (JIF) is published annually.
    How the Impact Factor is calculated: the calculation is based on a two-year period, dividing the number of times articles were cited by the number of citable articles.

  2. Calculation of the 2010 IF of a journal:
    2010 IF = A / B, where
    A = the number of citations in 2010 to articles published in 2008 and 2009, and
    B = the number of citable articles published in 2008 and 2009.
    (See the worked sketch after this list.)

  3. 5-Year Impact Factor: The average number of times articles from the journal published in the past five years have been cited in the JCR year.
    Broad view: the 5-Year Journal Impact Factor provides a broader view of the citation data, at the expense of granularity.

    From 2007 onwards, the JCR also provides a 5-year Impact Factor. For example, to calculate the 5-year Impact Factor of the "Journal of Accountancy" for the year 2014:

    Year    Cites in 2014 to items published    No. of items published
    2013    18682                               348
    2012    20876                               360
    2011    20265                               349
    2010    17847                               345
    2009    17730                               352
    SUM     95400                               1754

    5-year IF (2014) = 95400 / 1754 ≈ 54.39
     

    *All figures are imaginary, for illustration only.

    Limitations and Cautions
    The typical lag after publication of a paper until peak citation is variable (across papers, across time, across journals, and across subject areas). When the lag is greater than two years (which it often is), a publication’s 5-Year Journal Impact Factor will tend to be higher than its Journal Impact Factor. Journal Impact Factor and 5-Year Journal Impact Factor will typically be identical for the first two years that a publication is covered in the JCR.

  4. Immediacy Index: The Immediacy Index is the average number of times an article is cited in its publication year. It indicates how quickly articles in a journal are cited, and is calculated by dividing the number of citations to articles published in a given year by the number of articles published in that year (see the sketch after this list).

  5. Eigenfactor: The number of times articles published in a journal over the past five years have been cited, with citations from more influential journals weighted more heavily than citations from less influential ones. It can be used to compare journals from different subject fields. It is available in the JCR and freely accessible at http://eigenfactor.org/. It uses information from the entire citation network to measure the importance of each journal, much as Google's PageRank algorithm measures the importance of websites on the world wide web (see the second sketch after this list).

  6. Essential Science Indicators: The ESI database reveals emerging science trends as well as influential individuals, institutions, papers, journals, and countries in a field of research.

    Essential Science Indicators (ESI) is an analytical tool that helps identify top-performing research in the Web of Science Core Collection. ESI surveys more than 11,000 journals from around the world to rank authors, institutions, countries, and journals in 22 broad fields based on publication and citation performance. Data cover a rolling 10-year period, with rankings and citation counts updated every two months. ESI is sourced from the Science Citation Index Expanded (SCIE) and the Social Sciences Citation Index (SSCI) in the Web of Science Core Collection.
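To make the arithmetic above concrete, here is a minimal Python sketch that computes the two-year Impact Factor, the 5-year Impact Factor (reusing the imaginary figures from the table above), and the Immediacy Index. The two-year and immediacy counts are invented purely for illustration.

```python
# Minimal sketch of the ratio-based metrics described above. All figures
# are imaginary example values, not real journal data.

def jcr_ratio(citations: int, items: int) -> float:
    """Generic JCR-style ratio: citations in the census year to items
    published in the window, divided by the number of those items."""
    return citations / items

# Two-year 2010 Impact Factor: citations in 2010 to articles published in
# 2008-2009, divided by citable articles from 2008-2009 (hypothetical counts).
if_2010 = jcr_ratio(citations=1200, items=400)   # -> 3.0

# 5-year 2014 Impact Factor, using the imaginary table above.
cites_2014 = {2013: 18682, 2012: 20876, 2011: 20265, 2010: 17847, 2009: 17730}
items_pub  = {2013: 348,   2012: 360,   2011: 349,   2010: 345,   2009: 352}
five_year_if = jcr_ratio(sum(cites_2014.values()), sum(items_pub.values()))
print(f"5-year IF 2014 = 95400 / 1754 = {five_year_if:.2f}")  # -> 54.39

# Immediacy Index: citations in 2014 to articles published in 2014,
# divided by the number of 2014 articles (hypothetical counts).
immediacy_2014 = jcr_ratio(citations=900, items=348)
```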
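The PageRank-style weighting behind the Eigenfactor can likewise be sketched as a power iteration over a small citation network. The three-journal matrix below is invented, and the real Eigenfactor algorithm additionally excludes journal self-citations and adjusts for article counts; this shows only the core idea.

```python
# Toy power iteration over a journal citation network, in the spirit of
# the Eigenfactor / PageRank idea described above. The matrix is invented.

# cites[i][j] = citations from journal j to journal i, for 3 hypothetical journals.
cites = [
    [0, 40, 10],
    [30, 0, 50],
    [20, 60, 0],
]

n = len(cites)
# Column-normalize so each journal's outgoing citations become weights summing to 1.
col_sums = [sum(cites[i][j] for i in range(n)) for j in range(n)]
P = [[cites[i][j] / col_sums[j] for j in range(n)] for i in range(n)]

# Power iteration: repeatedly redistribute influence along citation links until
# it stabilizes; journals cited by influential journals end up with more.
influence = [1.0 / n] * n
for _ in range(100):
    influence = [sum(P[i][j] * influence[j] for j in range(n)) for i in range(n)]

print([round(w, 3) for w in influence])
```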

 


The following journal-level metrics are used to measure journal impact in Scopus.

 

  1. SJR (SCImago Journal Rank): The SCImago portal, developed by a research group based at the University of Granada, provides SJR rankings of journals. SJR is a weighted count of citations in year X to papers published in the previous three years, with citations from more prestigious journals weighted more heavily than citations from less prestigious journals. It can be used to compare journals from different subject fields.

  2. SNIP (Source Normalized Impact per Paper): SNIP measures contextual citation impact by weighting citations based on the total number of citations in a subject field.
    It is the ratio of the journal's citation count per paper to the citation potential of its subject field, so a single citation counts for more in fields where citations are less likely.
    SNIP's numerator is the journal's impact per publication (IPP): the average number of citations received in a particular year (e.g., 2013) by papers published in the journal during the three preceding years (e.g., 2010, 2011, and 2012).

  3. CiteScore: CiteScore measures the average citations received per peer-reviewed document published in the title.
    CiteScore values are based on citation counts over a four-year window (e.g., 2017-2020) to peer-reviewed documents (articles, reviews, conference papers, data papers, and book chapters) published in those same four calendar years, divided by the number of those documents (see the sketch below).
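A minimal sketch of the CiteScore ratio, and of SNIP's numerator (the IPP), using invented counts:

```python
# CiteScore sketch with invented counts (not real data): citations received
# in 2017-2020 by peer-reviewed documents published in 2017-2020, divided
# by the number of those documents.
citations_2017_2020 = 5200   # hypothetical citation count
documents_2017_2020 = 1300   # hypothetical document count
citescore_2020 = citations_2017_2020 / documents_2017_2020
print(f"CiteScore 2020 = {citescore_2020:.1f}")  # -> CiteScore 2020 = 4.0

# SNIP's numerator, the impact per publication (IPP), uses a three-year
# window: citations in 2013 to papers published in 2010-2012, divided by
# the number of those papers (again hypothetical numbers).
ipp_2013 = 2400 / 800   # -> 3.0
# SNIP itself then divides the IPP by the citation potential of the
# journal's subject field.
```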
     

 

   

Google Scholar Metrics is another source of journal-level metrics. Google Scholar ranks journals grouped by subject category or language.
The metrics used are the h5-index, defined as the h-index for articles published in the last five complete years (the largest number h such that h articles published in, e.g., 2010-2014 have at least h citations each), and the h5-median, the median number of citations for the articles that make up a journal's h5-index.
[Screenshot: Google Scholar ranking of journals in the Developmental Economics area.]
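The h5-index and h5-median follow directly from these definitions. Below is a minimal Python sketch with invented per-article citation counts (not real journal data):

```python
# h5-index / h5-median sketch with invented per-article citation counts
# for the articles a journal published in 2010-2014.
from statistics import median

citations = [48, 33, 30, 24, 17, 12, 9, 6, 3, 1]  # hypothetical

def h5_index(cites):
    ranked = sorted(cites, reverse=True)
    # Largest h such that h articles have at least h citations each.
    return max((i + 1 for i, c in enumerate(ranked) if c >= i + 1), default=0)

h5 = h5_index(citations)                                   # -> 7
h5_median = median(sorted(citations, reverse=True)[:h5])   # -> 24
print(h5, h5_median)
```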

 

 

 

 



Article-level metrics offer a new approach to quantifying the reach and impact of published research. The originator of article-level metrics is the Public Library of Science (PLOS).

In the print world, only the count of citations is taken into account, not how an article is used or who is using it. The discussion and immediate interactions around an article, however, can be captured by altmetrics. The term article-level metrics is often used interchangeably with altmetrics, but they are not synonymous.

To measure the impact of an article, both traditional data points and data from social media sources are considered.

Altmetrics: a new form of measuring research impact that adds a wider set of metrics to traditional bibliographic rankings based on academic journal citation analysis.
ImpactStory: an open-source, web-based tool that provides open metrics and shares the diverse impact of all research products.
Altmetric and PlumX are commercial ventures.



Resources for article-level metrics: the following resources provide article-level metrics.

  • Altmetric: Altmetric tracks and analyzes online activity (mainstream media, social media, and publisher download data) for mentions of scholarly literature. This data is used to calculate an Altmetric score, a measure of the quality and quantity of attention that an article has received.
  • PLOS Article-Level Metrics: PLOS provides article-level metrics, such as the number of views and downloads, for articles published in its journals.
  • Bookmetrix: This platform provided the number of citations, downloads, online mentions, and reviews for books and book chapters published by Springer, Palgrave Macmillan, and Apress (Bookmetrix.com is now offline).
  • Dimensions: Dimensions is a discovery and analytics platform that provides article-level metrics, including the number of citations, Relative Citation Ratio, and altmetrics, for more than 90 million publications.
  • CINAHL: This database indexes nursing, consumer health, and allied health journals, trade publications, dissertations, and books. CINAHL provides article-level metrics, such as the number of abstract views or mentions on Twitter, from Plum Analytics.
  • Scopus (Plumx Metrics): Scopus provides the number of citations per article, as well as other article-level metrics, such as the number of abstract views or mentions on Twitter, from Plum Analytics.
    1. PlumX Metrics provides insights into the ways people interact with individual pieces of research output (articles, conference proceedings, book chapters, and many more) in the online environment.
    2. PlumX gathers and brings together appropriate research metrics for all types of scholarly research output.
    3. PlumX categorizes metrics into five separate categories: Citations, Usage, Captures, Mentions, and Social Media.
  • Web of Science: Web of Science provides the number of citations per article.
  • Google Scholar: searches scholarly literature available on the web and provides the number of citations per article.
  • PubMed: PubMed is a database of more than 30 million citations for biomedical literature from MEDLINE, life science journals, and online books.
  • Mendeley: Mendeley provides information on readership (included in Scopus PlumX Metrics).


Altmetrics Tools

As explained above, article-level metrics (ALMs) incorporate data points from a variety of data sources, some traditional (e.g., times cited) and some new (e.g., tweets). A number of tools have emerged to capture and display these alternative metrics, or altmetrics; several of the more prominent ones are listed below.

  • Altmetric: Backed by Digital Science, Macmillan's technology incubator, Altmetric has been adopted by Springer, Nature Publishing Group, Scopus, and BioMed Central, among others. Altmetric tracks social media sites, newspapers, and magazines for any mentions of hundreds of thousands of scholarly articles, then creates a score for each article: a quantitative measure of the quality and quantity of attention the article has received. It is based on three main factors: the number of individuals mentioning a paper, where the mentions occurred (e.g., a newspaper, a tweet), and how often the author of each mention talks about scholarly articles (a simplified sketch of source-weighted scoring appears after this list). Altmetric is a for-profit entity.
  • ImpactStory: ImpactStory is an open-source altmetric tool whose code is freely available for anyone to use. ImpactStory (formerly Total Impact) draws from a variety of social and scholarly data sources, including Facebook, Twitter, CiteULike, Delicious, PubMed, Scopus, CrossRef, ScienceSeeker, Mendeley, Wikipedia, SlideShare, Dryad, and figshare. ImpactStory normalizes metrics based on a sample of articles published in the same year; altmetrics are reported both as raw scores and as percentiles compared to other articles. ImpactStory offers a free widget to embed metrics on any web page. It is a nonprofit entity.
  • Plum Analytics: A Seattle-based technology startup, Plum Analytics aims to track metrics for nearly two dozen types of outputs, including journal articles, book chapters, datasets, presentations, and source code. Its product provides custom reports intended to quantify departmental productivity, support grant proposals, and address other impact-related questions. PlumX is marketed to universities and other research institutions as a way to track researchers' productivity.
  • PLOS: PLOS has developed and released a Ruby on Rails application that stores and reports user-configurable performance data on research articles. The open-source utility can be customized to track ALMs for specific articles and to include additional data sources for deriving the metrics. The code has been available since 200
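Altmetric's actual weighting scheme is proprietary; the following is only a toy sketch of the general idea of source-weighted mention counting, with invented weights that are not Altmetric's values.

```python
# Purely illustrative source-weighted mention count. The weights are
# invented and are NOT Altmetric's proprietary values.
MENTION_WEIGHTS = {"news": 8.0, "blog": 5.0, "tweet": 1.0, "facebook": 0.25}

def attention_score(mentions: dict) -> float:
    """Sum of mention counts weighted by source type."""
    return sum(MENTION_WEIGHTS.get(source, 0.0) * count
               for source, count in mentions.items())

print(attention_score({"news": 2, "blog": 1, "tweet": 40}))  # -> 61.0
```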


Limitations of article-level metrics can also stem from differing social-media practices across academic disciplines and countries. Even within a discipline, some article types may be more attractive for social media than others, confounding the metric calculation and necessitating evaluation in context.



Author impact is quantified in terms of the number of citations to an author's work. One of the challenges of tracking author impact is assembling a complete publication list, which is made easier by creating an author profile with a persistent identifier such as an ORCID iD or ResearcherID.


Author metrics are used to track how often an author's work is cited, and demonstrate the reach and impact of a researcher's work, for use in grant applications, tenure, promotion and performance reviews. An author's impact is frequently quantified in terms of the number of citations to their publications. 
 


 


H-Index – proposed by J.E. Hirsch in 2005, the h-index measures a researcher's impact based on the number of citations to their work.

h-index = the number of papers (h) with a citation count ≥ h. For example, if an author has an h-index of 9, then 9 of that author's publications have been cited at least 9 times each.
Please note that the name you publish under may affect your h-index; search for every variation of your name used in your publications. Web of Science and Scopus are the main source databases for the author h-index.
Publish or Perish can also be used: merge its data with that from Web of Science and Scopus, then deduplicate the results.
Note: The h-index is not widely used outside the sciences. Social science scholars may find the journal h-index useful when selecting where to publish.
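A minimal sketch of the h-index calculation, using invented per-paper citation counts:

```python
# h-index sketch: the largest h such that h papers have >= h citations each.
def h_index(citations: list) -> int:
    ranked = sorted(citations, reverse=True)
    return max((i + 1 for i, c in enumerate(ranked) if c >= i + 1), default=0)

# Hypothetical author with 12 papers; 9 of them cited at least 9 times.
paper_cites = [25, 19, 15, 14, 12, 11, 10, 9, 9, 4, 2, 0]
print(h_index(paper_cites))  # -> 9
```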

 

 

Harzing's Publish or Perish manual explains that the g-index is calculated from the distribution of citations received by a researcher's publications. For example, a g-index of 20 means that an academic has published at least 20 articles that combined have received at least 400 citations. It was proposed in 2006 by Leo Egghe.
One of the main advantages of the g-index is that it gives credit to lesser-cited or uncited work while still attributing weight to highly cited papers.
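The g-index follows directly from the definition above: the largest g such that the g most-cited papers together have at least g² citations. A sketch with invented counts:

```python
# g-index sketch: the largest g such that the g most-cited papers have,
# combined, at least g^2 citations. Citation counts are invented.
def g_index(citations: list) -> int:
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for i, c in enumerate(ranked, start=1):
        total += c
        if total >= i * i:
            g = i
    return g

paper_cites = [100, 80, 60, 40, 30, 20, 10, 10, 5, 5]
print(g_index(paper_cites))  # -> 10 (360 total citations >= 10^2)
```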

 

The i10-Index, used solely by Google Scholar, was introduced in July 2011. It calculates the number of academic publications an author has written that have at least ten citations from others. This is one way to gauge the productivity of an author.
i10-Index = the number of publications with at least 10 citations. Learn more:
• About Google Scholar metrics
Note: In order to use the i10-Index authors must have a public Google Scholar profile.
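The i10-index is a simple count; here is a sketch using the same invented citation counts as the h-index example above:

```python
# i10-index sketch: count of publications with at least 10 citations.
paper_cites = [25, 19, 15, 14, 12, 11, 10, 9, 9, 4, 2, 0]  # invented
i10 = sum(1 for c in paper_cites if c >= 10)
print(i10)  # -> 7
```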


    • It is worth remembering that the key measuring tools (databases) only gather data from the journals they index. Some are more comprehensive and cover more disciplines than others, so it is best to use more than one method to track citations and evaluate author impact. Also note that other material, such as abstracts or conference papers, is generally not included.
    • Calculating the h-index: Web of Science, Scopus or Google Scholar? This guide from MyRI discusses the pros and cons of using each database to calculate the h-index.
    • Author naming inconsistencies can lead to missed citations and, in some cases, multiple entries, so authors must check their entries in the databases carefully. See the Researcher Profiles page for more information on author name ambiguity and attribution.
    • h-index citation counts do not necessarily reflect research impact in the humanities; as mentioned above, the key measuring tools are not always comprehensive in covering humanities disciplines.
    • Many of the measuring tools are skewed towards STEM (science, technology, engineering, medicine) research.
    • Web of Science counts the number of papers published, and therefore favours authors who publish more frequently and are more advanced in their careers over early-career researchers.
    • Like other tools, the citation counts do not measure the number of times a work has been read or accessed (see Altmetrics). For more information, see York University Libraries' Limitations of Bibliometrics.

SPARC: SPARC (the Scholarly Publishing and Academic Resources Coalition) works to enable the open sharing of research outputs and educational materials in order to democratize access to knowledge, accelerate discovery, and increase the return on our investment in research and education.
COPE: COPE (Committee on Publication Ethics) is committed to educating and supporting editors, publishers, and those involved in publication ethics, with the aim of moving the culture of publishing towards one where ethical practice becomes a normal part of publishing.
LIBER: LIBER (Ligue des Bibliothèques Européennes de Recherche) is an association of European research libraries that promotes world-class research. Its network covers organizations in Europe and beyond and is built on goal-oriented partnerships. LIBER's 2018-2022 strategy focuses on key topics including copyright reform, digital humanities, open access, metrics, and research data management.