Self-adapted Evaluation System
In my previous post, I discussed the perverse incentives in current academia, drawing on Dr. Edwards's paper. I believe there is a critical need to adopt a more intelligent and reasonable evaluation system, or at least better quantitative metrics. The impact factor (IF) is defined as the average number of citations received in the past two years per article across an entire journal, and it does offer some insight into the scientific value of the work published there. Because of how it is calculated, however, the IF obviously cannot reflect the quality of any specific paper. Other similar metrics, the h-index for instance, put young scholars at a disadvantage and fail to normalize raw citation counts. Currently, we place too much emphasis on these quantitative metrics in funding applications, promotions, and international rankings. Yet it is also not possible to abandon quantitative performance metrics completely in such a short time. The best solution is to propose alternatives that better represent scientific work and a scholar's academic influence.

In 2013, a novel impact factor, IF', was proposed with some unique features: (1) a more intelligent treatment of the citation distribution curve is built into IF', i.e. larger values of IF' for a sharp citation distribution over time and smaller values for a flat one; (2) IF' is based on citations from the last three years, instead of the two-year window used in the normal IF calculation; and (3) IF' is usually consistent with IF and falls in the same order of magnitude. With these features, IF' gives a more comprehensive evaluation of a scholar with richer data, which should increase its acceptance in current academic communities as a complementary parameter to the IF.
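To make the windowing difference concrete, here is a toy sketch of the journal-level calculation. The journal, article counts, and citation numbers are invented for illustration, and this shows only the three-year-window aspect of IF'; the post describes IF' qualitatively, so this is not the published IF' formula.

```python
# Toy illustration of a journal impact factor with a configurable
# citation window. All numbers below are made up for illustration.

def impact_factor(citations_by_year, articles_by_year, year, window=2):
    """Average citations received in `year` per article published in
    the previous `window` years (the classic IF uses window=2)."""
    cited_years = range(year - window, year)
    total_cites = sum(citations_by_year[(y, year)] for y in cited_years)
    total_articles = sum(articles_by_year[y] for y in cited_years)
    return total_cites / total_articles

# citations_by_year[(published, citing)] = citations made in the
# `citing` year to articles published in the `published` year
citations = {(2010, 2013): 60, (2011, 2013): 120, (2012, 2013): 180}
articles = {2010: 40, 2011: 50, 2012: 50}

classic_if = impact_factor(citations, articles, 2013, window=2)  # 300/100 = 3.0
three_year = impact_factor(citations, articles, 2013, window=3)  # 360/140
```

With these made-up numbers the two-year and three-year values land in the same order of magnitude, consistent with feature (3) above.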
Recently, a more advanced quantification tool, iCite, was developed at the National Institutes of Health (NIH), built around a new metric named the Relative Citation Ratio (RCR). iCite is a newly developed tool for accessing a dashboard of bibliometrics for academic publications. Scholars can upload the PubMed IDs of articles (from SPIRES or PubMed) and get a full portfolio. iCite outputs the number of articles, articles per year, citations per year, and the Relative Citation Ratio (a field-normalized metric that shows the citation impact of one or more articles relative to the average NIH-funded paper). For customized analysis, the scholar can choose the year range and the article type (all articles, or only research articles), and individual articles can be excluded. The final report table with article-level detail can be downloaded by the scholar for future use. Below is my personal example from a simple entry of my name.
Within the iCite evaluation, the cites/year column reports the maximum, the mean, the standard error of the mean (SEM), and the median (MED) across your papers. With these statistics, the system provides a more comprehensive assessment of your research output. In the following column, RCR is a citation-based measure of your publications' influence. A paper with an RCR of 1.0 received the same number of cites/year as the average NIH-funded paper in its field. With this dynamic comparison within your own field, the evaluation is more direct and less biased. The iCite tool thus offers information complementary to the current impact factor and h-index. As the authors put it, “Scientists and administrators agree that the use of these metrics is problematic, but in spite of this strong consensus, such judgments remain common practice, suggesting the need for a valid alternative.” Thankfully, the academic community as a whole is aware of the issue, and more self-adapted evaluation systems should be put forward in the future higher-education system.
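The portfolio statistics above are straightforward to reproduce from the article-level table iCite lets you download. A minimal sketch, with a made-up portfolio and a hypothetical field benchmark (the real RCR normalization is more involved than a single division):

```python
# Reproduce the cites/year summary statistics (max, mean, SEM, median)
# from an article-level table, and a simplified RCR-style ratio.
# The portfolio values and the field benchmark are hypothetical.
from math import sqrt
from statistics import mean, median, stdev

cites_per_year = [6.0, 1.0, 3.0, 2.0]  # one value per paper (made up)

stats = {
    "max": max(cites_per_year),
    "mean": mean(cites_per_year),
    "sem": stdev(cites_per_year) / sqrt(len(cites_per_year)),  # SEM = s / sqrt(n)
    "median": median(cites_per_year),
}

# RCR intuition: a paper cited at the same rate as the average
# NIH-funded paper in its field scores 1.0, so dividing by the
# field's expected cites/year gives an RCR-like ratio.
field_expected = 2.0  # hypothetical field benchmark
rcr_like = [c / field_expected for c in cites_per_year]
```

A paper cited twice a year in this hypothetical field scores exactly 1.0, matching the interpretation given above.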