They find that assessors give higher scores to papers from higher impact-factor journals, and that papers from those journals are cited more often. They try to argue that assessor scores for papers from high-impact-factor journals are inflated, but this is unconvincing. Assessor scores, citation numbers, and journal impact factors are all positively correlated, so there is a great deal of collinearity in the data. By controlling for journal impact factor they effectively eliminate the correlation between assessor score and citation number per paper. These three variables cannot be untangled because they are so strongly associated. Their conclusion is based on a statistical artifact and does not reflect the true relationships among the three variables.
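To see why, here is a quick numpy sketch (my own toy example with made-up numbers, not data from their paper): if all three measures are noisy readouts of the same underlying paper quality, and impact factor happens to track that quality closely, then regressing out impact factor strips away the shared signal and the score-citation correlation collapses toward zero even though the underlying relationship is real.

import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical setup: one latent "quality" drives all three observables.
quality = rng.normal(size=n)
impact_factor = quality + 0.05 * rng.normal(size=n)  # tracks quality closely
score = quality + 0.3 * rng.normal(size=n)           # assessor score, noisier
citations = quality + 0.3 * rng.normal(size=n)       # citation count, noisier

def partial_corr(x, y, z):
    # Correlation of x and y after regressing z out of both (residual method).
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return np.corrcoef(rx, ry)[0, 1]

print("raw corr(score, citations):    %.2f" % np.corrcoef(score, citations)[0, 1])
print("controlling for impact factor: %.2f" % partial_corr(score, citations, impact_factor))

The raw correlation comes out around 0.9 and the partial correlation near zero; any strongly collinear trio will show the same collapse, which is exactly the artifact I mean.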

I agree that citation number is not always the best measure of the quality or impact of a paper. Citation databases such as Web of Science rank search results by citation counts, so the papers at the top of the list are the most likely to be read and cited, which makes citation counts partly self-reinforcing.
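A simple rich-get-richer simulation (again my own toy model, not anything from the article) shows how ranking by citations feeds back on itself: if readers pick papers in proportion to current citation counts, early leaders pull further ahead.

import numpy as np

rng = np.random.default_rng(1)
papers = 100
citations = np.ones(papers)          # every paper starts with one citation

for _ in range(10000):               # hand out 10,000 citations, one at a time
    p = citations / citations.sum()  # visibility proportional to current count
    citations[rng.choice(papers, p=p)] += 1

top10 = np.sort(citations)[-10:].sum()
print("top 10 of %d papers hold %.0f%% of citations"
      % (papers, 100 * top10 / citations.sum()))

Under equal-visibility citing each paper would hold about 1% of the total; with count-proportional visibility the top 10 end up with several times that share.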

To assess the quality of papers objectively you would need a blind test: papers presented to assessors as plain manuscripts with no authors or journal indicated. I don't think it would be worth the effort. We all know that not every paper in a high-impact journal is high quality, but you are more likely to find high-quality papers in journals with high impact factors.
Mitch Cruzan


On 10/21/2013 9:04 AM, Malcolm McCallum wrote:
just an fyi, I thought some might be interested!

http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001675
