I didn’t intend to talk about Impact Factors so early in this blog, but my colleagues on PLoS Medicine have written such a good editorial on the subject, The Impact Factor Game, that I couldn’t let it pass without mention.
It doesn’t take a great deal of thought to see why the ‘worth’ of a paper isn’t well assessed by the Impact Factor of the journal in which it is published. A journal’s Impact Factor is essentially the average number of citations per paper published in that journal. The problem is that citations aren’t normally distributed across those papers, which makes that average a very poor predictor of the likely citations of any individual paper. As a rule of thumb, 80% of a journal’s Impact Factor is determined by 20% of the papers it publishes.
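To see why a skewed distribution undermines the average, here is a toy simulation. The distribution and all the numbers are entirely hypothetical; it just illustrates how a handful of highly cited papers can dominate an Impact-Factor-style mean while the typical paper sits far below it.

```python
import random

random.seed(1)

# Hypothetical journal of 100 papers with a heavily skewed
# (lognormal) citation distribution -- purely illustrative numbers.
citations = sorted(
    (int(random.lognormvariate(1.0, 1.5)) for _ in range(100)),
    reverse=True,
)

# The Impact-Factor-style figure: a plain average over all papers.
mean = sum(citations) / len(citations)

# What a "typical" paper in the journal actually gets.
median = sorted(citations)[len(citations) // 2]

# Share of all citations earned by the top 20% of papers.
top20_share = sum(citations[:20]) / max(sum(citations), 1)

print(f"mean (IF-like):  {mean:.1f}")
print(f"median paper:    {median}")
print(f"top-20% share:   {top20_share:.0%}")
```

Run it and the mean comes out well above the median, with the top fifth of papers accounting for most of the citations: exactly the 80/20 pattern described above.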
Anyway, that isn’t the point of the editorial. Rather, it looks at the vagaries of getting an Impact Factor at the journal level, and the realisation that the Impact Factor isn’t some objective measure but is open to a large degree of interpretation. At the moment it is Thomson Scientific’s interpretation that scientists and editors alike rely upon:
During the course of our discussions with Thomson Scientific, PLoS Medicine’s potential impact factor – based on the same articles published in the same year – seesawed between as much as 11 and less than 3.
You can imagine the amount of hair pulling and rending of clothes that caused!
With a journal like PLoS ONE, the Impact Factor will be irrelevant: the journal will be far too broad, and I don’t intend to get into the game. However, there will still be a need to provide guidance as to the ‘potential worth’ of the papers it publishes. I’m personally very interested in the way that the algorithms behind Google’s PageRank are being applied to citation analysis, both at the journal level with the concept of the Y-factor and at the level of individual papers, as in the recent paper Finding Scientific Gems with Google.
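The core idea is easy to sketch. Treating the citation network as a graph (each paper points to the papers it cites), a PageRank-style score rewards papers cited by other well-cited papers, not just papers with many raw citations. The toy graph and implementation below are a generic illustration of that idea, not the specific method of the Y-factor or the Finding Scientific Gems paper:

```python
# Toy citation graph: each paper maps to the papers it cites.
# The papers "A".."D" are invented for illustration.
cites = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": [],            # cites nothing: a "sink" node
    "D": ["B", "C"],
}

def pagerank(graph, damping=0.85, iterations=50):
    """Plain power-iteration PageRank over a dict-of-lists graph."""
    n = len(graph)
    rank = {node: 1.0 / n for node in graph}
    for _ in range(iterations):
        new = {node: (1.0 - damping) / n for node in graph}
        # Rank held by sink nodes is redistributed evenly.
        sink = sum(rank[u] for u, outs in graph.items() if not outs)
        for u, outs in graph.items():
            for v in outs:
                new[v] += damping * rank[u] / len(outs)
        for node in new:
            new[node] += damping * sink / n
        rank = new
    return rank

ranks = pagerank(cites)
# "C" is cited by everyone, including the well-cited "B",
# so it ends up with the highest score.
```

Note that "A" and "D" receive no citations at all, so they share the lowest score regardless of how much they themselves cite: influence flows in, not out.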
Alongside such attempts at objective ranking, the opinion of scientists shouldn’t be forgotten. That’s why with PLoS ONE we are also keen to explore ‘user ranking’ of papers. This isn’t a new concept: it is the whole raison d’être of Faculty of 1000, and newer sites such as Biowizard also have this feature. The difference with PLoS ONE is that ranking will be open to all and an integral part of the post-publication peer review of papers.
Different ways of assessing papers tell you different things. Isn’t it better to have a diversity of measures rather than rely on a single, over-used and over-interpreted metric?