It’s an ongoing problem: how do you know if a scientist is a great scientist or just average? How can you measure someone’s contribution to a field of knowledge? The question is critical for scientists, because when they apply for a grant or for an academic position, the application is assessed on that basis.
Often, three factors are used to determine whether a scientist should get more money or be hired: whether they publish papers often; whether they publish their studies in ‘high-impact’ journals, such as Science, Nature and the New England Journal of Medicine; and whether their papers are frequently cited in other studies.
This is a terrible proxy for evaluating someone’s contribution to a scientific field. Let me count the ways: it rewards scientists who tackle smaller problems – the ones that predictably result in papers – rather than the big, outstanding questions of our time; it rewards scientists who publish quickly, sometimes forgoing proper cross-checks or even turning to fraudulent practices; the emphasis on recent publications means good scientists find it difficult to return to science after a career break; it encourages practices such as gift authorship, naming someone as an author even though they haven’t truly contributed; and it encourages an individualistic approach, in which scientists are valued only for their own personal achievements and output.
It’s the last point that Alexander Oettl, from the Georgia Institute of Technology in the U.S., has addressed so well in his comment article in this week’s issue of Nature. He looked at a part of published scientific papers that none of the above metrics capture: the acknowledgements section. Scientists who share data and expertise, who give thoughtful criticisms of manuscripts, and who help others shape their experiments are often thanked in the acknowledgements section rather than cited or listed as co-authors.
He first looked through all the acknowledgements in more than 50,000 papers published in the Journal of Immunology between 1950 and 2007, noting who was thanked and what they were thanked for. He then looked for scientists who had died unexpectedly, by looking through the obituaries of publications associated with immunology and checking that they were still actively publishing at the time of their death.
In total, he found 149 scientists who had died unexpectedly, of whom 63 were ‘helpful’. When scientists who were not particularly helpful died, there was little change in the quality of their colleagues’ publications. However, Oettl found a dramatic decline in the quality of publications in a field following a helpful scientist’s death.
“By reviewing the acknowledgements in immunology papers since 1950, I have found that when principal investigators (PIs) who were frequently thanked by others died unexpectedly, the quality of the papers of their colleagues dropped,” Oettl wrote.
The number of high-impact publications declined by 20–22%, and citations of papers written by co-authors of the helpful scientist dropped by 21–28% after that colleague died. The dip could last for more than five years.
“The impact of a death was particularly profound on co-authors of PIs who were helpful with conceptual feedback, such as advice and criticism,” he wrote.
The current system of judging scientists doesn’t disadvantage those who have both a stellar publication record and a habit of helping others. But there are scientists with the potential to contribute substantially to colleagues’ work without being able to produce work of the same quality themselves. The system also penalises scientists who spend their time and critical thinking on others’ projects – something that may benefit the scientific enterprise as a whole.
Oettl ends by suggesting that a metric of a scientist’s ‘helpfulness’ should be developed – based on average acknowledgements per year, for example.
What do you think? Should institutions and grants take into account ‘helpfulness’ when assessing a grant or job application?
Abstract in Nature: Sociology: Honour the Helpful