Curt Rice, rector of Høgskolen i Oslo og Akershus: Those who have limited trust in research have an important point.

Why you can’t trust science

Curt Rice, rector of Høgskolen i Oslo og Akershus, explains here why people have good reasons to doubt research, and what must be done to improve the situation.


In recent weeks there has been a debate in Norway about a lack of trust between researchers and their work on one side and the public on the other. The debate was triggered by a survey conducted by Kantar TNS for the Research Council of Norway (Forskningsrådet), in which people were asked what they think about research and those who carry it out.

For my part, I would argue that a far more serious matter is that there are very good reasons to doubt research, at least if we are talking about what gets published.

In this essay I discuss some research findings on retraction rates, the decline effect, and impact factor. My claim is that these findings reveal a serious crisis for science, and that those who have limited trust in research have an important point. Fortunately, changes can be made that would improve the situation, but the political will to carry them out must be found.

(The essay was first published on Curt Rice's blog and subsequently in The Guardian, ed.)

Science research: three problems that point to a communications crisis

We have a system for communicating results in which the need for retraction is exploding, the replicability of research is diminishing, and the most standard measure of journal quality is becoming a farce.

Curt Rice


Traditional scientific communication directly threatens the quality of scientific research. Today’s system is unreliable – or worse. Scholarly publishing regularly gives the highest status to research that is most likely to be wrong. This system determines the trajectory of a scientific career, and the longer we stick with it, the more likely it is to deteriorate.

Think these are strong claims? They, and the problems described below, are grounded in research recently presented by Björn Brembs from the University of Regensburg and Marcus Munafò of the University of Bristol in ‘Deep impact: unintended consequences of journal rank’.

Retraction rates

Retraction is one possible response to discovering that something is wrong with a published scientific article. When it works well, journals publish a statement identifying the reason for the retraction.

Retraction rates have increased tenfold in the past decade after many years of stability. According to a recent paper in the Proceedings of the National Academy of Sciences, two-thirds of all retractions follow from scientific misconduct: fraud, duplicate publication and plagiarism.

More disturbing is the finding that the most prestigious journals have the highest rates of retraction, and that fraud and misconduct are greater sources of retraction in these journals than in less prestigious ones.

Among articles that are not retracted, there is evidence that the most visible journals publish less reliable (in other words, not replicable) research results than lower ranking journals. This may be due to a preference among prestigious journals for results that have more spectacular or novel findings, a phenomenon known as publication bias.

The decline effect

One cornerstone of the quality control system in science is replicability – research results should be so carefully described that they can be obtained by others who follow the same procedure. Yet journals generally are not interested in publishing mere replications, giving this particular quality control measure somewhat low status, independent of how important it is, for example in studying potential new medicines.

When studies are reproduced, the resulting evidence is often weaker than in the original study. Brembs and Munafò review research leading them to claim that «the strength of evidence for a particular finding often declines over time.»

In a fascinating piece entitled The truth wears off, the New Yorker offers an interpretation of the decline effect: the most likely explanation for the decline is an obvious one, regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets cancelled out. Yet it is exactly the spectacular nature of statistical flukes that increases the odds of getting published in a high-prestige journal.
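To make that explanation concrete, here is a minimal simulation sketch (not part of the original essay; the true effect, noise level, and publication threshold are all invented): if only the luckiest estimates get published, their independent replications will, on average, fall back toward the real effect.

```python
# Minimal sketch (hypothetical numbers): publication bias plus regression to
# the mean producing a "decline effect".
import numpy as np

rng = np.random.default_rng(0)

true_effect = 0.2      # the (modest) real effect, assumed for illustration
noise_sd = 0.3         # sampling noise in each study's estimate
n_studies = 10_000     # original studies run

# Each original study observes the true effect plus random noise.
original = true_effect + rng.normal(0.0, noise_sd, n_studies)

# Publication bias: only the "spectacular" estimates make it into print.
published = original[original > 0.5]

# Independent replications of the published studies: same true effect, fresh noise.
replications = true_effect + rng.normal(0.0, noise_sd, published.size)

print(f"mean published estimate:   {published.mean():.2f}")    # well above 0.2
print(f"mean replication estimate: {replications.mean():.2f}")  # back near 0.2
```

Nothing about the replications is worse science; they simply lack the lucky noise that got the originals published in the first place.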

The politics of prestige

One approach to measuring the importance of a journal is to count how many times scientists cite its articles; this is the intuition behind impact factor. Publishing in journals with high impact factors feeds job offers, grants, awards, and promotions. A high impact factor also enhances the popularity – and profitability – of a journal, and journal editors and publishers work hard to increase it, primarily by trying to publish what they believe will be the most important papers.

However, impact factor can also be illegitimately manipulated. For example, the actual calculation of impact factor involves dividing the total number of citations in recent years by the number of articles published in the journal in the same period. But what is an article? Do editorials count? What about reviews, replies or comments?
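As a toy illustration (the numbers below are invented, not taken from any real journal), the same citation count yields very different impact factors depending on how many published items end up counted in the denominator:

```python
# Hypothetical example. Roughly, the standard two-year impact factor is the
# number of citations in the census year to items from the two preceding
# years, divided by the number of "citable" items published in those two
# years. Only the denominator is up for negotiation.
citations = 1200                  # citations received (hypothetical)
all_items = 400                   # research articles + editorials, reviews, comments
research_articles_only = 150      # a narrower definition of "article"

print(citations / all_items)               # 3.0
print(citations / research_articles_only)  # 8.0
```

The citation count never changes; reclassifying items out of the denominator is enough to more than double the resulting number.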

By negotiating to exclude some pieces from the denominator in this calculation, publishers can increase the impact factor of their journals. In ‘The impact factor game’, the editors of the peer-reviewed open-access journal PLoS Medicine describe the negotiations determining their impact factor. Their impact factor could have been anywhere from 4 to 11; an impact factor in the 30s is extremely high, while most journals are under 1.

In other words, 4 to 11 is a significant range. This process led the editors to «conclude that science is currently rated by a process that is itself unscientific, subjective, and secretive».

A crisis for science?

I believe the problems discussed here are a crisis for science and the institutions that fund and carry out research. We have a system for communicating results in which the need for retraction is exploding, the replicability of research is diminishing, and the most standard measure of journal quality is becoming a farce. Indeed, the ranking of journals by impact factor is at the heart of all three of these problems. Brembs and Munafò conclude that the system is so broken it should be abandoned.

Getting past this crisis will require both systemic and cultural changes. Citations of individual articles can be a good indicator of quality, but the excellence of individual articles does not correlate with the impact factor of the journals in which they are published. Once we have convinced ourselves of that, we must face the consequences it has for the evaluation processes essential to the construction of careers in science, and we must push nascent alternatives such as Google Scholar forward.

Politicians have a legitimate need to impose accountability, and while the ease of counting – something, anything – makes it tempting for them to infer quality from quantity, it doesn’t take much reflection to realize that this is a stillborn strategy. As long as we believe that research represents one of the few true hopes for moving society forward, then we have to face this crisis.

It will be challenging, but there is no other choice.
