Bicycle Helmet Research Foundation
cyclehelmets.org


Peer review in the dock

Peer review is the process whereby articles submitted for publication in a journal are first sent to other people with expertise in the subject matter for their assessment and comments. The procedure, used by most scientific journals, including medical journals, is intended to uphold scientific rigour by highlighting errors and poor methodology. It is supposed to be the keystone of quality control for research projects and academic studies, yet evidence of its many deficiencies has been building up for over 20 years.

A large number of papers that have made it through to publication have subsequently been shown to be unreliable or fundamentally wrong in their methodology or analysis. Cycle helmet research, discussed later, is just one area where the quality of peer review has come in for much criticism.

Documentary highlights weaknesses of peer review

In August 2008, a BBC radio documentary (BBC, 2008) investigated what it described as "the tarnished image of a flawed process". These are some of its findings.

According to the BBC, it has been estimated that 30 to 50 per cent of published articles in medical journals have major methodological flaws. Dr Drummond Rennie, deputy editor of the Journal of the American Medical Association, says that "There's an awful lot of rubbish there". In most cases, this is the outcome of research projects that have been poorly designed or which suffer from the misuse of statistics.

The peer reviewers

Peer reviewers are supposed to assess the originality, reliability and value of research, but this is often not done effectively. The main skill required of a reviewer is the ability to spot errors, yet reviewers are neither trained nor tested in this. In one experiment, 8 glaring errors were inserted into a paper that was then sent to 400 peer reviewers. Most of the reviewers detected none of the errors, and the best managed no more than 4 or 5. References and statements of 'fact' made by the authors are hardly ever checked at all; the reviewer relies entirely upon the honesty of the researchers.

Reviewers should be independent and impartial, but often they are friends or colleagues of the researchers, which makes it less likely that they will appraise a work critically. Sometimes researchers are permitted to recommend who should carry out their peer review. There is a marked tendency for reviewers to be biased in favour of high-status authors and institutions, and to rate their analyses more highly than others'.

With only a few exceptions, journals do not disclose the names of their reviewers. Drummond Rennie finds it bizarre that this should be the normal state of affairs and regards it as an ethical issue. According to him, "This is a form of justice and we have a very long history in law that secret justice is disastrous. Peer review shouldn't be that much of a black box". Allowing reviewers to remain anonymous also does little to inspire a reader's confidence in the peer review of an article.

Statistics

It is in the use of statistics that most researchers fail badly, and peer reviewers often know no better, even though the statistical analysis is frequently the most important part of a paper. Its correct use is essential in deciding whether a treatment is effective and free of side effects. Yet research analyses are often not carried out by statisticians; where they are involved, there are usually far fewer problems. To achieve statistically sound output, however, statisticians also need to be involved in the initial design of a study. If the design is wrong, there is nothing that can be done to correct the deficiencies later.
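
To illustrate why design-time involvement matters, here is a minimal sketch of the kind of calculation a statistician would run before data collection begins. The numbers are hypothetical, chosen only for illustration; the formula is the standard normal approximation for comparing two proportions.

    from statistics import NormalDist

    def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
        """Participants needed per group to detect a change from
        rate p1 to rate p2 with a two-sided test at level alpha."""
        z = NormalDist()
        z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value of the test
        z_beta = z.inv_cdf(power)           # quantile for the desired power
        variance = p1 * (1 - p1) + p2 * (1 - p2)
        return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

    # Hypothetical example: detecting a fall in injury rate from 10% to 8%
    n = sample_size_two_proportions(0.10, 0.08)
    print(f"about {n:.0f} participants per group")  # roughly 3,200

A study recruited with far fewer participants than this cannot be rescued by clever analysis afterwards, which is the point made above.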

Most medical journals do not have the resources to employ a qualified statistician, but the few that do have found that statistical review leads to major revision of most submitted papers.

Publication bias

There are many kinds of bias that affect medical research, but one of the most difficult to identify is publication bias. Positive studies are more likely to be published than those that do not support an intervention, and more likely to be published quickly and in high-profile journals. In this way the medical literature as a whole will tend to suggest that an intervention is more effective than may actually be the case.
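
The mechanism is easy to demonstrate. The following sketch (our illustration, not from the documentary) simulates many studies of an intervention whose true effect is zero, and 'publishes' only the estimates that come out positive and statistically significant; the published subset then shows a substantial spurious benefit.

    import random
    from statistics import NormalDist, mean

    random.seed(1)
    TRUE_EFFECT = 0.0   # the intervention actually does nothing
    SE = 0.2            # standard error of each study's estimate
    # Two-sided 5% significance threshold for a positive result
    CRITICAL = NormalDist().inv_cdf(0.975) * SE

    estimates = [random.gauss(TRUE_EFFECT, SE) for _ in range(10_000)]
    published = [e for e in estimates if e > CRITICAL]

    print(f"all studies:      mean effect {mean(estimates):+.3f}")  # ~ 0.00
    print(f"published subset: mean effect {mean(published):+.3f}")  # ~ +0.47

A reader of the 'published' literature alone would conclude that the intervention confers a clear benefit, even though the true effect here is exactly zero.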

Publication bias also occurs when studies are only offered for publication if they have met the goals of the researchers. Studies that come up with what is perceived to be the 'wrong result' may be simply put aside so that no-one else knows about them. Similarly, some data may be suppressed because it doesn't seem to contribute to the expected result.

Publication bias makes meta-analyses – which pool the results of multiple studies of the same intervention – particularly difficult both to conduct and to review. Do they represent the balance of the evidence, or only that part of it that has made it into print?

Another form of publication bias is particularly perverse. Poorly designed studies often produce more dramatic results, which makes them more likely to be published: both editors and reviewers shy away from suppressing research that appears to show a large benefit. In this way medical opinion can come to be dominated by the studies that are the least credible scientifically.

Fit for purpose?

The present system relies upon the integrity of the researchers and is very vulnerable to abuse, poor standards or ignorance.

According to Ian Chalmers of the Cochrane Collaboration, "Doctors, with the best of intentions, often do harm and they do it often on a very, very wide scale before they realise that they should have been more diligent about demanding good evidence for the basis of their practice". Cochrane Reviews tackle some of the weaknesses of the peer review system by providing a larger context for individual papers.

Peer review is not foolproof against gross errors. The process, designed to uphold scientific rigour, has itself escaped scrutiny for too long.

Richard Smith, former editor of the British Medical Journal, believes that opening up the peer review process via the Internet, with all the data and methods available for inspection, could be a way forward. It may then be clearer that certain articles are deficient. But the medical establishment remains cautious about changing how it operates.

The lawyers step in

In the USA, lawyers have started to test the system for themselves. Some no longer treat peer review as an automatic seal of approval for research. Instead, they have begun questioning peer reviewers directly about the quality of their reviews, and are finding a lack of independent thought and objectivity. This has enabled some of them to challenge expert witnesses on the basis that peer review no longer guarantees their expertise.

Peer review in cycle helmet research (BHRF commentary)

BHRF supports the principle of peer review. As the BBC programme concluded, it is certainly better than nothing and useful as one means of filtering research. However, the shortcomings of peer review described above have many echoes in the field of cycle helmet research, and this has done a lot of damage to cycling.

Pro-helmet research is dominated by a few papers that claim dramatic benefits from helmet wearing – much greater than other researchers have found. These papers have been extensively criticised for poor design, major flaws and methodological errors, including the incorrect use of statistics. Yet the same research is referenced – unchallenged, and sometimes relied upon – by most other papers that advocate the wearing of cycle helmets.

By contrast, when the available evidence has been examined by professional statisticians, such as Dr Dorothy Robinson and Paul Hewson, they have found much of the literature statistically flawed and no obvious benefit from helmet use. A similar conclusion has been reached by specialists in other disciplines concerned with helmets, such as Brian Walker of Head Protection Evaluations Ltd, whose company is an internationally accredited test centre for helmets and whose tests have found that most cycle helmets offer little protection.

It is clear that peer reviewers have often not checked authors' references, for they have failed to draw attention to references that were the subject of subsequent peer feedback and corrections which the authors should have taken into account. The reviewers of one paper failed to notice that its authors had incorrectly cited research by the Transport Research Laboratory, upon which the paper's conclusions critically relied. Because of this, the paper has since been widely referenced as evidence that cycle helmets produce benefit, whereas the proper conclusion, using the TRL data correctly, was that they had made no difference.

Because the names of peer reviewers are withheld, it is not possible to know who has peer reviewed helmet papers, nor to judge their suitability or balance. It is known, however, that medical opinion about helmets has been dominated by a relatively small group of researchers, authors and editors who are thought sometimes to have cooperated closely.

It is difficult to prove publication bias, but it is known that one research project, set up with the aim of proving that helmet promotion does not discourage cycling, did not produce the report that was a condition of its government grant. Given the track record of the researchers, it is almost unthinkable that the report would not have been produced and widely publicised had their goals been met. However, because the research was not brought to a proper conclusion, it is not possible to know whether it found that helmet promotion did discourage cycling.

Publication bias is also a likely outcome of the editorial hostility in some quarters towards authors submitting articles or letters challenging the effectiveness of helmets. Some authors have expressed alarm at the pressure to take a more pro-helmet view, while others may simply have given up.

The radio documentary noted that the Cochrane Review process can overcome some of the weaknesses of peer review by allowing papers to be judged in the context of other papers on the same subject. Unfortunately, the Cochrane Collaboration has failed to produce independent and unbiased analyses of cycle helmets and helmet laws. The two Cochrane Reviews that analysed these subjects were each carried out by researchers who had previously been criticised for their predisposition to helmet benefit and who were active campaigners for helmet laws. Their reviews were highly selective in the research studied and dominated by research that the reviewers had carried out themselves, with no reference to published criticisms. With the authors acting as evidence providers, judge and jury, these reviews have made a mockery of the Cochrane process, yet they have influenced medical and public opinion on the basis of very poor science.

References

BBC, 2008

Peer Review in the Dock. BBC Radio 4, 4th August 2008.