Frequently Asked Questions
For Policy Makers
The study of cycle helmets is beset by a conflict between two kinds of evidence: case-control studies, which infer large benefits by comparing the injuries of cyclists who chose to wear helmets with those of cyclists who did not, and data from whole cyclist populations, which show that when helmet use rises substantially (sometimes as a result of legislation) the benefits, if any, fall far short of those the case-control studies predict.
Cycle helmet research is not the only field in which such conflicts arise, as an increasing number of papers in epidemiological journals attest. Similar problems have affected studies of the effect of hormone replacement therapy on heart disease, of vitamin supplements, of antibiotics and of the MMR triple vaccine. Findings that had appeared robust subsequently turned out to be unreliable or simply wrong.
Over a number of years, observational studies accumulated evidence that combined hormone replacement therapy (HRT) conferred significant protection against coronary heart disease (CHD). The studies compared women receiving HRT with women who did not; those receiving HRT were up to 50% less likely to suffer CHD. These observational studies were taken by many to be reliable evidence of a causal link, and credible mechanisms were advanced to explain it (Stampfer and Colditz, 1991). However, subsequent randomised controlled trials showed that HRT did not in fact protect against CHD: the effect was either null or slightly negative (Petitti, 2004; Lawlor, Smith and Ebrahim, 2004).
Randomised controlled trials are considered the most reliable form of evidence because they assign treatments at random to the participants. Any differences between the control group and those given HRT are therefore due purely to chance, and the size of such chance differences can be quantified by standard statistical theory.
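The balancing effect of randomisation can be sketched in a few lines of Python. All numbers below are invented for illustration (they are not from any of the trials discussed here): a pre-existing risk factor is distributed through a hypothetical trial population, and random assignment splits it almost evenly between the two arms.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Invented example: 10,000 trial participants, 30% of whom carry a
# pre-existing risk factor (e.g. hypertension). Randomisation assigns
# treatment without looking at the risk factor at all.
n = 10_000
has_risk_factor = [random.random() < 0.30 for _ in range(n)]
assigned_hrt = [random.random() < 0.50 for _ in range(n)]

hrt_arm = [rf for rf, a in zip(has_risk_factor, assigned_hrt) if a]
control_arm = [rf for rf, a in zip(has_risk_factor, assigned_hrt) if not a]

p_hrt = sum(hrt_arm) / len(hrt_arm)
p_ctrl = sum(control_arm) / len(control_arm)

# The two arms end up with near-identical prevalence of the risk factor;
# the residual gap is pure chance, with a standard error of roughly
# sqrt(p*(1-p)*(1/n1 + 1/n2)) - under one percentage point at this size.
print(f"risk factor prevalence, HRT arm:     {p_hrt:.3f}")
print(f"risk factor prevalence, control arm: {p_ctrl:.3f}")
```

An observational study has no equivalent mechanism: whatever drives women to seek or be prescribed HRT travels with them into the "treated" group.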
But why did the different types of study produce contradictory results?
Some commentators had already pointed out that case-control studies could be used to show that HRT apparently conferred protection against violent death. Such an improbable result pointed to a socio-economic bias among women who took HRT. Later investigations revealed that doctors tended to be risk-averse in prescribing HRT, avoiding women who already exhibited risk factors for CHD such as hypertension and diabetes. Moreover, women who requested HRT were generally more health-conscious, and of higher socio-economic status, than those who did not. The analyses did not adequately account for these biases, and this probably led to the misleading conclusions.
Leading epidemiologists have also questioned the findings of several observational studies suggesting that anti-oxidant vitamins confer longer life and protection against CHD and cancer (Lawlor et al, 2004). As with HRT, randomised controlled trials have failed to replicate these findings (Lawlor et al, 2004). Re-analysis of the original data suggested that the differences originally attributed to vitamin supplementation were probably due to socio-economic differences between those who chose to take supplements and those who did not. Although the observational studies tried to adjust for these differences, their conclusions were nonetheless incorrect and misleading.
There may be other examples of this problem. For example, the association between cannabis use and psychosis has been questioned (Degenhardt, Hall and Lynsky, 2003). Several observational studies showing an apparent link between antibiotic use in early life and a subsequent risk of asthma have also been questioned; time-trend data show no association (Foliaki et al, 2004).
Finally, some researchers claimed that observational data showed the MMR triple vaccine to increase the risk of autism. The scare was sustained long after the association was refuted, and may have reduced public confidence in an important public health programme.
Comparison with the science on smoking and lung cancer is informative. Here the predictions of case-control studies were matched by population-level statistics which showed that levels of lung cancer tracked the growth in smoking. Two independent sources of evidence agreed, and were further supported by animal-based testing, leading to a robust conclusion regarding the link between smoking and cancer. The contrast with cycle helmet research is stark: for helmets the different study types disagree on both the magnitude and the direction of any relationship.
A feature common to these now controversial areas of epidemiological research is a 'snowball effect': early observational studies prompt further similar studies, whose authors, in their keenness to find a solution to a perceived problem, overlook and repeat the methodological shortcomings of the earlier work.
Meta-analyses are also published, which summarise the present state of knowledge on a subject without adding any new data. As these various studies cite each other in turn, the list of references in support of the intervention grows without necessarily adding to the underlying sample base. In a relatively short time, acceptance of the need for, and effectiveness of, the intervention becomes the conventional wisdom in the medical profession and beyond.
Against this background, research that comes to a different conclusion can have difficulty making an impact. When the randomised controlled trials of HRT were published, reactions were mixed. Some researchers reworked their data to fit the results of the trials, in effect to 'prove' that they had been right all along. Others denied the validity of the new data. Still others tried to explain the discrepancy by reference to differences in the study groups.
The recognition that case-control and other observational studies may be fallible is valuable in the context of evidence for and against the efficacy of cycle helmets. The parallels with the examples given above are compelling.
Case-control studies reported high levels of protection from serious head injury, although many were conducted at a time when helmet use was low - only 3% in the case of the most widely cited paper (Thompson, Rivara and Thompson, 1989). However, time-trend studies have repeatedly shown little or no reduction in serious head injuries even where helmet wearing has increased sharply because of helmet legislation (BHRF, 1241). Given the universal failure of time-trend data to show any benefit from increasing helmet use, it is difficult to accept that cycle helmets confer significant protection at the individual level.
So why did the case-control helmet studies report very high levels of protection, up to 75% in the Addenbrooke’s Hospital study in Cambridge in 1993 (Maimaris, Summers, Browning and Palmer, 1994)? Serious head injuries to cyclists are fairly rare and the rate of helmet use was at that time low (11%). Thus the figure of 75% is not really as robust as it may look, being hedged by large error limits. The 95% confidence limits ranged from 6% to 90% protection.
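The fragility of such an estimate is easy to demonstrate. The sketch below uses an invented 2x2 table (not the actual Addenbrooke's data) chosen so that the point estimate works out to 75% protection, and computes the odds ratio with a standard Woolf (log-scale) 95% confidence interval; with only a handful of injured helmet wearers, the interval is enormous.

```python
import math

# Hypothetical 2x2 table (NOT the actual Maimaris et al. counts):
# head injuries among helmeted and unhelmeted cyclists attending A&E.
helmet_inj, helmet_ok = 2, 100        # helmet wearers: injured / not
nohelmet_inj, nohelmet_ok = 60, 750   # non-wearers:    injured / not

# Odds ratio and Woolf 95% confidence interval on the log-odds scale
odds_ratio = (helmet_inj * nohelmet_ok) / (helmet_ok * nohelmet_inj)
se_log_or = math.sqrt(1 / helmet_inj + 1 / helmet_ok
                      + 1 / nohelmet_inj + 1 / nohelmet_ok)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

# Apparent "protection" is 1 - OR. The 1/helmet_inj term dominates the
# standard error: two injured helmet wearers make the interval huge.
print(f"OR = {odds_ratio:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
print(f"apparent protection {1 - odds_ratio:.0%}, "
      f"CI {1 - hi:.0%} to {1 - lo:.0%}")
```

With these invented counts the headline figure is 75% protection, yet the interval spans from roughly no effect to near-total protection, which is the same qualitative picture as the 6% to 90% limits reported for the Cambridge study.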
Was this result also distorted by confounding socio-economic factors? There are often large differences in socio-economic factors between groups of cyclists who choose to wear helmets and those who do not. It was noted in Australia that those most resistant to helmet use appeared to be from the lower socio-economic groups. More formal study from Canada has shown that children of well-off parents are 2-3 times more likely to use helmets than those from poorer homes (Macpherson, 2004). It has also long been noted that children from families without a car are many times more likely to be killed or seriously injured in road crashes than those from better-off homes (Grayling et al, 2002). These two factors affecting lower socio-economic groups – lower helmet use, but greater risk of serious head injury – provide one plausible explanation as to why case-control studies have yielded excessive estimates of helmet efficacy.
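This confounding mechanism can be made concrete with a toy simulation. All parameters below are invented, but they mirror the pattern described above: helmets are given zero true effect, while higher socio-economic status both raises helmet use and lowers underlying injury risk. A naive case-control comparison then reports apparent protection anyway.

```python
import random

random.seed(1)  # fixed seed for reproducibility

# Toy model (invented parameters): helmets have ZERO true effect, but
# higher socio-economic status (SES) raises helmet use AND lowers
# underlying injury risk.
def simulate_cyclist():
    high_ses = random.random() < 0.5
    helmet = random.random() < (0.45 if high_ses else 0.15)   # 3x helmet use
    injured = random.random() < (0.02 if high_ses else 0.06)  # 3x injury risk
    return helmet, injured  # note: 'injured' never looks at 'helmet'

counts = {("H", True): 0, ("H", False): 0, ("N", True): 0, ("N", False): 0}
for _ in range(200_000):
    helmet, injured = simulate_cyclist()
    counts[("H" if helmet else "N", injured)] += 1

# Naive case-control comparison: odds ratio of injury, helmeted vs not.
or_naive = (counts[("H", True)] * counts[("N", False)]) / (
    counts[("H", False)] * counts[("N", True)])
print(f"apparent odds ratio: {or_naive:.2f} "
      f"(apparent 'protection' {1 - or_naive:.0%}; true effect: none)")
```

Under these assumptions the naive comparison reports an odds ratio of roughly 0.67, i.e. around a third apparent "protection", even though helmet wearing has no effect whatsoever in the model. Unmeasured or poorly adjusted socio-economic differences can therefore manufacture exactly the kind of benefit the case-control studies reported.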
A particular problem with cycle helmet research is that randomised controlled trials are not possible, for ethical and practical reasons. Moreover, the observational studies are usually published in the medical academic press, while the conflicting evidence may appear in entirely different journals, outside the academic press, or not at all (being accumulated through ongoing, more widely based data collection processes involving civil servants rather than academics). In this way considerable momentum may build up before the medical establishment becomes aware that conflicting evidence exists. The medical press is also dominated by an interventionist approach, which may not always be the most appropriate. For example, some recent road safety innovations have taken the opposite direction, removing the accumulated interventions of decades to produce a road environment devoid of paint and signage, with, it seems, very positive results. In cycling, convincing evidence has been published that the most effective way to improve cycling safety is to encourage more people to cycle (Jacobsen, 2003; Robinson, 2005b).
Paralleling the experience with MMR, important public health benefits will be lost if cycling is discouraged by publicity seeking to convince cyclists that, despite its relatively low injury rate per hour of activity, riding a bike is dangerous and helmets should be worn at all times. It is poor science to ignore the substantial body of evidence on head injury rates from jurisdictions where legislation has increased helmet-wearing rates by 40-50 percentage points. These studies show beyond doubt that even very large increases in helmet use have little or no tangible benefit, and that the best way to reduce injuries to cyclists is to prevent crashes from happening in the first place.
In an examination of the controversy that now surrounds observational studies, the editor of the International Journal of Epidemiology writes (Smith, 2004) that:
"It is, at the very least, clear that observational epidemiology may be more fallible than some have suggested".
"There is … at least as much to be learned from studies that have reached what now appear to be the wrong conclusions, as those that got it right. In many cases these will have been carried out to the highest standards, yet somewhere along the line, the authors were misled."
"An important clue as to whether the findings of individual-level associations in observational epidemiological studies are likely to be causal can come from time-trend or ecological data."
Drawing four lessons from the HRT controversy, Petitti (2004) recommends: do not turn a blind eye to contradiction; do not be seduced by mechanism; suspend belief; and maintain scepticism.
BHRF, 1241. Head injuries and helmet laws in Australia and New Zealand. Bicycle Helmet Research Foundation, cyclehelmets.org.
Degenhardt L, Hall W, Lynsky N, 2003. Testing hypotheses about the relationships between cannabis use and psychosis. Drug and Alcohol Dependence 2003;71:37-48.
Foliaki S, Kildegaard Neilsen S, Bjoerksten B, von Mutius E, Cheng S, Pearce N, 2004. Antibiotic sales and the prevalence of symptoms of asthma, rhinitis and eczema. Int J Epidemiol 2004;33:568-73.
Grayling T, Hallam K, Graham D, Anderson R, Glaister S, 2002. Streets ahead - safe and liveable streets for children. Institute for Public Policy Research, 2002.
Jacobsen PL, 2003. Safety in numbers: more walkers and bicyclists, safer walking and bicycling. Injury Prevention 2003;9:205-209.
Lawlor DA, Smith GD, Bruckdorfer KR, Kundu D, Ebrahim S, 2004. Those confounded vitamins: what can we learn from the differences between observational versus randomised trial evidence? Lancet 2004;363:1724-1727.
Lawlor DA, Smith GD, Ebrahim S, 2004. The hormone replacement - coronary heart disease conundrum: is this the death of observational epidemiology? Int J Epidemiol 2004;33:464-467.
Macpherson, 2004. Communication concerning unpublished research between Dr A. Macpherson et al, School of Kinesiology and Health Science, York University, Toronto, Ontario, and M J Wardlaw, April 2004.
Maimaris C, Summers CL, Browning C, Palmer CR, 1994. Injury patterns in cyclists attending an accident and emergency department: a comparison of helmet wearers and non-wearers. BMJ 1994 Jun 11;308(6943):1537-40.
Petitti D, 2004. Hormone replacement therapy and coronary heart disease: four lessons. Int J Epidemiol 2004;33:461-463.
Robinson DL, 2005b. Safety in numbers in Australia: more walkers and bicyclists, safer walking and bicycling. Health Promotion Journal of Australia 2005;16:47-51.
Smith GD, 2004. Classics in epidemiology: should they get it right? Int J Epidemiol 2004;33:441-442.
Stampfer MJ, Colditz GA, 1991. Estrogen replacement therapy and coronary heart disease: a quantitative assessment of the epidemiologic evidence. Prev Med 1991 Jan;20(1):47-63.
Thompson RS, Rivara FP, Thompson DC, 1989. A case-control study of the effectiveness of bicycle safety helmets. New England Journal of Medicine 1989;320(21):1361-7.