
Editorials

Can it work? Does it work? Is it worth it?

BMJ 1999; 319 doi: https://doi.org/10.1136/bmj.319.7211.652 (Published 11 September 1999) Cite this as: BMJ 1999;319:652

The testing of healthcare interventions is evolving

  Brian Haynes, professor of clinical epidemiology and medicine
  McMaster University Health Sciences Center, Hamilton, Ontario L8N 3Z5, Canada

    General practice p 676

    The pioneering British clinical epidemiologist Archie Cochrane defined three concepts related to testing healthcare interventions.1 Efficacy is the extent to which an intervention does more good than harm under ideal circumstances (“Can it work?”). Effectiveness assesses whether an intervention does more good than harm when provided under the usual circumstances of healthcare practice (“Does it work in practice?”). Efficiency measures the effect of an intervention in relation to the resources it consumes (“Is it worth it?”). Trials of efficacy and effectiveness have also been described as explanatory and management trials, respectively,2 and efficiency trials are more often called cost effectiveness or cost benefit studies.
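
    Cochrane's “Is it worth it?” is most often answered today with a cost effectiveness analysis. As a minimal sketch (the editorial itself gives no formula, and the symbols here are illustrative), the incremental cost effectiveness ratio (ICER) divides the extra cost of a new intervention, relative to usual care, by the extra health effect it yields:

    \[
    \mathrm{ICER} = \frac{C_{\text{new}} - C_{\text{usual}}}{E_{\text{new}} - E_{\text{usual}}}
    \]

    where C denotes cost and E a measure of effect, such as quality adjusted life years. Loosely, an intervention “is worth it” when its ICER falls below what the health system is prepared to pay per unit of health gained.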

    Almost all clinical trials assess efficacy. Such trials typically select patients who are carefully diagnosed; are at highest risk of adverse outcomes from the disease in question; lack other serious illnesses; and are most likely to follow and respond to the treatment of interest. The treatment will be prescribed by doctors who are most likely to follow a careful protocol; the comparison will be a placebo, not the current best alternative therapy; and participants will receive special attention from staff who supplement or replace those employed in usual clinical settings. The results of such trials are very useful: if an intervention doesn't work under such ideal conditions, it surely won't work under usual conditions. Most treatments don't survive this stage of testing, and it makes good sense to put all interventions through efficacy testing first.

    Even if an intervention works astonishingly well in a “Can it work?” study, it may not work well in usual care. Effectiveness in the community depends not only on efficacy but also on diagnostic accuracy, provider compliance, patient adherence, and the coverage of health services.3 Misdiagnosis can result in the wrong people getting, or not getting, the treatment. Providers often fail to prescribe or administer the treatment properly. Patients typically take less than half of what is prescribed for them. “High tech,” expensive, or new interventions are usually not available in all communities in developed countries, let alone to most communities in the rest of the world. To paraphrase Muir Gray, what works well at the Sloan Kettering (a high tech cancer centre) may not work very well in Kettering (a small UK community).
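
    These determinants compound one another. A rough multiplicative sketch (the figures below are illustrative assumptions, not taken from the editorial or the studies it cites) shows how quickly community effectiveness can erode:

    \[
    \text{effectiveness} \approx \text{efficacy} \times \text{diagnosis} \times \text{provider compliance} \times \text{patient adherence} \times \text{coverage}
    \]
    \[
    0.9 \times 0.8 \times 0.7 \times 0.5 \times 0.6 \approx 0.15
    \]

    On these assumed figures, a treatment that helps 90% of patients under ideal trial conditions would benefit only about 15% of the affected people in the community.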

    The study by Llewellyn-Jones et al in this issue of the BMJ reveals many of these problems (p 676).4 In attempting to help general practitioners to detect and care for depressed elderly people in residential care, the authors found little evidence that the doctors improved their prescribing habits. Many patients refused to participate or dropped out after entry to the study. The result was a barely detectable benefit, even among those patients who stuck with the programme. And even that small benefit came at the expense of additional resources, namely the investigators and their educational programme.

    Alas, there are more troubles here. Though this study was intended to be “community based,” that aim was compromised by the difficulty of recruiting doctors and patients, and of keeping those recruited engaged. In the end, the study doesn't tell us whether the community's mental health was improved. Sadly, these multiple barriers to doing health services research and implementing innovative health services explain why so few investigators attempt effectiveness studies. And even if they succeed, healthcare managers, planners, and politicians will want to know more than “Does it work?”: they will want to know “Is it worth it?” compared with using the resources for other needs.

    But don't despair. We're simply going through an evolutionary phase in testing interventions. Since the end of the second world war we've learned to walk, with randomised trials that assess efficacy. Trials such as the one by Llewellyn-Jones et al show that we're just now learning to run, with community trials that tackle the difficult challenges in research design and implementation that can undermine a study's feasibility or prejudice the interpretation of its findings. Issues of economic analysis are also being resolved, so that questions of efficiency can be better addressed. This progress will seem slow to researchers caught up in it and to all of us waiting for the answers, but in the history of the world we're heading for success at a blistering pace. Our progress is fuelled by efficacy studies and by researchers and governments intent on reaping the benefits they promise.

    We need more effectiveness studies to sort the fool's gold from the true gold, and efficiency studies to tell us whether the price of extraction is a bargain. Fortunately, many governments around the world are aware of the need for more and better research into health services and are providing funds for training and research development. One hopes that they will not lose heart or patience: we're going in the right direction, but trial and error are needed, along with investment in methodological research to get effectiveness and efficiency studies right.

    References

    1. Cochrane AL. Effectiveness and efficiency: random reflections on health services. London: Nuffield Provincial Hospitals Trust, 1972.
    2.
    3.
    4. Llewellyn-Jones RH, et al. Multifaceted shared care intervention for late life depression in residential care: randomised controlled trial. BMJ 1999;319:676-682.