Most of what Professor Healy describes in the above presentation will sound pretty familiar to advocates of evidence-based medicine (EBM). And as he says, he’s not really against EBM; he’s against being blind to the limitations of poor trial data. I don’t think a single advocate of EBM would disagree with that.
Not being a medical practitioner, and being interested in the methods of EBM rather than the difficulties of implementing it in medicine specifically, I cannot speak to whether anecdotal data really is dropping out of medical practice.
What does seem obvious is that good anecdotal data is good anecdotal data; the problems come when you rely on it without being aware of its limitations. We also know that good statistical analysis can trump what we learn from anecdotes, particularly for subtle effects, or where biasing factors (deliberate or not) distort the reporting of events.
It’s hard to get a full picture of what Healy is saying from a talk like this, but one of his claims that appears a little odd is that good drugs don’t need large randomised controlled trials (RCTs). In fact, very few drugs have effects large and striking enough that only small trials are needed; and when it comes to identifying adverse effects, large numbers might be required there too, so that you can work out to whom a successful drug should not be given.
One reason for this Healy identifies himself: people are heterogeneous, so a drug will work in only a percentage of them, which pushes up the size of the required cohort. Another reason for large cohorts is a drug with a subtle but important effect: statins, for example, need to be prescribed to a large population for the health service overall to see a benefit; if they can be prescribed relatively risk-free, there is a case for saying it ought to be done, but determining this requires a large study.
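To make the heterogeneity point concrete, here is a minimal sketch using the standard two-proportion sample-size approximation: as the fraction of patients who actually respond shrinks, the average effect the trial can see shrinks with it, and the required cohort balloons. All the rates and the benefit figure below are illustrative assumptions of mine, not numbers from Healy’s talk.

```python
# Sketch: heterogeneity inflates trial size. If only a fraction of patients
# respond, the average effect shrinks and the required cohort grows roughly
# with 1 / effect^2. All rates are illustrative assumptions.
from scipy.stats import norm

def n_per_arm(p_control, p_treatment, alpha=0.05, power=0.8):
    """Approximate patients per arm for a two-proportion z-test."""
    z_a = norm.ppf(1 - alpha / 2)          # critical value, two-sided test
    z_b = norm.ppf(power)                  # value for the desired power
    p_bar = (p_control + p_treatment) / 2  # pooled proportion under H0
    effect = p_treatment - p_control
    return ((z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
             + z_b * (p_control * (1 - p_control)
                      + p_treatment * (1 - p_treatment)) ** 0.5) / effect) ** 2

p_control = 0.30  # assumed recovery rate without the drug
benefit = 0.20    # assumed gain among patients who actually respond

for responder_fraction in (1.0, 0.5, 0.2):
    # Only responders gain, so the trial sees a diluted average effect.
    p_treatment = p_control + benefit * responder_fraction
    print(f"responders {responder_fraction:.0%}: "
          f"~{n_per_arm(p_control, p_treatment):.0f} patients per arm")
```

With these made-up numbers the requirement goes from roughly 90 patients per arm when everyone responds to over two thousand when only a fifth do, which is the sense in which heterogeneity, not bad practice, drives trials to be large.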
Healy seems right, however, that you shouldn’t use a larger cohort than you actually need; one way an oversized, broadened cohort can make problems disappear is by diluting an adverse effect that is concentrated in a susceptible subgroup. But really, that just seems to be a point about data-fiddling rather than a complaint fundamental to RCTs.
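Here is a toy simulation of that dilution reading. The background rate, the extra risk, and the susceptible subgroup are entirely made-up assumptions of mine: the point is only that at a fixed trial size, the pooled adverse-event signal weakens as the susceptible fraction of the cohort shrinks.

```python
# Toy illustration of dilution: a harm concentrated in a susceptible
# subgroup gives a strong signal in a focused cohort, but washes out
# when the cohort is broadened. All numbers are made up.
from scipy.stats import norm

baseline = 0.02    # assumed background adverse-event rate (control arm)
extra_risk = 0.10  # assumed additional risk in the susceptible subgroup

def z_statistic(p0, p1, n):
    """Expected two-proportion z-statistic if the true rates were observed."""
    pooled = (p0 + p1) / 2
    se = (2 * pooled * (1 - pooled) / n) ** 0.5
    return (p1 - p0) / se

n = 500                      # fixed size per arm
threshold = norm.ppf(0.975)  # ~1.96 for a two-sided 5% test
for susceptible_fraction in (1.0, 0.25, 0.05):
    p_drug = baseline + extra_risk * susceptible_fraction
    z = z_statistic(baseline, p_drug, n)
    verdict = "detectable" if z > threshold else "lost in the noise"
    print(f"susceptible {susceptible_fraction:.0%}: z = {z:.1f} ({verdict})")
```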
The lesson we can draw for evaluating chemical safety, I suppose, is that we need a healthy respect for the limitations of the kinds of data we think are good (the gold standard of the RCT in medicine, say, or good laboratory practice (GLP) in the life sciences), and also that we should not be too hastily dismissive of data which may appear weak when set next to a supposed gold standard (anecdotal evidence in medicine, or small-scale animal studies in the life sciences).
If our a priori assumptions about the intrinsic value of the source of some new information are too rigid, we will make mistakes when we come to interpret it: our assumptions will bias our interpretation. It seems obvious when stated in the abstract, but this project would not exist if there were not genuine concerns about it happening in reality.
In due course we’ll be publishing examples of data being over-played because of its source.