As most people know, there’s an FDA Adverse Event Reporting System, which is supposed to capture any sort of problem that turns up with approved drugs. Certainly if you have any kind of job in the industry, you know about it – every corporate training program includes a section explaining that if you hear about any problem with any of the company’s products, you have a legal obligation to report it as quickly as possible, or the company will face the possibility of fines and/or legal action. It’s important to have such a registry, but how good is it?
Not as good as you’d like. Every time I read about people really digging into the numbers there, a big part of the story seems to be trying to curate the raw data into a useful form. This new paper is an excellent example – it’s a collaboration between the Shoichet group at UCSF, Novartis, and Oracle Health Sciences to see what sorts of useful trends can be extracted from the FAERS data. About half the reports in there are from health care professionals, and about one-third are from the patients themselves. 3% of them are from lawyers, and 9% are so unspecified that no one seems to know where they came from at all.
By now there are over nine million reports in it, and not only does the number keep growing, the rate of growth is increasing as well. About 1% of the reports are just straight duplicates (same patient, same drug, same problem, same day), and many others are the same patient reported over multiple incidents. As with any open-reporting system, there are biases in what people will send in. That is, a death or serious injury has a much better chance of being reported than a benign reaction, and that’s reflected in the statistics (15% of the reports are deaths, whereas – last I heard – the US pharmacopeia does not kill off 15% of its customers). The tricky part, though, is that many of these reports are for drugs that are prescribed for life-threatening conditions, making it harder to assign them to the treatment as opposed to the underlying disease. About 5% of the reports actually describe the condition the drug is prescribed for as the adverse event, but that drops off steeply after 2011, which the authors attribute to a revision in FDA reporting guidelines that took place that year (and none too soon, apparently).
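To give a feel for what that curation involves, here’s a rough sketch in pandas of the sort of deduplication step being described. The single flat file and the column names are my own invention for illustration – the real FAERS quarterly extracts split this information across several linked tables keyed by case ID – so take it as a cartoon of the idea rather than the authors’ actual pipeline.

```python
import pandas as pd

# Illustrative flat file; real FAERS data comes as linked DEMO/DRUG/REAC
# tables, so these column names are hypothetical.
reports = pd.read_csv("faers_reports.csv", parse_dates=["event_date", "report_date"])

# Drop exact duplicates: same patient, same drug, same reaction, same day.
deduped = reports.drop_duplicates(
    subset=["patient_id", "drug_name", "reaction", "event_date"]
)

# Collapse follow-up reports on the same case to the most recent one,
# so a single patient's saga doesn't count as several independent events.
latest = (
    deduped.sort_values("report_date")
           .groupby("case_id")
           .tail(1)
)

print(f"{len(reports) - len(latest)} rows removed as duplicates or follow-ups")
```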
Another big problem in using the FAERS data is the way that individual drugs are mapped in it. As the paper notes, there are 378 synonyms for fluoxetine (commonly known as Prozac), and the same situation obtains for a great many other drugs. If you want to get anything useful out of drug/adverse event correlations, you have to find a way to deal with that. These authors have tried to do just that, and they find that there are 2729 unique ingredients present in the database. Interestingly, over 800 of them have never had an adverse report at all. The distribution is a long-tailed one, as you’d expect – a bit over 40% of the ingredients account for over 90% of the reports. Other open-reporting artifacts show up as well. Sorted by date, there are spikes around the first of every month and at the first of every year (which I suppose reflects a backlog during the holiday season?).
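For the synonym problem, the fix is conceptually simple even if the curation behind it is tedious: map every raw drug-name string to a single canonical ingredient before counting anything. A toy version (the mapping table here is mine, not the paper’s) might look like this:

```python
import pandas as pd

# Toy synonym map; the real curation resolves hundreds of spellings
# (e.g. 378 for fluoxetine alone) down to one active ingredient each.
synonyms = {
    "prozac": "fluoxetine",
    "fluoxetine hcl": "fluoxetine",
    "fluoxetine hydrochloride": "fluoxetine",
    "sarafem": "fluoxetine",
}

def canonical_ingredient(name: str) -> str:
    """Map a raw drug-name string to its canonical ingredient."""
    key = name.strip().lower()
    return synonyms.get(key, key)

# Small made-up example table, just to show the normalization step.
reports = pd.DataFrame({
    "drug_name": ["PROZAC", "Fluoxetine HCl", "Sarafem", "ibuprofen"],
    "reaction": ["insomnia", "nausea", "headache", "dyspepsia"],
})
reports["ingredient"] = reports["drug_name"].map(canonical_ingredient)

# Counting by ingredient rather than by raw name keeps one drug's reports
# from being scattered across hundreds of synonyms.
print(reports.groupby("ingredient")["reaction"].count())
```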
Looking at individual drugs, you can see the influence of events. The reporting trends for Vioxx (rofecoxib) definitely show myocardial infarction and cerebrovascular problems as a large percentage of adverse events right from the start, for example. The majority of these were from physicians, until the paper in 2000 that suggested MI as a consequence of the drug. After that, physician reports stayed pretty much constant, while reports from lawyers began to rise. By the time the warning box was added to the label in 2002, physician reports had actually decreased, while lawyer reports kept climbing and by then made up the majority of the records.
Can you use the FAERS data to mine out correlations that you didn’t know about before? From what I see in this paper, I’d advise caution. The authors show the comparison between rosiglitazone and pioglitazone, two closely related PPAR ligands prescribed to Type II diabetes patients. Once you round up all the reports that actually deal with these drugs, and once you strip out the ones that list Type II diabetes itself as the adverse event (not too useful), you can still see rather large differences between the two. Rosiglitazone has been linked to cardiovascular side effects much more than pioglitazone, but the public nature of this linkage clearly affects the statistics in FAERS as well – statistically, most of the difference between the two comes from a burst of reports that coincides with the publicity around the side effects. Meanwhile, pioglitazone has far more reports of bladder cancer, especially since 2009, but this appears to have a strong lawyer-driven component in the reporting. There may well be a link there, but the FAERS system is a noisy way to try to prove it. In general, the authors recommend careful attention to monthly reporting trends as a way to even out bursts of reports due to press coverage and the like. Doing so with pioglitazone versus rosiglitazone suggests that there may be something going on, lawyers and all.
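That monthly-trend idea is easy to picture as code: instead of comparing grand totals, compare the month-by-month reporting series for the two drugs, so that a single publicity-driven spike can’t carry the whole difference. A rough sketch, assuming a cleaned-up table with one row per report and the hypothetical column names below:

```python
import pandas as pd

# Hypothetical cleaned report table: one row per report, with an already
# normalized "ingredient" column and a parsed report date.
reports = pd.read_csv("faers_cleaned.csv", parse_dates=["report_date"])

pair = reports[reports["ingredient"].isin(["rosiglitazone", "pioglitazone"])]

# Monthly counts per ingredient: a genuine safety signal should persist
# across many months, while a publicity- or litigation-driven burst shows
# up as a short-lived spike.
monthly = (
    pair.groupby([pd.Grouper(key="report_date", freq="MS"), "ingredient"])
        .size()
        .unstack("ingredient", fill_value=0)
)

# Compare medians of the monthly series rather than grand totals, so one
# burst month can't dominate the comparison.
print(monthly.median())
```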
Another recommendation is that the data should be correlated, when possible, to the known pharmacokinetics of the drug being checked. Tracking across different dosages can be a reality check, as can comparing drugs with similar mechanisms but different PK. Tracking across different formulations would also be valuable, but it’s very hard to do in the present system. One gets the impression that this level of detail is rarely studied in the database, but considering the other confounding factors, it should get more use than it does.
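As a cartoon of that dose-tracking check (and assuming you’ve somehow parsed FAERS’s free-text dose fields into numbers, which is a big assumption), you could tabulate how the mix of reported events shifts with daily dose – for an exposure-dependent toxicity, the suspect event’s share of reports ought to climb with dose:

```python
import pandas as pd

# Hypothetical table where dose has been parsed into a numeric daily amount;
# in the real FAERS files dose is free text and often missing, which is
# exactly why this check is hard to run in practice.
reports = pd.read_csv("faers_with_dose.csv")

drug = reports[reports["ingredient"] == "rosiglitazone"]

# Fraction of each reported reaction within each dose group: if an event is
# really drug-driven (and exposure-dependent), its share should track dose.
by_dose = (
    drug.groupby("daily_dose_mg")["reaction"]
        .value_counts(normalize=True)
        .rename("fraction_of_reports")
)
print(by_dose.head(20))
```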
In the end, the authors propose that the first thing FAERS needs is a more rational approach to drug synonyms, and they suggest their own work in clearing up the tangle as a starting point. A more interactive and automated form of reporting would also be useful, to reduce reporting errors and make the data easier to handle in the end. Until these (and other) improvements are made, what you take away from reading this paper is that the whole adverse-event reporting system is something of a missed opportunity: it could be far more useful than it is.