

Adverse Events: A Look Under the Hood

As most people know, there’s an FDA Adverse Event Reporting System, which is supposed to capture any sort of problems that turn up with approved drugs. Certainly if you have any kind of job in the industry, you know about it – every corporate training program includes a section about how if you hear about any problem with any of the company’s products, you have a legal obligation to report it as quickly as possible, or the company will face the possibility of fines and/or legal action. It’s important to have such a registry, but how good is it?

Not as good as you’d like. Every time I read about people really digging into the numbers there, a big part of the story seems to be trying to curate the raw data into a useful form. This new paper is an excellent example – it’s a collaboration between the Shoichet group at UCSF, Novartis, and Oracle Health Sciences to see what sorts of useful trends can be extracted from the FAERS data. About half the reports in there are from health care professionals, and about one-third are from the patients themselves. 3% of them are from lawyers, and 9% are so unspecified that no one seems to know where they came from at all.

By now there are over nine million reports in it, and not only does the number keep growing, the rate of growth is growing as well. About 1% of the reports are straight duplicates (same patient, same drug, same problem, same day), and many others are the same patient reported over multiple incidents. As with any open-reporting system, there are biases in what people will send in. That is, death or serious injury has a much better chance of being reported than a benign reaction, and that’s reflected in the statistics (15% of the reports are deaths, whereas – last I heard – the US pharmacopeia does not kill off 15% of its customers). The tricky part, though, is that many of these reports are for drugs that are prescribed for life-threatening conditions, making it harder to assign them to the treatment as opposed to the underlying disease. About 5% of the reports actually describe the condition the drug is prescribed for as the adverse event, but that drops off steeply after 2011, which the authors attribute to a revision in FDA reporting guidelines that took place that year (and none too soon, apparently).
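The "same patient, same drug, same problem, same day" duplicate check described above is straightforward to express in code. Here's a minimal sketch; the field names (`case_id`, `drug`, `reaction`, `event_date`) are illustrative stand-ins, not the actual FAERS schema.

```python
# Collapse exact duplicates: keep the first report for each
# (patient, drug, reaction, date) combination.
reports = [
    {"case_id": "A1", "drug": "fluoxetine", "reaction": "nausea",
     "event_date": "2015-06-01"},
    {"case_id": "A1", "drug": "fluoxetine", "reaction": "nausea",
     "event_date": "2015-06-01"},   # exact duplicate of the first
    {"case_id": "B2", "drug": "rofecoxib", "reaction": "myocardial infarction",
     "event_date": "2001-03-15"},
]

def dedupe(reports):
    """Drop reports whose key fields exactly match an earlier report."""
    seen = set()
    unique = []
    for r in reports:
        key = (r["case_id"], r["drug"], r["reaction"], r["event_date"])
        if key not in seen:
            seen.add(key)
            unique.append(r)
    return unique

print(len(dedupe(reports)))  # 2
```

Note that this only catches exact duplicates; the "same patient over multiple incidents" cases the paper describes would need fuzzier matching than a set-membership test.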

Another big problem in using the FAERS data is the way that individual drugs are mapped in it. As the paper notes, there are 378 synonyms for fluoxetine (commonly known as Prozac), and the same situation obtains for a great many other drugs. If you want to get anything useful out of drug/adverse event correlations, you have to find a way to deal with that. These authors have tried to do just that, and they find that there are 2729 unique ingredients present in the database. Interestingly, over 800 of them have never had an adverse report at all. The distribution is a long-tailed one, as you’d expect – a bit over 40% of the ingredients account for over 90% of the reports. Other open-reporting artifacts show up as well. Sorted by date, there are spikes around the first of every month, and at the first of every year (which I suppose reflects a backlog during the holiday season?).
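The synonym problem comes down to collapsing the many names a drug appears under in the raw reports to one canonical ingredient. A toy sketch of the idea, with a three-entry lookup table standing in for the thousands of curated mappings the authors actually built:

```python
# Map raw drug names to a canonical ingredient. The table here is
# illustrative; fluoxetine alone has 378 synonyms in FAERS.
SYNONYMS = {
    "prozac": "fluoxetine",
    "fluoxetine hcl": "fluoxetine",
    "fluoxetine hydrochloride": "fluoxetine",
}

def canonical(name: str) -> str:
    """Normalize case/whitespace, then look up a canonical name."""
    key = name.strip().lower()
    return SYNONYMS.get(key, key)

raw = ["Prozac", "FLUOXETINE HCL", "fluoxetine", "rofecoxib"]
print(sorted({canonical(d) for d in raw}))  # ['fluoxetine', 'rofecoxib']
```

Four raw strings collapse to two ingredients, which is exactly the kind of consolidation you need before counting drug/adverse-event pairs means anything.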

Looking at individual drugs, you can see the influence of events. The reporting trends for Vioxx (rofecoxib) definitely show myocardial infarction and cerebrovascular problems as a large percentage of adverse events right from the start, for example. The majority of these were from physicians, until the paper in 2000 that suggested MI as a consequence of the drug. After that, physician reports stayed pretty much constant, while reports from lawyers began to rise. By the time the warning box was added to the label in 2002, physician reports had actually decreased, but lawyer reports continued to rise further and by then made up the majority of records.

Can you use the FAERS data to mine out correlations that you didn’t know about before? From what I see in this paper, I’d advise caution. The authors show the comparison between rosiglitazone and pioglitazone, two closely related PPAR ligands prescribed to Type II diabetes patients. Once you get all the reports rounded up that actually deal with these drugs, and once you strip out the ones that say that they cause Type II diabetes, which are not too useful, you can still see rather large differences between the two. Rosiglitazone has been linked to cardiovascular side effects much more than pioglitazone, but the public nature of this linkage clearly affects the statistics in FAERS as well – statistically, most of the difference between the two comes from a burst of reports before 2002 that coincide with the publicity around the side effects. Meanwhile, pioglitazone has far more reports of bladder cancer, especially since 2009, but this appears to have a strong lawyer-driven component in the reporting. There may well be a link there, but the FAERS system is a noisy way to try to prove it. In general, the authors recommend careful attention to monthly reporting trends as a way to try to even out bursts of reports due to press coverage and the like. Doing so with pioglitazone versus rosiglitazone suggests that there may be something going on, lawyers and all.
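The monthly-trend idea above can be sketched very simply: bucket reports by month and flag months whose counts spike well above the median, which is one crude way to separate a steady background rate from a publicity-driven burst. The dates and the 3x threshold here are illustrative, not the paper's method.

```python
# Bucket report dates by month and flag burst months.
from collections import Counter
from statistics import median

dates = (["2001-05"] * 4 + ["2001-06"] * 5
         + ["2001-07"] * 40        # burst month (think press coverage)
         + ["2001-08"] * 6)

monthly = Counter(dates)                      # month -> report count
baseline = median(monthly.values())           # typical monthly volume
bursts = [m for m, n in sorted(monthly.items()) if n > 3 * baseline]
print(bursts)  # ['2001-07']
```

A real analysis would compare the drug's trend against a comparator drug's trend month by month, rather than just thresholding one series, but the bucketing step is the same.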

Another recommendation is that the data should be correlated, when possible, to the known pharmacokinetics of the drug being checked. Tracking across different dosages can be a reality check, as can drugs with similar mechanisms but different PK. Tracking across different formulations would also be valuable, but it’s very hard to do in the present system. One gets the impression that this level of detail is rarely studied in the database, but considering the other confounding factors, it should get more use than it does.

In the end, the authors propose that the first thing that FAERS needs is a more rational approach to drug synonyms, and they suggest their own work in clearing up the tangle as a starting point. A more interactive and automated form of reporting would also be useful, to reduce reporting errors and make the data easier to handle in the end. Until these (and other) improvements are made, what you take away from reading this paper is that the whole adverse-event reporting system is something of a missed opportunity: it could be far more than it is.

9 comments on “Adverse Events: A Look Under the Hood”

  1. Matt says:

    What about the fda’s sentinel initiative?

    Woodcock pumped this like crazy a few years back.

  2. Me says:

    You see a lot of this on the pharmacovigilance side of regulatory filings: when the agency (whichever – the other PV platforms are no better than FAERS) requests you assess a safety signal, you need to filter out the garbage, and end up paying attention to things like physician-reported cases where there is a clear temporal relationship and a positive dechallenge (i.e. symptoms recede when treatment discontinues and return when treatment resumes). Sometimes you see a report of an adverse event 3 weeks after taking the drug.
    Once I worked on a hay fever drug where there was a ‘burst’ in reporting of a specific adverse event, which ended up being correlated to a post on a ‘natural health’ blog trying to sell a ‘herbal’ hay fever remedy, which stated that the drug in question caused this adverse event (even though it had only sporadically been reported before).
    Definitely a mixed bag

  3. Bruce Preston says:

    In my experience clusters around the first-of-year and first-of-month are typically a result of incompletely specified dates; “June 2015” becomes “June 1st 2015” and plain “2015” becomes “Jan 1st 2015”.

    1. RM says:

      The “Null Island” effect.

  4. David Young, MD says:

    From the perspective of a physician, the FDA adverse reporting initiative is a joke. It works like this. The pharmaceutical company provides a free access number for patients who are put on a new anticancer drug. Patients are encouraged to call the number for help with taking their new medicine. Patients report typical side effects that are well known for the drug, or for other anticancer drugs that are taken in combination. Next, the pharmaceutical company sends adverse reporting forms to the physician to fill out. Physicians end up reporting additional instances of well known side effects such as nausea, diarrhea and skin rash. The forms are about 6 pages and take about 20 minutes to complete, but that is if one is just guessing the answers. If I really wanted to report the exact dates, doses and forms of each of the many other medications that the patient is on, it would take 2, 3 or even four hours to complete the form. So, the reports to the FDA (the ones that we don’t toss in the waste basket) are full of limited information, some inaccurate, about well known side effects. (Yes, I had to fill one out reporting that capecitabine can cause palmar-plantar erythema.)

    On the other hand, I once had a patient whose blood counts were perfectly normal on her chronic Lamictal made by generic company X, only to have the blood counts drop sharply 5 weeks after being switched to generic company Y, and later recover over 6 weeks when she was switched back to company X’s Lamictal. I spent 2 hours writing a complete summary of this to the FDA….. and never heard back.

    1. Emjeff says:

      Your tax dollars at work, Doctor.

  5. lennyjoels says:

    Agree with Dr Young. When patients hear that 1) the FDA will not investigate their report of an adverse event, 2) their report will be shared with the pharmaceutical company or device manufacturer who will contact them, and 3) their privacy is not guaranteed, they don’t want a report made.

  6. Scott Stewart says:

    I have something of the same experience on the veterinary side. NSAIDs (especially carprofen) get blamed for all sorts of things that are not likely the issue, and I spend a maximum of 30 minutes of unpaid time helping get the data and telling the company (and by extension the FDA) “No, I do not think drug x caused side effect y.”

    The only time I ever saw it work the way it was supposed to was with a flea and tick product where the post-approval vigilance picked up a significant increase in pemphigus foliaceus, and it was removed from the market.

  7. Anon says:

    I wonder if this system ever picked up a link between talc and cancer.

Comments are closed.