There’s a truly disturbing paper out in PLoS ONE with potential implications for a lot of assay data out there in the literature. The authors are looking at the results of biochemical assays as a function of how the compounds are dispensed in them, pipet tip versus acoustic, which is the sort of idea that some people might roll their eyes at. But people who’ve actually done a lot of biological assays may well feel a chill at the thought, because this is just the sort of you’re-kidding variable that can make a big difference.
Dispensing and dilution processes may profoundly influence estimates of biological activity of compounds. Published data show Ephrin type-B receptor 4 IC50 values obtained via tip-based serial dilution and dispensing versus acoustic dispensing with direct dilution differ by orders of magnitude with no correlation or ranking of datasets.
Lovely. There have been some alarm bells sounded before about disposable-pipet-tip systems. The sticky-compound problem is always out there, where various substances decide that they like the plastic walls of the apparatus a lot more than they like being in solution. That’ll throw your numbers all over the place. And there have been concerns about bioactive substances leaching out of the plastic. (Those are just two recent examples – this new paper has several other references, if you’re worried about this sort of thing).
This paper seems to have been set off by two recent AstraZeneca patents on the aforementioned EphB4 inhibitors. In the assay data tables, these list assay numbers as determined via both dispensing techniques, and they are indeed all over the place. One of the authors of this new paper is from Labcyte, the makers of the acoustic dispensing apparatus, and it’s reasonable to suppose that their interactions with AZ called their attention to this situation. It’s also reasonable to note that Labcyte itself has an interest in promoting acoustic dispensing technology, but that doesn’t make the numbers any different. The fourteen compounds shown are invariably less potent via the classic pipet method, but by widely varying factors. So, which numbers are right?
The assumption would be that the more potent values have a better chance of being correct, because it’s a lot easier to imagine something messing up the assay system than something making it read out at greater potency. But false positives certainly exist, too, so the authors used the data set to generate a possible pharmacophore for the compound series using both sets of numbers. And it turns out that the acoustic-dispensing data give you a binding model that matches pretty well with reality, while if you use the pipet data you get something broadly similar, but missing some important contributions from hydrophobic groups. That, plus the fact that potency correlates with logP in the acoustic-derived numbers (but not so much in the pipet-derived ones), makes it look like the sticky-compound effect might be what’s operating here. But it’s hard to be sure:
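To make the logic of that last point concrete, here’s a minimal sketch of the kind of comparison involved: take each compound’s IC50 by both methods, compute the fold-shift (how much less potent it looks by pipet tip), and check whether that shift tracks lipophilicity. The numbers below are purely illustrative, not the paper’s data, and the sticky-compound interpretation is the one hedged in the text above.

```python
import math

# Hypothetical illustration only -- these are NOT the values from the
# AstraZeneca patents or the PLoS ONE paper.
# Each entry: (logP, tip-based IC50 in nM, acoustic IC50 in nM)
compounds = [
    (1.5,   120.0, 60.0),
    (2.5,   900.0, 90.0),
    (3.5,  5000.0, 100.0),
    (4.5, 20000.0, 80.0),
]

# Fold-shift: how much weaker each compound looks via tip-based dispensing.
xs = [logp for logp, _, _ in compounds]
ys = [math.log10(tip / acoustic) for _, tip, acoustic in compounds]

def pearson(x, y):
    """Plain Pearson correlation coefficient (no external libraries)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson(xs, ys)
# A strong positive r here would be consistent with lipophilic (sticky)
# compounds adsorbing to the plastic tips and reading out as less potent.
```

With real project data the picture would be noisier, of course; the point is only that a systematic logP-dependent fold-shift is exactly the fingerprint you’d expect if compound loss to the plastic were the culprit.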
No previous publication has analyzed or compared such data (based on tip-based and acoustic dispensing) using computational or statistical approaches. This analysis is only possible in this study because there is data for both dispensing approaches for the compounds in the patents from AstraZeneca that includes molecule structures. We have taken advantage of this small but valuable dataset to perform the analyses described. Unfortunately it is unlikely that a major pharmaceutical company will release 100’s or 1000’s of compounds with molecule structures and data using different dispensing methods to enable a large scale comparison, simply because it would require exposing confidential structures. To date there are only scatter plots on posters and in papers as we have referenced, and critically, none of these groups have reported the effect of molecular properties on these differences between dispensing methods.
Some of those other references are to posters and meeting presentations, so this seems to be one of those things that floats around in the field without landing explicitly in the literature. One of the paper’s authors was good enough to send along the figure shown, which brings some of these data together, and it’s an ugly sight. This paper is probably doing a real service in getting this potential problem out into the cite-able world: now there’s something to point at.
How many other datasets are hosed up because of this effect? Now there’s an important question, and one that we’re not going to have an answer for any time soon. For some sets of compounds, there may be no problems at all, while others (as that graphic shows) can be a mess. There are, of course, plenty of projects where the assay numbers seem (more or less) to make sense, but there are plenty of others where they don’t. Let the screener beware.
Update: here’s a behind-the-scenes look at how this paper got published. It was not an easy path into the literature, by any means.
Second update: here’s more about this at Nature Methods.