Here’s a thing about research (and drug discovery in particular) that makes it a bit different from many other occupations: you can go for extended periods without even being sure that you’re doing what you’re supposed to be doing. This thought came to mind yesterday when (on Twitter) Ash Jogalekar quoted a biotech veteran as saying that the most likely result of a high-throughput screening campaign was nothing and that the second most likely result was crap. Michael Gilman (biotech veteran himself) then chimed in to say that he would definitely prefer the “nothing”, because digging through all the crap was so deadly.
I can endorse both of those viewpoints, but I wanted to add some refinements. First off, it’s actually pretty unusual to get flat-out nothing from a screen. In fact, a real blank would strongly suggest that something went wrong, because you always get some sort of result, even if it’s just noise. That’s why any new screening campaign generally starts off with a test set – X hundred or X thousand compounds that get run as a pilot. If the output is flat zero for every well, something is very likely wrong. On the other end of the scale, if you get a 10% solid hit rate, something is definitely wrong. A more believable raw hit rate is something below 1% for a lot of targets (more if you’re screening a target class that you know the test deck has liked in the past, of course). But a 10% hit rate for what’s supposed to be a randomish selection of compounds is just not going to happen; your assay window is too wide or your assay format is just messed up at a fundamental level.
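The sanity check above can be sketched as a few lines of code. This is just an illustration of the reasoning, not a real triage tool – the function name and the thresholds (flat zero, below 1%, at or above 10%) are my own, taken from the rough numbers in the paragraph:

```python
# Illustrative sanity check on a pilot screen's raw hit rate.
# The thresholds here are the rough rules of thumb from the text,
# not universal cutoffs -- real ones depend on target and assay.

def hit_rate_sanity(n_hits: int, n_compounds: int) -> str:
    """Classify a raw hit rate from a pilot screening run."""
    rate = n_hits / n_compounds
    if n_hits == 0:
        return "flat zero: the assay is probably broken"
    if rate >= 0.10:
        return "suspiciously high: check the assay window and format"
    if rate < 0.01:
        return "plausible raw hit rate"
    return "on the high side: inspect the hits for artifacts"

print(hit_rate_sanity(0, 2000))    # nothing at all from the pilot set
print(hit_rate_sanity(350, 2000))  # 17.5% "hits"
print(hit_rate_sanity(12, 2000))   # 0.6%
```

A hit rate in the awkward middle (say, 2–5%) is the case the rest of this post is about: it could be real, and it could be crap.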
“Nothing” in screening terms generally means “nothing more than the usual crap”. Every assay technique is vulnerable to false positives. Some of these are specific to a particular readout – intrinsically fluorescent or fluorescence-interfering compounds, for example – and some of them (aggregators) can mess up a whole range of assays. You should, then, expect to see your old friends. If you’re running a luciferase-driven assay, for example, you surely have some luciferase inhibitors in your collection (every collection has some!). If they don’t show up as “hits”, you have a problem.
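That “old friends” logic – compounds that light up across many unrelated assays are probably artifacts rather than real actives – is simple enough to sketch. Everything here is hypothetical: the compound IDs are made up, and the cutoff of three assays is arbitrary for the example:

```python
# Illustrative frequent-hitter flagging: a compound that scores as a
# "hit" in many unrelated historical assays is likely an artifact
# (an aggregator, a luciferase inhibitor, a fluorescent compound...).
# Compound IDs and the min_assays cutoff are invented for this sketch.
from collections import Counter

def flag_frequent_hitters(hit_lists, min_assays=3):
    """Return compounds appearing as hits in >= min_assays assays."""
    counts = Counter(cpd for hits in hit_lists for cpd in set(hits))
    return {cpd for cpd, n in counts.items() if n >= min_assays}

historical_hits = [
    {"CMP-001", "CMP-104"},            # assay 1
    {"CMP-001", "CMP-233"},            # assay 2
    {"CMP-001", "CMP-104", "CMP-987"}, # assay 3
]
print(flag_frequent_hitters(historical_hits))  # {'CMP-001'}
```

In practice this job is done with curated substructure filters and historical screening databases, but the underlying idea is the same: count how often a compound “hits”, and be suspicious of the ones that hit everything.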
What about an assay that returns a few hundred compounds even after the frequent hitters for that assay technique have been identified? That is the situation I described in the opening paragraph, because that’s research: you have no idea whether those compounds are legitimate hits or a pile of junk generated by some false-positive mechanism that you don’t know about yet. The newer and more exciting the assay technique, the greater the chances that this has happened. The wilder and more squirrelly the target, the greater the chances for false positives, too, since the intrinsic hit rate is likely quite small.
The kicker is that wilder and trickier targets tend to get screened in newer or more complex assay formats and against weirder compound collections, because they’ve either already been tried in the more normal stuff or there’s nothing else that has a hope of working. So you’re getting it from both directions. But your only choice is to start working your way through the pile, because that’s what you came here for, right? To find a drug lead? This is the situation that Gilman is describing – weeks or even months of chasing things down class by class, compound by compound. Slam the doors, kick the tires, do it again.
There’s another problem that occurs more often at the frontier, too: lack of control compounds. You would like to test your assay(s) with a compound that’s known to do what you’re looking for, to make sure that everything’s working and that you can find such things, but what if no such compounds exist yet? You’re left trying to develop assays in the hopes that they’ll do what you want – I mean, they look like they should work – but you’re never quite sure. In these cases, if all you get is a collection of odds and ends when you run the screen, is that because it’s a hard target and not much was going to show up anyway? Or is it because there’s something wrong with your screen? These are, sadly, not mutually exclusive. Positive controls and validated assays have got to come from somewhere, though. . .
The “Are we doing this right?” feeling does not go away after the screen, either – in fact, dealing with it is an important part of being able to do research. Have you picked the right compound series to expand on, or is it a dead end that needs to be abandoned? Are you screening in the appropriate cell lines? How about that in vivo model that you’re heading for – do you trust it? You’re going to have to trust something at that level, because that’s what’s going to recommend a compound to the clinic. Oh God, the clinic. Is the whole idea behind the project even sound enough to have effects in humans? The Phase II failure rate shows that the answer to this question is often “Nope”, and many of those folks thought they’d answered their questions as well as they could, too, before the ground truth landed.
There is, in the end, only one way to answer such questions and deal with such doubts: run the flippin’ experiments. Set them up as well as you can, with all the brainpower and effort you can bring to the task, and then run them and see what you get. There are plenty of jobs where you know what’s going to happen, but you chose this one instead!