The pharmaceutical industry is overdue for another crazy idea. Go back to the mid/late 1980s, for example, and you hit the first swell of computer-aided “rational” drug design. It’s hard to remember now, but the hype was that random screening and old-fashioned analog synthesis were both dead. Shooting arrows blindly, hoping to hit your target? Stale! Armed with the structures of the enzymes and receptors, the computationally literate would zzzziiip-thunk! right into the bull’s-eye with just a few – heck, maybe even one – compound.
That description may sound exaggerated, but it’s not far from how Vertex and others were selling themselves at the time. Lilly used to have their supercomputer facility as part of the visiting-bigwigs tour, to give you the idea that this mighty machine was cranking out wonder drugs even as you watched. Agouron used to take folks into their wide-screen 3-D room to watch molecules docking with receptors in a sort of med-chem IMAX experience. All this really put the fear into people, and I don’t think there was a single company that didn’t feel the cold wind blowing on them.
The way you dealt with that was to go out and buy some hardware, and hire some people who seemed to know how to use it. Pretty soon, you could see beautiful electron-density-map renditions of your very own molecules, docking into your very own drug targets. Was that really the structure of your compound as it met the protein? Heck, was that really the structure of the protein as it met your compound? Well...
Did any of this get anywhere? Not the first wave, that’s for sure. The technology was relentlessly oversold. Looking back on it, the idea of doing computational drug design with, say, 1987 technology is good for a sardonic chuckle. We have enough trouble doing it with 2002 technology, thank you very much.