We’re seeing a lot of bivalent molecules in drug discovery these days, especially with the popularity of bifunctional protein degrader ligands. The general structure of such a molecule is (ligand)-linker-(ligand), with the two ligands chosen (in the case of targeted protein degradation) to bring a ubiquitin ligase complex up close to some protein you’ve marked for destruction. But there are a lot of other bifunctional species that fit the same pattern. Here’s a look back at the history of G-protein-coupled-receptor ligands set up that same way.
It goes back to 1982 work from the lab of Phil Portoghese (a name that medicinal chemists will definitely be familiar with!) that took a naltrexone derivative (naltrexamine) and put molecules of it on each end of a series of polyethylene glycol (PEG) linkers, to try to bind simultaneously to adjacent opioid receptors. 1982 was not a time when you did this with membrane preps of cloned receptors – that first paper is done with guinea pig ileum and mouse vas deferens tissue preps, and I’m pretty sure the present authors (NIDA-NIH) are putting that detail in there just to raise the eyebrows of the later generations.
I was an undergraduate chemistry student when this work appeared, and was not a regular J. Med. Chem. reader at the time, although I do remember seeing some of the follow-up papers as the years went on. It’s worth remembering that when this research started, no one had any idea that GPCR dimers or oligomers might be defined functional units – this was an effort to see how far apart the receptors might be, just for starters. That first paper did conclude that they were in fact getting more binding than the sum of the parts and that this did depend on the length of the linker. Those two conclusions, in fact, continue to hold up for the general bifunctional-molecule field, and with many of the same qualifications and complications that Portoghese encountered during the 1980s.
For example, see this 2007 paper from the Whitesides group, looking at effective molarity and linker length. In that case, they’re tying a known carbonic anhydrase ligand to the protein covalently at a known distance from the binding site and giving it various tether lengths to find its way. The fundamental lesson is that too long a linker can do you some harm (lots of floppy entropy to overcome), but too short a linker is deadly, because you’re just not going to reach the binding site at all. Targeted protein degradation projects have seen similar effects.
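To make that too-short/too-long tradeoff concrete, here’s a toy back-of-the-envelope calculation – not the Whitesides analysis itself, just the standard ideal Gaussian-chain estimate of effective concentration from polymer physics. The 0.35 nm segment length (roughly one PEG monomer) and the 2 nm site-to-site gap are illustrative assumptions, not measured values from any of the papers above:

```python
import math

def c_eff_molar(n_units, d_nm, b_nm=0.35):
    """Effective concentration (in mol/L) of one chain end at a distance
    d_nm from the other, for an ideal Gaussian chain of n_units segments,
    each of length b_nm.

    Uses the Gaussian end-to-end probability density
        P(d) = (3 / (2*pi*n*b^2))^(3/2) * exp(-3*d^2 / (2*n*b^2))  [nm^-3]
    and converts molecules/nm^3 to molar (1 nm^3 = 1e-24 L).
    """
    nb2 = n_units * b_nm ** 2
    p = (3.0 / (2.0 * math.pi * nb2)) ** 1.5 * math.exp(-3.0 * d_nm ** 2 / (2.0 * nb2))
    return p * 1.0e24 / 6.022e23  # molecules per nm^3 -> mol/L

# Hypothetical 2 nm gap between binding sites; sweep linker lengths.
for n in (5, 33, 300):
    print(f"{n:3d} units: {c_eff_molar(n, 2.0):.2e} M")
```

Even in this crude model the lesson falls out: a 5-unit linker barely reaches (effective concentration in the tens of micromolar), an intermediate one peaks in the tens of millimolar, and a very long one gives the floppy-entropy dilution penalty back again. Real systems are messier, but the shape of the curve is the same.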
But there are other effects laid on top of those. It would be easy to imagine such linkers as just inert spacers, thingies that hold the business ends of these molecules apart. But that would be a mistake, because the linkers can participate in the binding events, too. The Sharpless group’s first report in 2002 of a ligand formed by in situ click chemistry is an example. Two ligands for the far ends of acetylcholinesterase’s roomy binding site did their own azide/alkyne cycloaddition when brought into proximity in there, and formed a femtomolar inhibitor. The X-ray crystal structure showed that the triazole linker itself participated in the binding, which is where some of that impressive affinity came from (it’s worth noting that that crystal structure is in itself an interesting and complicated story).
The spanning-two-adjacent-receptors work described in the first link of this post shows some of these effects, too – the NIDA authors have done a lot of work on bifunctional dopamine receptor ligands, and they’ve found that the linker length, functionality, and flexibility have to be considered as key variables. The targeted protein degradation field can tell you a lot about that, too – length, as mentioned before, is the first consideration, but there are all sorts of cases where what seem to be similar linkers (in length and flexibility) give very different effects in cellular degradation assays. Add that to the variable effects of the ligands, and you’re in for a real fiesta: for example, you can’t just optimize your bifunctional “head groups” based on potency against the target protein, because the eventual degradation efficiency does not have to follow that order at all. Nor does the selectivity against related proteins have to translate to selective degradation; you can get surprised in both directions, and you probably will.
In fact, the TPD world is probably even crazier than the linked-receptor one, because in the latter case, you’re (at least some of the time) spanning the distance between proteins that are already naturally in proximity. GPCRs form all sorts of dimers and oligomers on the cell surface, in important patterns that we’re still trying to figure out. But targeted protein degradation is all about bringing proteins together that normally have no business with each other at all. You’re forcing some hapless target protein into the proximity of a ubiquitin ligase complex that just assumes that hey, here’s another protein next to me, let’s do that voodoo that I do and ubiquitinate the crap out of it. I speak technically here, you understand. Normally this ligase complex wouldn’t even be seeing your target protein, but you’re trying to rewire that system for fun and profit.
That means that the ternary complex (target protein, bifunctional degrader, and ubiquitin ligase) is a wild frontier of molecular interaction. Some things are going to match up as these species are brought together, and some things are going to clash, and at the moment we really don’t have a good way to anticipate what’s going to happen. TPD remains a rather. . .empirical. . .field for now, which in practice means that you’d better try this and try that and try that other thing over there, what the heck. It would make everyone feel better if that weren’t the case, and we’d all be far more efficient, steely-eyed protein degradation masters sitting in mission control and pointing out targets, but that is a vision for the future. For now, it’s similar to the traditional ag-chem development model of “spray and pray”.
One last note on the whole bifunctional idea: when I used to see papers with such linked molecules in them back in the early 1990s, in my first years in med-chem, I would (being honest here) just roll my eyes. The whole idea seemed too simplistic, too academic, and too odd. You couldn’t just go around sticking two molecules together with a little connecting chain and expect that to work, right? And if it worked in an in vitro assay, well OK, nice paper, but you couldn’t just go around trying to turn such weirdo molecules into drugs, right? Wrong, wrong, and wrong. As discussed above, it’s actually a complicated thing to get to work, but it can be a really good idea if you have a good reason for sticking two proteins together. It’s quite possible that drug-discovery progress in this area was delayed during the rule-of-five years, when people were terrified of high molecular weights, but since that mindset has been breaking down under the weight of evidence, people are a lot more willing to explore the larger species needed for linked bifunctionals. Don’t overlook, though, a less specific thing that slowed these ideas down: people simply thought that these molecules looked strange.