I’d class this letter to Nature as “interesting but wrong”. Here’s the argument:
. . .A patent is granted only when a compound’s application can be classified as both ‘new’ and ‘invented’. A highly effective compound thrown up by an AI algorithm could indeed be new. Whether it is ‘invented’, however, is debatable. This is because the inventor might be considered as either the algorithm (so not a person) or its programmer.
It could be argued that if there is a connection between the program and the compound’s structure, then it is predictable by experts and so no longer inventive. Or, if the programmer can’t explain how the AI algorithm found the structure, then he or she didn’t invent anything. . .
Actually, the two key things a patent is granted for are “novelty” and “utility”, but I’ll keep going, because the letter’s line of thought is not completely crazy. When a lead-discovering AI program – we’re stipulating that such a thing will exist for the purpose of argument, although it doesn’t quite exist yet – suggests a useful compound de novo, that compound does not yet exist. You need physical data to prove that you made what you’re claiming in your patent, and you need data to show that it’s useful for what you’re claiming as well. Without these, compounds in a patent are not exemplified, and are protected (if that’s the word) only by a layer of legal tissue paper. And remember, drug industry patents are generally directed towards chemical matter: we claim these new compounds (novelty) that are good for this (utility). Now, when Person A has an idea for a new compound and tells Person B to go make it, and it works, Person B is not the inventor. Person A is. Doing just what someone else told you to do is not an inventive step. So that’s where this letter-writer is coming from: if an AI tells you to make the compound and you make it, you’re not the inventor, and the inventor is. . . ?
But hold on. We’re talking about a fancier version of virtual screening here, and no one thinks that virtual screening algorithms are destroying the patent system. One big reason is that the AI will not provide the final drug structure. I’m sure it will do its best, but hey, so do we, and time and chance happeneth to them all. What it will provide, one hopes, are very interesting lead compounds that will then be taken on by human drug developers. The final compound will look different.
That’s because an AI program will be doing very well indeed to come up with potent structures against a single target protein – it won’t be optimizing for oral bioavailability, toxicity, plasma half-life, avoidance of active metabolites, heterogeneity in liver enzymes among the patient population, interaction with other common drugs in that population, ease of industrial synthesis, development of a useful formulation, avoidance of less useful (but more stable) polymorphic forms, stability on storage. . .and a big pile of other issues that actual drug development has to deal with all the time. No, by the time all that gets worked out, there will have been some inventive steps taken by human inventors.
And that last line quoted above doesn’t quite work, either: when you run a high-throughput screen, can you explain why one particular compound type hit while the others didn’t do as well? Only ex post facto – otherwise, why did you run the screen? We often can’t explain why one structure is so much better than another: but discovering one is indeed an invention. A reductionist view would be that the AI is no more responsible for the invention than was a multichannel pipet or a fluorescent plate reader. All of these are tools, used by an inventive human to discover something.
So a human will use a new, powerful tool to help discover that compounds of a certain type are active against a given target. Other humans will discover further chemical matter of that sort that will work even better in real cells, real animals, and humans, and that chemical matter will be the subject of a patent. And the pharmaceutical industry’s patent structure will hold up just fine.