
Drug Industry History

Two Tribes

I’m sitting in an MIT conference on AI in drug discovery/development as I write this. One of the speakers here (Mathai Mammen, J&J/Janssen) just made a good point – not a new one, but a solid one that deserves some thought. He called for “bilingual” people, by which he means people who have some fluency in data science and some fluency in one or more of the various fields that make up drug research.

That split has been a noticeable fact over my whole career. In my own field, there’s always been a gap (greater or lesser depending on the people and the circumstances) between the molecular modelers and the medicinal chemists. The cohort that really bridges the two was small when I started in the industry in 1989 – as well it might be – and although it’s larger now, it hasn’t grown as much as you might think. They’re still two separate fields, and their practitioners are still very capable of talking past each other. I remember hearing, years back, a prediction that over time every medicinal chemist would just naturally have computational chemistry as part of their tool kit, and that distinction between the tribes would disappear. Hasn’t happened.

Why not? There are several reasons. The sorts of people who get interested in these areas tend to be a bit different from the start, I think. People tend to get into both the subject matter of a given field and the tools and technology used to realize it. Lead guitarists tend to know an awful lot about the different brands of electric guitars, pickups, slides, amplifiers and so on. Oil painters develop a lot of opinions, backed up by experience, about different brands of paint, canvas, types of brushes, and all the rest. Meanwhile, medicinal chemists tend to be people who get into (or got into!) organic synthesis, biochemistry, chemical biology and such, and have both knowledge of and affection for the tools of those trades. And for their part, the folks who really know and practice molecular modeling tend to be people who really got into. . .coding. Programming, algorithmic optimization, data handling, with a strong side interest in computer hardware and various software packages and tools. These are different people. We need both types (and more besides!), but pretending that they’re not different people with different interests is not useful.

Another reason that these populations haven’t converged, I think, is that they haven’t felt the need to. For one thing, both groups have been content to let the specialists in the other amass the knowledge needed to do their jobs – there are only so many hours in the day and so many neurons in one’s head. And to be really honest about it, neither group has felt that a detailed knowledge of the other’s field is necessarily worth the effort. Most medicinal chemists, as mentioned, get into the area via organic chemistry. We know how to make molecules (and by “know” I mean both intellectually and through physical experience), and later on we discover how to apply that knowledge to work in drug research. A computational chemist will (rightly) not see the point in knowing the ins and outs of metal-catalyzed couplings, which purification techniques are the first to try, which sorts of assays tend to give more actionable data, all the bench-level stuff that is the foundation for most of their med-chem colleagues. Likewise, the med chem crowd has not seen any advantage to learning about the efficiencies of various sorting and sampling methods, how to configure a computational task to take advantage of GPU hardware, or the strengths and weaknesses of the various levels and flavors of quantum-mechanical approximations.

At an even higher level, one that’s even more uncomfortable to face, medicinal chemists would probably get more interested in learning the details if the computational approaches produced useful recommendations more often. Or perhaps if they hadn’t been burned as many times in the past. To be sure, the computational chemists would get more interested in the nuts and bolts of organic chemistry and assays if that knowledge had a chance of actually making their lives easier, too. As it stands, there’s a “Why should I” aspect that’s hard to get over.

But that’s the promise of machine learning/AI: that it might actually start providing some attention-getting answers to “Why should I”? If ML can start spitting out actionable predictions and insights that we wouldn’t have seen on our own, well, attention is going to get paid. We may well be on the threshold of that now – many ML/AI people would say that we passed that threshold a while back, but the real test is convincing a skeptical audience from outside your field. As much as I can sound like a skeptical member of that skeptical crowd, I will be very happy indeed if the convincers start coming. We’re about to find out.

54 comments on “Two Tribes”

  1. lab_tech says:

    “computational chemists would get more interested in the nuts and bolts of organic chemistry and assays if that knowledge had a chance of actually making their lives easier”

    The problem with AI and organic synthesis is testing even the most modest predictions IRL (not to speak of the more grandiose claims). I think if most of the AI-for-drug-discovery crowd spent a few weeks having to weigh out reagents, clean glassware, purify compounds, babysit the rotavap, prepare samples for analysis, etc., they would give up hope. Even the efforts toward “automation” don’t alleviate most of these aspects: even for well-established systems like peptide synthesis, one still has to weigh out all the reagents, clean up afterwards, and do lots of little things that are difficult to automate. And good bioassays can be even more complicated. I’d love a closed-loop drug discovery system as much as anyone, but without a practical way for non-specialists to produce the compounds, we might be stuck with AI for drug repurposing, as shown recently by the nice MIT work.

    1. c says:


      I know the grueling nature of experimental chemistry; I used to do it. I know how chemists cling to it as a badge of value. I’ve switched over to software development now.

      I think the average GPU-jock/comp-chemist is mostly just disappointed that most state-of-the-art lab chemistry amounts to thousands of hours of sloppy manual labor.

      For instance, is it not totally backwards how normal it is for scientists-in-training in the 21st century to clean their own glassware by hand (and to have their own strange opinions about how best to do it)? Or, take their NMRs one at a time while sitting idly at the console? Or babysit some “tricky” separation?

      Chemistry pumps out thousands of PhDs with “intuition” (dogma) about atoms every year. It’s not hard for these people to avoid a real scientific education and actually end up as simply bad engineers.

      1. Kevin says:

        No one sits idly at the NMR machine waiting for their sample to run – that’s why sample changers exist! (Of course, if chemists did do this, then they might not leave their samples on the sample changer afterwards, which is a perennial headache for anyone who runs an NMR machine.)

        1. Singapore chemist says:

          I wish that was the case in my lab…

  2. Tribesman says:

    I would argue that there are actually three tribes. The computational people are split into people who can program (“code”) and people who don’t code, but instead are experts in various commercial software packages. Most computational chemists fall into the latter category. These two groups are also motivated by different things and often can talk past each other as well.

    1. Leaderr says:

      The latter group you talk about also discovers lead molecules for a given new lead-discovery program from “another lead” in your collection, or from someone else’s lead, for patent busting!

    3. Peter Kenny says:

      I would concur with ‘three tribes’ although I’d make the cut between physics-based computational chemistry and data-based computational chemistry (rather than coders and non-coders). It’s important for both computational tribes to be aware that drug design is, in essence, incremental in nature and the objective is to generate the necessary data as efficiently as possible. All three tribes would be advised to heed the words of Manfred Eigen: “A theory has only the alternative of being right or wrong. A model has a third possibility: it may be right, but irrelevant.” I have linked a blog post that may be relevant as the URL for this comment.

    4. Got Horlicks? says:

      Arguably, a 4th tribe is the persistent cargo culture tribe of drug forecasting, founded upon various ad hoc combinations of rudimentary molecular properties that endow magical integer rules for divining what is a drug (or not), as sanctioned by the Hogwarts School of Thermodynamics.

      1. Are you referring to Pharma’s Finest Minds who gave us Ligand Efficiency metrics and repurposed the solubility index as a property forecast index? Where would we be without enthalpy optimization? It is a privilege to share the planet with such intellects.

    5. Tran Script says:

      I think you could argue that there are even more tribes: on top of the people who code things, there are often people who design the models, with the former having more of a CS background and the latter more of a physics/math one.

    6. Ursa Major says:

      I agree with Tribesman’s three tribes. I was deeply interested in chemical reactions, but I was driven into a computational PhD by solvent optimization, product purification and all the tedious analysis on different machines to find out what you have.

      I knew how to use my programs and how to set up my inputs to get relevant results, but they were a dark gray (not quite black) box to me.

  3. Been at this a while says:

    Ideally, there are medchemists who know wet labs and medchemists who know dry labs. While they come from different backgrounds, they learn the same things in Pharma/Biotech and act accordingly. Those from the organic background who are (too?) interested in the nuts and bolts of chemical reactions end up in the process group, right? And those from the computational background who are (too?) interested in the nuts and bolts of software and models end up in the development groups (and companies).

    I am firmly in the last group. From where I sit, the most successful people are those who grow beyond their training and interests and embrace all aspects of projects and their success. Others, such as myself, like what they do and remain in their niche. Like all support people, they’re/we’re useful but not the direct reason projects move forward.

  4. AT says:

    As someone who’s transitioning from synthesis to the area of machine learning, I agree with the sentiment. I’ve noticed a lack of empathy from the computational chemistry community for their wet-lab counterparts. A lack of understanding from both sides, and potentially from managers, seems to lead to new hires fitting into the pre-defined notion that one is either an experimentalist or a computational chemist. Sitting somewhere in between to bridge the gap isn’t yet as appreciated as it ought to be. For early-stage researchers, especially those attempting to become bilingual, this can be discouraging. However, I think in the right environment this type of person could really engage the two sides, and orient resources toward utilizing the best of what computation can offer to solve problems that experimentalists face.

    This is especially true in machine learning, where one often wants to experiment with the latest algorithms or develop new ones. Bridging the gap can target development so that it solves a problem, rather than being the first study to apply yet another algorithm.

    It’s sad to see that the two sides all too often view themselves as different.

    1. HFM says:

      I agree with you – everyone loves a bilingual manager, but it’s rough out there for the individual contributors. People want to hire for a very defined role. There’s no job opening for “I am about to save you months of work, because you don’t know what you don’t know, but I do”.

      I’m a bioinformatician who has made a career of being a “translator”. I’m really quite useful, if I do say so myself – and it’s fun. But starting out is rough, and even when you’re established, you’re going to get side-eye for not knowing as much as the specialists do…in any of the half-dozen specialties that might be relevant.

    2. tlp says:

      So true. I remember in grad school we had a senior manager from Roche telling us how valuable it is to change research fields, learn new things and all that. But once you get to the job application process: “Sorry, you don’t seem to have enough of that valuable experience of flipping through phosphines in your Buchwald coupling setups.”

      1. philip alabi says:

        Lol. Flipping through phosphines.

  5. dearieme says:

    Diversity is our strength.

  6. Med(iocre) Chemist says:

    Honestly, I’m totally fine if the computational people skip out on the benchwork aspect (reactions and purifications). The aspect that seems to be severely lacking among those folks is just plain old physical organic chemistry. Protonation states, pKas, tautomers, and aqueous stability. It’s dispiriting to hear someone tell you they’ve designed the perfect molecule for your target, only to have to gently point out that their acyclic hemiaminal will fall apart instantly in water or that their perfectly positioned enol might prefer the keto form. At least that’s been my academic experience.

  7. anon says:

    Aren’t med-chemists pretty good at their jobs already? By good, I mean that if they were given a target, they could deliver a molecule with a reasonable binding constant and other properties (tox, etc.). Sure, this could make discovery chemists’ lives better and speed up the whole process. It seems like a whole lotta effort for something that doesn’t really make a huge difference in the big picture.

    1. tommysdad says:

      You’re kidding, right?

    2. p>0.05 says:

      Yes, it’s the biologists who can’t tell you whether or not the compound works or will work. Pretty hard to get where you’re going with a busted road map.

  8. Bunsen Honeydew says:

    Did anybody else read the title of the post and think of Frankie Goes to Hollywood?

    1. Who Dr. says:

      I did immediately, and then that other song of theirs, “Relax,” came to mind – but I am at work right now, so I thought that wasn’t appropriate.

  9. Anonymous says:

    I’ve had mixed experiences with computational colleagues. Some were chemists-become-programmers, and talking with them was informative and educational. Some were programmers with zero knowledge of chemistry. Some of my favorite suggested structures: (1) a Texas carbon (5 covalent bonds to C); (2) a bent (non-planar) benzene ring; (3) “There’s a binding site over here. Can you put a methyl group on the molecule over here?” [on a benzene ring, not at a carbon, but sticking out from the middle of a C=C double bond]. I proposed that we make bicyclo[4.1.0]heptanes, cycloheptatrienes, and even tropyliums. But then, of course, you lose the aromaticity AND ALL OF THE PREVIOUS WORK goes out the window. I really liked the tough challenge of “putting something methyl-like over here,” but the reality (the boss) wouldn’t consider ditching prior work and starting on a whole new compound class.

    I’ve conversed (read: argued) with colleagues about relying on only one functional for DFT (when almost every “good” group optimizes by testing many functionals and basis sets). In the Pipeline recently raised the issue of grid size as another important parameter in DFT calculations (“Get Ready to Recalculate,” 17 July 2019).

    I had colleagues who were smart enough to incorporate pH into their calculations (to obtain ratios of +/neutral/− species) but who didn’t know the actual pH of the extracellular and intracellular compartments at the actual (presumed) site of action, so they just calculated everything at pH 7.0.
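    For a monoprotic basic center, that charged/neutral ratio comes straight from the Henderson–Hasselbalch relation, and a quick sketch shows how much the answer moves between pH 7.0 and an extracellular pH of about 7.4 (the pKa of 8.0 here is purely illustrative):

```python
# Henderson-Hasselbalch for a monoprotic base, B + H+ <-> BH+:
# fraction protonated (charged) = 1 / (1 + 10**(pH - pKa)).
def fraction_protonated(pka: float, ph: float) -> float:
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

pka = 8.0  # illustrative basic center, e.g. a typical amine
for ph in (7.0, 7.4):
    f = fraction_protonated(pka, ph)
    print(f"pH {ph}: {100 * f:.0f}% charged")
# pH 7.0: 91% charged
# pH 7.4: 80% charged
```

    A ten-percentage-point swing in the charged fraction from a 0.4-unit pH difference is exactly the kind of thing that gets lost by defaulting everything to pH 7.0.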

    When it comes to data sets for AI/ML predictions, I often question the source(s) of the data and the reliability of the assay(s). To non-chemists and non-biologists, the data are just a bunch of numbers to crunch. Every life scientist knows to look at the assay itself and decide whether it is really giving useful information. (I have been involved with several projects based on flawed assays, and have heard some entertaining seminars on the same theme.)

    For me, I think the Two Tribes are co-workers who are, mostly, very defensive about their work and non-co-workers that I talk to at seminars and meetings who take more time to explain to me how things work, why they made their computational choices, and admit to the potential problems with any given approach.

  10. Christophe Verlinde says:

    A late comment on your post on “A Deep Learning Approach to Antibiotic Discovery”.
    The claim is that the compound “halicin,” a kinase inhibitor discovered by machine learning, had never been observed to block bacterial growth. This is plainly not true.

    Halicin was identified in an HTS in 2017 as bactericidal against M. tuberculosis
    by veterinary researchers from Cornell.
    I found this link in the PubChem report of the compound:
    Go to the table in section 8.1 and select the 3rd page of that table.

    Strangely, the Broad Institute and MIT authors of the Cell paper never mention this.
    Apparently their machine learning techniques are not capable of surveying the published results of others.

  11. Walther White says:

    Funny though, J&J has rejected my job applications several times, and I’m one of the people who is highly computer literate and well-versed in medicinal and organic chemistry. I do a lot of computer modeling, not just for drugs but also catalysts and polymers. Before I did my PhD in medchem, I was a physical organic chemist.

  12. anonymous says:

    Usually modern, recently trained computational chemists are know-it-all individuals who upset both the Med Chem and the Biophysics/Structural folks. They can always design the perfect compound, and they know everything about proteins and pockets, dynamics and crystal structures, etc. They are always willing to do anything in drug discovery apart from computational chemistry. In my experience, a waste of FTEs.

  13. Elias J Heisenberg says:

    What was not mentioned is that the hybrid med chemist/data scientist can end up as someone whom everyone admires at a “dinner party” (that’s usually coffee in the corridor), but there will be no real (scientific) position for them. Either it will be impossible, or extremely hard, to learn both of these non-overlapping beasts of subjects in critical depth, or it will be impossible or extremely hard to practice them. Do you think that switching from coding “in the zone” to immersing yourself in the SAR/biology/ADMET/synthetic chemistry of a project many times a day is something that can work for most people? And to be honest, there are hardly any such positions in pharma/biotech, at least at a starting-in-the-field level.

  14. Sulphonamide says:

    For the person able to “get by” in multiple diverse fields, possibly the only satisfactory career is leading something (e.g. a major role in a modestly-sized start-up company, your own academic group, expert advisor for a VC fund, etc.) where you have your own true experts to tap into / contract where absolutely required, but still need to speak multiple languages to keep them all satisfactorily herded and on-mission. Otherwise, yes, being the one-whatevered in the land of the whatever-minus-one doesn’t tend to be a great selling point in a job market full of specialists (who needs a medicinal chemist with expertise in writing their own patents – isn’t that what the company patent attorney is for?).

    1. Lambchops says:

      I think this is almost true, but there are a few non-leadership roles here and there that can suit those who like working in multiple disciplines and where they can enjoy the journey of learning something new and becoming the “go to” person for a range of different areas once they’ve learnt more about the area than many of their colleagues.

      Of course the areas that spring to mind (medical writing/communications, consultancy, analyst/technical work for government bodies with broad remits etc) come with their own caveats – work in the ‘wrong’ place and you’ll face being another pair of hands churning things out without thinking and often the work, while interesting, can be prescriptive and come with a lack of autonomy.

      1. HFM says:

        I spent a few years post-college as the “token techie” in a biology group. I did legitimately have the skills for both. I wasn’t lab-head material, obviously – but I knew what the biologists were doing, was capable of doing it myself, and also I knew how to wrangle the tech stuff to make their lives easier. I programmed robots, wrote data pipelines, set up servers, made graphs, etc. I made things happen that the pure biologists just wouldn’t have been able to do.

        Trouble is, it’s well nigh impossible to get hired, and even harder to get paid. To be blunt, I had pedigree for days. I had high-tier biology publications, and an engineering degree from MIT. The best I could do was a general lab-tech position – scrubbing test tubes for a hair above minimum wage. All the rest of it was “other duties as assigned”.

        Yes, I interviewed elsewhere. That was…entertaining. And not especially productive. One hiring manager flat-out told me I was unemployable, because I talked like a PhD but didn’t have one. I was displeased – my landlord wasn’t taking payment in units of “sounds too smart to be a lab tech” – but yeah, he was right.

        I was hoping to work for a few years and pay off my student loans. Instead, when I gave up and went to grad school, my stipend was a significant raise. I cannot in good conscience tell younger interdisciplinary workers to try their luck at the BS level. Do internships, sure, but you’re going to need a PhD if you ever want to be paid what you’re worth.

    2. loupgarous says:

      When I was a second-year biomedical engineering student, our instructor in the differential equations part of calculus was this AI guy who delighted in having us work problems at the blackboard while he heckled us. He also, every time we got into new material, said “I have to teach you this, but please, after you graduate, have your company mathematician solve problems like this.”

      Which told me more about his confidence in his teaching ability (and our college’s AI program) than about his opinion of us mere larval-stage engineers.

  15. transitional chemist says:

    It doesn’t matter how many languages we can speak if we don’t make the effort to listen to each other.

    1. Anonymous says:

      >This< is very true…

  16. Barry says:

    It is not necessary for molecular modelers to be conversant in, or interested in, many of the details of organic synthesis. Our overlap is mostly in the free energy of intermolecular interaction, and the kinetics thereof. That frees them to ignore most of org. syn., and frees med. chemists to ignore data architecture, choice of programming language, and a world of what I’d call “details” – although to a modeler they’re anything but.

  17. Dominic Ryan says:

    Each discipline brings a unique perspective to the team, but the dysfunction described above says more about ego and a lack of experience in discovery than it does about any one discipline. When there is a ‘translation gap,’ it’s because of a combination of distrust and a failure to address the team objective. The ultimate objective is usually clear enough: pick your required list of properties and assay metrics for your DC. That is not the same as the day-to-day objective, which I suggest is this:
    “How will the proposed changes inform our next steps?”
    The CADD person and the MedChem person both need to answer that question with an appreciation for the data threshold. That is actually a critical component of every discipline on the team and, I would argue, the basis for ‘translation.’ Data that fall ‘below’ that threshold are just noise. The threshold could be significant figures (IC50 data reported to 3 digits, perhaps), a cLogD where the pKa part is on a funky group, a sidechain orientation with zero occupancy, or a predicted ΔG of less than 2–10 kcal/mol. I could go on; these have been topics for years.
    Ideally the data are solid enough to enable clear SAR probing. Sometimes the lack of discriminatory data justifies probing the area for greater SAR knowledge. Sometimes it doesn’t. That is what the team needs to resolve. Anyone generating data should be able to characterize the limits of interpretability. That includes a medicinal chemist making choices based on years of experience (or, in some cases, years of assumptions), and the CADD person who ‘likes’ a docking mode for similar reasons. Coding expertise is a very handy skill because it can speed up many steps, but mostly because it helps clarify an interpretation of robustness and interpretability.
    No team has infinite resources. Risk assessment, trade-offs and compromises are at the heart of the game. It is remarkably easy to let ego serve as a basis for dismissing proposals that lack adequate ‘translation,’ but then the team fails, because those trade-offs get very difficult to manage.

  18. tommysdad says:

    Looking forward to Derek’s post on Vinod Khosla’s presentation at the MIT AI meeting this morning!

    1. Derek Lowe says:

      Unfortunately, I can’t make the second day of the conference! Tell us more. . .

      1. Tommysdad says:

        Human judgement & intuition will be removed from DD by 2030.
        Hypothesis-free discovery will replace hypothesis-based discovery.
        “Therapeutics people will not make as much progress as people fresh to the space”.

        The first statement being the most provocative, of course!!

        1. MoMo says:

          Now the statements are just getting ridiculous, and they must have been pumping nitrous oxide or hallucinogens into that room at MIT – human judgement and intuition won’t be needed for DD?

          All these AI people and the automated chemistry zombies need to get real and not live out some fantasies right out of Michael Crichton’s Prey.

          1. Tommysdad says:

            To be fair, only Khosla is going that far.

          2. loupgarous says:

            I was thinking Crichton’s Terminal Man, actually. Except that the guys who think computers will send us home to update our resumes aren’t killing women in Southern California, but delivering lectures at MIT, now.

          3. philip alabi says:

            Lol. I wonder how human intuition can be relegated. To achieve this, computers would have to store so much data and also be able to think.

        2. human says:

          it’s quite plausible if by human he meant himself

  19. anon says:

    Unfortunately, most of the biologists I have known are mathematically challenged.
    Programming in software engineering requires strict logic (a computer operates on 0s and 1s). The visual intuition and hand-waving fuzzy logic of most biologists cannot pass that test. The advent of the computer has prompted the world to make a striking distinction between those who can do math and those who cannot.
    Those who can are paid much more.

    1. DH says:

      On the other hand, those who live strictly in AI/ML land often have no clue about the degree to which messy, real-world biological systems such as cell cultures or the human body fail to conform to their mathematical abstractions.

      1. philip alabi says:

        True. It will take a very long time for wet-lab work (biology especially) to be subjected to the approximations and predictions of AI/ML. I’m just a grad student, so I may not know much.

  20. anon the II says:

    I think your insights on this are about right. However, you can be a member of both tribes. I’ve been mostly a synthetic organic chemist (medicinal chemist) most of my life, but I was the modeler (and a compound maker) at a biotech for a while until we got bought. So I believe that I am a lowly warrior in both tribes.

    Clark Still was a chief in both tribes.

  21. KwadGuy says:

    I’ve been at this for longer than most people.

    When I started, the experimentalists didn’t even know what modeling was.

    Then they learned and warily offered to collaborate.

    But after a few failed attempts to believe what the computational people suggested, they decided the only person who could be believed was someone who was a chemist first, and a modeler second.

    Modelers who could walk the walk and talk the talk with the chemists, therefore, got their voices heard.

    But, and this but is huge, the modelers who were also chemists were rarely hardcore modelers capable of programming and methodology development. They were usually chemists who had a change of heart and decided they liked modeling enough to do a postdoc where they ran some simulations.

    There’s nothing wrong with that, but these people didn’t come up with the new modeling ideas that would propel the field forward, and they weren’t able to navigate around the roadblocks that are inevitably encountered when you run off the shelf software.

    Things are a bit better now. The CCG/Schrodinger/Biovia-type packages are all a lot more mature than when I started, so you can do more without knowing how to program (or how, exactly, the software even does what it does). Arguably there are fewer methodological strides required for modeling to make an impact now, as well, so maybe the people with deep understanding and mad coding skills aren’t as necessary.

    Until you move into AI, which is, I guess, the jumping-off point for today’s column. And there you’re still in a somewhat wild West where it’s not enough to just push buttons and run someone else’s scripts. In particular, if you don’t understand the deep crevices of the code you are likely to create black box predictors that are complete junk. New fancy labels on old soap. Do you understand the fundamentals of data representation? Do you understand the limitations of existing computational approaches? Do you understand the fundamentals of data overfitting? Do you understand the fundamentals of statistical significance? No? Then you’re going to be at risk for making the same mistakes that modelers have made for decades in over-promising and under-delivering EVEN IF you understand chemistry. You can talk the talk, but you can’t walk the walk. Or maybe vice-versa. But the bottom line is: It’s all gonna bottom out again. And worse, because with AI everyone is happy and content with a black box labeled AI, whereas in earlier iterations of computational panaceas, at least people would challenge the black box and demand you explain why x is related to y.

    So, as is almost always the case, a deep understanding of the subject matter is more important than the envoy who can explain it to the other side. Your chemist/modeler hybrid phenotypes can make the sale, and they can be welcomed into the party behind doors closed to the computation-only enemy. But at the end of the day, it’s not enough.

  22. kjk says:

    In my opinion, the main difference between the tribes is mastery of tools vs knowing how to create/find new tools.

    In the lab you don’t have a lot of flexibility in which tools you use. Here are the machines; here is what they do. But you need to know them *well*, lest the protocol get messed up. There are so many details and nuances that you need a good “compass” to navigate everything, lest your experiments give meaningless results.

    For computers, it is much more forgiving to make mistakes, as you can just fix them and re-run the code (testing expensive computations on smaller scales first). However, there are *so many* tools available (“pip install this_wonderful_package,” etc.), and you constantly write new functions that are, in effect, tools that simplify tasks.

    These ways of thinking are, I believe, even more divergent than *physics vs. anthropology*. That is because both of those fields involve designing: experimental physics has more “build the shiny new machine” than most bio or biochem labs, and much more of physics is computational. Anthropology puts writing front and center (though writing is needed in every field), which isn’t as different from computer programming as it first appears. Wet-lab work, on the other hand, is much more about being there, performing experiments.

    I think it is a shame we don’t understand each other’s worlds. There are still many bugs in published computational code; the habit of precision from the wet-lab world would do wonders to improve reliability. On the other hand, it would be nice for the wet-lab people to use basic SQL queries to answer questions like “what are all the reactions you did with THF as a solvent?” rather than having to sift through inane amounts of cellulose or spreadsheets to get the answer. And both sides could learn from the humanities: write better documentation of what you are doing, so other people don’t get lost!
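    A minimal sketch of the kind of query meant here, using Python’s built-in sqlite3 (the table layout and the reaction names are invented purely for illustration, not any real ELN schema):

```python
import sqlite3

# Hypothetical minimal electronic-notebook table: one row per reaction.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE reactions (id INTEGER PRIMARY KEY, name TEXT, solvent TEXT)")
con.executemany(
    "INSERT INTO reactions (name, solvent) VALUES (?, ?)",
    [("Suzuki coupling", "THF"), ("amide coupling", "DMF"), ("Grignard addition", "THF")],
)

# The question from the comment: every reaction run with THF as solvent.
thf_reactions = [
    row[0]
    for row in con.execute("SELECT name FROM reactions WHERE solvent = ? ORDER BY name", ("THF",))
]
print(thf_reactions)  # ['Grignard addition', 'Suzuki coupling']
```

    One SELECT statement replaces an afternoon of leafing through notebooks, which is the commenter’s point.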

  23. compist says:

    I’m late to this thread, but how depressing. There’s a loud minority of computational scientists (now mostly jumping on the AI bandwagon) who don’t know enough actual science to realise when their predictions are total garbage. Most of us aren’t that bad… surely…?

  24. Bishop says:

    I have experienced this as well. I am a wet-lab bioorganic chemist… and I have established strong collaborations with computational and statistical-analysis labs (as well as with microbiologists and virologists). I think the research benefits from the different perspectives, mindsets and skillsets that the different groups bring to a project (especially bigger projects). Communication is key to synergy between the computational and wet-lab efforts. I am a strong believer that we need to train scientists who can operate at the interface between disciplines to help bridge this gap. I also think we need to increase the exposure of chemistry and other science majors at the undergraduate level to computer skills and computational methods and tools.
