Here is the original study, published in Nature Microbiology.
https://www.nature.com/articles/s41564-025-02142-0
Wanted to share what I thought were the interesting parts, from the university press release.
"To date, AI has been leveraged as a tool for predicting which molecules might have therapeutic potential, but this study used it to describe what researchers call “mechanism of action” (MOA) — or how drugs attack disease.
MOA studies, he says, are essential for drug development. They help scientists confirm safety, optimize dosage, make modifications to improve efficacy, and sometimes even uncover entirely new drug targets. They also help regulators determine whether or not a given drug candidate is suitable for use in humans... A thorough MOA study can take up to two years and cost around $2 million; however, using AI, his group did enterololin’s in just six months and for just $60,000.
Indeed, after his lab’s discovery of the new antibiotic, Stokes connected with colleagues at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) to see if any of their emerging machine learning platforms could help fast-track his upcoming MOA studies.
In just 100 seconds, he was given a prediction: his new drug attacked a microscopic protein complex called LolCDE, which is essential to the survival of certain bacteria.
“A lot of AI use in drug discovery has been about searching chemical space, identifying new molecules that might be active,” says Regina Barzilay, a professor in MIT’s School of Engineering and the developer of DiffDock, the AI model that made the prediction. “What we’re showing here is that AI can also provide mechanistic explanations, which are critical for moving a molecule through the development pipeline.”
> Indeed, after his lab’s discovery of the new antibiotic, Stokes connected with colleagues at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) to see if any of their emerging machine learning platforms could help fast-track his upcoming MOA studies.
It must be so cool to work at a university. You can just walk across campus to meet with experts and learn about or apply the cutting edge of any given field to solve whatever problem you're interested in.
Can confirm it is often very cool to be able to do this.
Working in a university that has significant research facilities is awesome.
Even when you work in administration, learning opportunities abound and are easy to seize.
I’m too shy to just walk into a random lab and ask questions, but three times a year, my boss likes to organize a tour of a different research facility and I really appreciate that.
That's only if there are experts there. The average college is not really what you're thinking it is.
You can email them and they are usually quite happy to talk.
Of course not when they get a media storm like these people. But I regularly correspond with experts in adjacent fields who have put out interesting papers.
Can confirm that most of the time, when reproducing/implementing a paper or trying to extend it to another field, researchers are pretty OK (some very enthusiastic) with chatting over email about it. As long as you've actually read the paper(s) or read the code (if any), and there's no expectation of free work...
I sometimes get unpublished artefacts (matlab/python/fortran code, data samples) just by... asking nicely, showing interest. And I'm not even in academia or a lab.
In my sample of 1, a cold email to a researcher showed that they are enthusiastic when someone has read their paper and asks relevant questions.
I don't remember the paper's subject nor the researcher's name (more than 20 years ago), but I remember that she was an ornithologist, the subject was quite niche, and the response I received to my questions was longer than the article that prompted me to ask them.
The Reverse Gell-Mann Amnesia Effect: one vastly underestimates the work it took to reach a conclusion in a book they haven't finished reading. Then, without reading another page, they assume everything on the entire bookshelf is at the same shallow level as their own misapprehension of the book they didn't finish.
I rankly speculate: for the set of low-effort comments on HN, there are more Reverse Gell-Manns than there are Gell-Manns.
There are 1000s of state and community colleges. Not everyone is doing groundbreaking research. Some places, they just teach, which is fine. Additionally, I have no idea what you're trying to say.
There are experts at every single state and community college, with a PhD being the typical floor.
> Not everyone is doing groundbreaking research.
They don't need to be; they only need to be able to understand recent, groundbreaking papers in their fields, and to bounce ideas around with colleagues in different disciplines who walk up to them.
Insinuating that faculty staff are either on the bleeding edge or useless is a false dichotomy.
Is DiffDock a large language model?
Because that is what the general public believes AI means, and OpenAI says they are building thinking machines with it, and this headline says “predicted”.
It's a 3D equivariant graph neural network; a class of models that was hot before LLMs stole the limelight. https://en.wikipedia.org/wiki/Graph_neural_network
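For anyone curious what that means concretely, here is a minimal sketch of one message-passing step, the core operation in graph neural networks generally. This is purely illustrative: the function names, feature shapes, and the toy "molecule" are made up, and this is not DiffDock's actual (equivariant, much deeper) architecture.

```python
import numpy as np

def message_passing_step(node_feats, edges, weight):
    """One GNN layer: each node sums its neighbours' features (the
    'messages'), adds its own, then applies a shared linear map + ReLU."""
    agg = np.zeros_like(node_feats)
    for src, dst in edges:              # messages flow along each edge
        agg[dst] += node_feats[src]
    return np.maximum(0.0, (node_feats + agg) @ weight)

# Toy "molecule": 3 atoms in a chain, each with a 2-dim feature vector.
feats = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
edges = [(0, 1), (1, 0), (1, 2), (2, 1)]  # undirected chain 0-1-2
W = np.eye(2)                             # identity weights for clarity
out = message_passing_step(feats, edges, W)
print(out)  # middle atom aggregates both neighbours: row 1 becomes [2, 2]
```

Stacking several such steps lets information propagate across the whole graph, which is why these models suit molecules: atoms and bonds map directly onto nodes and edges.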
No it’s a diffusion model trained on proteins
What????
We have known that LolCDE is a vulnerability in E. coli since well before 2016, and have known inhibitors of the complex, globomycin among them, since 1978.
https://journals.asm.org/doi/full/10.1128/jb.00502-16
https://pubmed.ncbi.nlm.nih.gov/353012/
Is enterololin just another form of globomycin?
Is AI smart or are scientists just getting dumber?
AI just picked up these references and gave an answer. Scientists in this field should have read these papers instead of relying on AI.
From what I understand, they used a diffusion model (DiffDock) to predict the mechanism. These types of models are not LLMs that need to be trained on text.
There are probably 37,392 papers on this. So your "should" is probably just impossible for humans.
This is such a ridiculous argument. They could have read five papers, couldn’t they?
Picking up five needles is trivial. Picking up five needles in 50000 haystacks is difficult.
How do they know which five papers to read?
Yes.
Does anyone have the pre-print? I'm not affiliated with a university any more and the usual suspects don't upload papers overnight any more.
Their inboxes might be overflowing, but researchers are usually happy to email a copy if you don't have access elsewhere.
> A thorough MOA study can take up to two years and cost around $2 million; however, using AI, his group did enterololin’s in just six months and for just $60,000.
Beautiful, finally a use for AI/machine learning that is not coding autocomplete or image generation.
It would be very interesting to keep track of this area for the next 10 years, between AlphaFold for protein folding and this for predicting how a molecule will behave, and to see how costs come down and trials get fast-tracked.
Based on the paper for DiffDock (https://arxiv.org/abs/2210.01776) it looks like it was a great use case for a diffusion model.
> We thus frame molecular docking as a generative modeling problem—given a ligand and target protein structure, we learn a distribution over ligand poses.
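To make that framing concrete: instead of scoring a single pose, a diffusion-style model learns the score (the gradient of the log-density) over good poses and then samples from that distribution. Below is a hedged 1D toy where a hand-written Gaussian score stands in for the learned network; the target, step sizes, and sampler are all invented for illustration and are not DiffDock's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_POSE, SIGMA = 2.0, 0.5   # pretend the "correct" pose is at x = 2.0

def score(x):
    # ∇ log p(x) for N(TRUE_POSE, SIGMA²); a model like DiffDock would
    # learn this function from protein/ligand structures instead.
    return (TRUE_POSE - x) / SIGMA**2

def sample_pose(steps=2000, step_size=1e-3):
    """Langevin dynamics: start from noise, repeatedly follow the score
    uphill while injecting fresh noise, ending near high-density poses."""
    x = rng.normal(0.0, 5.0)                 # random initial "pose"
    for _ in range(steps):
        x = x + step_size * score(x) + np.sqrt(2 * step_size) * rng.normal()
    return x

poses = np.array([sample_pose() for _ in range(50)])
print(poses.mean())  # samples cluster around TRUE_POSE
```

The payoff of the generative framing is exactly what the quote says: you get a distribution of candidate poses (with confidence) rather than one hard guess, which is what makes a mechanistic prediction like "it binds LolCDE" possible to rank and test.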
I just hope work on these very valid use cases doesn’t get negatively impacted when the AI bubble inevitably bursts.