points by KLK2019 7 days ago

In light of the meta context that this article reinforces the view that AI can replace researchers' jobs, I found this part of the article very true to how I use AI tools at work.

"Stokes stresses that while the prediction was intriguing, it was just that — a prediction. He would still have to conduct traditional MOA studies in the lab.

“Currently, we can’t just assume that these AI models are totally right, but the notion that it could be right took the guesswork out of our next steps,”...so his team, led in large part by McMaster graduate student Denise Catacutan, began investigating enterololin’s MOA, using MIT’s prediction as a starting point.

Within just a few months, it became clear that the AI was in fact right.

“We did all of our standard MOA workup to validate the prediction — to see if the experiments would back up the AI, and they did,” says Catacutan, a PhD candidate in the Stokes Lab. “Doing it this way shaved a year-and-a-half off of our normal timeline.”

Workaccount2 7 days ago

AI is becoming a difficult term to grapple with, especially because the public just assumes AI = ChatGPT = "ChatGPT discovered a new medicine."

In reality, a lot of research uses a variety of different general ML tools that have almost nothing to do with transformers, much less LLMs.
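To make "general ML" concrete: a lot of molecular-property work looks closer to the sketch below than to anything transformer-based. This is a minimal, hypothetical example using RDKit and scikit-learn; the molecules and labels are made up.

    # A classic cheminformatics pipeline: fingerprint features + random forest.
    # Nothing here involves transformers or LLMs.
    import numpy as np
    from rdkit import Chem, DataStructs
    from rdkit.Chem import AllChem
    from sklearn.ensemble import RandomForestClassifier

    # Toy, made-up data: SMILES strings with active/inactive labels.
    smiles = ["CCO", "c1ccccc1O", "CC(=O)Oc1ccccc1C(=O)O", "CCN"]
    labels = [0, 1, 1, 0]

    def featurize(smi, n_bits=2048):
        # Morgan (circular) fingerprint: a fixed, non-neural bit-vector feature.
        mol = Chem.MolFromSmiles(smi)
        fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits)
        arr = np.zeros((n_bits,), dtype=np.int8)
        DataStructs.ConvertToNumpyArray(fp, arr)
        return arr

    X = np.array([featurize(s) for s in smiles])
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

    # Score an unseen molecule; in practice you'd screen a large library this way.
    print(model.predict_proba([featurize("CCOc1ccccc1")])[0])

Deep-learning variants (e.g. graph neural networks over molecular structures) are also common in this space, and they too have little in common with chat-style LLMs.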

  • dvfjsdhgfv 7 days ago

    You know this water-muddying technique is used on purpose, don't you? Most of the time it's to attract money. At least in this case the aim is noble.

agentcoops 7 days ago

Same. I note in advance that I'm not sure whether you yourself are referring to the use of LLM tools in your research, or rather to the results of your own domain-specific application of deep learning, etc. -- here, I assume the former.

I feel like the common refrain of most LLM success stories over the past year is that these tools are of significantly greater help to specialists with "skin in the game", so to speak, than they are to complete amateurs. I think a lot of complaints about hallucinations reflect the experience of people who aren't working at the edge of a field, where you've read all the existing literature and there simply aren't other places to turn for further leads. At the frontier, moreover, the probability that there exists a paper or book covering the exact combination of topics that interests you is actually rather low; peer discussions are terrific, but everyone is time-starved.

Thus I find the synthetic ability of LLMs to tie together one's own field of focus with fields one has never thought about or is less familiar with to be of incomparable utility. On top of that, there's the ability to help formulate potential hypotheses and leads -- where of course you, the researcher, are ultimately going to carry out the investigation or, in the best case, attempt to replicate results. Conversely, when I'm uncertain of my own conclusions, I often find myself feeding the data I reasoned from to the best LLM I have access to, to see whether it independently arrives at the same place. I'm not concerned about hallucinations because I know there's nobody but me ultimately responsible for error -- and, at the fringe of knowledge, even a total fabrication can inspire a new (correct) approach to the matter at hand.
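To make that cross-check concrete, it amounts to something like the sketch below. This is a rough illustration assuming the OpenAI Python client; the model name, prompt, and data are placeholders, and any chat-capable model would do.

    # Independent cross-check: give the model only the raw data, not my
    # conclusion, and see whether it converges on the same interpretation.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    raw_data = "...the observations I reasoned from..."              # placeholder
    my_conclusion = "...my conclusion, withheld from the prompt..."  # placeholder

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; substitute whatever you have access to
        messages=[
            {"role": "system", "content": "You are a careful research assistant."},
            {"role": "user",
             "content": f"Given only this data, what do you conclude?\n\n{raw_data}"},
        ],
    )

    # Compare by hand: agreement is weak confirmation; disagreement is a
    # prompt to re-check my own reasoning. The responsibility stays with me.
    print(response.choices[0].message.content)
    print("Mine:", my_conclusion)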

I think if I had to succinctly describe my own experience, it would be that I never get stuck anymore for days, weeks, or months without even a hint of where to turn next.

Relatedly, there's an ancient Palantir blog post (2010!) that always stuck in my memory, about a chess tournament that allowed computers, grandmasters, amateurs, and any combination of the above to enter [0]. At that time, the winning combination turned out to be amateurs with the best workflow for interfacing with the machines. The moral of the story is probably still true (workflow is everything), but I think these new tools for the first time are really biased towards experts, i.e. the best workflow now is no longer "content neutral" but always emerges from a particular domain.

[0] https://web.archive.org/web/20120916051031/http://www.palant...

  • dvfjsdhgfv 7 days ago

    While I agree, one must be careful anyway. I'm ignorant in most fields, reasonably good at two, and quite good (but far from excellent) in one. So while there is a lot to learn in the former, when it comes to the latter, all LLMs, including SOTA models, give me a very high percentage of answers that are misleading, wrong, dangerously incomplete, only superficially correct, an amalgam of correct and incorrect bits, etc. Knowing this firsthand, repeatedly, across hundreds and hundreds of issues, I have built a deep, methodical distrust of LLM answers. In the end, I assume the answer is wrong, but I look for verifiable hints that could lead me in the right direction. This is my default working mode in my niche.