Ask HN: Relatively SoTA LLM Agents from Scratch?

3 points by solsane 3 days ago

As we know, OpenAI is not so open.

In 2023 I was playing with transformers and RNNs, and I understood them from top to bottom (e.g., I built my own Keras-style library and could whiteboard small nets). I can still throw things together in Keras or TF pretty quickly.

I got a job and never touched it again. Data and compute notwithstanding, how hard would it be to build a pet-project foundation model using the latest techniques? I've heard about MoE and the like, and I figure we're not just throwing a bunch of layers and dropout into Keras anymore. From what I've read, the core MoE idea is just a router picking the top-k expert MLPs per token; see the sketch below.
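
Here's my rough understanding as a sketch (PyTorch; all names and sizes are mine, and it skips the load-balancing losses and capacity limits real systems need):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MoELayer(nn.Module):
        # Minimal top-k mixture-of-experts feed-forward layer.
        # Real systems add auxiliary load-balancing losses and expert
        # capacity limits; this only shows the routing idea.
        def __init__(self, d_model=512, d_ff=2048, n_experts=8, k=2):
            super().__init__()
            self.experts = nn.ModuleList(
                nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                              nn.Linear(d_ff, d_model))
                for _ in range(n_experts))
            self.router = nn.Linear(d_model, n_experts)  # per-token expert scores
            self.k = k

        def forward(self, x):                      # x: (batch, seq, d_model)
            scores = self.router(x)                # (batch, seq, n_experts)
            weights, idx = scores.topk(self.k, dim=-1)
            weights = F.softmax(weights, dim=-1)   # renormalize over the chosen k
            out = torch.zeros_like(x)
            # Dense loop for clarity; real kernels dispatch tokens sparsely,
            # which is where the compute savings actually come from.
            for e, expert in enumerate(self.experts):
                w = (weights * (idx == e)).sum(-1, keepdim=True)  # 0 if unchosen
                out = out + w * expert(x)
            return out

Is that roughly right, and what else am I missing?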

bjourne 2 days ago

Read this article: https://dl.acm.org/doi/10.1145/3712285.3759827

The training algorithms are relatively simple (base training, fine-tuning, RL); what's hard is the scale, i.e., the engineering infrastructure. The authors recommend at least a 128-GPU cluster and many petabytes of training data.
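
To illustrate "relatively simple": the heart of base training is just shifted next-token prediction. A hypothetical sketch (model, optimizer, and data pipeline assumed; all the hard parts, like sharding the model across GPUs and streaming petabytes of tokens, are omitted):

    import torch.nn.functional as F

    # tokens: (batch, seq) integer ids; model and optimizer are assumed.
    def train_step(model, tokens, optimizer):
        logits = model(tokens[:, :-1])             # predict token t+1 from 0..t
        loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                               tokens[:, 1:].reshape(-1))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

Everything beyond this loop, the distributed training, checkpointing, and data engineering, is what the 128-GPU recommendation is really about.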