deepdarkforest 2 days ago

> It handles complex coding tasks with minimal prompting...

I find it interesting how marketers are trying to make minimal prompting a good thing, a direction to optimize. Even if I talk to a senior engineer, I try to be as specific as possible to avoid ambiguities etc. Pushing the models to just do what they think is best is a weird direction. There are so many subtle things/understandings of the architecture that are just in my head or a colleague's head. Meanwhile, I've found that a very good workflow is asking Claude Code to come back with clarifying questions and then a plan, before it just starts executing.

  • KronisLV 2 days ago

    > Meanwhile, I've found that a very good workflow is asking Claude Code to come back with clarifying questions and then a plan, before it just starts executing.

    RooCode supports various modes https://docs.roocode.com/basic-usage/using-modes

    For example, you can first use the Ask mode to explore the codebase and answer your questions, as well as have it ask you its own questions about what you want to do. Then you can switch over to the Code mode for the actual implementation; in other modes the model will itself ask you to switch, because it's not allowed to change files in Ask mode.

    I think that approach works pretty well, especially when you document what needs to be done in a separate Markdown file or something along those lines, which can then be referenced if you have to clear the context, e.g. for a new refactoring task on top of what's been implemented.

    > I find it interesting how marketers are trying to make minimal prompting a good thing, a direction to optimize.

    This seems like a good thing, though. You're still allowed to be as specific as you want to, but the baseline is a bit better.

  • ls-a 2 days ago

    This works well with managers. They think that if the task title on Jira is a one-liner, then it's that simple to implement.

    • consp 2 days ago

      Usually it's exactly the opposite: more often than not the scoping and requirements are missing, so you get a vague one-liner. Infinite hours it is ...

      • ls-a 2 days ago

        Then you have engineers who will agree with the manager on everything.

  • igleria 2 days ago

    > I find it interesting how marketers are trying to make minimal prompting a good thing

    They do that because, IMHO, the average person seems to prefer something that's easy rather than correct.

  • Ancapistani 2 days ago

    > Even if I talk to a senior engineer, I try to be as specific as possible to avoid ambiguities etc.

    Sure - but you're being specific about the acceptance criteria, not the technical implementation details, right?

    That's where the models I've been using are at the moment in terms of capability; they're like junior engineers. They know how to write good quality code. If I tell them exactly what to write, they can one-shot most tasks. Otherwise, there's a good chance the output will be spaghetti.

    > There are so many subtle things/understandings of the architecture that are just in my head or a colleague's head.

    My primary agentic code generation tool at the moment is OpenHands (app.all-hands.dev). Every time it makes an architectural decision I disagree with, I add a "microagent" (long-term context, analogous to CLAUDE.md or Devin's "Knowledge Base").

    If that new microagent works as expected, I incorporate it into either my global or organization-level configs.

    The result is that it gets more and more aligned with the way I prefer to do things over time.

  • c048 2 days ago

    This is why I don't listen at all to the fearmongers who say programmers will disappear. At most, our jobs will change slightly.

    There will always be people who describe a problem, and you'll always need people to figure out what's actually wrong.

    • croes 2 days ago

      The problem isn't the AI but the management that believes the PR. It doesn't matter whether AI can replace developers, only whether management thinks it can.

      • breakpointalpha 2 days ago

        That's only a problem in the short term.

        Watch the company fire 50% of the engineering team then hit a brick wall at 100mph.

    • ACCount36 2 days ago

      What makes you look at existing AI systems and then say "oh, this totally isn't capable of describing a problem or figuring out what's actually wrong"? Let alone "this wouldn't EVER be capable of that"?

      • benterix 2 days ago

        > What makes you look at existing AI systems and then say "oh, this totally isn't capable of describing a problem or figuring out what's actually wrong"?

        I wouldn't say they're completely incapable.

        * They can spot (and fix) low-hanging fruit instantly

        * They will also "fix" things that were left out there for a reason and break things completely

        * Even if the codebase fits entirely in their context window, as does the complete company knowledge base, including Slack conversations etc., the proposed solutions sometimes take a very strange turn, in spite of being correct 57.8% of the time

        • ACCount36 2 days ago

          That's about right. And this kind of performance wouldn't be concerning, if only AI performance didn't go up over time.

          Today's AI systems are the worst they'll ever be. If AI is already capable of doing something, you should expect it to become more capable of it in the future.

          • binary132 2 days ago

            Why is "the worst they'll ever be" such a popular meme with the AI inevitabilist crowd, and how do we make their brains able to work again?

            • insignificntape 2 days ago

              It's a self-evident truth. Even if AI hits a hard plateau today, at this very moment, and there's nothing we can do to make AI better, ever, it still holds true. It simply means we'll keep what we have right now. Any new model would be a step back and thus be discarded, so what we have today is both the worst and the best it will ever be. But barring that extremely unlikely scenario, we will see improvements (either incremental or abrupt) over the coming weeks/months/years, like GPT-3 to GPT-4 and Claude 3 to Claude 4. Any failed experiments will never see the light of day, and the successful ones will become Claude X or GPT X, etc.

            • ACCount36 2 days ago

              It's popular because it's true.

              By now, the main reason people expect AI progress to halt is cope. People say "AI progress is going to stop, any minute now, just you wait" because the alternative makes them very, very uncomfortable.

              • benterix 2 days ago

                Well, to use the processor analogy: with models we've reached the point where the clocks can't go much higher, so the industry switched to multiplying cores etc., but you can actually see the slope plateauing. There are wild developments for the general public, like the immediate availability of gpt-oss-120b that I'm running on my MBP right now, or Claude Code that can work for weeks doing various stuff and being right half of the time. That's all great, but we can all see that development of the SOTA models has slowed down, and what we are seeing are very nice and useful incremental improvements, not great breakthroughs like we had 3-4 years ago.

                (NB: I'm a very rational person, and based on my lifelong experience and on how many times life has surprised me both negatively and positively, I'd say the chance of a great breakthrough occurring short term is 50%; but that has nothing to do with, and cannot be extrapolated from, current developments, as this can actually go any way. We've already had multiple AI winters and I'm sure humanity will have dozens if not hundreds more.)

                • ACCount36 2 days ago

                  Plateauing? OpenAI's o1 is revolutionary, less than a year old, and already obsolete.

                  Are you disappointed that there's been no sudden breakthrough yielding an AI that casually beats any human at any task? That human thinking wasn't obsoleted overnight? That may yet happen, or it may not. But a "slow" churn of +10% performance upgrades results in the same outcome eventually.

                  There are only so many "+10% performance upgrades" left between ChatGPT and the peak of human capabilities, and the gap is ever diminishing.

                  • insignificntape 2 days ago

                    I think the reason people feel it's plateauing is that the new improvements are less evident to the average person. When we saw GPT-4, I think we all had that "holy shit" moment: I'm talking to a computer, it understands what I'm saying, and it responds eloquently. The Turing test, effectively. That's probably the most advanced benchmark humans can intuitively assess. Then there's abstract maths, which most people don't understand, or the fact that this entity that talks to me like an intelligent human being, when left to reason about something on its own, devolves into hallucinations over time. All real issues, but much less tangible, since we can't relate them to behaviours we observe or recognize as meaningful in humans. We've never met a human who can write a Snake game from memory in 20 seconds without errors, but can't think on their own for 5 minutes before breaking down into psychosis, which is effectively what GPT-4 was/is. After the release of GPT-4 we strayed well outside the realm of what we can intuitively measure or reason about without the use of artificial benchmarks.

              • disgruntledphd2 2 days ago

                > By now, the main reason people expect AI progress to halt is cope. People say "AI progress is going to stop, any minute now, just you wait" because the alternative makes them very, very uncomfortable.

                OK, so where is the new data going to come from? Fundamentally, LLMs are trained by predicting held-out (masked or next) tokens. This process (which doesn't require supervision, hence why it scaled) seems to be central to LLM improvement. And basically all of the AI companies have slurped up all of the text (and presumably all of the video) on the internet. Where does the next order-of-magnitude increase in data come from?
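
                To make that objective concrete, here's a toy sketch (mine, not any lab's actual training code; a single embedding layer stands in for the whole transformer stack):

                    import torch
                    import torch.nn.functional as F

                    vocab_size, seq_len, d_model = 1000, 16, 64
                    tokens = torch.randint(0, vocab_size, (1, seq_len))  # tokenized raw text, no human labels

                    embed = torch.nn.Embedding(vocab_size, d_model)      # stand-in for the transformer
                    lm_head = torch.nn.Linear(d_model, vocab_size)
                    logits = lm_head(embed(tokens))                      # (1, seq_len, vocab_size)

                    # the "supervision" is just the same text shifted by one position
                    loss = F.cross_entropy(logits[:, :-1].reshape(-1, vocab_size),
                                           tokens[:, 1:].reshape(-1))
                    loss.backward()

                The labels come for free from the text itself, which is exactly why it scaled with raw internet data, and why running out of that data is a real question.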

                More fundamentally, lots of the hype is about research/novel stuff, which seems to me very, very difficult to get from a model that's trained to produce plausible text. Like, how does one expect to see improvements in biology (for example) based on text input and output?

                Remember, these models don't appear to reason much like humans; they seem to do well where the training data is sufficient (interpolation) and badly where there isn't enough data (extrapolation).

                I'd love to understand how this is all supposed to change, but haven't really seen much useful evidence (i.e. papers and experiments) on this, just AI CEOs talking their book. Happy to be corrected if I'm wrong.

                • insignificntape 2 days ago

                  That's not true. And trust me, dude, it scares the living ** out of me, so I wish you were right. Next-token prediction is the AI equivalent of a baby flailing its arms around and learning basic concepts about the world around it. The AI learns to mimic human behavior and recognize patterns, but it doesn't learn how to leverage this behavior to achieve goals. The pre-training simply gives the AI a baseline understanding of the world. Everything that's going on now, getting it to think (i.e. talk to itself to solve more complex tasks), or getting it to do maths or coding, is simply us directing that inherent knowledge gathered during pre-training and teaching the AI how to use it.

                  Look at Claude Code. Unless they hacked into private GitHub/GitLab repos (which, honestly, I wouldn't put past these tech CEOs; see what Cloudflare recently found out about Perplexity as an example), they trained Claude 4 on approximately the same data as Claude 3. Yet for some reason its agentic coding skills are stupidly better than those of previous iterations.

                  Data no longer seems to be the bottleneck. Which is understandable. At the end of the day, data is really just a way to get the AI to make a prediction and run gradient descent on it. If you can generate, for example, a bunch of unit tests, you can let the AI freewheel its way into getting them to pass. A kid learns to catch a baseball not by seeing a million examples of people catching balls, but by testing their skills in the real world and gathering feedback on whether their attempt to catch the ball was successful. If an AI can try to achieve goals and assess whether its actions led to a successful or a failed attempt, who needs more data?
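
                  Purely as an illustration of that "checkable goal instead of more data" idea (a hypothetical helper, not anyone's actual pipeline, and it assumes pytest is installed):

                      import pathlib
                      import subprocess
                      import tempfile

                      def reward_from_tests(candidate_code: str, test_code: str) -> float:
                          """Score a model-written solution by whether it passes the given tests."""
                          with tempfile.TemporaryDirectory() as tmp:
                              pathlib.Path(tmp, "solution.py").write_text(candidate_code)
                              pathlib.Path(tmp, "test_solution.py").write_text(test_code)
                              result = subprocess.run(["python", "-m", "pytest", "-q"],
                                                      cwd=tmp, capture_output=True)
                              return 1.0 if result.returncode == 0 else 0.0

                      # An RL-style loop would sample many candidate solutions, score each with
                      # reward_from_tests(), and nudge the model toward the ones that pass;
                      # no new human-written text required, only a goal that can be checked.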

                • fragmede 2 days ago

                  Fundamentally, the bottleneck is data and compute. If we accept as a given that a) some LLM is bad at writing, e.g., Rust code because there's much less of it on the internet compared to, say, React code, but that b) the LLM is able to generate valid Rust code, c) the LLM is able to "tool use" the Rust compiler and a runtime to validate the Rust it generates and iterate until the code is valid, and finally d) use that generated Rust code to train on, then even barring any algorithmic improvements in training, the additional data should allow later versions of the LLM to be better at writing Rust code. If you don't hold a-d to be possible, then sure, maybe it's just AI CEOs talking their book.
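
                  A rough sketch of what (b)-(d) could look like as a pipeline (purely illustrative: it assumes a local rustc, and sample_rust_from_llm is a made-up stand-in for the real model call):

                      import pathlib
                      import subprocess
                      import tempfile

                      def compiles(rust_source: str) -> bool:
                          """Use the Rust compiler as a free validity check (type-check only)."""
                          with tempfile.TemporaryDirectory() as tmp:
                              pathlib.Path(tmp, "candidate.rs").write_text(rust_source)
                              result = subprocess.run(
                                  ["rustc", "--emit=metadata", "--crate-type=lib", "candidate.rs"],
                                  cwd=tmp, capture_output=True)
                              return result.returncode == 0

                      def sample_rust_from_llm(prompt: str) -> str:
                          # placeholder for a real generation call
                          return "pub fn add(a: i32, b: i32) -> i32 { a + b }"

                      synthetic_corpus = []
                      for prompt in ["write an integer add function"]:  # stand-in task prompts
                          candidate = sample_rust_from_llm(prompt)
                          if compiles(candidate):                       # keep only code the compiler accepts
                              synthetic_corpus.append(candidate)
                      # synthetic_corpus then gets folded into the next round of training data (point d)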

                  The other fundamental bottleneck is compute. Moore's law hasn't gone away, so if the LLM was GPT-3 and used one supercomputer's worth of compute for 3 months back in 2022, and the supercomputer used for training today is, say, three times more powerful (3x faster CPU and 3x the RAM), then training on a latest-generation supercomputer should lead to a more powerful LLM simply by virtue of scaling up, with no algorithmic changes. The exact nature of the improvement isn't easily calculable on the back of an envelope, but even with a layman's understanding of how these things work, that doesn't seem like an unreasonable assumption about how things will go, and not "AI CEOs talking their book". Simply running with a bigger context window should allow the LLM to be more useful.

                  Finally, why do you assume that, absent papers up on arXiv, there haven't been and won't be any algorithmic improvements to training and inference? We've already seen how allowing the LLM to take longer to process the input (e.g. "ultrathink" in Claude) allows for better results. It seems unlikely that all possible algorithmic improvements have already been discovered and implemented. It's far more reasonable to assume that OpenAI et al. aren't writing academic papers to share their discoveries with the world and are instead keeping those improvements private and proprietary, to try and gain an edge in a very competitive business. With literal billions of dollars on the line, would you spend your time writing a paper, or would you try to outcompete your competitors? If simply giving the LLM longer to process the input before user-facing output is returned already helps, what other algorithmic improvements are possible on the inference side, on a bigger supercomputer with more RAM available to it? DeepSeek seems to suggest there's a ton of optimization yet to be done.

                  Happy to hear opposing points of view, but I don't think any of the things I've theorized here are totally inconceivable. Of course there's a discussion to be had about diminishing returns, but we'd need a far deeper understanding of the state of the art on all three facets I raised in order to have an in-depth and practical discussion on the subject. (Which, to be clear, I'm open to hearing, though the comments section on HN is probably not the platform to gain said deeper understanding.)

                • ACCount36 2 days ago

                  We are nowhere near the best learning sample efficiency possible.

                  Unlocking better sample efficiency is algorithmically hard and computationally expensive (with known methods) - but if new high quality data becomes more expensive and compute becomes cheaper, expect that to come into play heavily.

                  "Produce plausible text" is by itself an "AGI complete" task. "Text" is an incredibly rich modality, and "plausible" requires capturing a lot of knowledge and reasoning. If an AI could complete this task to perfection, it would have to be an AGI by necessity.

                  We're nowhere near that "perfection" - but close enough for LLMs to adopt and apply many, many thinking patterns that were once exclusive to humans.

                  Certainly enough of them that sufficiently scaffolded and constrained LLMs can already explore solution spaces and find new solutions that eluded both previous generations of algorithms and humans, e.g. AlphaEvolve.

                  • dvfjsdhgfv 2 days ago

                    I don't think anybody argues there will be no progress. We just disagree about the shape of the curve.

          • IsTom 2 days ago

            We're somewhere on an S-curve and you can't really determine on which part by just looking at the past progress.

          • croes 2 days ago

            That's not how it works. There are already cases where the fix for one problem made a previously existing capability worse.

            • ACCount36 2 days ago

              That's exactly how it works. Every input of AI performance improves over time, and so do the outcomes.

              Can you damage existing capabilities by overly specializing an AI in something? Yes. Would you expect that damage to stick around forever? No.

              OpenAI damaged o3's truthfulness by frying it with too much careless RL. But Anthropic's Opus 4 proves that you can get similar task performance gains without sacrificing truthfulness. And then OpenAI comes back swinging with an algorithmic approach to train their AIs for better truthfulness specifically.

              • binary132 17 hours ago

                That must be why GPT-5 can't count the number of B's in "blueberry".

              • croes 2 days ago

                Depends on the input. More BS training data leads to worse answers and the good sources are nearly all already used.

                The next round of data is partially AI-generated, which leads to further deterioration.

      • satyrun 2 days ago

        At this point it is just straight denial.

        Like when a relationship is obviously over. Some people enjoy the ending fleeting moments while others delude themselves that they just have to get over the hump and things will go back to normal.

        I suspect a lot of the denial is from 30-something CRUD-app lottery winners: smart kids all through school who graduated into a ripping CRUD-app job market and then, if they didn't even feel the 2022 downturn, now see themselves as irreplaceable CRUD-app geniuses. Understandable, since the environment never signaled anything to the contrary until now.

        • sho_hn 2 days ago

          My psychological reaction to what's going on is somehow pretty different.

          I'm a systems/embedded/GUI dev with 25 years of C++ etc., and nearly every day I'm happy and grateful to be the last generation to get really proficient before AI tools made us all super dependent and lazy.

          Don't get me wrong, I'm sure people will find other ways to remain productive and stand out from each other (just a new normal), but I'm still glad all that mental exercise and experience can't be taken away from me.

          I'm more compelled to figure out how I can contribute to making sure younger colleagues learn all the right stuff and treat their brains with self-respect than I feel any need to "save my own ass" or have any fears about the job changing.

          • jononor 2 days ago

            You made me think of the role of mental effort/exercise. In parts of the Western world, we are already experiencing a large increase in dementia/Alzheimer's and related conditions. Most of it is because we are doing better against other killers like heart disease, and many cancers too. But it is said that mental activity is important to stave off degenerative diseases of the brain. Could widespread AI trigger a dementia epidemic? It would be 30 years out, but still...

      • croes 2 days ago

        Turn the question around: "oh, this totally is capable of describing a problem and figuring out what's actually wrong".

        Even a broken clock is right twice a day.

        The question is reliability.

        What worked today may not work tomorrow and vice versa.

  • nojito 2 days ago

    Because people are overprompting and creating crazy elaborate harnesses. My prompts are maybe 1-2 sentences.

    There is a definite skill gap between folks who are using these tools effectively and those who aren't.

Ratelman 2 days ago

Interesting/unfortunate/expected that GPT-5 isn't being touted as AGI or with some other outlandish claim; it's just improved reasoning etc. I know it's not the actual announcement and it's just a single page accidentally released, but it at least seems more grounded...? We'll have to wait and see what the actual announcement entails.

  • throwaway525753 2 days ago

    At this point it's pretty obvious that the easy scaling gains have been made already and AI labs are scrounging for tricks to milk out extra performance from their huge matrix product blobs:

    - Reasoning, which is just very long inference coupled with RL

    - Tool use aka an LLM with glue code to call programs based on its output

    - "Agents" aka LLMs with tools in a loop

    Those are pretty neat tricks, and not at all trivial to get actionable results from (from an engineering point of view), mind you. But the days of the qualitative intelligence leaps from GPT-2 to 3, or 3 to 4, are over. Sure, benchmarks do get saturated, but at incredible cost, and by forcing AI researchers to make up new "dimensions of scaling" as the ones they were previously banking on stalled. And meanwhile it's your basic next-token-prediction blob running it all, just with a few optimizing tricks.
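
    For the unfamiliar, the core "tools in a loop" pattern really is just this (a hypothetical sketch; call_llm stands in for whichever vendor API you'd actually use, and the engineering effort is in everything around it):

        import subprocess

        TOOLS = {
            "run_shell": lambda cmd: subprocess.run(
                cmd, shell=True, capture_output=True, text=True).stdout,
        }

        def call_llm(messages: list[dict]) -> dict:
            # placeholder for a real chat-completion call; a real model would return
            # either {"tool": name, "args": ...} or {"answer": text}
            return {"answer": "done"}

        def agent(task: str, max_steps: int = 10):
            messages = [{"role": "user", "content": task}]
            for _ in range(max_steps):                 # the "loop" part
                reply = call_llm(messages)
                if "tool" in reply:                    # model asked for a tool call
                    output = TOOLS[reply["tool"]](reply["args"])
                    messages.append({"role": "tool", "content": output})
                else:                                  # model gave a final answer
                    return reply["answer"]
            return None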

    My hunch is that there won't be a wondrous, life-altering AGI (poorly defined anyway), just consolidating existing gains (distillation, small language models, MoE, quality datasets, etc.) and finding new dimensions and sources of data (biological data and 'sense data' for robotics come to mind).

    • binary132 2 days ago

      This is the worst they'll ever be! It's not just going to be an ever-slower asymptotic improvement that never quite manages to reach escape velocity but keeps costing orders of magnitude more to research, train, and operate…

  • nialv7 2 days ago

    I wonder whether the markets will crash if GPT-5 flops, because it might be the model that cements the idea that, yes, we have hit a wall.

  • qsort 2 days ago

    I'm the first to call out ridiculous behavior by AI companies, but short of something massively below expectations, this can't be bad for OpenAI. GPT-5 is going to be positioned as a product for the general public first and foremost. Not everyone cares about coding benchmarks.

    • nialv7 2 days ago

      Llama 4 basically (arguably) destroyed Meta's LLM lab, and it wasn't even that bad of a model.

      • sho_hn 2 days ago

        Did it? Could you summarize the highlights? Morale, brain drain, ...?

    • benterix 2 days ago

      > massively below expectations

      Well, the problem is that the expectations are already massive, mostly thanks to sama's strategy of attracting VC.

  • ben_w 2 days ago

    OpenAI's announcements are generally a lot more grounded than the hype surrounding them and their stuff.

    e.g. if you look at Altman's blog post about "superintelligence in a few thousand days", what he actually wrote doesn't even disagree with LeCun (famously a naysayer) about the timeline.

    • naveen99 2 days ago

      A few thousand days is decades.

      • ben_w 2 days ago

        "A few thousand days" is a minimum of 5.5 years; LeCun has similar timelines.

  • Imustaskforhelp 2 days ago

    Yeah, I guess it wouldn't be that big but it will have a lot of hype around it.

    I doubt it can even beat Opus 4.1.

netown 2 days ago

> GPT-5 will have "enhanced agentic capabilities” and can handle “complex coding tasks with minimal prompting.”

This seems to be directly targeted at Anthropic/Claude; I wonder if it leads anywhere or if Claude keeps its mystical advantage (especially with new Claude models coming out this week as well).

> GPT-5 will have four model variants, according to GitHub...

I also find it interesting that the primary model is the logic-focused one (likely very long and deep reasoning), whereas the conversational mainstream model is now a variant. Seems like a fundamental shift in how they want these tools to be used, as opposed to today's primary 4o and the more logical GPT-4.1, o4-mini, and o3.

therodeoen 2 days ago

They are comparing it to Llama 4 and Cohere v2 in the image…

nxobject 2 days ago

Is the announcement implying that "mainline" GPT-5 is now a reasoning model?

> gpt-5: Designed for logic and multi-step tasks.

  • hobofan 2 days ago

    That doesn't necessarily imply reasoning; more likely it expresses that it's focused on better tool-calling capabilities.

  • blixt 2 days ago

    I think the promise back when all the separate reasoning / multimodal models were out was that GPT-5 would be the model to bring it all together (which mostly comes down to audio/video I think since o3/o4 do images really well).

  • om8 2 days ago

    Of course it is. GPT-5 is one of the most anticipated things in AI right now. To live up to the hype, it needs to be a reasoning model.

fnord77 2 days ago

Sama posted a picture of the Death Star yesterday.

t0lo 2 days ago

[flagged]

  • can16358p 2 days ago

    I did ask for this and I'm looking forward to GPT-5.

    I'm sure many other people will be excited too.

  • ronsor 2 days ago

    Well, I did ask for this. If you don't like it, you don't have to use it.

    • lelanthran 2 days ago

      > Well, I did ask for this. If you don't like it, you don't have to use it.

      If they ever decide to sell tokens at a profit, even the ones who did ask for it won't be able to use it :-)

      (Last I checked, OpenAI has a burn-rate of $5b/m on revenues of $0.5b/m. Maybe it's changed now)

      • wongarsu 2 days ago

        The flat-rate pricing models seem unsustainable. But API pricing at the big 3 AI providers is more expensive than per-token pricing for hosted versions of comparable open-source models (for models that have an equivalent), and those hosts have no reason to run at a loss. So in terms of the pure compute cost of running the LLM, API pricing should be pretty profitable. They just spend a lot of money advancing the state of the art, more than they can make back at the current market size, hence all the posturing about AI becoming ubiquitous and taking over all work.

    • lm28469 2 days ago

      > you don't have to use it.

      Except when reaching out to customer support

      Except when looking for a job

      Except when reading virtually anything new on the web

      Except when interacting with like 25% of social media posts

      &c.

    • t0lo 2 days ago

      It's easy to say that, except this is technology that affects all of us and sets us down a definite path without any consultation of the public.

      • newswasboring 2 days ago

        Maybe this will help the public realize they don't hate AI, they hate the current form of capitalism. It took something which reduces work and made it a bad thing.

        • lm28469 2 days ago

          > It took something which reduces work and made it a bad thing.

          It doesn't reduce work, it improves productivity, and virtually none of the productivity boost of the last 50 years has benefited the worker. So you end up working the same hours, producing more, and not being paid for the difference.

          https://economicsfromthetopdown.com/wp-content/uploads/2020/...

          • newswasboring 2 days ago

            Improved productivity is reduced work. We don't have to work the same hours. Labor doesn't always have to relent.

            • lm28469 2 days ago

              Idk man, I'm still working the same hours as 10 years ago, and my retirement age has gone up since then. If anything I'm working way more, certainly more than my parents and grandparents did.

              • newswasboring 2 days ago

                Yes, precisely what I am trying to say. This is not an outcome of technology, it's an outcome of how our socio-economic system is set up. The company owners could easily have given you the benefit of the technological improvement: made a three-day work week or a four-hour work day, hired more people, or reduced their own ambitions. Instead they chose to squeeze everything out of you.

                • lm28469 2 days ago

                  We agree then. Politicians in my country were saying automation would bring the three-day work week back in the early 1980s, haha.

        • MoonObserver 2 days ago

          This has been my thinking as well. It's quite anti-human that a technology that improves automation and productivity works against the common interest rather than for it.

  • konart 2 days ago

    > and you likely didn't either

    First of all, this is a weird statement that I see almost exclusively in the English-speaking community. I wonder why this pattern ("and neither do you") is so disgustingly popular, even in the press.

    Second: most things ever created by man (not to mention created by nature) were never asked for by anyone.

    I never asked for a computer. Or even a pencil. Not until one was introduced into my life.

  • exe34 2 days ago

    Do you make the same comment on forums about nappies for old people?