Yes, GPT-5 is more of an iteration than anything else, and to me this says more about OpenAI than the rest of the industry. However, I think the majority of the improvements over the past year have been difficult to quantify using benchmarks. Users often talk about how certain models "feel" smarter on their particular tasks, and we won't know if the same is true for GPT-5 until people use it for a while.
The "GPT-5 will show AGI" hype was always a ridiculously high bar for OpenAI, and I would argue that the quest for that elusive AGI threshold has been an unnecessary curse on machine learning and AI development in general. Who cares? Do we really want to replace humans? We should want better and more reliable tools (like Claude Code) to assist people, and maybe cover some of the stuff nobody wants to do. This desire for "AGI" is delivering less value and causing us to put focus on creative tasks that humans actually want to do, putting added stress on the job market.
The one really bad sign in the launch, at least to me, was that the developers were openly admitting that they now trust GPT-5 to develop their software MORE than themselves ("more often than not, we defer to what GPT-5 says"). Why would you be proud of this?
> Users often talk about how certain models "feel" smarter on their particular tasks, and we won't know if the same is true for GPT-5 until people use it for a while.
The idea that models “feel” smarter may be 100% human psychology. If you invest in a new product, admitting that it isn’t better than what you had is hard for humans. So, if users say a model “feels” smarter, we won’t know that it really is smarter.
Also, if users manage to improve the quality of responses after using it for a while, who says they couldn't have reached similar results if they had stuck with the old tool, tweaking their prompts to make that model perform better?
AGI doesn't really replace humans, it merely provides a unified model that can be hooked up to carry out any number of tasks. Fundamentally no different than how we already write bespoke firmware for every appliance, except instead of needing specialized code for each case, you can simply use the same program for everything. To that extent, software developers have always been trying to replace humans — so the answer from the HN crowd is a resounding yes!
> We should want better and more reliable tools
Which is what AGI enables. AGI isn't a sentience that rises up to destroy us. There may be some future where technology does that, but that's not what we call AGI. As before, it is no different than us writing bespoke software for every situation, except instead of needing a different program for every situation, you have one program that can be installed into a number of situations. Need a controller for your washing machine? Install the AGI software. Need a controller for your car's engine? Install the same AGI software!
It will replace the need to write a lot of new software, but I suppose that is ultimately okay. Technology replaced the loom operator, and while it may have been devastating to those who lost their loom operator jobs, is anyone today upset about not having to operate a loom? We found even more interesting work to do.
I appreciate the well-crafted response, but respectfully disagree with this sentiment, and I think it's a subtle point. Remember the no free lunch theorems: no general program will be the best at all tasks. Competent LLMs provide an excellent prior from which a compelling program for a particular task can be obtained by finetuning. But this is not what OpenAI, Google, and Anthropic (to a lesser extent) are interested in, as they don't really facilitate it. It's never been a priority.
They want to create a digital entity for the purpose of supremacy. Aside from DeepMind, these groups really don't care about how this tech can assist in problems that need solving, like drug discovery or climate prediction or discovery of new materials (e.g. batteries) or automation of hell jobs. They only care about code assistance to accelerate their own progress. I talk to their researchers at conferences and it frustrates me to no end. They want to show off how "human-like" their model is, how it resembles humans in creative writing and painting, how it beats humans on fun math and coding competitions that were designed for humans with a limited capacity to memorize, how it provides "better" medical opinions than a trained physician. That last use case is pushing governments to outlaw LLMs for medicine entirely.
A lab that claims to push toward AGI is not interested in assisting mankind toward a brighter future. They want to be the first for bragging rights, hype, VC funding, and control.
> no general program will be the best at all tasks.
Perhaps I wasn't entirely clear, but AGI isn't expected to be the best at all tasks. The bar is only as compared to a human, which also isn't the best at all tasks.
But you are right that nobody knows how to make them good at even some tasks. Hence why everyone is so concerned about LLMs writing code. After all, if you had "true" AGI, what would you need code for? It is well understood that AGI isn't going to happen. What many are banking on, however, is that AGI can be simulated if LLMs can pull off being good at one task (coding).
> They want to be the first for bragging rights, hype, VC funding, and control.
That's the motivation for trying to create AGI (at least pretending to), but not AGI itself.
Fair enough. I respect the objective of making a better coding assistant, and I use LLMs for this purpose all the time. I think this is why I would give Anthropic a pass on more things than some of the others, since they are clearly interested in that application, while the others seemed almost begrudgingly pushed into it. If others focused on this application early on, the agentic approach probably would have progressed faster.
But I think we do the discipline a disservice by referring to coding assistance as AGI. Also, having them be good enough that they can write their own code autonomously is a nightmare scenario to me, but I know many others don't feel that way.
The devs have been co-opted into marketing roles now, too - they have to say it's that good to keep the money coming in. IMO this reinforces the original post - this all feels like a scramble.
Whether it's indicative of patterns beyond OpenAI remains to be seen, but I don't expect much originality from tech execs.
With those people being business owners, investors, etc, 100% of the time.
The other 99% would like automation to make their lives easier. Who wouldn't want the promised tech utopia? Unfortunately, that's not happening so it's understandable that people are more concerned than joyous about AI.
I’m not sure the typical small business owner is thinking about the second and third order effects of reducing their labor costs from a Kantian categorical imperative perspective.
One, a lot of human jobs have been replaced by machines before.
Refrigerators eliminated the jobs of all those guys who used to deliver ice for your icebox every day. And so on. Those were real human beings and I'm sure many of them had families. There was real pain but it was ultimately probably a huge net positive. On a much larger scale, the microcomputer revolution of ~1975-present certainly does not seem to have reduced the number of human jobs.
Two, I am not the biggest fan of capitalism, but this is an area where it works pretty well as a self-balancing system because companies still need to compete with each other. If competing companies A and B each eliminate a bunch of human jobs thanks to AI, they're still locked in an existential struggle. They need to outcompete and outperform each other. They will shift that money to other expenditures: on AI tech, humans doing other jobs, capital expenditures, whatever. Jobs will be created or sustained in other companies providing those goods and services.
It's not foolproof, and it can certainly devastate particular regions, because the money may now flow out of those regions instead of being spent on local salaries.
There is a lot of change, and a lot of very very real pain to come, but if it is anything at all like past technology revolutions the net gains will also be real.
>One, a lot of human jobs have been replaced by machines before. Refrigerators eliminated the jobs of all those guys who used to deliver ice for your icebox every day. And so on. Those were real human beings and I'm sure many of them had families.
The fallacy here is in supposing that the mechanisms that kept those people from starving in the 1920s still exist and remain effective, that the "people replaced" have some other industry to move into. But we live in a post-industry nation... all that got offshored. There is nothing more to make, or build, or repair, not at any scale that would employ everyone meaningfully. And while I suppose some like you imagine that we'll all sit around day trading and speculating on bitcoin for a living, this means that places like China would have to manufacture everything and grow everything and that they'd be willing to do that so that they could have the bitcoin tablescraps you toss to them from time to time.
I've heard your argument all my life, starting all the way back in the late 1980s when the government was first talking about granting China the "most favored nation" status that would permit it. Maybe back then people could still believe it, but now it rings hollow as hell.
>Two, I am not the biggest fan of capitalism,
I am. I am a big fan, when it's used well everyone benefits. But you still have to police it a little to deter fraud, and we've all been the victims of the biggest fraud ever. And we can't even talk about it here, hurts too many feelings.
> They will shift that money to other expenditures: on AI tech, humans doing other jobs
Or, maybe instead of shifting to "humans doing other jobs", someone runs the numbers and discovers there are still 30 years' worth of profit (or even just 10 years) selling the product to Europe or wherever even if they don't hire any more humans, and since this exceeds their projected career duration, there's no need to look past that very distant horizon. And it doesn't matter that here or there you're even correct (that some companies might shift to "humans doing other jobs"), because I only have to be partially correct and you have to be entirely correct... if some companies do it as I hint, then those companies outcompete your companies, which go under, and it still results in massive unemployment.
The fixes for all of these things are simple, clear, and effective, but are politically untenable. Even if people could have been eventually persuaded that they were necessary, those people are now outvoted by many more people who have been brought in who have no loyalty to this country (and it really applies to many countries, not just the one I'm in) and would cockblock the fixes.
> How can one run a business by replacing humans, if no humans are left with enough income to buy your products?
If you control all of the wealth and resources and you have fully automated all of the production of everything you could ever want, then why would you need other humans to buy anything?
The focus now is not the model, but the Product - "here we improve the usability by removing the choice between models", "here is a better voice for TTS", "here is a nice interface for previewing HTML"
Only about 5 minutes of the whole presentation are dedicated to enterprise usage (the COO in an interview sort of indirectly confirms that they haven't figured it out yet).
And they are cutting costs already (opaque routing between models for non-API users is a clear sign of that). The term "AGI" is dropped, no more exponential scaling bullshit - just incremental changes over time, and only in a select few domains.
Actually, it is rather a welcome sign, and not concerning at all, that this technology is maturing and crystallizing at this point. We will charitably forget and forgive all the insane claims made by Sam Altman in previous years. He can also forget about cutting ties with Microsoft for that same reason.
The latency users experience while getting their answers is a big part of the LLM experience.
Well-done model routing is a tremendous leap forward for minimizing that latency and improving the user experience.
E.g. I love Gemini 2.5 Pro. But it's darn slow (sorry GDM!). I love the latency I'm getting from 4o. The solution? Just combine them under one prompt, with well-done model routing.
Is the GPT-5 router "good enough"? We'll see.
I think OpenAI is a smart company. And Sama is a tremendous leader. They're moving in the right direction.
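To make the routing idea concrete, here is a minimal sketch of a latency-oriented router in Python. The call_fast_model / call_strong_model functions and the keyword heuristic are hypothetical placeholders, not any vendor's actual API; a real router would swap in whichever fast and strong models it has access to.

    # Minimal latency-oriented model router (illustrative sketch only).
    # call_fast_model / call_strong_model are hypothetical stand-ins.
    REASONING_HINTS = ("prove", "step by step", "debug", "why does", "plan")

    def call_fast_model(prompt: str) -> str:
        # Placeholder: replace with a real low-latency model call.
        return f"[fast model] {prompt[:40]}..."

    def call_strong_model(prompt: str) -> str:
        # Placeholder: replace with a real slower, higher-quality model call.
        return f"[strong model] {prompt[:40]}..."

    def route(prompt: str) -> str:
        """Send short, simple prompts to the fast model; escalate the rest."""
        looks_hard = len(prompt) > 500 or any(h in prompt.lower() for h in REASONING_HINTS)
        return call_strong_model(prompt) if looks_hard else call_fast_model(prompt)

    print(route("What's the capital of France?"))          # fast path
    print(route("Debug why this recursive parser loops"))  # strong path

Whether GPT-5's built-in router beats an explicit heuristic like this is exactly the "good enough?" question above.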
Some of the problems with GPT-5 in ChatGPT could actually be due to the new model that routes requests to the actual GPT-5 models. There are four models in the GPT-5 family, and I could reproduce the faulty "blueberry" test result only with the "gpt-5-chat" (aka "gpt-5-main") model through the API. This model is there to answer (near) instantly, and it falls into the non-thinking category of LLMs. The "blueberry" test represents what such models are particularly bad at (and what OpenAI set out to solve with o1). The other thinking models in the family, including gpt-5-nano, solve this correctly.
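For reference, assuming the "blueberry" test means counting occurrences of a letter in the word (in the spirit of the earlier "how many r's in strawberry" question), the task is trivial when done character by character; models that operate on subword tokens rather than characters are the ones that stumble. A minimal check:

    # Counting letters is trivial at the character level; LLMs operate on
    # subword tokens rather than individual characters, which is one
    # plausible reason a non-thinking model flubs it.
    word = "blueberry"
    print(word.count("b"))                                   # 2
    print({ch: word.count(ch) for ch in sorted(set(word))})  # full tally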
The messaging is all over the place anyway. Not so long ago OAI was talking about faster iterations and warning people to not expect huge leaps. (A position that makes sense imo). Yet people talk about AGI in a serious manner?
> We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents "join the workforce" and materially change the output of companies. We continue to believe that iteratively putting great tools in the hands of people leads to great, broadly-distributed outcomes.
"Reflections" by Sam Altman, January 2025 - https://blog.samaltman.com/reflections
See the sibling comment from AlexandrB. Altman and tons of other hype men in tech do this thing where they make outrageous promises, then retcon as “just jokes” whichever ones don’t come true, so that they can never be disproven. It’s a swindle made all the more irritating by the enablers like you who go “why did you take the joke seriously?” to get cred on the internet while helping the scam continue.
Or to put it another way, do you think Altman denounced all the hype (and subsequent investment dollars) he got because of the “AGI achieved internally” post? Did he say to anyone “hey, that was a meme post, don’t take it seriously”? Or did he milk it for all it was worth before only later quietly climbing down from that when it was no longer paying dividends. Again, duplicitous and disingenuous behavior.
I find this pattern in tech hype really frustrating. Someone in a leadership role in a major tech company/VC promises something outrageous. Time passes and the promise never materializes. People then retcon the idea that "everybody knew that wasn't going to happen". Well either "everybody" doesn't include Elon Musk[1], Sam Altman, or Marc Andreessen[2] or these people are liars. No one seems to be held to their track record of being right or wrong, instead people just latch on to the next outrageous promise as if the previous one was fulfilled.
[1] https://electrek.co/2025/03/18/elon-musk-biggest-lie-tesla-v...
[2] https://dailyhodl.com/2022/06/01/billionaire-and-tech-pionee...
There's also this deluded-CEO/grounded-CEO routine between Altman and Nadella. Altman will be quoted saying something outrageous in social/mainstream media, which Nadella can then later tone down, add nuance to, and be realistic about in some fireside chat or podcast to address the minority who understand, will listen, and would criticize.
It does look like "A lie will make it halfway around the world while the truth is busy lacing its boots" is a major part of a communication strategy.
> Elon Musk[1], Sam Altman, or Marc Andreessen[2] ... these people are liars.
Bingo. These people are salesmen & marketers. Lying to sell a product (including gathering funding & pumping company stock) is literally the job description. If they weren't good at it, they wouldn't hold the positions they do.
Is it a given that they need to unrealistically hype everything? To me it just seems like he's killing any and all credibility he had.
Probably a bad long term strategy?
I mean, other non-AI companies use hype too, sure... but it's maybe a little sprinkle of 1.1x on top aimed to highlight their best features. Here we're going full on 100x of reality.
> To me it just seems like he's killing any and all credibility he had. Probably a bad long term strategy?
He's already got more money than God and there's an infinite supply of suckers who think wealth and skill/intelligence are correlated for him to keep feeding off of (see also Goop and Tesla, incredibly successful companies also run by wealthy hucksters). Sam Altman will be just fine.
It's not a given but Altman is a public figure for a reason while I don't know the names of any of the other CEOs off the top of my head. He talks a lot and when he talks, it's about AI. Even talking about the dangers of AI is hype because it implies it's an important topic to discuss now because it's imminent.
For sure, but not for that reason; there is currently no one with a plan for how to go from the current models (LLMs) to a better model. It's all 'more focused training', 'better prompting', 'agentic', 'smarter lookups', 'better tooling'. But fundamentally, this model is simply shagged out, and it'll get a little better with the above, but the jump everyone is waiting for cannot happen without a new model invention.
No one you know of but I'm sure people are thinking about it.
Which reminds me that one of the most obvious failings of LLMs is they never say "I've been thinking about that and have a new idea." The thinking leaning thing needs work.
I asked it to make a drawing of the US with every state numbered from biggest to smallest with 1 being the largest.
Maine was #89 (That is not a typo.) and Oregon was #1.
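For the record, a quick sanity check against approximate state areas (rough figures, ranking only the handful of states mentioned in this thread relative to each other) shows how far off that output is: Alaska is the largest state, Oregon is nowhere near #1, and with only 50 states a rank of #89 is impossible.

    # Approximate total areas in square miles (rounded, illustration only).
    areas = {
        "Alaska": 665_000,
        "Texas": 269_000,
        "Oregon": 98_000,
        "Washington": 71_000,
        "Florida": 66_000,
        "Maine": 35_000,
    }
    ranked = sorted(areas.items(), key=lambda kv: -kv[1])
    for rank, (state, area) in enumerate(ranked, start=1):
        print(f"{rank}. {state}: ~{area:,} sq mi")
    # Among these six: Alaska on top, Washington well below Texas,
    # Florida well below Alaska, Maine last.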
OpenAI as a company simply cannot exist without a constant influx of investor money. Burning money on every request is not a viable business model. Companies built on OpenAI/Anthropic are similarly deeply unprofitable businesses.
OpenAI needs to convert to a for-profit to get any more of the funding that SoftBank promised (and it's also unclear how SoftBank itself would raise that) or to get significant cash from anyone else. Microsoft can block this and probably will.
It all reminds me of that Paddy's Dollars bit from It's Always Sunny.
"We have no money and no inventory... there's still something we can do... that's still a business somehow..."
I tried a fairly basic Pokemon Go question - which Pokemon are resistant to Ghost attacks - and it got it wrong: it said Normal types are immune, which is wrong. ASI is not quite with us yet.
Burning money worked for Uber. As long as they can IPO or get cheap debt from government friends, any valuation can work. Uber lost double-digit billions as an app with no edge or anything. It never made sense beyond $1 billion.
Uber's whole schtick is taking what was an already profitable business model (taxis) and offering lower overhead/easier access.
That money they burned was on customer acquisition, building infrastructure, etc. The unit economics of paying to be driven to the airport or Benihanas was always net positive.
They weren't losing money on every customer. OpenAI is, even on paying ones; there just isn't a business model here.
I wouldn't say they had no edge. They had a huge advantage over traditional taxi companies. You can argue that a local Uber-like app could be easily implemented; that's where the investors came in, to flood the markets and ensure others couldn't compete easily.
The situation is in no way similar to OpenAI's. OpenAI truly has no edge over Anthropic and others. AGI is not magically emerging from LLMs, and they don't seem to have an alternative (nobody does, but they promised it and got the big bucks, so now it's their problem).
Uber raised something like $50 billion in debt and equity before it went public, but after 15 years of losing money, it has finally started making profits… just in time for Waymo to arrive and eat its lunch. Of course, Uber could themselves get into the self-driving game, but their entire profit story to investors relies on pushing costs away from them onto drivers; it vanishes entirely if they have to maintain their own fleet.
Uber is profitable on a cash basis, but if you’re a public investor, you got fleeced by the early-stage venture money and debtholders. I don’t think it will ever pay back what it raised.
The way they do this in Europe is that an entrepreneur buys a fleet of cars and then gets visas for a number of folks from Bangladesh and other areas who do not own any of these cars and drive them in turns (they also sleep like 10 in one apartment, but that's a different story). The owner gets the money and distributes it to the actual drivers. Uber says they are innocent as they are not in an employer-employee relationship with any of these drivers.
This model worked for the fleet owners so far because the Saudis gave enough money so that both (1) the customers were happy, and (2) the cash from the ride could be divided between owners and drivers in a way the drivers complained about only to a certain extent.
But the last two years (the only profitable ones) are much worse, both for the drivers and the fleet owners. There is still sunk cost in there, but once the cars get old enough they will need to think hard about whether to buy/lease the next batches.
The $1.5M bonus to tech staff announced prior to the GPT-5 release makes even more sense now. They knew it would be difficult to manage public expectations and wanted to counter the short-term (in the best case) drop in morale in the company.
I think that $1.5M bonus is likely stock at a $500B valuation. There are also rumors that they want to allow outsiders to buy stock at a $500B valuation.
"...while Uber has achieved profitability, some analyses suggest that a substantial portion of these profits may come from an increased revenue share at the expense of drivers' earnings".
So let's imagine it's 2040 and OpenAI is finally profitable. Now, Uber did this by increasing prices, firing some staff, and paying smaller wages to drivers. And all this while having a near-monopoly in certain areas. What realistic measures would OpenAI need to take in order to compete with, say, Google? Because I just wish them good luck with that.
I had it create a map "in the style of a history textbook." It came up with something that looks worse than I imagined: https://pasteboard.co/3zGy5ti4hHuT.jpg
Isn't it old news that the full for-profit conversion is not happening, and that they renegotiated the terms so that the currently proposed PBC works as a solution because it meets the economic terms?
I have no idea if OpenAI succeeds or not, but I find arguments like yours difficult to understand. Most businesses are not using these systems to draw a map. Maybe the release of 5 is lackluster, but that does not change the fact that there is some value in these tools today, and, ignoring R&D (which is definitely a huge cost), they run at a profit.
> ignoring R&D (which is definitely a huge cost) they run at a profit.
how can you say such a hand wavy comment with a straight face? you can't just ignore a huge cost for a company and suddenly they are profitable. that's Enron level moronic. without constant R&D, the company gets beat by competitors that continue R&D. the product is not "good enough" to not continue improving.
if i ignored my major costs in my finances, i could retire, but i can't go to the grocery store and walk out with a basket of food while telling them that i'm ignoring this major cost in my life.
I don't know why so many take these discussions to such a high emotional level. Has the ability to constructively discuss a topic been lost? I know you usually respond with high emotion and brashness, but at least try to be constructive.
It’s a valid point and that’s the biggest question when it comes to the medium to long term business plan. Those R&D costs are an important part of it. My point is that since runtime is profitable there is a lot more runway to figure out how to tweak R&D spend in such a way that it becomes a viable business for the long term.
There are a lot of questions that they need to answer to get to pure profitability, but they are also the fastest-growing company by MAU numbers in history, with a product that you can see has a chance at becoming profitable from all sides. They may fail or become sidelined, but the hyperbole and lack of critical discussion here is disappointing.
I like how when your illogical notion is challenged, you respond by saying the challenger is being emotional.
There is no point in saying that an AI company can just ignore its R&D. There is no company without the R&D. Because of that, any conversation pretending it doesn't exist is pointless. There is no constructive conversation with that as the premise.
You’re arguing against a point I’m not making. I’m not saying R&D isn’t necessary or that it “doesn’t exist”, I’m saying that operationally, the service itself runs at a profit before accounting for R&D. That matters because it means they have a viable revenue engine that could, in theory, fund a sustainable R&D budget if they adjusted spend.
That’s a very different conversation than “pretend R&D doesn’t matter.” No one is suggesting they stop building; the question is whether they can align the burn rate with the revenue base over time. Companies make those tradeoffs constantly when maturing from heavy investment to profitability.
And yes, you are being emotional, not because I disagree with you, but because your language is inflammatory and brutish. It’s hard to have a constructive discussion when every response is dialed to 11. Misframing the premise as “ignoring a huge cost” isn’t debate, it’s a straw man, and it sidesteps the real question of whether the underlying business model works once R&D is right-sized.
Would love to have a real critical discussion on why you disagree but please leave the bad language out of it. It’s boring and I know it’s your typical route in these types of discussions but at least have a valid retort.
Correct, past R&D spend is already sunk and can’t be undone. But that’s why it’s useful to separate sunk costs from future operating costs when evaluating viability.
The relevant question is whether the ongoing revenue from the existing product is strong enough to support a sustainable level of R&D going forward. If your runtime margins are healthy, you have options: scale back R&D burn, focus on incremental improvements, or use the profits to fund more ambitious projects.
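As a toy illustration of that framing, with entirely made-up numbers that do not reflect OpenAI's actual financials:

    # Hypothetical annual figures in billions of dollars, purely illustrative.
    revenue = 10.0
    serving_cost = 7.0   # inference/hosting cost of running the product
    rnd_spend = 8.0      # training runs, research headcount, etc.

    operating_profit = revenue - serving_cost   # "runtime is profitable"
    net_profit = operating_profit - rnd_spend   # the overall P&L is not

    print(f"Operating profit: {operating_profit:+.1f}B")  # +3.0B
    print(f"Net after R&D:    {net_profit:+.1f}B")        # -5.0B
    # The disagreement above is whether R&D can eventually be sized to fit
    # inside that operating margin, or must keep growing faster than revenue.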
The entire US stock market is propped up by big tech companies spending massively on Data Centers and GPUs for AI. OpenAI is valued higher than Netflix.
A company that can pull in single digit billions in revenue for hundreds of billions in expenses just doesn't make sense.
> Most businesses are not using these systems t̶o̶ ̶d̶r̶a̶w̶ ̶a̶ ̶m̶a̶p̶.̶
FTFY
And no - while it might be obvious from the outside that it probably won't happen, the continued existence of the business is still predicated on conversion to a for-profit. They don't just need the amount of money they've already "raised", they need to keep getting more money forever.
FTFY? Cute, but you’re arguing against a strawman. My point wasn’t that companies are using GPT to draw maps, it’s that dismissing the tech based on one goofy output ignores the far more common, revenue-generating use cases already in production.
As for “single-digit billions in revenue vs. hundreds of billions in expenses,” that’s just bad math. You’re conflating the total AI capex from hyperscalers with OpenAI’s own P&L. Yes, training is capital-intensive, but the marginal cost to serve (especially at scale) is much lower, and plenty of deployments are already profitable on an operating basis when you strip out R&D burn.
The funding structure question is fair, and the for-profit conversion path matters, but pretending the whole business is propped up solely by infinite investor charity is just wrong.
Microsoft AI Revenue In 2025: $13 billion, with $10 billion from OpenAI, sold "at a heavily discounted rate that essentially only covers costs for operating the servers."
Am I right to say that "AGI" was just...cancelled again?
Did we just get scammed right in front of our eyes with an overhyped release and what is now an underwhelming model? The point was that GPT-5 was supposed to be trustworthy enough for serious use cases, and it can't even count or reason about letters.
So much for the "AGI has been achieved internally" nonsense with the VCs and paid shills on X/Twitter bullshitting about the model before the release.
Not only "AGI" is cancelled but they also sort of admitted that so-called "scaling" "laws" don't work anymore. Scaling inference kinda still works, but obviously is bounded by context size and haystack-and-needle diminishing accuracy. So the promise of even steadily moving towards AGI is dubious at best.
> They admitted that they were, and I am not lying about this, paywalling chat colors. […] This is a feature that a company adds when they are out of ideas
This observation + sherlocking Cursor suggests that perhaps sherlocking is the ideation strategy. Curious to see if they're subsidizing token costs specifically to farm and sherlock ideas.
Yeah, I agree with the OP here. After all this time, being able to change the chat colors at this point has some real we-reached-the-bottom-of-the-backlog energy, and they're just now implementing the ideas that the PMs didn't consider important enough before.
It hardly feels like a next generation release.
As a related anecdote (not saying that this is industry standard, just pointing out my own experience), the startup I work for launched their app four years ago, and, for all four of those years, we've had "Implement a Dark Mode design" sitting at the bottom of our own backlog. Higher priority feature requests are always pre-empting it.
The core product failure here is overhyping incremental improvement, eroding trust.
PMs operating at this level ought to be bringing in some low-cost UX improvements alongside major features. That simply isn't a sign that they've run out of backlog. (That said, it is rather pathetic to paywall this.)
A moment's consideration ought to show that OpenAI has plenty of significant work that they can be doing, even if the core model never gets any better than this.
Issues like this are why I don't use ai agents for code. I don't want to sift through the bullshit confidently spewed out by the model.
It doesn't understand anything. It can't possibly "understand my codebase". It can only predict tokens, and it can only be useful if the pattern has been seen before. Even then, it will produce buggy replicas, which I've pointed out during demos. I disabled the AI helpers in my IDEs because the slop they produce is not high quality code, often wrong, often misses what I wanted to achieve, often subtly buggy. I don't have the patience to deal with that, and I don't want to waste the time on it.
Time is another aspect of this conversation, with people claiming time wins, but the data not backing it up, possibly due to a number of factors intrinsic to our squishy evolved brains. If you're interested, go find gurwinder's article on social media and time - I think the same forces are at work in the ai-faithful.
There is a threshold that every developer needs for them to make it be worth their time. For me that has already been met. Your comment makes me think that you don't believe it will start producing higher quality code than you anytime soon.
I think most of us are in the camp that even though we don't need AI right now we believe we will not be valuable in the near future without being highly proficient with the tooling.
The whole event was shit, but we're all past the point where we can just say that, because the technology is now so entrenched that it's become unavoidable, so everything now has to jump through hoops to justify its existence and its greatness.
> Why would you be proud of this?
Isn't it obvious? They have a huge vested interest in getting people to believe that it's very useful, capable, etc.
> Do we really want to replace humans?
Unfortunately for a substantial number of people the answer to this question seems to be a resounding "yes"
>With those people being business owners, investors, etc, 100% of the time.
How can one run a business by replacing humans, if no humans are left with enough income to buy your products?
I suspect that the desire to "replace humans" runs far deeper than just shortsighted business wants.
> ...when they need to find any way possible to squeeze paid subscribers out of their (money losing) free user base.
Also note that they're losing money on their paid subscribers.
Given the difference between GPT-3 and GPT-4, a fair numbering for "GPT-5" is probably "GPT-4.2".
So can we please stop talking about AGI until counting letters in a word isn't hard?
"Reflections" by Sam Altman, January 2025 - https://blog.samaltman.com/reflections
"AGI achieved internally" -- Sam Altman
Yes a quote from a meme post on Reddit. No doubt he has been overselling the future for a while but why use a quote with the wrong context?
Again, out of all the quotes to call out the weakest one was picked.
So you admit he was lying then. If he is a known liar, then we know what to expect, I guess.
I am not admitting anything as I don’t have any skin in the game. It’s just a weak hyperbolic quote compared to the PR narrative.
I don't think anyone serious is talking about AGI from LLMs, no.
Only if you consider Sam Altman not serious: https://www.tomsguide.com/ai/chatgpt/sam-altman-claims-agi-i...
Being a good salesperson does not require lying. It's not the job that's doing the lying, but the person lying to you.
And yet Altman talks about AGI being imminent, but his company has only ever produced LLMs.
Now why would the CEO of an AI company say something like that!?
I do know that AGI has a different meaning internally to what we think it means:
https://www.fanaticalfuturist.com/2025/01/microsoft-and-open...
Massive grain of salt though.
Maybe the brain drain was real? We'll find out from Gemini 3, I guess.
I know multiple people who are working on this: there is just no progress yet. It's more of the same, and that won't work.
My point is: maybe we can't prove that until DeepMind gives us their best shot.
But isn't DeepMind, in these infinite-money AI times, already giving it their best shot?
I find this news very exciting to be perfectly honest. It’s finally time to build on the tech we already have.
We're hitting the ceiling of these algorithms already, I guess. Someone will make a new and better one. That's how tech works. No worries.
Gemini 3.0 is gonna cook
Yup and look at all that IP they just bought…
Today I learned that Washington is bigger than Texas, and Alaska is smaller than Florida! https://chatgpt.com/share/6896216c-c57c-8012-8241-604b255191...
PhDs need to catch up!
> burning money worked for Uber.
TBD. Some people did well while Uber gave money away, but Uber is not net profitable over its lifetime.
Agree.
> Uber could themselves get into the self-driving game
They tried. Made a little progress, killed someone, and gave up (rightfully so).
Uber had limited, underpowered competition, so they could win the starvation game.
OpenAI competes with Google, which can drop $50B/year into AI hype for a very long time.
The $1.5M bonus to technical staff announced prior to the GPT-5 release makes even more sense now. They knew it would be difficult to manage public expectations and wanted to counter the short-term (in the best case) drop in morale inside the company.
I think that $1.5M bonus is likely stock at the $500B valuation. There are also rumors they want to allow outsiders to buy stock at that $500B valuation.
"...while Uber has achieved profitability, some analyses suggest that a substantial portion of these profits may come from an increased revenue share at the expense of drivers' earnings".
So let's imagine it's 2040 and OpenAI is finally profitable. Uber got there by increasing prices, cutting staff, and paying drivers smaller wages, all while holding a near-monopoly in certain areas. What realistic measures could OpenAI take to do the same while competing with, say, Google? I just wish them good luck with that.
I had it create a map "in the style of a history textbook." It came up with something that looks worse than I imagined: https://pasteboard.co/3zGy5ti4hHuT.jpg
Isn't it old news that the full for-profit conversion is not happening, and that they renegotiated the terms so the currently proposed PBC works as a solution because it meets the economic requirements?
I have no idea whether OpenAI succeeds or not, but I find arguments like yours difficult to understand. Most businesses are not using these systems to draw a map. Maybe the GPT-5 release is lackluster, but it doesn't change the fact that there is some value in these tools today and that, ignoring R&D (which is definitely a huge cost), they run at a profit.
> ignoring R&D (which is definitely a huge cost) they run at a profit.
How can you say such a hand-wavy comment with a straight face? You can't just ignore a huge cost for a company and suddenly call it profitable. That's Enron-level moronic. Without constant R&D, the company gets beaten by competitors that continue R&D; the product is not "good enough" to stop improving.
If I ignored the major costs in my own finances, I could retire, but I can't walk out of the grocery store with a basket of food while telling them I'm ignoring this major cost in my life.
Get real.
I don't know why so many people take these discussions to such a high emotional level. Has the ability to discuss a topic constructively been lost? I know your usual style is high emotion and brashness, but at least try to be constructive.
It's a valid point, and it's the biggest question when it comes to the medium- to long-term business plan. Those R&D costs are an important part of it. My point is that since runtime is profitable, there is a lot more runway to figure out how to tweak R&D spend so that it becomes a viable business for the long term.
There are a lot of questions they need to answer to get to overall profitability, but they are also the fastest-growing company in history by MAU, with a product that you can see has a chance of becoming profitable from all sides. They may fail or be sidelined, but the hyperbole and lack of critical discussion here is disappointing.
I like how when your illogical notion is challenged, you respond by saying the challenger is being emotional.
There is no point in saying that an AI company can just ignore its R&D. There is no company without the R&D. Because of that, any conversation pretending it doesn't exist is pointless. There is no constructive conversation with that as the premise.
You’re arguing against a point I’m not making. I’m not saying R&D isn’t necessary or that it “doesn’t exist”, I’m saying that operationally, the service itself runs at a profit before accounting for R&D. That matters because it means they have a viable revenue engine that could, in theory, fund a sustainable R&D budget if they adjusted spend.
That’s a very different conversation than “pretend R&D doesn’t matter.” No one is suggesting they stop building; the question is whether they can align the burn rate with the revenue base over time. Companies make those tradeoffs constantly when maturing from heavy investment to profitability.
And yes, you are being emotional, not because I disagree with you, but because your language is inflammatory and brutish. It’s hard to have a constructive discussion when every response is dialed to 11. Misframing the premise as “ignoring a huge cost” isn’t debate, it’s a straw man, and it sidesteps the real question of whether the underlying business model works once R&D is right-sized.
Would love to have a real critical discussion on why you disagree but please leave the bad language out of it. It’s boring and I know it’s your typical route in these types of discussions but at least have a valid retort.
I have no horse in this race, but... Hasn't a huge amount of R&D already been spent? You can't retroactively make that go away.
Correct, past R&D spend is already sunk and can’t be undone. But that’s why it’s useful to separate sunk costs from future operating costs when evaluating viability.
The relevant question is whether the ongoing revenue from the existing product is strong enough to support a sustainable level of R&D going forward. If your runtime margins are healthy, you have options: scale back R&D burn, focus on incremental improvements, or use the profits to fund more ambitious projects.
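As a toy illustration of what "right-sizing" would mean (every figure below is invented purely for the arithmetic; none of these are OpenAI's real numbers):

    # Hypothetical annual figures in $B, purely illustrative.
    revenue = 10.0            # subscriptions + API
    serving_cost = 6.0        # inference/serving cost
    operating_profit = revenue - serving_cost   # what's available before R&D

    rd_spend = 8.0            # assumed research + training burn
    net = operating_profit - rd_spend
    print(f"Operating profit: ${operating_profit:.1f}B, net after R&D: ${net:.1f}B")
    print(f"R&D must fit within ${operating_profit:.1f}B for the service alone to fund it")

Whether that inequality can ever be satisfied while staying competitive on frontier models is exactly the open question in this thread.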
The entire US stock market is propped up by big tech companies spending massively on data centers and GPUs for AI. OpenAI is valued higher than Netflix.
A company that pulls in single-digit billions in revenue against hundreds of billions in expenses just doesn't make sense.
> Most businesses are not using these systems t̶o̶ ̶d̶r̶a̶w̶ ̶a̶ ̶m̶a̶p̶.̶
FTFY
And no - while it might be obvious from the outside that it probably won't happen, the continued existence of the business is still predicated on conversion to a for-profit. They don't just need the amount of money they've already "raised", they need to keep getting more money forever.
FTFY? Cute, but you’re arguing against a strawman. My point wasn’t that companies are using GPT to draw maps, it’s that dismissing the tech based on one goofy output ignores the far more common, revenue-generating use cases already in production.
As for “single-digit billions in revenue vs. hundreds of billions in expenses,” that’s just bad math. You’re conflating the total AI capex from hyperscalers with OpenAI’s own P&L. Yes, training is capital-intensive, but the marginal cost to serve (especially at scale) is much lower, and plenty of deployments are already profitable on an operating basis when you strip out R&D burn.
The funding-structure question is fair, and the for-profit conversion path matters, but pretending the whole business is propped up solely by infinite investor charity is just wrong.
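One way to see the capex-versus-marginal-cost distinction is to amortize a fixed training cost over the tokens eventually served (all numbers below are placeholders, not anyone's real costs):

    # Amortizing a one-off training cost over tokens served, hypothetical numbers.
    training_cost = 1.0e9          # $1B training run (assumed)
    serving_cost_per_mtok = 0.50   # $ per million tokens to serve (assumed)

    for tokens_served in (1e12, 1e13, 1e14):   # lifetime tokens served by the model
        amortized = training_cost / (tokens_served / 1e6)   # $ per million tokens
        total = amortized + serving_cost_per_mtok
        print(f"{tokens_served:.0e} tokens: ${total:.2f}/1M tokens "
              f"(${amortized:.2f} of that is amortized training)")

The fixed cost per token shrinks with scale while the serving cost stays roughly flat, which is the sense in which the marginal cost to serve is much lower than the headline training spend. Whether the fixed costs ever stop growing (a new frontier run every year) is a separate question.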
Microsoft AI revenue in 2025: $13 billion, with $10 billion from OpenAI, sold "at a heavily discounted rate that essentially only covers costs for operating the servers."
Capital expenditures in 2025: $80 billion
---
Amazon AI revenue in 2025: $5 billion
Capital expenditures in 2025: $105 billion
---
Google AI revenue: $7.7 billion (at most)
Capital expenditures in 2025: $75 billion
---
Meta AI revenue: $2 billion to $3 billion
Capital expenditures in 2025: $72 billion
---
The math is bad, but it's not "bad math."
(Numbers from here: https://www.wheresyoured.at/the-haters-gui/)
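Putting those quoted figures side by side (same numbers as above; Meta uses the midpoint of its range):

    # Revenue vs. capex in 2025, $B, using the figures quoted above.
    figures = {
        "Microsoft": (13.0, 80.0),
        "Amazon":    (5.0, 105.0),
        "Google":    (7.7, 75.0),
        "Meta":      (2.5, 72.0),   # midpoint of the $2B-$3B range
    }
    for name, (revenue, capex) in figures.items():
        print(f"{name}: ${revenue}B revenue vs ${capex}B capex "
              f"({revenue / capex:.0%} of capex covered)")

On these numbers, reported AI revenue covers roughly 3% to 16% of a single year's capex, which is the mismatch being pointed at (capex does build multi-year capacity, so it isn't a straight annual P&L comparison, but the gap is the point).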
Take the emotional level down a notch. You seemed to miss the point. Hyperscaler spend does not equate to OpenAI P&L.
I didn’t read any emotional level in the post you responded to. Where is it?
Does it at least get the plain written-text list version right, without the drawing?
It regurgitates the list in text form, which is almost certainly in the training data.
But this company is valued more than Netflix. The bar should not be this low.
Yeah, I was just curious how deep the abyss was in this instance.
Am I right to say that "AGI" was just...cancelled again?
Did we just get scammed right in front of our eyes? The release was overhyped and the model now looks underwhelming. If the point was that GPT-5 is supposed to be trustworthy enough for serious use cases, it can't even count or reason about letters.
So much for the "AGI has been achieved internally" nonsense with the VCs and paid shills on X/Twitter bullshitting about the model before the release.
Not only "AGI" is cancelled but they also sort of admitted that so-called "scaling" "laws" don't work anymore. Scaling inference kinda still works, but obviously is bounded by context size and haystack-and-needle diminishing accuracy. So the promise of even steadily moving towards AGI is dubious at best.
How were you still eating that?
> They admitted that they were, and I am not lying about this, paywalling chat colors. […] This is a feature that a company adds when they are out of ideas
This observation, plus the sherlocking of Cursor, suggests that perhaps sherlocking is the ideation strategy. Curious to see whether they're subsidizing token costs specifically to farm and then sherlock ideas.
Yeah, I agree with the OP here. After all this time, shipping the ability to change chat colors has some real we-reached-the-bottom-of-the-backlog energy; they're just now implementing the ideas the PMs didn't consider important enough before.
It hardly feels like a next generation release.
As a related anecdote (not saying that this is industry standard, just pointing out my own experience), the startup I work for launched their app four years ago, and, for all four of those years, we've had "Implement a Dark Mode design" sitting at the bottom of our own backlog. Higher priority feature requests are always pre-empting it.
The core product failure here is overhyping incremental improvement, eroding trust.
PMs operating at this level ought to be bringing in some low-cost UX improvements alongside major features. That simply isn't a sign that they've run out of backlog. (That said, it is rather pathetic to paywall this.)
A moment's consideration ought to show that OpenAI has plenty of significant work they can be doing, even if the core model never gets any better than this.
The funniest part of the demo was the colored chats. And that's behind a paywall, too. I was like, have they become Instagram?
Issues like this are why I don't use AI agents for code. I don't want to sift through the bullshit confidently spewed out by the model.
It doesn't understand anything. It can't possibly "understand my codebase". It can only predict tokens, and it can only be useful if the pattern has been seen before. Even then, it will produce buggy replicas, which I've pointed out during demos. I disabled the AI helpers in my IDEs because the slop they produce is not high-quality code: it's often wrong, often misses what I wanted to achieve, often subtly buggy. I don't have the patience to deal with that, and I don't want to waste the time on it.
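To spell out what "it can only predict tokens" means mechanically, here is a deliberately tiny bigram toy in the same spirit (real models are vastly more capable, but the interface is still "given context, emit a likely next token"):

    from collections import Counter, defaultdict

    # "Train": count which token follows which in a tiny corpus.
    corpus = "the cat sat on the mat . the dog sat on the rug .".split()
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    # "Generate": greedily pick the most frequent continuation.
    token, out = "the", ["the"]
    for _ in range(6):
        if token not in following:   # unseen pattern: nothing sensible to emit
            break
        token = following[token].most_common(1)[0][0]
        out.append(token)
    print(" ".join(out))   # fluent-looking, not necessarily meaningful

It emits continuations of patterns it has seen, with no notion of whether they're true; the argument above is that scale changes how convincing that gets, not the mechanism.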
Time is another aspect of this conversation, with people claiming time savings but the data not backing it up, possibly due to a number of factors intrinsic to our squishy evolved brains. If you're interested, go find Gurwinder's article on social media and time - I think the same forces are at work in the AI-faithful.
There is a quality threshold every developer needs these tools to clear before they're worth their time. For me, that threshold has already been met. Your comment makes me think you don't believe it will start producing higher-quality code than you anytime soon.
I think most of us are in the camp of: even though we don't need AI right now, we believe we won't be valuable in the near future without being highly proficient with the tooling.
> even though we don't need AI right now, we believe we won't be valuable in the near future
This reads to me like you don't think you're valuable right now either
The whole event was shit, but we're all past the point where we can just say that. The technology is now so entrenched that it's unavoidable, so everything now has to jump through hoops to justify its existence and its greatness.