> The bottleneck isn’t code production, it is judgment.
It always surprises me that this isn't obvious to everyone. If AI wrote 100% of the code that I do at work, I wouldn't get any more work done because writing the code is usually the easy part.
I'm retired now, but I spent many hours writing and debugging code during my career. I believed that implementing features was what I was being paid to do. I was proud of fixing difficult bugs.
A shift to not writing code (which is apparently sometimes possible now) and managing AI agents instead is a pretty major industry change.
Anything you do with AI is improved if you're able to traverse the stack. There's no situation where knowing how to code won't put you above peers who don't.
It's like how every job requires math if you make it far enough.
As someone not close to retirement yet, it's a very sad shift.
Well, you would be surprised by the number of people who do not know this. Klarna is probably the most popular example, where the CEO was all about creating more code with AI, fired everyone, and then came to regret it.
Klarna, now there's a company that seems to have no idea what direction it's going in. In the past month, they've announced they're going to be at the forefront of Agentic AI for merchants so... agents can figure out what merchants are selling? They're somehow offering stablecoins to institutional investors to use USDC to extend loans to Klarna? And then they're starting some kind of credit-card rewards program with access to airline lounges?
At my company, doubling the speed of the writing-code part of software projects might speed them up 5%. I think even that’s optimistic.
Imperfectly fixing obvious problems in our processes could gain us 20%, easy.
Which one are we focusing on? AI. Duh.
Sometimes people who don't work in software seem surprised that I don't type faster than I do, given my line of work, and I explain to them that typing speed is never the bottleneck in the work that I do. I don't pretend to know for sure whether this holds for every possible software job, but it's not a concept I've seen surprise many software engineers. This almost seems like the next level of that: these tools certainly do more than just type out the code I want faster, but except for problems where I have trouble figuring out how to express what I want in code, they're not necessarily the solution to any problem I have.
If they could write exactly what I wanted, just faster, I'd probably stop writing code any other way at all, because that would be a free win with no downside, even if the win were small. They don't write exactly what I want, though, so the tradeoff is whether the time they save me writing code is lost again to the extra time spent debugging code they wrote rather than code I wrote. It's not clear to me that the code produced by an LLM right now is close enough to correct, often enough, for this to be a net gain in efficiency for me. Most of the arguments I've seen for investing more of my own time into learning these tools are based on extrapolating trends up to this point, and it's still not clear to me that they'll become good enough to reach a positive ROI for me any time soon. Maybe if the effort to start using them heavily were lower I'd be willing to try, but from what I can tell it would take a decent amount of work just to get back to producing anything close to what I'm currently producing, and I don't see the point of doing that while it's still an open question whether the remaining gap will ever close.
> I explain to them that typing speed is never the bottleneck in the work that I do.
Never is a very strong word. I'm not a terribly fast typist but I intentionally trained to be faster because at times I wanted to whip out some stuff and the thought of typing it all out just annoyed me since it took too long. I think typing speed matters and saying it doesn't is a lie. At the very least if you have a faster baseline then typing stuff is more relaxing instead of just a chore.
I think it depends on the sort of work you do. We had a HubSpot integration that hadn't been touched for three years break, probably because someone at HubSpot sunset their v1 API a few weeks too early... Our internal AI tool, on which I've built my own agents, updated our data transfer service to use the v3 API. It also added typing, but kept the rather insane way of delivering the data since... well... since it's worked fine for three years. It's still not a great piece of software that runs for us, but it's better now than it was yesterday, and it'll now go back to just delivering business value in its extremely imperfect form.
All I had to do was write a two-line prompt and accept the pull request. It took maybe 10 minutes out of my day, most of which was the people I was helping explaining what they thought was wrong. It might have taken me all day if I'd had to go through all the code and the documentation and fix it myself. It might even have taken a couple of days, because I probably would've made it less insane.
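For the curious, a rough sketch of what the v3 side of such a call tends to look like; the endpoint, properties, and pagination handling below are generic examples against HubSpot's public CRM v3 API, not our actual data transfer service, so treat it as illustrative only:

```python
import os
import requests

HUBSPOT_TOKEN = os.environ["HUBSPOT_TOKEN"]  # private-app token, assumed to be set

# The old-style call hit the now-sunset v1 contacts endpoint, roughly:
#   requests.get("https://api.hubapi.com/contacts/v1/lists/all/contacts/all", ...)

def fetch_contacts_v3(limit: int = 100) -> list[dict]:
    """Fetch contacts via the CRM v3 objects API, following cursor pagination."""
    url = "https://api.hubapi.com/crm/v3/objects/contacts"
    headers = {"Authorization": f"Bearer {HUBSPOT_TOKEN}"}
    params: dict = {"limit": limit, "properties": "email,firstname,lastname"}
    results: list[dict] = []
    while True:
        resp = requests.get(url, headers=headers, params=params, timeout=30)
        resp.raise_for_status()
        data = resp.json()
        results.extend(data.get("results", []))
        next_page = data.get("paging", {}).get("next")
        if not next_page:
            return results
        params["after"] = next_page["after"]  # v3 uses a cursor, not offsets
```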
For other tasks, like when I'm working on embedded software, using AI would slow me down significantly. Except when the specifications are in German.
Lots of people have good judgement but don't know the arcane spells to cast to get a computer to do what they want.
I'll stare at a blank editor for an hour with three different solutions in my head that I could implement, and type nothing until a good enough one comes to mind that will save/avoid time and trouble down the road. That last solution is not best for any simple reason like algorithmic complexity or anything that can be scraped from web sites.
No shade on your skills, but for most problems, this is already false; the solutions have already been scraped.
All OSS has been ingested, along with all the discussion about it in forums like this one, the personal blog posts and newsletters about it, the bug trackers, the pull requests, and...
And training is only going to get better at filtering out what is "best."
The vast majority of the problems I’m asked to solve at work do not have open-source code I can simply copy, or discussion forums that already settled on the best answer. Enterprise customers rarely put that stuff out there. Even if they did, it wouldn’t account for the environment the solution sits in, possible future integrations, off-the-wall requests from the boss, or knowing that internal customer X is going to want some other wacky thing, so we need to make life easy on our future selves.
At best, what I find online are basic day-1 tutorials and proof-of-concept stuff. None of it could be used in production, where we actually need to handle errors and possible failure situations.
Obviously novel problems require novel solutions, but the vast majority of software solutions are remixes of existing methods. I don’t know your work so I may be wrong in this specific case, but there are a vanishingly small number of people pushing forward the envelope of human knowledge on a day-to-day basis.
My company (and others in the same sector) depends on certain proprietary enterprise software that has literally no publicly available API documentation online, anywhere.
There is barely anything that qualifies as documentation that they are willing to provide even under NDA, whether for lock-in reasons or laziness (it's an ERP-ish product narrowly designed for the specific sector, and more or less part of a duopoly).
The difficulty in developing solutions is 95% understanding business processes and requirements. I suspect this kind of thing becomes more common the further you get from a "software company" into specific industry niches.
The point is that the best solution is based on specific context of my situation and the right judgment couldn't be known by anyone outside of my team/org.
I thought you were going to point out how this phrase (and others) makes it painfully obvious this article was written by AI.
I don't understand this thinking.
How many hours per week did you spend coding on your most recent project? If you could do something else during that time, and the code still got written, what would you do?
Or are you saying that you believe you can't get that code written without spending an equivalent amount of time describing your judgments?
"Writing code" is not the goal. The goal is to design a coherent logical system that achieves some goal. So the practice of programming is in thinking hard about what goal I want to achieve, then thinking about the sort of logical system that I could design that would allow me to verifiably achieve that goal, then actually banging out the code that implements the abstract logical system that I have in my head, then iterating to refine both the abstract system and its implementation. And as a result of being the one who produced the code, I have certainty that the code implements the system I have in mind, and that the system it represents is for for the purpose of achieving the original goals.
So reducing the part where I go from abstract system to concrete implementation only saves me time spent typing, while at the same time decoupling me from understanding whether the code actually implements the system I have in mind. To recover that coupling, I need to read the code and understand what it does, which is often slower than just typing it myself.
And to even express the system to the code generator in the first place still requires me to mentally bridge the gap between the goal and the system that will achieve that goal, so it doesn't save me any time there.
The exceptions are things where I literally don't care whether the outputs are actually correct, or they're things that I can rely on external tools to verify (e.g. generating conformance tests), or they're tiny boilerplate autocomplete snippets that aren't trying to do anything subtle or interesting.
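As a concrete sketch of that middle case: when a trusted reference implementation already exists, I'm happy to let a model generate a replacement, because a conformance test like the one below, rather than my reading of the generated code, is what establishes correctness. Everything here is invented for illustration; the slug functions are stand-ins, and in practice `slug_generated` would be imported from the AI-written module.

```python
import random

import pytest


def slug_reference(title: str) -> str:
    """Trusted, boring reference implementation I wrote and verified myself."""
    cleaned = "".join(c.lower() if c.isalnum() else "-" for c in title)
    return "-".join(part for part in cleaned.split("-") if part)


# In practice this would come from the generated code, e.g.
#   from fast_slug import slugify as slug_generated   # hypothetical module
# It is stubbed here so the sketch stays self-contained and runnable.
slug_generated = slug_reference


def random_title(rng: random.Random) -> str:
    # Arbitrary-ish inputs: letters, digits, spaces, punctuation.
    alphabet = "abc XYZ 123 -_!?"
    return "".join(rng.choice(alphabet) for _ in range(rng.randint(0, 40)))


@pytest.mark.parametrize("seed", range(200))
def test_generated_matches_reference(seed: int) -> None:
    # The generated implementation must agree with the reference on arbitrary
    # inputs; this check, not my reading of the code, is what I rely on.
    rng = random.Random(seed)
    title = random_title(rng)
    assert slug_generated(title) == slug_reference(title)
```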
The actual act of typing code into a text editor and building it could be the least interesting and least valuable part of software development. A developer who sees their job as "writing code" or a company leader who sees engineers' jobs as "writing code" is totally missing where the value is created.
Yes, there is artistry, craftsmanship, and "beautiful code" which shouldn't be overlooked. But I believe that beautiful code comes from solid ideas, and that ugly code comes from flawed ideas. So, as long as the (human-constructed) idea is good, the code (whether it is human-typed or AI-generated) should end up beautiful.
Raising the question: Where is the beautiful machine-generated code?
Where's the beautiful human-generated code? There's the IOCCC, but that's the only code competition I know of that's judged on the code itself, and it's not even a beauty pageant. There's some demoscene stuff, which is more of a golf thing. There are random one-offs, like the fast inverse square root (famously not actually Carmack's) or Duff's device, but other than that, where are the good code beauty pageants?
Excellent point. Why are folks downvoting this?
Maybe they’re AIdiots?
In my experience (and especially at my current job) bottlenecks are more often organizational than technical. I spend a lot of time waiting for others to make decisions before I can actually proceed with any work.
My judgement is built into the time it takes me to code. I think I would be spending the same amount of time on that while reviewing the AI's code to make sure it isn't doing something silly (even if it does technically work).
A friend of mine recently switched jobs from Amazon to a small AI startup where he uses AI heavily to write code. He says it's improved his productivity 5x, but I don't really think that's the AI. I think it's (mostly) the lack of bureaucracy in his small 2 or 3 person company.
I'm very dubious about claims that AI can improve productivity so much because that just hasn't been my experience. Maybe I'm just bad at using it.
Does voice transcription count as AI? I'm an okay typer, but being able to talk to my computer, in English, is definitely part of the productivity speed up for me. Even though it struggles to do css because css is the devil, being able to yell at my computer and have it actually do things is cathartic in ways I never thought possible.
Depends. What year is it? Voice recognition definitely used to be considered AI, but today it's well researched and unexciting.
No, not AI. Just an alternative input method.
All you did was change the programming language from (say) Python to English. One is designed to be a programming language, with few ambiguities, etc. The other is, well, English.
The speed of typing code is not all that different from the speed of typing English, even accounting for the volume expansion of English -> <favorite programming language>. And then, of course, there is the new extra cost of reading and understanding whatever code the AI wrote.
The thing about this metaphor that people never seem to complete is this:
Okay, you've switched to English. The speed of typing the actual tokens is just about the same but...
The standard library is FUCKING HUGE!
Every concept that you have ever read about? Every professional term, every weird thing that gestures at a whole chunk of complexity/functionality ... Now, if I say something to my LLM like:
> Consider the dimensional twins problem -- how're we gonna differentiate torque from energy here?
I'm able to ... "from physics import Torque, Energy, dimensional_analysis" And that part of the stdlib was written in 1922 by Bridgman!
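The `physics` import is rhetorical, of course, but the dimensional-twins point is easy to make concrete. A toy sketch (every name below is invented for illustration): torque and energy both reduce to kg·m²/s², so dimensions alone can't separate them; you need distinct types, plus the missing context (an angle) to cross between them.

```python
from dataclasses import dataclass

# Torque (N*m) and energy (J) share the SI dimensions kg * m^2 / s^2,
# so they are "dimensional twins": dimensional analysis alone cannot
# tell them apart. Separate types keep them from being mixed up.
SI_DIMENSIONS = {"kg": 1, "m": 2, "s": -2}  # identical for both quantities


@dataclass(frozen=True)
class Energy:
    joules: float


@dataclass(frozen=True)
class Torque:
    newton_meters: float


def work_done(torque: Torque, angle_radians: float) -> Energy:
    # Crossing from torque to energy requires an angle, a dimensionless
    # quantity, which is exactly the context the raw dimensions hide.
    return Energy(torque.newton_meters * angle_radians)


assert work_done(Torque(5.0), 2.0) == Energy(10.0)
```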
> The standard library is FUCKING HUGE!
And extremely buggy, and impossible to debug, and does not accept or fix bug reports.
AI is like an extremely enthusiastic junior engineer that never learns or improves in any way based on your feedback.
I love working with junior engineers. One of the best parts about working with junior engineers is that they learn and become progressively more experienced as time goes on. AI doesn't.
People need to decide whether their counter to AI making programmers obsolete is "current-generation AI is buggy, and this will not improve until I retire" or "I only spend 5% of my time coding, so it doesn't matter if AI can instantly replace my coding".
And come on: AI definitely will become better as time goes on.
It gets better when the AI provider trains a new model. It doesn't learn from the feedback of the person interacting with it, unlike a human.
Exactly. LLMs are faster for me when I don't care too much about the exact form the functionality takes. If I want precise results, I end up using more natural language to direct the LLM than it takes if I just write that part of the code myself.
I guess we find out which software products just need to be 'good enough' and which need to match the vision precisely.
> Or are you saying that you believe you can't get that code written without spending an equivalent amount of time describing your judgments?
It’s sort of the opposite: You don’t get to the proper judgement without playing through the possibilities in your mind, part of which is accomplished by spending time coding.
I think OP is closer to the latter. How I typically have been using Copilot is as a faster autocomplete that I read and tweak before moving on. Too many years of struggling to describe a task to Siri left me deciding “I’ll just show it what I want” rather than tell.
Do people reading this post not understand that this is the output of a prompt like 'analyze <event> with <perspective> arriving at <conclusion>'? Tighten up your epistemology if you're arguing with an author who isn't there.
The very fact that people are arguing with a non-existent author signals that whatever generated the content did a good enough job to fool them today. Tomorrow it will do a good enough job to fool you. I think the more important question is what this means in terms of what is really important and what we should invest in to remain anchored in what matters.
The article is full of snowclones that I see in AI writing. Or, as the AI would put it, "that's style *without* authorship".
The point is still valid, although I've seen it made many times over.
This has been happening a lot recently, where an article immediately sets off all my AI alarm bells but most people seem to be happily engaging with it. I’m worried we’re headed for a dystopian future where all communication is outsourced to the slop machine. I hope instead there is a societal shift to better recognize it and stigmatize it.
I've noticed some of this in recent months. I've also noticed people editing out some of the popular tells, like replacing em-dashes with commas, or at least I think so, because of odd formatting/errors in places where it sounds like the LLM would have used a dash.
But at this point I'm not confident that I'm catching all the LLM-generated text, nor that I'm avoiding false positives.
> instead there is a societal shift to better recognize it
Unlikely. AI keeps improving, and we are already at the point where real people are accused of being AI.
> Treat AI as force multiplication for your highest-judgment people. The ones who can design systems, navigate ambiguity, shape strategy, and smell risk before it hits. They’ll use AI to move faster, explore more options, and harden their decisions with better data.
Clever pitch. Don't alienate all the people who've hitched their wagons to AI, but push valuing highly-skilled ICs as an actionable leadership insight.
Incidentally, strategy and risk management sound like a pay grade bump may be due.
Something about the way the article sets up the conversation nags at me a bit, even though it concludes with statements and reasoning I generally agree with. It sets out what it wants to argue clearly at the start:
> Everyone’s heard the line: “AI will write all the code; engineering as you know it is finished... The Bun acquisition blows a hole in that story.”
But what the article actually discusses and demonstrates by the end is that the aspects of engineering beyond writing the code are where the value of human engineers lies at this point. To me that doesn't seem like an example of a revealed preference. If you take it back to the first part of the original quote above, it's just a different wording for AI being the code writer and engineering being something different.
I think what the article really means to drive against is the claim "because AI can generate lots of code, we don't need any type of engineer", but that's just not what the quote they chose to set out against is saying. Without changing that claim, the acquisition of Bun is not really a counterexample; Bun had just already changed the way they do engineering, so the AI wrote the code and the engineers did the other things.
But the engineers can only do that because they have written lots of code before. Where will these engineers get their experience in the future?
And what about vibe coding? The whole selling point of many AI companies is that you don't need experience as a programmer.
So they're selling something that isn't true; it's not FSD for coding, but driving assistance.
> Where will these engineers get their experience in the future
The house of the feeble minded: https://www.abelard.org/asimov.php
These are all things I'd rather have seen the article set out to talk about. Instead, it opens by trying to disprove a statement that AI can write the coding portion of the engineering problem, and its evidence is Bun using AI in exactly that way, which is supposed to show that Anthropic must not actually believe it.
I mean, it smells like an AI slop article, so it's hard to expect much coherence.
I was thinking the same, but it's as if they only used AI to handle the editing or something, because even throwing the article into ChatGPT with "how could this article be improved: ${article}" gives:
> Tighten the causal claim: “AI writes code → therefore judgment is scarce”
As one of the first suggestions, so it's not something inherent to whether the article used AI in some way. Regardless, I care less about how the article got written and more about what conclusions really make sense.
I guess y'all disagree?
> The Bun acquisition blows a hole in that story.
> That contradiction is not a PR mistake. It is a signal.
> The bottleneck isn’t code production, it is judgment.
> They didn’t buy a pile of code. They bought a track record of correct calls in a complex, fast-moving domain.
> Leaders don’t express their true beliefs in blog posts or conference quotes. They express them in hiring plans, acquisition targets, and compensation bands.
Not to mention the gratuitous italics-within-bold usage.
No, no, I agree: “No negotiations. No equity. No retention packages.”
I don’t know if HN has made me hyper-sensitized to AI writing, but this is becoming unbearable.
When I find myself thinking “I wonder what the prompt was they used?” while reading the content, I can’t help but become skeptical about the quality of the thinking behind the content.
Maybe that’s not fair, but it’s the truth. Or put differently “Fair? No. Truthful? Yes.”. Ugh.
People speak in relative terms and hear in absolutes. Engineers will never completely vanish, but it will certainly feel like it if labor demand is reduced enough.
Technically, there’s still a horse-buggy-whip market, an abacus market, and probably a market for anything else you think technology consumed. Each is just a minuscule fraction of what it once was.
> but it will certainly feel like it if labor demand is reduced enough
All the last productivity multipliers in programming led to increased demand. Do you really think the market is saturated now? And what saturated it is one of the least impactful "revolutionary" tools we got in our profession?
Keep in mind that looking at statistics won't lead to any real answer, everything is manipulated beyond recognition right now.
Demand for software has been tied to demand for software engineering labor. That is no longer true. So demand for software may still go up while demand for labor goes another direction.
Also I do hold a belief that most tech companies are taking a cost/labor reduction strategy for a reason, and I think that’s because we’re closing a period of innovation. Keeping the lights on, or protecting their moats, requires less labor.
Each of the last productivity multipliers coincided with greatly expanded markets (e.g. the PC revolution, the internet, mobile). Those are at the saturation point, and we've effectively built all the software those things need. Of course there is still room for innovation in software, but it is not like the past, when we also had to build all the low-hanging fruit at the same time. That doesn't require nearly as many people, and it was already starting to become apparent before anyone knew what an LLM was.
This AI craze swooped in at the right time to help hold up the industry and is the only thing keeping it together right now. We're quickly trying to build all the low-hanging fruit for it, keeping many developers busy (although not like it used to be), but there isn't much low-hanging fruit to build. LLMs don't have the breadth of need that previous computing revolutions had. Once we've added chat interfaces to everything, which is far from a Herculean task, all the low-hanging fruit will be gone. That's quite unlike previous revolutions, where we effectively had to build all the software from scratch, not just slap some lipstick on existing software.
If we want to begin to relive the past, we need a new hardware paradigm that needs all the software rewritten for it again. Not an impossible thought, but all the low-hanging hardware directions have also been picked at this point so the likelihood of that isn’t what it used to be either.
> Each of the last productivity multipliers coincided with greatly expanded markets
They didn't. Though it may be a relevant point that those multipliers spread slowly enough that we can't clearly separate their effects.
Anyway, the idea that any one of those large markets is at a saturation point requires some data. AFAIK, everything from mainframe software to phones has (relatively speaking) exploded in popularity every time somebody made it cheaper, so this amounts to claiming that all of those markets just changed, too recently to measure, without any large event to correlate the change with.
> That's quite unlike previous revolutions where we had to build all the software from scratch
We have rewritten everything from scratch exactly once since high-level languages were created in the 70s.
While I agree with the premise of the article, even if it was a bit shallow, this claim made at the beginning is also still true:
> Everyone’s heard the line: “AI will write all the code; engineering as you know it is finished.”
Software engineering pre-LLMs will never, ever come back. Lots of folks don't understand that. What we're doing at the end of 2025 looks very different from what we were doing at the end of 2024. Engineering as we knew it a year or two ago will never return.
Does it?
I use AI as a smart autocomplete. I've tried multiple tools on multiple models and I still _regularly_ have it dump absolute nonsense into my editor: in the best case it's gone off on a tangent, but in the most common case it's assumed something (often directly contradicting what I've asked it to do), run with it, and lost the plot along the way. Of course, when I correct it, it says "you're right, X doesn't exist so we need to do X"...
Has it made me faster? Yes. Has it changed engineering? Not even close. There's absolutely no world where I would trust what I've seen out of these tools to run in the real world, even with supervision.
When you have that hair-raising "am I crazy, why are people touting AI?" feeling, it's good to look at their profile. Oftentimes they're caught up in some AI play. It's also good to remember that YC has heavy investments in generative AI, so this site is heavily biased.
Context is king, too: in greenfield startups where you care little about maintenance and can accept redundant front-end frameworks and backend languages? I believe agent swarms can poop out a lot, lot, lot of code relatively quickly… Copy and paste is faster, though. Downloading a repo is very quick.
In startups I’ve competed against companies with 10x and 100x the resources and manpower on the same systems we were building. The amount of code they theoretically could push wasn’t helping them, they were locked to the code they actually had shipped and were in a downwards hiring spiral because of it.
Here’s the thing: an awful lot of it doesn’t even compile or run, never mind do the right thing. My most recent example was asking it to use Terraform to run an Azure container app with an environment variable in an existing app environment. It repeatedly made up where the environment block goes, and Cursor kept putting the actual resource in random places in the file.
The ten dollar word for this is “revealed preferences”
I learned that phrase from one of the bold sentences in this article.
"Believe the checkbook? Why do that when I can get pump-faked into strip-mining my engineering org?"- VPs everywhere
How do I know they didn't buy them just to make sure their competitors couldn't?
Can anyone tell me the leading theory explaining the acquisition?
I can’t see how buying a runtime for the sake of Claude Code makes sense.
The bun acquisition is driven by current AI capabilities.
This argument requires us to believe that AI will just asymptote and not get materially better.
Five years from now, I don't think anyone will make these kinds of acquisitions anymore.
An Anthropic engineer was getting some attention for saying six months: https://www.reddit.com/r/ClaudeAI/comments/1p771rb/anthropic...
I assume this is at least partially a response to that. They wouldn't buy a company now if it would actually happen that fast.
> This argument requires us to believe that AI will just asymptote and not get materially better.
That's not what asymptote means. Presumably what you mean is the curve levelling off, which it already is.
This seems overly pedantic. The intended meaning is clear.
Hardly, asymptotic behavior can be anything, in fact that's the whole question: what happens to AI performance as we tend to infinity? Asymptoting to `y = x` is very different to levelling off.
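To state the distinction precisely (with f(t) standing in for capability over time, a framing I'm adding here for illustration):

```latex
% "Levelling off": a horizontal asymptote, i.e. capability is bounded.
\lim_{t \to \infty} f(t) = L < \infty
% "Asymptoting to y = x": an oblique asymptote, i.e. capability is unbounded
% and simply tracks the line ever more closely.
\lim_{t \to \infty} \bigl( f(t) - t \bigr) = 0
\quad\text{while}\quad
\lim_{t \to \infty} f(t) = \infty
```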
> This argument requires us to believe that AI will just asymptote and not get materially better.
It hasn't gotten materially better in the last three years. Why would it do so in the next three or five years?
Deep learning and transformers have given step functions in AI's capabilities. It may not happen, but it's reasonable to expect another step-function development soon.
I disagree with this article and what it attempts to do: frame the acquisition using a conjecture. The only things to “believe” are the author's reasons, which are flimsy, because they are the very thing we need to be critical of.
I don’t know why the acquisition happened, or what the plans are. But it did happen, and for this we don’t have to suspend disbelief. I don’t doubt Anthropic has plans that they would rather not divulge. This isn’t a big stretch of imagination, either.
We will see how things play out, but people are definitely being displaced by AI software doing work, and people are productive with these tools. I know I am. The user counts of Claude Code, Gemini, and ChatGPT don’t lie, so let’s not kid ourselves.