When interactive compilers and debuggers started to become more widely available, my father complained that junior devs would just make random changes to the code until it worked, rather than taking the time to understand it, unlike in the days when you had to wait for your punch cards to go through an overnight batch run.
It seems that lowering friction will always lower understanding, because industrious people will always try to get the most done, and inexperienced people will conflate getting the most done now with getting the most done in general.
To be fair, I would guess that 90% of the programmers I know would never have learned to program if they had been forced to do it on ancient mainframes with punched card readers. Personal computers made programming something that any schmuck could learn. That sure includes me.
But I think all that does is point to the fact that the average programmer's knowledge about computers and programming becomes poorer and poorer as time goes by. We are continuously pessimising for knowledge and skills.
I'm sympathetic to what you are saying, but you should also take into account the increase in the number of things a programmer needs to know and consider today. I've been programming for 57 years. There is a huge amount to know, and the tasks are huge compared to what I knew and did back in 1968.
Over that time I grew as a programmer as the work became more difficult, yet I couldn't keep up with the technology. In spite of my attempts to keep up, it seemed that every few years I had to further limit the scope of my work in order to cope. Judging by my experience, today's competent programmers will fall further and further behind, with what they know becoming more obsolete even as they restrict their scope so they can learn the new work on the job. Young programmers won't need to know much of what today's competent programmers know. At the same time, the increasing complexity of their assignments will require them to go deeper into new matters, and they in turn will become overwhelmed. And so on it will go.
On the other hand, perhaps I don't know what I'm talking about. : )
I don’t think the issue is even a lack of knowledge.
I don’t fault a C developer for not knowing and appreciating the intricacies of a superscalar pipelined processor using out of order execution.
But when a C developer writes a multithreaded program they need to really understand why multiple threads are necessary, proper use of locking, and how it will impact the overall application. They need to look beyond their tiny bit of code and the assigned task.
Unfortunately, a number of developers fully expect to drive on a busy street blindfolded and successfully reach their destination.
> But when a C developer writes a multithreaded program they need to really understand why multiple threads are necessary, proper use of locking, and how it will impact the overall application. They need to look beyond their tiny bit of code and the assigned task.
Not really, just add the "volatile" keyword to global variables at random until the bugs go away!
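For anyone who missed the sarcasm, here is a minimal C sketch of the point (racy_counter, safe_counter and worker are made-up names, just for illustration): volatile only stops the compiler from caching the variable; the increment is still a non-atomic read-modify-write, so the data race remains, while the locking the parent comment describes actually fixes it.

    #include <pthread.h>
    #include <stdio.h>

    /* The "volatile" anti-pattern: volatile prevents the compiler from caching
       the value in a register, but counter++ is still a non-atomic
       read-modify-write, so two threads can lose each other's updates. */
    volatile long racy_counter = 0;

    /* Proper locking: a mutex serializes the read-modify-write and provides
       the memory ordering the threads need. */
    long safe_counter = 0;
    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            racy_counter++;              /* data race: result is unpredictable */

            pthread_mutex_lock(&lock);
            safe_counter++;              /* serialized: result is exact */
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);

        /* Expect safe_counter == 2000000; racy_counter typically falls short. */
        printf("racy=%ld safe=%ld\n", racy_counter, safe_counter);
        return 0;
    }

Built with cc -pthread, safe_counter always ends at 2,000,000 while racy_counter usually doesn't, no matter how much volatile you sprinkle around.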
> if they had been forced to do it on ancient mainframes with punched card readers
Interesting point. I for one started out with punched cards, and your point is valid: you had to think deeply when preparing your next program submission, because turnaround was slo-o-ow and often kept you in the computing center waiting, waiting, waiting.
So one might argue that when we got terminals that submitted "virtual punch cards" and got back "virtual printouts" (this is late 70s), it debased the practice of programming. /s
OTOH in this day and age, the quick compile times of Go are - to put it bluntly - absolutely wonderful.
In the past 20 years, I’ve seen some people make random changes until the code “worked”.
Fortunately, I’ve had the privilege to be on teams where such were relatively few.
Your father was certainly right.
Linus echoing your father's thoughts back in 2000:
I happen to believe that not having a kernel debugger forces people to think about their problem on a different level than with a debugger. I think that without a debugger, you don't get into that mindset where you know how it behaves, and then you fix it from there. Without a debugger, you tend to think about problems another way. You want to understand things on a different _level_.
Full email at https://lkml.org/lkml/2000/9/6/65
Where did all of this trust come from?
When I first started hacking I had the expectation that every chunk of code I came across was broken in some way.
All of the software I relied upon was broken in some visible way. My Windows 95 installation would have multiple kernel panics per day. My Usenet reader would fail catastrophically when encountering non-ASCII text. My CD-ROM copies of games would freeze until I kicked the side of the computer, which consistently worked.
I still see bugs everywhere nowadays, but they're more hidden and, honestly, more frustrating since they're so opaque.
> Where did all of this trust come from?
A concerted PR operation from OAI and Microsoft pushing the belief that LLMs can 'reason' and thus be trusted with things beyond formulaic high school and college papers.
I hadn't realized LLMs were a thing before Windows 95 and Usenet.
Please try to understand the comment you're replying to instead of going with a knee-jerk reaction.
When we decoupled results from capital. It doesn't matter how buggy your software is if people are forced to use it anyway, especially if you haven't turned a profit in ten years but you still get VC money anyway.
Remember when "running a business" meant "making a good product and making some money in the process"? Yeah, me neither.
Why are we blaming VC for bad products? There are always very profitable companies consistently churning out bad products. Sometimes it feels like quality and profit are inversely correlated.
Microsoft isn't getting punished for its bad experience.
Where I work there is now a retention policy for all Microsoft products, including OneNote.
Yep, if you manage to fight through the interface, you are now greeted with an application that will lose your long-term notes.
If you try to do this in a work context, you'll be told you are wasting time. Even if you aren't fired, you will not be considered for promotion: the way to do that is to have "a lot of impact". This means shipping a lot of half-baked stuff. The other piece of the puzzle you need is having a good "work ethic". This is best demonstrated via late-night debugging heroics where you patch up the crud you shipped earlier while getting "impact" points. For whatever reason, people who run companies believe that their customers want "lots of crud quickly" instead of quality products.
How true that is depends entirely on what sort of company you're working for. It may be common with SV-style companies (and it shows), but it's not nearly as common in the rest of the software world.
Writing code is easy compared to supporting, debugging and enhancing code. AI is much better at "greenfield" coding, where you start from scratch, either on an entire app or a new feature. For anything non-trivial, it is terrible at debugging. At best it is a super rubber duck that is nice to talk to and that might have a few words, buried inside screenfuls of text, that help a human realize what might be going wrong. We're still in the honeymoon phase, where AI has only written new stuff and hasn't been around long enough to have to support code with tens or hundreds of thousands of LoC.
Personally, I fail to see how this worry can be 'new'.
A 'new' transportation worry: many car drivers don't know where to turn or even where they are heading without GPS.
Or more closely related, when I import a library or utilise an external tool, I don’t know how that works either.
When I store a record in the database, I have no idea what Postgres does to do that, it just happens.
I mean, LLMs are new. And if you can't see the difference between an entire profession using broken, hallucinatory tooling to write buggy code, and drivers using more convenient maps, then I'm not sure how to help.
I see the difference. Similarity of course does not mean being identical. I've read about the GPS/LLM similarities on HN, and when you think about it, it behaves a bit like a GPS that thinks up new locations and their paths 5 percent of the time. It is best to do some checks before hitting the gas.
When GPS coordinates are handled by an LLM, the fear won't be as novel.
Turns out, knowing stuff is important when you try to do stuff that you claim to be an expert in, instead of outsourcing it to a crappy incorrect tutorial generator. Competitive edge for computer programmers going forward: knowing how computers work.
> Competitive edge for computer programmers going forward: knowing how computers work.
Only if they're young or famous. Otherwise, they're automatically kicked to the curb because the common thinking these days is that anyone with even a single gray hair is "out of touch with technology" even if they've wasted their entire lives staying on the "bleeding edge". Apparently only the young understand technology, and the people who built and maintained it all their lives are just "clueless old farts" like the morons we've got in Congress passing laws about technology.
A lot of companies were already outsourcing to companies with humans who don't know how anything works.
Did young coders ever know how their code worked?
Back in my day, we copied and pasted random Stack Overflow answers until something worked, like real junior devs.
You had it easy! I'm old enough to remember copy-pasting things from random VBulletin forums and the comment section of PHP documentation. And sometimes from old mailing lists that showed up in search results :).
You had copy and paste? Lucky!! In my day we actually had to type code by hand from books or magazines (or worse yet from memory).
That's my take on the whole AI coding industry. It's on par with Stack Overflow responses.
Senior devs generally know how code works, and all senior devs started as juniors. I think the difference is that every annoying/stupid issue turned into a lesson. Now it's one message to ChatGPT and then forgotten about.
This is true of all levels of abstraction. The next question is: does this level of abstraction cost more than it’s worth?
Just ask the LLM to walk you through each line of code and to create and explain the dependency graphs, and in a relatively short period of time they’ll know exactly how their code works. Using Claude Code is quite useful for this - I use it on GitHub repositories I’m curious about.
I think the problem is "but why?" What incentive would they have when they just need a brief answer or solution, i.e. just make code do this, get the boss their answer on that, etc.?
It's going to be fun when LLM agents do all the communication for us.
And newspapers will have people walking into traffic, and cars will lead to the extinction of horses, and if horse manure keeps piling up at this rate, London will be buried in a decade.
Won't matter once the agents code review the agent-generated code.
If that happens, it'll presumably matter somewhat when it comes to continued employment of said coders.
IMO the primary purpose of a code review is to check that what's written is understandable to at least one other developer besides the author. Having a machine be the primary reviewer kind of misses the point.
Meh. Many coders don't know how transistors work, and they can still be productive.
If many "young coders" don't know how their code work but can solve more problems faster, is it really a problem?
The problem isn't knowing how transistors work (although in the old days, you often did learn something about that in the process of learning computers), but rather not even knowing how basic code structure works at all, or simple required math, or logic. Not knowing the absolute basics of code and then thinking you're a "coder" because you can blindly copy/paste code without understanding it on any level is straight-up dangerous.
Sure, it's a problem if that code ever needs to be fixed or maintained. Or if it irreversibly alters data in a way that the "coder" didn't understand or intend. If it's a prototype or some kind of one-off with limited side effects, I guess there's not much risk.
Well, it's hard to solve a problem you don't understand, right? When a problem fundamentally lies in a domain no one understands, how will it ever be fixed or solved? The best you can do is paper over the problem in some way, or somehow get randomly lucky.