A little melodramatic at the end.
As with every OpenAI release, I’d take it with a grain of salt until it sees wide public usage.
So far every iteration of gpt since 4 has been pushed as a replacement for developers, but I have yet to feel that these tools are even consistently useful for my day to day.
Lately Copilot has been giving me outright incorrect information, and when you point it out, it’ll admit it was wrong and give another incorrect answer. Not even that obscure of a question either. I have zero trust in it being able to generate anything more than bare-bones boilerplate (and even then).
I also feel they’ve made responses fairly wordy to make it appear to know stuff.
Can you ask for the prompt when chatting?
I think they definitely add some value. For example, I haven’t written complex SQL queries since I was in school 20 years ago. Recently Claude helped me write a recursive database query which did an in-database graph traversal. It would have taken me way more time to relearn how to do that from scratch.
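For anyone similarly rusty: the standard tool for this kind of in-database traversal is a recursive common table expression. A minimal sketch, assuming a hypothetical edges(parent_id, child_id) table (the syntax is standard SQL, supported by Postgres and SQLite among others):

```sql
-- Hypothetical edge table: edges(parent_id, child_id).
-- Walk the graph from node 1, entirely inside the database.
WITH RECURSIVE reachable(node_id, depth) AS (
    SELECT child_id, 1 FROM edges WHERE parent_id = 1
    UNION ALL
    SELECT e.child_id, r.depth + 1
    FROM edges AS e
    JOIN reachable AS r ON e.parent_id = r.node_id
    WHERE r.depth < 10  -- depth cap as a cheap guard against cycles
)
SELECT DISTINCT node_id FROM reachable;
```

The depth cap is the simplest cycle guard; a production query would more likely track visited nodes explicitly.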
But I don’t think many people are that worried about GPT-4. We’re worried about what it’ll look like with 5-10 more years of progress. o3 is apparently already better than 99% of competitive programmers or something - it just costs thousands of dollars of compute per query. But give them a few short years.
On the other hand one could argue that you should understand the code you write. A super complex in-database graph traversal should probably be properly understood in case performance problems pop up. Or the problem itself should be broken down differently.
Complex SQL and regex are pretty much my only current use cases for it. It saves tons of time on things you don't do every day and are rusty at, while you can still tell whether it's doing what you asked for.
It's basically a replacement for Stack Overflow, made possible by scraping Stack Overflow.
I’ll believe what they say about O3 when it starts seeing use in the wild.
Yes, I think this is the major factor to consider. If it’s only the cost that’s stopping it from replacing real people, then it’s a matter of a few years, several at most, before it’s more than cost-effective.
I have needed zero LLM tools to do my job efficiently, across any release of any model. This predates OpenAI. The code suggestion/autocomplete has been sometimes nice but rarely is what I’m going to type out.
I don’t discourage use of LLMs but I don’t encourage it either. If you need to pay OpenAI to get your work done then that’s on you.
> but rarely is what I’m going to type out.
Just curious, does it matter? For some simple boilerplate code, simple test cases and simple scaffolding, which are needed in most applications, do you even care?
But simple scaffolding, test cases and boilerplate were around before LLMs, via code generators/scaffolders, so yes, it’s revisionist history to pretend an LLM invented those things. None of them are complex ideas either.
I’m not sure I understand what you’re asking though.
While true, LLMs are just a more advanced version of it. Let's take Go as an example, as it is a very boilerplate-heavy language; with Claude, I only have to write the meat, for instance:
result, err := doSomething()
and it'll generate the rest around it. Every time. Faster than I would type it, and with more eye for detail (I am lazy; I will forget things now that I've done it for the 10,004th time). So you can imagine some boilerplate scaffolding tool that would do this err handling for you, but if you have a custom one, with 'old' tools you would have to tell the tool; now it just does it.
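As a concrete sketch of that workflow: you type the one call and the assistant expands the standard handling around it. Here doSomething is a hypothetical stand-in for any fallible call:

```go
package main

import "fmt"

// doSomething is a hypothetical stand-in for any fallible call.
func doSomething() (string, error) {
	return "it worked", nil
}

func main() {
	// You type the "meat"...
	result, err := doSomething()
	// ...and the assistant fills in the standard boilerplate around it:
	if err != nil {
		fmt.Printf("doSomething failed: %v\n", err)
		return
	}
	fmt.Println(result)
}
```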
It becomes especially clear in frontend tasks: it just generates 100+ lines of React that work and look good. Who likes writing that type of thing manually? I know zero people who do. You will want to tweak it afterwards, but the 100k+ lines of React we generally have in a project are not something we would want to write from scratch these days. Which old-school scaffolding tool does that, getting you 80-90% of the way there for a SaaS on the first shot? As far as I know, nothing like that exists without the lovely verbosity of typing a million tags yourself.
It saves us a lot of time we don't have to spend on boring stuff. We only have to write the actual business logic and data models; the rest drops out. And of course we only have to write that very loosely, not formally.
I love writing code; perhaps the software developers who do not will enjoy a different career once they LLM themselves out of this one. Writing 100k lines of code is a great achievement. Sorry to hear you’re so bored.
What is boring is the time I’ve wasted on a recent project fixing all of my coworker’s LLM mistakes. I think I’ve nearly rewritten the entire PHP WordPress plugin. So his work really was a waste of time.
For golang I have never had a problem writing if err != nil. It takes a few seconds; paying some LLM company to write something I could write myself but don’t want to because “I’m bored” is a ludicrous use of time to me.
But the point is that it doesn't cost time; it just falls out faster than you can type. To each their own; we save tons of time and errors with it. YMMV, and that's fine.
> I only have to write the meat, for instance; result, err:= doSomething()
Ever heard of snippets? They do that too, but much more energy efficient and faster.
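For example, a classic editor snippet covers the same expansion with a few lines of configuration. A hypothetical VS Code user snippet for the Go error check could look like:

```json
{
  "Go error check": {
    "prefix": "iferr",
    "body": [
      "if err != nil {",
      "\treturn ${1:nil}, fmt.Errorf(\"${2:context}: %w\", err)",
      "}"
    ],
    "description": "Expand the standard Go error-handling boilerplate"
  }
}
```

Typing the prefix and hitting tab expands the block, with the placeholders ready to fill in.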
> Which old school scaffold etc tool does that?
create-react-app and yeoman
Time spent writing code is IMO the smallest part of software engineering.
Optimizations give better results when care is taken first about the most costly parts.
That's the stuff I rather spend my time on when making money with software; all the other parts. One exists with the other.
> Users think with data. Imagine you are a cashier at a grocery store. You learn how to calculate the amount by watching the owner do it for a couple of customers.
No one does that. You go to school and learn numbers and basic arithmetic, which gives you all the formulas you need. And even illiterate people work out quickly the minimum they need to do transactions.
> This ability to abstract logic is what separates developers from users
Then everyone is a developer, as everyone can write a series of instructions which abstract action from context. It's just that the computer is very limited and you need to be very precise. But what it can do, it can do very fast. What is instant for us is an eternity in CPU time, and you can do a lot with simple operations if you can do a great number of them.
> Just like a developer's brain can think about a problem and come up with a solution using code, o3 generates a program (i.e., metadata) on the fly to solve the problem
Again, that's not happening. You think about a problem and solve it by constraining yourself to the basic operations you have available (the APIs, languages, libraries, platforms). The code is just the written form of that solution. You can't invent operations that don't exist; you can only group them under a new name. Each set of names is another layer of abstraction that makes it easier to write the code. But the constraints don't go away.
The code is not how you create a solution. It's how you instruct the machine to replicate the solution you came up with.
I disagree with everything you commented.
As a self-taught developer I absolutely promise you don't need to go to any school to learn the requisite skills to achieve a variety of complex things. You absolutely can learn from both trial and observation.
Yes, everyone actually is potentially a developer, because there are no baselines to the contrary. I have met many developers who should never have been there professionally. Not everybody is capable of actual engineering though, as engineers follow processes and measure things. So many people writing code for a living absolutely cannot measure things.
I think your fundamental problem is loss of empathy. Perhaps you understand your own limitations but do not seem to understand other people learn differently than you do or may have wildly different experiences than you have.
I interact daily with people who did not have the chance to go past primary school or to go to school at all. And they did have to learn skills through alternate means. I myself had to learn computing and programming with little to no electricity at home, and only a few people that owned computers (which I had to borrow).
I do agree that you can learn with trial and observations, but it's way faster to have someone transmit the knowledge either in person or in books. And once learned, it is learned, unless you don't use that piece of knowledge for a long time and forget it. And it's exponential: learning things enables you to learn more things. You don't have to restart from scratch for similar situations, just as you don't have to learn letters again to learn a new Latin language.
> Not everybody is capable of actual engineering though, as engineers follow processes and measure things
They are capable of it. It's just that most people don't care about that if there is another way to get what they need (including getting other people to do it).
> I do agree that you can learn with trial and observations, but it's way faster to have someone transmit the knowledge either in person or in books.
This entirely depends on the teacher or the book author.
The "do not try anything new, but repeat after me" is a popular method of teaching things when we talk about skills, but often the next step is not just pursued but entirely discouraged.
There is a particular point where a teacher says "some of you might fail, but it is a sacrifice I am willing to make" and asks people to do something entirely new by themselves.
That is the cross-over point where being self-taught has some advantages, because you have external guidance but not instruction.
If you're the kind of person who has failed a lot learning basics, then those new failures are less of a knock on your ego than if you had coasted through the early years without learning the parts which come after a failure.
Maybe if we all had a clear memory of "learning to walk", it would help - six+ months of falling down and not giving up, which most of us have done once.
> I absolutely promise you don't need to go to any school to learn the requisite skills to achieve a variety of complex things
> Not everybody is capable of actual engineering though, as engineers follow processes and measure things
So how is it that anyone can self-learn anything, yet not everyone can follow processes and measure things? You just have different pet dogmas than the other guy. At least he didn't accuse you of lacking empathy to feel better.
> I absolutely promise you don't need to go to any school to learn the requisite skills
It will be borderline impossible for anyone to learn to read/write without either learning it at school or being taught by someone.
Likewise for skills for which the information isn't easily available e.g. how to build a fission reactor.
Wouldn’t the end game here be that the AI just writes straight machine code? It’s not as if some devs are safe while others aren’t; those who maintain programming languages would be out of a job too. I think an AI would find it awfully silly to have to code up a language, maintain it, then turn around and use that language to code with. Bare-metal AI seems like the end game to me.
Code is written to be auditable by humans; if you just generate machine code, how will you know the level of correctness? Would you save the prompts, so that I then have to review the prompts before merge? That would end up being much wordier than a code file.
Humans write the tests. AI writes the implementation. Just write good tests: property based. Or actual formal specification...
It's much quicker to write code than a formal spec for that code.
Yes. Because at the end, the code should be the spec or the final source of truth for it.
I don't think that makes sense in general, but does remind me of an idea I had where it does make sense: 8-bit program synthesis.
Your dataset is all of the 6502 binaries, corresponding manuals, and descriptions of gameplay generated by multimodal AI looking at videos, screenshots, etc. Maybe even screenshots and memory captures of games could be input along with sequences of key or controller inputs.
But if you feed in a large chunk of the Z80 and 6502 programs and as much metadata as you can, and for a training signal make sure that the thing runs without freezing on an emulator and maybe shows the correct loading screen or something... I think there are enough 8-bit programs and games for this to work, at least to some degree. Maybe. Start with shorter, simpler programs and work up. Generating BASIC might be easier in a way, I don't know.
But the idea is you would describe your game along with maybe a screenshot or cover art or something and it would generate the machine code.
You could also do something similar but on a frame-by-frame input-by-input basis like Oasis.
Anyone in this thread want to fund that? I would love to work on it.
We’ve lots of programming languages; most businesses need only an OS (Linux, free), an application language (Python, Java, C*), a database language (SQL), and a front-end language (JS + HTML). No need to go to bare metal in most cases, as generative AI is already good at these.
The mostly unsolved problems right now are around build, deployment, and operations.
No, I don't think so. Both humans and AI benefit from building on top of existing abstractions: languages, frameworks and libraries.
That sounds like a cybersec nightmare
Why do developers continue to work on AI? Sooner or later (probably sooner) it will be adequate enough to remove the need for many or most developers.
I listened to this week’s All In podcast and the podcasters (all wildly successful tech entrepreneurs) were quite emphatic that they were seeking a world where they were not beholden to developers. They envision a world where non- or semi-technical employees create the desired software via AI; other AI agents would then figure out how to deploy it.
I feel like we are at a similar point to many other professions in the past facing a rapid technological change. But in the case of software development, developers themselves are hastening their own demise.
Just something to think on.
I am a lifelong (40+ years of daily coding so far) programmer and it brought me good things. I actually think I am working on my last project, which indeed is to replace programmers. Not all, just for the things we as a company do. We are very far with it. My biggest, existential, revelation is that my goals and successful products have always been around replacing programmers. My first sale of a product company (which made me rich enough to retire, which I did not) was a nocode tool I built in the late 90s. As that has been my work and hobby ever since the early 80s, I cannot imagine what I would build if, indeed, I would no longer need to program but could just say what I want. I want nothing but to program.
The people directly responsible for advancing the latest AI tech are both small in number relative to the entire population of devs and they also have a chance of building generational wealth as a result of it. The rest of us are trying to keep up with AI because it’s both interesting and because we don’t have a choice.
You have a choice. What is preventing choice?
All In hosts are probably more accurately described these days as "extremely wealthy tech investors" and in that context AI hype about eliminating those software devs with their pesky salaries makes quite a bit more sense. If nothing else they'd be extremely out of step with the rest of the market saying anything different and what would they gain by rocking the boat?
I find it fascinating how people that successful don't take that thought process one step further.
If they use AI to build an app by just telling it what to do, then everyone on the planet can look at how their app works and describe it to their own AI, thus making a clone.
Even Eric Schmidt said at Stanford that we would be able to use AI to copy TikTok and steal all of its content and users in the next few years. Interestingly, though, he didn't even think to mention YouTube...
I wouldn't call o3 a reasoning AI yet. It gets better with its abstraction abilities, but that's still a very far step from reasoning and logic. Something like 10% of the way.
The interesting development of course being the approaching Age of reasonable software made by mere mortals and AI
These articles are like web3/crypto spam.
Praying for the day we can move past talking about how cool autocomplete tricks are.