I do have an actual diagnosis, and I had the same experience over the past year: early coding harnesses at the beginning of the year, then Claude Code since its release. But after 1+ years going in that direction, I really don’t want to continue. The novelty is gone, dealing with AI now feels frustrating and boring, I miss engaging deeply with the actual lower level technical challenges. I do not want to manage fleets of agents. I do not want to rediscover for the hundredth time that, in fact, all this time an agent took shortcuts on acceptance tests I rely upon and I didn’t catch it. Or once again have to get the agent to understand why and what I want it to do after its context got bloated and it starts to drift completely. While I got artifacts I can use (libraries, tools, docs), including some things that I’m pretty confident are SoA, I do not feel satisfied anymore knowing that I used a model to generate them, even if I was the one designing every part. I do feel that I’m lying any time I come to a colleague to share a new cool tool I have made. And I do not feel that relying on AI actually helped me improve at dealing with my executive function issues.
YMMV, but I’m personally feeling burnt out on AI coding agents and ready to go back to the old ways for my next personal project.
Almost a decade ago, I moved my career into the management track. I am a director by now and have two more management levels between myself and individual contributors.
I can strongly relate to what you're writing, because I often share that same sentiment in my daily (non-AI) work. In fact, coming from that background, the switch from coding to working with agents feels eerily similar to moving into management. You encounter the same challenges minus the "human people and emotions" part: having to explain things properly, the agents doing something different than what you intended, feeling detached from the actual work, only focusing on the bigger picture, and so on.
To me it feels very natural; it is what I do every day. But then again, I made that choice and it wasn't forced on me. So I understand the frustration.
I feel lucky to have been promoted to a management position recently, just as I was starting to feel less excited about dev work because of AI. I still enjoy building systems, but I have to admit that the loss of challenge made the work much less enjoyable for me.
Now I have a team of interns to mentor. They're sharp and use AI constantly, so my guidance is less about code and more about UI/UX, understanding what the client actually wants, good work practices, well-documented tickets, thorough reviews, and so on. Thankfully, I like this work; it has been very rewarding.
I have done a lot of introspection on this and realized that I'm very much driven by intrinsic rewards more so than extrinsic ones.
I got into coding over a decade before it was my career because of the exploration, learning, and puzzle/challenge aspect.
Every time I have tried to be extrinsically driven (career or OSS wise) it's never worked out anyway. I could have done more to make it successful but I never cared about getting validation or getting users for my stuff (and the stress that brings).
I've been lucky that up until this point, the intrinsic rewards I have gotten from my job have aligned with company goals.
LLMs take all the intrinsic wins and leave only the extrinsic ones. That makes me sad, but it is what it is, I guess.
I had been thinking about a tool for months but didn't have the time. I finally gave in and built it at work in a week with LLM tokens. It worked fantastically. But I felt no accomplishment. It felt just the same as if I had downloaded the tool from someone else's repo (one with an overly eager maintainer who would implement my GitHub issue requests).
The hard part for me is ignoring LLMs in my free time to try and keep some of the intrinsic rewards to myself, without being annoyed that I could do it faster if I just "gave in".
I have found the opposite to be true. I really like getting stuff done for people, and I struggled for years with all of the specific syntax and details of solving any particular problem. I have relatively in-depth knowledge of computers, how they work, algorithms, and the like, but I always struggled with the exact details of how to do something, so it feels like a blessing to be able to spitball some conceptual understanding and get back real code. I always struggled with making my ideas real before the novelty of the inspiration wore off, unless I happened to get hyper-focused on solving a particular problem.
Now I can step through everything in a way that feels like a superpower. I have enough sense and knowledge to intuit, I think, whether the solution being provided is bloated or perhaps even unnecessary, and I can iterate on it. I've just been using Cursor for work, as I adopted a personal restriction to only use AI I can run on my own devices for personal use; but if I'm getting paid and the tools are provided, I'm going to do my best to solve the problems I'm confronted with, and so far the LLM-connected IDE has been helpful.
In my experience it's best when I use it as a tool to augment troubleshooting and brainstorming, but when you're fixing one-liner bugs in other people's code, me typing the fix isn't very different from a machine auto-completing it.
It might feel like cheating on a crossword puzzle but that is also something I do if I get stuck and the fun of solving the problem has become a time sink.
I think the real risk is if you no longer understand conceptually what you are committing. I've tried to make sure that I always understand what the code does and how it works, and also to understand the pitfalls of proposing a bullshit hypothesis that the agreeability of the LLM will go along with.
I've yet to seriously use an LLM for a personal project. When I tried Devstral running on my Nvidia 4090, it hallucinated so much that it wasn't super helpful, but it still shot out boilerplate code that I could then spend time fixing, and it helped me overcome my own task paralysis around getting started.
Yeah, and that's totally fair! We are all motivated by different things, and being extrinsically motivated isn't a bad thing at all.
But being more interested in the problems than in the solutions (and not wanting to "productize" the solutions) is why LLMs are demotivating for me.
> But I felt no accomplishment. It felt just the same as if I had downloaded the tool from someone else's repo (one with an overly eager maintainer who would implement my GitHub issue requests).
I get that. I recently watched a "talking head" style video by javidx9, where he said something along the lines of disconnecting from the code emotionally [0]. He has to get into the code to understand it. I get the same feeling; however, for me, it feeds my curiosity and my need for exploration. At least for now, I might add.
[0]: https://youtu.be/1qjn1QRxlng?si=_75-J51UnZ0eJyb7&t=705
That’s exactly it! There is no feeling of accomplishment whatsoever, because we aren’t really accomplishing anything. The LLM is doing all the work. Out pops an application, but it might as well have been written by someone else, because it was, but also it wasn’t!
It’s great that an application now exists where there wasn’t one before, but it’s hollow because I didn’t make it. Nobody made it! It just exists now with nothing actually accomplished by anyone. It’s a very spooky way to conjure things up.
> I have done a lot of introspection on this and realized that I'm very much driven by intrinsic rewards more so than extrinsic ones.
> I got into coding over a decade before it was my career because of the exploration, learning, and puzzle/challenge aspect.
[...]
> LLMs take all the intrinsic wins and leave only the extrinsic ones.
I'm not sure I understand this. For me, programming was at first a tool to satisfy my curiosity. When I first started coding I knew nothing about software patterns, how I should name my variables, function length, DOD vs OOP, functional vs imperative, the single responsibility principle, and on and on.
I wrote a mess of a program and got it to do very cool things (for me). I loved it.
Then I was taught more, got my first jobs, and learned why programming large systems needs standards, patterns, etc. I became good at that and have had a long, lucrative career out of it.
But I cannot wait for the day when I no longer need to earn money from programming and I can go back to using it just to do "cool shit". At that point, whether I am hacking and slashing myself, or working with an LLM to do something, I don't care. It is the intrinsic goal of solving a puzzle and programming just happens to be the tool I use.
Thinking more deeply about your words: is it that you enjoy figuring out the instructions to use to solve a problem? In other words, figuring out the algorithm and writing out the code to create something? Do you feel that if you just tell the LLM what you want to create and it does it, you've lost the enjoyment?
> Thinking more deeply about your words: is it that you enjoy figuring out the instructions to use to solve a problem? In other words, figuring out the algorithm and writing out the code to create something? Do you feel that if you just tell the LLM what you want to create and it does it, you've lost the enjoyment?
So there's a lot of nuance to the "is it that you enjoy figuring out the instructions to use to solve a problem".
On the surface, I don't enjoy typing. I don't enjoy fighting syntax checkers, Rust's borrow checker, or manual memory management in my personal C projects, or typing out the HDL for nand2tetris problems, etc.
However, there have been studies done decades before this LLM boom about the psychological concept called the Generation Effect. While everyone is different and it's not completely black and white, the studies have found that people learn more by actual practice (the act of doing) than just by reading material. That's 100% the case for me.
I can read blogs and resources till the cows come home and I'll have a very surface-level understanding of a concept. Then I'll go to write the code to implement it, and it rarely works right away, because there are demonstrable gaps in my understanding. I'll debug it and iterate on it until it works, and that is what actually solidifies the mental model of what I was trying to learn. Not only do I remember it better; it also seems to form connections in my brain that allow me to apply it in other use cases, or to spin off fascinating technical tangents.
I not only get my high from that initial "Aha!" moment, when I really feel like I understand a concept well enough to actually apply it in other scenarios, but also from the tangents that spawn off of that concept.
In many cases, I can trace a direct line from my personal projects back to the root projects that spawned them, through ideas I came up with while actually implementing things. When I tried really hard to optimize a C# game engine for an embedded platform, I saw where the limitations were, and it solidified my knowledge of how old game consoles worked.
That led me to the idea of creating a GPU out of an embedded device that I could pair with I/O-constrained embedded devices. It taught me soooo much about the embedded space, and while it heavily improved my C writing abilities, it also made me wish I could write C# on embedded.
Since I had learned C for the embedded project (and I knew MSIL from previous deep dives), I realized I could just translate MSIL into C, and that would let me run C# anywhere (I got C# working on an SNES, in the Linux kernel, and on an ESP32S3).
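To give a flavor of the core trick (a toy sketch, nothing like my real implementation; the opcode names are real MSIL, everything else here is made up): MSIL is a stack machine, so each opcode lowers mechanically to a C statement over an explicit operand stack.

    # Toy sketch: lower a stack-based MSIL method body to C.
    OPS = {
        "ldarg.0": "stack[sp++] = arg0;",
        "ldarg.1": "stack[sp++] = arg1;",
        "add":     "sp--; stack[sp-1] += stack[sp];",
        "ret":     "return stack[--sp];",
    }

    def msil_to_c(name, il):
        lines = [f"int32_t {name}(int32_t arg0, int32_t arg1) {{",
                 "    int32_t stack[8]; int sp = 0;"]
        lines += [f"    {OPS[op]}  /* {op} */" for op in il]
        return "\n".join(lines) + "\n}"

    # IL of `static int Add(int a, int b) => a + b;`
    print(msil_to_c("Add", ["ldarg.0", "ldarg.1", "add", "ret"]))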
Implementing that by hand, and coming face to face with the many small decisions I had to make, solidified a bunch of concepts in my head around intermediate representations and why they are a massive benefit. Those aha moments (among others) then led me down the path of implementing a just-in-time compilation engine for NES games and the C64 OS on top of the .NET runtime.
The learnings from that have already spawned some other ideas in my mind, which is why I'm now learning Verilog and FPGA development.
None of these projects solved any useful problem (as in, nothing was created that I or anyone else would use). The satisfaction and the high I got from them came from being curious about a problem, having ideas for a solution, persevering through it (partially out of stubbornness), and actually accomplishing it. The satisfaction that I actually understand the concepts at a foundational level ends up breeding excitement for a whole other tangent/problem.
These learnings have indirectly helped me in my day job as well. While I'm not working on anything that sophisticated or cool there, all of these hands-on implementations have taught me lessons I've been able to use to create better software in other domains.
So it's not the actual typing I enjoy, but the whole picture of what comes out at the end through that typing. LLMs take most of that away. They let me ideate on a vague solution and then go ahead and implement it for me. Even if I'm specific about the details of the algorithm, they subtly fill in the blanks and the missing pieces that I haven't cemented in my brain yet, making me miss out on the opportunity to do so.
And it steals the accomplishment of the final thing existing. I don't feel any accomplishment by typing "I need a C# to C transpiler" into Google and just downloading one. That's what LLMs feel like, even when I'm trying to steer them at a lower architectural level. I don't have the aha moments, I don't have the learnings, and I'm disconnected from the code.
Thus it feels like it's stealing all the intrinsic rewards from me, only leaving the extrinsic ones. And those are not rewards I am particularly motivated by.
Agentic harnesses go in the exact opposite direction of what I'd want to get from LLMs. I don't want another black box to (poorly) work on a black box for me; I want to be better at reaching into and understanding boxes that I already have in front of me. I don't want tools that autocompact contexts and store generated memories to facilitate long runs I have barely any control over; I want tools that allow me to painlessly craft a more relevant context for short ones. I don't want agents to author commits; I want them to use Git (or other tools) to get the information that I'm looking for when it's tedious to do it myself. I don't need them to do the fun and beneficial parts of the job for me; I want them to do the boring parts that I already know how to do, which block me from proceeding because my brain just isn't interested. Some of those things you can script yourself relatively easily, but the current tooling for LLM coding is absolutely atrocious and disconnected from programmers' needs.
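As a concrete example of the kind of short-run helper I mean, a minimal sketch (the file path and commit count are made up): pull targeted Git context for one file so I can paste it into a prompt I craft myself.

    # Sketch: collect focused git context for one file, to paste into
    # a short, hand-crafted prompt (instead of letting an agent roam).
    import subprocess

    def git(*args):
        return subprocess.run(("git",) + args, capture_output=True,
                              text=True, check=True).stdout

    def context_for(path, n_commits=5):
        log = git("log", f"-{n_commits}", "--oneline", "--follow", "--", path)
        diff = git("diff", "HEAD~1", "HEAD", "--", path)
        return (f"## Recent commits touching {path}\n{log}\n"
                f"## Last change\n{diff}")

    print(context_for("src/main.py"))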
The main output of my work is gaining a better mental model of systems I work with. That's what lets me grow and that's what makes people want to pay me rather than someone else to work on these things. Anything else, including the produced code itself, is secondary to that. In general I find it pretty hard, although not impossible, to use LLMs in a way that doesn't diminish my output, especially with this tooling that seems explicitly designed to make it hard. After all, reviewing things is so much harder than writing them yourself, and you can't feel accomplished by something you haven't done.
> I do not want to rediscover for the hundredth time that, in fact, all this time an agent took shortcuts on acceptance tests I rely upon and I didn’t catch it. Or once again have to get the agent to understand why and what I want it to do after its context got bloated and it starts to drift completely.
100% agree, neither do I, but I see this as an opportunity to ask: "how can we gain trust in the outputs AI produces for us?"
Is it about tests, reviews, some methodology? Better observability? Formal specification? It's really interesting to think how you can relieve this pain. I think the answer to this question will show the path ahead for agentic coding.
I have never jumped on the train, but I am writing a project that uses v4l2 or libcamera. I have been experimenting with both and have spent 4 hours reading Linux kernel docs and libcamera docs without writing any code. I'm okay with that, and the project has still moved ahead, even though I have only written v4l2 sample code.
>The novelty is gone, dealing with AI now feels frustrating and boring, I miss engaging deeply with the actual lower level technical challenges.
Honestly I've had the opposite experience.
If I can leave the boring crap to the LLMs, I can focus more on the deep, important bits. The bits where LLM accuracy is spotty because there are a ton of moving pieces and the "how/what" of the code becomes crucial for auditability and debuggability. The code that I've written bugs in, that Opus has written bugs in, and where the design around it, to make those bugs less catastrophic when they happen, is often system-specific and unique.
If I can spend 5 minutes delegating all the tedious plumbing updates around it, then I have more time to put towards the core.
The system design challenge becomes making sure that they are well separated.
Managing fleets of agents hasn't entered the picture, because the needle-moving things there tend to be successive and cumulative, not easily parallelizable. (I believe this is true on the product side as well: 10 crappy MVP features in a week would be way less interesting to me as a user than 1 new feature released in a 3x-more-fleshed-out way than it would've been three years ago.)
I wonder if this is a new thing or if it is a repeat of the past.
Like ...
When I was young, I wrote this REALLY tight assembly code - loops that were measurably better than C or other high-level languages.
Then obviously assembly was minimized, then forgotten.
Then years later, I found I was happy using even interpreted languages, not even using a compiler.
When I first used perl and had a data structure that wasn't as useful for the final output, in one line of code I switched to a different data structure and sorted the output exactly like I wanted. That would have been too much effort in C, and very much so in assembly language. But I got what I really wanted.
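Something like this, in modern terms (Python rather than perl, purely for illustration; the data is made up):

    # Reshape one structure into another and sort the output, in a line:
    counts = {"foo": 3, "bar": 7, "baz": 1}
    print("\n".join(f"{k}: {v}" for k, v in sorted(counts.items(), key=lambda kv: -kv[1])))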
Is AI a repeat of this? Instead of assembly language, instead of C, instead of Python, do we become high-level-English-language tech folks? Will AI just let us hand off our code and physical design to a fab, and will it make us happier?
I also wonder whether SoA to you means how it behaves or how it is built, and whether it matters once you stop looking at the code, just like I stopped comparing the code the C compiler generated to the assembly language I wrote. And what about years later, with -O3? Will AI have a -O3?
I feel like this is where it's going -- it's not where we are, the tools are not reliable enough that it makes sense to step back quite this far, but it feels like where we are going to arrive really soon.
If you look at agile processes, one of the biggest criticisms is that there's always a magic "customer" role that needs to prioritize existing work, do acceptance testing for completed tasks, and give requirements deep enough to create real specifications. This often requires a lot of attention to detail and very fine-grained judgment, typically lacking in those who are eager to hold the job title of "customer".
And now if you look at dark software factories, these pieces are also basically everything they're missing. The person or people responsible for this role were never seen as engineers/programmers in those processes, but I think that's where most SWEs will end up, because as these tools mature to the point where they manage the code all on their own, that's what's going to be left for the SWE in the chair.
The SWE of course won't be the actual customer/stakeholder; they'll be the proxy, the one who has to navigate meetings in meatspace and make soothing noises to the actual customers. Will they be happy doing this? That's a big group of "they", so some will, sure. But I think a lot of people who got into this career consider this the worst part of it, and it's now going to be the whole job.
> I really don’t want to continue. The novelty is gone, dealing with AI now feels frustrating and boring, I miss engaging deeply with the actual lower level technical challenges. I do not want to manage fleets of agents.
I've tried to stay away for a variety of reasons (not approving of the way the tech was developed, hoovering up everyone's data for commercial gain, high amongst them), but the company I'm now part of (due to them buying us) is drinking deep from the GenAI water fountain, so I will very soon have no choice but to engage or be pushed out¹. I get it, I see the benefits, but it feels like turning into a manager (for GenAI agents rather than people, but still…), which is something I've always avoided because I want to tinker. I got into programming and database work because I like to play with the nitty-gritty details, and I'm going to have to let that go.
To be frank, there is a sizable part of me that has wanted to be out of tech for a while² for various reasons³, and that part of me would prefer to go wait tables if that is what it takes to escape! Maybe then I can reclaim tinkering as a hobby.
--------
[1] Redundancy would be nice; with 26 years of service the statutory minimums would be more than enough to tide me over for a while, but I expect they'd not do that. I'd instead be put on a PIP for not performing (assuming they can make a case that not engaging with GenAI makes me less efficient), and if I still don't play ball, that'll be grounds for dismissal.
[2] Or at least take a fairly long sabbatical.
[3] Not liking remote teams being a significant one, and even though I go into the office⁴ I'm still remote because most of everyone else is.
AI basically killed my joy in programming. I've been working as a SWE in big tech for 8 years. I like learning stuff, actually coding things with my hands, gathering information and understanding before implementing, polishing my code. That's all gone.
Now I am just a monkey that:
1) adds enough context, description, and harness to an agent
2) reviews output and repeats 1) if context is lacking
It used to be bottom to top: from understanding to implementation. You were the owner. Now it is top to bottom: get the implementation first, try to get understanding later. Thinking is also delegated. "Think" nowadays means "reformulate, answer questions, add context, try again". This doesn't feel like I am doing the work; it feels like I am the limiting factor here.
Another side effect is that any code now has zero value. No one evaluates how you guided the agent or what decisions you took. People see your work and think "yeah, I could vibe code that too with enough time", even if that is not true.
And my work isn't CSS and HTML (with all respect). It is mostly high-performance clusters, parallel computing, OS and low-level work, SOTA online LLM inference, etc.
Now I am seriously considering a blue-collar job, as I get more joy from building stuff with my hands than from being a passenger/context generator for an AI.
I am not a business-driven person; I don't really care how much money my company earns (sorry). I just like to solve technical puzzles and think hard.
P.S. Yes, there are corner cases AI can't do well: non-trivial, highly specific algorithms and implementations; complex patches to gigantic multi-domain proprietary code bases. But that's like 5% of my work.
The addiction part, the ADHD part and the pending test part.
The fear of becoming addicted to AI is real, and I don't think I'll be capable of stopping it, considering we're asking people who struggle with avoiding quick dopamine to use it professionally in their daily work life.
My plan went from Pro to Max(5) to Max(20) pretty quickly, and I was still burning through that weekly limit, without large agentic workflows that burn tokens. Just me and 4-5 terminals. Sometimes I was happy to hit the limit, because I was forced back to normal life.
I've gone back to Pro to stop what was happening.
Now I'm self-aware enough to notice the trend and put up safeguards, but that's because I've always had to adapt my environment to control my behaviour, since I know direct behaviour control is abnormally challenging for me. I fear for those who won't see it coming until they're in deep.
I find that the new "drug" is constantly hunting down new, cheaper models: z.ai/glm, Mistral, DeepSeek... if you need to get your fix, find the cheaper path.
> [...] considering we're asking people who struggle with avoiding quick dopamine to use it professionally in their daily work life.
It's so wild that it never dawned on me why some people around me were so quick with "Let AI do that!". I'm not saying that each and every one of them has ADHD, but I think I underestimated a) the flow of dopamine a successful prompt can set free and b) the craving for it by folks I deemed more stable than myself.
As someone with ADHD, it's really a problem. I have so many random documents of random outputs from prompts I didn't track. It's honestly accelerated some of my worst habits, because it feels like I actually completed a task. The reality is I just have folders of half-finished projects, which anyone with ADHD can relate to.
I feel kind of lucky, in a way, that I hate working with AI so much. I'd rather hammer nails through my fingers than spend my time prompting.
So my ADHD isn't being satisfied by those little dopamine hits from LLMs. Any time I'm forced to use them, I'm mad about it and can't wait to be done with it.
I still have that folder of half-finished things just like you, though. It's just not AI-generated.
I thought about it a little more deeply, and I think software development has always had an addictive tendency. The hunt for the solution to a problem gives a rush when you complete it.
It's just that the rush is more frequent now, and addiction intensity scales with dose and frequency.
Instead of jumping from project to project, I focus on one (maybe a few) and let myself free while agents spew out their output.
Something physical is excellent for me: minor wood carving, origami, drawing exercises, also light physical exercises.
My trick is to (try to) do something that requires high focus, on unrelated matters.
To give a practical example: the simple gesture of connecting 2 points on a sheet of paper with a direct, non-trembling line requires high focus. If you try to do it sloppily, the line comes out too long, too short, etc. I need to shadow the moment, gain focus, draw the line.
It keeps my brain in focus, busy and engaged.
Videos, podcasts, and in general anything digital seems to distract me and/or overload me.
Also, I am back to using the Pomodoro Technique more frequently.
Just some pointers, in case you want to try them out, or suggest some you find effective yourself.
The counterweight has been that, after using it for a bunch of projects, I have internalized that it will very, very quickly get me to maybe 60%, and then I'll have to take it the rest of the way mostly by myself (or handhold it tightly through the remaining 40% at a much slower pace).
In other words, the initial implementation is practically already there, already done. So there's no rush left in generating it; it's only worth bothering if I'm prepared to see it through to 100%.
When it is worth pushing through to 100%, it's pretty great for getting the inertia going though.
I can relate to this. Last October, I had a real epiphany using Claude Code at work. Suddenly, the initial inertia of starting something (whether drafting a JIRA ticket, structuring a PR, or just brainstorming) completely vanished.
I started using Claude exclusively in plan mode, and within minutes I'd have full clarity on exactly what I wanted to do and how to do it. With the release of the Opus model, I felt 100% more productive, because I stopped spending time on menial tasks like manual coding or documentation. Instead, I shifted my focus to architecting, problem solving, and reviewing code to make it perfect. I even wrote two PyCharm plugins to unify my workflow (one to manage Claude Code sessions as a first-class citizen and another to render Markdown in a less eye-straining way) so I don't have to leave the IDE.
However, the novelty is starting to wear off. Six months ago, I would have truly admired how efficient and productive the current version of myself has become, but now I just take it for granted. It has become the new normal, and I’m finding myself bored and stuck in a vicious cycle of constantly needing to reach the next level.
"shifted my focus to architecting, problem solving, and reviewing code to make it perfect" aka write couple more prompts and combine results. Pretty exciting
In a paradoxical way, the amount of stuff you can get done in an hour now is like a firehose -- which we rarely experienced in our earlier life -- which can be overwhelming to my brain. So I subconsciously resist starting a session because I never feel fully rested, calm, and focussed to take all that and process it well.
There are also 10x more "active" projects now -- and prioritization and choosing between them at every moment is still a struggle. The tempation to do the fun and novel thing and avoid important but familar boring chores pops up every step of the way and can derail you for days.
I am still trying to create a system that works -- now using the very tools. Long journey ahead.
EDIT: My experience --
I was paying for both Claude Code and ChatGPT Pro, but was heavily, almost exclusively, using CC for coding work because it was so good. After CC started hammering the session and weekly quotas lately, I tentatively started using Codex and find that it seems equally good and almost indistinguishable for my work, and it occasionally shines by one-shotting some tasks. This has helped me stay afloat with just a 2x$20 spend per month without feeling held up for ransom. Also, I have never hit Codex limits so far.
Leaving a 5-hour session quota unused towards the end, or worse, not even starting a 5-hour session clock, was a source of constant anxiety: that I was wasting precious quota getting nothing done. I think I am getting over that now.
I've been using Augment's agents (VS Code, CLI) for 8-ish months. It lets me easily switch between GPT and Claude models.
I've found the best results come from letting GPT 5.4 code and then asking Opus to write code reviews to a file. I do the review in a different agent session so it's "fresh". Then I review the file, edit until I agree with everything, and let the existing GPT agent session address the items in the review file. I've found Claude agents don't perform as well for me in coding, for whatever reason; they feel more sloppy.
I've also been doing a very organic spec-driven development process, where I have an md file for each non-trivial project update and use that to define the task and address questions or problems the agent has.
I've also found I can give agents conditional instructions, which they will usually use like skills. This gives me a way to easily distribute my instructions to any agent/model on any machine, with a single AGENTS.md as the entry point.
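Roughly along these lines (a made-up sketch; the real conditions and file names are project-specific):

    # AGENTS.md

    ## Conditional instructions

    - When a task touches database code: read docs/db-conventions.md
      first and follow its migration checklist.
    - When asked for a code review: write findings to review.md, one
      item per bullet with file:line references; do not edit code.
    - When a task has a spec file under specs/: treat the spec as the
      source of truth and record open questions there before coding.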
Addressing the end of the article, I think that we are all very much still learning how to use AI responsibly. It's like we just discovered alcohol and we're going on a rager every night because we don't know any better yet.
It's too easy to buy €100 of Claude tokens and burn through them to make those dream projects appear as if by magic. There's a middle ground where, for example, instead of building a whole project it could produce a project template and provide guidance as you build. That should take the edge off the task paralysis and hopefully disrupt the addiction loop.
That's how I use LLMs for programming. I predominantly use the chatbots instead of the CLI tools. Every so often I'll ask for a one-shot of some MVP, but then I take that MVP and make all the changes myself. That said, I rarely do the one-shot-and-edit style of development; I find that such a process can save time, but not always.
So the end game for the current generation of AI companies won't be productivity improvements but gambling, just like everything else nowadays. That's why they want to get us all into these massive casinos they call data centers and don't want us to own the slot machines.
So what if you have ideas? Other people have them too. It's not ideas that build businesses, but knowing the right people or the ability to sell products.
The gambling trope is so tired. AI development doesn't involve luck to any appreciable degree, certainly not more than hiring people to do a job can be considered "gambling" (you never know what you're going to get!).
It's just paying to get stuff done, which is how it's always been, since the dawn of man.
The gambling part is because of the (hopefully emergent and not purposefully designed) intermittent reinforcement due to the limits. You don't get that with regular hires.
You usually don't get immediate responses from hires, which means delayed gratification and avoids much of the potential dopaminergic effect you get when engaging with LLMs.
You can keep overextending the hiring analogy all you want, but it is simply not the same.
For most people, who are not using it for their day-to-day jobs, it's just a rough prompt of their idea, and then a miracle happens: the LLM fills in the blanks. Every time it's different, but it works, sometimes even better than initially expected. Hence the addiction and the gambling. Gambling is a lot of things, not only flashing lights and play sounds. Some people claim prediction markets aren't gambling either, though that doesn't change the fact.
How is this different from hiring a designer, telling them "make me a website" and then waiting to see if they resolve the uncertainty into something you like or not?
I tell LLMs what to do in pretty high detail, and they do it. With LLMs I have much less variance than with coworkers.
It is different because it takes humans time to produce a result, while AI does it almost instantly. So if you tell a programmer to do X, you have a week for your adrenaline to cool off. If you tell an AI, it will do it in minutes.
> If you're making the argument that LLMs are gambling simply because they're faster than humans
No, I am not. It's more addictive because of the timescale. The comparison of AI to gambling is through the addiction mechanism, as I explain elsewhere.
My aunt used to put in (the same) lottery numbers every week. It was gambling, but probably not an addiction in the clinical sense. If she had played slot machines, god forbid, it could have been more problematic. AI is a slot machine, a hire is a lottery ticket.
Then you miss the point. AI use is being compared to gambling because it is addictive, partly due to the same mechanism: the results (and rewards) are somewhat random, but it makes you feel as if you're completely in control of the outcome.
It's to the point that I just push the output of that to production and know it'll be OK, except for very large changes where I'm unlikely to have specified everything at the required level of detail. Even then, things won't so much be wrong, as they'll just not be how I want them.
I'd observe that there are professional gamblers, and there are amateur gamblers.
If you know what you're doing, know how to spec a problem space, and can manage the tool competently enough to churn out good results, then everything's fine, and you're maybe being productive or increasing your productivity by some degree. (A professional "gambler".)
If you DON'T know what you're doing and you're just vibe-coding, then I would argue that it is at least a form of gambling. (An amateur "gambler".)
Both of these conditions can also be applied to "hiring people to do a job"; however, there we can also observe things like reputation, credentials, and so on.
"It's just paying to get stuff done..." is, with respect, superfluous.
I don't know, I can understand "some people might overdo it and get addicted to LLMs". I can't understand "LLMs are slot machines and that's all they're good for" when I use LLMs every day to do tons of actual work.
I don’t like the gambling comparison either. It’s more like smoking or drinking. It’s an addiction you lean on to help you do something- even if that something is just getting through the day.
Yeah but those are classified as addictions because they have a harm component (lung cancer, liver disease, societal impact). LLMs aren't going to kill you. If anything, it might be like gaming addiction.
If you've gotten to the point where you'd rather talk to an LLM than socialise, go to work, etc, then yes, you definitely have a problem, same as with a gaming addiction.
Saying "LLMs are slot machines" is like saying "video games are slot machines", and nobody says that, even though it's more true of video games (some are actual slot machines/gacha) than of LLMs.
> Saying "LLMs are slot machines" is like saying "video games are slot machines", and nobody says that, even though it's more true of video games (some are actual slot machines/gacha) than of LLMs.
People absolutely do say that video games are slot machines. [0][1]
>AI development doesn't involve luck to any appreciable degree
Reading this while I'm prompting for the third time to get a 100+ line function fixed is amusing, to say the least. I don't care about the definition of "appreciable", but I definitely have to repeat myself to get stuff done, sometimes even to undo things I never told it to touch.
Exact same kinds of projects with the exact same development environment, models, etc. Either he's never worked with a development team or he doesn't consider things outside his own perspective. shrug
Not in that sense, but social media companies already know the value of not giving a user exactly what they want. It keeps people on the platform longer and excites some lizard part of our brain that craves challenge.
Due to capitalism's law of all businesses converging on maximizing profit, it's just a matter of time until AI companies employ similar techniques with LLMs. We can all imagine what that will look like.
Some traits I recognized in many excellent coders I worked with (their drive toward optimization, intellectual thirst, critical and creative thinking) are attributes I consistently correlated with them being somewhere on a neurodivergence spectrum.
Being able to remove the "first step" block is great, but what worries me is that this is coupled with LLMs' sycophantic behaviours.
My gut feeling is that coupling the feeling of unblocked capabilities, and its dopamine hits, with constant praise of someone's abilities is an intro to psychosis and paranoia for them.
I'm wrestling with this right now. I only use LLMs for design and exploration, because I am not employed and can't pay for a subscription right now. They make the design phase feel like less of a fever dream, because checking my ideas doesn't involve hours of scanning search results online, trying to see how my ideas fit with what exists, or trying to evaluate whether my ideas even make sense. So I feel more encouraged to get started on working, but I often wonder if the responses are just sycophancy.
In one case recently, I explained a garbage collector design I had been toying with a while ago, but I couldn't find research related to my idea or really evaluate whether it would work. After enough arguing, the model finally "understood", started praising my "novelty", and when I later asked for research related to it, I was given a paper that had already implemented most of my idea.
It was a funny moment: seeing how it was clearly trained on too many online forum comments (simply mentioning reference counting sent it down a whole awkward line of false folklore about memory management), then switched to sycophancy, and finally showed me the paper.
When I don't have time, I just ask AI to summarize the main points and expand on the ones I like. I do this even with HN discussions: I copy the whole HN page, paste it into Claude, and ask it to summarize and deduplicate talking points.
Appreciate your nitpick. As I dislike recipes that introduce you to the fine art of wheat milling before getting to the recipe itself, I tried to keep that section short(-ish). I felt the need to provide some context and thoughts, that's why I included it. Not sure what I'll do next time: Either put the conclusion at the beginning and offer some more context and thoughts at the end (then you can drop out if you don't want it), or just leave it out completely. I'll reflect on that.
For me it's different. I am not diagnosed, but I think my executive function doesn't work right. It's really hard for me to start a new task, but when it is interesting enough I can hyper focus until it's done. In the past that often happened when I needed to implement something not too trivial. But now that AI does the implementation in minutes I need to switch tasks constantly and it is honestly super exhausting for me.
Sounds to me like what people are describing is dopamine: generating it and enjoying it. I am not educated about brain function, though.
Noticing novelty is beneficial in nature, as it surfaces opportunities to the conscious level. "Squirrel!", famously, from the movie "Up". It feels good to experience. Then again, creating one's own dopamine supply can drive behavior, and increasing the number of behaviors can exhaust one's energy supply along different human dimensions.
So managing this process and limiting the dopamine cycle also becomes worthwhile (potentially avoiding fatigue), while still not negating the attractiveness of the dopamine derivable from the endless opportunities of the world. <3
This resonates. The "idea to result" loop getting shorter with AI is genuinely addictive; I've noticed it in my own workflow too. But there's a flip side nobody talks about: once you get used to that speed, going back to manual implementation feels 10x worse than it did before. The paralysis doesn't go away, it just gets masked. The real question is whether AI is solving the problem or just compressing the dopamine cycle around it.
I really feel this. I find myself reaching for CC, typing half a prompt, and then realizing I could've done the task faster myself. But CC is exciting, and *feels* fast, so I keep reaching for it. Somehow it feels worse to just do the work.
In case anyone else is wondering if others feel this: yes, I can feel the risk of dopamine overshoot while using AI. As context, I've historically had ADHD that is crippling to a certain normal lifestyle, and I definitely feel the risk of mania or manic episodes when using these tools, in ways that I used to associate with the drug state of certain ADHD drugs.
Now I am recontextualizing those past experiences as the feeling of moving toward my goals at a speed I am not accustomed to, rather than exclusively a drug effect.
As someone with ADHD, it's a lot more nuanced than that. Coding agents can remove task paralysis, but they also introduce many other distractions. Being one prompt away from zero-to-one is a double-edged sword, because it means any random thought, idea, and side project is also just a prompt away.
I have a thought that AI could drive humanity to appreciate humans, as a side effect of its rise.
We're now bumping up against alternative nonhuman intelligences as we go about our lives. New neighbors, kind of.
And AI has its own idea of "living" in this world... as a servant to us, mainly.
So human life is changing: we now have the opportunity to relate to life (existentially) while we're being influenced by the valuable accompaniment of these new docile servants. We're able to "see our plantation and peacocks", if you will.
We experience our life-challenges differently... now being alive to see our daily labors accomplished by others, and able to reap the benefits: more dopamine, resources, whatever.
Our role is changing somewhat: becoming "wealthy" or "elevated".
I think this implicitly poses new questions, like: Do we like our new wealthy-in-productive-results selves? Is this a life worth living?
AI is a multiplier of both our expertise and our defects.
I have learned how to hide my stupidity from AI's all-seeing eye, and the result is the best I can expect from a tool that has helped me become 100x more productive. I can't be happier.
I’ve been using CC as my GTD buddy. All the usual plaintext files in a git repo, all the usual processes and workflows and constraints; but I’ve written two skills that have taken the activation energy out of what used to be the hard parts for me: /process-inbox and /weekly-review. Process-inbox interviews me item by item, making suggestions which I accept or amend, and it does the bookkeeping. I tell it when I want to do something and what calendar I want it on, and it makes the calendar event. Weekly-review walks through an overview of everything done that week and all my open tasks and projects, and makes sure everything has a scheduled next action. Sometimes I make a note, cancel something, reschedule something, whatever.
This is nothing I couldn’t do on my own, and in fact, it’s a lot slower than just manually editing files myself. But: this way it’s actually getting done :)
There’s too much hyperbole on this subject, so I won’t add to it; but it has solved a lot of very-long-running problems of mine.
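For anyone curious, the skills themselves are nothing exotic. A stripped-down sketch of roughly what /process-inbox looks like (wording and file names simplified, assuming Claude Code's SKILL.md format):

    ---
    name: process-inbox
    description: Interview me item by item through inbox.md, suggest a
      GTD disposition for each, and do the bookkeeping once I approve.
    ---

    For each item in inbox.md:
    1. Propose a disposition: next action, project, someday, or trash.
    2. Wait for me to accept or amend before touching any file.
    3. Move accepted items into the matching file (next-actions.md,
       projects/<name>.md, someday.md), create a calendar event when I
       ask for one, and remove the processed line from inbox.md.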
> What is it good for?
> For me, personally? It helps me overcome my task paralysis. As mentioned earlier: I have a plan. A strategy. An idea. I just need someone (or something), who has fun in churning through the implementation. I have the ideas. But boy is coding exhausting.
I find the same. AI helps me overcome any paralysis. I just think "hey it's cheap to write the prompt" and go on.
The best way I've found around this: I design and code the UI for a given feature by hand, and then let AI do the more tedious backend work that I don't wish to do by hand (HITL, human-in-the-loop).
It's wonderful if you do the things you enjoy by hand and delegate the "buhhh" stuff to AI. This approach also circumvents the need to review massive PRs (you're only ever concerned with the individual feature, not the whole farm).
Another way to put this is that focus is ultimately what matters, when it comes to actually getting stuff done. Choosing what not to do is often more important than what you actually do.
Since AI tools make it extremely easy to get started, it's really easy to begin half a dozen different projects, feel like you're being productive, but actually accomplish nothing.
This accurately described how I used to utilize AI – and my ChatGPT history is filled with all sorts of grandiose project plans. But lately I've been more and more narrow with what I actually prompt.
This leads me to think that a chatbox is not the best UI for using AI, as it's too open-ended and too prone to give you long, broad answers, rather than hyper-specific ones.
AI has replaced video games for me. And there are plenty of cheaper models that "do it" for me; I don't have to spend $$$$ just for entertainment. I will step up to the frontier for serious work, but if I'm just playing, I'm going for the free stuff on OpenRouter.
Also, AI art is fine. It looks better than me using Paint. That said, there are plenty of FOSS and public domain art pieces that you can leverage if all you really need is placeholders, and that is much cheaper.
One side note: it's funny how everyone suddenly "hates AI" while happily using it and profiting from it every day. We all want AI like Gyges wanted and used the ring in Plato's Republic.
- good for me in the short term (e.g., I can fulfill what my company asks from me)
- good for the company in the short term (see above)
- bad for me in the long term. E.g., I'm starting to become more and more replaceable at my job; I don't have the same depth of understanding of the systems we're building as I used to; my peers and I collaborate way less now (instead of talking to each other, we just ask Claude directly); and there's not much to be proud of in my day-to-day work (we're not building CRUDs, but we're not building Netflix either; it's something in between). The compounding effect worries me too: every shortcut I take today is a piece of context I'm not internalizing, a debugging instinct I'm not sharpening, a tradeoff I'm not learning to weigh. The skills that used to differentiate me are slowly atrophying. We're all individually more "productive" on paper, but collectively I think we're gonna end up with a codebase nobody fully understands and a team that barely knows each other.
- good for the company in the long term: they can fire me easily; they don't need 80% of us anymore. They can just pay Anthropic for the agents instead. They don't need people to maintain or read the codebase either: agents do that now. And executives never really cared about us in the first place, so that part hasn't changed, I guess. The math is simple from their side: headcount is the biggest line item, and agents don't ask for raises, don't burn out, don't go on leave, and don't push back when leadership makes a dumb call. We're the worst part of the business on a spreadsheet, and the tools to replace us are finally cheap enough that someone is gonna pull the trigger.
I'm not a superstar engineer. I know that. I'm probably in the 80% bag of engineers out there. Some of you may be in the top 20%, and you're probably gonna keep your jobs somehow (or not, who knows). But for the rest of us, I think we simply cannot compete anymore.
I regret every single time I've used AI so far. Nothing good has come from it for me; the feeling is so different from any other technology I've used in the past (frameworks, languages, libraries, whatever): it used to be fun, it improved my career prospects, it expanded my knowledge. AI/LLMs are precisely the opposite: it's not fun, it's making my career worse, and it's not expanding my knowledge.
I CANNOT UNDERSTAND HOW MOST OF US ENGINEERS ARE OUT HERE VOUCHING FOR AI. WE ARE LITERALLY CHEERING ON THE THING THAT IS COMING FOR OUR JOBS, AND WE'RE DOING IT FOR FREE, POSTING BENCHMARKS AND EVANGELIZING IT TO OUR MANAGERS LIKE WE'RE GETTING A COMMISSION. WE ARE NOT. THE LABS AND THE EXECS GET PAID. WE'RE HANDING THEM THE ROPE.
I have a feeling that after enough slop has entered the system, the AI will also have difficulty debugging/understanding it.
My questions are: will AI get to be above our level at creating grokkable source code before the code becomes unmanageable? And even if not, will the models' ability to understand and modify slop outpace their ability to create it?
For our jobs, I hope neither is true. But we'll see. Even in the best case we'll have a lot of cleaning up to do.
Don't know about ADHD and whatnot, but I do feel this "task paralysis" pretty often. One thing that I found works really well for me is to work on multiple projects at once. Go one to two weeks on one, then switch to another. I'm not lacking motivation anymore and it feels great.
I do have an actual diagnostic and I had the same experience over the past year with early coding harness at the beginning of the year, then Claude code since its release date. But after 1+year going that direction I really don’t want to continue. The novelty is gone, dealing with AI now feels frustrating and boring, I miss engaging deeply with the actual lower level technical challenges. I do not want to manage fleets of agents. I do not want to rediscover for the hundredth time that in fact all this time an agent took shortcuts for acceptance tests I rely upon and didn’t catch. Or once again get the agent to understand why and what I want it to do after its context got bloated and it start to drift completely. While I got artifacts I can use (libraries, tools, docs), including some things that I’m pretty confident are SoA I do not feel satisfied anymore knowing that I used a model to generate them, even if I was the one designing every part of it. I do feel that I’m lying anytime I come to a colleague to share a new cool tool I have made. And I do not feel that relying on AI actually helped me improve with dealing with my executive function issues.
YMMV but I’m personally feeling burnt out with AI coding agents and ready to go back to the old ways for my next personal project
Almost a decade ago, I moved my career into the management track. I am a director by now and have two more management levels between myself and individual contributors.
I can strongly relate to what you‘re writing, because I share that same sentiment often in my daily (non-AI) work. In fact, coming from that background, the switch from coding to working with agents feels eerily similar to moving into management. You encounter the same challenges minus the „human people and emotions“ part: having to explain properly, the agents doing something different than what you intended, feeling detached from the actual work, only focusing on the bigger picture and so on
To me it feels very natural, it is what I do every day. But then again, I made that choice and it wasn‘t forced on me. So I understand frustration.
I feel lucky to have been promoted to a management position recently, just as I was starting to feel less excited about dev work because of AI. I still enjoy building systems, but I have to admit that the loss of challenge made the work much less enjoyable for me.
Now I have a team of interns to mentor. They're sharp and use AI constantly, so my guidance is less about code and more about UI/UX, understanding what the client actually wants, good work practices, well-documented tickets, thorough reviews, and so on. Thankfully, I like this work, it has been very rewarding.
I have done a lot of introspection on this and realized that I'm very much driven by intrinsic rewards moreso than extrinsic.
I got into coding over a decade before it was my career because of the exploration, learning, and puzzle/challenge aspect.
Every time I have tried to be extrinsically driven (career or OSS wise) it's never worked out anyway. I could have done more to make it successful but I never cared about getting validation or getting users for my stuff (and the stress that brings).
I've been lucky that up until this point, the intrinsic rewards I have gotten from my job have aligned with company goals.
LLMs take all the intrinsic wins and leaves only the extrinsic ones. That makes me sad, but it is what it is I guess.
I have been thinking about a tool for months but didn't have the time. I finally gave in and built it at work in a week with LLM tokens. It worked fantastically. But I felt no accomplishment. It felt just the same as if I downloaded the tool from someone else's repo (and who had an overly eager maintainer that would implement my GitHub issue requests).
The hard part for me is ignoring LLMs in my free time to try and keep some of the intrinsic rewards to myself, without being annoyed that I could do it faster if I just "gave in".
I have found the opposite to be true. I really like getting stuff done for people and struggled for years with all of the specific syntax and details of solving any particular problem. I have a relatively in-depth knowledge of computers and how they work and algorithms and the like but always struggled with the exact details of how to do something so it feels like a blessing to be able to spit ball some conceptual understanding and get back real code. I always struggled with making my ideas real before the novelty of the inspiration wore off unless I happened to get hyper focused on solving a particular problem.
Now I can step through everything in a way that it feels like a super power. I have enough sense and knowledge to I think intuit whether the solution being provided is bloated or perhaps even unnecessary and I can reiterate over it. I've just been using Cursor for work as I adopted a personal restriction to only use AI I can run on my own devices for personal use, but if I'm getting paid and the tools are provided I'm going to do my best to solve the problems that I'm confronted with and so far the LLM connected IDE has been helpful.
It's best in my experience when I use it as a tool to augment trouble shooting and brainstorming but when you are fixing one liner bugs in other people's side it's not like me typing the fix is very different from a machine auto completing it.
It might feel like cheating on a crossword puzzle but that is also something I do if I get stuck and the fun of solving the problem has become a time sink.
I think the real risk is if you don't understand conceptually what you are commiting anymore and I've tried to make sure that I always understand what and how the code is working and also understanding the pitfalls of being able to propose bullshit hypothesis that the agreeability of the LLM will go along with.
I've yet to seriously use an LLM for a personal project, and when I tried Devstral running on my Nvidia 4090 it hallucinated so much that it wasn't super helpful. But it still shot out boilerplate code that I could then spend time fixing, and it helped me overcome my own task paralysis around getting started.
Yeah and that's totally fair!
We are all motivated by different things and being extrinsically motivated isn't a bad thing at all.
But being more interested in the problems rather than the solutions (and not wanting to "productize the solutions") is why LLMs are demotivating for me.
> But I felt no accomplishment. It felt just the same as if I downloaded the tool from someone else's repo (and who had an overly eager maintainer that would implement my GitHub issue requests).
I get that. I recently watched a "talking head" style video by javidx9, where he said something along the lines of disconnecting from the code emotionally [0]; he has to get into the code to understand it. I get the same feeling. For me, however, it feeds my curiosity and my need for exploration. At least for now, I might add.
[0]: https://youtu.be/1qjn1QRxlng?si=_75-J51UnZ0eJyb7&t=705
That’s exactly it! There is no feeling of accomplishment whatsoever, because we aren’t really accomplishing anything. The LLM is doing all the work. Out pops an application, but it might as well have been written by someone else, because it was, but also it wasn’t!
It’s great that an application now exists where there wasn’t one before, but it’s hollow because I didn’t make it. Nobody made it! It just exists now with nothing actually accomplished by anyone. It’s a very spooky way to conjure things up.
> I have done a lot of introspection on this and realized that I'm very much driven by intrinsic rewards moreso than extrinsic.
> I got into coding over a decade before it was my career because of the exploration, learning, and puzzle/challenge aspect.
> [...]
> LLMs take all the intrinsic wins and leaves only the extrinsic ones.
I'm not sure I understand this. For me, programming was at first a tool to satisfy my curiosity. When I first started coding I knew nothing about software patterns, how I should be naming my variables, length of functions, DOD vs OOP, functional vs imperative, the single responsibility principle, and on and on.
I wrote a mess of a program and got it to do very cool things (for me). I loved it.
Then I got taught more, got my first jobs, learned why programming large systems needs standards, patterns, etc. I became good at that, and have had a long lucrative career out of it.
But I cannot wait for the day when I no longer need to earn money from programming and I can go back to using it just to do "cool shit". At that point, whether I am hacking and slashing myself, or working with an LLM to do something, I don't care. It is the intrinsic goal of solving a puzzle and programming just happens to be the tool I use.
Thinking more deeply about your words, is it that you enjoy figuring out the instructions to use to solve a problem? In other words, figuring out the algorithm and writing the code out to create something? If you just told the LLM what you want to create and it did it, would you feel you'd lost the enjoyment?
> Thinking more deeply about your words, is it that you enjoy figuring out the instructions to use to solve a problem? In other words, figuring out the algorithm and writing the code out to create something? If you just told the LLM what you want to create and it did it, would you feel you'd lost the enjoyment?
So there's a lot of nuance to the "is it that you enjoy figuring out the instructions to use to solve a problem".
At the surface, I don't enjoy typing. I don't enjoy fighting syntax checkers, Rust's borrow checker, or manual memory management in my personal C projects, or typing out the HDL for nand2tetris problems, etc.
However, there have been studies, done decades before this LLM boom, on a psychological concept called the generation effect. While everyone is different and it's not completely black and white, the studies have found that people learn more through actual practice (the act of doing) than just by reading material. That's 100% the case for me.
I can read blogs and resources till the cows come home and I'll have a very surface understanding of a concept. Then I'll go to write the code to implement it, and it rarely works right away, because there are demonstrable gaps in my understanding. I'll debug it and iterate on it until it works, and that is what actually solidifies the mental model of what I was trying to learn. Not only can I say for sure that I remember it better; it also seems to form connections in my brain that allow me to apply it in other use cases, or build fascinating technical tangents.
I not only get my high from that initial "Aha!" moment when I really feel like I understand a concept enough to actually apply it in other scenarios, but I also get my high from tangents that spawn off of that concept.
In many cases, I can draw a direct line from my personal projects to a set of root projects that spawned them, because of ideas I came up with while actually implementing them. Because I tried real hard to optimize a C# game engine for an embedded platform, I saw where the limitations were, and it solidified my knowledge of how old game consoles worked.
This led me to the idea of creating a GPU out of an embedded device that I could pair with I/O-constrained embedded devices. This taught me soooo much about the embedded space, and while it heavily improved my C writing abilities, it also made me wish I could write C# on embedded.
Since I had learned C for the embedded project (and I knew MSIL from previous deep dives), I realized I could just translate MSIL into C, and that would allow me to run C# anywhere (I got C# working on an SNES, in the Linux kernel, and on an ESP32-S3).
By implementing that by hand and coming face to face with the many small decisions I had to make, I solidified a bunch of concepts around intermediate representations and why they are a massive benefit. Those aha moments (among others) then led me down the path of implementing a just-in-time compilation engine for NES games and the C64 OS into the .NET runtime.
The learnings from that have already spawned some other ideas in my mind, which is why I'm now learning Verilog and FPGA development.
None of these projects solved any useful problem (nothing was created that I or anyone else would use). The satisfaction and the high I got from them came from having curiosity about a problem and ideas for a solution, persevering (partially out of stubbornness), and actually accomplishing it. The satisfaction of actually understanding the concepts at a foundational level ends up breeding excitement for a whole other tangent/problem.
These learnings have indirectly helped me in my day job as well. While I'm not working on anything that sophisticated or cool there, all of these hands-on implementations have taught me lessons I've been able to use to create better software in other domains.
So it's not the actual typing I enjoy, but the whole picture of what comes out the other end through that typing. LLMs take most of that away. They let me ideate on a vague solution and then go ahead and implement it for me. Even if I'm specific about the details of the algorithm, they subtly fill in the blanks and the missing pieces that I haven't cemented in my brain yet, making me miss out on the opportunity to do so.
And it steals the accomplishment of the final thing existing. I don't feel any accomplishment from typing "I need a C# to C transpiler" into Google and just downloading one. That's what LLMs feel like, even when I'm trying to steer them at a lower architectural level. I don't have the aha moments, I don't have the learnings, and I'm disconnected from the code.
Thus it feels like it's stealing all the intrinsic rewards from me, only leaving the extrinsic ones. And those are not rewards I am particularly motivated by.
Agentic harnesses go in the exact opposite direction to what I'd want from LLMs. I don't want another black box to (poorly) work on a black box for me; I want to be better at reaching into and understanding the boxes that I already have in front of me. I don't want tools to auto-compact contexts and store generated memories to facilitate long runs I have barely any control over; I want tools that allow me to painlessly craft a more relevant context for short ones. I don't want agents to author commits; I want them to use Git (or other tools) to get the information I'm looking for when it's tedious to do it myself. I don't need them to do the fun and beneficial parts of the job for me; I want them to do the boring parts that I already know how to do, which block me from proceeding because my brain just isn't interested. Some of those things you can script yourself relatively easily, but the current tooling for LLM coding is absolutely atrocious and disconnected from programmers' needs.
The main output of my work is gaining a better mental model of systems I work with. That's what lets me grow and that's what makes people want to pay me rather than someone else to work on these things. Anything else, including the produced code itself, is secondary to that. In general I find it pretty hard, although not impossible, to use LLMs in a way that doesn't diminish my output, especially with this tooling that seems explicitly designed to make it hard. After all, reviewing things is so much harder than writing them yourself, and you can't feel accomplished by something you haven't done.
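To make the "script it yourself" point concrete, here is a minimal sketch of the kind of helper I mean: it only gathers information (recent history and pending changes for one file) for you to paste into a short, focused LLM session, and it never writes anything. The layout and names are illustrative, not any existing tool.

```python
#!/usr/bin/env python3
"""Collect focused git context for one file, to paste into a short LLM session.

A sketch of an information-gathering helper: it only reads from git, never writes.
"""
import subprocess
import sys


def git(*args: str) -> str:
    # Run a git subcommand and return its stdout, raising if git fails.
    result = subprocess.run(
        ["git", *args], capture_output=True, text=True, check=True
    )
    return result.stdout


def context_for(path: str, commits: int = 5) -> str:
    # Recent commits that touched this file, following renames.
    log = git("log", f"-{commits}", "--oneline", "--follow", "--", path)
    # Uncommitted changes to the same file, if any.
    diff = git("diff", "HEAD", "--", path) or "(none)\n"
    return (
        f"### Last {commits} commits touching {path}\n{log}\n"
        f"### Uncommitted changes\n{diff}"
    )


if __name__ == "__main__":
    print(context_for(sys.argv[1]))
```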
> I do not want to rediscover for the hundredth time that in fact all this time an agent took shortcuts for acceptance tests I rely upon and didn’t catch. Or once again get the agent to understand why and what I want it to do after its context got bloated and it start to drift completely.
100% agree, neither do I, but I see this as an opportunity to think "how can we gain trust in the outputs AI produced for us?"
Is it about tests, reviews, some methodology? Better observability? Formal specification? It's really interesting to think how you can relieve this pain. I think the answer to this question will show the path ahead for agentic coding.
I have never jumped on the train, but I am writing a project that uses v4l2 or libcamera. I have been experimenting with both and spent 4 hours reading Linux kernel docs and libcamera docs, and not writing any code. I'm okay with that, and the project has still moved ahead even though I've only written v4l2 sample code.
I'm also diagnosed and I'm the complete opposite.
For the first time I can not only compete with normal people's workloads; now, with AI, I can supersede them. I've never been more excited.
>The novelty is gone, dealing with AI now feels frustrating and boring, I miss engaging deeply with the actual lower level technical challenges.
Honestly I've had the opposite experience.
If I can leave the boring crap to the LLMs, I can focus more on the deep, important bits: the bits where LLM accuracy is spotty because there are a ton of moving pieces, and where the "how/what" of the code becomes crucial for auditability and debuggability. The code that I've written bugs in, that Opus has written bugs in, where the design around it to make those bugs less catastrophic when they happen is often system-specific and unique.
If I can spend 5 minutes delegating all the tedious plumbing updates around it, then I have more time to put towards the core.
The system design challenge becomes making sure that they are well separated.
Managing fleets of agents hasn't entered into the picture because the needle-moving things there tend to be successive and cumulative, not easily parallelizable. (I believe this is true on the product side as well - 10 crappy MVP features in a week would be way less interesting to me as a user than 1 new feature released in a 3x-more-fleshed-out-way than it would've been three years ago.)
I wonder if this is a new thing or if it is a repeat of the past.
Like ...
When I was young, I wrote this REALLY tight assembly code - loops that were measurably better than C or other high-level languages.
Then obviously assembly was minimized, then forgotten.
Then years later, I found I was happy using even interpreted languages, not even using a compiler.
When first using Perl and finding a data structure not as useful for the final output, in one line of code I switched to a different data structure and sorted the output exactly like I wanted. Too much effort if it had been C, and very much so for assembly language. But I got what I really wanted.
Is AI a repeat of this? Instead of assembly language, instead of C, instead of Python, do we become high-level-English-language tech folks? Will AI just let us hand off our code and physical design to a fab, and will it make us happier?
I also wonder whether SoA, to you, is about how it behaves or how it is built, and whether it matters once you stop looking at the code, just like I stopped comparing the code the C compiler generated to the assembly language I wrote. And what about years later, with -O3? Will AI have an -O3?
I feel like this is where it's going -- it's not where we are, the tools are not reliable enough that it makes sense to step back quite this far, but it feels like where we are going to arrive really soon.
If you look at agile processes, one of the biggest criticisms is that there's always a magic "customer" role that needs to prioritize existing work, do acceptance testing for completed tasks, and give requirements deep enough to create real specifications. This often requires a lot of attention to detail and very fine-grained judgment, typically lacking in those eager to hold the job title of "customer".
And now if you look at dark software factories, these pieces are also basically everything they're missing. The person/people responsible for this role were never seen as being engineers/programmers in those processes, but I think that's where most SWEs will end up, because as these tools mature to the point they manage the code all on their own that's what's going to be left to the SWE in the chair.
The SWE of course won't be the actual customer/stakeholder, they'll be the proxy, the one that has to navigate meetings in meatspace and make soothing noises to the actual customers. Will they be happy doing this? That's a big group of "they" so some will, sure. But I think a lot of people who got into this career consider this the worst part of it, and it's now going to be the whole job.
> ready to go back to the old ways for my next personal project
This stood out to me.
Because you shouldn’t, or can’t, go back in your professional projects?
> I really don’t want to continue. The novelty is gone, dealing with AI now feels frustrating and boring, I miss engaging deeply with the actual lower level technical challenges. I do not want to manage fleets of agents.
I've tried to stay away for a variety of reasons (not approving of the way the tech was developed, hoovering up everyone's data for commercial gain, high amongst them), but the company I'm now part of (due to them buying us) is drinking deep from the GenAI water fountain, so I will very soon have no choice but to engage or be pushed out¹. I get it, I see the benefits, but it feels like turning into a manager (for GenAI agents rather than people, but still…), which is something I've always avoided because I want to tinker. I got into programming and database work because I like to play with the nitty-gritty details, and I'm going to have to let that go.
To be frank, there is a sizable part of me that has wanted to be out of tech for a while² for various reasons³ and that part of me would prefer to go waiting tables if that is what it takes to escape! Maybe then I can reclaim tinkering as a hobby.
--------
[1] Redundancy would be nice, with 26 years service the statutory minimums would be more than enough to tide me by for a while, but I expect they'd not do that. I'd instead be put on a PIP for not performing (assuming they can make a case for not engaging with GenAI making me less efficient), and if I still don't play ball that'll be grounds for dismissal.
[2] Or at least take a fairly long sabbatical.
[3] Not liking remote teams being a significant one; even though I go into the office⁴, I'm still effectively remote because almost everyone else is.
[4] which grants me the home/work separation
AI basically killed my joy for programming. I've been working as a SWE in big tech for 8 years. I like learning stuff, actually coding things with my hands, gathering information and understanding before implementing, polishing my code. That's all gone.
Now I am just a monkey that: 1) adds enough context, description, and harness for an agent; 2) reviews the output and repeats 1) if context is lacking.
It used to be bottom to top: from understanding to implementation. You were the owner. Now it is top to bottom: get the implementation first, try to get understanding later. Thinking is also delegated. "Think" nowadays means "reformulate, answer questions, add context, try again". This doesn't feel like I am doing the work; it feels like I am the limiting factor here.
Another side effect is that any code now has zero value. No one evaluates how you guided an agent or what decisions you took. People see your work and think "yeah, I could vibe-code that too with enough time", even when that's not true.
And my work isn't CSS and HTML (with all respect). It is mostly high-performance clusters, parallel computing, OS and low-level work, SOTA online LLM inference, etc.
Now I am seriously considering a blue-collar job, as I get more joy building stuff with my hands than being a passenger/context generator for an AI. I am not a business-driven person; I don't really care how much money my company earns (sorry). I just like to solve technical puzzles and think hard.
P.S. Yes, there are corner cases AI can't do well: non-trivial, highly specific algorithms and implementations; complex patches to gigantic multi-domain proprietary code bases. But that's like 5% of my work.
I could have written this article myself.
The addiction part, the ADHD part and the pending test part.
The fear of becoming addicted to AI is real, and I don't think I'll be capable of stopping it, considering we're asking people who struggle with avoiding quick dopamine to use it professionally in their daily work life.
My Pro went to Max(5), then Max(20), pretty quickly, and I was still burning through that weekly limit, without large agentic workflows that burn tokens. Just me and 4-5 terminals. Sometimes I was happy to hit the limit, because I was forced back to normal life.
I've gone back to Pro to stop what was happening.
Now I'm self-aware enough to notice the trend and put up safe guards, but that's because I've always had to adapt my environment to control my behaviour because I know direct behaviour control is abnormally challenging. I fear for those who won't see it coming, until they're in deep.
I find that the new "drug" is constantly hunting down new cheaper models.. z.ai/glm, mistral, deepseek.. if you need to get your fix - find the cheaper path..
Average drug connoisseur activities
> [...] considering we're asking people who struggle with avoiding quick dopamine to use it professionally in their daily work life.
It's so wild that it never dawned on me why some people around me were so quick with "Let AI do that!". I'm not saying that each and every one of them has ADHD, but I think I underestimated a) the flow of dopamine a successful prompt can set free and b) the craving for it among folks I deemed more stable than myself.
As someone with ADHD, it’s really a problem. I have so many random documents of random outputs from prompts I didn’t track. It’s honestly accelerated some of my worst habits because it feels like I actually completed a task. The reality is I just have folders of half finished projects, which anyone with ADHD can relate to.
I’ll finish modding that Dreamcast one day…
I feel kind of lucky in a way that I hate working with AI so much. I'd rather hammer nails through my fingers than spend my time prompting
So my ADHD isn't being satisfied by those little dopamine hits from LLMs, Any time I'm forced to use them I'm mad about it, and can't wait to be done with it
I still have that folder of half finished things just like you, though. It's just not AI generated
I thought about it a little deeper, and I think software development has always had an addictive tendency. The hunt for the solution to a problem comes with a rush when you complete it.
It’s just that the rush is more frequent, and addiction intensity scales with dose and frequency.
I think I might be going through withdrawal because I feel like I rarely get that fun feeling anymore with coding :(
It can be gratifying to get shit done but I love the feeling of coming up with a great reusable component and then making an entire app out of it
Instead of jumping from project to project, I focus on one (maybe a few) and let myself free while agents spew out their output.
Something physical is excellent for me: minor wood carving, origami, drawing exercises, also light physical exercises.
My trick is to (try to) do something that requires high focus, on unrelated matters.
To give a practical example: the simple gesture of connecting 2 points on a sheet of paper with a direct, non-trembling line requires high focus. If you try to do it sloppily, the line comes out too long, too short, etc. I need to be in the moment, gain focus, draw the line.
It keeps my brain focused, busy, and engaged. Videos, podcasts, and in general anything digital seem to distract me and/or overload me.
Also, I am back at using pomodoro technique more frequently.
Just some pointers, in case you want to try them out, or to suggest some you find effective yourself.
Might call it the OnlyFans model of Software Development.
The counterweight has been that, after using it for a bunch of projects, I have internalized that it will very, very quickly get me to maybe 60%, and then I'll have to take it the rest of the way mostly by myself (or handhold it tightly through the remaining 40% at a much slower pace).
In other words, the initial implementation is practically already there, already done. So there's no rush left in generating it - it's only worth bothering if I'm prepared to see it through to 100%.
When it is worth pushing through to 100%, it's pretty great for getting the inertia going though.
For the addiction part I'm trying to squeeze as much quality code out of the free tokens possible. I'm having a blast!
I can relate to this. Last October, I had a real epiphany using Claude Code at work. Suddenly, that initial inertia of starting something (whether it's drafting a JIRA ticket, structuring a PR, or just brainstorming) completely vanished.
I started using Claude exclusively in plan mode, and within minutes, I’d have full clarity on exactly what I wanted to do and how to do it. With the release of the Opus model, I felt 100% more productive, because I stopped spending time on menial tasks like manual coding or documentation. Instead, I shifted my focus to architecting, problem solving, and reviewing code to make it perfect. I even wrote two PyCharm plugins to unify my workflow (one to manage Claude Code sessions as a first-class citizen and another to render Markdown in a less eye-straining way) so I don't have to leave the IDE.
However, the novelty is starting to wear off. Six months ago, I would have truly admired how efficient and productive the current version of myself has become, but now I just take it for granted. It has become the new normal, and I’m finding myself bored and stuck in a vicious cycle of constantly needing to reach the next level.
"shifted my focus to architecting, problem solving, and reviewing code to make it perfect" aka write couple more prompts and combine results. Pretty exciting
Resonates with me.
In a paradoxical way, the amount of stuff you can get done in an hour now is like a firehose -- something we rarely experienced earlier in life -- and it can be overwhelming to my brain. So I subconsciously resist starting a session, because I never feel rested, calm, and focused enough to take all that in and process it well.
There are also 10x more "active" projects now -- and prioritization, and choosing between them at every moment, is still a struggle. The temptation to do the fun and novel thing and avoid important but familiar boring chores pops up every step of the way and can derail you for days.
I am still trying to create a system that works -- now using the very tools. Long journey ahead.
EDIT: My experience --
I was paying for both Claude Code and ChatGPT Pro, but was heavily, almost exclusively, using CC for coding work because it was so good. After CC started hammering the session and weekly quotas lately, I tentatively started using Codex and find that it seems equally good and almost indistinguishable for my work, and occasionally shines by one-shotting some tasks. This helped me stay afloat with just a 2x$20 spend per month without feeling held up for ransom. Also, I've never hit Codex limits till now.
Leaving a 5-hour session quota unused towards the end, or worse, not even starting a 5-hour session clock, was a source of constant anxiety -- that I was wasting precious quota getting nothing done. I think I am getting over that now.
I've been using Augment's agents (VS Code, CLI) for 8-ish months. It lets me easily switch between GPT and Claude models.
I've found the best results from letting GPT 5.4 code and then asking Opus to do code reviews into a file. I do the review in a different agent session so it's "fresh". Then I review the file, edit until I agree with everything, and let the existing GPT agent session address the items in the review file. I've found Claude agents don't perform as well for me in coding, for whatever reason. They feel more sloppy.
I've also been doing a very organic spec-driven development process, where I have a markdown file for each non-trivial project update and use it to define the task and address questions or problems the agent has.
I've also found I can give agents conditional instructions which they will usually use like skills. This gives me a way to easily distribute my instructions to any agent/model on any machine with a single AGENTS.md as the entry point:
https://github.com/rsyring/agent-configs/blob/main/default.m...
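For a rough idea of what such conditional instructions can look like, here is a hypothetical sketch (illustrative only, not the contents of the linked file):

```markdown
## Conditional instructions

- If the task touches a database schema: write a migration script and stop
  for review before applying it.
- If you change any public function signature: update its docstring and
  search for call sites before finishing.
- If the same test fails twice in a row: stop and report; do not modify
  the test to make it pass.
```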
This has all been very effective, more than I would have predicted a year ago.
Addressing the end of the article, I think that we are all very much still learning how to use AI responsibly. It's like we just discovered alcohol and we're going on a rager every night because we don't know any better yet.
It's too easy to buy €100 of Claude tokens and burn through them to make those dream projects appear as if by magic. There's a middle ground where, for example, instead of building a whole project it could produce a project template and provide guidance as you build. That should take the edge off the task paralysis and hopefully disrupt the addiction loop.
That's how I use LLMs for programming. I predominately use the chatbots instead of the CLI tools. Every so often, I'll ask for a one-shot of some MVP, but then I take that MVP and make all the changes myself. However, I must say that I rarely do the one-shot-and-edit style of development. I find that such a process can save time, but not always.
So the end game for the current generation of AI companies won't be productivity improvements but gambling, just like everything else nowadays. That's why they want to get us all into these massive casinos they call data centers and don't want us to own the slot machines.
So what if you have ideas - other people have them too. It's not ideas that build businesses, but knowing the right people or the ability to sell products.
The gambling trope is so tired. AI development doesn't involve luck to any appreciable degree, certainly not more than hiring people to do a job can be considered "gambling" (you never know what you're going to get!).
It's just paying to get stuff done, which is how it's always been, since the dawn of man.
The gambling part is because of the (hopefully emergent and not purposefully designed) intermittent reinforcement due to the limits. You don't get that with regular hires.
Really? All the hires I've seen had an 8-hour/5-day limit, or you had to pay through the nose for extended usage outside that window.
Where do you get your 24/7 hires from?
You usually don't get immediate responses from hires which means delayed gratification and avoiding much of the potential dopaminergic effects you get when engaging with LLMs.
You can play overextending the hire analogy all you want but it is simply not the same.
For most people - those not using it for their day-to-day jobs - it's just a prompt of their idea roughly sketched out, and a miracle happens: the LLM fills in the blanks. Every time it's different, but it works, sometimes even better than initially expected. That's where the addiction and the gambling come in. Gambling is a lot of things, not just flashing lights and play sounds. Some people claim prediction markets aren't gambling either, though that doesn't change the fact.
How is this different from hiring a designer, telling them "make me a website" and then waiting to see if they resolve the uncertainty into something you like or not?
I tell LLMs what to do in pretty high detail, and they do it. With LLMs I have much less variance than with coworkers.
It is different because, for humans, it takes time to produce a result, while AI does it almost instantly. So if you tell a programmer to do X, you have a week for your adrenaline to cool off. If you tell an AI, it will do it in minutes.
I don't think the difference between a designer and a slot machine is that one gives you results more slowly, "therefore it's not gambling".
If you're making the argument that LLMs are gambling simply because they're faster than humans, I'd like to see some evidence.
> If you're making the argument that LLMs are gambling simply because they're faster than humans
No I am not. It's more addictive because of the timescale. The comparison of AIs to gambling is through addiction mechanism, as I explain elsewhere.
My aunt used to put in (the same) lottery numbers every week. It was gambling, but probably not an addiction in the clinical sense. If she had played slot machines, god forbid, it could have been more problematic. AI is a slot machine, a hire is a lottery ticket.
> certainly not more than hiring people to do a job can be considered "gambling"
Actually, it's quite possible that being a business manager/owner is itself addictive (having power over people); we just don't recognize it as such.
All gambling addiction is addiction, not all addiction is gambling.
Then you miss the point - AI use is being compared to gambling because it is addictive, partly through the same mechanism: the results (and rewards) are somewhat random, but it makes you feel as if you're completely in control of the outcome.
Yeah, that hasn't been my experience. The outcome, for me, is extremely consistent. I ~never have to "reroll" by wiping work and doing it again.
Strange. I tell Claude Code to do things differently all the time.
I'd recommend a different workflow, with extensive upfront planning. This works extremely well for me:
https://www.stavros.io/posts/how-i-write-software-with-llms/
It's to the point that I just push the output of that to production and know it'll be OK, except for very large changes where I'm unlikely to have specified everything at the required level of detail. Even then, things won't so much be wrong, as they'll just not be how I want them.
I'd observe that there are professional gamblers, and there are amateur gamblers.
If you know what you're doing, know how to spec a problem space, and can manage the tool competently enough to churn out good results, then everything's fine, and you're maybe being productive or increasing your productivity by some degree. (Professional "Gambler")
If you DON'T know what you're doing, and you're just vibe-coding, then I would argue that it is at least a form of gambling (Amateur "Gambler")
Both of these conditions can also be applied to "hiring people to do a job" however there we can also observe things like reputation, credentials and so on.
"It's just paying to get stuff done..." is, with respect, superflous.
I don't know, I can understand "some people might overdo it and get addicted to LLMs". I can't understand "LLMs are slot machines and that's all they're good for" when I use LLMs every day to do tons of actual work.
I don’t like the gambling comparison either. It’s more like smoking or drinking. It’s an addiction you lean on to help you do something- even if that something is just getting through the day.
Yeah, but those are classified as addictions because they have a harm component (lung cancer, liver disease, societal impact). LLMs aren't going to kill you. If anything, it might be like gaming addiction.
If you've gotten to the point where you'd rather talk to an LLM than socialise, go to work, etc, then yes, you definitely have a problem, same as with a gaming addiction.
Saying "LLMs are slot machines" is like saying "video games are slot machines", and nobody says that, even though it's more true of video games (some are actual slot machines/gacha) than of LLMs.
> Saying "LLMs are slot machines" is like saying "video games are slot machines", and nobody says that, even though it's more true of video games (some are actual slot machines/gacha) than of LLMs.
People absolutely do say that video games are slot machines. [0][1]
0: https://lvl-42.com/2018/11/06/video-games-as-slot-machines/
1: https://www.psu.com/news/three-ways-casino-games-are-similar...
Hence the parenthesized section of the part of my comment you quoted.
Like the internet!
>AI development doesn't involve luck to any appreciable degree
Reading this while I'm prompting for the third time to fix a 100+ line function is amusing, to say the least. I don't care about the definition of "appreciable", but I definitely have to repeat myself to get stuff done, sometimes even to undo things I never told it to touch.
That sounds like a process problem. LLMs, like any tool, work better if you don't use them in the naive "do this" way. This works well for me:
https://news.ycombinator.com/item?id=48083267
What's your monthly token spend?
I have a $100 Claude sub and a $20 OpenAI sub.
> The gambling trope is so tired...
>>> That sounds like a process problem. LLMs, like any tool, work better if you don't use them in the naive "do this" way...
The "you're holding it wrong" trope is even more tired than the gambling trope.
If you can't get results with the thing I'm getting results with, what other explanation would you give?
That logic only makes sense if you and the other person are working on the exact same kinds of projects.
Exact same kinds of projects with the exact same development environment, models, etc. Either he's never worked with a development team or he doesn't consider things outside his own perspective. shrug
Not in that sense, but social media companies already know the value of not giving a user exactly what they want. It keeps them on the platform longer and excites some lizard part of our brain that craves challenge.
Due to capitalism’s law of all businesses converging on maximizing profit, it’s just a matter of time until AI companies employ similar techniques with LLMs. We can all imagine what that will look like.
Some traits I recognized in many excellent coders I worked with (their drive for optimization, intellectual thirst, critical and creative thinking) are attributes I consistently correlated with them being somewhere on a neurodivergence spectrum.
Being able to remove the "first step" block is great, but what worries me is that this is coupled with LLMs' sycophantic behaviours. My gut feeling is that coupling the dopamine hit of feeling unblocked with constant praise of one's abilities is an intro to psychosis and paranoia for some people.
I'm wrestling with this right now. I only use LLMs for design and exploration, because I am not employed and can't pay for a subscription right now. They make the design phase feel like less of a fever dream, because checking my ideas doesn't involve hours of scanning search results, trying to see how my ideas fit with what exists, or trying to evaluate whether my ideas even make sense. So I feel more encouraged to get started on working, but I often wonder whether the responses are just sycophancy.
In one case recently, I explained a garbage collector design I had been toying with a while ago, but I couldn't find research related to my idea or really evaluate whether it would work. After enough arguing, the model finally "understood", started praising my "novelty", and when I later asked for related research, I was given a paper that had already implemented most of my idea.
It was a funny moment, seeing how it was clearly trained on too many online forum comments (simply mentioning reference counting got it onto a whole awkward line of false folklore about memory management), before switching to sycophancy, and finally showing me a paper.
Nitpick: Stop the throat clearing and get to the point. The final paragraph is the whole point of the article.
It's a real turnoff when I have to scroll past a moral lecture on artistry and piracy when I just want to hear your thoughts on task paralysis.
---
To the author's point though, AI is incredible at building some initial momentum on a task. The initialization energy is basically zero.
IP law is incompatible with AI. It's an important point, but not here.
When I don't have time, I just ask AI to summarize the main points and expand on the points I like. I do this even with HN discussions: I copy the whole HN page, paste it into Claude, and ask it to summarize and deduplicate the talking points.
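Scripted against the API, the same idea looks roughly like this (a sketch: the model name and prompt are placeholders, and it assumes the anthropic Python package with an API key in the environment):

```python
# Fetch a Hacker News thread and ask Claude to summarize and deduplicate it.
import urllib.request

import anthropic  # pip install anthropic; reads ANTHROPIC_API_KEY from the env

ITEM_URL = "https://news.ycombinator.com/item?id=48002640"  # any thread


def fetch(url: str) -> str:
    # Plain HTTP GET; we paste the raw page just like copy/paste would.
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8", errors="replace")


client = anthropic.Anthropic()
message = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder; use whatever model you have
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": "Summarize this Hacker News discussion and deduplicate "
                   "the talking points:\n\n" + fetch(ITEM_URL),
    }],
)
print(message.content[0].text)
```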
You didn't have to read the article.
And you don't have to read the critique.
You can't know that it was a waste of time to read until after you've read it. Especially if the point is at the end, and is a good point.
OP posted their article on the internet and then to HN presumably because they'd like people to read it.
Not a nitpick, but a justified criticism of the post. The technical term is "burying the lede" and it is incompetence at best and malice at worst.
It's absolutely awful. It's not a novel or entertainment. Don't "foreshadow" or "set the scene". Just get to the fucking point.
Appreciate your nitpick. As I dislike recipes that introduce you to the fine art of wheat milling before getting to the recipe itself, I tried to keep that section short(-ish). I felt the need to provide some context and thoughts, that's why I included it. Not sure what I'll do next time: Either put the conclusion at the beginning and offer some more context and thoughts at the end (then you can drop out if you don't want it), or just leave it out completely. I'll reflect on that.
For me it's different. I am not diagnosed, but I think my executive function doesn't work right. It's really hard for me to start a new task, but when it is interesting enough I can hyper focus until it's done. In the past that often happened when I needed to implement something not too trivial. But now that AI does the implementation in minutes I need to switch tasks constantly and it is honestly super exhausting for me.
Sounds to me like what people are identifying is dopamine: generating it and enjoying it. I am not educated about brain function, though.
Noticing novelty is beneficial in nature, as it surfaces opportunities to the conscious level. "Squirrel!", famously, from the movie "Up". It feels good to experience. Then, creating one's own dopamine supply can drive behavior, and multiplying behaviors can exhaust one's energy supply along different human dimensions.
So now, managing this process and limiting the dopamine cycle also becomes worthwhile -- potentially avoiding fatigue -- while still not negating the attractiveness of the dopamine derivable from the endless opportunities of the world. <3
This resonates. The "idea to result" loop getting shorter with AI is genuinely addictive; I've noticed it in my own workflow too. But there's a flip side nobody talks about: once you get used to that speed, going back to manual implementation feels 10x worse than it did before. The paralysis doesn't go away, it just gets masked. The real question is whether AI is solving the problem or just compressing the dopamine cycle around it.
Does one also get dopamine from using LLMs to write comments on Hacker News?
I really feel this. I find myself reaching for CC, typing half a prompt, and then realizing I could've done the task faster myself. But CC is exciting, and *feels* fast, so I keep reaching for it. Somehow it feels worse to just do the work.
In case anyone else is wondering if others feel this: yes, I can feel the risk of dopamine overshoot while using AI. As context, I've historically had ADHD that is crippling to any sort of normal lifestyle, and I definitely feel the risk of mania or manic episodes when using these tools, in ways I used to associate with the drug state of certain ADHD drugs.
Now I am recontextualizing the past experiences as the feeling of moving toward my goals at a speed I am not accustomed to, rather than being exclusively a drug effect
As someone with ADHD, it’s a lot more nuanced than that. Coding agents can remove task paralysis, but they also introduce many other distractions. Being one prompt away from zero to one is a double edged sword, because it means any random thought, idea and side project is also a prompt away.
This becomes less of a problem once you get into running agents on autopilot. Then it becomes about project and task management
I've a thought that AI could drive humanity to appreciate humans, as a side effect of its rise.
Nowadays we're bumping up against alternative, nonhuman intelligences as we go about our lives. New neighbors, kind of.
And AI has its idea of 'living' in this world .. as a servant to us mainly.
So human life is changing: we now have the opportunity to relate to life (existential) while we're being influenced by the valuable accompaniment of these new docile servants. We're able to "see our plantation and peacocks" if you will.
We experience our life-challenges differently ... now being alive to see our daily labors accomplished by others, and we're able to reap the benefits: more dopamine, resources, whatever.
Our role is changing somewhat, being 'wealthy' or 'elevated'.
I think this poses new questions implicitly, like: Q: Do we like our new wealthy-in-productive-results selves? Is this a life worth living?
AI is a multiplier of both our expertise and our defects.
I have learned how to hide my stupidity from AI's all-seeing eye and the result is the best I can expect from a tool that helped me become 100X more productive, I can't be happier.
Re: Claude usage limits
There was a comment the other day that explained how to use the new DeepSeek V4 with Claude Code.
I mention it because it's roughly fifty times cheaper than Claude, and the quality gap is closing.
Which is the difference between "I don't use it for anything serious because I constantly run into limits" and "I can actually use the thing..."
https://news.ycombinator.com/item?id=48002640
It seems "Sonnet-ish" in quality so far, but I haven't tested it much yet.
I’ve been using CC as my GTD-buddy. All the usual plaintext files in a git repo, all the usual processes and workflows and constraints; but I’ve written two skills that have taken the activation energy out of what used to be the hard parts for me: /process-inbox and /weekly-review. Process-inbox interviews me item by item, making suggestions which I accept or amend, and it does the bookkeeping. I tell it when I want to do something and what calendar I want it on and it makes the calendar event. Weekly-review walks through an overview of everything done that week, all my open tasks and projects, makes sure everything has a scheduled next action. Sometimes I make a note, cancel something, reschedule something, whatever.
This is nothing I couldn’t do on my own, and in fact, it’s a lot slower than just manually editing files myself. But: this way it’s actually getting done :)
There’s too much hyperbole on this subject, so I won’t add to it; but it has solved a lot of very-long-running problems of mine.
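For anyone curious: at the time of writing, a Claude Code skill is just a markdown file with a small frontmatter block, e.g. .claude/skills/process-inbox/SKILL.md. A hypothetical sketch of what such a process-inbox skill could look like (names and steps are illustrative, not my actual file):

```markdown
---
name: process-inbox
description: Interactively triage GTD inbox items, one at a time
---

For each item in inbox.md, in order:

1. Show the item and suggest a disposition: next action, project,
   someday/maybe, reference, or trash.
2. Wait for the user to accept or amend the suggestion; never decide alone.
3. Move the item into the matching file (next-actions.md, projects.md, ...)
   and, if the user gives a date and calendar, create the calendar event.
4. Commit the change to the git repo with a one-line message.
```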
I find the same. AI helps me overcome any paralysis. I just think "hey it's cheap to write the prompt" and go on.
The best way I've found around this: I design and code the UI for a given feature by hand, then let AI do the more tedious backend work I don't wish to do by hand (HITL/human-in-the-loop).
It's wonderful if you do the things you enjoy by hand and delegate the "buhhh" stuff to AI. This approach also circumvents the need to review massive PRs (you're only ever concerned with the individual feature, not the whole farm).
Another way to put this is that focus is ultimately what matters, when it comes to actually getting stuff done. Choosing what not to do is often more important than what you actually do.
Since AI tools make it extremely easy to get started, it's really easy to begin half a dozen different projects, feel like you're being productive, but actually accomplish nothing.
This accurately describes how I used to use AI – and my ChatGPT history is filled with all sorts of grandiose project plans. But lately I've been more and more narrow with what I actually prompt.
This leads me to think that a chatbox is not the best UI for using AI, as it's too open-ended and too prone to give you long, broad answers, rather than hyper-specific ones.
AI has replaced video games for me. And there are plenty of cheaper models that "do it" for me, I don't have to spend $$$$ just for entertainment. I will step up to the frontier for serious work. But if I'm just playing, I'm going for the free stuff on openrouter.
Also, AI art is fine. It looks better than what I'd make in Paint. That said, there are plenty of FOSS and public-domain art pieces you can leverage if all you really need is placeholders, and that is much cheaper.
One side note: it's funny how everyone suddenly "hates AI" while happily using it and profiting from it every day. We all want AI like Gyges wanted and used the ring in Plato's Republic.
Why is this on the front page? Probably the least interesting thing I've read all week
Probably because it resonates with the experience many people have when coding with LLMs
I've come to the conclusion that using AI is:
- good for me in the short term (e.g., I can fulfill what my company asks from me)
- good for the company in the short term (see above)
- bad for me in the long term. E.g., I'm starting to become more and more replaceable at my job; I don't have the same depth of understanding of the systems we're building as I used to; my peers and I collaborate way less now (instead of talking to each other, we just ask Claude directly); and there's not much to be proud of in my day-to-day work (we're not building CRUDs, but we're not building Netflix either; it's something in between). The compounding effect worries me too: every shortcut I take today is a piece of context I'm not internalizing, a debugging instinct I'm not sharpening, a tradeoff I'm not learning to weigh. The skills that used to differentiate me are slowly atrophying. We're all individually more "productive" on paper, but collectively I think we're gonna end up with a codebase nobody fully understands and a team that barely knows each other
- good for the company in the long term: they can fire me easily; they don't need 80% of us anymore. They can just pay Anthropic for the agents instead. They don't need people to maintain or read the codebase either: agents do that now. And executives never really cared about us in the first place, so that part hasn't changed, I guess. The math is simple from their side: headcount is the biggest line item, and agents don't ask for raises, don't burn out, don't go on leave, and don't push back when leadership makes a dumb call. We're the worst part of the business on a spreadsheet, and the tools to replace us are finally cheap enough that someone is gonna pull the trigger
I'm not a superstar engineer. I know that. I'm probably in the 80% bag of engineers out there. Some of you may be in the top 20%, and you'll probably keep your jobs somehow (or not, who knows). But for the rest of us, I think we simply cannot compete anymore.
I regret every single time I've used AI so far. Nothing good has come from it for me; the feeling is so different from any other technology I've used in the past (frameworks, languages, libraries, whatever): it used to be fun, it improved my career prospects, it expanded my knowledge. AI/LLMs are precisely the opposite: it's not fun, it's making my career worse, and it's not expanding my knowledge.
I have a feeling that after enough slop has entered the system, the AI will also have difficulty debugging/understanding it.
My questions are: will AI get above our level at creating grokkable source code before the codebase becomes unmanageable? And even if not: will the models' ability to understand and modify slop outpace their ability to create it?
For our jobs, I hope neither is true. But we'll see. Even in the best case we'll have a lot of cleaning up to do.
You have framed things well with your short- and long-term analysis.
I would add these points to the negative long-term personal effects:
- potential for cognitive impairment / deficit from long-term AI use.
- lack of diversity / creativity / heterogeneity / outside the box thinking of any sort in work going forward.
Don't know about ADHD and whatnot, but I do feel this "task paralysis" pretty often. One thing that I found works really well for me is to work on multiple projects at once. Go one to two weeks on one, then switch to another. I'm not lacking motivation anymore and it feels great.
It is really weird reading this, but I guess it's normal? It seems many feel this way, including me. AI just compounds this behavior even more! Darn.