Not just Amazon, either. It feels like all of big tech (and some smaller firms) have simultaneously gone insane. Imagine if your CEO woke up one day and told the company: "We need to encourage travel spending. Please book as many business trips as you can, and spend as much money as possible. Fly first class to our satellite offices! Take limos instead of Ubers! Eat at fine restaurants! Make sure you are constantly traveling. In fact, we are going to make Travel Spending part of your annual performance review: If you don't spend enough on business travel, you'll get a low rating!"
We are living in a totally bonkers time.
It’s preposterous, companies are blindly funding slop and the product is fool’s gold.
I know someone who was told to try to use AI more on the job, so he created an agent to just burn tokens and ended up using about 10x what the next-highest employee used. Buddy expected to catch hell but instead got an accolade and was asked to give a short talk to the other employees about how they could match his success.
i call BS on this story
I call AI on this comment
why?
I don’t.
Things that rhyme with this have indeed been happening at the biggest names.
https://en.wikipedia.org/wiki/Goodhart%27s_law
Exactly this.
And the fact that it's an industry-wide meme at this point sets off bright red flashing lights and klaxons in my mind that a catastrophic reckoning can't be far off. There's not enough money in the world to keep this up for long.
Bragging about token usage is like bragging about LoC written.
Obligatory:
Negative 2000 Lines of Code
https://news.ycombinator.com/item?id=44381252
My dad worked at a company that had its own travel agency (early '90s, when you needed a travel agent for reasons that no longer apply), and he was often booked on the more expensive flight because the travel agency made more money. More than once he could have gotten first class for less on a different flight, but company policy didn't allow him to fly first class.
We have always been living in bonkers times.
It's more like "We really value face-to-face interaction, so we're going to track that with your total travel spend. We don't want to get in the way, so there's no budget."
Like six months ago we got a presentation from an AWS guy on the AI tooling available and how it fit with our particular use cases.
At one point seemingly out of nowhere he pointed out on his screen share "Look at how many tokens I've used this month. I run so much Opus." It was a number that was offensively large.
I remember thinking, "That's a really odd flex; this crap is so expensive that the fact you use so much of it should be a red flag."
He demonstrated a number of Claude Code use cases he had for managing and tweaking AWS infrastructure that made me, an old greybeard sysadmin older than the internet, think: "You've used AI to do something that was a single command."
So this story makes sense. They were being encouraged to just blast away at it six plus months ago.
I notice a lot of Cursor's suggestions are just stuff a linter should auto-fix.
But if you hit "tab" it'll claim that as an AI-edited line, LOL.
(A lot of the rest of it is stuff I could already have been doing just as fast if I'd ever bothered to learn multiple cursors, vim navigation, or macros. I never did because my getting-code-on-the-screen speed without those has never been slow enough to hold anything up, in practice.)
Cursor absolutely tries to maximize what it claims is "AI-edited," and it's nonsense a lot of the time. If it writes a function and then I go in and edit that function, it claims my edits _and_ any net-new lines I add above or below the function.
I think you'll find that a lot of big investment companies are buried to the hilt in a lot of tech companies and also OpenAI and Anthropic. So you can do the math on where the directive is coming from and why it's not particularly careful or measured.
> You've used AI to do something that was a single command
Yes, and that's a good thing! This is in fact where a lot of AI value lies. You don't need to know that command anymore: knowing the functional contract is now sufficient to perform the requisite work duties. This is huge!
Is it? If the LLM's change broke something, do you know enough to fix it?
It's also several hundred times more expensive.
> You dont need to know that command anymore
I find it hard to read "You can do things without knowing things" as a positive improvement in work, society, life, anywhere
I can't tell if this comment is sarcasm or not. If you let AI run commands you don't understand (especially in production) you may end up with some nasty surprises.
Once I learn a command that is both repeatable and useful, I prefer to either keep it in my mind or in my aliases. Thank you.
Not even joking: the main benefit I've seen from "AI" for editing code is that it lets me quickly do all the things I could already have been doing just as quickly if I'd ever bothered to learn to use my tools.
Of course I lose about as much time as I save to its fuck-ups, so I'd still have been better off learning to actually use a text editor properly. Though (as I mentioned in another post) part of why I've never done that in 25-ish years of writing code for pay is that my code-writing speed has never been too slow for any of the businesses I've worked in, i.e. other things move slowly enough that it never mattered.
I still don't know how to reconcile these reports with what other people say about GenAI agent-assisted engineering being the only way of working nowadays, especially in startups.
Probably there's no real dichotomy and it depends on multiple factors, but it's so weird to see reports that differ so much from each other.
I work at a FAANG (not Amazon), and have heard this a lot, both internally and publicly. Except, never officially from anyone that mattered (leadership). It always starts with a rumor and/or someone (internal) creating a dashboard/metric, and blows up from there. I've even heard leaders proclaim that it's NOT what they're looking at, and that you better NOT be wasting those expensive tokens.
Now, they might be; they've certainly used silly metrics in the past (LoC, commit count, etc.) without ever fully acknowledging it. But I don't believe that it's as simple as more tokens = more better.
In our place it is really a thing and comes from leadership. They feel like they spent a lot on copilot and they want to see people using it.
I feel like it depends on the leader. I've definitely seen leaders value LoC beyond reason and cause worse, bloated codebases by rewarding cowboys with 10k line PRs.
Big companies have thousands of leaders. Many good, many bad.
It's a shame AI now has a universal basic jobs[1] program, but humans still don't. Companies are paying AI to dig holes so other AI can fill them.
[1] https://locusmag.com/feature/cory-doctorow-full-employment/
We didn't. The USSR had 100% employment long ago[0], and all the poverty that goes with it.
This isn't like that, as it isn't funded through taxes. This is private companies experimenting with their money, and risking downstream cost increases that may cause people to go elsewhere, as they do when they try anything new.
This is much better than just funding people regardless of productivity through forced taxes.
[0] https://nintil.com/the-soviet-union-achieving-full-employmen...
> We didn't. The USSR had 100% employment long ago[0], and all the poverty that goes with it.
I don't think USSR poverty rates surpassed those of the Tsarist Russia that preceded it. To their credit, I think the ideological competition between the capitalist and communist blocs was part of what allowed the improvement of workers' living conditions in capitalist countries after WWII. Fear of revolutions kept the one-percenters from taking all the productivity gains of the period. They had to share some to keep the guillotines away. As soon as things went south in the USSR, from the '70s onwards, and capitalism took over the whole world, lacking any sort of viable competition, we reverted to the old norm: workers were denied their share of the productivity gains since then, and here we are now. A regime premised on free competition was undone by the lack of competition to itself.
I'd bet that the goal is for people to 'game' it though. By pushing people to use AI more they'll try it, experiment with it, 'waste' time on it ... and from that they'll learn about it. That's the end goal.
They're using tokens for pointless stuff right now in order to figure out use cases where it helps. You can't do that without also learning where it doesn't help.
My company is doing the same thing.
That is exactly the point. It may be wasteful, but it's the fastest way to explore how AI may actually be useful to your business. Even if 80% of employees are just wasting tokens, you still have 20% who are figuring it out.
Even if that were true it'd mean that current AI usage is overshooting actual, productive use by 5x. This is a problem when all the AI projections are that the current state is the minimum and future usage will be 10+x.
It is difficult to believe that you can cobra effect yourself into greatness. I'd rather say the most useful perk for companies doing this is the AI-washing adoption metrics they can report, which will hopefully (for them) increase valuations.
Lots of people reporting their "I had to use up my tokens, so I burned them on worthless stuff" stories. Incredible thing to do in a climate emergency. Push harder, guys, maybe we can hit 3°C of warming?
This reminds me of the story of how the USSR nearly made whales extinct to meet a quota for whale meat that nobody wanted to eat.
This is why we're clear-cutting forests to build new data centers? Not even for "real" productivity gains, but just for the sake of using the tokens.
Bullshit work has hit escape velocity, won’t be long now before we have huge warehouses filled with people doing sudoku for their daily food allowance, and that’s just how our entire economy functions.
How are we sliding face first into “snowpiercer but dumber”?
Gotta scale and then IPO those startups, so the VCs can cash out profitably.
Yeah, but what can we do? I don't want to be punished at work either.
Luckily I work in app management, and I know they can only see the last date used, so if I just put in one query per day I'm good.
But I'm so sick and tired of this AI hype :(
I've been noticing how our economy keeps getting more Soviet as it becomes more top-down. We basically have central planning now, with all the pathologies inherent in that system, but unlike the Soviets we just have a bunch of guys who happened to get rich or bribe the right people running our GOSPLAN.
> USSR nearly made whales extinct
The USSR accounted for barely 15% of the global catch (with Japan as the leader).
> that nobody wanted to eat
Unsubstantiated.
Within Amazon, token usage is gamified if you use Kiro: your team isn't billed for it the way you're billed for AWS, and you don't have to account for your capacity as in older systems. I've credibly heard of people gaming this internal ranking before anyone paid attention to it. There are also tons of enthusiasts doing all kinds of internal projects and sharing them.
There's definitely some pressure from managers when they hear about N00% productivity boosts in internal presentations, but where I am they'd figure out pretty quickly if you were making up tasks rather than working, and the pressure comes from aggressive deadlines and a shift from the yearly OP1 process to a more agile one.
The "success" of "AI" depends on usage
Some HN commenters believe the purpose of "AI" is data collection, not providing a dubiously valuable "service" for a fee. This is how "Big Tech" operates. Use is "free" (the carrot, the bait). No matter how valuable anyone thinks these "services" are, advertisers are generally the only paying customers
When the data collected is used to train and improve "AI", in addition to selling ad services, the focus on usage makes even more sense
On the contrary, some people will advance storylines about increased "productivity" but these claims have not been supported by evidence, only anecdotes (marketing)
I've heard similar stories from AWS and other non-AWS FAANG employees. All of the token leaderboards have a "this doesn't count toward your performance review" disclaimer, but there's an implied nudge nudge, wink wink after that statement.
One person I've talked to has someone in their org who is running GasTown and chews through tokens 24/7. They don't contribute very much, but they're comfortably in the #1 spot.
When are they going to admit that they over-invested in AI and now have to justify that spend by shoving usage down our throats?
I've done similar at my job where management wants us to use all of our tokens before they expire. I usually set it to documentation tasks and other minor tasks just to eat up tokens.
At least that nominally creates some value at the end of the day. Documentation is the thing everyone wants but no one has time/desire to create. My most recent token heavy task was having an agent write unit tests for coverage on a little graphAPI tool I'd written a bit ago to satisfy SonarQube.
People don't want to read LLM-generated docs though. It'll lack the context to justify why things were designed the way they were, and there's always a risk of hallucination so you still have to verify the documentation's claims, since the person who published it likely did not scrutinize it.
There's really no end to dot-language diagrams you can have it make. Call graphs, package dependency maps, let it try to figure out an architecture diagram, whatever.
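For what it's worth, this kind of diagram churn is trivial to automate yourself without an LLM. A minimal sketch, assuming a flat package of .py files: the function name `import_graph` and the output shape are made up for illustration, using only the stdlib `ast` module to emit a Graphviz dot digraph of module-level imports.

```python
import ast
from pathlib import Path

def import_graph(pkg_dir: str) -> str:
    """Emit a Graphviz dot digraph of module -> imported-module edges
    for every top-level .py file in pkg_dir."""
    lines = ["digraph imports {"]
    for path in sorted(Path(pkg_dir).glob("*.py")):
        mod = path.stem
        tree = ast.parse(path.read_text())
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                for alias in node.names:
                    lines.append(f'  "{mod}" -> "{alias.name}";')
            elif isinstance(node, ast.ImportFrom) and node.module:
                lines.append(f'  "{mod}" -> "{node.module}";')
    lines.append("}")
    return "\n".join(lines)
```

Pipe the result through `dot -Tsvg` and you have the same artifact, minus the tokens.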
Giving it busywork that you don't have the time or wherewithal to check carefully sounds like a disaster. Rather than introduce content that will be partially wrong and cause confusion if it's ever read, I'd consume the credits and send the output to /dev/null.
Just title it "draft". Odds are nobody will look at it anyway.
Add a pre-commit hook to re-create the diagrams on every commit (in case anything changed, of course), that way you can really burn tokens and look good to management.
People need to start yelling, throwing things, and publicly mocking execs who do this. What is wrong with you all? I do this (except the throwing) and I get nothing but respect. If you've been a good little soldier for years, done nothing but deliver, and then you raise your ire, people will listen.
If you can't change your company, change your company!
Let it write unit tests for every single function in the codebase lol
I've chosen the wrong profession.
What's the root cause of these ridiculous decisions at tech corporations? They constantly fall into fads like these that everyone with a brain knows make no sense, yet many companies still decide to follow them. For example, RTO: what was the point? We never knew for sure, but higher-ups at most tech companies suddenly decided RTO was the way forward despite all the downsides. Another example: DEI policies, some of which were quite nonsensical.
I believe there has to be some downward pressure on these executives to make these decisions, but I'd like to know where exactly it's coming from and what the logic is behind it. Is it some big institution like BlackRock, which has leverage over many of these companies? That's always been my bet, but I never knew for sure.
Crappy managers don’t know (or actively avoid) how to measure business value from individuals. So they need you to be in the office so they can physically see if you are putting in the effort.
Tokens are just yet another proxy for business value.
The problem they face is that if everybody is judged by business value in dollars, crappy managers are the first to go.
I don’t even understand the point of making up tasks. Surely there’s some moonshot frustration project in your workday you could have an agent plugging away at, even if it’s unsuccessful.
I have colleagues at Prime Video who consult AI the way medieval clerks once consulted omens, generating entire chains of speculative labor after ritual examinations of their codebases. No real or new initiatives or innovations are being pushed forward, and that's rumored to be happening in other departments as well.
Hasn't Anthropic been experiencing issues due to extremely high usage? As their investor, you'd think Amazon wouldn't do Anthropic dirty by weakening its ability to handle user traffic.
Amazon runs Anthropic models in its own DCs with Bedrock.
Waiting for the YC startup in the next batch that provides tokenmaxxing-as-a-service.
This is coming to my workplace too. They send us angry reminders if we don't use copilot in ms office every day :( I just type Hello to it.
As an investor in Anthropic, Amazon must have a preferred billing rate that others don't. No wonder Anthropic's revenue shot up so much, so fast: BS goals like these.
Long live Goodhart!
Good old Goodhart's law. https://xkcd.com/2899/
Token-driven development
Goodhart's Law in effect right there.
This is foolish. High token use is associated with worse output. If you fill your model's context you'll use a lot more tokens, but the labs literally publish charts showing how the models degrade at high context utilization.
This is analogous to measuring productivity by LoC output.
> This is analogous to measuring productivity by LoC output
True, but it looks like productivity to people whose own productivity is measured by how busy their subordinates appear to be.
Corporate tech has accelerated onto a preposterous trajectory.
Burn resources at all costs to appear productive and use proxy metrics to measure success.
Fire productive employees to ensure we have resources to fund the proxy metrics.
AI slop fool’s gold is the product.
Use Vim or you're fired!!
this one is legit the right call, though
Narrator: “it wasn’t just Amazon”
When a measure becomes a target....
Especially a measure that's so easily manipulated.
I was just going to invoke Goodhart's Law
https://en.wikipedia.org/wiki/Goodhart's_law
There are some secret random seeds that will prevent the end token and just keep generating forever. This will ruin your hardware though.
This is what I do. I tell AI to go through every file in my project, identify up to 10 bugs per file, and then write the markdown with the name of the file plus "bugfix". This takes about 2 hours. Then I delete all the files with the suffix "bugfix" and then do it again.
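A tongue-in-cheek sketch of the loop above, with the LLM call stubbed out by a placeholder `fake_review` function (all names here are hypothetical, since actually burning the tokens is the whole point):

```python
from pathlib import Path

def burn_tokens_once(project_dir: str,
                     fake_review=lambda text: "- TODO: bugs 1 through 10\n") -> int:
    """One pass: 'review' every .py file, write a <name>.bugfix.md report,
    then delete all the reports. Returns the number of reports written."""
    reports = []
    for path in sorted(Path(project_dir).rglob("*.py")):
        report = path.with_suffix(".bugfix.md")
        # In the wasteful version, fake_review would be an expensive LLM call.
        report.write_text(f"# Bugs in {path.name}\n" + fake_review(path.read_text()))
        reports.append(report)
    for report in reports:  # step 2: throw the work away
        report.unlink()
    return len(reports)
```

Run it in a loop and the token meter goes up while the repo stays exactly the same.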
You should probably create an agent to make agents whose jobs are to figure out how to maximize the token usage (and one whose job is to calculate the minimum token usage, so it doesn't look like a boondoggle).
This seems like AI is the new Ponzi scheme.
If GDP is going up, we must be wealthier and more productive, right? Surely? (/s)
New proposed corporate slogan: "Tokens must roll for victory!"
The original (Third Reich): "Wheels must roll for victory!"
It will end in the same manner.