>There is almost certainly at least one person in Redmond working on this that’s smarter and better informed than you, just going off probability.
this isn't the actual test though right? it's surely true, and yet companies make bad decisions all the time. It needs to be conditioned on "someone smarter and better informed than you _with influence in the organization sufficient to enact change_"
Yeah, the guy who designed the Courier for Microsoft was smarter and better informed than me. The problem is he was also smarter and better informed than his leadership at Microsoft, who canned the project in favour of doing nothing, messing around with Nokia, and ending up with a watered-down product years too late in the form of the Surface.
I can tell you for certain about quite a few people like that who are no longer working in Redmond because they have been canned in the last wave of layoffs.
Copilot (referring to the M365 one) is not a very good product, though. It's clearly been rushed to market before it was even done. Microsoft is soooo afraid they'll miss this boat. But we are stuck with the fallout.
That's not what I see from the outside every time versioning comes up. My understanding is that Microsoft marketing has full control over what constitutes a major/minor version bump, like when TypeScript 4 is released vs TypeScript 3.9 (just an example). The people who build TypeScript don't even control their own version numbers.
The versioning logic is fairly simple (although kind of pointless afaict?). Minor versions go X.1 → X.9, and instead of going to .10 it simply increments the major number. With releases averaging every 3 months, major versions end up being bumped roughly every 30 months.
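A minimal sketch of that bump rule as I read it (purely illustrative TypeScript, not the actual release tooling; `nextVersion` is just a made-up name):

    // Hypothetical bump rule: minor runs up to .9, then the major rolls over.
    function nextVersion(current: string): string {
      const [major, minor] = current.split(".").map(Number);
      return minor >= 9 ? `${major + 1}.0` : `${major}.${minor + 1}`;
    }

    console.log(nextVersion("3.9")); // "4.0"
    console.log(nextVersion("4.3")); // "4.4"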
I think functionally it’s just a really awkward date-based versioning scheme on a 30-month calendar instead of a 12-month one.
It's an old school versioning system that was very popular for DOS software. I'm not surprised that TypeScript, being ultimately still an Anders Hejlsberg project, would adopt it.
.NET is not a good representation of Microsoft. It is a uniquely developer centered product (from devs by devs) that is of higher quality than almost everything else they produce.
M$ Nadella is great at strategic business decisions. If only there were another great product person at M$ pushing the design of their software and hardware. Both have gotten a lot better but still aren't at Apple's level of polish.
I’ve used Windows and macOS side by side for over a decade, and macOS BY FAR has a greater level of polish than Windows, which to this day is relatively unstable. I haven’t had myself or a relative experience an OS-level crash in macOS in like five years. Meanwhile, troubleshooting a family member’s BSOD is a regular occurrence.
- I can't tell if my MacBook is charging or not while the lid is closed. There is no light indicator.
- The minimize/maximize buttons are very small given that I am using the touchpad. I am not saying they are unusably small, but they're far from ideal.
- If I fully deplete the battery, I cannot immediately turn it on with the adapter's power. Why? Every Windows laptop can run off the adapter even with a dead battery.
> BSOD is a regular occurrence
I hate Windows as much as the next guy, but are you really getting BSODs in 2025?
I like Linux, and I know it's not exactly polished, but it offers something as a trade-off which neither macOS nor Windows does.
> If I fully deplete the battery, I cannot immediately turn it on with the adapter's power. Why? Every Windows laptop can run off the adapter even with a dead battery.
First, you should not be depleting modern batteries; it's not good for battery health, or indeed storage health. Recall that solid-state storage integrity is not guaranteed in the absence of electrical current (a reminder to those who have a habit of backing up stuff onto SSDs, unplugging them and forgetting about them for extended periods).
Second, if you think about it, it's a safety feature to protect you from data corruption.
If they let you turn on a machine with a fully-depleted battery and you immediately yank out the power cord, then you risk data corruption.
Sure, modern solid-state storage (at least the high-quality implementations) has power-loss protection. But that relies on capacitors. And to charge up capacitors, you need what ... oh yeah, that's right, electricity .... ;-)
So that is most likely why Apple requires you to have a minimal charge before allowing you to power on from completely depleted.
And to be honest, what are you bitching about anyway? Most Apple devices I use only require a 5–10 minute charge from completely depleted to get you to the minimum required. So just go make a cup of coffee and remind yourself not to completely deplete batteries in the future.
> First, you should not be depleting modern batteries; it's not good for battery health,
I think most modern devices already turn themselves off before reaching true ZERO.
> Second, if you think about it, it's a safety feature to protect you from data corruption.
No, it's to make sure that you cannot use a perfectly functional computer without replacing the battery once it has fully died.
> I use only require a 5–10 minute charge
Are you saying apple devices are not good for watching live content?
> You risk data corruption
With autosave (which most apps have), data loss would be minimal.
Before I could afford Apple products, everything Apple looked amazing from afar, but after using them I found out the Apple UI is just pretty and only slightly more polished than Windows.
I'm sure people running out of batteries in the middle of a zoom call or a presentation especially enjoy waiting an additional 5-10 minutes before being able to continue.
> Even if there is fast takeoff AGI, he thinks human messiness will slow implementation
Five years ago during covid all these clunky legacy businesses couldn't figure out a CSV let alone APIs but suddenly with AGI they are going to become tech masters and know how to automate and processize their entire operations?
I found it very amusing that at the turn of the decade "digitalisation" was a buzzword as Amazon was approaching its 25th anniversary.
Meanwhile, huge orgs like the NHS run on fax and were crippled by Excel row limits. Software made a very slow dent in these old, important, slow-moving orgs. AI might speed up the transition, but I don't see it happening overnight. Maybe 5 years, if we pretend smartphone adoption is indicative of AGI and humanoid robot rollout.
I think you don’t have widespread adoption until AI takes the form of a plug-and-play “remote employee”.
You click a button on Microsoft Teams and hire “Bob” who joins your team org, gets an account like any other employee, interacts over email, chat, video calls, joins meetings, can use all your software in whatever state it’s currently in.
It has to be a brownfield solution because most of the world is brownfield.
Completely unusable in any bank, or almost any organization dealing with data secrecy. You have complex, often mandatory processes to onboard folks. Sure, these can be improved but hiring some 'Bob' would be far from smooth sailing.
Big enough corps will eventually have their own trusted 'Bobs', just like they have their own massive cluster farms (no, AWS et al. is not a panacea, and it's far from a cheap and good solution).
Giving any form of access to some remote code into the internal network of a company? Opsec guys would never ack that; there is, and always will be, malice coming from potentially all angles.
I have worked at a place with serious opsec and none of that was allowed. Everything pointed at private mirrors containing vetted packages. Very few people had the permissions to write to those repos.
Not to mention that if Bob works the way the current overhyped technologies do, it will be possible to bribe him by asking him to translate the promise of a bajillion dollars into another language and then repeat it back before deciding on his next steps.
> I think you don’t have widespread adoption until AI takes the form of a plug-and-play “remote employee”.
Exactly. The problem with the AGI-changes-everything argument is that it indirectly requires "plug-and-play" quality AGI to happen before / at the same time as specialized AGI. (Otherwise, specialized AGI will be adopted first)
I can't think of a single instance in which a polished, fully-integrated-with-everything version of a new technology has existed before a capable but specialized one.
E.g. computers, cell phones, etc.
And if anyone points at the "G" and says "That means we aren't talking about specialized in any way," then you start seeing the technological unlikeliness of the dominoes falling as they'd need to for an AGI fast ramp.
Honestly, I think the mode that will actually occur is that incumbent businesses never successfully adopt AI, but are just outcompeted by their AI-native competitors.
Sears also did everything it could to annihilate itself while dot-com was happening.
Their CEO was a believer in making his departments compete for resources, leading to a brutal, dysfunctional clusterfuck. Rent-seeking behavior on the inside as well as the outside.
"Bob" in this example is just some other random individual contributor, not some master of the universe. E.g. they would have the title "associate procurement specialist @ NHS" and join and interact on zoom calls with other people with that title in order to do that specific job.
Right, but these jobs are inefficient mostly because of checks and balances. So unless you have a bunch of AIs checking one another's work (and I'm not sure I can see that getting signed off) doesn't it just move the problem slightly along the road?
There's an argument here something like.. if you can replace each role with an AI, you can replace multiple with a single AI, why not replace the structure with a single person?
And the answer is typically that someone has deemed it significant and necessary that decision-making in this scenario be distributed.
Yup. If we ignore all the ‘people’ issues (like fraud, embezzlement, gaming-the-system, incompetence when inputting data, weird edge cases people invent, staff in other departments who are incompetent, corruption, etc), most bureaucracies would boil down to a web form and a few scripts, and probably one database.
Better hope that coder doesn’t decide to just take all the money and run to a non extradition jurisdiction though, or the credentials to that DB get leaked.
Just look at names. Firstname, lastname? Hehehe, no.
Treating them as constants? laughs in married woman.
If you can absolutely, 100% cast iron guarantee that one identity field exactly identifies one living person (never more, never less), these problems are trivial.
If not? Then its complexity might be beyond the grasp of the average DOGE agent (who, coincidentally, are males in their early 20s with names conforming to a basic Anglo schema).
The ability to use that tech effectively to optimize the organization's internal processes. Or to do the job of a person without actually being a person with a name that can be held accountable.
Most of those orgs have people in key positions (or are structurally set up in such a way) that it isn’t desirable to change these things.
As a first order of business, a sufficiently advanced AGI would recommend that we stop restructuring and changing to a new ERP every time an acquisition is made or the CFO changes, and to stop allowing everyone to have their own version of the truth in excel.
As long as we have complex manual processes that even the people following them can barely remember the reason for, we will never be able to get AI to smooth them over. It is horrendously difficult for real people to figure out what to put in a TPS report. The systems that you refer to need to be engineered out of organisations first. You don't need AI for that, but getting rid of millions of Excel files is needed before AI can work.
I don't know that getting rid of those wacky Excel sheets is a prerequisite to having AI work. We already have vendors like Automation Anywhere watching people hand-carve their TPS reports so that they can be recreated mechanistically. It's a short step from feeding the steps to a task runner to feeding them to an AI agent.
Paradigm shifts in technology do not generally occur simultaneously with changes in how we organize the work to be done. It takes a few years before the new paradigm works its way into the workflow and changes it. Lift and shift was the path for several years before cloud native became a thing, for example. People used iPhones to call taxi companies, etc.
It would be a shame to not take the opportunity to tear down some old processes, but, equally, sometimes Chesterton's fence is there for good reason.
But why are these sorts of orgs slow and useless? I don't think it is because they have made a conscious decision to be so; I think it is more that they do not have the resources to do anything else. They can't afford to hire in huge teams of engineers and product managers and researchers to modernize their systems.
If suddenly the NHS had a team of "free", genuinely PhD-level AGI engineers working 24/7, they'd make a bunch of progress on the low-hanging fruit and modernize and fix a whole bunch of stuff pretty rapidly, I expect.
Of course the devil here is the requirements and integrations (human and otherwise). AGI engineers might be able to churn out fantastic code (some day at least), but we still need to work out the requirements and someone still needs to make decisions on how things are done. Decision making is often the worst/slowest thing in large orgs (especially public sector).
It's not a resource problem; no one inside the system has any real incentive to do anything innovative. Improving something incrementally is more likely to be seen as extra work by your colleagues and be detrimental to the person who implemented it.
What's more likely is that a significantly better system is introduced somewhere, the NHS can't keep up, and it is rebuilt by an external party. (Or, more likely, it becomes an inferior system of a lesser nation as the UK continues its decline.)
I think this is where the AGI employee succeeds where other automation doesn’t. The AGI employee doesn’t require the organization to change. It’s an agent that functions like a regular employee in the same exact system with all of its inefficiencies, except that it can do way more inefficient work for a fraction of the cost of a human.
Assuming we get to AGI and companies are willing to employ them in lieu of a human employee, why would they stop at only replacing small pieces of the org rather than replacing it entirely with AGI?
AGI, by most definitions at least, would be better than most people at most things. Especially if you take OpenAI's definition, which boils it down only to economic value, a company would seemingly always be better off replacing everything with AGI.
Maybe more likely: AGI would just create superior businesses from scratch and put human companies out of business.
This is a huge selling point, and it will really differentiate the orgs that adopt it from those who don’t. Eventually the whole organization will become as inscrutable as the agents that operate it. From the executive point of view this is indistinguishable from having human knowledge workers. It’s going to be interesting to see what happens to an organization like that when it faces disruptive threats that require rapid changes to its operating model. Many human orgs fall apart faced by this kind of challenge. How will an extra jumbo pattern matcher do?
IMO it comes from inertia. People at the top are not digital-native. And they're definitely not AI-native.
So you're retrofitting a solution onto a legacy org. No one will have the will to go far enough fast enough. And if they didn't have the resources to engineer all these software migrations who will help them lead all these AI migrations?
Are they going to go hands off the wheels? Who is going to debug the inevitable fires in the black box that has now replaced all the humans?
And many of the users/consumers are not digital-native either. My dad is not going to go to a website to make appointments or otherwise interact with the healthcare system.
In fact most of the industries out there are still slow and inefficient. Some physicians only accept phone calls for making appointments. Many primary schools only take phone calls and an email could go either way just not their way.
It's just we programmers who want to automate everything.
Today I spent 55 minutes in a physical store trying to get a replacement for two Hue pendant lights that stopped working. The lights had been handed in a month ago and diagnosed as "can't be repaired" two weeks ago. All my waiting time today was spent watching different employees punching a ridiculous amount of keys on their keyboards, and having to get manual approval from a supervisor (in person) three times. I am now successfully able to wait 2-6 weeks for the replacement lights to arrive, maybe.
When people say AI is going to put people out of work, I always laugh. The people I interacted with today could have been put out of work by a well-formulated shell script 20 years ago.
Nonsense. They wouldn't be out of work, their jobs would just be easier and more pleasant. And your experience as a customer would be better. But clearly their employer and your choice of store isn't sufficiently incentivized to care, otherwise they would have done the software competently.
The hilarious thing is there is absolutely no improvement AI could possibly make in the experience you've described. It would just be a slower, costlier, more error prone version of what can easily be solved with a SQL database and a web app.
You obviously don't live in the UK, where the mad dash at 8:00am on the dot to attempt to secure an appointment happened, and the line would be busy until 8:30am when they ran out of appointment slots, if you were unlucky on the re-dial/hangup rodeo.
Apps (actually a PWA) mean I can choose an appointment at any time of the day and know that I have a full palette of options over the next few days. The same app (PWA) allows me to click through to my NHS login, where I can see my booked appointments or test results.
> Five years ago during covid all these clunky legacy businesses couldn't figure out a CSV let alone APIs but suddenly with AGI they are going to become tech masters and know how to automate and processize their entire operations?
The whole point of an AGI is that you don't need to be a tech master (or even someone of basic competence) to use it, it can just figure it all out.
Technical people don't write code; they (along with product people) specify things exhaustively. While in theory a super AGI can do everything (thereby deprecating all of humanity), in reality I suspect that, given the existing pattern in orgs of layers of managers who don't like to wade into the details and specify to the nth degree, there will be a need for lots of SMEs. AI will probably be a leaky abstraction, and you'll still need technical people to guide the automation efforts.
> Matt Hancock has banned the NHS from buying fax machines and has ordered a complete phase-out by April 2020.
The NHS is quite federated; hell, many parts of it are independent companies. Some trusts have decent modern systems, though. I had to go for a test just before Christmas: I'd phoned up my GP in the morning, got an appointment for half an hour later, he ordered a test and said go to one of these 8 centres, so I went to one about half an hour away (I live a fair way from a major town). I had the test, and by the time I'd had lunch and driven back home I had another call from the GP asking me to come in that evening. The appointment was created by the GP and read seconds later at the hospital, the test was done there, and the results were reported back again at the click of a button in the system at the GP.
But that's just my local trust. Go 10 miles west and it's another trust and they have different systems. And I had to go to one of the test centres in the trust, I couldn't go to one in a neighbouring trust as they have different systems and there's no/limited interconnects.
It's all pretty much a central system as of this year, your trust will have localised PWAs or Apps, but it all connects via provider IDs and transfer codes to the same NHS backend.
The healthcare system I largely use in the US was bought up by another larger system. They are now primarily on an electronic system (which for me is welcome) but some of the offices still use faxes as well.
I had a stint in healthcare software and can confirm the fax thing is real.
That said, I had a hunch it's all cloud-hosted fax-sending systems delivering to cloud-hosted fax-to-PDF systems. I challenged my colleagues to show me an actual fax machine and never had any takers.
The minicomputer revolution was meant to replace all the big iron in the basement with computers shared by teams. This lasted a decade until we got computers meant for one person and we've used that format of computation for nearly 50 years now.
Everyone is talking about how AI will replace people.
It won't.
AI without HI is useless. However, HI with AI is much better at handling all the bullshit office work that needs to get done—anyone who doesn't understand this is either an idiot, a troglodyte, or already has an expendable army of minions doing their work.
The surge in AI demand won't be top-down but bottom-up. That stupid office birthday wish card being passed around? I can Shakespeare the motherfucker. And that email to the big boss telling him the project is doomed and that he needs a compass to find his own ass? Let's ask R1.
>My boss is a moron who can't find his ass without a compass. His latest tech initiative will cost us millions and make the best staff leave. Write a professional letter that covers those points without being harsh.
---
Subject: Request for Review of Upcoming [Tech Initiative]
Dear [Recipient Name/Team Name],
I hope this message finds you well. I wanted to share some thoughts regarding our upcoming [Tech Initiative Name], as I believe open dialogue will ensure its success while aligning with our team’s strengths and long-term goals.
While the initiative’s intent to innovate is commendable, I’ve observed several potential challenges that may impact its effectiveness:
Financial Impact: Based on preliminary analysis, the projected costs appear to be significantly higher than initially estimated. A detailed cost-benefit review or phased rollout might help mitigate financial risk while allowing us to test feasibility.
Team Retention Concerns: Many of our top performers have expressed reservations about the initiative’s current structure, particularly around [specific pain point, e.g., workflow disruption, lack of clarity]. Retaining their expertise will be critical to execution, and their insights could refine the plan to better address on-the-ground needs.
To ensure alignment with both our strategic objectives and team capacity, I respectfully suggest:
Conducting a collaborative risk assessment with department leads.
Piloting the initiative in a controlled environment to gather feedback.
Hosting a forum for staff to voice concerns/solutions pre-launch.
I’m confident that with adjustments, this project can achieve its goals while preserving morale and resources. Thank you for considering this perspective—I’m eager to support any steps toward a sustainable path forward.
To be honest, that kind of sounds like a dystopian hell: ChatGPT writing memos because we can't be arsed, and then reading the same memos because neither can the recipient. Why even bother with it?
With a well-working RAG system you can find the reason why any decision was made, so long as it was documented at some point somewhere. The old SharePoint drives with a billion unstructured Word documents going back to the 1990s are now an asset.
Since I don't have an expendable army, I must be either an idiot or a troglodyte. Where my understanding falters is finding a domain where accuracy and truth aren't relevant. In your example you said nothing about a "phased rollout", is that even germane to this scenario? Is there appreciable "financial risk?" Are you personally qualified to make that judgement? You put your name at the bottom of this letter and provided absolutely no evidence backing the claim, so you'd best be ready for the shitstorm. I don't think HR will smile kindly on the "uh idk chatgpt did it" excuse.
"Here is the project description:
[project description]
Help me think of ways this could go wrong."
Copy some of the results. Be surprised that you didn't think of some of them.
"Rewrite the email succinctly and in a more casual tone. Mention [results from previous prompt + your own thoughts]. Reference these pricing pages: [URL 1], [URL 2]. The recipient does not appreciate obvious brown nosing."
If I was sending it as a business email I'd edit it before sending it off. But the first draft saved me between 10 to 30 minutes of trying to get out of the headspace of "this is a fucking disaster and I need to look for a new job" to speaking corporatese that MBAs can understand.
Is that really a problem most people have in business communication? I can't recall a time sending a professionally appropriate email was actually hard.. Also, consider the email's recipient. How would you feel if you had to wade through paragraph upon paragraph of vacuous bullshit that boils down to "Hey [name], I think the thing we're doing could be done a little differently and that would make it a lot less expensive, would you like to discuss?"
Our current intellectual milieu is largely incapable of nuance—everything is black or white, on or off, good or evil. Too many people believe that the AI question is as bipolar as every other topic is today: Will AI be godlike or will it be worthless? Will it be a force for absolute good or a force for absolute evil?
It's nice to see someone in the inner circles of the hype finally acknowledge that AI, like just about everything else, will almost certainly not exist at the extremes. Neither God nor Satan, neither omnipotent nor worthless: useful but not humanity's final apotheosis.
Whoa there, hold yer horses ;) Let's wait and see until we have something like an answer to "will it be intelligent?" Then we might be ready to start trying to answer "will it be useful?"
I agree completely though, it's nice to see a glimmer of sanity.
EDIT: My best hope is this bubble bursts catastrophically and puts the dotcom crash to shame. Then we might get some sensible regulation over the absolute deluge of fraudulent, George Jetson spacecamp bullshit these clowndick companies spew on a daily basis.
If by "messiness" he means "the general public having massive problems with a technology that makes the human experience both philosophically meaningless and economically worthless", then yeah, I could absolutely see that slowing implementation.
I'm a hobby musician. There are better musicians. I don't stop.
I like to cook. There are better cooks. I don't stop.
If you are Einstein or Michael Jordan or some other best-of-the-best, enjoyment of life and the finding of worth/meaning can be tied to absolute mastery. For the rest of us, tending to our own gardens is a much better path. Finding "meaning" at work is a tall order and it always has been.
As for "economically worthless," yes if you want to get PAID for writing code, that's a different story, and I worry about how that will play out in the scope of individual lifetimes. Long term I think we'll figure it out.
Some paraphrasing on my part, but what he does say is at odds with fast takeoff AGI. He describes AI as a tool that gradually changes human work at a speed that humans adapt to over time.
In fact I don't think he believes in the takeoff of AGI at all, he just can't say it plainly. Microsoft Research is one of the top industry labs, and one of its functions (besides PR and creating the aura of "innovation") is exactly this: to separate the horseshit from things that actually have realistic potential.
Bit of a clickbait title, but it certainly seems like the realization is setting in that the hype has exceeded near-term realistic expectations, and some are walking back claims (for whatever reason: honesty, derisking against investor securities suits, poor capital allocation, etc.).
Nadella appears to be the adult in the room, which is somewhat refreshing considering the broad overexuberance.
2) Wouldn't believe that it's about to replace human intellectual work
In other words AI got advanced enough to do amazing things but not 500B or T level of amazing and people with the money are not convinced that it will be anytime soon.
I went back. It’s fine at solving small problems and jump-starting an investigation, but only a small step better than a search engine. It’s no good for deep work. Any time I’ve used it to research something I know well, it’s got important details wrong, but in a confident way that someone without my knowledge would accept.
RLHF trains it to fool humans into thinking it’s authoritative, not to actually be correct.
This is exactly the experience I've had. I recently started learning OpenTofu (/Terraform), and the company now has Gemini as part of the Workspace subscription. It was great for getting something basic going, but it very quickly starts suggesting wrong or old or bad practices. I'm still using it as a starting point and to help me know what to start searching for, but like you said, it's only slightly better than a regular search engine.
I use it to get the keywords and ideas, then use the normal search engine to get the facts. Still, even in this limited capacity I find the LLMs very useful.
I have started using AI for all of my side projects, and am now building stuff almost everyday. I did this as a way to ease some of my anxiety related to AI progress and how fast it is moving. It has actually had the opposite effect; it's more amazing than I thought.
I think the difficulty in reasoning about 2) is that given what interesting and difficult problems it can already solve, it's hard to reason about where it will be in 3-5 years.
But I am also having more fun building things than perhaps in the earliest days of my first code, which was written just over 7 years ago now.
Insofar as 1) goes, yes, I never want to go back. I can learn faster and more deeply than I ever could. It's really exciting!
I’ve tried to use it as a web search replacement and often the information is generic or tells me what I already know or wrong.
I’ve used a code suggestions variant long before OpenAI hype started and while sometimes useful, rarely is it correct or helpful on getting over the next hurdle.
Any code from my coworkers is now just AI slop they glanced over once. Then I spend a long time reviewing and fixing their code.
I really don’t find spending time writing long form questions so a bot can pretend to be human all that time saving, especially if I have to clarify or reword it to a specific “prompt-engineer” quality sentence. I can usually find the results faster typing in a few keywords and glancing at a list of articles. My built in human speed reading can determine if what I need is probably in an article.
LLM seriousness has made my job more difficult. I would prefer if people did go back.
In my case it's coding real-world apps that people use and pay money for. I no longer personally type most of my code; instead I describe stuff or write pseudocode that LLMs end up converting into the real thing.
It's very good at handling the BS part of coding, but it's also very good at knowing things that I don't know. I recently used it to hack a small Bluetooth printer that requires its own iOS app to print. Using DeepSeek and ChatGPT, I was able to reverse engineer the printer communication and then create an app that will print whatever I want from my macOS laptop.
Before AI I would have had to study how Bluetooth works; now I don't have to. Instead, I use my general knowledge of protocols and communications, describe it to the machine, and ask for ideas. Then I try things and ask about the stuff that I noticed but don't understand, then I figure out how this particular device works, describe it to the machine, and ask it to generate code that will do the thing that I discovered. LLMs are amazing at filling the gaps in patchy knowledge, like my knowledge of Bluetooth. Because I don't know much about Bluetooth, I ended up creating a CRUD for Bluetooth, because that's what I needed when trying to communicate with and control my Bluetooth devices (it's also what I'm used to from web tech). I'm a bit embarrassed about it, but I think I will release it commercially anyway.
If I have a good LLM at hand, I don't need specialised knowledge of frameworks or tools. A general understanding of how things work, and building up from there, is all I need.
I see; for single-operator products with no customers, it works nicely. You may find you use it less and less, and you will actually require that Bluetooth knowledge eventually as you grow a product.
LLMs so far seem to be good at developing prototype apps. But most of my projects already have codegen and scaffolding tools so I guess I don’t get that use out of them.
I predict that once you release your embarrassing app, you will find all the corner cases and domain secrets come rearing out with little ability of the LLM to help you (especially with Bluetooth).
The Bluetooth app thing is just an example of LLMs helping me build something I don't have beyond-basics knowledge of.
For other stuff, I still find it very useful because why would I bother to code something non-novel when I can just tell the LLM what I need?
For example, if I need code that finds the device a given characteristic belongs to (Bluetooth stuff, again), I can just tell the LLM to write it for me. It doesn't take a genius to write such code, it's elemental stuff, and I would rather not spend my working memory on remembering the structures and names of variables. I copy+paste the current class that handles the Bluetooth comms, tell it that I need a function for sending data to the printer, and it gives me back the result. There's no art in writing such code; it's standard code for an API and I would prefer not to bother with it.
“Before AI I would have to study how Bluetooth works now I don't have to.”
And
“It's very good at handling the BS part of coding…”
This is the part that I think is difficult in a team situation.
Learning and understanding is the important part, and certainly isn’t BS.
I understand that it really can make it seem like velocity has increased when you really are shipping things that more or less “work”, but it’s really a good practice to understand your code.
I’ve had to spend a significant amount of time fixing work that was admittedly generated using AI by other engineers, and I really fear engineers are beginning to trade deep understanding for the high of getting something that “works” with little effort.
It might “work” but you might be ignoring the work everyone around you is doing to clean up your brittle code that doesn’t scale and wasn’t thought through at inception.
You have an entirely valid worry, and I get a bit scared at my use of AI because of this. I fear that dev jobs might go away or become third-world-only jobs like electronics manufacturing, but in the meantime it's scary how much it atrophies your mind. At the same time, it has opened up a universe of answers to questions I wouldn't normally ask because the bar was too high. Everyone seems to have their own unique stories.
For example, just today I dumped a crash log from the Mac version of Microsoft Remote Desktop into it. This damn app locks up 10 times a day for me, causing a "Force Quit" event and a subsequent crash dump to be generated. Normally what can I do with that crash dump other than send it off to Apple/Microsoft? It identified where it thought the crash was coming from: excessive right-clicking causing some sort of foundational error in their logic. Avoiding right-clicking has solved the issue for me. Now that I write this out, I could have spent hours upon hours finding a needle in a haystack and that would probably have made me a better developer, but the bar is too high; there is too much other work I have to get done to chase this. Instead I would have just lived with it. Now I have some closure at least.
Again, it seems like everyone has their own unique stories. Is AI taking everything over? Not yet. Can I go back to pre-AI? No, it's like going back to Windows 95.
It is effective because you can spend your mental energy on the things that matter, things that make difference.
Code quality actually doesn't matter when you remove the human from the loop as long as it works correctly because it becomes something made by a machine to be interpreted by a machine.
Code isn’t a binary scale of works or doesn’t - there is inefficient code and insecure code and everything else in between that still technically “works” - but a lack of understanding will eventually cause these “working” solutions to catch up to you.
You can always revisit that part of code if it doesn’t perform. For vast majority of code running on consumer devices there’s no difference between smart implementation and mediocre implementation. LLMs are great at being mediocre by default.
As for security, that mostly stems from the architecture. The LLMs' mediocrity also helps with following industry conventions and best practices.
In my case I never get the code being written at once, instead I make LLMs write pieces that I put together myself. Never got used to copilot or Cursor, I feel in control only with the chat interface.
Not understanding how Bluetooth works while building a Bluetooth thing seems like… a problem, though. Like, there are going to be bugs, and you’re going to have to deal with them, and that is where the “just ask the magic robot” approach tends to break down.
Funny enough, you already don't have access to the low-level radio, so building a "Bluetooth thing" is just about dealing with some libraries and an API.
Bugs happen, but it's not that different from any other type of bug. Also, you end up learning about Bluetooth as bugs and other unexpected behavior happen. The great thing about LLMs is that they are interactive, so for example, when collecting Bluetooth packets for analysis I ended up learning that communication over Bluetooth is a bit like talking through a middleman: some packet types are only about giving instructions to the Bluetooth chip, and others are actually about communicating with the connected device.
Using an LLM to code something you don't understand is much different from Googling something and then copy+pasting a snippet from Stack Overflow, because you can ask for instant explanations and modifications to test edge cases and other ideas.
The only part I would quibble with is the fear that superficial AI generated code becomes widespread. It's not that I think this won't happen, and I wouldn't want it on my team, but I think it could actually increase demand for competent software engineers.
I got into coding about a decade ago when cheap outsourcing had been all the rage for a number of years. A lot of my early experience was taking over very poorly written apps that had started off with fast development and then quickly slowed down as all of the sloppy shortcuts built up and eventually ground development to a halt. There's a decent chance LLMs lead to another boom in that kind of work.
For mass production/scalability, I absolutely agree with you.
For products that won't be scaled, I imagine it becomes just another abstraction layer, with the cost of human input outweighing the cost of the additional infrastructure / beefing up hardware to support the inefficiencies created.
Oh come on, I'm not an "AI believer", but it regularly does things for me like write complex SQL queries that I can then verify are correct. Just something like that will often save me 20-40 minutes over doing it manually. There is _something_ there, even if it's not going to replace the workforce anytime soon.
For code completion, I’ve found it’s not good at jumping hard hurdles, but it is a bit better than find replace (e.g. it finds things that are syntactically different, but semantically related), and can notice stuff like “you forgot to fix the Nth implementation of the interface you just extended”.
It’s also good at “I need to do something simple in a language I do not know”.
I’ve definitely encountered ai slop from coworkers. I’m sure they also produce stack overflow copy paste garbage too. Dealing with their newly-found increased “productivity” is an open problem.
Insisting on strict static typing helps. The LLMs can’t help with that, and it forces a higher bar before compilation succeeds.
> Wouldn't believe that it's about to replace human intellectual work
Yea idk about that one chief. I have been working in ML (specifically scaling of large model training) at FAANG for the past 8 years, and have been using AI for my work since basically the first time this became even slightly usable, and I don’t share your optimism (or pessimism depending on how you see it).
Yes it’s still pretty bad, but you have to look at rate of improvement, not just a static picture of where we are today.
I might still be wrong though, and you may be right, but claiming that anyone using AI believes as you do is flat-out false. A lot of my colleagues also working in ML research think like me, btw.
It's a figure of speech; obviously it's a spectrum, where some believe that AGI is around the corner and others that all this is nothing more than an overblown statistics exercise and LLMs have nothing to do with actual intelligence.
In my opinion, this generation of AI is amazing, but it isn't it.
He doesn't actually say that, the (very biased and polemical) article writer seems to have made that up. The actual quote is:
"Us self-claiming some [artificial general intelligence] milestone, that's just nonsensical benchmark hacking to me. So, the first thing that we all have to do is, when we say this is like the Industrial Revolution, let's have that Industrial Revolution type of growth. The real benchmark is: the world growing at 10 percent. Suddenly productivity goes up and the economy is growing at a faster rate. When that happens, we'll be fine as an industry."
That's a completely different statement from "AI is generating no value"!
Lining up for whatever the next thing is. "Look, we know we said AR/VR was the next big thing in the late tens and LLMs were the next big thing in the early 20s, but quantum is the next big thing now. For real, this time!"
(Not entirely sure what the next fad will be, but some sort of quantum computing thing doesn't feel unlikely. Lot of noise in that direction lately.)
Curiously, all of these three (VR/AI/QC) are limited by hardware. But AI is the only one that has seen meaningful progress by just throwing more contemporary hardware at it. Sure, future hardware might bring advancements to all of them. But if you're making an investment plan for the next quarter, the choice is pretty obvious. This is why AI rules the venture capitalist sector instead of fusion or other long term stuff.
Of the three, QC is different in that it's not a solution looking for a problem. If we ever scale QC to the point where it can do meaningful work (the "if" is doing a lot of work there - per your point about hardware), then I don't see it fumbling like the other two have. We have immediate and pressing needs that we know how to solve with QC. The other two are very much research coming up with cool toys, and product fucking around so that they can find out what to use them for.
Did they tell the M365 sales/marketing teams about this? My users get bombarded with sales pitches, new free trials and other comms about how wonderful Copilot is. It's almost a full-time job to manage people's expectations around this...
Nadella is just saying we haven't yet seen a revolution measurable by 10% economic growth; he makes no statement about the future.
Most people have no clue how to use AI or where to use it in their lives. There was a guy at work who was submitting command-like queries (give meeting summary) and complained about how it left out XYZ. Then I told him to ask "Give me the meeting summary with X, Y, Z" or "what did so and so say about Y."
His mind was blown.
We are in the first inning. We haven't figured out how to integrate this into everything yet.
Nadella is looking for the world to grow at 10% due to AI enhancement, like it did during the industrial revolution.
That seems like a low bar, because it already is; it's just not equally distributed yet.
My own productivity has grown far more than 10% thanks to AI, and I don't just mean in terms of dev. It reads my bloodwork results, speeds up my ability to repair a leak in my toilet tank, writes a concise "no I won't lend you money; I barely know you" message... you name it.
Normally all of those things would take much longer and I'd get worse results on my own.
If that's what I can do at the personal level, then surely 10% is an easily-achievable improvement at the enterprise level.
All I hear is anecdotal statements from people claiming LLMs have made them some percent more productive. Yet few actually say how or demonstrate it.
For the last year, I've tried all sorts of models both as hosted services and running locally with llama.cpp or ollama. I've used both the continue.dev vscode extension and cursor more recently.
The results have been frustrating at best. The user interface of the tools is just awful. The output of any model, from DeepSeek to Qwen to Claude to whatever else, is mediocre to useless. I literally highlight some code that includes comments about what I need, and I even include long explicit descriptions etc. in the prompts, and it's just unrelated garbage out every time.
The most useful thing has just been ChatGPT when there's something I need to learn about. Rubber ducking basically. It's alright at very simple coding questions or asking about obscure database questions I might have, but beyond that it's useless. Gotta keep the context window short, or it starts going off the rails every single time.
If LLM chatbots are making you vastly more productive in a field, you are in the bottom 20% of that field.
They're still useful tools for exploring new disciplines, but if you're say a programmer and you think ChatGPT or DeepSeek is good at programming, that's a good sign you need to start improving.
This. I shudder to think of the hubris of a programmer who doesn’t understand pointers prompting an AI model to generate low-level system code for them. Sure it might generate a program that appears to work. But is that human reading the code qualified to review it and prevent the model from generating subtle, non-obvious errors?
If you have to tell others that then perhaps some introspection for yourself might be helpful. Comes across more as denial than constructive commentary.
I do believe the benefit decreases the more senior or the more familiar the work is, but there is still a noticeable benefit, and I think it largely depends on the velocity and life cycle of the product. I think you get less benefit the slower the velocity or the more mature the product. To deny it like in your post is simply being an intellectual minimalist.
You make a polite but still ad hominem "attack" about me instead of addressing my points with demonstrations of evidence.
Make a video or blog article actually showing how your use of LLMs in coding is making you more productive. Show what it's doing to help you that has a multiplier effect on your productivity.
Oh I see, I had replied to your comment directly where I was stating that I find it surprising that folks like yourself are so quick to attack, though looking at your response here it's not that surprising.
I don't think it deserves a video or blog, like I already said the multiple posts that have made HN front page have covered it well.
- Autocomplete saves me keystrokes usually
- Features like Cursor's composer/agent allow me to outsource junior-level changes to the code base. I can copy/paste my requirements and it gives me the diffs of the changes when it's done. It's often at a junior level or better and tackles multi-file changes. I usually kick this off and go make other changes to the code base.
Now, like I have said before, this depends a lot on the velocity of the team and the maturity of the code base. With more mature products you will see less benefit on feature implementation and most likely more opportunity in the test-writing capabilities. Likewise, teams with a slower cadence (think a blue-chip software company compared to a startup) are not going to get as much benefit either.
Instead of being so aggressive, simply say why it does not work for you. These tools thrive in web dev, which you may not be involved in!
I have a good shoe business. Can you give me a couple hundred billion dollars? Good news: I promise you trillions, in a year or two or ten, maybe; who knows, you can extrapolate into a future science-fiction reality yourself. So when are you transferring the money?
You are now moving the goalposts from discussing whether this adds value to how much it is worth. There is plenty of open debate about the level of investment, but the hyperscalers are flush with cash, and it probably hurts more to under-invest and be wrong than to over-invest.
I would like to propose a moratorium on these sorts of “AI coding is good” or “AI coding sucks” comments without any further context.
This comment is like saying, “This diet didn’t work for me” without providing any details about your health circumstances. What’s your weight? Age? Level of activity?
In this context: What language are you working in? What frameworks are you using? What’s the nature of your project? How legacy is your codebase? How big is the codebase?
If we all outline these factors plus our experiences with these tools, then perhaps we can collectively learn about the circumstances when they work or don’t work. And then maybe we can make them better for the circumstances where they’re currently weak.
I feel like diet as an analogy doesn't work. We know that the only way to lose weight is with a caloric deficit. If you can't do this, it doesn't matter what you eat you won't lose weight. If you're failing to lose weight because of a diet you are eating too much, full stop.
Whereas measuring productivity and usefulness is way more opaque.
Many simple software systems are highly productive for their companies.
I think it's about scope and expectations. I have had some form of AI code completer in my neovim config for 3 years. It works flawlessly and saves me tons of keystrokes. Sure, sometimes it suggests the incorrect completion, but I just ignore it and keep coding as if it didn't exist. I am talking about line-by-line completion, not entire code blocks, though even those it does well at times.
From what I have seen the people that have the most success have AI building something from scratch using well known tooling (read: old tooling).
The problem is that doesn't immediately help most people. We are all stuck in crap jobs with massive, crusty code bases. It's hard for AI because it's hard for everyone.
I've been using Amazon Q Developer as it was provided and approved by my employer. It has been pretty good with Python codebases, Kubernetes configurations, and (not surprisingly) CDK/CloudFormation templates. I can pretty much just ask it "here's my python script, make everything I need to run it as a lambda, hook that lambda up to x, it should run in a vpc defined in this template over here", and it'll get all that stuff put together, and it's normally pretty solid code it generates. It seems to pull in a lot of the context of the project I have open. For instance, I can say "it should get those values from the outputs in other-cf-template.yml" and it knows the naming schemes and whatnot across templates, even if it didn't generate those templates.
I might go back and tweak some stuff, add some extra tags and whatnot, but often it's pretty good at doing what I ask.
Sometimes its suggestions aren't what I was really wanting to do in my codebase, a handful of times it has made up methods or parameters of even well-known libraries. But usually, its suggestions are better than a basic IntelliSense-style autocomplete at least in my experiences.
I haven't used many of the other developer assistant plugins like say GitHub Copilot. I couldn't really say which is better or worse. But I do think using Q Developer has made me faster in many tasks.
I wouldn't expect a tool that doesn't have access to the context of my editor and the files I have open to be very useful for actually coding. There's a lot of context to understand in even a basic application. If you're just asking a locally running app in ollama "give me a method to do x", don't be surprised if it doesn't know everything else happening in your app. Maybe it'll give you a halfway decent example of doing something, but devoid of how it actually plugs in to whatever you're making it might be entirely worthless.
Just in the past couple months there have been a number of "I am a senior/principal engineer and this is how I use LLMs". I would agree that the tools are not optimal yet but every iteration has improved for me.
Maybe whatever language you are coding it or whatever project you are working on is not a good fit? It is an equally perplexing situation for myself when I hear anecdotes like yours which don't align with my experience. The fact that you say everything is garbage calls into question either how you are using the tool or something else.
I can reliably use cursor's composer to reference a couple files, give a bullet list of what we are trying to do and point it to one of the better models and the output is junior engineer level or better output. When I say junior, I mean a junior who has experience with the codebase.
Generally a lot of web-dev which is where I would assume LLMs shine the best. I noted elsewhere but I think it depends a lot on the age of the product and the level of velocity. For early life products where the speed of your velocity matters, I think you can get the most benefit. The more mature the product and the slower the team implements features, the benefits are still measurable but not as high.
Ah yeah, I can totally see how it can be useful for churning out tons of code. Even without copy-paste, just generating a ton of references and rewriting/improving them. Anecdotally, I've tried asking DeepSeek to review a few files of my code; it wasn't bad at all, though not without false positives.
I agree with the other commenter that said if you're "vastly" more productive as a developer due to AI, you probably weren't that good to begin with. Otherwise, please provide concrete examples.
Myself, I do find it quite useful in a few respects. First and foremost, as a "better Google/StackOverflow." If something's not working, I can describe my exact scenario and usually get pointed in the right direction. Sometimes the LLM just wastes my time by very confidently telling me some function/library that solves my exact problem exists when in fact it doesn't.
Second, IntelliJ's local LLM is sort of a smarter autocomplete. It makes some outright wrong suggestions, but when there's areas where I have to do a lot of repetitive tasks that follow a simple pattern (like for instance, mapping fields from one type of object to another), it does a pretty good job of making correct suggestions. I definitely appreciate it but it's certainly not doing things like writing a significant portion of code in my style.
Seriously. It’s like half of the people in this thread are living in a completely different world.
And this is coming from someone who uses LLMs daily at the subscription, API (VS Code and 3 Next.js apps) and local level. I have a custom LangChain stack, a prompt repo, you name it. And regardless of how little or how much I use what I have, or whatever soup-du-jour prompt or process (from "keep it simple" to prompt enhancers), I can't say it's made a meaningful difference in my life. Even with all of the customization and planning.
Would it look like such a good search engine if the actual search engines hadn't progressively broken themselves over the last 15 years?
I swear half the time when I use it to look up the nuances of system API stuff, it's replaying forum, mailing list or Stackoverflow conversations that Google ought to be able to find but somehow can't.
> All I hear is anecdotal statements from people claiming LLMs have made them some percent more productive. Yet few actually say how or demonstrate it.
It's very difficult to measure the productivity of most people, certainly most people in office jobs, so while you can have a gut feeling that you're doing better, it's no more measurable than pre-AI individual productivity measurement was.
It’s not really about objective measurements, but practical applications. Like "try this in the following manner and compare it to your previous workflow." Sensible advice, like the kind found in The Pragmatic Programmer.
Sure, so it's always going to be anecdotal. That doesn't mean the benefits don't exist, just that they can't be objectively measured. Just like we can't objectively measure the output of a single knowledge worker, especially output on a single day.
I have a similar experience. Tried to use it for real work and got frustrated by the chat’s inability to say “I don’t know”. It’s okay for code snippets demonstrating how something can be used (stack overflow essentially), also code reviews can be helpful if doing something for the first time. But they fail to answer questions I’m interested in like “what’s the purpose of X”.
I fixed the hinge in my oven by giving perplexity.com the make and problem. I saved an hour on the phone calling people to organise a visit some time in the next week.
Maybe you should stop using the Ai slop tools that don't work?
And Henry Ford would reply: "Who is going to buy the cars?"
We have been living in a fake economy for quite some time where money is printed and distributed to the "tech" sector. Which isn't really "tech", but mostly entertainment (YouTube, Netflix, Facebook, ...).
Growth of the economy means nothing. The money that has been printed goes to shareholders. What the common man gets is inflation and job losses.
If you want to grow the real economy, build houses and reduce the cost of living.
> If you want to grow the real economy, build houses and reduce the cost of living.
Yes, I wonder why it is so hard for Western countries to understand that there's no future in a place where housing is more expensive than your average salary. It may look cool for a few years, until most people have left or are living on the streets.
This is nonsense that spreads because of the North American style of housing. If you're talking about sprawling suburban houses then you're right. But big cities have provided reasonable housing for lots of workers for centuries. The only thing you need is to build more apartments in the cities that have an excess of job positions.
No, you can't just "build more apartments". For these new inhabitants you will need more grocery stores, more bus/subway stops and overall transportation, more hospitals, more firefighters, more restaurants, more gyms, more tennis courts, more of everything.
Of course. Big cities with all this infrastructure are nothing new. They existed in the past and are big and alive in Asia and other parts of the world. Only in North America do we have this bizarre world where it seems like a strange thing to build cities and provide infrastructure for workers!
There is basically no large city outside of sub-Saharan Africa & maybe the subcontinent that has that development style and anything even approaching a sustainable 2.1 total fertility rate.
There is no cheap housing anywhere in the entire state of California. In the worst and poorest parts of the state, where there are basically no jobs or anything, the housing is still way more expensive than anyone can afford.
A friend tried to tell me China has a real estate crisis, because the value of houses is dropping due to building too many and people are losing on their investments. I asked him if he is sure cheap and available housing is a crisis.
Everyone in the industry losing their shirts and going out of business is a crisis. It happened 15 years ago in the US and we still haven't made it back to mid 90s level of housing starts.
You should be curious why Nadella is looking for the world to grow at that rate. That’s because he wants Microsoft to grow into $500B/year in revenue by 2030, and it will be challenging without that economic growth to grow into that target. You can grow into a TAM, try to grow or broaden the TAM, or some combination of both. Without AI, it is unlikely the growth target can be met.
Annual growth rates during the Industrial Revolution were way lower than 10%. In the 18th century growth was well below 1%, and during the 19th century it averaged 1-1.5% (the highest estimates go up to 3% annual growth for certain decades close to 1900).[0][1][2]
Some regions or sectors might have experienced higher growth spurts, but the main point stands: the overall economic growth was quite low by modern standards - even though I don't think GDP numbers alone adequately describe the huge societal changes of such sustained growth compared to agrarian cultures before the Industrial Revolution.
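To put rough numbers on the gap (my own back-of-the-envelope arithmetic, not from the sources above): a constant growth rate r doubles an economy every ln(2)/ln(1+r) years, so 1.5% a year means roughly half a century per doubling, while the 10% Nadella talks about would mean a doubling every ~7 years.

```typescript
// Doubling time in years for a constant annual growth rate:
// solve (1 + r)^t = 2  =>  t = ln(2) / ln(1 + r)
function doublingYears(rate: number): number {
  return Math.log(2) / Math.log(1 + rate);
}

console.log(doublingYears(0.015).toFixed(1)); // ~46.6 years at 1.5%
console.log(doublingYears(0.03).toFixed(1));  // ~23.4 years at 3%
console.log(doublingYears(0.10).toFixed(1));  // ~7.3 years at 10%
```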
It also gets all of these things wrong, like not paying attention to models of toilets and quirks for their repair, often speaking with an authoritative voice and deceiving you on the validity of its instructions.
All of the things you cite are available via search engines, or better handled with expertise so you know how much of the response is nonsense.
Every time I contact an enterprise for support, the person I'm talking to gets lots of things wrong too. It takes skepticism on my part and some back and forth to clean up the mess.
On balance AI gets more things wrong than the best humans and fewer things wrong than average humans.
The difference is that a human will tell you things like "I think", "I'm pretty sure" or "I don't know" in order to manage expectations. The LLM will very matter-of-factly tell you something that's not right at all, and if you correct it the LLM will go and very confidently rattle off another answer based on what you just said, whether you were telling it the truth or not. If a human acted that way more than a few times we'd stop asking them questions or at least have to do a lot of "trust but verify." LLMs do this over and over again and we just kind of shrug our shoulders and go "well they do pretty good overall."
I can't count the number of times I've had a support person confidently tell me something that is obviously not applicable to my problem and makes completely flawed assumptions about cs/physics/networking/logic.
I get a lot of correct answers from llms, but sometimes they make shit up. Most of the time, it's some function in a library that doesn't actually exist. Sometimes even the wrong answers are useful because they tell me where to look in the reference docs. Ask it to search the web and cite sources, makes it easier to verify the answer.
I don't appreciate what's going on with AI art and AI generated slop, but the idea that they aren't a useful tool is just wild to me.
AI is a lossy data compression technique at best.
One can always tell when an AI cheerleader/ex blockchain bro has hitched their financial wagon to this statistic based word vomit grift.
What is your personal productivity metric by which you have more than 10% increase? More money earned, less money spent, fewer working hours for same income, more leisure time?
It needs to be something in aggregate to mean something related to what Nadella meant. There are many individual tasks which LLM systems can help with. But there are also many ways for those gains to fail to aggregate into large overall gains, both on a personal level and on a corporate and economy-wide level.
Going to safely assume you've never worked at an enterprise.
Because improving the productivity of every employee by 10% does not translate to the company being 10% more productive.
Processes and systems exist precisely to slow employees down so that they comply with regulations, best practices etc rather than move fast and break things.
And from experience with a few enterprise LLM projects now they are a waste of time. Because the money/effort to fix up the decades of bad source data far exceeds the ROI.
You will definitely see them used in chat bots and replacing customer service people though.
I think the 'grow at 10%' refers to the incremental part of the entire world/market.
during the industrial revolution (steam/electricity/internet), the world was growing; there were trains, cars, Netflix
business grew as productivity grew, and even so we lived through 2 world wars and dozens of economic crises
but now is very different: when you repair the tank with an LLM's help, and the labour value of repairers is decreased, no additional value is produced
there's a very simple thought experiment about the result of productivity growth alone:
let's assume robotics advances to an extremely high level, so that everything humans work on can be done with 1/100th of the labour with the help of robots. What happens next?
You’re describing exactly what happened during both the Industrial Revolution and the advent of computer automation.
Prior to computerization and databases, millions of people were required for filing, typing, and physically transporting messages and information. All of those jobs, entire fields of work were deleted by computerization.
A fellow degenerate gambler I see. The market can remain irrational longer than you can remain solvent, trade with caution. Being early is the same as being wrong.
A common hypothesis for why Nvidia is so hot is because they have an effective monopoly on the hardware to train AI models and it requires a crap ton of hardware.
With DeepSeek it’s been demonstrated you can get pretty damn far for a lot cheaper. I can only imagine that there are tons of investors thinking that it’s better to invest their dollars in undercutting the costs of new models vs investing billions in hardware.
The question is, can Nvidia maintain their grip on the market in the face of these pressures. If you think they can’t, then a short position doesn’t seem like that big of a gamble.
it’s effectively a software moat wrt. GPU programming, there’s nothing stopping AMD from catching up besides insufficiently deep pockets and engineering culture
Not sure why AMD’s software side gets so much flack these days. For everything other than AI programming, their drivers range from fine to best in class.
I have an AMD minipc running linux that I use for steam gaming, light development, etc. The kernel taint bit is off.
There is one intel device on the pci/usb buses: wifi/bt, and it’s the only thing with a flaky driver in the system. People have been complaining about my exact issue for something like 10 years, across multiple product generations.
Nobody who controls the purse strings cares about the kernel taint bit if their model doesn’t train, if they’re burning developer time debugging drivers, if they have to burn even more dev time switching off of cuda, etc.
If AMD really cared about making money, they would’ve sent MI300s to all of the top CS research institutions for free and supported rocm on every single product. Investing any less than nvidia, the trillion dollar behemoth, is just letting big green expand their moat even more.
As I said, other than AI. The management made a big bet on crypto when nvidia made a big bet on AI.
That didn’t work out all that well in the medium term (it did in the short term), though it gave them runway to take a big chunk of intel’s server market.
Whether or not that was a good move, it’s not evidence of some engineering shortcoming.
More seriously though: unless you have privileged information or have done truly extensive research, do not short stocks. And if you do have privileged information, still don't short stocks, because unless you have enough money to defend yourself against insider trading charges like Musk and his ilk, it's not going to be worth it.
It's perfectly reasonable to determine that a particular high growth stock is not going to perform as well going forward, in which case I'd shift allocation to other, better candidates.
Generally, being long equities is a long term positive expected value trade. You don't have to time the market, just be persistent. On the other hand, as you correctly alluded to, shorting equities requires decently precise timing, both on entry and exit.
I think it's probably foolish to short nvidia until there are at least echoes of competition.
AMD wants it to be them, but the reality is that the moat is wide.
The closest for AI is Apple, but even then, I’m not certain it's a serious competitor; especially not in the datacenter.
For Gaming there’s practically no worthwhile competition. Unreal Engine barely even fixes render bugs for Intel and AMD cards, and I know this for fact.
FD: I’m recently holding shares in nvidia due to the recent fluctuation, and my own belief that the moat is wider than we care to believe, as mentioned.
The combination of high and climbing price to earnings ratios for a smaller subset of tech firms, outsize retail investment in tech (cloaked by people buying crypto), and macro environment factors like high interest rates stimulating risky lending has me swapping this bubble toward the top of the list.
The "elephant in the room" is that AI is good enough, it's revolutionary in fact, but the issue now is the user needs more education to actually realize AI's value. No amount of uber-duper AI can help an immature user population lacking in critical thinking, which in their short shortsightedness seek self destructive pastimes.
It's not "good enough", it's mostly overhyped marketing garbage. LLM models are mostly as good as they're going to get. It's a limitation of the technology. It's impressive at what has been done, but that's it.
It doesn't take billions of dollars and all human knowledge to make a single human-level intelligence. Just some hormones and timing. So LLMs are mostly a dead end. AGI is going to come from different machine learning paradigms.
This is all mostly hype by and for investors right now.
LLM direct response models are quite mature, yes (4o)
LLM based MoE architectures with some kind of reasoning process ( Claude 3+, o series, R1, grok 3 with thinking ), are the equivalent of v0.2 atm, and they're showing a lot of promise.
I spent more time yesterday trying to get "AI" to output runnable code, and retyping, than if I had just buckled down and done it myself.
But I don't think you can blame users if they're given an immature tool, when it really is on companies to give us a product that is obvious to use correctly.
It's not an exact analogy, but I always like to think of how doors are designed - if you have to put a sign on it, it's a bad design. A well designed door requires zero thought, and as such, if "AI" usage is not obvious to 99% of the population, it's probably a bad design.
Think of it like you're talking to someone so smart that they answer before you're finished explaining, and get the general idea wrong, or seem really pedantic, and your misplaced use of a past-tense verb that should have been present tense causes them to completely reinterpret what you're talking about. Think of our current LLMs like idiot savants, and trust them as much.
I don't use AI to write code if that code is not short and self-contained. It's great at explaining code, great at strategy and design about code. Not so much at actually implementing code larger than 1/4 to 1/3 of its output context window. After all, it's not "writing code", it's statistically generating tokens that look like code it's seen before. It's unknown whether the training code from which the LLM is statistically generating a reply actually ran; it could have been pseudocode explaining that computer science concept, we don't know.
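As a rough illustration of that 1/4-to-1/3 rule of thumb (the token limits and the chars-per-token ratio below are my own placeholder assumptions, not published model specs), this is the kind of sanity check I do mentally before handing over a task:

```typescript
// Crude heuristic: roughly 4 characters per token for typical code/English.
// The output limit is a placeholder, not a real model spec.
const ASSUMED_OUTPUT_TOKEN_LIMIT = 8_000;
const SAFE_FRACTION = 1 / 3; // per the rule of thumb above

function roughTokenCount(text: string): number {
  return Math.ceil(text.length / 4);
}

function fitsOutputBudget(expectedCode: string): boolean {
  return roughTokenCount(expectedCode) <= ASSUMED_OUTPUT_TOKEN_LIMIT * SAFE_FRACTION;
}

// e.g. a ~20 KB file is ~5,000 tokens, which already blows past a third
// of an assumed 8k output budget, so split the task instead.
console.log(fitsOutputBudget("x".repeat(20_000))); // false
```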
People seem to want a genie that does what they are thinking, and that is never going to work (at least with this technology.) I'm really talking about effective communications, and understanding how to communicate with a literal unreal non-human construct, a personality theater enhanced literary embodiment of knowledge. It's subtle, it requires effort on the user's side, more than it would if one were talking to a human expert in the area of knowledge you operate in. You have to explain the situation so the AI can understand what you need, and developers are surprisingly bad at that. People in general are even worse at explaining. Implied knowledge is rampant in developer conversation, and an LLM struggles with ambiguity, such as implied references. There are too many identical acronyms across different parts of tech and science. It does work, but one really needs to treat LLMs like idiot savants.
Remember that there is a lot of nuance to these sorts of deals.
I don’t have any domain knowledge, but I recently saw an executive put in restaurant reservations for five different places the night of our team offsite, so he would have optionality. An article could accurately claim that he later canceled 80% of the teams eating capacity!
But if it was reported in the press that your team was going to eat 5 meals at the same time before it was revealed that it was just an asshole screwing over small businesses, then that correction in eating capacity should be reported.
That was the point in the parent. How this is being reported is a bit skewed.
And also there is the problem that nobody reads corrections. Lies run around the globe before the Truth has tied its shoelaces, or some quote like that.
I've read the first 2 paragraphs 5 times and I still can't tell if Microsoft was renting datacenters and paying for them, or if Microsoft was leasing out datacenters and decided "no more AI data centers for you, 3rd parties".
And digging further into the article didn't help either.
The thing that is driving me crazy in all these threads is that we have invariably a bunch of programmers saying "I use it for coding and I would never go back", but this is orthogonal to the question of whether it's a good business.
If you use gen-AI but don't pay the real cost, you're not really a data point on the economics of it. It could be useful to programmers and still be a terrible business if the cost of giving you that usefulness is more than you (or your employer) would pay.
My impression is that costs will continue to go down. Large investments are unlikely to be profitable for these businesses. Whoever is dumping billions into this is unlikely to get their money back. The new tooling, models, discoveries seem to be commoditized within months. There are no moats. If things keep going this way there will never be a point where employers (or anybody for that matter) have to pay the real cost.
Microsoft said they were going to spend 80 Billion on AI data centers, and they confirmed this again, so there is no 'scaling back'.
Speculation:
I suspect after they observed the incredible speed with which xAI was able to build out leading-edge AI infrastructure for Grok in Memphis, far faster than what the traditional data centers could offer, Microsoft might have had an epiphany and went 'wait, why are we not doing this?'
These pages are somewhat out of date, but they confirm what I said: "Ancient Egypt was a peasant-based economy and it was not until the Greco-Roman period that slavery had a greater impact."
no - by not specifying which Kingdom you were referring to, it revealed an incomplete, obstinate, reactive answer.. (as I have also done many times) So my remedy to that today was to read a bit before further damage is done. Non-English pages might also be useful on this massive and historical topic
You're missing the key phrase. "Says it will". Companies, of course, say all sorts of things. Sometimes, those things come to pass. But not really all that often.
Apple said the same thing the last two election cycles. They seem to be eternally indicating they're investing multiple hundreds of billions in the US. What they actually followed through on is what I want to know.
If Apple can pull off "Siri with context," it will completely annihilate Microsoft's first mover advantage. They'll be left with a large investment in a zero-margin commodity (OpenAI).
The messiest launch ever. The renewed UI makes it easy to assume that the LLM-backed Siri is already here but just isn't much better than the old one. A marketing disaster.
Yes, although before full "LLM Siri," Apple promised an "enhanced" Siri with contextual understanding in iOS 18. The clock is ticking though—WWDC will be here before you know it.
Apple will not beat Microsoft in any capacity here
Microsoft has all the context in the world just waiting for exploitation: Microsoft Graph data, Teams transcripts and recordings, Office data, Exchange data, Recall data(?), while not context per se even the XBox gaming data
> Apple will not beat Microsoft in any capacity here
I'm sure MS will provide AI to business, but if Apple get things right, they'll be the biggest provider of AI to the masses.
With a Siri that knows your email, calendar, location, history, search history, ability to get data from and do things in 3rd party Apps (with App Intents) and if it runs on your phone for security, it could be used by billions of consumers, not a few hundred million MS office users.
What was that restaurant I went to with Joan last fall? Send linkedin requests to all the people I've had emails from company X.
Of course they could take too long or screw things up.
Siri's success would greatly depend on app developers adopting intents. The major players are going to be hesitant to give Apple that much access to data - the EU may help push them that way, but even still, Microsoft, Google, Facebook, and others want their AIs to be the one people use.
Siri is also limited to Apple products, and while lots of people have iPhones, many of them still have a PC, where Siri doesn't work.
Companies are also very concerned about employees accidentally or purposefully exfiltrating data via AI usage. Microsoft is working hard to build in guardrails there and Intune allows companies to block Siri intents, so Apple would have to do a lot to reassure corporate customers how they'll prevent Siri from sending data to a search engine or such.
But you might be right. I think it's way too early to tell, and that's why so much money is being poured into this. All the major players feel that they can't afford to wait on this.
A lot of developers have already adopted intents to support Shortcuts and existing Siri. There will be tremendous business pressure to be able to fit into a request like "Get me a car to my next appointment"
How are any of these unique competitive advantages over iCloud, App Store, Safari, and just generally more locked-in high margin mobile platform users than anyone?
If the money is in providing AI to businesses, to do things humans were previously paid to do - then Microsoft would be in a much better position than Apple, because they already have a big foothold whereas Apple has never really targeted business use.
I have serious doubts about this. Consider that Apple has somewhere around 2 billion users (a very, very optimistic estimate). This would be $250 per user - an utterly ridiculous number to spend on a set of features that nobody even uses. I think this is creative accounting to impress Trump and stave off the tariffs until his term ends.
I'm not sure why there is so much poor reporting on accelerator demand recently, it seems there are a lot of people looking to sell a message that isn't grounded in reality.
Quite a lot of money is at stake in the control for retail investors' minds.
Lots of stupid takes get amplified by people who lack background. Look at the recent Intel/TSMC/Broadcom merger rumors. The story was "Canada could join the US" level of stupid to anyone with experience anywhere near chip fabrication but it still circulated for several days. Also, look at what it did to the stock price of INTC. Lots of money made and lost there.
I think most companies know b2b is the most lucrative segment for AI because it reduces one of their top costs - people. Companies selling AI are basically just selling the ultimate automation tool, which (in theory) is massive value for companies. Having a nice consumer product is a side gig.
- OpenAI and Oracle partnership: When Microsoft is 'at capacity' more demand can go to Oracle, so now Microsoft don't need to rapidly add capacity with leases (which likely have a lower ROI than Microsoft owned and operated centres).
- Longer term investments are still going ahead: Microsoft aren't cutting back on capex investment. They don't want to lease if they don't have to, but long term still see a huge market for compute that they will want to be a key supplier of.
I think Microsoft's goal here is to focus on expanding their capex to be a long-term winner instead of chasing AI demand at any cost in the short term. Likely because they think they're already in a pretty strong position.
key piece of info at the end, looks like they are leaving the spending on datacenter to OpenAI
> Microsoft’s alliance with OpenAI may also be evolving in ways that mean the software giant won’t need the same kind of investments. In January, OpenAI and SoftBank Group Corp. announced a joint venture to spend at least $100 billion and perhaps $500 billion on data centers and other AI infrastructure.
In a recent Dwarkesh podcast this week, Nadella was commenting on how they expected to benefit from reduced DC rental pricing and were preparing for Jevons paradox to max out capacity. I guess they are calculating a ceiling now.
It’s additive math; the question is whether the overall sum comes out plus or minus. There is always gonna be some push and pull.
“ TD Cowen posited in a second report on Monday that OpenAI is shifting workloads from Microsoft to Oracle Corp. as part of a relatively new partnership. The tech giant is also among the largest owners and operators of data centers in its own right and is spending billions of dollars on its own capacity. TD Cowen separately suggested that Microsoft may be reallocating some of that in-house investment to the US from abroad”
So, Microsoft’s move to ditch leases for “a couple hundred megawatts” of data center capacity, as noted in TFA, is a pretty intriguing shift—and it’s not just a random cutback. Per some reports from Capacity Media and Analytics India Magazine, it looks like they’re pulling some of their international spending back to the U.S. and dialing down the global expansion frenzy. For context, that “couple hundred megawatts” could power roughly 150,000 homes (typical U.S. energy stats), so it’s a decent chunk of capacity they’re letting go.
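That homes figure roughly checks out. Back-of-the-envelope (my own numbers, using a commonly cited ~10,800 kWh/year for an average US household, i.e. about 1.2 kW of continuous draw):

```typescript
// 200 MW of continuous capacity vs. average US household consumption.
const capacityKw = 200_000;              // 200 MW expressed in kW
const avgHouseholdKwhPerYear = 10_800;   // rough average, assumed here
const hoursPerYear = 24 * 365;

const avgHouseholdKw = avgHouseholdKwhPerYear / hoursPerYear; // ~1.23 kW
console.log(Math.round(capacityKw / avgHouseholdKw));          // ~162,000 homes
```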
IMO it's not a full-on retreat—Microsoft’s still on track to drop $80 billion this fiscal year on AI infrastructure, as they’ve reaffirmed. But there’s a vibe of recalibration here. They might’ve overcooked their AI capacity plans, especially after being the top data center lessee in 2023 and early 2024. Meanwhile, OpenAI—Microsoft’s big AI partner—is reportedly eyeing other options, like Project Stargate with SoftBank, which could handle 75% of its compute needs by 2030 (per The Information report). That’s a potential shift in reliance that might’ve spooked Microsoft into rethinking its footprint.
Also it seems they're redirecting at least some costs - over half that $80 billion is staying stateside, per Microsoft’s own blog, which aligns with CEO Satya Nadella’s January earnings call push to keep meeting “exponentially more demand.” It’s a pragmatic flex—trim the fat, dodge an oversupply trap, and keep the core humming. Whether it’s genius or just good housekeeping, it shows even the giants can pivot when the AI race gets too hot.
> Wall Street stepped up its questions about the massive outlays after the Chinese upstart DeepSeek released a new open-source AI model that it claims rivals the abilities of US technology at a fraction of the cost.
And that's the crux of it and that's also why DeepSeek was such a big deal.
This could simply be datacenter deals started years ago that they are pulling out of now they have larger AI-optimized DC's being commissioned in places more suited to faster & larger power availability.
OpenAI is pivoting away from MS. MS also has their own internal AI interests. Need to frame this for investors that doesn't look like we are losing out. "Nadella doesn't believe in AI anymore". Done and done.
a lot of this sounds like a normal course of business and stuff that msft does all the time. i don't understand the openai drama speculation on here. msft continues to have right of first refusal on openai training and exclusivity on inferencing. if someone else wants to build up openai capacity to spend money on msft for inferencing, msft would be thrilled. they recognize revenue on inferencing not training at the moment, so it's all upside to their revenue numbers
It's sad, people are already recklessly rearranging business logic via AI in key medical software systems due to global outsourcing linguistic reasons.
I have so much intense hatred for pg and Sama right now I rarely come to this shit show of a site
Everyone's response when a politician announces some supposed massive spending program should be to say "show me the appropriation bill and receipts, then we can talk."
Not surprised. You need massive amounts of compute for training LLMs from scratch, but practical uses of LLMs are increasingly focused on using existing models with minimal or no tweaks, which requires far less computing power.
I would like to note Bloomberg pulls these types of FUD before every NVDA earnings release. The last one they did was the false reports on Blackwell technical issues.
Completely agree with you re: Bloomberg's somewhat shady history of reporting. However, in this case, the article is citing a research note written by TD Cowen equity research analysts.
This is the insistence to spread FUD for … some uncertain aim.
Perhaps it's hitting all these various firms, in order to leverage their strong reputation, to cause the price to drop, allowing someone - perhaps Bloomberg himself - to make a profit off of it.
No no, I know I sound like a conspiracy theorist now. But my eyes were opened by the sharing of the story.
The fact that FT, WSJ, Fox or a million other sites haven’t latched onto this obvious scheme, is heinous, and once again a sign of our completely captured media.
When you double down on the incorrect reporting instead of retracting or correcting it, as Bloomberg did with their ludicrous spy chip story, it becomes FUD regardless of your initial intent.
Once companies start charging, and those now paying something like $200 a month realize it isn’t as game-changing as Silicon Valley wants you to believe, the AI car is going over the cliff, driven by its AI Agent of course.
I'm honestly in two minds on this one. On one hand, I do agree that valuations have run a bit too far in AI and some shedding is warranted. A skeptical position coming from a company like MSFT should help.
On the other hand, I think MSFT was trying to pull a classic MSFT on AI. They thought they could piggyback on top of OpenAI's hard work and profit massively from it, and are now having second thoughts; that's for the better too. MSFT has mostly launched meh products on top of AI.
Given how things go when past bubbles have popped, this is likely to be "both" I think. Just not all at once
When the bubble pops you see things collapse
Then it becomes a feeding frenzy as companies and IPs get bought up on the cheap by whoever has a bit of money left
When the dust clears, some old players are gone, some are still around but weaker, some new players have emerged that resemble conglomerates of the old players, but overall a lot of the previous existing power is consolidated into fewer hands
I doubt it. MS has what they need in oAI partnership. I think this is more likely just a reflection of the broader economic environment. Going into a recession, cut investment, try to retain as much talent as you can afford to for the next few years.
https://archive.is/dWo55
- This is a proper MSFT resource allocation commentary, not an industry wide canary in coalmine
- MSFT CapEx is up 45% YoY, positioning for inference dominance
- MSFT is prioritizing AI inference for millions of customers over massive AI training for OpenAI
- Losing inference demand risks clients moving to AWS/GCP
- MSFT taking bet that industry overbuild will lead to more attractive leasing terms in future vs owning
- OpenAI chases AGI; MSFT chases scalable, affordable AI that drives Azure adoption
- Leadership is risk adjusted - they win if OpenAI succeeds, and win if open-source wins
If they are taking the bet that industry will overbuild, how is it not a canary in a coal mine?
There is almost certainly at least one person in Redmond working on this that’s smarter and better informed than you, just going off probability.
>There is almost certainly at least one person in Redmond working on this that’s smarter and better informed than you, just going off probability.
this isn't the actual test though right? it's surely true, and yet companies make bad decisions all the time. It needs to be conditioned on "someone smarter and better informed than you _with influence in the organization sufficient to enact change_"
Yeah the guy who designed the Courier for Microsoft was smarter and better informed than me, the problem is he was smarter and better informed than his leadership at microsoft who canned the project in favour of doing nothing, messing around with nokia, and ending up with a watered down product years too late in terms of the surface.
I can tell you for certain about quite a few people like that who are no longer working in Redmond because they have been canned in the last wave of layoffs.
The usage of Azure in business is something else.
Or that person is this person
Copilot (referring to the M365 one) is not a very great product though. It's clearly been rushed to market before it was even done. Microsoft is soooo afraid they'll miss this boat. But we are stuck with the fallout.
And it isn't even that affordable.
Microsoft is a master class in becoming so big you don't need compelling products anymore.
You are in their ecosystem, you will use their products.
It took them 12 years to make Teams almost as good as Slack.
Azure has only made it this far because it's Microsoft; so many of their roll outs have been terrible (looking at you AKS, Cosmos DB).
Copilot is used at my wife's company because they get it for free, and it seriously seems like it's still ChatGPT 3.5
To be fair, they did somehow manage to take over javascript while nobody was looking.
Do you mean the non-product people that designed and wrote typescript?
Exactly, and they did it while no one at Microsoft was looking at them.
That's not what I see from the outside Everytime versioning comes up. My understanding is Microsoft marketing has full control over what constitutes a major / minor version bump like when typescript 4 is released vs typescript 3.9 (just an example). The people who build typescript don't even control their own version numbers.
According to this https://www.learningtypescript.com/articles/why-typescript-d...
The versioning logic is fairly simple (although kind of pointless afaict?). X.1 —> X.9, and instead of going to .10 it simply increments major number. With average releases every 3 months, then major versions are simply being bumped roughly every 30 months.
I think functionally it’s just a really awkward date-versioning on a 30 month calendar instead of 12
It's an old school versioning system that was very popular for DOS software. I'm not surprised that TypeScript, being ultimately still an Anders Hejlsberg project, would adopt it.
> almost as good as Slack
I'm sorry, what?! I will grant you almost as good as Zoom but Teams' actual chat functionality is awful.
I can't fathom why they didn't just clone Slack for their chat UX, it would have been less effort and better.
.NET is a compelling product, cross platform, open source, well supported, good tooling.
.NET is not a good representation of Microsoft. It is a uniquely developer centered product (from devs by devs) that is of higher quality than almost everything else they produce.
As with every other Microsoft business product it's a name with no ideology. No sense of what and what not to do.
Nailed it. Nadella's commentary is essentially this.
Dwarkesh's interview is fantastic: https://www.youtube.com/watch?v=4GLSzuYXh6w&t=348s (timestamp for "where is the value created?" question)
- Improving scale and efficiency of inference / training is likely a factor too.
- Existing data-centre footprints might have enough planned hardware swaps.
- There might be better places to lease data centers for power. The US, Canada, and others all have perspectives on this.
Huh. It has everything to do with OpenAI planning to move away from Microsoft datacenters to its own datacenters.
I think that is the best direction to go.
M$ Nadella is great at business strategic decisions. If only there is another great product person at M$ that pushes the design of their software and hardware. Both has gotten a lot better but still not Apple level of polish.
>Apple level of polish.
Absurd claim given the untenable OS release cadence under Tim Cook.
"But better than M$" is still undeserving of the lofty implication from "level of polish".
I’ve used windows and macOS side by side for over a decade, and macOS BY FAR has a greater level of polish than windows, which to this day is relatively unstable. I haven’t had myself or a relative experience an OS-level crash in macOS in like five years years. Meanwhile, troubleshooting a family member’s BSOD is a regular occurrence.
> Apple level of polish
- I can't tell if my macbook is charging or not while the lid is closed. There is no light indicator - minimize/maximize buttons are very small given that I am using a the touchpad. I am not saying they are unsuably small but far from ideal. - If I fully deplete the battery. I cannot immediatly turn it on with adapter's power. Why? Every windows laptop can be run with adapter even with a dead battery.
> BSOD is a regular occurrence
I hate windows as much as the next guy, but are you really getting BSOD in 2025?
I like linux and I know its not exactly polished but it offers something as a trade off which neither MacOS nor Windows do.
> If I fully deplete the battery. I cannot immediatly turn it on with adapter's power. Why? Every windows laptop can be run with adapter even with a dead battery.
First you should not be depleting modern batteries, its not good for the battery health, or indeed storage health, recall that solid-state storage integrity is not guaranteed in the absence of electrical current (a reminder to those who have a habit of backing up stuff onto SSDs, unplugging them and forgetting about them for extended time periods).
Second, if you think about it, it's a safety feature to protect you from data corruption.
If they let you turn on a machine with a fully-depleted battery and you immediately yank out the power cord, then you risk data corruption.
Sure, modern solid-state storage (at least the high quality implementations) have power-loss-protection. But that relies on capacitors. And to charge up capacitors, you need what ... oh yeah, that's right, electricity .... ;-)
So that is most likely why Apple require you to have a minimal charge before allowing you to power-on from completely depleted.
And to be honest, what are you bitching about anyway. Most Apple devices I use only require a 5–10 minute charge from completely depleted to get you to the minimum required. So just go make a cup of coffee and remind yourself not to completely deplete batteries in the future.
> First you should not be depleting modern batteries, its not good for the battery health,
I think most modern devices already turn themselves off before reaching true ZERO.
> Second, if you think about it, its a safety feature in to protect you from data corruption.
No, it's to make sure that you cannot use a perfectly functional computer without replacing the battery once it has fully died.
> I use only require a 5–10 minute charge
Are you saying apple devices are not good for watching live content?
> You risk data corruption
With autosave (which most apps have) data loss would be minimal.
Before I could afford Apple products, from afar everything Apple looked amazing, but after using them I found out the Apple UI is just pretty and only slightly more polished than Windows.
I'm sure people running out of batteries in the middle of a zoom call or a presentation especially enjoy waiting an additional 5-10 minutes before being able to continue.
> people running out of batteries in the middle of a zoom call or a presentation
I would remind those sorts of people of the 5 P's ...
Prior Planning Prevents Poor Performance
Starting a zoom or presentation on low battery is just a fucking stupid thing to do. There's no other way to put it.
It doesn't matter how stupid it is, the OS shouldn't make it any worse even so.
> There is no light indicator
Counter point: Light indicators are ugly and usually unnecessary. Most people charge overnight, and don't want/need a light in their face.
> Counter point: Light indicators are ugly and usually unnecessary
And if you really must have an ugly and unnecessary light indicator, Apple have an answer for that....
Do not charge off USB-C, instead buy an Apple magsafe charger. That comes with a little indicator light on the end of the cord.
Alternatively, if you are charging off USB-C all you have to do on a powered mac is to look at the battery indicator in the menu bar.
Not saying Apple is great but for me MS UX has been an utter crap, both in managed corp env and on a personal device.
You mean Tim “Apple intelligence” Cook
Ha ha
Have you seen the fail that was the Apple “Intelligence” launch? The Apple polish is long lost.
Remember Apple Maps? They polish stuff incrementally.
AGI (not) confirmed
A few things from the Dwarkesh interview with Satya:
* He sees data center leases getting cheaper in near future due to everyone building
* He’s investing big in AI, but in every year there needs to be a rough balance between capacity and need for capacity
* He doesn’t necessarily believe in fast takeoff AGI (just yet)
* Even if there is fast takeoff AGI, he thinks human messiness will slow implementation
* It won’t be a winner-take-all market and there will be plenty of time to scale up capacity and still be a major player
> Even if there is fast takeoff AGI, he thinks human messiness will slow implementation
Five years ago during covid all these clunky legacy businesses couldn't figure out a CSV let alone APIs but suddenly with AGI they are going to become tech masters and know how to automate and processize their entire operations?
I found it very amusing that at the turn of the decade "digitalisation" was a buzzword as Amazon was approaching its 25th anniversary.
Meanwhile huge orgs like the NHS run on fax and were crippled by Excel row limits. Software made a very slow dent in these old, important, slow-moving orgs. AI might speed up the transition but I don't see it happening overnight. Maybe 5 years, if we pretend smartphone adoption is indicative of AGI and humanoid robot rollout.
I think you don’t have widespread adoption until AI takes the form of a plug-and-play “remote employee”.
You click a button on Microsoft Teams and hire “Bob” who joins your team org, gets an account like any other employee, interacts over email, chat, video calls, joins meetings, can use all your software in whatever state it’s currently in.
It has to be a brownfield solution because most of the world is brownfield.
Completely unusable in any bank, or almost any organization dealing with data secrecy. You have complex, often mandatory processes to onboard folks. Sure, these can be improved but hiring some 'Bob' would be far from smooth sailing.
Big enough corps will eventually have their own trusted 'Bobs', just like they have their own massive cluster farms (no, AWS et al is not a panacea and it's far from a cheap & good solution).
Giving any form of access to some remote code into internal network of a company? Opsec guys would never ack that, there is and always will be malice coming from potentially all angles.
> Giving any form of access to some remote code into internal network of a company? Opsec guys would never ack that
Solarwinds.
Local agent machines for cloud CI/CD pipelines.
Devs using npm and pypi and such.
To be a bit reductive, `apt-get update` or equivalent.
I have worked at a place with serious opsec and none of that was allowed. Everything pointed at private mirrors containing vetted packages. Very few people had the permissions to write to those repos.
When serious money is on the line the previously hard rules can become soft quite fast. 1 major escalation away in my experience.
Not to mention that if Bob works the way the current overhyped technologies do, it will be possible to bribe him by asking him to translate the promise of a bajillion dollars into another language and then repeat it back when deciding on his next steps.
> I think you don’t have widespread adoption until AI takes the form of a plug-and-play “remote employee”.
Exactly. The problem with the AGI-changes-everything argument is that it indirectly requires "plug-and-play" quality AGI to happen before / at the same time as specialized AGI. (Otherwise, specialized AGI will be adopted first)
I can't think of a single instance in which a polished, fully-integrated-with-everything version of a new technology has existed before a capable but specialized one.
E.g. computers, cell phones, etc.
And if anyone points at the "G" and says "That means we aren't talking about specialized in any way," then you start seeing the technological unlikeliness of the dominoes falling as they'd need to for AGI fast ramp.
Honestly, I think the mode that will actually occur is that incumbent businesses never successfully adopt AI, but are just outcompeted by their AI-native competitors.
Yes this is exactly how I see it happening - just like how Amazon and Google were computer-native companies
And Sears had all the opportunity to be Amazon
Sears also did everything it could to annihilate itself while dot-com was happening.
their CEO was a believer in making his departments compete for resources, leading to a brutal, dysfunctional clusterfuck. Rent-seeking behavior on the inside as well as outside.
Sounds kinda like Amazon..
And some, both new and old, will collapse after severely misjudging what LLMs can safely and effectively be used for.
it looks like a variant of Planck's principle https://en.wikipedia.org/wiki/Planck%27s_principle
Hah, that isn’t a brownfield solution.
These orgs could hire someone who could solve these issues right now (and for the last decade) if they would allow those people to exist in their org.
The challenge is precisely that those people (or people with that capability) aren’t allowed to exist.
"Bob" in this example is just some other random individual contributor, not some master of the universe. E.g. they would have the title "associate procurement specialist @ NHS" and join and interact on zoom calls with other people with that title in order to do that specific job.
Right, but these jobs are inefficient mostly because of checks and balances. So unless you have a bunch of AIs checking one another's work (and I'm not sure I can see that getting signed off) doesn't it just move the problem slightly along the road?
There's an argument here something like.. if you can replace each role with an AI, you can replace multiple with a single AI, why not replace the structure with a single person?
And the answer is typically that someone has deemed it significant and necessary that decision-making in this scenario be distributed.
Yup. If we ignore all the ‘people’ issues (like fraud, embezzlement, gaming-the-system, incompetence when inputting data, weird edge cases people invent, staff in other departments who are incompetent, corruption, etc), most bureaucracies would boil down to a web form and a few scripts, and probably one database.
Better hope that coder doesn’t decide to just take all the money and run to a non extradition jurisdiction though, or the credentials to that DB get leaked.
It's weird edge cases all the way down.
Just look at names. Firstname, lastname? Jejeje, no.
Treating them as constants? laughs in married woman.
If you can absolutely, 100% cast iron guarantee that one identity field exactly identifies one living person (never more, never less), these problems are trivial.
If not? Then its complexity might be beyond the grasp of the average DOGE agent (who, coincidentally, are males in their early 20s with names conforming to a basic Anglo schema).
And that's just the NAME field.
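A hedged sketch of that identity point (the record shape is made up, not any real schema): the opaque ID is the identity, and name fields are just mutable data, possibly with history.

```typescript
// Hypothetical shape: the ID is the identity, the name is just data.
interface PersonRecord {
  personId: string;        // opaque, never reused, never derived from the name
  legalName: string;       // can change: marriage, deed poll, transliteration
  previousNames: string[]; // history, because old documents still reference them
}

// Matching on names is a heuristic at best; matching on the ID is exact.
function isSamePerson(a: PersonRecord, b: PersonRecord): boolean {
  return a.personId === b.personId;
}
```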
> those people (or people with that capability) aren’t allowed to exist
I'm not sure what personal characteristics or capabilities you're referring to, FWIW.
The ability to use that tech effectively to optimize the organization's internal processes. Or do the job of a person without actually being a person with a name that can be held accountable.
Most of those orgs have people in key positions (or are structurally set up in such a way) that it isn't desirable to change these things.
There hardly are any plug and play human employees.
As a first order of business, a sufficiently advanced AGI would recommend that we stop restructuring and changing to a new ERP every time an acquisition is made or the CFO changes, and to stop allowing everyone to have their own version of the truth in excel.
As long as we have complex manual processes that even the people following them can barely remember the reason why they exist, we will never be able to get AI to smooth it over. It is horrendously difficult for real people to figure out what to put in a TPS report. The systems that you refer to need to be engineered out of organisations first. You don't need AI for that, but getting rid of millions of excel files is needed before AI can work.
I don't know that getting rid of those wacky Excel sheets is a prerequisite to having AI work. We already have people like Automation Anywhere watching people hand-carve their TPS reports so that they can be recreated mechanistically. It's a short step from feeding the steps to a task runner to feeding them to the AI agent.
Paradigm shifts in the technology do not generally occur simultaneously with changes in how we organize the work to be done. It takes a few years before the new paradigm backs into the workflow and changes it. Lift and shift was the path for several years before cloud native became a thing, for example. People used iPhone to call taxi companies, etc.
It would be a shame to not take the opportunity to tear down some old processes, but, equally, sometimes Chesterton's fence is there for good reason.
But why are these sorts of orgs slow and useless? I don't think it is because they have made a conscious decision to do so - I think it is more that they do not have the resources to do anything else. They can't afford to hire in huge teams of engineers and product managers and researchers to modernize their systems.
If suddenly the NHS had a team of "free" genuinely phd-level AGI engineers working 24/7 they'd make a bunch of progress on the low-hanging fruit and modernize and fix a whole bunch of stuff pretty rapidly I expect.
Of course the devil here is the requirements and integrations (human and otherwise). AGI engineers might be able to churn out fantastic code (some day at least), but we still need to work out the requirements and someone still needs to make decisions on how things are done. Decision making is often the worst/slowest thing in large orgs (especially public sector).
It's not a resource problem; everyone inside the system has no real incentive to do anything innovative; improving something incrementally is more likely to be seen as extra work by your colleagues and be detrimental to the person who implemented it.
What's more likely is that a significantly better system is introduced somewhere, the NHS can't keep up and is rebuilt by an external. (Or more likely it becomes an inferior system of a lesser nation as the UK continues its decline).
I think this is where the AGI employee succeeds where other automation doesn’t. The AGI employee doesn’t require the organization to change. It’s an agent that functions like a regular employee in the same exact system with all of its inefficiencies, except that it can do way more inefficient work for a fraction of the cost of a human.
Assuming we get to AGI and companies are willing to employ them in lieu of a human employee, why would they stop at only replacing small pieces of the org rather than replacing it entirely with AGI?
AGI, by most definitions at least, would be better than most people at most things. Especially if you take OpenAI's definition, which boils it down only to economic value, a company would seemingly always be better off replacing everything with AGI.
Maybe more likely: AGI would just create superior businesses from scratch and put human companies out of business.
Extrapolating this, I cannot help but imagine a dystopian universe in which humans' reason for existence is to be some uber AI's pets.
This is a huge selling point, and it will really differentiate the orgs that adopt it from those who don’t. Eventually the whole organization will become as inscrutable as the agents that operate it. From the executive point of view this is indistinguishable from having human knowledge workers. It’s going to be interesting to see what happens to an organization like that when it faces disruptive threats that require rapid changes to its operating model. Many human orgs fall apart faced by this kind of challenge. How will an extra jumbo pattern matcher do?
Don't forget that the executive is also a (human) employee. If AGI is working so well, why would the major stakeholders need a human CEO?
What you are describing is science fiction and is not worthy of serious discussion.
IMO it comes from inertia. People at the top are not digital-native. And they're definitely not AI-native.
So you're retrofitting a solution onto a legacy org. No one will have the will to go far enough fast enough. And if they didn't have the resources to engineer all these software migrations who will help them lead all these AI migrations?
Are they going to go hands off the wheels? Who is going to debug the inevitable fires in the black box that has now replaced all the humans?
And many of the users/consumers are not digital-native either. My dad is not going to go to a website to make appointments or otherwise interact with the healthcare system.
In fact most of the industries out there are still slow and inefficient. Some physicians only accept phone calls for making appointments. Many primary schools only take phone calls and an email could go either way just not their way.
It's just we programmers who want to automate everything.
Today I spent 55 minutes in a physical store trying to get a replacement for two Hue pendant lights that stopped working. The lights had been handed in a month ago and diagnosed as "can't be repaired" two weeks ago. All my waiting time today was spent watching different employees punching a ridiculous amount of keys on their keyboards, and having to get manual approval from a supervisor (in person) three times. I am now successfully able to wait 2-6 weeks for the replacement lights to arrive, maybe.
When people say AI is going to put people out of work, I always laugh. The people I interacted with today could have been put out of work by a well-formulated shell script 20 years ago.
Nonsense. They wouldn't be out of work, their jobs would just be easier and more pleasant. And your experience as a customer would be better. But clearly their employer and your choice of store isn't sufficiently incentivized to care, otherwise they would have done the software competently.
The hilarious thing is there is absolutely no improvement AI could possibly make in the experience you've described. It would just be a slower, costlier, more error prone version of what can easily be solved with a SQL database and a web app.
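For what it's worth, the "SQL database and a web app" half really is tiny. A minimal sketch of the data model (the schema, field names, and the better-sqlite3 choice are all my own assumptions, not anything that store actually runs):

```typescript
import Database from "better-sqlite3";

// Track a handed-in item through diagnosis and replacement.
const db = new Database("repairs.db");
db.exec(`
  CREATE TABLE IF NOT EXISTS repair_tickets (
    id          INTEGER PRIMARY KEY AUTOINCREMENT,
    customer    TEXT NOT NULL,
    item        TEXT NOT NULL,
    status      TEXT NOT NULL DEFAULT 'received',  -- received | diagnosed | replacement_ordered | done
    approved_by TEXT,                              -- supervisor sign-off, recorded once
    updated_at  TEXT NOT NULL DEFAULT (datetime('now'))
  )
`);

const create = db.prepare(
  "INSERT INTO repair_tickets (customer, item) VALUES (?, ?)"
);
const advance = db.prepare(
  "UPDATE repair_tickets SET status = ?, approved_by = ?, updated_at = datetime('now') WHERE id = ?"
);

// One insert when the lights are handed in, one update per state change.
const ticketId = create.run("J. Doe", "Hue pendant light x2").lastInsertRowid;
advance.run("replacement_ordered", "supervisor-42", ticketId);
```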
Yeah. I hope software engineers slow down a bit. We are good enough. There is no need to push ourselves out of jobs.
In some ways, we're always putting ourselves out of the job, anytime you write some code and then abstract it away in a reusable form.
“Only takes phone calls for appointments” is a huge selling point for a physician's office. People are very tired of apps.
I’d far prefer a well done app. It’s so frustrating doing a back and forth of dates when trying to make an appointment.
Fair, but I'd prefer phone over a poorly done app.
And given the state of most apps...
You obviously don't live in the UK, where the mad dash at 8:00am on the dot to attempt to secure an appointment happened, and the line would be busy until 8:30am when they ran out of appointment slots, if you were unlucky on the re-dial/hangup rodeo.
Apps (actually a PWA) mean I can choose an appointment at any time in the day and know that I have a full palette of options over the next few days. The same App(PWA) allows me to click through to my NHS login where I can see my booked appointments or test results.
Yeah maybe we programmers should start doing that too. Why do we use Teams, Slack or even emails?
We should submit our code to a mainframe and everyone is going to improve their skills too.
On punched cards.
Given how bad some of the apps and websites are I am not sure phone calls are any worse! They are also less prone to data breaches and the like.
This. Thank you.
People stop caring about optimising or improving stuff. Even programmers are guilty of it. I haven't changed my vimrc in over 5 years
> Five years ago during covid all these clunky legacy businesses couldn't figure out a CSV let alone APIs but suddenly with AGI they are going to become tech masters and know how to automate and processize their entire operations?
The whole point of an AGI is that you don't need to be a tech master (or even someone of basic competence) to use it, it can just figure it all out.
Technical people don't just write code; they (along with product people) specify things exhaustively. While in theory a super AGI can do everything (thereby deprecating all of humanity), in reality I suspect that, given the existing pattern in orgs of layers of managers who don't like to wade into the details or specify things to the nth degree, there will still be a need for lots of SMEs. AI will probably be a leaky abstraction, and you'll still need technical people to guide the automation efforts.
Random non-technical people dealing with CSVs is probably one of the best use cases for current AI tools too.
> Meanwhile huge orgs like the NHS run on fax
I thought this was a German-only thing?
Not convinced.
In 2018:
https://www.gov.uk/government/news/health-and-social-care-se...
> Matt Hancock has banned the NHS from buying fax machines and has ordered a complete phase-out by April 2020.
The NHS is quite federated. Hell, many parts of it are independent companies. Some trusts have decent modern systems though. I had to go for a test just before Christmas: I phoned my GP in the morning, got an appointment for half an hour later, he ordered a test and said go to one of these 8 centres, so I went to one about half an hour away (I live a fair way from a major town). I had the test, and by the time I'd had lunch and driven back home I had another call from the GP asking me to come in that evening; the appointment was created by the GP and read seconds later at the hospital, the test was done there, and the results were reported back to the GP's system at the click of a button.
But that's just my local trust. Go 10 miles west and it's another trust with different systems. And I had to go to one of the test centres in my trust; I couldn't go to one in a neighbouring trust because they have different systems and there are no (or limited) interconnects.
It's all pretty much a central system as of this year: your trust will have localised PWAs or apps, but it all connects via provider IDs and transfer codes to the same NHS backend.
The healthcare system I largely use in the US was bought up by another larger system. They are now primarily on an electronic system (which for me is welcome) but some of the offices still use faxes as well.
I had a stint in healthcare software and can confirm the fax thing is real. That said, I had a hunch it was all cloud-hosted fax-sending systems delivering to cloud-hosted fax-to-PDF systems. I challenged my colleagues to show me an actual fax machine and never had any takers.
Counterpoint: the PC.
The minicomputer revolution was meant to replace all the big iron in the basement with computers shared by teams. This lasted a decade until we got computers meant for one person and we've used that format of computation for nearly 50 years now.
Everyone is talking about how AI will replace people.
It won't.
AI without HI is useless. However, HI with AI is much better at handling all the bullshit office work that needs to get done—anyone who doesn't understand this is either an idiot, a troglodyte, or already has an expendable army of minions doing their work.
The surge in AI demand won't be top-down but bottom-up. That stupid office birthday wish card being passed around? I can Shakespeare the motherfucker. And that email to the big boss telling him the project is doomed and that he needs a compass to find his own ass? Let's ask R1.
>My boss is a moron who can't find his ass without a compass. His latest tech initiative will cost us millions and make the best staff leave. Write a professional letter that covers those points without being harsh.
---
Subject: Request for Review of Upcoming [Tech Initiative]
Dear [Recipient Name/Team Name],
I hope this message finds you well. I wanted to share some thoughts regarding our upcoming [Tech Initiative Name], as I believe open dialogue will ensure its success while aligning with our team’s strengths and long-term goals.
While the initiative’s intent to innovate is commendable, I’ve observed several potential challenges that may impact its effectiveness:
Financial Impact: Based on preliminary analysis, the projected costs appear to be significantly higher than initially estimated. A detailed cost-benefit review or phased rollout might help mitigate financial risk while allowing us to test feasibility.
Team Retention Concerns: Many of our top performers have expressed reservations about the initiative’s current structure, particularly around [specific pain point, e.g., workflow disruption, lack of clarity]. Retaining their expertise will be critical to execution, and their insights could refine the plan to better address on-the-ground needs.
To ensure alignment with both our strategic objectives and team capacity, I respectfully suggest:
Conducting a collaborative risk assessment with department leads.
Piloting the initiative in a controlled environment to gather feedback.
Hosting a forum for staff to voice concerns/solutions pre-launch.
I’m confident that with adjustments, this project can achieve its goals while preserving morale and resources. Thank you for considering this perspective—I’m eager to support any steps toward a sustainable path forward.
Best regards,
To be honest, that kind of sounds like a dystopian hell: ChatGPT writing memos because we can't be arsed, and ChatGPT reading the same memos because neither can the recipient. Why even bother with it?
It is heaven.
With a well-working RAG system you can find the reason why any decision was made, so long as it was documented at some point somewhere. The old SharePoint drives with a billion unstructured Word documents dating from the 1990s are now an asset.
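To make that concrete, here is a minimal, hypothetical sketch of the retrieval half of such a RAG setup; embed() is a stand-in for a real embedding model and the document snippets are invented:

    # Retrieval sketch: embed documents, embed the question, take the nearest matches.
    import numpy as np

    def embed(text: str) -> np.ndarray:
        """Placeholder embedding: replace with a real embedding model's output."""
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        v = rng.standard_normal(384)
        return v / np.linalg.norm(v)

    docs = [
        "2009 memo: we chose vendor X because vendor Y could not meet the SLA.",
        "2014 minutes: the billing rewrite was postponed pending the ERP migration.",
        "2021 decision log: feature flags adopted to reduce release risk.",
    ]
    doc_vecs = np.stack([embed(d) for d in docs])

    def retrieve(question: str, k: int = 2) -> list[str]:
        """Return the k documents most similar to the question (cosine similarity)."""
        q = embed(question)
        scores = doc_vecs @ q
        return [docs[i] for i in np.argsort(scores)[::-1][:k]]

    # The retrieved snippets would then be pasted into the LLM prompt as context.
    print(retrieve("Why did we pick vendor X?"))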
Since I don't have an expendable army, I must be either an idiot or a troglodyte. Where my understanding falters is finding a domain where accuracy and truth aren't relevant. In your example you said nothing about a "phased rollout", is that even germane to this scenario? Is there appreciable "financial risk?" Are you personally qualified to make that judgement? You put your name at the bottom of this letter and provided absolutely no evidence backing the claim, so you'd best be ready for the shitstorm. I don't think HR will smile kindly on the "uh idk chatgpt did it" excuse.
"Here is the project description: [project description] Help me think of ways this could go wrong."
Copy some of the results. Be surprised that you didn't think of some of them.
"Rewrite the email succinctly and in a more casual tone. Mention [results from previous prompt + your own thoughts]. Reference these pricing pages: [URL 1], [URL 2]. The recipient does not appreciate obvious brown nosing."
If I were sending it as a business email I'd edit it before sending it off. But the first draft saved me 10 to 30 minutes of trying to get out of the headspace of "this is a fucking disaster and I need to look for a new job" and into speaking corporatese that MBAs can understand.
This is where the HI comes into the equation.
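For illustration, the two-step flow above might look something like this with the OpenAI Python client; the model name, project description, and URLs are placeholders, not anyone's actual setup:

    # A hedged sketch of the two-step prompt flow: brainstorm risks, then draft the email.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask(prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    project_description = "Migrate all internal tooling to Vendor Z by Q3."  # made up

    # Step 1: brainstorm failure modes.
    risks = ask(f"Here is the project description: {project_description}\n"
                "Help me think of ways this could go wrong.")

    # Step 2: draft the email, folding in the risks you agree with plus your own notes.
    draft = ask("Rewrite the email succinctly and in a more casual tone. "
                f"Mention these risks: {risks}\n"
                "Reference these pricing pages: https://example.com/pricing-1, "
                "https://example.com/pricing-2. "
                "The recipient does not appreciate obvious brown nosing.")
    print(draft)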
Is that really a problem most people have in business communication? I can't recall a time sending a professionally appropriate email was actually hard. Also, consider the email's recipient. How would you feel if you had to wade through paragraph upon paragraph of vacuous bullshit that boils down to "Hey [name], I think the thing we're doing could be done a little differently and that would make it a lot less expensive, would you like to discuss?"
This is a very refreshing take.
Our current intellectual milieu is largely incapable of nuance—everything is black or white, on or off, good or evil. Too many people believe that the AI question is as bipolar as every other topic is today: Will AI be godlike or will it be worthless? Will it be a force for absolute good or a force for absolute evil?
It's nice to see someone in the inner circles of the hype finally acknowledge that AI, like just about everything else, will almost certainly not exist at the extremes. Neither God nor Satan, neither omnipotent nor worthless: useful but not humanity's final apotheosis.
Whoa there, hold yer horses ;) Let's wait and see until we have something like an answer to "will it be intelligent?" Then we might be ready to start trying to answer "will it be useful?"
I agree completely though, it's nice to see a glimmer of sanity.
EDIT: My best hope is this bubble bursts catastrophically, and puts the dotcom crash to shame. Then we might get some sensible regulation over the absolute deluge of fraudulent, George Jetson spacecamp bull shit these clowndick companies spew on a daily basis.
> human messiness will slow implementation
If by "messiness" he means "the general public having massive problems with a technology that makes the human experience both philosophically meaningless and economically worthless", then yeah, I could absolutely see that slowing implementation.
Picking on "philosophically meaningless" a bit...
I'm a hobby musician. There are better musicians. I don't stop.
I like to cook. There are better cooks. I don't stop.
If you are Einstein or Michael Jordan or some other best-of-the-best, enjoyment of life and the finding of worth/meaning can be tied to absolute mastery. For the rest of us, tending to our own gardens is a much better path. Finding "meaning" at work is a tall order and it always has been.
As for "economically worthless," yes if you want to get PAID for writing code, that's a different story, and I worry about how that will play out in the scope of individual lifetimes. Long term I think we'll figure it out.
Is this the transcript of the interview (podcast) with Dwarkesh?
https://www.dwarkeshpatel.com/p/satya-nadella
Because if so,
> He doesn’t necessarily believe in fast takeoff AGI (just yet)
the term "fast takeoff AGI" does not appear in the transcript.
Some paraphrasing on my part, but what he does say is at odds with fast takeoff AGI. He describes AI as a tool that gradually changes human work at a speed that humans adapt to over time.
Given that you invented the term in your paraphrase, what do you mean by "fast takeoff AGI"?
I expect data centers will become more expensive precisely because everyone is building at the same time. Supply chain crunch
Temporary. During their operating and depreciating long-tail phase, the oversupply will drive down costs for users. Like fiber cables.
> * He doesn’t necessarily believe in fast takeoff AGI (just yet)
This is so based... I would probably have given slow take off a 1% chance of happening 10 years ago, but today I'd put that somewhere like 30%.
In fact I don't think he believes in the takeoff of AGI at all, he just can't say it plainly. Microsoft Research is one of the top industry labs, and one of its functions (besides PR and creating the aura of "innovation") is exactly this: to separate the horseshit from things that actually have realistic potential.
Microsoft CEO Admits That AI Is Generating Basically No Value - https://futurism.com/microsoft-ceo-ai-generating-no-value
Bit of a clickbait title, but it certainly seems like the realization is setting in that the hype has exceeded near-term realistic expectations, and some are walking back claims (for whatever reason: honesty, de-risking against investor securities suits, poor capital allocation, etc.).
Nadella appears to be the adult in the room, which is somewhat refreshing considering the broad over exuberance.
IMHO anyone who started using AI seriously:
1) Wouldn't want to go back
2) Wouldn't believe that it's about to replace human intellectual work
In other words, AI got advanced enough to do amazing things, but not $500B or $1T levels of amazing, and the people with the money are not convinced that it will be anytime soon.
I went back. It’s fine at solving small problems and jump-starting an investigation, but only a small step better than a search engine. It’s no good for deep work. Any time I’ve used it to research something I know well, it’s got important details wrong, but in a confident way that someone without my knowledge would accept.
RLHF trains it to fool humans into thinking it’s authoritative, not to actually be correct.
This is exactly the experience I've had. I recently started learning OpenTofu (/Terraform), and the company now has Gemini as part of the Workspace subscription. It was great for getting something basic going, but it very quickly starts suggesting wrong, old, or bad practices. I'm still using it as a starting point and to help me know what to start searching for, but like you said, it's only slightly better than a regular search engine.
I use it to get the keywords and ideas, then use the normal search engine to get the facts. Still, even in this limited capacity I find the LLMs very useful.
That’s similar to how I use it. Useful, but not game changing. Definitely not at the level of the hype.
I have started using AI for all of my side projects, and am now building stuff almost everyday. I did this as a way to ease some of my anxiety related to AI progress and how fast it is moving. It has actually had the opposite effect; it's more amazing than I thought. I think the difficulty in reasoning about 2) is that given what interesting and difficult problems it can already solve, it's hard to reason about where it will be in 3-5 years.
But I am also having more fun building things than perhaps in the earliest days after my first code was written, which was just over 7 years ago now. Insofar as 1) goes, yes, I never want to go back. I can learn faster and more deeply than I ever could. It's really exciting!
What is using AI seriously?
I’ve tried to use it as a web search replacement, and often the information is generic, tells me what I already know, or is wrong.
I used a code-suggestion variant long before the OpenAI hype started, and while it is sometimes useful, it is rarely correct or helpful for getting over the next hurdle.
Any code from my coworkers is now just AI slop they glanced over once. Then I spend a long time reviewing and fixing their code.
I really don’t find spending time writing long-form questions so a bot can pretend to be human all that time-saving, especially if I have to clarify or reword it into a specific “prompt-engineered” sentence. I can usually find the results faster by typing in a few keywords and glancing at a list of articles. My built-in human speed reading can determine whether what I need is probably in an article.
LLM seriousness has made my job more difficult. I would prefer if people did go back.
In my case it's coding real-world apps that people use and pay money for. I no longer personally type most of my code; instead I describe stuff or write pseudocode that LLMs end up converting into the real thing.
It's very good at handling the BS part of coding, but it's also very good at knowing things that I don't know. I recently used it to hack a small Bluetooth printer which requires its own iOS app to print. Using DeepSeek and ChatGPT, I was able to reverse-engineer the printer communication and then create an app that will print whatever I want from my macOS laptop.
Before AI, I would have had to study how Bluetooth works; now I don't have to. Instead, I use my general knowledge of protocols and communications, describe it to the machine, and ask for ideas. Then I try things and ask about the stuff I noticed but don't understand, figure out how this particular device works, and then describe it to the machine and ask it to generate code that will do the thing I discovered. LLMs are amazing at filling the gaps in patchy knowledge, like my knowledge of Bluetooth. Because I don't know much about Bluetooth, I ended up creating a CRUD for Bluetooth, because that's what I needed when trying to communicate with and control my Bluetooth devices (it's also what I'm used to from web tech). I'm a bit embarrassed about it, but I think I will release it commercially anyway.
If I have a good LLM at hand, I don't need specialised knowledge of frameworks or tools. A general understanding of how things work, and building up from there, is all I need.
It's like a CNC machine but for coding.
I see; for single-operator, no-customer products it works nicely. You may find you use it less and less, and you will actually need that Bluetooth knowledge eventually as you grow a product.
LLMs so far seem to be good at developing prototype apps. But most of my projects already have codegen and scaffolding tools so I guess I don’t get that use out of them.
I predict that once you release your embarrassing app, all the corner cases and domain secrets will come rearing their heads, with little ability of the LLM to help you (especially with Bluetooth).
The Bluetooth app thing is just an example of LLMs helping me build something I don't have beyond-basics knowledge of.
For other stuff, I still find it very useful because why would I bother to code something non-novel when I can just tell the LLM what I need?
For example, if I need code that finds the device a given characteristic belongs to (Bluetooth stuff, again), I can just tell the LLM to write it for me. It doesn't take a genius to write such code; it's elemental stuff, and I would rather not spend my working memory on remembering the structures and names of variables. I copy-paste the current class that handles the Bluetooth comms, tell it that I need a function for sending data to the printer, and it gives me back the result. There's no art in writing such code; it's standard code against an API, and I would prefer not to bother with it.
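To give a flavour of the boilerplate being described, here's a rough, hypothetical sketch using the bleak Python library: scan for a device by name and write bytes to a characteristic. The device name and characteristic UUID are made up, not taken from the actual printer:

    # Find a BLE device by advertised name and write a payload to a characteristic.
    import asyncio
    from bleak import BleakClient, BleakScanner

    PRINTER_NAME = "MyThermalPrinter"                         # placeholder name
    WRITE_CHAR_UUID = "0000ffe1-0000-1000-8000-00805f9b34fb"  # placeholder UUID

    async def print_bytes(payload: bytes) -> None:
        # Scan for advertising devices and pick the one whose name matches.
        devices = await BleakScanner.discover(timeout=5.0)
        printer = next((d for d in devices if d.name == PRINTER_NAME), None)
        if printer is None:
            raise RuntimeError("printer not found")

        # Connect and write the payload to the (reverse-engineered) write characteristic.
        async with BleakClient(printer) as client:
            await client.write_gatt_char(WRITE_CHAR_UUID, payload)

    asyncio.run(print_bytes(b"hello from macOS\n"))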
You seem to be overestimating the quality of many production software products.
Don’t worry, I am not. I understand that most production-deployed software was prototype-grade before LLMs came around.
“Before AI I would have to study how Bluetooth works now I don't have to.”
And
“It's very good at handling the BS part of coding…”
This is the part that I think is difficult in a team situation.
Learning and understanding is the important part, and certainly isn’t BS.
I understand that it really can make it seem like velocity has increased when you really are shipping things that more or less “work”, but it’s really a good practice to understand your code.
I’ve had to spend a significant amount of time fixing work that was admittedly generated using AI by other engineers, and I really fear engineers are beginning to trade deep understanding for the high of getting something that “works” with little effort.
It might “work” but you might be ignoring the work everyone around you is doing to clean up your brittle code that doesn’t scale and wasn’t thought through at inception.
You have an entirely valid worry, and I get a bit scared at my own use of AI because of this. I fear that dev jobs might go away or become third-world-only jobs, like electronics manufacturing, but in the meantime it's scary how much it atrophies your mind. At the same time, it has opened up a universe of answers to questions I wouldn't normally ask because the bar was too high. Everyone seems to have their own unique stories.
For example, just today I dumped a crash log from the Mac version of Microsoft Remote Desktop into it. This damn app locks up 10 times a day for me, causing a "Force Quit" event and a subsequent crash dump to be generated. Normally what can I do with that crash dump other than send it off to Apple/Microsoft? It identified where it thought the crash was coming from: excessive right-clicking causing some sort of foundational error in their logic. Avoiding right-clicking has solved the issue for me. Now that I write this out, I could have spent hours upon hours finding a needle in a haystack, and that would probably have made me a better developer, but the bar is too high; there is too much other work I have to get done to chase this. Instead I would have just lived with it. Now I have some closure at least.
Again, it seems like everyone has their own unique stories. Is AI taking everything over? Not yet. Can I go back to pre-AI? No; it's like going back to Windows 95.
It is effective because you can spend your mental energy on the things that matter, things that make difference.
Code quality actually doesn't matter when you remove the human from the loop as long as it works correctly because it becomes something made by a machine to be interpreted by a machine.
Code isn’t a binary scale of works or doesn’t - there is inefficient code and insecure code and everything else in between that still technically “works” - but a lack of understanding will eventually cause these “working” solutions to catch up to you.
You can always revisit that part of the code if it doesn’t perform. For the vast majority of code running on consumer devices there’s no difference between a smart implementation and a mediocre one. LLMs are great at being mediocre by default.
As for security, that mostly stems from the architecture. LLM mediocrity also helps with following industry conventions and best practices.
In my case I never get the code being written at once, instead I make LLMs write pieces that I put together myself. Never got used to copilot or Cursor, I feel in control only with the chat interface.
Not understanding how Bluetooth works while building a Bluetooth thing seems like… a problem, though. Like, there are going to be bugs, and you’re going to have to deal with them, and that is where the “just ask the magic robot” approach tends to break down.
Funnily enough, you already don't have access to the low-level radio, so building a "Bluetooth thing" is just about dealing with some libraries and APIs.
Bugs happen, but they're not that different from any other type of bug. Also, you end up learning about Bluetooth as bugs and other unexpected behavior happen. The great thing about LLMs is that they are interactive, so for example, when collecting Bluetooth packets for analysis I ended up learning that Bluetooth communication is a bit like talking through a middleman: some packet types are only about giving instructions to the Bluetooth chip, while others are actually about communicating with a connected device.
Using an LLM to code something you don't understand is much different from Googling something and then copy-pasting a snippet from Stack Overflow, because you can ask for an instant explanation and for modifications to test edge cases and other ideas.
The only part I would quibble with is the fear that superficial AI generated code becomes widespread. It's not that I think this won't happen, and I wouldn't want it on my team, but I think it could actually increase demand for competent software engineers.
I got into coding about a decade ago when cheap outsourcing had been all the rage for a number of years. A lot of my early experience was taking over very poorly written apps that had started off with fast development and then quickly slowed down as all of the sloppy shortcuts built up and eventually ground development to a halt. There's a decent chance LLMs lead to another boom in that kind of work.
For mass production/scalability, I absolutely agree with you.
For products that won't be scaled, I imagine it becomes just another abstraction layer, with the cost of human input outweighing the cost of the additional infrastructure / beefing up hardware to support the inefficiencies created.
Oh come on, I'm not an "AI believer", but it regularly does things for me like write complex SQL queries that I can then verify are correct. Just something like that will often save me 20-40 minutes over doing it manually. There is _something_ there, even if it's not going to replace the workforce anytime soon.
Either you’re using it wrong, or you are using the wrong tools.
For search, try kagi fastgpt (no subscription required):
https://kagi.com/fastgpt
For code completion, I’ve found it’s not good at jumping hard hurdles, but it is a bit better than find replace (e.g. it finds things that are syntactically different, but semantically related), and can notice stuff like “you forgot to fix the Nth implementation of the interface you just extended”.
It’s also good at “I need to do something simple in a language I do not know”.
I’ve definitely encountered ai slop from coworkers. I’m sure they also produce stack overflow copy paste garbage too. Dealing with their newly-found increased “productivity” is an open problem.
Insisting on strict static typing helps. The LLMs can’t help with that, and it forces a higher bar before compilation succeeds.
I shut off Kagi search LLM stuff because I don’t need an LLM typing stuff out while I’m literally looking at a list of results.
> What is using AI seriously?
It's buying into the hype. AI is crypto 2.0, the true believers live in their own bubble and their own fantasy world.
> 1) Wouldn't want to go back
I went back. It sucks pretty bad, actually.
> Wouldn't believe that it's about to replace human intellectual work
Yea idk about that one chief. I have been working in ML (specifically scaling of large model training) at FAANG for the past 8 years, and have been using AI for my work since basically the first time it became even slightly usable, and I don’t share your optimism (or pessimism, depending on how you see it). Yes, it’s still pretty bad, but you have to look at the rate of improvement, not just a static picture of where we are today.
I might still be wrong and you may be right, but claiming that anyone using AI seriously believes as you do is flat-out false. A lot of my colleagues working in ML research think like me, btw.
It's a figure of speech; obviously it's a spectrum, where some believe AGI is around the corner and others believe all this is nothing more than an overblown statistics exercise and that LLMs have nothing to do with actual intelligence.
In my opinion, this generation of AI is amazing, but it isn't _it_.
He doesn't actually say that, the (very biased and polemical) article writer seems to have made that up. The actual quote is:
"Us self-claiming some [artificial general intelligence] milestone, that's just nonsensical benchmark hacking to me. So, the first thing that we all have to do is, when we say this is like the Industrial Revolution, let's have that Industrial Revolution type of growth. The real benchmark is: the world growing at 10 percent. Suddenly productivity goes up and the economy is growing at a faster rate. When that happens, we'll be fine as an industry."
That's a completely different statement from "AI is generating no value"!
What kind of growth? Who is going to buy all that stuff if a third of the workforce has suddenly been made redundant?
(I always thought that this 'evil Microsoft' trope was a sign of stupidity; now I am having second thoughts...)
There's a wikipedia page addressing your exact concern: https://en.wikipedia.org/wiki/Lump_of_labour_fallacy
Lining up for whatever the next thing is. "Look, we know we said AR/VR was the next big thing in the late tens and LLMs were the next big thing in the early 20s, but quantum is the next big thing now. For real, this time!"
(Not entirely sure what the next fad will be, but some sort of quantum computing thing doesn't feel unlikely. Lot of noise in that direction lately.)
Quantum AI will be the next buzzword, once AGI loses steam.
Curiously, all of these three (VR/AI/QC) are limited by hardware. But AI is the only one that has seen meaningful progress by just throwing more contemporary hardware at it. Sure, future hardware might bring advancements to all of them. But if you're making an investment plan for the next quarter, the choice is pretty obvious. This is why AI rules the venture capitalist sector instead of fusion or other long term stuff.
Of the three, QC is different in that it's not a solution looking for a problem. If we ever scale QC to the point where it can do meaningful work (the "if" is doing a lot of work there - per your point about hardware), then I don't see it fumbling like the other two have. We have immediate and pressing needs that we know how to solve with QC. The other two are very much research coming up with cool toys, and product fucking around so that they can find out what to use them for.
When considering scientists perhaps, but when you look at the general population the other two have way bigger ramifications on daily life than QC.
Did they tell the M365 sales/marketing teams about this? My users get bombarded with sales pitches, new free trials, and other comms about how wonderful Copilot is. It's almost a full-time job to manage people's expectations around this...
Nadella is just saying we haven't yet seen a revolution yet measurable by 10% economic growth--he makes no statement about the future.
Most people have no clue how to use AI or where to use it in their lives. There was a guy at work who was submitting command-like queries (give meeting summary) and complained about how it left out XYZ. Then I told him to ask "Give me the meeting summary with X, Y, Z" or "what did so and so say about Y."
His mind was blown.
We are in the first inning. We haven't figured out how to integrate this into everything yet.
> We haven't figured out how to integrate this into everything yet.
Maybe we don't need to. The world is fine without a sci-fi chatbot.
Nadella is looking for the world to grow at 10% due to AI enhancement, like it did during the industrial revolution.
That seems like a low bar, because it already is; it's just not equally distributed yet.
My own productivity has grown far more than 10% thanks to AI, and I don't just mean in terms of dev. It reads my bloodwork results, speeds up my ability to repair a leak in my toilet tank, writes a concise "no I won't lend you money; I barely know you" message... you name it.
Normally all of those things would take much longer and I'd get worse results on my own.
If that's what I can do at the personal level, then surely 10% is an easily-achievable improvement at the enterprise level.
All I hear is anecdotal statements from people claiming LLMs have made them some percent more productive. Yet few actually say how or demonstrate it.
For the last year, I've tried all sorts of models both as hosted services and running locally with llama.cpp or ollama. I've used both the continue.dev vscode extension and cursor more recently.
The results have been frustrating at best. The user interface of the tools is just awful. The output of any model, from DeepSeek to Qwen to Claude to whatever else, is mediocre to useless. I literally highlight some code that includes comments about what I need, and I even include long, explicit descriptions in the prompts, and it's just unrelated garbage out every time.
The most useful thing has just been ChatGPT when there's something I need to learn about. Rubber ducking basically. It's alright at very simple coding questions or asking about obscure database questions I might have, but beyond that it's useless. Gotta keep the context window short, or it starts going off the rails every single time.
If LLM chatbots are making you vastly more productive in a field, you are in the bottom 20% of that field.
They're still useful tools for exploring new disciplines, but if you're say a programmer and you think ChatGPT or DeepSeek is good at programming, that's a good sign you need to start improving.
This. I shudder to think of the hubris of a programmer who doesn’t understand pointers prompting an AI model to generate low-level system code for them. Sure it might generate a program that appears to work. But is that human reading the code qualified to review it and prevent the model from generating subtle, non-obvious errors?
Linus Torvalds: bottom 20% coder.
https://www.youtube.com/watch?v=VHHT6W-N0ak&pp=ygUMTGludXMgQ...
Linus hasn't coded in decades, let the man retire in peace.
If you have to tell others that then perhaps some introspection for yourself might be helpful. Comes across more as denial than constructive commentary.
I do believe the benefit decreases the more senior you are or the more familiar the work is, but there is still a noticeable benefit, and I think it largely depends on the velocity and life cycle of the product: you get less benefit the slower the velocity or the more mature the product. To deny it outright, like in your post, is simply being an intellectual minimalist.
Again, show your evidence.
You make a polite but still ad hominem "attack" about me instead of addressing my points with demonstrations of evidence.
Make a video or blog article actually showing how your use of LLMs in coding is making you more productive. Show what it's doing to help you that has a multiplier effect on your productivity.
Oh I see; I had replied to your comment directly, where I was stating that I find it surprising that folks like yourself are so quick to attack, though looking at your response here it's not that surprising.
I don't think it deserves a video or blog post; like I already said, the multiple posts that have made the HN front page have covered it well.
- Autocomplete usually saves me keystrokes.
- Features like Cursor's composer/agent let me outsource junior-level changes to the codebase. I can copy-paste my requirements and it gives me the diffs of the changes when it's done. It's often at a junior level or better and tackles multi-file changes. I usually kick this off and go make other changes to the codebase.
Now, like I have said before, this depends a lot on the velocity of the team and the maturity of the codebase. With more mature products you will see less benefit on feature implementation and most likely more opportunity in the test-writing capabilities. Likewise, teams with a slower cadence (think a blue-chip software company compared to a startup) are not going to get as much benefit either.
Instead of being so aggressive, simply say why it does not work for you. These tools thrive in web dev, which you may not be involved in!
I was not replying to you so I hope your comment was not directed at me?
I see a gap between "vastly more productive" and "noticeable benefit".
good shoes help me walk a bit faster, and for longer.
they don't let me walk at the pace of a SUV.
AI is like the good shoes. they help, and make many tasks a bit easier. but they can't make me into an SUV.
and if they can, then no programmers will have jobs. which is the end-state of this whole LLM thing as far as I can tell.
I have a good-shoes business. Can you give me a couple hundred billion dollars? Good news: I promise you trillions, in a year or two, or 10 maybe; who knows, you can extrapolate into a future science-fiction reality yourself. So when are you transferring the money?
You are now moving the goalposts from whether this adds value to how much it is worth. There is plenty of open debate about the level of investment, but in hyperscaler territory they are flush with cash, and it probably hurts more to under-invest and be wrong than it does to over-invest.
I would like to propose a moratorium on these sorts of “AI coding is good” or “AI coding sucks” comments without any further context.
This comment is like saying, “This diet didn’t work for me” without providing any details about your health circumstances. What’s your weight? Age? Level of activity?
In this context: What language are you working in? What frameworks are you using? What’s the nature of your project? How legacy is your codebase? How big is the codebase?
If we all outline these factors plus our experiences with these tools, then perhaps we can collectively learn about the circumstances when they work or don’t work. And then maybe we can make them better for the circumstances where they’re currently weak.
I feel like diet as an analogy doesn't work. We know that the only way to lose weight is with a caloric deficit. If you can't do this, it doesn't matter what you eat; you won't lose weight. If you're failing to lose weight on a diet, you are eating too much, full stop.
Whereas measuring productivity and usefulness is way more opaque.
Many simple software systems are highly productive for their companies.
I think it's about scope and expectations. I have had some form of AI code completer in my Neovim config for 3 years. It works flawlessly and saves me tons of keystrokes. Sure, sometimes it suggests the incorrect completion, but I just ignore it and keep coding as if it didn't exist. I am talking about line-by-line completion, not entire code blocks, though even that it does well at times.
From what I have seen the people that have the most success have AI building something from scratch using well known tooling (read: old tooling).
The problem is that doesn't immediately help most people. We are all stuck in crap jobs with massive, crusty codebases. It's hard for AI because it's hard for everyone.
I've been using Amazon Q Developer, as it was provided and approved by my employer. It has been pretty good with Python codebases, Kubernetes configurations, and (not surprisingly) CDK/CloudFormation templates. I can pretty much just ask it "here's my python script, make everything I need to run it as a lambda, hook that lambda up to x, it should run in a vpc defined in this template over here", and it'll get all that stuff put together, and it's normally pretty solid code that it generates. It seems to pull in a lot of the context of the project I have open. For instance, I can say "it should get those values from the outputs in other-cf-template.yml" and it knows the naming schemes and whatnot across templates, even ones it didn't generate.
I might go back and tweak some stuff, add some extra tags and whatnot, but often it's pretty good at doing what I ask.
Sometimes its suggestions aren't what I really wanted to do in my codebase, and a handful of times it has made up methods or parameters of even well-known libraries. But usually its suggestions are better than basic IntelliSense-style autocomplete, at least in my experience.
I haven't used many of the other developer assistant plugins like say GitHub Copilot. I couldn't really say which is better or worse. But I do think using Q Developer has made me faster in many tasks.
I wouldn't expect a tool that doesn't have access to the context of my editor and the files I have open to be very useful for actually coding. There's a lot of context to understand in even a basic application. If you're just asking a locally running app in ollama "give me a method to do x", don't be surprised if it doesn't know everything else happening in your app. Maybe it'll give you a halfway decent example of doing something, but devoid of how it actually plugs in to whatever you're making it might be entirely worthless.
Just in the past couple of months there have been a number of "I am a senior/principal engineer and this is how I use LLMs" posts. I would agree that the tools are not optimal yet, but every iteration has improved for me.
Maybe whatever language you are coding it or whatever project you are working on is not a good fit? It is an equally perplexing situation for myself when I hear anecdotes like yours which don't align with my experience. The fact that you say everything is garbage calls into question either how you are using the tool or something else.
I can reliably use Cursor's composer to reference a couple of files, give a bullet list of what we are trying to do, and point it to one of the better models, and the output is junior-engineer level or better. When I say junior, I mean a junior who has experience with the codebase.
What kinds of projects are you working on? My experience is not very good with these tools (expanded in a sibling comment).
Generally a lot of web dev, which is where I would assume LLMs shine brightest. I noted elsewhere that I think it depends a lot on the age of the product and the level of velocity. For early-life products where velocity matters, you can get the most benefit. The more mature the product and the slower the team implements features, the benefits are still measurable but not as high.
Ah yeah, I can totally see how it can be useful for churning out tons of code. Even without copy-paste, just generating a ton of references and rewriting/improving them. Anecdotally, I’ve tried asking DeepSeek to review a few files of my code; it wasn’t bad at all, though not without false positives.
I agree with the other commenter that said if you're "vastly" more productive as a developer due to AI, you probably weren't that good to begin with. Otherwise, please provide concrete examples.
Myself, I do find it quite useful in a few respects. First and foremost, as a "better Google/StackOverflow." If something's not working, I can describe my exact scenario and usually get pointed in the right direction. Sometimes the LLM just wastes my time by very confidently telling me some function/library that solves my exact problem exists when in fact it doesn't.
Second, IntelliJ's local LLM is sort of a smarter autocomplete. It makes some outright wrong suggestions, but when there's areas where I have to do a lot of repetitive tasks that follow a simple pattern (like for instance, mapping fields from one type of object to another), it does a pretty good job of making correct suggestions. I definitely appreciate it but it's certainly not doing things like writing a significant portion of code in my style.
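To illustrate the kind of repetitive mapping meant here, a toy Python example (both classes invented); once the pattern is established, the remaining lines are exactly what autocomplete tends to fill in:

    # Toy example of repetitive field-mapping boilerplate that autocomplete handles well.
    from dataclasses import dataclass

    @dataclass
    class UserRecord:          # e.g. what the database layer returns
        id: int
        first_name: str
        last_name: str
        email: str

    @dataclass
    class UserDto:             # e.g. what the API layer exposes
        id: int
        full_name: str
        email: str

    def to_dto(rec: UserRecord) -> UserDto:
        # After the first field or two, the rest is the kind of line-by-line
        # completion the comment is talking about.
        return UserDto(
            id=rec.id,
            full_name=f"{rec.first_name} {rec.last_name}",
            email=rec.email,
        )

    print(to_dto(UserRecord(1, "Ada", "Lovelace", "ada@example.com")))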
Seriously. It’s like half of the people in this thread are living in a completely different world.
And this is coming from someone who uses LLMs daily at the subscription, API (VS Code and 3 Next.js apps), and local level. I have a custom LangChain stack, a prompt repo, you name it. And regardless of how little or how much I use what I have, or what soup-du-jour prompt or process I try (from "keep it simple" to prompt enhancers), I can’t say it’s made a meaningful difference in my life. Even with all of the customization and planning.
It’s a great search engine though.
Would it look like such a good search engine if the actual search engines hadn't progressively broken themselves over the last 15 years?
I swear half the time when I use it to look up the nuances of system API stuff, it's replaying forum, mailing list or Stackoverflow conversations that Google ought to be able to find but somehow can't.
> All I hear is anecdotal statements from people claiming LLMs have made them some percent more productive. Yet few actually say how or demonstrate it.
It's very difficult to measure productivity of most people, certainly most people in office jobs, so while you can have a gut feeling that you're doing better, it's no more measurable than pre-AI individual productivity measurement was
It’s not really about objective measurements, but practical applications. Like "try this in the following manner and compare it to your previous workflow." Sensible advice like that found in The Pragmatic Programmer.
Sure, so it's always going to be anecdotal. That doesn't mean the benefits don't exist, just that they can't be objectively measured. Just like we can't objectively measure the output of a single knowledge worker, especially output on a single day.
I have a similar experience. Tried to use it for real work and got frustrated by the chat’s inability to say “I don’t know”. It’s okay for code snippets demonstrating how something can be used (stack overflow essentially), also code reviews can be helpful if doing something for the first time. But they fail to answer questions I’m interested in like “what’s the purpose of X”.
I fixed the hinge in my oven by giving perplexity.com the make and problem. I saved an hour on the phone calling people to organise a visit some time in the next week.
Maybe you should stop using the Ai slop tools that don't work?
Very likely you would have been as successful without AI to fix your oven hinge yourself, there is tons of content about that online.
No. I'd already spent 30 minutes looking at how to solve it myself. The search on Perplexity was a hail mary before I started calling handymen.
And Henry Ford would reply: "Who is going to buy the cars?"
We have been living in a fake economy for quite some time where money is printed and distributed to the "tech" sector. Which isn't really "tech", but mostly entertainment (YouTube, Netflix, Facebook, ...).
Growth of the economy means nothing. The money that has been printed goes to shareholders. What the common man gets is inflation and job losses.
If you want to grow the real economy, build houses and reduce the cost of living.
> If you want to grow the real economy, build houses and reduce the cost of living.
Yes, I wonder why it is so hard for Western countries to understand that there's no future in a place where housing is more expensive than the average salary. It may look cool for a few years, until most people have left or are living on the streets.
"there's no future in a place where housing is more expensive than your average salary."
Don't get me wrong, everyone wants cheaper housing, just not for their own house.
Maybe it's time we stop seeing housing as an investment and more as a place to shelter oneself from the elements. One of the core pillars of survival.
I for one would love it. If I have to sell housing then I have to buy housing, it's not a benefit to me unless I reduce my quality of life.
Plenty of housing. The problem is, people want cheap housing in places where everyone wants to live. I don't think that will happen any time soon.
This is nonsense that spreads because of the North American style of housing. If you're talking about sprawling suburban houses, then you're right. But big cities have provided reasonable housing for lots of workers for centuries. The only thing you need is to build more apartments in the cities that have an excess of job positions.
No, you can't just "build more apartments". For these new inhabitants you will need more grocery stores, more bus/subway stops and overall transportation, more hospitals, more firefighters, more restaurants, more gyms, more tennis courts, more of everything.
Of course. Big cities with all this infrastructure are nothing new. They existed in the past and are alive and well in Asia and other parts of the world. Only in North America do we have this bizarre world where it seems like a strange thing to build cities and provide infrastructure for workers!
There is basically no large city outside of sub-Saharan Africa, and maybe the subcontinent, with that development style and anything even approaching a sustainable 2.1 total fertility rate.
There is no cheap housing anywhere in the entire state of California. In the worst and poorest parts of the state, where there are basically no jobs or anything, the housing is still way more expensive than anyone can afford.
So move out of there. Plenty of cheap housing in the country.
A friend tried to tell me China has a real estate crisis because the value of houses is dropping due to building too many, and people are losing on their investments. I asked him if he is sure that cheap and available housing is a crisis.
Everyone in the industry losing their shirts and going out of business is a crisis. It happened 15 years ago in the US and we still haven't made it back to mid 90s level of housing starts.
You should be curious why Nadella is looking for the world to grow at that rate. That’s because he wants Microsoft to grow into $500B/year in revenue by 2030, and it will be challenging without that economic growth to grow into that target. You can grow into a TAM, try to grow or broaden the TAM, or some combination of both. Without AI, it is unlikely the growth target can be met.
https://www.cnbc.com/2023/06/26/microsoft-ceo-nadella-said-r...
Annual growth rates during the Industrial Revolution were way lower than 10%. In the 18th century growth was well below 1%; during the 19th century it averaged 1-1.5% (the highest estimates go up to 3% annual growth for certain decades close to 1900).[0][1][2]
Some regions or sectors might have experienced higher growth spurts, but the main point stands: the overall economic growth was quite low by modern standards - even though I don't think GDP numbers alone adequately describe the huge societal changes of such sustained growth compared to agrarian cultures before the Industrial Revolution.
[0] https://web.archive.org/web/20071127032512/http://minneapoli... [1] https://www.bankofengland.co.uk/explainers/how-has-growth-ch... [2] https://academic.oup.com/ereh/article/21/2/141/3044162
It also gets all of these things wrong, like not paying attention to models of toilets and quirks for their repair, often speaking with an authoritative voice and deceiving you on the validity of its instructions.
All of the things you cite are available via search engines, or better handled with expertise, so you know how much of the response is nonsense.
Every time I use AI, it's a time waste.
Every time I contact an enterprise for support, the person I'm talking to gets lots of things wrong too. It takes skepticism on my part and some back and forth to clean up the mess.
On balance AI gets more things wrong than the best humans and fewer things wrong than average humans.
The difference is that a human will tell you things like "I think", "I'm pretty sure" or "I don't know" in order to manage expectations. The LLM will very matter-of-factly tell you something that's not right at all, and if you correct them the LLM will go and very confidently rattle off another answer based on what you just said, whether your were telling it the truth or not. If a human acted that way more than a few times we'd stop asking them questions or at least have to do a lot of "trust but verify." LLMs do this over and over again and we just kind of shrug our shoulders and go "well they do pretty good overall."
I can't count the number of times I've had a support person confidently tell me something that is obviously not applicable to my problem and makes completely flawed assumptions about cs/physics/networking/logic.
I get a lot of correct answers from llms, but sometimes they make shit up. Most of the time, it's some function in a library that doesn't actually exist. Sometimes even the wrong answers are useful because they tell me where to look in the reference docs. Ask it to search the web and cite sources, makes it easier to verify the answer.
I don't appreciate what's going on with AI art and AI generated slop, but the idea that they aren't a useful tool is just wild to me.
I'm not saying it's not useful, I'm saying that we hold humans giving us answers to a much higher standard than LLMs.
AI is a lossy data compression technique at best. One can always tell when an AI cheerleader/ex-blockchain bro has hitched their financial wagon to this statistics-based word-vomit grift.
Please elaborate, preferably without breaking HN guidelines about dismissive name-calling
What is your personal productivity metric by which you have seen more than a 10% increase? More money earned, less money spent, fewer working hours for the same income, more leisure time? It needs to be something in aggregate to mean something related to what Nadella meant. There are many individual tasks an LLM system can help with. But there are also many ways for those gains to fail to aggregate into large overall gains, both on a personal level and on corporate and economy-wide levels.
Going to safely assume you've never worked at an enterprise.
Because improving the productivity of every employee by 10% does not translate to the company being 10% more productive.
Processes and systems exist precisely to slow employees down so that they comply with regulations, best practices etc rather than move fast and break things.
And from experience with a few enterprise LLM projects now they are a waste of time. Because the money/effort to fix up the decades of bad source data far exceeds the ROI.
You will definitely see them used in chat bots and replacing customer service people though.
I think the 'grow at 10%' refers to the incremental part of the entire world/market.
During the industrial revolutions (steam/electricity/internet), the world was growing: there were trains, cars, Netflix.
Business grew along with productivity; even so, we lived through two world wars and dozens of economic crises.
But now it is very different: when you repair the tank with an LLM's help, the labour value of repairers is decreased and no additional value is produced.
There's a very simple thought experiment about the result of productivity growth alone:
Let's assume robotics reaches an extremely high level, so that all human work can be reduced to 1/100 with the help of robots. What will happen next?
You’re describing exactly what happened during both the Industrial Revolution and the advent of computer automation.
Prior to computerization and databases, millions of people were required for filing, typing, and physically transporting messages and information. All of those jobs, entire fields of work were deleted by computerization.
Even "computer" was originally a job title. For a person.
> Let's assume robotics reaches an extremely high level, so that all human work can be reduced to 1/100 with the help of robots. What will happen next?
We work 35 hour years instead of 35 hour weeks?
Lol, ever the optimist I see.
It's always worth reminding people that wealth accumulation in the insanely rich isn't the only option
How close do we need to be for you to help a brother out? Feeling seriously unsupp0rted right now
Who suddenly knows how to measure developer productivity? I thought this was impossible.
Unless you’re producing 10% more money with AI you’re not doing shit.
Or fix a leaking faucet.
Time to sort NVDA?
A fellow degenerate gambler I see. The market can remain irrational longer than you can remain solvent, trade with caution. Being early is the same as being wrong.
A common hypothesis for why Nvidia is so hot is because they have an effective monopoly on the hardware to train AI models and it requires a crap ton of hardware.
With DeepSeek it’s been demonstrated you can get pretty damn far for a lot cheaper. I can only imagine that there are tons of investors thinking it’s better to invest their dollars in undercutting the cost of new models than in billions of dollars of hardware.
The question is, can Nvidia maintain their grip on the market in the face of these pressures. If you think they can’t, then a short position doesn’t seem like that big of a gamble.
It’s effectively a software moat with respect to GPU programming; there’s nothing stopping AMD from catching up besides insufficiently deep pockets and engineering culture.
Not sure why AMD’s software side gets so much flak these days. For everything other than AI programming, their drivers range from fine to best in class.
I have an AMD minipc running linux that I use for steam gaming, light development, etc. The kernel taint bit is off.
There is one intel device on the pci/usb buses: wifi/bt, and it’s the only thing with a flaky driver in the system. People have been complaining about my exact issue for something like 10 years, across multiple product generations.
Nobody who controls the purse strings cares about the kernel taint bit if their model doesn’t train, if they’re burning developer time debugging drivers, if they have to burn even more dev time switching off of cuda, etc.
If AMD really cared about making money, they would’ve sent MI300s to all of the top CS research institutions for free and supported ROCm on every single product. Investing any less than Nvidia, the trillion-dollar behemoth, is just letting big green expand its moat even more.
As I said, other than AI. The management made a big bet on crypto when nvidia made a big bet on AI.
That didn’t work out all that well in the medium term (it did in the short term), though it gave them runway to take a big chunk of intel’s server market.
Whether or not that was a good move, it’s not evidence of some engineering shortcoming.
A short position is always a gamble, because you could lose more than everything.
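To make that asymmetry concrete, here's a toy P&L comparison in Python (the numbers are made up for the example, obviously not advice): a long position can at worst go to zero, while a short's loss keeps growing as the price rises.

    # Toy illustration of why a short can lose "more than everything".
    # All prices here are hypothetical.
    shares = 100
    entry = 100.0  # price when the position is opened

    def long_pnl(exit_price):
        # long: you can lose at most shares * entry
        return shares * (exit_price - entry)

    def short_pnl(exit_price):
        # short: the loss is unbounded as the price keeps rising
        return shares * (entry - exit_price)

    print(long_pnl(0.0))     # -10000.0, the worst case for the long
    print(short_pnl(250.0))  # -15000.0, already more than the position was worth
    print(short_pnl(400.0))  # -30000.0, and it only gets worse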
Highly regarded people unite :D
More seriously though: unless you have privileged information or have done truly extensive research, do not short stocks. And if you do have privileged information, still don't short stocks, because unless you have enough money to defend yourself against insider-trading charges like Musk and his ilk, it's not going to be worth it.
It's perfectly reasonable to determine that a particular high growth stock is not going to perform as well going forward, in which case I'd shift allocation to other, better candidates.
Generally, being long equities is a long term positive expected value trade. You don't have to time the market, just be persistent. On the other hand, as you correctly alluded to, shorting equities requires decently precise timing, both on entry and exit.
I think it's probably foolish to short Nvidia until there are at least echoes of competition.
AMD wants it to be them, but the reality is that the moat is wide.
The closest for AI is Apple, but even then, I’m not certain it's a serious competitor; especially not in the datacenter.
For gaming there’s practically no worthwhile competition. Unreal Engine barely even fixes render bugs for Intel and AMD cards, and I know this for a fact.
FD: I’m recently holding shares in Nvidia due to the recent fluctuation, and my own belief that the moat is wider than we care to believe, as mentioned.
Using “bubble” sort? ;)
The combination of high and climbing price to earnings ratios for a smaller subset of tech firms, outsize retail investment in tech (cloaked by people buying crypto), and macro environment factors like high interest rates stimulating risky lending has me swapping this bubble toward the top of the list.
See further: https://www.morningstar.com/news/marketwatch/20250123167/six...
Bubble sort is very resource-hungry...
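For anyone who hasn't written the joke's namesake since school, a minimal Python sketch; the nested loops are exactly why it's "resource-hungry" (O(n^2) comparisons in the worst case):

    def bubble_sort(xs):
        xs = list(xs)  # don't mutate the caller's list
        n = len(xs)
        for i in range(n):
            swapped = False
            for j in range(n - 1 - i):
                if xs[j] > xs[j + 1]:  # adjacent pair out of order: swap it
                    xs[j], xs[j + 1] = xs[j + 1], xs[j]
                    swapped = True
            if not swapped:            # no swaps means already sorted, stop early
                break
        return xs

    print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]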
it's very dangerous; shorting in a market where a gamma squeeze can occur is extremely risky
other markets like Taiwan are preferable
The "elephant in the room" is that AI is good enough, it's revolutionary in fact, but the issue now is the user needs more education to actually realize AI's value. No amount of uber-duper AI can help an immature user population lacking in critical thinking, which in their short shortsightedness seek self destructive pastimes.
It's not "good enough", it's mostly overhyped marketing garbage. LLM models are mostly as good as they're going to get. It's a limitation of the technology. It's impressive at what has been done, but that's it.
It doesn't take billions of dollars and all human knowledge to make a single human level intelligence. Just some hormones and timing. So LLMs are mostly a dead end. AGI is going to come from differenst machine learning paradigms.
This is all mostly hype by and for investors right now.
It's pretty good for a whole class of problems that humans currently do.
Direct-response LLMs are quite mature, yes (e.g. 4o).
LLM-based MoE architectures with some kind of reasoning process (Claude 3+, the o series, R1, Grok 3 with thinking) are the equivalent of v0.2 at the moment, and they're showing a lot of promise.
I spent more time yesterday trying to get "AI" to output runnable code, and retyping, than if I had just buckled down and done it myself.
But I don't think you can blame users if they're given an immature tool, when it really is on companies to give us a product that is obvious to use correctly.
It's not an exact analogy, but I always like to think of how doors are designed - if you have to put a sign on it, it's a bad design. A well-designed door requires zero thought, and as such, if "AI" usage is not obvious to 99% of the population, it's probably a bad design.
Think of it like you're talking to someone so smart that they answer before you've finished explaining and get the general idea wrong, or who seems really pedantic, where your misplaced use of a past-tense verb that should have been present tense causes them to completely reinterpret what you're talking about. Think of our current LLMs like idiot savants, and trust them as much.
I don't use AI to write code if that code is not short and self-contained. It's great at explaining code, great at strategy and design about code. Not so much at actually implementing code larger than 1/4 to 1/3 of its output context window. After all, it's not "writing code", it's statistically generating tokens that look like code it's seen before. It's unknown whether the training code from which the LLM is statistically generating a reply ever actually ran; it could have been pseudocode explaining a computer science concept. We don't know.
People seem to want a genie that does what they are thinking, and that is never going to work (at least with this technology). I'm really talking about effective communication, and understanding how to communicate with a literally unreal, non-human construct, a personality-theater-enhanced literary embodiment of knowledge. It's subtle, and it requires effort on the user's side, more than it would if one were talking to a human expert in the area of knowledge you operate in. You have to explain the situation so the AI can understand what you need, and developers are surprisingly bad at that. People in general are even worse at explaining. Implied knowledge is rampant in developer conversation, and an LLM struggles with ambiguity, such as implied references. There are too many identical acronyms across different parts of tech and science. It does work, but one really needs to treat LLMs like idiot savants.
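A minimal sketch of what "explaining the situation" can look like in practice, assuming the OpenAI Python SDK; the task, function name and constraints below are made up for illustration. The point is spelling out the stack, the constraints, and a small self-contained scope instead of relying on implied knowledge:

    # Sketch only: the service, function name and constraints are hypothetical.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    prompt = (
        "Context: Python 3.12 service using FastAPI and SQLAlchemy 2.0 (async). "
        "Task: write ONE function, deactivate_stale_users(session, days), that "
        "marks users inactive when last_login is older than `days` days. "
        "Constraints: no new dependencies, no schema changes, include type hints. "
        "Return only the function, with no surrounding application code."
    )

    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    print(resp.choices[0].message.content)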
Oddly enough, Databricks's query generator is named Genie. It's halfway decent, too.
"You're holding it wrong" only goes so far.
Remember that there is a lot of nuance to these sorts of deals.
I don’t have any domain knowledge, but I recently saw an executive put in restaurant reservations at five different places for the night of our team offsite, so he would have optionality. An article could accurately claim that he later canceled 80% of the team's eating capacity!
But if it was reported in the press that your team was going to eat 5 meals at the same time before it was revealed that it was just an asshole screwing over small businesses, then that correction in eating capacity should be reported.
"should "
But often not.
That was the point of the parent: how this is being reported is a bit skewed.
And also there is the problem that nobody reads corrections. Lies run around the globe before the Truth has tied its shoelaces, or some quote like that.
I've read the first 2 paragraphs 5 times and I still can't tell if Microsoft was renting datacenters and paying for them, or if Microsoft was leasing out datacenters and decided "no more AI data centers for you, 3rd parties".
And digging further into the article didn't help either.
The first one, they were acquiring datacenter space.
The thing that is driving me crazy in all these threads is that we invariably have a bunch of programmers saying "I use it for coding and I would never go back", but this is orthogonal to the question of whether it's a good business.
If you use gen-AI but don't pay the real cost, you're not really a data point on the economics of it. It could be useful to programmers and still be a terrible business if the cost of giving you that usefulness is more than you (or your employer) would pay.
My impression is that costs will continue to go down. Large investments are unlikely to be profitable for these businesses. Whoever is dumping billions into this is unlikely to get their money back. The new tooling, models, discoveries seem to be commoditized within months. There are no moats. If things keep going this way there will never be a point where employers (or anybody for that matter) have to pay the real cost.
Microsoft said they were going to spend $80 billion on AI data centers, and they confirmed this again, so there is no 'scaling back'.
Speculation:
I suspect that after they observed the incredible speed with which xAI was able to build out leading-edge AI infrastructure for Grok in Memphis themselves, far faster than what the traditional data-center providers could offer, Microsoft might have had an epiphany and went 'wait, why are we not doing this?'
"Why aren't we polluting a poor disenfranchised population with the cheapest, dirtiest solution we can find?"
somehow, Memphis, an ancient Egyptian royal center and core of a grand imperial slave state, is such a fitting name for these developments
Ancient Egypt was neither imperial nor a slave state; though it did have slavery, most Egyptians were not slaves.
https://en.wikipedia.org/wiki/Slavery_in_Egypt https://en.wikipedia.org/wiki/Slavery_in_ancient_Egypt
These pages are somewhat out of date, but they confirm what I said: "Ancient Egypt was a peasant-based economy and it was not until the Greco-Roman period that slavery had a greater impact."
no - by not specifying which Kingdom was being referred to, it revealed an incomplete, obstinate, reactive answer (as I have also given many times). So my remedy today was to read a bit before further damage is done. Non-English pages might also be useful on this massive historical topic.
Was the incomplete, obstinate, reactive answer in this case yours or mine?
the original comment was "Memphis the name, is so applicable here, due to slavery and imperial powers" .. I'll stay with that
Meanwhile: "Apple Says It Will Add 20k Jobs, Spend $500B, Produce AI Servers in US" https://www.bloomberg.com/news/articles/2025-02-24/apple-say...
https://news.ycombinator.com/item?id=43158168
This has nothing to do with supply/demand, and everything to do with geopolitics.
You think they would spend 500B without thinking there will be any demand?
You're missing the key phrase. "Says it will". Companies, of course, say all sorts of things. Sometimes, those things come to pass. But not really all that often.
Apple said the same thing the last two election cycles. They seem to be eternally indicating they're investing multiple hundreds of billions in the US. What they actually followed through on is what I want to know.
It cost Apple $0 to say they'll spend $500B.
The 500B is just something to put on a press release.
If Apple can pull off "Siri with context," it will completely annihilate Microsoft's first mover advantage. They'll be left with a large investment in a zero-margin commodity (OpenAI).
Unfortunately Siri remains near useless at times even with Apple Intelligence™®
The "LLM Siri" hasn't been rolled out even in beta, estimates reckon 2026
https://www.macrumors.com/2024/11/21/apple-llm-siri-2026
Correct, Apple changing the UI before changing the backend might be one of the more stupid things I’ve ever seen.
The messiest launch ever. The renewed UI makes it easy to assume that the LLM-backed Siri is already here but just isn't much better than the old one. A marketing disaster.
Yes, although before full "LLM Siri," Apple promised an "enhanced" Siri with contextual understanding in iOS 18. The clock is ticking though—WWDC will be here before you know it.
Just like all the other voice assistants.
If history is our guide, that's never going to happen.
Apple will not beat Microsoft in any capacity here
Microsoft has all the context in the world just waiting for exploitation: Microsoft Graph data, Teams transcripts and recordings, Office data, Exchange data, Recall data(?), and, while not context per se, even the Xbox gaming data
> Apple will not beat Microsoft in any capacity here
I'm sure MS will provide AI to business, but if Apple get things right, they'll be the biggest provider of AI to the masses.
With a Siri that knows your email, calendar, location, history, and search history, with the ability to get data from and do things in third-party apps (with App Intents), and if it runs on your phone for security, it could be used by billions of consumers, not a few hundred million MS Office users.
"What was that restaurant I went to with Joan last fall?" "Send LinkedIn requests to all the people I've had emails from at company X."
Of course they could take too long or screw things up.
I wouldn't be sure about that.
Siri's success would greatly depend on app developers adopting intents. The major players are going to be hesitant to give Apple that much access to data - the EU may help push them that way, but even still, Microsoft, Google, Facebook, and others want their AIs to be the one people use.
Siri is also limited to Apple products, and while lots of people have iPhones, many of them still have a PC, where Siri doesn't work.
Companies are also very concerned about employees accidentally or purposefully exfiltrating data via AI usage. Microsoft is working hard to build in guardrails there and Intune allows companies to block Siri intents, so Apple would have to do a lot to reassure corporate customers how they'll prevent Siri from sending data to a search engine or such.
But you might be right. I think it's way too early to tell, and that's why so much money is being poured into this. All the major players feel that they can't afford to wait on this.
A lot of developers have already adopted intents to support Shortcuts and existing Siri. There will be tremendous business pressure to be able to fit into a request like "Get me a car to my next appointment"
I don't think Microsoft legally has access to any Teams enterprise data like chats and recordings.
I’m sorry but what are you saying.
How are any of these unique competitive advantages over iCloud, App Store, Safari, and just generally more locked-in high margin mobile platform users than anyone?
If the money is in providing AI to businesses, to do things humans were previously paid to do - then Microsoft would be in a much better position than Apple, because they already have a big foothold whereas Apple has never really targeted business use.
I have serious doubts about this. Consider that Apple has somewhere around 2 billion users (a very, very optimistic estimate). This would be $250 per user - an utterly ridiculous number to spend on a set of features that nobody even uses. I think this is creative accounting to impress Trump and stave off the tariffs until his term ends.
Makes sense given what Nadella recently said:
- Self claiming some AGI milestone is "non-sensical benchmark hacking"
- The AI model race won't be winner-take-all
- The real benchmark is if the world is growing at 10%
MSFT is putting its focus on AI adoption in the app layer and "at-scale consumer properties"
Full conversation: https://www.youtube.com/watch?v=4GLSzuYXh6w
Old news and incorrect the way it is phrased : https://x.com/dylan522p/status/1894050388145508586
I'm not sure why there is so much poor reporting on accelerator demand recently, it seems there are a lot of people looking to sell a message that isn't grounded in reality.
Quite a lot of money is at stake in the control for retail investors' minds.
Lots of stupid takes get amplified by people who lack background. Look at the recent Intel/TSMC/Broadcom merger rumors. The story was "Canada could join the US" level of stupid to anyone with experience anywhere near chip fabrication but it still circulated for several days. Also, look at what it did to the stock price of INTC. Lots of money made and lost there.
The hype starts to head down towards reality.
No it's one level deeper. The exclusive claim to the "hype" is heading down towards reality.
Seems consumers just hate every product with AI functionality.
More like consumers hate every product where AI gets shoved in their face without any thought and --
Okay, maybe you're right.
The consumer market doesn't matter much here, imo.
I think most companies know b2b is the most lucrative segment for AI because it reduces one of their top costs - people. Companies selling AI are basically just selling the ultimate automation tool, which (in theory) is massive value for companies. Having a nice consumer product is a side gig.
Here’s the TD Cowen research note:
https://www.threads.net/@firerock31/post/DGbK1VkyKlp/in-late...
My theory:
- OpenAI and Oracle partnership: when Microsoft is 'at capacity', more demand can go to Oracle, so Microsoft no longer needs to rapidly add capacity with leases (which likely have a lower ROI than Microsoft-owned and -operated centres).
- Longer-term investments are still going ahead: Microsoft isn't cutting back on capex investment. They don't want to lease if they don't have to, but long term they still see a huge market for compute that they will want to be a key supplier of.
I think Microsoft's goal here is to focus on expanding their capex to be a long-term winner instead of chasing AI demand at any cost in the short term. Likely because they think they're already in a pretty strong position.
key piece of info at the end, looks like they are leaving the spending on datacenter to OpenAI
> Microsoft’s alliance with OpenAI may also be evolving in ways that mean the software giant won’t need the same kind of investments. In January, OpenAI and SoftBank Group Corp. announced a joint venture to spend at least $100 billion and perhaps $500 billion on data centers and other AI infrastructure.
On the Dwarkesh podcast this week, Nadella was just commenting on how they expected to benefit from reduced DC rental pricing and were preparing for Jevons paradox to max out capacity. I guess they are calculating a ceiling now.
It’s additive math; the question is whether the overall is a plus or a minus. There's always gonna be some push and pull.
“ TD Cowen posited in a second report on Monday that OpenAI is shifting workloads from Microsoft to Oracle Corp. as part of a relatively new partnership. The tech giant is also among the largest owners and operators of data centers in its own right and is spending billions of dollars on its own capacity. TD Cowen separately suggested that Microsoft may be reallocating some of that in-house investment to the US from abroad”
So, Microsoft’s move to ditch leases for “a couple hundred megawatts” of data center capacity, as noted in TFA, is a pretty intriguing shift, and not just a random cutback. Per some reports from Capacity Media and Analytics India Magazine, it looks like they’re pulling some of their international spending back to the U.S. and dialing down the global expansion frenzy. For context, that “couple hundred megawatts” could power roughly 150,000 homes, going by typical U.S. household consumption (rough arithmetic sketched after this comment), so it’s a decent chunk of capacity they’re letting go.
IMO it's not a full-on retreat—Microsoft’s still on track to drop $80 billion this fiscal year on AI infrastructure, as they’ve reaffirmed. But there’s a vibe of recalibration here. They might’ve overcooked their AI capacity plans, especially after being the top data center lessee in 2023 and early 2024. Meanwhile, OpenAI—Microsoft’s big AI partner—is reportedly eyeing other options, like Project Stargate with SoftBank, which could handle 75% of its compute needs by 2030 (per The Information report). That’s a potential shift in reliance that might’ve spooked Microsoft into rethinking its footprint.
Also it seems they're redirecting at least some costs - over half that $80 billion is staying stateside, per Microsoft’s own blog, which aligns with CEO Satya Nadella’s January earnings call push to keep meeting “exponentially more demand.” It’s a pragmatic flex—trim the fat, dodge an oversupply trap, and keep the core humming. Whether it’s genius or just good housekeeping, it shows even the giants can pivot when the AI race gets too hot.
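The 150,000-homes figure above checks out as a back-of-envelope number; here's a quick Python sanity check, assuming the commonly cited ballpark of roughly 10,800 kWh per U.S. household per year (an assumption of mine, not from the article):

    # Rough sanity check on "a couple hundred megawatts ~ 150,000 homes".
    # Assumption: ~10,800 kWh/year per average US household,
    # i.e. about 1.2 kW of continuous average draw.
    capacity_mw = 200
    avg_home_kw = 10_800 / (365 * 24)          # ~1.23 kW average household draw
    homes = capacity_mw * 1_000 / avg_home_kw
    print(f"{homes:,.0f} homes")               # ~162,000, same ballpark as 150k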
By itself this doesn’t mean much. If they’re reallocating funds to different data centers or building their own then this could be a wash.
> Wall Street stepped up its questions about the massive outlays after the Chinese upstart DeepSeek released a new open-source AI model that it claims rivals the abilities of US technology at a fraction of the cost.
And that's the crux of it and that's also why DeepSeek was such a big deal.
This could simply be datacenter deals started years ago that they are pulling out of now that they have larger AI-optimized DCs being commissioned in places better suited to faster and larger power availability.
Related discussion on reddit:
https://old.reddit.com/r/stocks/comments/1ix27vd/microsoft_c...
OpenAI is pivoting away from MS. MS also has their own internal AI interests. Need to frame this for investors in a way that doesn't look like we are losing out. "Nadella doesn't believe in AI anymore." Done and done.
a lot of this sounds like the normal course of business and stuff that msft does all the time. i don't understand the openai drama speculation on here. msft continues to have right of first refusal on openai training and exclusivity on inferencing. if someone else wants to build up openai capacity so that openai spends money on msft for inferencing, msft would be thrilled. they recognize revenue on inferencing, not training, at the moment, so it's all upside to their revenue numbers
It's sad; people are already recklessly rearranging business logic via AI in key medical software systems, due to global outsourcing and linguistic reasons.
I have so much intense hatred for pg and Sama right now I rarely come to this shit show of a site
Surely smart people exist, right?
My theory is it has something to do with the new quantum tech and contrived.
> a couple of hundred megawatts.
So, something approximating an entire region or 10~100k H100s.
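A rough back-of-envelope in Python, with assumptions that are mine rather than anything from the article (per-GPU server power and PUE vary a lot by deployment):

    # How many H100s might "a couple hundred megawatts" feed?
    # Assumptions: ~700 W TDP per H100 SXM, roughly 1.3 kW per GPU once the
    # rest of the server (CPUs, NICs, fans) is counted, and a facility PUE
    # of ~1.3 for cooling and power-conversion overhead.
    capacity_w = 200e6
    per_gpu_server_w = 1_300
    pue = 1.3
    gpus = capacity_w / (per_gpu_server_w * pue)
    print(f"~{gpus:,.0f} H100s")   # ~118,000, consistent with the 10k-100k+ ballpark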
I guess Satya is not good for his $80B (for project Stargate) any more.
He is: this refers to spending Microsoft had planned on its own outside of Stargate. It says so in the article.
In other words “Stargate” is itself a lie: no new spending, just a political repackaging of existing plans.
That's nothing new, politicians lie all the time.
Everyone's response when a politician announcing some supposed massive spending program should be to say "show me the appropriation bill and receipts, then we can talk."
Appropriation bills don't go as far as they used to, either...
Microsoft has committed $100B to Stargate. These cuts are meaningless by comparison.
Not surprised. You need massive amounts of compute for training LLMs from scratch, but practical uses of LLMs are increasingly focused on using existing models with minimal or no tweaks, which requires far less computing power.
I would like to note that Bloomberg pulls these types of FUD stories before every NVDA earnings release. The last one was the false report of Blackwell technical issues.
They have been doing it for years
"Bloomberg fined €5m over report of fake news release" - https://www.ft.com/content/32013b6a-202f-11ea-b8a1-584213ee7...
"Supermicro Statement on Bloomberg’s Claims" - https://www.supermicro.com/en/pressreleases/supermicro-state...
After Bloomberg reported a subpoena was sent to Nvidia and other unnamed technology companies...
"Nvidia Denies It Was Subpoenaed In Justice Department Antitrust Probe" - https://www.forbes.com/sites/antoniopequenoiv/2024/09/04/nvi...
Completely agree with you re: Bloomberg's somewhat shady history of reporting. However, in this case, the article is citing a research note written by TD Cowen equity research analysts.
For example, here's another article from MarketWatch citing this same research note -- https://www.marketwatch.com/story/the-research-note-thats-ra...
Leads me to lend more credence to the news even though OP is linking Bloomberg.
This is indeed worth noting.
Bloomberg seems to be a player in the market, or at least it is affected by players in the market.
It always publishes news in good rhythm with the market.
this article is just one reference; we still need to wait for Microsoft's announcement
so, historically speaking, who should i short? ;)
Unless you learn of a Bloomberg article before it goes out, you shouldn't.
when there's a scam, don't be scammed
I’ve worked in equity research, and an accusation of FUD is an accusation of intentionally malign behavior.
Getting reporting wrong is not the same as FUD.
"Years later, Bloomberg doubles down on disputed Supermicro supply chain hack story" - https://www.datacenterdynamics.com/en/news/years-later-bloom...
Sorry I forgot.
Yes, this is absolutely FUD.
This is insistence on spreading FUD for … some uncertain aim.
Perhaps it's hitting all these various firms in order to leverage their strong reputations, to cause the price to drop, allowing someone - perhaps Bloomberg himself - to make a profit off of it.
No no, I know I sound like a conspiracy theorist now. But my eyes were opened by the sharing of the story.
The fact that FT, WSJ, Fox or a million other sites haven’t latched onto this obvious scheme, is heinous, and once again a sign of our completely captured media.
When you double down on the incorrect reporting instead of retracting or correcting it, as Bloomberg did with their ludicrous spy chip story, it becomes FUD regardless of your initial intent.
To what purpose? Enron spread FUD to further its goals.
We can INVENT goals here, to FIT the conclusion we have already reached - that Bloomberg is spreading FUD.
But this is because we have already concluded, and are now finding things to fit our conclusion.
what happened to Jevons paradox, then?
Once companies start charging, and those now paying something like $200 a month realize it isn't as game-changing as Silicon Valley wants you to believe, the AI car is going over the cliff - driven by its AI agent, of course.
No stargate?
Maybe stargate is the reason why MS doesn't need these leases?
I don’t think they are a compute partner for stargate.
IIRC, MS has right of first refusal on providing any new/necessary compute for Stargate.
But also IIRC, Stargate was well along before Trump's horse and pony show; also, no idea how much compute still needs to be purchased.
It's already done. Their AI is self-evolving now. No need for data centers anymore.
Yikes
I'm honestly in two minds on this one. On one hand, I do agree that valuations have run a bit too far in AI and some shedding is warranted. A skeptical position coming from a company like MSFT should help.
On the other hand, I think MSFT was trying to pull a classic MSFT move on AI. They thought they could piggyback on OpenAI's hard work and profit massively from it, and if they're now having second thoughts, that's better too. MSFT has mostly launched meh products on top of AI.
Man, bubble popping already?
Yep. Investors are shifting capital out of the market. Always happens. The little guys end up paying for the losses.
Is the AI Bubble already popping?
Do you think Big Tech will go down when it does or will it further consolidate and centralize power?
Given how things go when past bubbles have popped, this is likely to be "both" I think. Just not all at once
When the bubble pops you see things collapse
Then it becomes a feeding frenzy as companies and IPs get bought up on the cheap by whoever has a bit of money left
When the dust clears, some old players are gone, some are still around but weaker, some new players have emerged that resemble conglomerates of the old players, but overall a lot of the previous existing power is consolidated into fewer hands
I doubt it. MS has what they need in the OpenAI partnership. I think this is more likely just a reflection of the broader economic environment: going into a recession, cut investment and try to retain as much talent as you can afford to for the next few years.