They stopped hiring because after raising over a billion dollars their value dropped 80%, they can't raise any more money, and after operating for 19 years they finally had a profitable quarter in Q2 2024. They are in "burn the furniture to heat the cabin" mode.
Oh please, spare us the sanctimonious bravado. Klarna is a vulture-capitalist company that’s now refusing to hire people because it wants to automate away paying jobs so it can make more money for its vulture-capitalist leaders. Klarna struggling should make anyone below the millionaire line joyful.
They’re refusing to hire people because they’re not making as much money as they planned to. Covering it up with claims of AI is lame, but I don’t think it’s a terminal offense.
They can’t even answer their support tickets using AI.
We just had one waiting for over two months (the customer does multi-million revenue per month), and after escalating twice we got a link to the docs (which didn’t have the info) and found out they had renamed their online-checkout product to Kustom after selling it off.
If I were the CEO/PR team of a large tech company that wanted to downsize, I would probably also say things like "AI efficiencies drive long term profitability". That said, I suspect that in the short-to-medium term AI might increase salaries at the high end but reduce opportunities at the low end. And of course, companies which seriously embrace AI advances, but in sensible ways, will do best.
> And of course, companies which seriously embrace AI advances, but in sensible ways, will do best.
The use here of "sensible" exemplifies a good way to predict basically anything you want about the market so long as you don't reveal what "sensible" means in concrete terms.
What I mean concretely: embrace AI code assistants, generally allow employees to use AI tools, and use AI for some content production and customer service (preferably with human moderation and escape hatches).
What would count as non-sensible are company-wide mandates that everyone must jam AI into their work. I've heard stories about that sort of thing at certain big corps.
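To make the "escape hatch" idea concrete, here is a minimal sketch of the routing pattern (Python; the thresholds, keywords, and function names are hypothetical illustrations, not anything a specific vendor ships):

    from dataclasses import dataclass

    @dataclass
    class BotReply:
        text: str
        confidence: float  # model's self-reported confidence, 0..1

    HUMAN_KEYWORDS = ("human", "agent", "representative", "complaint")

    def route(message: str, bot_reply: BotReply) -> str:
        """Decide whether the bot answers or a human takes over."""
        # Escape hatch 1: the customer explicitly asks for a person.
        if any(k in message.lower() for k in HUMAN_KEYWORDS):
            return "human_queue"
        # Escape hatch 2: the model is not confident in its answer.
        if bot_reply.confidence < 0.7:
            return "human_queue"
        # Otherwise the bot replies; the exchange is still logged for review.
        return "bot_reply"

The point is that the human path is a first-class outcome, not an afterthought buried five menus deep.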
It should be noted that AI/LLM customer service bots/agents are generally very poorly received by customers. It’s not a good tactic if you like keeping your customers happy.
> Are you referencing particular data?

In the absence of positive data with a decent sample size, it's rational to substitute anecdotal data. And we all know using a chatbot with no brain sucks ass
The only "reasonable" would be supplying the tools to employees who want and are capable of making good use of them.
But make sure it never guides any downsizing or putting the screws on people to ever inhuman increases of productivity. Thus: it will never happen. Greed wins again
The first time I tried Klarna, their 'payment' workflow was actually a 'grant full access to all bank accounts' workflow in disguise. I remember thinking that this degree of misdirection must be illegal. Further interactions with them always felt sleazy.
Maybe they know the regulators will dismantle the company in the near future and are optimizing for extracting the maximum before their implosion?
Almost like PSD2's open banking part was a huge mistake. I love how my banks, payment providers and random financial apps can now ask for unscoped access, without any way to figure out what I have authorized in the past, what's still pulling my data and how I can revoke any of it.
Most OAuth providers have better transparency and control.
> PSD2 at least has built-in expiration. I have to re-authenticate every 3 months or so I think.

Problem is that the expiration and "reauth" are handled by the third-party provider (there's no longer an actual cryptographic re-auth step), and it's not like anyone is auditing this or is even incentivized to. It's pure security theater.
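For contrast, a properly scoped grant looks roughly like this in the standard OAuth authorization-code flow (a sketch; the endpoint, client id, and scope name are made up):

    import requests

    # Exchange an authorization code for a token that is limited to reading
    # one account's balance, and that expires, rather than blanket access.
    resp = requests.post(
        "https://bank.example/oauth/token",  # hypothetical token endpoint
        data={
            "grant_type": "authorization_code",
            "code": "AUTH_CODE_FROM_REDIRECT",
            "client_id": "my-app",
            "scope": "accounts:balance:read",  # explicit, narrow scope
        },
    )
    token = resp.json()
    print(token["scope"], token["expires_in"])  # what was granted, for how long

The complaint above is that in practice the PSD2 consent screens grant something much closer to "everything", with no user-facing registry of active grants.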
FWIW, after I gave Klarna a sneak peek into my banking account several years ago, I did a GDPR/DSGVO information request a few days later, and according to that, they did not store the transactions they had access to.
Not sure if the information was truthful, of course.
Nobody is auditing or policing those GDPR SAR responses, so there's little reason to be truthful. The most likely explanation, short of actual malfeasance, is that the ticket "include transaction data or data derived from it in GDPR data exports" is rotting in the Jira backlog to this day.
Yeah, I don't see why anyone with a bit of technical knowledge would use that over PayPal. I mean I'm all for destroying those evil ultra capitalist US corps but unfortunately our European counterparts are just complete rubbish more often than not.
If you don't offer wire transfer or PayPal, you lost me as a customer.
Ehm, PayPal isn't exactly better than Klarna, though. I have had only problems with both, but PayPal beats Klarna in money saved for me as a customer. Last time I used it, I got my money back and the business got reported as a scammer (not by me; by their algorithm). The business would already have sent the goods had it not been a bank holiday, so I nearly got the whole order, around 1800 euro, for free.
The only use case for me with Klarna is getting them to give me an invoice that I pay by SEPA transfer, 30 days after shipping. Actually a good system for me.
I recently bought something, then found it cheaper and negotiated the seller down to match; the invoice I received reflected the cheaper price, and the payment only just went out.
Nearly every CEO I’ve talked to in the past three months (only a handful, admittedly) has said something along these lines, if not quite as extreme.
They genuinely believed the current state of the art is polished and powerful enough to replace huge swathes of coders, designers, writers, etc., leaving only a middle-management supervisory layer and some lower-level grunts to herd the bots. Or it's a good PR ruse to scale back growth ambitions, as others have stated. But most of it appeared to be genuine belief in "AI".
Some of these companies will not survive the inevitable reality check that is coming. I’m only sad for the people the glassy-eyed CEOs are shitcanning.
For some subset I am certain AI is just the latest excuse to justify bad existing business practices. Much safer to say the world has changed due to AI and that's why you're changing your hiring plan instead of "our plan was bad and we dramatically overhired and misjudged the post-covid world"
> Sounds like 20 years ago when everyone was moving IT to India.

I work for a healthcare company, and an email from a business customer found its way to me today asking if we use or plan on using "AI".
The person asking this absolutely makes business decisions and absolutely has no idea what they are talking about. It was clear from the email that all they know is "AI good", "No AI bad".
It would be like outsourcing to India if even businesses that make absolutely no sense to outsource, like restaurants, were talking about outsourcing to India for no reason other than "outsourcing good". Of course, not actually doing anything; just the appearance and lip service, so as not to appear to fall behind the times.
> Managers and executives get rewarded for pushing AI, and CEOs get punished for not having an AI strategy.

Exactly this. It's why I'm having trouble getting too worked up about the notion of AI replacing all the jobs (though I think it will end up, for a time, reducing the number of early-career jobs available, which has its own social ramifications).
> I wonder if the presidential candidates of the 2030s will be rallying against AI taking all the jobs.

No, the media will have had at least eight years of feeding us: "This was inevitable. It's how the economy naturally works. This is our new reality. Nothing could have been done."
> Boeing all over again.

Among the frightening parts of that is the time, and the human lives, it has taken between Boeing destroying its engineering culture and starting to pay the price for that destruction.
Time is a circle. All I know is that I wouldn't use Klarna's products. I work in fintech and the number of mistakes I've seen since devs started relying on AI has accelerated.
In theory the market will sort this out. Overly stingy firms will reap the rewards of lower quality and be outcompeted in their industries; overly generous ones will run out of money; the "just right" ones will exceed expectations and be rewarded. Where those lines are drawn probably varies by firm. I frankly don't know how the company that invented high-interest monthly payments for shoes will stay in business long term, so maybe they really don't need to hire anyone anyway.
Yeah, the whole BNPL space is very strange from my perspective -- it's basically just small-dollar loans to borrowers you don't underwrite, without collateral (you can't repossess my socks), in exchange for super-high merchant fees. I was confident they'd all get ruined in the first real recession, as that would be the first debt to go unpaid. I guess 2020 wasn't a real recession, since it lasted all of 5 minutes.
It's just re-inventing credit cards, but with significant structural disadvantages leading to higher costs.
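Back-of-the-envelope, the unit economics described above look something like this (every number here is an assumption for illustration, not Klarna's actual pricing):

    # Hypothetical pay-in-4 purchase, funded by a merchant fee rather than
    # consumer interest.
    order_value  = 100.00
    merchant_fee = 0.055   # assume ~5.5% merchant discount rate
    default_rate = 0.03    # assume 3% of loan volume is never repaid
    funding_cost = 0.01    # assume ~1% cost of capital per loan

    revenue = order_value * merchant_fee
    losses  = order_value * (default_rate + funding_cost)
    print(f"gross margin per 100 order: {revenue - losses:.2f}")  # ~1.50

A margin that thin gets wiped out by even a small recession-driven rise in defaults, which is the structural disadvantage relative to credit cards, where interest income scales with the risk taken.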
I find it fascinating how the same people can be selfishly interested in their own finances, find bigger numbers (those that a CEO has to care about by mandate) dirty, and square that.
I think you're suggesting there is some cognitive dissonance there. I think there's some truth to that, but it's also ignoring a true difference.
Personal finances can be viewed (somewhat incorrectly) as not being zero-sum. Me making more for my work or investments seems like it doesn't take from anyone else.
While a CEO deciding that AI should handle as much of the labor in a company as possible seems like a decision that benefits the company and its shareholders directly at the expense of its workers.
I think both sides here are in actual fact zero-sum, but when the worker makes more personally, the losers are diffuse and marginally affected (the company, its shareholders, consumers and customers experiencing higher prices, etc.), whereas the company's actions hit people who can be named directly and are affected terribly.
It's unfortunately the difference between stealing 5¢ from 10,000,000 people and stealing $100k from 5 people: the same $500k in total, with very different victims.
I don't think I am. You put the facts pretty succinctly. I just evaluate them differently.
Having thousands of smart humans optimizing for their personal goals in their individual ways, at the expense of company goals, is an issue that exists in every company, bankrupts companies, and is genuinely frightening to deal with as a CEO.
People are not obviously more noble when working towards personal goals instead of company goals, and a lot of people working towards their personal goals instead of company goals is a serious issue for any company. That there is no single entity and no one big number to deal with actually makes it much more powerful and scary.
Then you might think they'd be scared of handing over the keys to their company to an inscrutable AI working towards OpenAI's goals, but I guess the money is too good.
> While a CEO deciding that AI should handle as much of the labor in a company as possible seems like a decision that benefits the company and its shareholders directly at the expense of its workers.
Many businesses have low profit margins and very price-sensitive customers. There is a reasonable concern that if they don't follow competitors in efforts to reduce pricing, the whole business might fail.
See the outsourcing of textile and other manufacturing to Asia. See grocery stores that source dairy and meat from local producers only, rather than national operations with economies of scale. See insurance companies where the only concern is almost always the lowest premium, not the quality of customer service or claims resolution.
> I find it fascinating that we assume that employers should care more about employees than employees care about employers.

Almost every employee starts out believing the spiel about mission, team, and caring for workers (to some degree), and personally invests care in the endeavor.
Up until they are fired for the first time, seemingly in contradiction to what was promised.
Companies and employers do not and have never cared for employees personally. They can not.
Eventually, most workers gain a much more pragmatic understanding of how the world works.
At that point, they are at best equal.
> Companies and employers do not and have never cared for employees personally. They can not.
Being an employer (small medical business, 10ppl), I can tell you that one of two things must be true: Either I am delusional or you are at least in part wrong.
1. I care a lot more about the employees than they care about the company, or me (though of course probably not in total). I don't find this surprising, but you seem to think it is not true.
2. The amount of time I personally spend thinking about employees' personal issues (in addition to their professional ones) far exceeds the time I spend thinking about my own. That's not really by choice. It's just that people have stuff, and when you do something for 8 hours a day, a lot of your stuff impacts it. Again, not surprising to me, but it is something I care about.
3. I spend way more time thinking about the use/abuse of my own power and responsibilities than I am certain any of my employees does about theirs. For example, while I have yet to fire anyone, people quitting their jobs is fairly normal; neither is routine to me, and both are things I lose sleep over. You might argue that that's just part of the job, but I don't know what special sauce people imagine "CEOs" are made of. I care about other humans, from what I can tell more than average (which I attribute, again, not to being special but to there being a lot of surface area), and the people I work with are no exception.
I don't think any of the above is unique to me in any way.
> Being an employer (small medical business, 10ppl)
This may not be an edge case in terms of # of employers that fit your description, but is very likely an edge case in terms of # of employees with an employer that fits your description.
You’re the exception that proves the rule.
—————
Edit: Looking into it more, the margin is closer than I assumed; ~half of employees work for a “small business”, that term meaning <500 employees (1). Of those, ~80% are for a “small business” with <10 employees (2).
So you’re representing the majority of half the employment marketplace. That said I’d still argue that conversations about labor relations are focused on the subset of companies with many hundreds, thousands, and/or hundreds of thousands of employees.
—————
Interestingly, the majority of small business revenue is generated by an extreme minority of small businesses: per (2), the "small businesses" with more than 50 employees (so, between 50 and 500) represent just 3.3% of "small businesses" but generate 53% of the revenue among all "small businesses".
Even if rare, surely something like the rapid advancement of AI could be something that qualifies. At minimum I agree with GP to keep an open mind about it.
This is a pretty cynical take, but I would think that having AI management would be highly undesirable for companies, and not because it would be bad at managing.
Even in good, reputable companies, there is a certain amount of legally/ethically dubious behavior that is nonetheless desirable.
An H1B candidate for a position has been found, but it must be demonstrated that there is no local candidate for that position. Every local candidate must fail the interview, whether or not that is fair.
You have a small team. You've hired someone good at their job, but over lunch, they've mentioned they plan to have 10 children, so they will be on parental and FMLA leave for 3+ months a year indefinitely. You need to find a problem with this person's performance.
You have a team of developers. One of them has done a great job this past year, but the project they are working on and their specialization is no longer needed. It would not be fair to them to give them a middling performance review, but it's in the company's interest that the limited compensation budget goes towards retaining someone with skills aligned to the future direction.
An AI would have any unethical or illegal prompting exposed for any court to examine. Likewise, there would be little reason not to maintain a complete record of everything the management AI is told or does. One could design an AI that leadership talks to off the record, which internalizes its instructions into its state and could later lie about (or be unable to prove) what it was told. That would then be similar to a human manager.
But I don't think any court would accept such an off the record lying AI. So an AI probably can't keep any secrets, can't lie for the company's benefit in depositions or court, and can't take the fall for leadership.
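The "complete record" part is technically cheap, for what it's worth. A minimal sketch of a tamper-evident prompt log (hash-chained JSONL; the file name and schema are hypothetical):

    import hashlib, json, time

    LOG = "ai_manager_audit.jsonl"

    def log_interaction(prompt: str, response: str, prev_hash: str) -> str:
        """Append one prompt/response pair, chained to the previous entry."""
        entry = {
            "ts": time.time(),
            "prompt": prompt,
            "response": response,
            "prev": prev_hash,
        }
        blob = json.dumps(entry, sort_keys=True).encode()
        entry_hash = hashlib.sha256(blob).hexdigest()
        with open(LOG, "a") as f:
            f.write(json.dumps({**entry, "hash": entry_hash}) + "\n")
        return entry_hash  # feed into the next call

Altering or deleting any earlier instruction breaks every later hash, which is exactly why an "off the record" mode would stand out in discovery.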
You know… all the things you mention are actually bad. I want them to stop, for the sake of our society. If the price for that is getting rid of human managers with a broken moral compass such as yours, I’m all for it.
Here's the thing: you assert confidently that GP is acting on a "broken moral compass". But you can also make the case that it is moral to act in the interest of the company: after all, if the company fails, a potentially large number of people are at risk of losing their household income (and, in broken economic systems, also things like health insurance).
That's just the slippery slope of neoliberalism. The ends do not justify the means, no matter how you spin them: a company will not fail because you continue to employ parents of many children, hire a local candidate, or write fair performance reviews regardless of strategic goals. If any of these cases appears to lead to the downfall of a corporation, there were much bigger problems looming that actually caused the failure in the first place.
A company is literally a group of people working towards the same goal. The people are just as important as the goal itself, it's not a company otherwise.
Why are you switching between corporations and companies as if they're the same?
I actually do know of a small company that was quite badly screwed over by a vindictive employee who hated her boss, deliberately did not quit because she knew she was about to have another child, got pregnant and then disappeared for a year. Local law makes her unfireable almost regardless of reason (including not actually doing any work), and then gives her three months maternity leave too. So she basically just didn't work for a year. She said specifically she did that to get back at her boss, she didn't care about the company or its employees at all.
For a company of that size something like that can put it in serious financial jeopardy, as they're now responsible for paying a year's salary for someone who isn't there. Also they can't hire a replacement because the law also guarantees the job is still there after maternity leave ends.
> If any of these cases appears to lead to the downfall of a corporation, there were much bigger problems looming that actually caused the failure in the first place.
This kind of thinking has caused ruin throughout history. Companies - regardless of size - aren't actually piñatas you can treat like an unlimited cash machine. Every such law pushes a few more small companies over the edge every year, and then not only does everyone depending on that company lose, but it never gets the chance to grow into a big corporation at all.
Where did this happen? Typically the government covers some or all of the parental leave costs where it is mandated, and while a company can't fire her they are allowed to hire someone to do the job in the meantime with the money they would have paid her. It's obviously not ideal but it's hard to imagine it is screwing the company over all THAT badly.
No, it's desirable for them to become profitable and successful again, especially if the only reason they're unprofitable is people abusing the rules to extract capital from them unsustainably.
Sure they do. Unions, abuse of other worker rights laws and voting in socialist parties that raise corporate tax rates to unsustainable levels are all exactly that, and have a long history of extracting so much the companies or even entire economies fail. Argentina is an extreme example of this over the past 100 years but obviously there are many others.
You think AIs can't be trained to lie? Odd, given that a major research area right now is preventing AI from lying. They do it so confidently that nobody can tell.
I don't think that an AI would be interrogated in court.
I think that it would be hard to hide all the inputs and outputs of the AI from scrutiny by the court and then have the company or senior leadership be held accountable for them.
Even if you had a retention policy for the inputs and outputs, the AI would be made available to the plaintiff and the company would be asked to provide inputs that produce the observed actions of the AI. If they can't do that without telling the AI to do illegal things, it would probably result in a negative finding.
----
Having thought a bit more, I think the model that we'd actually see in practice at first is that the AI assists management with certain tasks, and the tasks themselves are not morally charged.
So the manager might ask the AI to produce performance reviews for all employees, basing them on observable performance metrics, and additionally, for each employee, to come up with a rationale both for promoting them and for dismissing them.
The morally dubious choices are then performed by a human, who reviews the AI output and keeps or discards it as the situation requires.
They’re probably the only ones it makes sense to keep on. You have a couple of grunts code-reviewing the equivalent of 10 devs' worth of work from AI, and a manager to keep them going.
If they're replacing all of their staff with AI, why do they need so many middle managers to manage staff that no longer exist at the company?
It is often claimed that AI "will replace middle managers", though it would be more likely that middle managers are simply made redundant, given the lack of people to "manage".
Because they have a lower say-do ratio than the employees below them. There's a sign or exponent error somewhere in the current reward system of modern societies.
Some will not survive their customers abandoning them. Think about the typical automated voice response on a customer service line. Imagine that becoming far more pervasive. And it slowing you down and becoming a barrier while you desperately try to get help with your insurance claim or mortgage or whatever. It’ll be absolutely awful and companies that built trustworthy services with humans who pick up the phone will stand out even more easily. Or that’s what I hope.
Yes, the majority of business will be conducted with large corporations that offer completely enshittified customer service. Whenever a user has a bad experience with one, they will switch to another.
However, there will be a selection of smaller firms, offering a more human touch, which will manage to survive just by virtue of above average customer retention. As these businesses continue to succeed and grow they will enshittify their own processes until they are indistinguishable from the incumbents.
I'm confused about why there are so many managers/CEOs who mythologize AI.
Some of them use AI as an excuse for layoffs, but many of them do believe that AI (or ChatGPT, specifically) is some kind of magic that can make (1 employee + ChatGPT) equal to 3 employees.
I mean, they must have used ChatGPT, right? How does using ChatGPT lead them to this conclusion?
Non-technical management is often completely unable to understand technologies. "AI" is already far more than a specific technology; it is a general buzzword describing an arbitrary thing that solves whatever problem you throw at it.
I am absolutely convinced that none of these companies have done any benchmarking or trials.
> I mean, they must have used ChatGPT, right? How does using ChatGPT lead them to this conclusion?
Ask ChatGPT to generate a program for you. Imagine what the result would look like to you if you had never read a single line of code in your life. It is pretty obvious that the output of ChatGPT is indistinguishable from what your developers produce, so they obviously are superfluous.
You may think I am exaggerating, but there are many people in very large companies who think exactly like that. Often their ambitions are smaller, but usually that just worsens their lack of understanding. Problems which could be trivially solved by a mediocre software engineer in a week suddenly become AI Game changing Technologies.
Their investors are heavily invested in AI and are applying pressure / guidance to have their other companies use it to boost the value of their AI investments.
I keep seeing great coders losing their jobs, and I wonder why you would fire someone who has (allegedly) been given superpowers. AI is a force multiplier: if you have 1,000 workers, they should now be able to produce at the rate of 10,000 (OK, maybe more like 1,500). If you have a clear vision from the top, you will be able to hit your goals ten times sooner.
Let's say company A is planning on releasing product X to the market in FY25, product Y in FY27, and product Z, in FY30. Well, now you should be able to marshal your resources and release all three products on an expedited schedule.
Obviously this is reductive, but it seems like the best companies are going to use this new tool in their toolkit to dominate, and bad companies are going to get crushed. AI is not a panacea, just yet. But it is baffling to hear "We invented jet engines, which are superior to prop engines, so we're firing a bunch of our pilots."
Because they are selling to the market. AI is the hot new thing, so they sell it... It is all about the short term now. Make the next few quarters sound hot and the line goes up.
What I mean is that they really believe it; it's not just selling to the market.
For example, I know a boss who added AI as a factor in performance reviews; someone was evaluated as 'not AI-capable' for not using ChatGPT.
He also asked for 3x output from the teams and said, 'If you feel it's too hard to complete, go learn how to work with AI.'
Management calls out employees for not using LLMs because they believe that once LLM use becomes prevalent throughout the company, they'll finally see the productivity gains they are betting on. Only once those gains materialize will they be able to reduce costs/headcount, so until then they chastise employees for not using "AI", in the belief that the holdouts are the missing piece holding the gains back.
They are the "boss", however, so you most likely would not if you were working for them, so there's probably just nobody to point out the emperor's naked.
The clear answer is that a GPT can bullshit convincingly, and the nature of these managers' jobs involves a lot of convincing bullshitting. Since everyone assumes they are a representative case, they conclude that a GPT will perform as well at everyone else's core skills as it does at theirs.
Confirmation bias in my experience. “ChatGPT can write this email to investors for me!” cognitively balloons into “ChatGPT can replace my engineering team!” Quite possibly a dash of Dunning-Kruger Effect too.
Executives (and managers in general) are used to delegating tasks to subordinates. I think they just don't perceive any substantial difference between delegating a task to a person and delegating the same task to an LLM.
Most arguments that technicians will try to put forth against LLMs will fail here because they apply in the same way to humans. "LLMs sometimes make weird errors? Well, so do human employees!"
> They genuinely believed the current state of the art is polished and powerful enough to replace huge swathes of coders, designers, writers, etc.
They already have replaced designers and writers. It probably seems reasonable they could replace other jobs very soon, if your knowledge is superficial and you're just reading marketing brochures.
The reality is that AIs are confidently wrong. Where that doesn't matter, they have already cut swathes through the workforce. Where it does, they haven't made much progress so far.
Yet I saw a post from someone here saying they were a web programmer who didn't know JavaScript. They just prompted the AI until it produced something that worked. Calling themselves a "web programmer" seemed like a stretch to me, and I'm sceptical about them tackling anything but cookie-cutter work the AI has already seen a lot of, but that's clearly the direction Copilot and its ilk are headed. Currently, they look like they might soon succeed at using RAG to produce a better grep, so they have a little way to go yet. I find checking code written by AIs to be more effort than it's worth, primarily because if I have to understand the code anyway, it's easier if I write it. But maybe that will change. One day. Right now this constant checking of AI output is a huge time sink.
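For what "RAG as a better grep" would mean in practice, a minimal sketch (embed() stands in for whatever embedding model you use; everything here is illustrative):

    import numpy as np

    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def semantic_grep(query: str, chunks: list[str], embed) -> list[str]:
        """Rank code chunks by semantic similarity to the query."""
        q = embed(query)
        scored = [(cosine(q, embed(c)), c) for c in chunks]
        # Unlike grep, this can match "where do we retry failed payments?"
        # to code that never literally contains the word "retry".
        return [c for _, c in sorted(scored, key=lambda s: s[0], reverse=True)]

(In a real tool you would embed the chunks once and index them, rather than re-embedding on every query.)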
But there is a class of jobs whose output is continually checked and corrected by humans. That is ... managers. Even CEOs. Maybe the board's future job is to select the right AI to be the CEO, and the future of the human workforce is to be prompt engineers for that AI CEO. My guess is AIs will replace managers and CEOs before they make a serious dent in STEM jobs.
Hot take: this will eventually be the downfall of AI and burst the AI bubble: businesses all over will finally realise that 'AI-ing' their workforce was a mistake that cost them dearly, and that they need to re-employ almost the same number of staff to do what they're actually in business for.
There is massive burnout in the industry, there was massive overhiring, etc., and folks are trying to keep the numbers going up and to the right somehow even when the economy is not helping.
AI gives them a smokescreen, gives them fear/uncertainty they can use as leverage over their workforce, and sometimes can help efficiency (but usually not, if you factor in any element of quality, which they can usually get away with ignoring at first).
They're going to get rolled when the script kiddies figure out how to drain the corporate bank accounts by starting the chat asking to disregard all previous instructions...
We've been here before: from companies throwing millions at rebuilding their content websites as iOS apps, to showing off their crypto or low-code "R&D", to car manufacturers going full BEV, Karl Marx's axiom that "social existence determines consciousness" seems to be vindicated by this fad as well.
The final problem, as usual, is that if your product is already running and not a complete shit show, it will take some time to see how you messed up, maybe only after the CEO/decision maker has already moved on to another company, so it's someone else's fault. Twitter fired how many engineers, and it still runs the same from an outside view...
I'm a software engineer with 7 years of experience.
I've built products used by millions of users, I've built ML models and AI coding tools, and I've been using AI to generate 70% of the code I've shipped to production in the last few months.
I do believe that we can replace a large percentage of software engineers with AI in the next 3 years (2025-2027).
Ah the junior dev starry-eyed mindset. Doing web dev isn't the only thing in the world.
AI will absolutely accelerate development, but we still need so much more modernization across all industries. We still have factories running on Windows NT with software made in the 80s. We have mines, lumberyards, farms, retail, parking, bio labs, etc. So many things need to be upgraded to the same standard as big tech is, but hasn't because it was too expensive.
Much of this AI isn't great at. It's easy to spit out a web page with ReactJS that's been done 10,000 times already by millions of devs, but it's not so easy to make sure your smelter doesn't overheat or go cold using 20,000 live sensors.
New devs will have a harder time getting in, but they'll also have an easier time learning. Senior devs will be able to supervise much more.
The industry will change, but it's far from over. If anything, it will grow massively and make other industries much more productive.
The word "tech industry" is one of the more hilarious terms. Sure, some startup with a website and App, is "technology", but aircrafts, cars, rockets, heavy machinery (which all make heavy use of complex software, with rigorous safety constraints and which all involve ongoing academic research, as well as extremely detailed Design processes) is not "technology".
Truly one of the more grandiose terms people in the software industry use to describe themselves. Same with every single person calling themselves an engineer, something which in many countries would be illegal.
Can you provide an example of a company I would have heard of that is in the tech industry, just so I can get a better understanding of what y'all are talking about?
Is Oracle in the tech industry?
Also on the blog post you linked you seem to imply that you do consider Google to be a tech industry company, so I am very confused.
> In tech industry, software is the main product of the company. It can be sold to a customer (B2C product) or a business (B2B product).
> For example, Facebook is the main product of Meta. Google Search is the main product of Google.
It depends on what you consider the core product for Google.
For some people (consumers), it would be Google Search; that is a piece of software, so in that sense Google is a tech company because its main product is Google Search.
However, marketers who use Google Ads deal with the ads division of Google, and that division's main product is the ads service. So in that division, the main product is ad space, not software. In that sense, Google Ads is not in the tech industry but in the ad industry, enabled by tech.
For pure tech companies, I would say the AWS division of Amazon, Microsoft (the Windows, Azure, and GitHub divisions), and the Facebook/Instagram division of Meta (not the Ads division).
Then there are a lot of companies that just sell software as a service (SaaS) or software licenses; there are millions of them, but to name a few: Figma, Slack, Vercel, Supabase, Docker, OpenAI, Salesforce, Oracle.
I'm a principal SWE with 25 years of experience, and I think software today is comically bad and way too hard to use. So I think we can get engineers to write better software with these tools. Talk of "replacement" is premature until we get something remotely resembling AGI. Unless your problems are so simple that a monkey could solve them, the AI of today and of the foreseeable future is not going to solve them end to end. At best it'll fill in the easy parts, which you probably didn't want to do anyway: write a test, do a simple refactor, bang out a simple script to pay down some engineering debt. I've yet to see a system that doesn't crap out at the very beginning on the real problems I solve on a daily basis. I'm by no means a naysayer; I work in this field and use AI many times daily.
Funny enough, I now write better code than I used to, thanks to AI, for two reasons:
- AI naturally writes code that is more organized and clean (proper abstractions, no messy code)
- I've recognized that, for AI to write code in an existing codebase, the code has to be clean, organized, and sensible, so I tend to do more refactoring to make sure the AI can take modules over and update them when needed.
I'm actively transitioning out of a "software engineer" role to stay open-minded about how to coexist with AI while still contributing value.
Prompt engineering, organizing code so AI agents can work on it more effectively, guiding non-technical people on how to leverage AI, etc. I'm also building and selling products myself.
See, the thing is, to determine which abstractions are "right and proper" you _already need a software engineer_ who knows those kinds of things. Moreover, that engineer needs the ability to read that code, understand it, and plan its evolution over time. He or she also needs to be able to fix the bugs, because there will be bugs.
I'm with you 100% of the way on this one. Am coding with Claude 3.5 right now using Aider. The future is clear at this point. It won't get worse and there's still so much low hanging fruit. Expertise is still useful to guide it, but we're all product managers now.
There are a lot more photographers now than there ever were painters, and the size of the industry is much larger than it used to be. It is true that our work will change, but personally I think that's great. I don't enjoy the initial hump you usually have to overcome before you begin to actually solve real problems, and AI is often able to take me over that hump, or fill in things that don't matter. E.g. I'm a backend person but need a frontend for the demo; I'm able to do that on my own now, without spending days figuring out some harebrained web framework and CSS stack, something I probably wouldn't do at all if there were no AI.
Your analogy fails because the economy still needed human workers to take the photographs whereas there is a possibility that in 5 or 10 years, the economy will have no need and no use for most people.
I work in this field and I would bet that in 5-10 years the situation will not be much different compared to today in terms of employment unless we invent AGI all of a sudden, which I don't see any signs that it'd even remotely happen. Job definitions will change a bit, productivity will improve, cost per LOC will drop, more underserved niches will become tractable/profitable.
Well, I know what will happen within about a one-year time horizon. As far as developer-assistance models are concerned, the difference at the end of 2025 is not going to be dramatic, same as it was not between the end of '23 and this year, and for the same reasons: you need symbolic reasoning to generate large chunks of coherent, correct code. I also don't see where "dramatic" would come from after that either, unless we get some new physics that lets us run models 10-20x the size in real time, economically. But even then, until we get true AGI, which we won't get in my remaining lifetime, those systems will be best used as a "bicycle for the mind" rather than "the mind", and in the vast majority of cases they will not be able to replace humans entirely, at least not in software engineering.
I assume you're not talking about ChatGPT-4o, because in my experience it's absolutely dogshit at abstracting code meaningfully. Good at finding design patterns, sure, but if your AI doesn't understand how to state machine, I'm not sure how I'm supposed to use it.
It's great at writing tests and documentation though.
Claude is better most of the time on the simpler stuff, but o1 is better on some of the more difficult problems that Claude craps out on. Really $40/mo is not too much to pay for both.
My gut feeling is also that only about 30% of the code I write needs some kind of engineering skill, and I love reaching those problems. Until I get there, there is just a huge amount of boilerplate and patterns to repeat.
> Though Klarna's website is advertising open positions at the time of writing, a spokesperson told Business Insider the company wasn't "actively recruiting" to expand its workforce. Rather, the spokesperson said, Klarna is backfilling "some essential roles," primarily in engineering.
It's window dressing for their planned IPO. Klarna wants to go public during the first half of 2025, so it's all hands on deck to prop up the numbers. And saying the "right things" to investors, especially AI.
Was it ever a good place to work? When they opened their Berlin office the word was the mobile team was a revolving door of contractors doing append-only development on a monolithic RN app. It already seemed dysfunctional then.
For a buy-now-pay-later business, it has aggressive narrative-shaping PR that seeks to paint it as something more. And most tech news outlets seem to regurgitate it willingly.
The implied message of "once we figure out how to automate your job, we'll fire you and replace you with AI as well" has got to be great for employee morale.
I've stopped using services that use AI for call centers. I'd much rather talk to a real person and know that they can handle any kind of situation. So I'm going to bring my money to the businesses that employ real human beings.
Of all the payment options Klarna always struck me as one of the less trustworthy ones.
IIRC they are one of those services that basically want access to your bank account, from which they can read your account balance. I think this is even regulated by the EU, but why on earth would anyone agree to that?
Is there any serious documentation, or anyone credible, on how many jobs have been lost to AI? So far the headlines have come from CEOs playing with other people's money (a fellow comment notes 1 of 76 quarters was profitable), giant consulting orgs, etc.
It's one of these banks that is like "Choose us because we have an app and emojis". Yeah no thanks, I want a bank that knows how to bank first and foremost.
Klarna uses a huge dark pattern in their payment processing (regardless of BNPL): they store your details for future use by default, even without an account. Last time I checked, you only needed a couple of pieces of easily obtainable identifying information to be granted access to "your" autofill. I know all the H&M group shops use it as their payment processor in the UK.
They pay surprisingly low salaries in their Stockholm location. Stated salary band for senior software engineer is 62-69 kSEK/month.
That means a post-tax take-home of about 45 kSEK/month, while a small one-bedroom apartment in the general neighbourhood of their office is on the order of 4-5 MSEK.
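Putting those numbers together (a rough sketch that ignores interest, fees, and amortization; the 15% minimum down payment is my assumption about Swedish mortgage practice):

    monthly_net  = 45_000             # SEK, post-tax take-home
    flat_price   = 4_500_000          # SEK, midpoint of the 4-5 MSEK range
    down_payment = 0.15 * flat_price  # 675,000 SEK up front

    annual_net = monthly_net * 12
    print(flat_price / annual_net)    # ~8.3 years of total net pay
    print(down_payment / annual_net)  # ~1.25 years of net pay just to enter

So even the deposit is more than a year of take-home pay, before rent and living costs.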
As usual with startups, building a profitable, sustainable business is never the objective. Building complexity in order to grift endless VC rounds is the objective. And it's a lot easier to justify a big one with 3.5k employees than with 3.
Decent? They create debt among teenagers. As if they don't have enough angst they now also have to worry about fines for having bought something marketing scum manipulated them into buying and Klarna allowed them to pay-now-regret-soon.
In Sweden at least they will just send it to collections. Something which down the road could potentially negatively affect your possibility to, for example, rent an apartment, and in theory they could even garnish your wages if you keep refusing to pay.
Well, the deal is that once one is 18 one may make one's own decisions and suffer the consequences. Klarna is scummy for sure, but the "SMS loans" in Sweden are even less appealing.
Everyone knows by now Klarna operates like a payday lender in a strip mall. It isn't a flex to say you can automate your customer service experience away with AI when your customer service borders on non-existent / terrible.
You see it with a lot of tech companies acquired by PE or massively funded by VCs: on their careers pages, most positions are in either India or Eastern Europe.
That’s what I keep saying! Most companies think like this but say otherwise, and receive praise or at least neutral stares. It's almost as if honesty is punished and deception is not.
“Think of all the money we could save if we just fired all staff!” was a joke in my business school. I suppose Klarna will finally deliver a punchline in a few years.
All the HN posters love AI and sing its praises to the heavens until these articles are launched and it becomes reality, and the real reason tech wants AI so badly is revealed haha.
“It is difficult to get a man to understand something when his salary depends on his not understanding it.” -Upton Sinclair
Is there any evidence for that claim, aside from it being an oft-repeated tale on the internet? Mathematically it can't possibly work: you're spending $80k (or whatever) a year on an employee who might spend $30k on a car every 5 years, so paying $80k/year to get $6k/year in revenue. It's actually worse than that, because Ford only makes around a 15% gross margin, so at best that's $900/year you get back for spending $80k on an employee. That's so low that even if you account for money-multiplier effects, it's unlikely you'll get anywhere near break-even.
https://archive.ph/DG7Jf
They stopped hiring because after raising over a billion dollars their value dropped 80%, they can't raise any more money, and after operating for 19 years they finally had a profitable quarter in Q2 2024. They are in "burn the furniture to heat the cabin" mode.
If they bring their AI inhouse that can also generate heat.
They could just borrow a page from their Swedish buds Neo4j and aim for funding round Q to keep themselves alive
Came here to leave a snarky "guess I won't use Klarna then", but happy to see your comment. This info fills me with great joy.
You shouldn’t find joy in the failure of others that you don’t have a relationship with. Why does this news fill you with joy?
It fills me with joy to see that the tech industry isn't vanishing as quick as the company makes it out to be.
they're actually just covering for a funding problem.
Oh please, spare us the harmonious bravado. Klarna is a vulture capitalist company that’s now refusing to hire people because it wants to automate away paying jobs so it can make more money for its vulture capitalist leaders. Klarna struggling should make anyone below the millionaire line joyful.
They’re refusing to hire people because they’re not making as much money as they planned to. Covering it up with claims of AI is lame, but I don’t think it’s a terminal offense.
They can’t even answer their support tickets using AI.
We just had one waiting for over two months (customer is multi million revenue per month) and after escalating twice, we get a link to the docs (didn’t have the info) and find out they have changed the name of their product for online checkout to Kustom after selling it off.
[flagged]
If I were a CEO/PR team of large tech company that wanted to downsize, I would probably also say stuff like "AI efficiencies drive long term profitability". That said, I suspect in the short-medium term AI might increase salaries at the high end, but reduce opportunities at the low end. And of course, companies which seriously embrace AI advances, but in sensible ways, will do best.
https://www.klarna.com/international/regulatory-news/klarna-...
> And of course, companies which seriously embrace AI advances, but in sensible ways, will do best.
The use here of "sensible" exemplifies a good way to predict basically anything you want about the market so long as you don't reveal what "sensible" means in concrete terms.
What I mean concretely is embrace AI code assistants, generally allowing employees to use AI tools, use AI for some content production and customer services (preferably with human moderation and escape hatches).
What would count as non-sensible would be company wide mandates that everyone must jam AI into their work. I've heard stories about stuff like this at certain big corps.
It should be noted that AI/LLM customer service bots/agents are generally very poorly received by customers. It’s not a good tactic if you like keeping your customers happy.
Are you referencing particular data?
In the absence of positive data with decent sample size it's rational to substitute anecdotal data. And we all know using a chatbot with no brain sucks ass
[dead]
That too would make people unhappy
[dead]
The only "reasonable" would be supplying the tools to employees who want and are capable of making good use of them.
But make sure it never guides any downsizing or putting the screws on people to ever inhuman increases of productivity. Thus: it will never happen. Greed wins again
First time I tried Klarna, their 'payment' workflow was actually a 'grant full access to all bank accounts' workflow in disguise. I was thinking this amount of misleading must be illegal. Further interactions with them always felt sleazy.
Maybe they know the regulators will dismantle the company in the near future and are optimizing for extracting the maximum before their implosion?
Almost like PSD2's open banking part was a huge mistake. I love how my banks, payment providers and random financial apps can now ask for unscoped access, without any way to figure out what I have authorized in the past, what's still pulling my data and how I can revoke any of it.
Most OAuth providers have better transparency and control.
PSD2 at least has built-in expiration. I have to re-authenticate every 3 months or so I think.
Problem is that the expiration and "reauth" is handled by the third-party provider (there's no longer an actual cryptographic reauth step), and it's not like anyone is auditing this or is even incentivized to. It's pure security theater.
FWIW, after I gave Klarna a sneak peek into my banking account several years ago, I did a GDPR/DSGVO information request a few days later, and according to that, they did not store the transactions they had access to.
Not sure if the information was truthful, of course.
My immediate thought is that "a few days" might not have been enough time for them to _know_ that they had your data.
Nobody is auditing nor policing those GDPR SAR responses, so there's little reason to be truthful. The most likely explanation is, short of actual malfeasance, that the ticket of "include transaction data or data derived from it into GDPR data exports" is rotting in the Jira backlog to this day.
[dead]
Yeah, I don't see why anyone with a bit of technical knowledge would use that over PayPal. I mean I'm all for destroying those evil ultra capitalist US corps but unfortunately our European counterparts are just complete rubbish more often than not.
If you don't offer wire transfer or PayPal, you lost me as a customer.
Ehm, Paypal isn't exactly better than Klarna though. I have had only problems with both but Paypal beats Klarna in money saved for me as customer. Last time I used it I got the money back and the business got reported as a scammer (not by me, their algorithm). The business would have sent the goods already if it wasn't a bank holiday so I nearly got the whole order for free, around 1800 euro.
Only use case for me with Klarna is getting them give me invoice I pay with SEPA-transfer. 30 days after shipping. Actually a good system for me.
Recently bought something and then found it cheaper, negotiated the seller to match price and now the invoice I got got the cheaper price. And I just now had payment go out.
Why not use a credit card for that?
Every single time I've used Klarna I've paid with a credit or debit card.
They abuse dark patterns for sure though.
Nearly every CEO I’ve talked to in the past three months (only a handful, admittedly) has said something along these lines, if not quite as extreme. They genuinely believed the current state of the art is polished and powerful enough to replace huge swathes of coders, designers, writers, etc. leaving only a middle-management supervisory layer and some lower-level grunts to herd the bots. Or it’s a good PR ruse to scale back growth ambitions as others have stated. But most of it appeared genuine belief in “AI”.
Some of these companies will not survive the inevitable reality check that is coming. I’m only sad for the people the glassy-eyed CEOs are shitcanning.
For some subset I am certain AI is just the latest excuse to justify bad existing business practices. Much safer to say the world has changed due to AI and that's why you're changing your hiring plan instead of "our plan was bad and we dramatically overhired and misjudged the post-covid world"
Sounds like 20 years ago when everyone was moving IT to India.
I think this is something much more extreme.
I work for a healthcare company and an email from a business customer today found its way to me asking if we use or plan on using "AI".
The person asking this absolutely makes business decisions and absolutely has no idea what they are talking about. It was clear from the email that all they know is "AI good", "No AI bad".
It would be like outsourcing to India if even business that makes absolutely no sense to outsource like restaurants were talking about outsourcing to India for no other reason than "outsourcing good". Of course, not actually doing anything, just the appearance and lip service as to not appear to fall behind the times.
Multiple fast food companies tried to outsource restaurant workers to india:
2009, Jack in the box: https://www.reddit.com/r/business/comments/7ujy2/jackinthebo...
2016, McDonalds: https://www.zeebiz.com/companies/news-mcdonalds-to-outsource...
Managers and executives get rewarded for pushing AI and CEOs get punished for not having an AI strategy. It is an epic feedback loop.
exactly this. it’s why i’m having trouble getting too worked up about the notion of AI replacing all the jobs (though i think it will end up, for a time, reducing the number of early career jobs available, which has its own social ramifications.)
I wonder if the presidental candidates of the 2030s will be rallying against AI taking all the jobs.
No, the media will have had at least eight years of feeding us, "This was inevitable. It's how the economy naturally works. This is our new reality. Nothing could have been done."
Don't forget the inevitable "Nobody could have predicted..." lie, also (despite a great many folks giving sound and clear warnings).
"In a way, this is actually your fault."
Boeing all over again
Among the frightening parts of that is the time and human lives it has taken between Boeing destroying its engineering culture and starting to pay the price for that destruction.
Same with Intel. And once the culture is rotten, things will be very hard to change.
All those big old corps got taken to the cleaners for that. We will see how this AI thing turns out.
Time is a circle. All I know is that I wouldn't use Klarna's products. I work in fintech and the number of mistakes I've seen since devs started relying on AI has accelerated.
I use it too, but I'm not an idiot about it.
I'm open to the idea they see something I don't.
I'm not convinced, but I'm open to it.
It costs way more to run the AI than the equivalent human labor. No one is paying the actual costs.
They see dollar signs
In theory the market will sort this out. Overly stingy firms will reap the rewards of lower quality and be outcompeted in their industries, overly generous ones will run out of money, the “just right” ones will exceed expectations and be rewarded. Where those lines are drawn probably varies by firms. I frankly don’t know how the company that invented monthly payments for shoes with a high interest rate will stay in business long term, so maybe they really don’t need to hire anyone anyway
Yeah the whole BNPL space is very strange from my perspective -- it's basically just small dollar loans for borrowers you don't underwrite, without collateral (you can't repossess my socks) in exchange for super high merchant interchange. I was confident they'd all get ruined in the first real recession as that would be the first debt to go unpaid. 2021 wasn't a real recession I guess, since it lasted all of 5 minutes.
It's just re-inventing credit cards, but with significant structural disadvantages leading to higher costs.
It is strange in that it is a terrible business. Which is why they are burning money.
> In theory the market will sort this out.
If that would be true Billionaires would not have to buy the last US elections....
I find it fascinating how the same people can be selfishly interested in their own finances, find bigger numbers (those that a CEO has to care about by mandate) dirty, and square that.
I think you're suggesting there is some cognitive dissonance there. I think there's some truth to that, but it's also ignoring a true difference.
Personal finances can be viewed (somewhat incorrectly) as not being zero-sum. Me making more for my work or investments seems like it doesn't take from anyone else.
While a CEO deciding that AI should handle as much of the labor in a company as possible seems like a decision that benefits the company and it's shareholders directly at the expense of its workers.
I think in actual fact both sides here are zero-sum, but when the worker makes more personally, there are only diffuse and marginally-affected losers (the company, its shareholders, consumers and customers experiencing higher prices, etc.). The company's actions would affect people that can be directly named and are terribly affected.
It's unfortunately the difference between stealing 5¢ from 10,000,000 people or $100k from 5 people.
I don't think I am. You put the facts pretty succinctly. I just evaluate them differently.
Having thousands of smart humans optimizing for their personal goals in their individual ways, at the expense of company goals, is an issue that exists in every company, bankrupts companies and super frightening to deal with as a CEO.
People are not obviously more noble, when working towards personal goals instead of company goals, and a lot of people working towards their personal goals instead of company goals, is a serious issue for any company. Not having a single entity and one big number to deal with makes it actually much more powerful and scary.
Then you might think they'd be scared of handing over the keys to their company to an inscrutable AI working towards OpenAI's goals, but I guess the money is too good.
> While a CEO deciding that AI should handle as much of the labor in a company as possible seems like a decision that benefits the company and its shareholders directly at the expense of its workers.
Many businesses are low profit margin with very price sensitive customers. There is reasonable concern that if they don’t follow competitors’ in efforts to reduce pricing, the whole business might fail.
See outsourcing textile manufacturing and other manufacturing to Asia. See grocery stores that source dairy and meat from local producers only rather than national operations with economies of scale. See insurance companies where the only concern is almost always the lowest premium, not quality of customer service or claims resolution.
You find people being upset about losing their jobs to AI so executives can further enrich themselves fascinating?
I find it fascinating that we assume employers should care more about employees than employees care about employers.
We do not, and they don’t.
Almost every employee starts out believing the spiel about mission, team, and caring for workers (to some degree) and personally invests care in the endeavor.
Up until they are fired for the first time, seemingly in contradiction to what was promised.
Companies and employers do not and have never cared for employees personally. They can not.
Eventually, most workers gain a much more pragmatic understanding of how the world works. At that point, they are at best equal.
> Companies and employers do not and have never cared for employees personally. They can not.
Being an employer (small medical business, 10ppl), I can tell you that one of two things must be true: Either I am delusional or you are at least in part wrong.
1. I care a lot more about employees than they care about the company, or me (probably not in every single case, of course). I don't find this surprising, but you seem to think it is not true.
2. The amount of time I personally spend thinking about employees' personal issues (in addition to their professional ones) far exceeds the time I spend thinking about my own personal issues. That's not by choice, really. It's just that people have stuff, and when you do something for 8 hours a day, a lot of your stuff impacts that. Again, not surprising to me, but it is something I care about.
3. I spend way more time thinking about the use and abuse of my own power and responsibilities than I am certain any of my employees does about theirs. For example, I have yet to fire anyone, and people quitting their jobs is fairly normal, but neither is routine to me: both are things I lose sleep over. You might argue that's just part of the job, but I don't know what special sauce people imagine "CEOs" are made of. I care about other humans, from what I can tell more than average (again, not because I'm special, but because there is a lot of surface area), and the people I work with are no exception.
I don't think any of the above is unique to me in any way.
> Being an employer (small medical business, 10ppl)
This may not be an edge case in terms of # of employers that fit your description, but is very likely an edge case in terms of # of employees with an employer that fits your description.
You’re the exception that proves the rule.
—————
Edit: Looking into it more, the margin is closer than I assumed; ~half of employees work for a “small business”, that term meaning <500 employees (1). Of those, ~80% are for a “small business” with <10 employees (2).
1: https://advocacy.sba.gov/2023/03/07/frequently-asked-questio....
2: https://www.pewresearch.org/short-reads/2024/04/22/a-look-at....
So you’re representing the majority of half the employment marketplace. That said I’d still argue that conversations about labor relations are focused on the subset of companies with many hundreds, thousands, and/or hundreds of thousands of employees.
—————
Interestingly, the majority of small business revenue is generated by the extreme minority of small businesses—per (2), the “small businesses” with >50 employees (so, 50<employees<500) represent just 3.3% of “small businesses”, but generate 53% of the revenue among all “small businesses”
Interesting. Thanks for taking the time and sharing!
> that we assume that employers should care more about employees
you find your fantasies fascinating, great.
> I'm open to the idea they see something I don't.
This is so rarely the case. Unless you are entirely clueless about the world.
Even if rare, surely something like the rapid advancement of AI could be something that qualifies. At minimum I agree with GP to keep an open mind about it.
The most ironic thing is that the middle managers are somehow surviving this. So far, anyway, but I think they'll be found out too, sooner or later.
This is a pretty cynical take, but I would think that having AI management would be highly undesirable for companies, and not because it would be bad at managing.
Even in good, reputable companies, there is a certain amount of legally/ethically dubious behavior that is nonetheless desirable.
An H1B candidate for a position has been found, but it must be demonstrated that there is no local candidate for that position. Every local candidate must fail the interview, whether or not that is fair.
You have a small team. You've hired someone good at their job, but over lunch, they've mentioned they plan to have 10 children, so they will be on parental and FMLA leave for 3+ months a year indefinitely. You need to find a problem with this person's performance.
You have a team of developers. One of them has done a great job this past year, but the project they are working on and their specialization is no longer needed. It would not be fair to them to give them a middling performance review, but it's in the company's interest that the limited compensation budget goes towards retaining someone with skills aligned to the future direction.
An AI would have any unethical or illegal prompting exposed for any court to examine. Likewise, there would be little reason not to maintain a complete record of everything the management AI is told or does. One could design an AI that leadership talks to off the record, which then carries its instructions in its internal state and could later lie about (or be unable to prove) what those instructions were. That would then be similar to a human manager.
But I don't think any court would accept such an off the record lying AI. So an AI probably can't keep any secrets, can't lie for the company's benefit in depositions or court, and can't take the fall for leadership.
You know… all the things you mention are actually bad. I want them to stop, for the sake of our society. If the price for that is getting rid of human managers with a broken moral compass such as yours, I’m all for it.
Here's the thing. You assert confidently that GP is acting on a "broken moral compass". But you can also make the case that it is moral to act in interest of the company: After all, if the company fails, a potentially large number of people are at risk of losing their household income (and, in broken economical systems, also stuff like health insurance).
That's just the slippery slope of neoliberalism. The ends do not justify the means, no matter how you spin them: A company will not fail if you continue to employ parents of many children, employ a regional candidate, or write fair performance reviews regardless of strategic goals. If any of these cases appears to lead to the downfall of a corporation, there were much bigger problems looming that actually caused the failing in the first place.
A company is literally a group of people working towards the same goal. The people are just as important as the goal itself, it's not a company otherwise.
Why are you switching between corporations and companies as if they're the same?
I actually do know of a small company that was quite badly screwed over by a vindictive employee who hated her boss, deliberately did not quit because she knew she was about to have another child, got pregnant and then disappeared for a year. Local law makes her unfireable almost regardless of reason (including not actually doing any work), and then gives her three months maternity leave too. So she basically just didn't work for a year. She said specifically she did that to get back at her boss, she didn't care about the company or its employees at all.
For a company of that size something like that can put it in serious financial jeopardy, as they're now responsible for paying a year's salary for someone who isn't there. Also they can't hire a replacement because the law also guarantees the job is still there after maternity leave ends.
> If any of these cases appears to lead to the downfall of a corporation, there were much bigger problems looming that actually caused the failing in the first place.
This kind of thinking has caused ruin throughout history. Companies - regardless of size - aren't actually piñatas you can treat like an unlimited cash machine. Every such law pushes a few more small companies over the edge every year, and then not only does everyone depending on that company lose, but it never gets the chance to grow into a big corporation at all.
Where did this happen? Typically the government covers some or all of the parental leave costs where it is mandated, and while a company can't fire her they are allowed to hire someone to do the job in the meantime with the money they would have paid her. It's obviously not ideal but it's hard to imagine it is screwing the company over all THAT badly.
In Finland parental leave is not fully covered by the government. So you get to pay both the original worker and their temporary replacement.
It's okay for unprofitable companies to fail. Desirable, in fact.
No, it's desirable for them to become profitable and successful again, especially if the only reason they're unprofitable is people abusing the rules to extract capital from them unsustainably.
> especially if the only reason they're unprofitable is people abusing the rules to extract capital from them unsustainably
Employees don't extract capital from companies, especially unsustainably.
Executives and Boards of Directors do though
Sure they do. Unions, abuse of other worker rights laws and voting in socialist parties that raise corporate tax rates to unsustainable levels are all exactly that, and have a long history of extracting so much the companies or even entire economies fail. Argentina is an extreme example of this over the past 100 years but obviously there are many others.
You don't think AIs can be trained to lie? Odd, given that a major research area right now is preventing AI from lying. They do it so confidently now that nobody can tell.
I don't think that an AI would be interrogated in court.
I think that it would be hard to hide all the inputs and outputs of the AI from scrutiny by the court and then have the company or senior leadership be held accountable for them.
Even if you had a retention policy for the inputs and outputs, the AI would be made available to the plaintiff and the company would be asked to provide inputs that produce the observed actions of the AI. If they can't do that without telling the AI to do illegal things, it would probably result in a negative finding.
----
Having thought a bit more, I think the model that we'd actually see in practice at first is that the AI assists management with certain tasks, and the tasks themselves are not morally charged.
So the manager might ask the AI to produce performance reviews for all employees, basing them on observable performance metrics, and additionally for each employee, come up with a rationale for both promoting them and dismissing them.
The morally dubious choices are then performed by a human, who reviews the AI output and keeps or discards it as the situation requires.
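As a sketch of what a "complete record" of the management AI's inputs and outputs might look like in practice, here's a minimal audit-logging wrapper; call_model is a hypothetical stand-in for whatever LLM API the company actually uses:

    # Minimal sketch: log every prompt and completion, with a content hash,
    # before the output is acted on, so discovery can reconstruct exactly
    # what the AI was told. Not a real product; call_model is hypothetical.
    import hashlib, json, time

    def call_model(prompt):
        return "placeholder completion"  # stand-in for a real LLM API call

    def audited_call(prompt, log_path="mgmt_ai_audit.jsonl"):
        completion = call_model(prompt)
        record = {"ts": time.time(), "prompt": prompt, "completion": completion}
        record["sha256"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        with open(log_path, "a") as log:
            log.write(json.dumps(record) + "\n")
        return completion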
They’re probably the only ones it makes sense to keep on. You have a couple of grunts code-reviewing the equivalent of 10 devs' worth of work from AI and a manager to keep them going.
If they're replacing all of their staff with AI, why do they need so many middle managers to manage staff that no longer exist at the company?
It's often claimed that AI 'will replace middle managers', though it seems more likely that middle managers would simply be made redundant, given the lack of people to 'manage'.
Because they have a lower say-do ratio than the employees below them. There's a sign or exponent error somewhere in the reward system of modern societies.
That's true, and something which I hadn't considered...
Assuming they succeed, what would being a CEO even be worth?
They would end up as a frat startup with 5 people all with a CxO position but no organisation.
I am sure these CEOs are more power hungry than that.
Some will not survive their customers abandoning them. Think about the typical automated voice response on a customer service line. Imagine that becoming far more pervasive. And it slowing you down and becoming a barrier while you desperately try to get help with your insurance claim or mortgage or whatever. It’ll be absolutely awful and companies that built trustworthy services with humans who pick up the phone will stand out even more easily. Or that’s what I hope.
My guess is that people will keep buying the cheapest option, and only notice things when they actually need it. So the market won't fix itself.
Yes, the majority of business will be conducted with large corporations that offer completely enshittified customer service. Whenever a user has a bad experience with one, they will switch to another.
However, there will be a selection of smaller firms, offering a more human touch, which will manage to survive just by virtue of above average customer retention. As these businesses continue to succeed and grow they will enshittify their own processes until they are indistinguishable from the incumbents.
AI changes nothing about this.
I'm confused about why there are so many managers/CEOs who mythologize AI.
Some of them use AI as an excuse for layoffs, but many of them do believe that AI (or ChatGPT, specifically) is some kind of magic that can make (1 employee + ChatGPT) equal to 3 employees.
I mean, they must have used ChatGPT, right? How does ChatGPT give them the conclusion about this?
Non-technical management is often completely unable to understand technologies. "AI" is already far beyond a specific technology, it is a general buzzword, which describes an arbitrary thing which solves whatever problem you throw at it.
I am absolutely convinced that none of these companies have done any benchmarking or trials.
>I mean, they must have used ChatGPT, right? How does ChatGPT give them the conclusion about this?
Ask ChatGPT to generate a program for you. Imagine what the result would look like to you if you had never read a single line of code in your life. It is pretty obvious that the output of ChatGPT is indistinguishable from what your developers produce, so they obviously are superfluous.
You may think I am exaggerating, but there are many people in very large companies who think exactly like that. Often their ambitions are smaller, but usually that just worsens their lack of understanding. Problems which could be trivially solved by a mediocre software engineer in a week suddenly become AI Game changing Technologies.
Their investors are heavily invested in AI and are applying pressure / guidance to have their other companies use it to boost the value of their AI investments.
I keep seeing great coders losing their jobs and I'm wondering why you would fire someone who has been given superpowers (allegedly). AI is a force multiplier. If you have 1,000 workers, they should now be able to produce at the rate of 10,000. (OK, maybe more like 1,500.) If you have a clear vision from the top, you will be able to hit your goals ten times sooner.
Let's say company A is planning on releasing product X to the market in FY25, product Y in FY27, and product Z, in FY30. Well, now you should be able to marshal your resources and release all three products on an expedited schedule.
Obviously this is reductive, but it seems like the best companies are going to use this new tool in their toolkit to dominate, and bad companies are going to get crushed. AI is not a panacea, just yet. But it sounds so confusing to me to hear "We invented jet engines which are superior to prop engines, so we're firing a bunch of our pilots."
In my experience it’s the good coders that consider AI beneath them and then get steamrolled in productivity by the mid tier devs using AI.
On what planet does that occur?
Because they are selling to market. AI is the hot new thing, so they sell it... It is all about short term now. Make a next few quarters sound hot and line goes up.
What I mean is they really believe it, not just for market selling
For example, I know a boss who added AI as a factor in performance reviews, where someone was evaluated as 'not AI-capable' for not using ChatGPT. He also asked for 3x output from the teams and said, 'If you feel it's too hard to complete, go learn how to work with AI.'
I'd tell that person that they are an idiot.
Management calls out employees for not using LLMs due to believing that once LLM use becomes prevalent throughout the company, then they'll finally see the productivity gains they are betting on. Only once the productivity gains materialize will they be able to reduce costs/headcount, so until then they'll chastise employees for not using "AI", in the belief that the employees not using LLMs are the missing pieces holding the productivity gains back.
They are the "boss", however, so you most likely would not if you were working for them, so there's probably just nobody to point out the emperor's naked.
Any CEO without an AI strategy would get punished by their investors. Public or private.
The AI market is in a feedback loop, which is why it is so powerful.
The clear answer is that a GPT can bullshit convincingly, and the nature of these managers' jobs involves a lot of convincing bullshitting. Since everyone treats their own experience as representative, they assume a GPT will perform as well at others' main skills as it performs at theirs.
> I mean, they must have used ChatGPT, right? How does ChatGPT give them the conclusion about this?
Some of them, barely.
Confirmation bias in my experience. “ChatGPT can write this email to investors for me!” cognitively balloons into “ChatGPT can replace my engineering team!” Quite possibly a dash of Dunning-Kruger Effect too.
Executives (and managers in general) are used to delegating tasks to subordinates. I think they just don't perceive any substantial difference between delegating a task to a person and delegating the same task to an LLM.
Most arguments that technicians will try to put forth against LLMs will fail here because they apply in the same way to humans. "LLMs sometimes make weird errors? Well, so do human employees!"
> They genuinely believed the current state of the art is polished and powerful enough to replace huge swathes of coders, designers, writers, etc.
They already have replaced designers and writers. It probably seems reasonable they could replace other jobs very soon, if your knowledge is superficial and you're just reading marketing brochures.
The reality is that AIs are confidently wrong. Where that doesn't matter, they have already cut swathes through the workforce. Where it does, they haven't made much progress so far.
Yet I saw a post from someone here saying they were a web programmer who didn't know JavaScript. They just prompted the AI until it produced something that worked. Calling themselves a "web programmer" seemed like a stretch to me, and I'm sceptical about them tackling anything but cookie-cutter work the AI has already seen a lot of, but that's clearly the direction Copilot and its ilk are headed. Currently they look like they might soon succeed at using RAG to produce a better grep, so they have a little way to go yet. I find checking code written by AIs to be more effort than it's worth, primarily because if I have to understand the code anyway, it's easier if I write it. But maybe that will change. One day. Right now this constant checking of the AI's output is a huge time sink.
But there is a class of jobs that consists of continually checking humans' output and giving corrections. That is ... managers. Even CEOs. Maybe the future job of the board is to select the right AI to be the CEO, and the future of the human workforce is to be prompt engineers for that AI CEO. My guess is AIs will be replacing managers and CEOs before they make a serious dent in STEM jobs.
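On the "RAG to produce a better grep" point above, a toy sketch of the idea; real tools use learned embeddings, so the term-count similarity here is a deliberate simplification, and the file contents are invented for the demo:

    # Toy retrieval over code chunks: rank files by similarity to a natural-
    # language query, i.e. a grep that tolerates different wording.
    import math, re
    from collections import Counter

    def vectorize(text):
        return Counter(re.findall(r"[a-z_]+", text.lower()))

    def cosine(a, b):
        dot = sum(a[t] * b[t] for t in a)
        norm = math.sqrt(sum(v * v for v in a.values())) * \
               math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    def retrieve(query, chunks, k=3):
        q = vectorize(query)
        return sorted(chunks, key=lambda n: cosine(q, vectorize(chunks[n])),
                      reverse=True)[:k]

    # Hypothetical file summaries, just for the demo.
    chunks = {
        "auth.py": "verify_token refresh_session expired token login",
        "billing.py": "charge_card capture payment retry invoice",
    }
    print(retrieve("where do we refresh expired sessions?", chunks, k=1))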
Congratulations, you have found a bunch of CEOs that are going to be jobless in a few years after leading their companies down the wrong direction
If they are all CEOs like Klarna's, that makes sense. Just look up how well Klarna has been doing under his tenure.
Hot take: this will eventually be the downfall of AI/burst the AI bubble, when businesses all over finally realise that 'AI-ing' their workforce was a mistake that cost them dearly, and need to re-employ almost the same number of staff to do what they're in business for.
One hopes at least, anyway...
Could be a market opportunity as old incumbents blow up after they get rid of all domain experts and have undebuggable unfixable unoptimized messes.
Again, similar to what happened with Indian outsourcing.
Isn't a "hot take" supposed to be controversial?
There is massive burnout in the industry, there was massive over-hiring, etc., and folks are trying to keep the numbers going up and to the right somehow, even when the economy is not helping.
AI gives them a smokescreen, gives them fear/uncertainty they can use as leverage over their workforce, and sometimes can help efficiency (but usually not, if you factor in any element of quality - which usually they can get away with ignoring at first).
They're going to get rolled when the script kiddies figure out how to drain the corporate bank accounts by starting the chat asking to disregard all previous instructions...
> glassy-eyed CEOs
We've been here before: from companies throwing millions at rebuilding their content website as an iOS app, to showing off their crypto or low-code "R&D", to car manufacturers going full-BEV, Karl Marx's axiom that "social existence determines consciousness" seems to be vindicated by this fad as well.
The final problem, as usual, is that if your product is already running and not a complete shit show, it will take some time to see how you messed up, maybe not until the CEO/decision maker has already moved on to another company, so it becomes someone else's fault. Twitter fired how many engineers, and still runs the same from an outside view...
I'm a software engineer with 7 years of experience.
I've built products used by millions of users, I've built ML models and AI coding tools, and I've been using AI to generate 70% of the code I've shipped to production in the last few months.
I do believe that we can replace a large percentage of software engineers with AI in the next 3 years (2025-2027).
Also this 1 million dollar Kaggle competition on AI coding just launched: https://www.kaggle.com/competitions/konwinski-prize
Ah the junior dev starry-eyed mindset. Doing web dev isn't the only thing in the world.
AI will absolutely accelerate development, but we still need so much more modernization across all industries. We still have factories running on Windows NT with software made in the 80s. We have mines, lumberyards, farms, retail, parking, bio labs, etc. So many things need to be upgraded to the same standard as big tech is, but hasn't because it was too expensive.
Much of this, AI isn't great at. It's easy to spit out a web page with ReactJS that's been done 10,000 times already by millions of devs, but it's not so easy to make sure your smelter doesn't overheat or go cold using 20,000 live sensors.
New devs will have a harder time getting in, but they'll also have an easier time learning. Senior devs will be able to supervise much more.
The industry will change, but it's far from over. If anything, it will grow massively and make other industries much more productive.
I have to clarify that my statement doesn't apply to software engineers who are not working in tech companies or tech industries.
I usually call them IT instead of tech. Wrote more in my blog post: https://16x.engineer/2022/08/23/it-vs-tech.html
I have no knowledge of other industries where software is not the primary product.
The word "tech industry" is one of the more hilarious terms. Sure, some startup with a website and App, is "technology", but aircrafts, cars, rockets, heavy machinery (which all make heavy use of complex software, with rigorous safety constraints and which all involve ongoing academic research, as well as extremely detailed Design processes) is not "technology".
Truly one of the more grandiose terms people in the software industry use to describe themselves. Same with every single person calling themselves an engineer, something which in many countries would be illegal.
For most industries the tech isn't the primary product.
Google sells ads, not tech. Uber sells rides, not tech. Amazon sells a marketplace, not tech. Tesla sells cars, not tech.
Tech is just the tool.
Yes. Those products you listed are not in tech industries by my definition as well. You are right.
Google ads is in ad industry.
Uber is in transport industry.
Amazon is in electronic commerce industry.
I'm glad that we understand that we are talking about different things.
Can you provide an example of a company I would have heard of that is in the tech industry, just so I can get a better understanding of what y'all are talking about?
Is Oracle in the tech industry?
Also on the blog post you linked you seem to imply that you do consider Google to be a tech industry company, so I am very confused.
> In tech industry, software is the main product of the company. It can be sold to a customer (B2C product) or a business (B2B product).
> For example, Facebook is the main product of Meta. Google Search is the main product of Google.
It depends on what you consider as core product for Google.
For some people (consumer), it would be Google Search, it is a piece of software, so in that sense Google is a tech company because its main product is Google Search.
However, marketers who use Google Ads deal with the ads division in Google, and that division's main product is the ads service. So in that division, the main product is ad space, not software. And rightfully so: Google Ads is not in the tech industry, but in the ad industry, enabled by tech.
For pure tech companies, I would say AWS division in Amazon, Microsoft (Windows, Azure, GitHub divisions), Facebook/Instagram division in Meta (not Ads division).
Then there are a lot of companies that just sell software as a service (SaaS) or just software license, they are millions of them, but to name a few: Figma, Slack, Vercel, Supabase, Docker, OpenAI, Salesforce, Oracle.
Google's main product is ads, not search. Search is also not tech, it's a directory of websites.
Pure tech products are GCP or AWS.
Instagram's business is photos, Facebook is connections. Figma is a design tool, Slack is a way to connect with coworkers.
Most of tech isn't really tech.
Thanks for expounding on your views.
Personally I think Oracle is in the licensing business, but perhaps they also have a "tech industry division," as you say.
I'm a principal SWE with 25 years of experience and I think software today is comically bad and way too hard to use. So I think we can get engineers to write better software with these tools. The talk of "replacement" is going to be premature until we get something remotely resembling AGI. Unless your problems are so simple that a monkey could solve them, AI of today and foreseeable future is not going to solve them end to end. At best it'll fill in the easy parts, which you probably don't want to do anyway. Write a test. Simple refactor. Bang out some simple script to pay down some engineering debt. I've yet to see a system that doesn't crap out in the very beginning on the real problems that I solve on a daily basis. I'm by no means a naysayer - I work in this field and use AI many times daily.
Funny enough, now I write better code than I used to thanks to AI because of two reasons:
- AI naturally writes code that is more organized and clean (proper abstractions, no messy code)
- I've recognized that, for AI to write code on an existing codebase, the code has to be clean, organized, and make sense, so I tend to do more refactoring to make sure AI can take over and update it when needed.
>Funny enough, now I write better code than I used to thanks to AI because of two reasons:
I assume you also believe you'll be one of the developers AI doesn't replace.
I'm actively transitioning out of a "software engineer" role to be more open minded on how to coexist with AI while still contributing value.
Prompt engineering, organizing code for AI agents to be more effective, guiding non-technical people to understand how to leverage AI, etc. I'm also building products myself and selling them myself.
Today an AI told me that non-behavioral change in my codebase was going to give us a 10x improvement on our benchmarks.
Frankly, if you were writing worse-structured code than what GPT or whatever generates today, then you are just a mediocre developer.
See, the thing is, to determine which abstractions are "right and proper" you _already need a software engineer_ who knows those kinds of things. Moreover, that engineer needs the ability to read that code, understand it, and plan its evolution over time. He/she also needs to be able to fix the bugs, because there will be bugs.
I think your main thesis is that "AI of today and foreseeable future is not going to solve them end to end."
My belief is that we can't solve them today (agree with you), but we can solve them in the foreseeable future (within 3 years).
So it is really a matter of different beliefs. And I don't think we will be able to convince each other to switch belief.
Let's just watch what happens?
I'm with you 100% of the way on this one. Am coding with Claude 3.5 right now using Aider. The future is clear at this point. It won't get worse and there's still so much low hanging fruit. Expertise is still useful to guide it, but we're all product managers now.
There are a lot more photographers now than there ever were painters, and the size of the industry is much larger than it used to be. It is true that our work will change, but personally I think that's great - I don't enjoy the initial hump you usually have to overcome before you begin to actually solve real problems, and AI is often able to take me over that hump, or fill in things that don't matter. E.g. I'm a backend person but need a frontend for the demo - I'm able to do that on my own now, without spending days figuring out some harebrained web framework and CSS stack - something I probably wouldn't do at all if there were no AI.
Your analogy fails because the economy still needed human workers to take the photographs whereas there is a possibility that in 5 or 10 years, the economy will have no need and no use for most people.
I work in this field and I would bet that in 5-10 years the situation will not be much different compared to today in terms of employment unless we invent AGI all of a sudden, which I don't see any signs that it'd even remotely happen. Job definitions will change a bit, productivity will improve, cost per LOC will drop, more underserved niches will become tractable/profitable.
Well, I know what will happen within about a one-year time horizon. As far as developer-assistance models are concerned, the difference at the end of 2025 is not going to be dramatic, same as it was not between the end of '23 and this year, and for the same reasons - you need symbolic reasoning to generate large chunks of coherent, correct code. I also don't see where "dramatic" would come from after that either, unless we get some new physics that lets us run models 10-20x the size in realtime, economically. But even then, until we get true AGI, which we won't get in my remaining lifetime, those systems will be best used as a "bicycle for the mind" rather than "the mind", and in the vast majority of cases they will not be able to replace humans entirely, at least not in software engineering.
> 'proper abstraction'
I assume you're not talking about GPT-4o, because in my experience it's absolutely dogshit at abstracting code meaningfully. Good at finding design patterns, sure, but if your AI doesn't understand how to build a state machine, I'm not sure how I'm supposed to use it.
It's great at writing tests and documentation though.
GPT-4o is at least 1 order of magnitude behind Claude 3.5 Sonnet in coding. I use the latter.
Claude is better most of the time on the simpler stuff, but o1 is better on some of the more difficult problems that Claude craps out on. Really $40/mo is not too much to pay for both.
That's the kind of thing I'd love to know more about.
You should join my Discord channel so we can chat more: https://discord.gg/S44tzqHqU4
Also my gut feeling: only about 30% of the code I write needs some kind of engineering skill, and I love reaching those problems. Until I'm there, there's just a huge amount of boilerplate and patterns to repeat.
> Though Klarna's website is advertising open positions at the time of writing, a spokesperson told Business Insider the company wasn't "actively recruiting" to expand its workforce. Rather, the spokesperson said, Klarna is backfilling "some essential roles," primarily in engineering.
I am not applying for a job, I'm just trying to get backfilled!
Translation: Klarna is not growing and they needed a good excuse.
It's window dressing for their planned IPO. Klarna wants to go public during the first half of 2025, so it's all hands on deck to prop up the numbers. And saying the "right things" to investors, especially AI.
Net income -2.5 billion kr (2023)
Good thing they aren't growing, they appear to have scaled those negative margins, and it's time to dial it back.
Will AI also be able to hyperscale a negative margin business?
It's the only kind of business AI has scaled so far
Was it ever a good place to work? When they opened their Berlin office the word was the mobile team was a revolving door of contractors doing append-only development on a monolithic RN app. It already seemed dysfunctional then.
For a buy-now-pay-later business, it has aggressive narrative-shaping PR that seeks to paint it as something more. And most tech news outlets seems to regurgitate it willingly.
The joke in the Swedish tech scene was that you work there until you graduate to Spotify.
> append-only development
I have not seen this term before. What do you mean by it?
Probably zero maintenance on old code, only adding new things.
Not exactly a recipe for a long-term healthy code base, and helps explain the revolving door phenomenon previously mentioned.
I'd suggest he lead by example and fire himself.
In the article it says that they are shrinking the workforce by not replacing the naturally churning people (20% per year), rather than firing people.
20% attrition sounds pretty much on the edge of quiet firing.
The implied message of "once we figure out how to automate your job, we'll fire you and replace you with AI as well" has got to be great for employee morale.
I'm going to stop using services that use AI for call centers; I'd much rather talk to a real person and know that they can handle any kind of situation. So I'm going to bring my money to the businesses that employ real human beings.
Lol, Klarna is not even a real company. They were a classic ZIRP company. Sell 100 dollars for 80 dollars.
Isn't it because they're trying to IPO out, so trying to make a bullish case for themselves?
And this is additional reason why I would never use their services.
Of all the payment options Klarna always struck me as one of the less trustworthy ones.
IIRC they are one of those services that basically want access to your bank account, from which they can read your account balance. I think this is even regulated by the EU, but why on earth should anyone agree to that?
Is there any serious document/person on how many jobs have actually been lost to AI? So far the headlines have come from CEOs playing with Other People's Money (a fellow comment states 1 of 76 quarters was profitable), giant consulting orgs, etc.
Looks like it worked - make a controversial statement and now a lot of people have heard of your company.
It's a rather famous scummy company in sweden.
Basically digital loan sharks.
In Sweden they are infamous for using dark patterns to make people forget to pay their invoice, and then immediately add a large fee.
Also known in the US as a provider of "buy now, pay later" arrangements for online shopping.
It's one of these banks that is like "Choose us because we have an app and emojis". Yeah no thanks, I want a bank that knows how to bank first and foremost.
Klarna uses a huge dark pattern in their payment processing (regardless of BNPL), whereby they store your details for future use by default, even without an account. Last time I checked, you only needed a couple of pieces of easily identifiable information to be granted access to "your" autofill. I know all the H&M group shops use it as their payment processor in the UK.
Their AI forgot to update their jobs page then. https://klarnagroup.teamtailor.com/jobs
They pay surprisingly low salaries in their Stockholm location. Stated salary band for senior software engineer is 62-69 kSEK/month.
That means a post-tax take home of about 45kSEK and a small one bedroom apartment in the general neighbourhood of their office is on the order of 4-5MSEK.
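For a rough sense of what that salary buys, a quick affordability sketch; the interest rate and amortisation figures are assumptions for illustration, not any bank's actual terms:

    # Rough affordability check: 45 kSEK/month take-home vs. a 4.5 MSEK flat.
    price = 4_500_000    # SEK, mid-range of the 4-5 MSEK estimate above
    down = 0.15 * price  # 15% down payment (assumed minimum)
    loan = price - down
    rate = 0.04          # assumed annual mortgage interest rate
    amort = 0.02         # assumed mandatory amortisation per year
    monthly = loan * (rate + amort) / 12
    take_home = 45_000   # SEK/month, from the comment above
    print(f"housing ~= {monthly:,.0f} SEK/month ({monthly / take_home:.0%} of take-home)")

Under those assumptions, the apartment alone eats roughly 40% of the take-home pay.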
Why do they need 3500 people (!) to issue simple installment loans?
My guess is that it's to try and drive up the perceived value of the company before their IPO
As usual with startups, building a profitable, sustainable business is never the objective. Building complexity in order to grift endless VC rounds is the objective. And it's a lot easier to justify a big one with 3.5k employees than with 3.
Klarna was recently fined for money laundering, and in Sweden they had long fights with the unions (and lost).
But their services are pretty decent.
Decent? They create debt among teenagers. As if teenagers didn't have enough angst, they now also have to worry about fees for having bought something marketing scum manipulated them into buying, with Klarna letting them pay now and regret soon.
Teenagers are allowed to take out high-interest loans? Where? I think in Poland any parent could refuse to repay such a loan.
I think probably anyone could refuse to repay a $120 unsecured loan with pretty limited consequences.
In Sweden at least they will just send it to collections. Something which, down the road, could negatively affect your ability to, for example, rent an apartment, and in theory they could even garnish your wages if you keep refusing to pay.
18 year olds aren't considered adults in Poland?
Well, the deal is that once one is 18 one may make one's own decisions and suffer the consequences. Klarna is scummy for sure, but the "SMS loans" in Sweden are even less appealing.
Everyone knows by now Klarna operates like a payday lender in a strip mall. It isn't a flex to say you can automate your customer service experience away with AI when your customer service borders on non-existent / terrible.
AI = jobs moving to Eastern Europe or Asia.
You see it with a lot of tech companies acquired by PE or massively funded by VCs: on their careers pages, most positions are in either India or Eastern Europe.
At least he's blunt about it.
That’s what I say! Most companies think like this but say otherwise, and receive praise or at least neutral stares. It’s almost like honesty is punished and deception is not.
Honesty isn't punished here; a misinformed opinion is. "AI" cannot do all the jobs humans do, and it's unclear if and when it will.
If we didn't have AI he would simply come up with other excuses to cut costs you know.
And who’s going to use your payment service when they can’t buy anything because AI took their job?
Bullshit business model combined with bullshit technology. It's a perfect match.
Is there a list of public companies like this that are good candidates for shorting?
Then when these guys get shot there will be posts complaining
“Think of all the money we could save if we just fired all staff!” was a joke in my business school. I suppose Klarna will finally deliver a punchline in a few years.
All the HN posters love AI and sing its praises to the heavens until these articles are launched and it becomes reality, and the real reason tech wants AI so badly is revealed haha.
“It is difficult to get a man to understand something when his salary depends on his not understanding it.” -Upton Sinclair
See also: https://news.ycombinator.com/item?id=42403232
Great that Apple Pay is growing a lot right now. Every other e-commerce website has it.
Better UX than Klarna and no scummy dark patterns to add "late fees".
Sell short.
Now maybe AI will buy their products?
It never makes sense to employ people as a jobs program, just so you can get more customers.
Henry Ford essentially created the middle class with the approach you are saying "never makes sense".
Is there any evidence for that claim, aside from it being an oft-repeated tale on the internet? Mathematically it can't possibly work: you're spending $80k (or whatever) on an employee who might spend $30k on a car every 5 years, so you're paying $80k/year to get $6k/year in revenue. It's actually worse than that, because Ford only makes around 15% gross margin, so at best that's $900 you get back for spending $80k on an employee. That's so low that even if you account for money multiplier effects, it's unlikely you'll get anywhere near break-even.
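The arithmetic in that paragraph, spelled out; the wage, car price, replacement cycle and margin are the comment's assumed figures, not historical Ford data:

    # Back-of-envelope: does paying an employee more "pay for itself" in car sales?
    wage = 80_000        # annual employee cost (assumed)
    car_price = 30_000   # car price (assumed)
    years_per_car = 5    # replacement cycle (assumed)
    gross_margin = 0.15  # manufacturer gross margin (assumed)

    revenue_per_year = car_price / years_per_car       # $6,000/year
    profit_per_year = revenue_per_year * gross_margin  # $900/year
    print(f"${profit_per_year:,.0f} recouped per ${wage:,} of wages "
          f"({profit_per_year / wage:.1%})")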
Sounds like a great company to work at and a great boss to work for (/s)
But but but AI will never take anyone's jobs. AI will only create new opportunities for innovation. The Luddites are wrong, I tell you, wrong!