I’ve seen Picallilli’s stuff around and it looks extremely solid. But you can’t beat the market. You either have what they want to buy, or you don’t.
> Landing projects for Set Studio has been extremely difficult, especially as we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that
The market is speaking. Long-term you’ll find out who’s wrong, but the market can usually stay irrational for much longer than you can stay in business.
I think everyone in the programming education business is feeling the struggle right now. In my opinion this business died 2 years ago – https://swizec.com/blog/the-programming-tutorial-seo-industr...
This is the type of business that's going to be hit hard by AI. And the type of businesses that survive will be the ones that integrate AI into their business the most successfully. It's an enabler, a multiplier. It's just another tool, and those who wield the tools best tend to do well.
Taking a moral stance against AI might make you feel good but doesn't serve the customer in the end. They need value for money. And you can get a lot of value from AI these days; especially if you are doing marketing, frontend design, etc. and all the other stuff a studio like this would be doing.
The expertise and skill still matter. But customers are going to get a lot further without such a studio and the remaining market is going to be smaller and much more competitive.
There's a lot of other work emerging though. IMHO the software integration market is where the action is going to be for the next decade or so. Legacy ERP systems, finance, insurance, medical software, etc. None of that stuff is going away or at risk of being replaced with some vibe coded thing. There are decades worth of still widely used and critically important software that can be integrated, adapted, etc. for the modern era. That work can be partly AI assisted of course. But you need to deeply understand the current market to be credible there. For any new things, the ambition level is just going to be much higher and require more skill.
Arguing against progress as it is happening is as old as the tech industry. It never works. There's a generation of new programmers coming into the market and they are not going to hold back.
> Taking a moral stance against AI might make you feel good but doesn't serve the customer in the end. They need value for money. And you can get a lot of value from AI these days; especially if you are doing marketing, frontend design, etc. and all the other stuff a studio like this would be doing.
So let's all just give zero fucks about our moral values and just multiply monetary ones.
>So let's all just give zero fucks about our moral values and just multiply monetary ones.
You are misconstruing the original point. They are simply suggesting that the moral qualms about using AI are not that significant - not to the vast majority of consumers, nor to the government. There are a few people who might exaggerate these moral issues for self-serving reasons, but they won't matter in the long term.
That is not to suggest there are absolutely no legitimate moral problems with AI, but they will pale in comparison to what the market needs.
If AI can make things 1000x more efficient, humanity will collectively agree in one way or the other to ignore or work around the "moral hazards" for the greater good.
You could start by explaining which specific moral value of yours goes against AI use. It might bring clarity as to whether these values are that important to begin with.
It’s completely reasonable to take a moral stance that you’d rather see your business fail and shut down than do X, even if X is lucrative.
But don’t expect the market to care. Don’t write a blog post whining about your morals, when the market is telling you loud and clear they want X. The market doesn’t give a shit about your idiosyncratic moral stance.
Edit: I’m not arguing that people shouldn’t take a moral stance, even a costly one, but it makes for a really poor sales pitch. In my experience this kind of desperate post will hurt business more than help it. If people don’t want what you’re selling, find something else to sell.
> when the market is telling you loud and clear they want X
Does it tho? Articles like [1] or [2] seem to be at odds with this interpretation. If it were any different we wouldn't be talking about the "AI bubble" after all.
[1]https://www.pcmag.com/news/microsoft-exec-asks-why-arent-mor...
[2]https://fortune.com/2025/08/18/mit-report-95-percent-generat...
Exactly. Microsoft, for instance, got a noticeable backlash for cramming AI everywhere and for its future plans in that direction.
How do you know? I give a shit. A ton of people in this thread give a shit. This blog post is a great way to communicate with others who give a shit.
The only thing people don’t give a shit about is your callous and nihilistic dismissal.
What you (and others in this thread) are also doing is a sort of maximalist dismissal of AI itself, as if it is everything that is evil and, to be on the right side of things, one must fight against AI.
This might sound a bit ridiculous but this is what I think a lot of people's real positions on AI are.
Yet to see anything good come from it, and I’m not talking about machine learning for specific use cases.
And if we look at the players who are the winners in the AI race, do you see anyone particularly good participating?
800 million weekly active users for ChatGPT. My position on things like this is that if enough people use a service, I must defer to their judgement that they benefit from it. To do the contrary would be highly egoistic and suggest that I am somehow more intelligent than all those people and I know more about what they want for themselves.
I could obviously give you examples where LLMs have concrete use cases, but that's beside the larger point.
> 1B people in the world smoke. The fact something is wildly popular doesn’t make it good or valuable. Human brains are very easily manipulated, that should be obvious at this point.
Almost all smokers agree that it is harmful for them.
Can you explain why I should not be equally suspicious of gaming, social media, movies, carnivals, travel?
You should be. You should be equally suspicious of everything. That's the whole point. You wrote:
> My position on things like this is that if enough people use a service, I must defer to their judgement that they benefit from it.
Enough people doing something doesn't make that something good or desirable from a societal standpoint. You can find examples of things that go in both directions. You mentioned gaming, social media, movies, carnivals, travel, but you can just as easily ask the same question for gambling or heavy drugs use.
Just saying "I defer to their judgment" is a cop-out.
I don’t do zero sum games, you can normalize every bad thing that ever happened with that rhetoric. Also, someone benefiting from something doesn’t make it good. Weapons smuggling is also extremely beneficial to the people involved.
Yes but if I go with your priors then all of these are similarly to be suspect
- gaming
- netflix
- television
- social media
- hacker news
- music in general
- carnivals
A priori, all of these are equally suspicious as to whether they provide value or not.
My point is that unless you have reason to suspect otherwise, people engaging in consumption through their own agency is in general preferable. You can of course bring counterexamples, but they are caveats to my larger, truer point.
Social media for sure, and television and Netflix in general, absolutely. But again, providing value is not the same as something being good. A lot of people consider inaccurate LLM output to be of high value because it arrives in nice wrapping, along with the idea that you're always right.
That's definitely not what I am doing, nor implying, and while you're free to think it, please don't put words in my mouth.
>The only thing people don’t give a shit about is your callous and nihilistic dismissal.
This was you interpreting what the parent post was saying. I'm similarly providing a value judgement that you are doing a maximalist AI dismissal. We are not that different.
We are basically 100-ϵ% the same. I have no doubt.
Maybe the only difference between us is that I think there is a difference between a description and an interpretation, and you don't :)
In the grand scheme of things, is it even worth mentioning? Probably not! :D :D Why focus on the differences when we can focus on the similarities?
Ok, change my qualifier from interpretation to description if it helps. I describe you as someone who dismisses AI in a maximalist way.
>Maybe the only difference between us is that I think there is a difference between a description and an interpretation, and you don't :)
>Ok change my qualifier from interpretation to description if it helps.
I... really don't think AI is what's wrong with you.
This is a YC forum. That guy is giving pretty honest feedback about a business decision in the context of what the market is looking for. The most unkind thing you can do to a founder is tell them they’re right when you see something they might be wrong about.
Which founder is wrong? It's not only the brainwashed here who are entrepreneurs.
Are you going to hire him?
If not, for the purpose of paying his bills, your giving a shit is irrelevant. That’s what I mean.
You mean, when evaluating suppliers, do I push for those who don't use AI?
Yes.
I'm not going to be childish and dunk on you for having to update your priors now, but this is exactly the problem with speaking in aphorisms and glib dismissals. You don't know anyone here, you speak in an authoritative tone for others, and you redefine what "matters" and what is worthy of conversation as if this is up to you.
> Don’t write a blog post whining about your morals,
why on earth not?
I wrote a blog post about a toilet brush. Can the man write a blog post about his struggle with morality and a changing market?
I understand that website studios have been hit hard, given how easy it is to generate good enough websites with AI tools. I don't think human potential is best utilised when dealing with CSS complexities. In the long term, I think this is a positive.
However, what I don't like is how little the authors are respected in this process. Everything that the AI generates is based on human labour, but we don't see the authors getting the recognition.
> we don't see the authors getting the recognition.
In that sense AI has been the biggest heist that has ever been perpetrated.
I think it's just as likely that business who have gone all-in on AI are going to be the ones that get burned. When that hose-pipe of free compute gets turned off (as it surely must), then any business that relies on it is going to be left high and dry. It's going to be a massacre.
Sure, and it takes five whole paragraphs to have a nuanced opinion on what is very obvious to everyone :-)
>the type of business that's going to be hit hard by AI [...] will be the ones that integrate AI into their business the most
There. Fixed!
AI is not a tool, it is an oracle.
Prompting isn't a skill, and praying that the next prompt finally spits out something decent is not a business strategy.
Seeing how many successful businesses are a product of pure luck, using an oracle to roll the dice is not significantly different.
Totally agree, but I’d state it slightly differently.
This type of business isn’t going to be hit hard by AI; this type of business owner is going to be hit hard by AI.
> And the type of businesses that survive will be the ones that integrate AI into their business the most successfully.
I am an AI skeptic and until the hype is supplanted by actual tangible value I will prefer products that don't cram AI everywhere it doesn't belong.
what happens if the market is right and this is the "new normal"?????
Same with StackOverflow being down today - it seems like not everyone cares anymore, whereas back then it would have caused a total breakdown because SO was vital.
> what happens if the market is right and this is the "new normal"?????
Then there's an oversupply of programmers, salaries will crash, and lots of people will have to switch careers. It's happened before.
It's not as simple as putting all programmers into one category. There can be oversupply of web developers but at the same time undersupply of COBOL developers. If you are a very good developer, you will always be in demand.
> If you are a very good developer, you will always be in demand.
"Always", in the same way that five years ago we'd "never" have an AI that can do a code review.
Don't get me wrong: I've watched a decade of promises that "self driving cars are coming real soon now, honest"; the latest news about Teslas is that they can't cope with leaves. I certainly *hope* that a decade from now we'll still be having much the same conversation about AI taking senior programmer jobs, but "always" is a long time.
Waymo and Tesla already operate in certain areas, but even where the tech is ready, regulation is still very much a thing.
Some people will lose their homes. Some marriages will fail from the stress. Some people will choose to exit life because of it all.
It's happened before and there's no way we could have learned from that and improved things. It has to be just life changing, life ruining, career crippling. Absolutely no other way for a society to function than this.
I'm young - please, when was that, and in what industry?
After the year 2000, when the dot-com bubble burst.
A tech employee posted that he had looked for a job for 6 months, found none, and joined a fast food shop flipping burgers.
That turned tech workers switching to "flipping burgers" into a meme.
The .com implosion: tech jobs of all kinds went from "we'll hire anyone who knows how to use a mouse" to the tech-jobs section of the classifieds being omitted entirely for 20 months. There have been other bumps in the road since then, but that was a real eye-opener.
I haven’t visited StackOverflow for years.
I stopped using it much even before the AI wave.
I've honestly never intentionally visited it (as in, gone to the root page and started following links) - it was just where Google sent me when searching for answers to specific technical questions.
It became as annoying as Experts Exchange, the very thing it railed against!
buggywhips are having a temporary setback.
In contrast to others, I just want to say that I applaud the decision to take a moral stance against AI, and I wish more people would do that. Saying "well you have to follow the market" is such a cravenly amoral perspective.
nobody is against his moral stance. the problem is that he’s playing the “principled stand” game on a budget that cannot sustain it, then externalizing the cost like a victim. if you're a millionaire and can hold whatever moral line you want without ever worrying about rent, food, healthcare, kids, etc. then "selling out" is optional and bad. if you're joe schmoe with a mortgage and 5 months of emergency savings, and you refuse the main kind of work people want to pay you for (which is not even that controversial), you’re not some noble hero, you’re just blowing up your life.
> he’s playing the “principled stand” game on a budget that cannot sustain it, then externalizing the cost like a victim
No. It is the AI companies that are externalizing their costs onto everyone else by stealing the work of others, flooding the zone with garbage, and then weeping about how they'll never survive if there's any regulation or enforcement of copyright law.
No, of course you don't have to – but don't torture yourself. If the market is all AI, and you are a service provider that does not want to work with AI at all then get out of the business.
If you found it unacceptable to work with companies that used any kind of digital database (because you found centralization of information and the amount of processing and analytics this enables unbecoming) then you should probably look for another venture instead of finding companies that commit to pen and paper.
> If the market is all AI, and you are a service provider that does not want to work with AI at all then get out of the business.
Maybe they will, and I bet they'll be content doing that. I personally don't work with AI and try my best not to train it. I left GitHub & Reddit because of this, and I'm not uploading new photos to Instagram. The jury is still out on how I'm gonna share my photography, and not sharing it is on the table as well.
I may even move to a cathedral model or just stop sharing the software I write with the general world, too.
Nobody has to bend and act against their values and conscience just because others are doing it and the system demands that we betray ourselves for its own benefit.
Life is more nuanced than that.
Good on you. Maybe some future innovation will afford everyone the same opportunity.
How large an audience do you want to share it with? Self-host photo album software, on hardware you own, behind a password, for people you trust.
Before that AI craze, I liked the idea of having a CC BY-NC-ND[0] public gallery to show what I took. I was not after any likes or anything. If I got professional feedback, that'd be a bonus. I even allowed EXIF-intact high resolution versions to be downloaded.
Now, I'll probably install a gallery webapp on my webserver and put it behind authentication. I'm not rushing because I don't crave any interaction from my photography. The images will most probably be optimized and resized to save some storage space as well.
[0]: https://creativecommons.org/licenses/by-nc-nd/4.0/
Sorry for them - after I got laid off in 2023 I had a devil of a time finding work, to the point my unemployment ran out. 20 years as a dev, tech lead, and full stack, including stints as an EM and CTO.
Since then I pivoted to AI and Gen AI startups - money is tight and I don't have health insurance, but at least I have a job…
Come to Europe. Salaries are (much) lower, but we can use good devs and you'll have vacation days and health care.
Moving to Europe is anything but trivial. Have you looked at y'all's immigration processes recently? It can be a real bear.
Yeah. It is much harder now than it used to be. I know a couple of people who came from the US ~10 to 15 years ago and they had it easy. It was still a nightmare with banks that don't want to deal with US citizens, though.
As Americans, getting a long-term visa or residency card is not too hard, provided you have a good job. It’s getting the job that’s become more difficult. For other nationalities, it can range from very easy to very hard.
Yeah it depends on which countries you're interested in. Netherlands, Ireland, and the Scandinavian ones are on the easier side as they don't require language fluency to get (dev) jobs, and their languages aren't too hard to learn either.
If you have a US or Japanese passport and want to try NL: https://expatlaw.nl/dutch-american-friendship-treaty aka https://en.wikipedia.org/wiki/DAFT . It applies to freelancers.
Yeah, I'm in NL, so this is my frame of reference. Also, in many companies English is the main language, so that helps.
> Landing projects for Set Studio has been extremely difficult, especially as we won’t work on product marketing for AI stuff
If all of "AI stuff" is a "no" for you, then I think you just signed out off working in most industries to some important degree going forward.
This is also not to say that service providers should not have any moral standards. I just don't understand the expectation in this particular case. You ignore what the market wants and where a lot/most of new capital turns up. What's the idea? You are a service provider, you are not a market maker. If you refuse service with the market that exists, you don't have a market.
Regardless, I really like their aesthetics (which we need more of in the world) and do hope that they find a way to make it work for themselves.
> what the market wants
Pretty sure the market doesn't want more AI slop.
Andy Bell is absolute top tier when it comes to CSS + HTML, so when even the best are struggling you know it's starting to get hard out there.
How do you measure "absolute top tier" in CSS and HTML? Honest question. Can he create code for difficult-to-code designs? Can he solve technical problems few can solve in, say, CSS build pipelines or rendering performance issues in complex animations? I never had an HTML/CSS issue that couldn't be addressed by just reading the MDN docs or Can I Use, so maybe I've missed some complexity along the way.
Being absolute top tier at what has become a commodity skillset that can be done “good enough” by AI for pennies for 99.9999% of customers is not a good place to be…
When 99.99% of customers have garbage for a website, the 0.01% will grow much faster and topple the incumbents - nothing has changed.
Lots of successful companies have garbage as a website (successful in whatever sense, from Fortune 500 to neighbourhood stores).
I had a discussion yesterday with someone who owns a company creating PowerPoints for customers. As you might understand, that is also a business being hit hard by AI. What he does is offer an AI entry-level option, where the answers the customer gives to his questions (via a form) feed a script for running AI. With that he is able to combine his expertise with the market's demand for AI, and profit from it.
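For anyone curious what that looks like in practice, here's a minimal sketch of such a form-to-prompt pipeline. The form fields, prompt wording, and model choice are all invented for illustration; it assumes the official OpenAI Python client, but any LLM endpoint would do.

    # Minimal sketch: turn structured form answers into a slide-deck outline.
    # Form fields and prompt wording are hypothetical; assumes the OpenAI
    # Python client with OPENAI_API_KEY set in the environment.
    from openai import OpenAI

    client = OpenAI()

    def outline_from_form(answers: dict) -> str:
        # The owner's expertise lives in which questions the form asks
        # and how they are folded into the prompt.
        prompt = (
            f"Draft a {answers['slide_count']}-slide outline for a deck "
            f"aimed at {answers['audience']} about {answers['topic']}. "
            f"Tone: {answers['tone']}. One title and three bullets per slide."
        )
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    # Example form submission:
    print(outline_from_form({
        "slide_count": 8,
        "audience": "manufacturing SME owners",
        "topic": "reducing supply-chain risk",
        "tone": "practical and concise",
    }))

The point being: the API call is the trivial part; what he's actually selling is the form design and the prompt template that encode his expertise.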
> Landing projects for Set Studio has been extremely difficult, especially as we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that
I started TextQuery[1] from the same moralistic standpoint. Not with respect to using AI or not, but against the rot afflicting most of the software industry, which places more importance on making money and forcing subscriptions than on making something beautiful and detail-focused. I poured time into optimizing selections, perfecting autocomplete, and wrestling with Monaco's thin documentation. However, I failed to make it a sustainable business. My motivation ran out, and what I thought would be a fun multi-year journey collapsed into burnout and a dead-end project.
I have to say my time would have been better spent building something sustainable, making more money, and optimizing the details once I had that. It was naïve to obsess over subtleties that only a handful of users would ever notice.
There’s nothing wrong with taking pride in your work, but you can’t ignore what the market actually values, because that's what will make you money, and that's what will keep your business and motivation alive.
[1]: https://textquery.app/
I want to sympathize but enforcing a moral blockade on the "vast majority" of inbound inquiries is a self-inflicted wound, not a business failure. This guy is hardly a victim when the bottleneck is explicitly his own refusal to adapt.
Survival is easy if you just sell out.
If the alternative to "selling out" is making your business unviable and having to (essentially) beg the internet for handouts, then yes, you should "sell out" every time.
The guy won’t work with AI, but works with Google…
Surely there's AI usage that's not morally reprehensible.
Models that are trained only on public domain material. For value add usage, not simply marketing or gamification gimmicks...
How many models are only trained on legal[0] data? Adobe's Firefly model is one commercial model I can think of.
[0] I think the data can be licensed, and not just public domain; e.g. if the creators are suitably compensated for their data to be ingested
> How many models are only trained on legal[0] data?
None, since 'legal' for AI training is not yet defined, but OLMo is trained on the Dolma 3 dataset, which is:
1. Common crawl
2. Github
3. Wikipedia, Wikibooks
4. Reddit (pre-2023)
5. Semantic Scholar
6. Project Gutenberg
* https://arxiv.org/pdf/2402.00159
Nice, I hadn't heard of this. For convenience, here are HuggingFace models trained on Dolma:
https://huggingface.co/datasets/allenai/dolma
https://huggingface.co/models?dataset=dataset:allenai/dolma
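If you want to poke at the corpus yourself, here's a minimal sketch of streaming a few Dolma documents with the HuggingFace `datasets` library. Streaming avoids downloading the multi-terabyte corpus; the exact config and record field names are assumptions about how the dataset is hosted.

    # Minimal sketch: peek at a few Dolma records via streaming.
    # Assumes the `datasets` library; the "text" field name is an
    # assumption about the dataset schema.
    from datasets import load_dataset

    ds = load_dataset("allenai/dolma", split="train", streaming=True)
    for i, doc in enumerate(ds):
        print(f"doc {i}: {str(doc.get('text', ''))[:80]!r}")
        if i == 2:
            break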
I wonder if there is a pivot where they get to keep going but still avoid AI. There must be for a small consultancy.
>especially as we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that.
I intentionally ignored the biggest invention of the 21st century out of strange personal beliefs and now my business is going bankrupt
I don't think it's fair to call them "strange" personal beliefs
Yes, I find this a bit odd. AI is a tool - what specific part of it do you find so objectionable, OP? For me, I know they are never going to put the genie back in the bottle, and we will never get back the electricity spent on it, so I might as well use it. We finally got a pretty good Multivac we can talk to, and for me it usually gives the right answers back. It is a once-in-a-lifetime invention we get to enjoy and use. I was king of the AI haters, but around Gemini 2.5 it just became so good that if you are still hating or criticizing it, you aren't looking at it objectively anymore.
Everyone gets to make their own choices and take principled stances of their choosing. I don't find that persuasive as a "buy my course" pitch, though.
“we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that”
Nice to have the luxury of turning your nose up at money.
Man, I definitely feel this, being in the international trade business operating an export contract manufacturing company from China, with USA based customers. I can’t think of many shittier businesses to be in this year, lol. Actually it’s been pretty difficult for about 8 years now, given trade war stuff actually started in 2017, then we had to survive covid, now trade war two. It’s a tough time for a lot of SMEs. AI has to be a handful for classic web/design shops to handle, on top of the SMEs that usually make up their customer base, suffering with trade wars and tariff pains. Cash is just hard to come by this year. We’ve pivoted to focus more on design engineering services these past eight years, and that’s been enough to keep the lights on, but it’s hard to scale, it is just a bandwidth constrained business, can only take a few projects at a time. Good luck to OP navigating it.
> we won’t work on product marketing for AI stuff, from a moral standpoint
Can someone explain this?
Some folks have moral concerns about AI. They include:
* The environmental cost of inference in aggregate and training in particular is non-negligible
* Training is performed (it is assumed) on material whose creators never consented to its use for training. Some consider this akin to plagiarism or even theft.
* AI displaces labor, weakening the workers across all industries, but especially junior folks. This consolidates power into the hands of the people selling AI.
* The primary companies who are selling AI products have, at times, controversial pasts or leaders.
* Many products are adding AI where it makes little sense, and those systems perform poorly. Nevertheless, some companies shoehorn AI in everywhere, cheapening products across a range of industries.
* The social impacts of AI, particularly generative media and shopping in places like YouTube, Amazon, Twitter, Facebook, etc., are not well understood and could contribute to increased radicalization and Balkanization.
* AI is enabling an attention Gish-gallop in places like search engines, where good results are being shoved out by slop.
Hopefully you can read these and understand why someone might have moral concerns, even if you do not. (These are not my opinions, but they are opinions other people hold strongly. Please don't downvote me for trying to provide a neutral answer to this person's question.)
These points are so broad and multi-dimensional that one must really wonder whether they were looking for reasons to be concerned.
I'm not sure it's helpful to accuse "them" of bad faith, when "them" hasn't been defined and the post in question is a summary of reasons many individual people have expressed over time.
"Please don't downvote me for trying to provide a neutral answer to this person's question"
Please note that there are some accounts that, on principle, downvote any comment talking about downvoting.
Interesting how someone can clearly be brilliant in one area and totally have their head buried in the sand in another, and not even realize it.
> we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that
Although there’s a ton of hype in “AI” right now (and most products are over-promising and under-delivering), this seems like a strange hill to die on.
imo LLMs are (currently) good at 3 things:
1. Education
2. Structuring unstructured data
3. Turning natural language into code
From this viewpoint, it seems there is a lot of opportunity both to help new clients and to create more compelling courses for your students.
No need to buy the hype, but no reason to die from it either.
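To make point 2 concrete, here's a minimal sketch of pulling structured fields out of a free-text enquiry. The model name and the chosen fields are assumptions; any capable LLM endpoint would work just as well.

    # Minimal sketch: structure an unstructured note into JSON.
    # Assumes the OpenAI Python client; model and fields are examples.
    import json
    from openai import OpenAI

    client = OpenAI()

    def extract_enquiry(note: str) -> dict:
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            # Constrain the output to valid JSON.
            response_format={"type": "json_object"},
            messages=[{
                "role": "user",
                "content": "Extract name, company, and request as a JSON "
                           f"object from this note:\n{note}",
            }],
        )
        return json.loads(response.choices[0].message.content)

    print(extract_enquiry(
        "Hi, this is Dana from Acme Corp - we'd like a quote for a redesign."
    ))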
> imo LLMs are (currently) good at 3 things
Notice the phrase "from a moral standpoint". You can't argue against a moral stance by stating solely what is, because the question for them is what ought to be.
Really depends what the moral objection is. If it's "no machine may speak my glorious tongue", then there's little to be said; if it's "AI is theft", then you can maybe make an argument about hypothetical models trained on public domain text using solar power and reinforced by willing volunteers; if it's "AI is a bubble and I don't want to defraud investors", then you can indeed argue the object-level facts.
Indeed, facts are part of the moral discussion in ways you outlined. My objection was that just listing some facts/opinions about what AI can do right now is not enough for that discussion.
I wanted to make this point here explicitly because lately I've seen this complete erasure of the moral dimension from AI and tech, and to me that's a very scary development.
I think some people prefer living in reality
On this thread what people are calling “the market” is just 6 billionaire guys trying to hype their stuff so they can pass the hot potato to someone else right before the whole house of cards collapses.
It's very funny reading this thread and seeing the exact same arguments I saw five years ago for the NFT market and the metaverse.
All of this money is being funneled and burned away on AI shit that isn't even profitable nor has it found a market niche outside of enabling 10x spammers, which is why companies are literally trying to force it everywhere they can.
Previously: https://news.ycombinator.com/item?id=46070842
Interesting. I agree that this has been a hard year, the hardest in a decade. But the comparison with 2020 is just surprising. I mean, in 2020 crazy amounts of money were just thrown around left and right, no? For me, it was the easiest year of my career, when I basically did nothing and picked up money thrown at me.
Why would your company or business suddenly require no effort due to covid?
Too much demand, all of a sudden. Money got printed, and I went from near bankruptcy in mid-Feb 2020 to being awash with money by mid-June.
And it continued growing nonstop all the way through ~early Sep 2024, and it's been slowing down ever since, by now coming to an almost complete stop - to the point that I even fired all the sales staff, because they had been treading water with no calls, let alone deals, for half a year before being dismissed in mid-July this year.
I think it won't return - custom dev is done. The myth of "hiring coders to get rich" is over. No surprise, because it never really worked; sooner or later people had to realise it. I may check again in 2-3 years to see how the market is doing, but I'm not at all hopeful.
Switched into miltech where demand is real.
It’s ironic that Andy calls himself “ruthlessly pragmatic”, but his business is failing because of a principled stand in turning down a high volume of inbound requests. After reading a few of his views on AI, it seems pretty clear to me that his objections are not based in a pragmatic view that AI is ineffective (though he claims this), but rather an ideological view that they should not be used.
Ironically, while ChatGPT isn’t a great writer, I was even more annoyed by the tone of this article and the incredible overuse of italics for emphasis.
Yeah. For all the excesses of the current AI craze there's a lot of real meat to it that will obviously survive the hype cycle.
User education, for example, can be done in ways that don't even feel like gen AI and can drastically improve activation, e.g. recommending feature X based on activity Y, tailored to their use case.
If you won't even lean into things like this you're just leaving yourself behind.
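That kind of activation nudge doesn't even need an LLM at runtime. Here's a toy sketch of the rule-driven version; all feature names, activities, and thresholds are invented for illustration.

    # Toy sketch: suggest features based on observed activity counts.
    # Activities, features, and thresholds are all invented examples.
    RULES = {
        "exported_csv": ("scheduled_reports", 3),   # (feature, threshold)
        "manual_retry": ("auto_retry_policy", 5),
        "searched_logs": ("saved_log_queries", 10),
    }

    def recommend(activity_counts: dict) -> list:
        """Return features worth suggesting, given per-activity counts."""
        return [
            feature
            for activity, (feature, threshold) in RULES.items()
            if activity_counts.get(activity, 0) >= threshold
        ]

    print(recommend({"exported_csv": 4, "manual_retry": 2}))
    # -> ['scheduled_reports']

Gen AI earns its keep in tailoring the wording of the nudge to the user's actual use case, not necessarily in deciding when to show it.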
I agree that this year has been extremely difficult, but as far as I know, a large number of companies and individuals still made a fortune.
Two fundamental laws of nature: the strong prey on the weak, and the fittest survive.
So why is it that those who survive are not the strong preying on the weak, but rather the "fittest"?
Next year's development of AI may be even more astonishing, continuing to kill off large companies and small teams unable to adapt to the market. Only by constantly adapting can we survive in this fierce competition.