All these tools to build something, but nothing to build. I feel like I am part of a Pyramid Scheme where every product is about building something else, but nothing reaches the end user.
Note: nothing against fluid.sh, I am struggling to figure out something to build.
One of my first professional coding jobs was in 2007 when Facebook first introduced 'Facebook Apps'. I worked for a startup making a facebook app, and EVERY SINGLE app company had the same monetization strategy: Selling ads for other facebook apps.
So the lifecycle of an app would be:
1) Create your game/quiz/whatever app.
2) Pay a successful app $x per install, and get a bunch of app installs.
3) Put all sorts of scammy "get extra in game perks if you refer your friends" to try to become viral.
4) Hope to become big enough that people start finding you without having to pay for ads.
5) Sell ads to other facebook app startups to generate installs for them.
It was a completely circular economy. There was no product or income source other than the next layer of the pyramid.
Yes I remember those days! I joined a startup whose first product was a Facebook app in 2007. We were right around the corner from Facebook HQ on Forest and High, and we were alpha partners for the launch of Pages. We created a feature film streaming app (the learning was: no one watches 100-minute videos on Facebook). While we never intended to be a Facebook-app company, technically it was the first thing we launched.
Fast forward 18 years, and the company is going strong with millions of subscribers and distributing Oscar winning films such as Demi Moore’s The Substance.
For problems that can be solved with only a small amount of simple code, that is true. However, software can become very complex, and the larger/more complex the problem is, the more important software developers are. It quickly becomes easier to teach software developers enough of your domain than to teach domain experts software.
In a complex project the hard parts about software are harder than the hard parts about the domain.
I've seen the type of code electrical engineers write (at least as hard a domain as software). They can write code, but it isn't good.
That's true both ways though: if a theoretical physicist wants to display a model for a new theorem, it'd probably be easier for them to learn some Python or JS than for a software engineer to understand the theorems.
Whether this is the case is discoverable, at least in one direction. Reproducibility is known to be a problem in some of the sciences, for various reasons. Find a paper that includes its data and the software/methodology used for analysis, try to get it running and producing the same results, and evaluate the included software/methodology against whatever software quality standards you feel are necessary or appropriate.
Hard disagree that the hard parts of software are harder than the domain. I don’t know your story, skills, or domain, but this doesn’t match my experience, or that of others around me, at all.
Really depends on the domain. I've been in jobs where the domain was much harder than my job as a software engineer, but I've also been in jobs where I quickly got to understand the domain better than the domain experts, or at least parts of it. I believe this is not because I'm smart (I'm not), but because software engineering requires precise requirements, which requires unrelenting questioning and attention to details.
The ability to acquire domain knowledge quickly however, isn't exactly the same as the ability to develop complex software.
Maybe you and others around you are all in some form of engineering capacity? Because I have seen software everywhere from coffee shops, bicycle repairs, to K12 education - all of whom would hard disagree with you.
Web dev is low entry barrier and most web devs don’t need a very deep knowledge base.
Embedded work, low-level languages, and squeezing optimizations out of the OS/hardware require MUCH more specialized knowledge. Most four-year undergraduate Computer Science programs self-select for mathematically inclined students, who then learn how to read and absorb advanced mathematics and programming concepts.
There’s no hard limit preventing domain-expert autodidacts from picking up programming, but the deeper the required programming knowledge, the further apart the success rates of programmers and non-programmers will be.
Non programmers are more likely to be flexible to find less programming-specific methods to solve the overall problem, which I very much welcome. But I think LLM-based app development mostly just democratizes the entry into programming.
It is my experience that most of these business domain experts snore the moment you talk about anything related to the difficulties of creating software.
Yeah, I think the issue has more to do with the curiosity level of the participant rather than whether they are a business domain expert or a software engineering expert.
There’s a requisite curiosity necessary to cross the discomfort boundary into how the sausage is made.
Until a few months ago, domain experts who couldn't code would "make do" with some sort of Microsoft Excel Spreadsheet From Hell (MESFH), an unholy beast that would usually start small and then always grow up to become a shadow ERP (at best) or even the actual ERP (at worst).
The best part, of course, is that this mostly works, most of the time, for most businesses.
Now, the same domain experts -who still cannot code- will do the exact same thing, but AI will make the spreadsheet more stable (actual data modelling), more resilient (backup infra), more powerful (connect from/to anything), more ergonomic (actual views/UI), and generally easier to iterate on (a constructive yet adversarial approach to conflicting change requests).
Every single time I try to get a domain expert at $job to let me learn more about the domain, it goes nowhere.
My belief is that engineers should be the prime candidates for learning the domain, because it can positively influence product development. There are too many layers between engineers and the domain, IME.
I mostly agree, but I see programmers more as “language interpreters”. They can speak the computer’s language fluently and know enough about the domain to be able to explain it in some abstractions.
The beauty of LLMs is that they can quickly gather and distill the knowledge on both sides of that relationship.
"Are there more or fewer examples of successful companies in a given domain that leverage software to increase productivity than software companies which find success in said domain?"
Programming is not something you can teach to people who are not interested in it in the first place. This is why campaigns like "Learn to code" are doomed to fail.
Whereas (good) programmers strive to understand the domain of whatever problem they're solving. They're comfortable with the unknown, and know how to ask the right questions and gather requirements. They might not become domain experts, but can certainly learn enough to write software within that domain.
Generative "AI" tools can now certainly help domain experts turn their requirements into software without learning how to program, but the tech is not there yet to make them entirely self-sufficient.
So we'll continue to need both roles collaborating as they always have for quite a while still.
Hmm, I think that's more difficult than using these tools for creating software. If generated software doesn't compile, or does the wrong thing, you know there's an issue. Whereas if the LLM gives you seemingly accurate information that is actually wrong, you have no way of verifying it other than with a human domain expert. The tech is not reliable enough for either task yet, but software is easy to verify, whereas general information is not.
I’m now a year deep into my first job out of tech. There is a never-ending slew of problems where being able to code, especially now with AI, means you have wizard-like powers to help your coworkers.
My codebase is full of one-offs that slowly but surely converge towards cohesive/well-defined/reusable capabilities based on ‘real’ needs.
I’m now starting to pitch consulting to a niche to see what sticks. If the dynamic from the office holds (as I help them, capabilities compound) then I’ll eventually find something to call ‘a product’.
That made me remember the time, many years ago, when a friend of mine literally called me a wizard. He was working as a shift manager at a call center, and one of the most difficult tasks he kept ranting about was scheduling employees, who were not the most consistent bunch and had varied skillsets, while he had to meet very strict support availability requirements.
He kept ranting about what a b*tch of a problem that was every time we went out drinking, and one day something got into me, and I thought there must be some software that could help with this.
Surely there was, and I set up a server with an online web UI where every employee could put in when they were able to work, and the software figured out how to assign timeslots to cover requirements.
I thought it was a nice exercise in learning to administer a Linux server, but when I showed it to my friend, he looked me in the eye, told me I saved him a day of work every week, and called me a wizard :D
It occurred to me how natural a part of the programming profession it is to build, in a fixed amount of time, things that turn difficult and time-consuming tasks a human needed to do into something that essentially just happens on its own.
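The core of that scheduling tool can be sketched as a greedy assignment over declared availability. This is a hypothetical simplification (the original likely used an off-the-shelf package, and real shift scheduling adds skills, fairness, and labor rules, usually via a constraint solver):

```python
# Greedy shift-assignment sketch: cover each timeslot's required headcount
# using employees who declared themselves available. A toy illustration,
# not production scheduling logic.

def assign_shifts(availability, required):
    """availability: {employee: set of slots}, required: {slot: headcount}."""
    schedule = {slot: [] for slot in required}
    # Fill the scarcest slots first (fewest available candidates).
    for slot in sorted(required,
                       key=lambda s: sum(s in a for a in availability.values())):
        candidates = [e for e, slots in availability.items() if slot in slots]
        # Prefer employees with the fewest assignments so far, for fairness.
        candidates.sort(key=lambda e: sum(e in v for v in schedule.values()))
        schedule[slot] = candidates[:required[slot]]
    return schedule

if __name__ == "__main__":
    avail = {"ana": {"mon", "tue"}, "bob": {"mon"}, "cyn": {"tue", "wed"}}
    need = {"mon": 2, "tue": 1, "wed": 1}
    print(assign_shifts(avail, need))
```

Even this toy version captures why the tool felt like magic: the computer explores the assignment space in milliseconds where a human burns a day.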
The problem we have as software engineers (from an entrepreneur's pov) is that we mostly struggle with stuff that's removed from the client's problem.
I mean it in terms of owning the solution to a problem, being accountable/responsible for something working e2e not just the software or even the product - the service/experience of the customer that makes them want to give you money. Once you put on another hat - guess what - you'd probably be the star of some operations team or a great supervisor of some department. You would automate everything around you to a point others think you're the most capable person they've ever seen in that role.
In macroeconomics, you have aggregate production functions that represent output for a country or similar. Many of these functions have a parameter for technology that acts as a multiplier over the inputs, so the greater the measure of technology, the greater the output (e.g. Cobb-Douglas, Y = A · K^α · L^(1−α), where A is the technology multiplier). Quite a few of these also exhibit a characteristic where output drops if technology increases too fast. To illustrate, imagine a real-life scenario with a rapid evolution from home phones to cell phones to smartphones, at a rate faster than people can learn to make use of them, while money is also spent on each adoption, making the intermediary adoptions quite wasteful.
I think we see an aspect of this here: a lot of things we took for granted are changing, shared assumptions are being challenged, and it's a period in which we're all relearning things. To some extent, spending too much time diving into the current iteration of AI tooling might be for nothing if it gets invalidated by another sudden jump.
With all these new tools people are building, I can't help but feel they are building foundations on moving soil.
I am in the same boat, but I recently found I could also use these tools to reverse engineer stuff. For example, I purchased a label printer from China and was unsatisfied with the printing quality under Linux. So I "coded" a Go script to print via BLE instead of CUPS [1]. To do this I decompiled the Android app that comes with the printer, and instead of spending hours going through it, I just told an agentic AI to do it for me.
I am now so deep into the rabbit hole that I have made a version that runs entirely in the browser and an ESP32 version. I have now also taken the printer apart to find that the built in BLE is an external module and I could interface directly with the printer by replacing it with my own custom PCB...
I’m really enjoying these LLMs for making ad-hoc tooling / apps for myself. Things that I only need for a day or a week, that don’t need to work perfectly (I can work around bugs).
It’s really liberating. Instead of saying “gosh I wish there was an app that…” I just make the app and use it and move on.
Maybe have it build some toy apps just for fun! My wife and I were talking once about typing speed and challenged each other to a typing competition. The existing ones I found weren't very good and were riddled with ads, so I had Claude build one for us to use.
Or maybe ask yourself what do you like to do outside of work? maybe build an app or claude skill to help with that.
If you like to cook, maybe try building a recipe manager for yourself. I set up a repo to store all of my recipes in cooklang (similar to markdown), and set up claude skills to find/create/evaluate new recipes.
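As an aside, cooklang marks ingredients inline as @name{quantity}. A minimal extractor for just that one form is easy to sketch (the regex subset here is my assumption; the full cooklang spec also covers bare @ingredients, cookware, and timers):

```python
import re

# Minimal extractor for cooklang-style ingredient markup, where ingredients
# appear inline as @name{quantity}. Covers only that one form; the full
# cooklang spec has more syntax (bare @ingredients, #cookware, ~timers).

ING = re.compile(r"@([A-Za-z ]+?)\{([^}]*)\}")

def ingredients(recipe_text):
    # Empty braces mean an unspecified amount; call it "some".
    return [(name.strip(), qty or "some") for name, qty in ING.findall(recipe_text)]

if __name__ == "__main__":
    step = "Saute @red onion{1} in @olive oil{2%tbsp} until soft."
    print(ingredients(step))  # [('red onion', '1'), ('olive oil', '2%tbsp')]
```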
Building the toy apps might help you come up with ideas for larger things too.
Everybody wants to build infra: automate something which is known and well understood, hoping someone else will use it to solve the end user's problem, which is hard to understand, messy, and often highly contextual.
To summarize: Everyone wants to automate stuff. Most people do not want to touch boring, large problems.
I find myself building fun tools for myself and things that help with quality of life slightly, but I don’t need all this extra enterprise stuff for that. I actually find myself more likely to use something I built because I am proud of it, even if there is already something on the market that addresses my need.
This is not even AI-specific - it's pre-AI, and everyone has continued to try to create things that other people can use as a dependency, just at a much faster pace.
I've found it fun and fulfilling to write simulations that my childhood brain would have LOVED to see run.
Someone on HN pointed out how all the LLM companies are basically going “we made this thing, can y'all please find the billion dollar application for it?” and that really made a lot of things - namely why I’m frequently raising an eyebrow at these tools and the vague promises/demand that we use them - click into place.
Don’t get me wrong, I have found uses for various AI tools. But nothing consistent and daily yet, aside from AI audio repair tools and that’s not really the same thing.
Side note, been watching gold prospecting channels lately, there will be these dig sites/claims people go to, they'll do their thing, dig a hole, run it through some angled ramp water contraption... they get like nothing, it's the experience I suppose. But I was wondering what the owner gets from all these people showing up.
If information arbitrage is the game, then it's now a race to distribution channels and trust.
Also, what does society need? Smart workers and people who believe in the system... so where does that leave us? We need to make something that would better enable children to want to grow up in the world and participate. Otherwise we're doing nothing of value and are in a death spiral.
There are companies making a lot of money directly from software largely written by LLMs especially since Claude Code was released, but they aren't mentioning LLMs or AI in any marketing, client communications, or public releases. I'm at least very aware that we need to be able to retire before LLMs swamp or obsolete our niche, and don't want to invite competition.
Outside of tech companies, I think this is extremely common.
This type of software is mainly created to gain brand recognition, influence, or valuation, not to solve problems for humans. Its value is indirect and speculative.
These are the pets.com of the current bubble, and we'll be flooded by them before the damn thing finally pops.
Honestly, I may be an accelerationist in terms of poisoning the LLM well if it gets us sooner to an industry-wide consensus that LLM output is a significant security risk.
Hey HN,
My name is Collin and I'm working on fluid.sh (https://fluid.sh), the Claude Code for infrastructure.
What does that mean?
Fluid is a terminal agent that does work on production infrastructure (VMs, K8s clusters, etc.) by making sandbox clones of the infrastructure for AI agents to work on, allowing the agents to run commands, test connections, and edit files, and then generate infra-as-code, like an Ansible playbook, to be applied to production.
Why not just use an LLM to generate IaC?
LLMs are great at generating Terraform, OpenTofu, Ansible, etc. but bad at guessing how production systems work. By giving access to a clone of the infrastructure, agents can explore, run commands, test things before writing the IaC, giving them better context and a place to test ideas and changes before deploying.
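As a rough illustration of that "explore first, then emit IaC" idea (entirely hypothetical, not Fluid's actual API): record the commands that worked in the sandbox, then translate the recognizable ones into declarative Ansible-style tasks instead of raw shell:

```python
# Hypothetical sketch: map shell commands that succeeded in a sandbox into
# Ansible-task-shaped dicts. The command-to-module mapping here is a toy;
# none of this is Fluid's real implementation.

def commands_to_tasks(commands):
    tasks = []
    for cmd in commands:
        parts = cmd.split()
        if parts[:2] == ["apt-get", "install"]:
            pkgs = [p for p in parts[2:] if not p.startswith("-")]
            tasks.append({"name": f"Install {' '.join(pkgs)}",
                          "ansible.builtin.apt": {"name": pkgs, "state": "present"}})
        elif parts[:2] == ["systemctl", "restart"]:
            tasks.append({"name": f"Restart {parts[2]}",
                          "ansible.builtin.systemd": {"name": parts[2],
                                                      "state": "restarted"}})
        else:
            # Fall back to raw shell for anything we can't map declaratively.
            tasks.append({"name": f"Run: {cmd}", "ansible.builtin.shell": cmd})
    return tasks

if __name__ == "__main__":
    for t in commands_to_tasks(["apt-get install -y nginx",
                                "systemctl restart nginx"]):
        print(t)
```

The point of the sandbox is that the agent only promotes to IaC the steps it has actually verified, rather than guessing a playbook from training data.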
I got the idea after seeing how much Claude Code has helped me work on code. I thought, "I wish there was something like that for infrastructure", and here we are.
Why not just provide tools, skills, MCP server to Claude Code?
Mainly safety. I didn't want CC to SSH into a prod machine from where it is running locally (real problem!). I wanted to lock down the tools it can run to be only on sandboxes while also giving it autonomy to create sandboxes and not have access to anything else.
Fluid gives you live output of the commands being run (it's pretty cool) and does this via ephemeral SSH certificates. Fluid provides tools for creating IaC, and requires human approval for creating sandboxes on hosts with low memory/CPU and for accessing the internet or installing packages.
I greatly appreciate any feedback or thoughts you have, and I hope you get the chance to try out Fluid!
Why would you not put a description like this on your actual website? Your homepage does not explain anything about what this actually does. Are you really expecting infrastructure engineers to install your app with a bash command after only providing the following information?
Claude Code for infrastructure. Debug, act, and audit everything Fluid does on your infrastructure.
Create sandboxes from VMs, investigate, plan, execute, generate Ansible playbooks, and audit everything.
> By giving access to a clone of the infrastructure, agents can explore, run commands, test things before writing the IaC, giving them better context and a place to test ideas and changes before deploying.
And you thought the cost of burning tokens was high... let's amp it up by spinning up a bunch of cloud infra and letting the agents fumble about.
DevOps is my gig, I use agents extensively, I would never do this. This is so wasteful
An agent that runs things in remote sandboxes to set things up doesn’t really fit with Infrastructure as Code.
Lately I have been setting up Pulumi stacks in ephemeral AWS accounts managed by AWS Organizations and working on a Kubernetes cluster locally with Tilt. So far, Claude is pretty good with those things. It seems to have pretty good knowledge of Pulumi, basic knowledge of Tilt, and good knowledge of Kubernetes. It’s a little out of date on some things and needs reminding to RTFM, but it can get a lot done by itself. If it were a real point of friction, a cheat sheet (sorry, “skill”) would be enough to solve the majority of issues.
The example you provide seems to be more along the lines of SSHing into remote boxes and setting things up manually. That’s not really helpful when you want to work on repeatable infra. You try to distinguish yourself from generating Terraform etc., but actually that’s what’s valuable in my experience.
This allows the agent to make any changes in a production clone vs. agents running on a production VM. For example, you wouldn't want Claude editing crucial config with the chance it brings everything down, vs. letting it do so in a cloned environment where it can test whatever.
And how is this different from just pointing Terraformer at your existing infrastructure and rebuilding it in another account? That is assuming your company is standing complicated infra up by hand, and if they are, your entire "DevOps" team, or whoever is responsible, needs to be fired.
This is exciting, but I had to read and check everything twice to figure it out, as some have already commented. A strong feedback loop is the ultimate unlock for AI agents, and having twins is exactly the right approach.
> I didn't want CC to SSH into a prod machine from where it is running locally (real problem!). I wanted to lock down the tools it can run to be only on sandboxes while also giving it autonomy to create sandboxes and not have access to anything else.
This is already the modern way to run infra. If you're running simple apps, why are you even spinning up VMs? Container platforms make this so easy.
I've noticed a lot of LLM-based tools that are essentially this sort of thing. Just a slightly more specific prompt wrapper around the core capability that can already do the thing. It's so bad.
Lol, that does sound a little scary, but if it works, it works. Mainly I built this to prevent any chance that changes affect production. This is meant to be used at scale (say, hundreds of VMs) vs. one. From a safety perspective, running Claude Code with just a watchful eye would not fly in my environment, which is why I built something like this.
Yeah. The times I have let claude off the read-only leash, it's gone fine for me too (with stern warnings not to do anything stupid, and a close eye). But that's not really solving the same problem as this project, I guess. From what I can see this is using a safer and more reproducible method (and not k8s native, so it feels a little foreign to me).
Opus 4.5 is pretty good about following instructions to not do anything destructive, but Gemini 3 Flash actively disregards my advice and just starts running commands. Definitely recommend setting up default-readonly access for stuff like this and requiring some kind of out-of-band escalation process for when you need to do writes/destroys.
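A minimal version of that default-readonly gate can be sketched as a verb allowlist with an explicit approval step for anything mutating (a toy illustration; a real setup would enforce this with RBAC/IAM policies rather than a wrapper):

```python
# Toy sketch of a default-readonly gate for agent-issued kubectl-style
# commands: read verbs pass through, anything else requires out-of-band
# approval. Real deployments should enforce this with RBAC/IAM, not code.

READ_VERBS = {"get", "describe", "logs", "top", "explain"}

def gate(command, approved=False):
    verb = command.split()[1] if command.startswith("kubectl ") else None
    if verb in READ_VERBS:
        return "allow"
    if approved:
        return "allow (escalated)"
    return "deny: needs out-of-band approval"

if __name__ == "__main__":
    print(gate("kubectl get pods -A"))                       # allow
    print(gate("kubectl delete deploy web"))                 # denied by default
    print(gate("kubectl delete deploy web", approved=True))  # allow (escalated)
```

The key property is that the default path is read-only, so a model that "actively disregards advice" can look but not touch until a human escalates.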
I do the same. I was thinking about creating read-only kubeconfigs for it, to make sure it can't do bad stuff, but with a good SKILL.md it works perfectly.
Making clones of production isn't trivial. Is your app server clone going to connect to your production database? Is it going to spin up your whole stack? Seems a bit naive.
A better approach is to have AI understand how prod is built and make the changes there instead of having AI inspect it and figure out how to apply one off changes.
Clever solution. I think ops (like this) and observability will be pretty hot markets for a while. The code is quite cheap now, but actually running it and keeping it running still requires some amount of background. I've had a number of acquaintances ask me how they can get their vibe-coded app available for others to use.
I really like this idea. I do a lot of kubernetes ops with workloads I'm unfamiliar with (and not directly responsible for) and often give claude read access in order to help me debug things, including with things like a grafana skill in order to access the same monitoring tools humans have. It's saved me dozens of hours in the last months - and my job is significantly less frustrating now.
Your method of creating ansible playbooks makes _tons_ of sense for this kind of work. I typically create documentation (with claude) for things after I've worked through them (with claude) but playbooks is a very, very clever move.
I would say something similar but as an auditable, controllable kubernetes operator would be pretty welcome.
The real problem is just the volatility for the employees. Unless boards of directors/owners punish downtime, you risk a dark pattern where uptime becomes just a nice-to-have, because any expertise can seemingly be replaced with the next kid out of college plus Claude.
So you really need customers to react. And this isn't theoretical - people have already lost their jobs, and there are really, really good people available in the market right now.
First I’m personally never going to create infrastructure in the console. I’m going to use IAC from the get go. That means I can reproduce my infra on another account easily.
Second if I did come across an environment where this was already the case, there are tools for both Terraform and CloudFormation where you can reverse your infra to reproducible IAC.
After that, let Claude go wild in my sandbox account with a reasonably scoped IAM role with temporary credentials
> LLMs are great at generating Terraform, OpenTofu, Ansible, etc. but bad at guessing how production systems work.
Sorry, that last part is absolutely not the case from my experience. IaC also uses the API to inquire about the infrastructure, and there are existing import/export tools around it, so I’m not exactly sure what you are gaining by insisting on abandoning it. IaC also has the benefit of being reusable and commitable.
People try to write plausible copy, then come to HN to learn it's not often reality
It's largely because every devops situation is a snowflake and humans love to generalize. Turns out we don't all have the same problems. I haven't seen a startup that's been successful in devops at a level above HCL/YAML.
So this is a client/server thing to control KVM via libvirt and provision SSH keys to allow LLM agent access to the VMs?
How does the Ansible export work? Do the agents hack around inside the VM and then write a playbook from memory, or are all changes made via Ansible?
If Ansible playbooks are the artifact, what features does Fluid offer over just having agents iterate on an Ansible codebase and having Ansible drive provisioning?
- The website tells less than your comment here. I want to try but have no idea how destructive it can be.
- You need to add / mention how to do things in the RO mode only.
- Always explain destructive actions.
A few weeks ago I had to debug K8s on GCP GDC metal. Claude Code helped me tons, but... I had to recreate the whole cluster the next day because the agent ran too fast and deleted things it should not have deleted, or at least it should have told me the full impact. So some harness would be nice.
Hey! Yes I updated the website with some more of my comments.
- RO mode would be a good idea
- Agreed on explaining destructive actions. The only (possibly) destructive action is creating the sandbox on the host, and that asks the user's permission if the host doesn't have enough resources. Right now it supports VMs with KVM; it will not let you create a sandbox if the host doesn't have enough RAM or CPUs.
- The Kubernetes example is exactly what this is built for: giving AI access is dangerous, and there is always a chance of it messing something up. Thanks for the comment!
I'm already using LLMs to generate things, and I'm not sure what this adds. The demo isn't really doing it for me, but maybe I'm the wrong target for it. (What is running on that server? You don't know. Build your cattle properly!)
Maybe this is better for one man band devs trying to get something running without caring beyond, it's running.
But fluid lets AI investigate, explore, run commands, and edit files in a production-cloned sandbox. LLMs are great at writing IaC, but the LLMs won't get the right context from just generating an Ansible Playbook. They need a place to run commands safely and test changes before writing the IaC. Much like a human, hence the sandbox.
I use Pulumi for work, and their AI solution (Pulumi Neo) works amazingly well in troubleshooting cloud issues. It's informed of the cloud state and recent changes right from their platform, which is pretty amazing. Compared to using Azure CoPilot for the same purposes, Pulumi Neo was faster in generating responses, and these responses were actionable and solved my issues. CoPilot was laughably useless comparably.
This general idea is exactly why I love nix. The immutability of it is powerful. It can be useful for both running your agents in a certain environment AND your agents are useful at writing your nix config. I expand on this in a blog post here https://jamesst.one/posts/agents-nix
Great idea! A few weeks ago a non-technical client of mine decided to optimize his AWS infra bill with the help of AI. The costs went down significantly along with the application.
This is really cool. I don't want to think about infra, tbh, I just want to build. Is there a world where an on-prem version of this exists? I buy a box, install a shell script, and it just works?
Yo, fluid is built with on-prem in mind, specifically VMs. This is my initial use case for it. I am currently working on a remote version of fluid where, instead of a CLI tool, it would be more of a Codex/Claude Code-style app with a UI: you install a server and then command hundreds of agents at once to work on infrastructure. Is this what you had in mind?
This lets AI work on cloned production sandboxes vs. running on production instances. Yes, you can sandbox Claude Code on a production box, but it cannot safely test potentially production-breaking changes there. Sandboxes give AI this flexibility, allowing it to safely test changes and reproduce them via IaC like Ansible playbooks.
It should be. This is the least friction way to do so as server Linux operating systems still have not agreed on a common application format / package manager.
> It should be. This is the least friction way to do so as server Linux operating systems still have not agreed on a common application format / package manager.
Nowhere in your response did you mention security.
I love how the landing page is straight to the point and has zero marketing BS. It achieves the opposite of AI-written text, while still being polished.
this is not the way to do devops, we have IaC, reviews, and promotion for a reason
it's clear infra-level decisions are well beyond what LLMs / agents are capable of today. this area is too high risk; devops is slow to adopt new tooling because of its role and nature
this is still devops. we use cloud-init to setup the vm.
i run the underlying hardware infrastructure and we've automated the provisioning such that we have an api that can start/stop compute at will. even bare metal.
the point of this is that the current $/token model is awful, especially if you're using a lot of tokens. it should be $/minute. pay for what you use.
tokens are a rough proxy for usage over time, so I am paying for what I use, less than running a TPU pod myself, required for the models I use, i.e. I don't saturate the compute so it's cheaper to pay-go
1. No, I commented, it is not even possible to downvote a reply to your own comment, seems other people must disagree with what you said or how you said it
2. It's against HN guidelines to talk about your downvotes, especially making claims about who has done it
The most likely reason for your downvotes is promoting your own (incomplete) project under someone else's. What did you hope to bring to the conversation?
Many places have "dev", "test" "prod"... but IMHO you need "sandpit" as well.
From an ops point of view as orgs get big enough, dev wraps around to being prod-like... in the sense that it has the property that there's going to be a lot of annoyed people whose time you're wasting if you break things.
You can take the approach of having more guard rails and controls to stop people breaking things but personally I prefer the "sandpit" approach, where you have accounts / environments where anything goes. Like, if anyone is allowed to complain it's broken, it's not sandpit anymore. That makes them an ok place to let agents loose for "whole system" work.
I see tools like this as a sort of alternative / workaround.
Sandpit should be a personal (often local, if possible) dev environment. The reason people get mad about dev being broken for long periods of time is that they cannot use dev to test their changes if your code (that they depend on) is broken in dev for long periods of time.
There’s no sandpit, only prod and dev, and you’re not allowed to break prod. Your developers work in partitions of prod. Dev is used for DR and other infra testing.
Hey, I get it. I don't want LLMs on prod at all. I made this to let agents connect to production cloned sandboxes, not production itself. I hope this helps your concerns, but I understand either way. Lmk with any other questions.
For example, if you had an on-prem footprint with thousands of VMs, a production cloned sandbox would be a clone of a VM to let AI safely make changes, install packages, etc.
Yeah, working on the landing page. Feel free to ask any other questions!
All these tools to build something, but nothing to build. I feel like I am part of a Pyramid Scheme where every product is about building something else, but nothing reaches the end user.
Note: nothing against fluid.sh, I am struggling to figure out something to build.
One of my first professional coding jobs was in 2007 when Facebook first introduced 'Facebook Apps'. I worked for a startup making a facebook app, and EVERY SINGLE app company had the same monetization strategy: Selling ads for other facebook apps.
So the lifecycle of an app would be:
1) Create your game/quiz/whatever app.
2) Pay a successful app $x per install, and get a bunch of app installs.
3) Put all sorts of scammy "get extra in game perks if you refer your friends" to try to become viral.
4) Hope to become big enough that people start finding you without having to pay for ads.
5) Sell ads to other facebook app startups to generate installs for them.
It was a completely circular economy. There was no product or income source other than the next layer of the pyramid.
It didn't last long.
Yes I remember those days! I joined a startup whose first product was a Facebook app in 2007. We were right around the corner from Facebook HQ on Forest and High, and we were alpha partners for the launch of Pages. We created a feature film streaming app (the learning was: no one watches 100-minute videos on Facebook). While we never intended to be a Facebook-app company, technically it was the first thing we launched.
Fast forward 18 years, and the company is going strong, with millions of subscribers, distributing Oscar-winning films such as Demi Moore's The Substance.
What a beautiful microcosm of the attention economy.
Hate to break it to you, but it’s still going on, just outside the fb app api.
The recent YC -> Circle -> Coinbase -> YC comes to mind
What is this?
Aren't most ads in scummy mobile games ads for other scummy mobile games, to this very day?
Yes, but those apps also have scummy microtransactions, so at least there is SOME outside revenue entering the system.
That is the problem with software developers with expertise in software, but no deep domain knowledge outside the CS world.
It is my belief that, with some exceptions, it is almost always easier to teach a domain expert to code than it is to teach a software developer the domain.
For problems that can be solved with only a small amount of simple code, that is true. However, software can become very complex, and the larger/more complex the problem is, the more important software developers are. It quickly becomes easier to teach software developers enough of your domain than to teach domain experts software.
In a complex project the hard parts about software are harder than the hard parts about the domain.
I've seen the type of code electrical engineers write (at least as hard a domain as software). They can write code, but it isn't good.
That's true both ways though: if a theoretical physicist wants to display a model for a new theorem, it'd be probably easier for them to learn some python or js than for a software engineer to understand the theorems.
Whether this is the case is discoverable, at least in one direction. Reproducibility is known to be a problem in some of the sciences, for various reasons. Find a paper that includes its data and the software/methodology used for analysis, try to get it running and producing the same results, and evaluate the included software on whatever software quality standards you feel are necessary or appropriate.
Hard disagree with hard parts of software are harder than domain. I don’t know your story, skills, or domain. But this doesn’t match my experience and others around me at all.
Really depends on the domain. I've been in jobs where the domain was much harder than my job as a software engineer, but I've also been in jobs where I quickly got to understand the domain better than the domain experts, or at least parts of it. I believe this is not because I'm smart (I'm not), but because software engineering requires precise requirements, which requires unrelenting questioning and attention to details.
The ability to acquire domain knowledge quickly however, isn't exactly the same as the ability to develop complex software.
Maybe you and others around you are all in some form of engineering capacity? Because I have seen software everywhere from coffee shops, bicycle repairs, to K12 education - all of whom would hard disagree with you.
Not all kinds of programming are the same.
Web dev is low entry barrier and most web devs don’t need a very deep knowledge base.
Embedded and low-level work, using optimizations of the OS/hardware, requires MUCH more specialized knowledge. Most of the four-year undergraduate Computer Science program self-selects for mathematics-inclined students who then learn how to read and absorb advanced mathematics/programming concepts.
There’s no hard limit preventing domain-expert autodidacts from picking up programming, but the deeper the programming knowledge required, the more the distribution curves favor programmers over non-programmers.
Non programmers are more likely to be flexible to find less programming-specific methods to solve the overall problem, which I very much welcome. But I think LLM-based app development mostly just democratizes the entry into programming.
It is my experience that most of these business domain experts snore the moment you talk about anything related to the difficulties of creating software.
Yeah, I think the issue has more to do with the curiosity level of the participant rather than whether they are a business domain expert or a software engineering expert.
There’s a requisite curiosity necessary to cross the discomfort boundary into how the sausage is made.
Until a few months ago, domain experts who couldn't code would "make do" with some sort of Microsoft Excel Spreadsheet From Hell (MESFH), an unholy beast that would usually start small and then always grow up to become a shadow ERP (at best) or even the actual ERP (at worst).
The best part, of course, is that this mostly works, most of the time, for most businesses.
Now, the same domain experts -who still cannot code- will do the exact same thing, but AI will make the spreadsheet more stable (actual data modelling), more resilient (backup infra), more powerful (connect from/to anything), more ergonomic (actual views/UI), and generally more easy to iterate upon (constructive yet adversarial approach to conflicting change requests).
> AI will make the spreadsheet more stable
Hallucinations sure make spreadsheets nice and stable.
Every single time I try to get a domain expert at $job to let me learn more about the domain, it goes nowhere.
My belief is that engineers should be the prime candidates to be learning the domain, because it can positively influence product development. There are too many layers between engineers and the domain IME.
I mostly agree, but I see programmers more as “language interpreters”. They can speak the computer’s language fluently and know enough about the domain to be able to explain it in some abstractions.
The beauty of LLMs is that they can quickly gather and distill the knowledge on both sides of that relationship.
In practice, does that happen? Usually companies try to bring the best of both and build from there.
I wouldn’t argue how things historically worked, but rather where the LLM innovations suggest the trajectory will go.
This is interesting. Do you know of any examples of successful tech companies built by non-technical founders?
I think a more appropriate question would be:
"Are there more or less examples of successful companies in a given domain that leverage software to increase productivity than software companies which find success in said domain?"
Eh, this is the kind of pithy soundbite that sounds vaguely deep and intelligent but doesn't hold up.
In what domains have you had experience taking non programmers with domain knowledge and making them programmers?
That doesn't track at all IME.
Programming is not something you can teach to people who are not interested in it in the first place. This is why campaigns like "Learn to code" are doomed to fail.
Whereas (good) programmers strive to understand the domain of whatever problem they're solving. They're comfortable with the unknown, and know how to ask the right questions and gather requirements. They might not become domain experts, but can certainly learn enough to write software within that domain.
Generative "AI" tools can now certainly help domain experts turn their requirements into software without learning how to program, but the tech is not there yet to make them entirely self-sufficient.
So we'll continue to need both roles collaborating as they always have for quite a while still.
Conversely, good developers can now leverage LLM’s to master any domain.
Hmm, I think that's more difficult than using these tools for creating software. If generated software doesn't compile, or does the wrong thing, you know there's an issue. Whereas if the LLM gives you seemingly accurate information that is actually wrong, you have no way of verifying it, other than with a human domain expert. The tech is not reliable enough for either task yet, but software is easy to verify, whereas general information is not.
I want to make a business, but what is the business
It's way easier to raise for dev tools than domain tools right now.
Pretty much. I’m working on a few things with several people and I’m now constrained by their ability to find stuff to build.
I’ve been a year deep into my first job out of tech. There is a never-ending slew of problems where being able to code, especially now with AI, means you have wizard-like powers to help your coworkers.
My codebase is full of one-offs that slowly but surely converge towards cohesive/well-defined/reusable capabilities based on ‘real’ needs.
I’m now starting to pitch consulting to a niche to see what sticks. If the dynamic from the office holds (as I help them, capabilities compound) then I’ll eventually find something to call ‘a product’.
That made me remember that one time many years ago when a friend literally called me a wizard... He was working as a shift manager at a call center, and one of the most difficult tasks he kept ranting about was scheduling employees, who were not the most consistent bunch and had varied skillsets, yet he had to meet very strict support availability requirements.
He kept ranting about what a b*tch of a problem that was every time we went out drinking, and one day something got into me, and I thought there must be some software that could help with this.
Surely there was, and I set up a server with an online web UI where every employee could put in when they were able to work, and the software figured out how to assign timeslots to cover requirements.
I thought it was a nice exercise in learning to administer a Linux server, but when I showed it to my friend, he looked me in the eye, told me I saved him a day of work every week, and called me a wizard :D
It occurred to me how natural a part of the programming profession it is to make things, in fixed amounts of time, that turn difficult and time-consuming tasks a human needed to do into something that essentially just happens on its own.
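The "figure out how to assign timeslots" part of a story like that can be sketched in a few lines. This is a toy greedy version with made-up names and slots, not the original tool; a real scheduler would use a constraint solver to handle skills, coverage windows, and fairness:

```python
# Toy sketch: greedily assign employees to timeslots from stated availability.
def assign_shifts(availability, required):
    """availability: {employee: set of slots}; required: {slot: headcount}."""
    schedule = {slot: [] for slot in required}
    for slot, needed in required.items():
        for emp, free in availability.items():
            if len(schedule[slot]) >= needed:
                break  # slot fully staffed
            if slot in free:
                schedule[slot].append(emp)
    return schedule

avail = {"ana": {"mon_am", "mon_pm"}, "bo": {"mon_am"}, "cy": {"mon_pm"}}
need = {"mon_am": 2, "mon_pm": 1}
print(assign_shifts(avail, need))  # mon_am -> [ana, bo], mon_pm -> [ana]
```

In real deployments the extra constraints push you toward an ILP/CP solver, which is presumably what the off-the-shelf software used.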
The problem we have as software engineers (from an entrepreneur's pov) is that we mostly struggle with stuff that's removed from the client's problem.
I mean it in terms of owning the solution to a problem, being accountable/responsible for something working e2e not just the software or even the product - the service/experience of the customer that makes them want to give you money. Once you put on another hat - guess what - you'd probably be the star of some operations team or a great supervisor of some department. You would automate everything around you to a point others think you're the most capable person they've ever seen in that role.
Can I ask what do you do now?
In macroeconomics, you have aggregate production functions that represent the output of a country or similar. Many of these functions have a parameter for technology; it acts as a multiplier over inputs, so the greater the measure of technology, the greater the output. Quite a few of these also exhibit a characteristic where output drops if technology increases too fast. To illustrate, imagine a rapid evolution from home phones, to cell phones, to smartphones, at a rate faster than people can learn to make use of them, with money spent on the intermediate adoptions largely wasted.
I think we see an aspect of this here: a lot of things we took for granted are changing, shared assumptions are being challenged, and it's a period where we're all relearning. To some extent, spending too much time diving into the current iteration of AI tooling might be for nothing if it gets invalidated by another sudden jump.
With all these new tools people are building, I can't help but feel they are building foundations on moving soil.
With the industrial revolution extra demand for industrial overcapacity was created in the form of war.
After the war the US created extra demand in the form of consumerism.
China is creating extra demand for infrastructure overcapacity with its belt and road initiative.
I wouldn't underestimate the ability of the country to creatively create demand to counter oversupply.
Shovel market
https://substack-post-media.s3.amazonaws.com/public/images/6...
Talk to people.
There are an infinite number of problems to solve.
Deciding whether they’re worth solving is the hard part.
Are any of these people willing to fund an answer to these problems?
I am in the same boat, but I recently found I could also use these tools to reverse engineer stuff. For example, I purchased this label printer from China and was unsatisfied with the printing quality under Linux. So I "coded" a Go script to print via BLE instead of CUPS [1]. To do this I decompiled the Android app that comes with the printer, and instead of spending hours going through it I just told an agentic AI to do it for me.
I am now so deep into the rabbit hole that I have made a version that runs entirely in the browser and an ESP32 version. I have now also taken the printer apart to find that the built in BLE is an external module and I could interface directly with the printer by replacing it with my own custom PCB...
[1] https://sschueller.github.io/posts/making-a-label-printer-wo...
I’m really enjoying these LLMs for making ad-hoc tooling / apps for myself. Things that I only need for a day or a week, that don’t need to work perfectly (I can work around bugs).
It’s really liberating. Instead of saying “gosh I wish there was an app that…” I just make the app and use it and move on.
Maybe have it build some toy apps just for fun! My wife and I were talking once about typing speed and challenged each other to a typing competition. the existing ones I found weren't very good and were riddled with ads, so I had Claude build one for us to use.
Or maybe ask yourself what do you like to do outside of work? maybe build an app or claude skill to help with that.
If you like to cook, maybe try building a recipe manager for yourself. I set up a repo to store all of my recipes in cooklang (similar to markdown), and set up claude skills to find/create/evaluate new recipes.
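For the curious, cooklang is lightweight markup over plain recipe text. A made-up example (the recipe is invented; the syntax follows the cooklang spec: `@` marks ingredients, `#` cookware, `~` timers, `--` comments):

```cooklang
-- Hypothetical recipe, just to show the notation.
Crack @eggs{3} into a #bowl and whisk with @milk{100%ml}.
Melt @butter{1%tbsp} in a #frying pan{} over medium heat.
Pour in the eggs and stir for ~{2%minutes} until just set.
```

Because the source stays readable plain text, it diffs and versions nicely in git, which is what makes the repo-plus-skills setup work.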
Building the toy apps might help you come up with ideas for larger things too.
I feel the same; it's like there's more supply than demand somehow.
Everybody wants to build infra: automate something which is known and well understood, hoping someone else will use it to solve the end user's problem, which is hard to understand, messy, and often highly contextual.
To summarize: Everyone wants to automate stuff. Most people do not want to touch boring, large problems.
I find myself building fun tools for myself and things that help with quality of life slightly, but I don’t need all this extra enterprise stuff for that. I actually find myself more likely to use something I built because I am proud of it, even if there is already something on the market that addresses my need.
Nailed it!
This is not even AI-specific - it's pre-AI, and everyone has continued to try to create things that other people can use as a dependency, just at a much faster pace.
I've found it fun and fulfilling to write simulations that my childhood brain would have LOVED to see run.
When there’s a gold rush, sell shovels.
The point is that there's no gold. It's just a shovel rush.
build us a way out
building is the easy bit, more than ever.
selling it is the hard part, nothing new there
Someone on HN pointed out how all the LLM companies are basically going “we made this thing, can y'all please find the billion dollar application for it?” and that really made a lot of things - namely why I’m frequently raising an eyebrow at these tools and the vague promises/demand that we use them - click into place.
Don’t get me wrong, I have found uses for various AI tools. But nothing consistent and daily yet, aside from AI audio repair tools and that’s not really the same thing.
Sell the shovels!!
Side note, been watching gold prospecting channels lately, there will be these dig sites/claims people go to, they'll do their thing, dig a hole, run it through some angled ramp water contraption... they get like nothing, it's the experience I suppose. But I was wondering what the owner gets from all these people showing up.
They'll work for hours and end up with $4 of gold
Another option is to bring your coding skills to an industry not particularly known for using tech.
Steve Jobs used to say every product needs a killer feature
AI is a product in search of a killer feature
First, AGI was going to arrive any day now; GPT-5 had apparently shown intelligence.
Then they started adult chat with paying customers.
Isn't AGI Adult Group Interaction?
Yes, I didn't think of that. See, he is right. They did achieve AGI, just not the one he wanted.
If information arbitrage is the game, then it's now a race to distribution channels and trust.
Also, what does society need? Smart workers and people who believe in the system... so where does that leave us? We need to make something that would better enable children to want to grow up in the world and participate. Otherwise we're doing nothing of value and are in a death spiral.
Ask an LLM for suggestions on what to build
There are companies making a lot of money directly from software largely written by LLMs especially since Claude Code was released, but they aren't mentioning LLMs or AI in any marketing, client communications, or public releases. I'm at least very aware that we need to be able to retire before LLMs swamp or obsolete our niche, and don't want to invite competition.
Outside of tech companies, I think this is extremely common.
This type of software is mainly created to gain brand recognition, influence, or valuation, not to solve problems for humans. Its value is indirect and speculative.
These are the pets.com of the current bubble, and we'll be flooded by them before the damn thing finally pops.
Speak for yourself. I’ve been using Claude Code to build lots of customer facing things.
I, on the other hand, have a million ideas, and AI has allowed me to implement so many of them.
The reason:
> Safety. I didn't want CC to SSH into a prod machine
The call to action:
> curl -fsSL https://fluid.sh/install.sh | bash
The reason this is ironic: https://x.com/sheeki03/status/2018382483465867444
One just needs to put enough poison on the internet to get the malicious URL suggested by LLMs. What a time to be alive!
Honestly, I may be an accelerationist in terms of poisoning the LLM well if it gets us sooner to an industry-wide consensus that LLM output is a significant security risk.
Hey HN, my name is Collin and I'm working on fluid.sh (https://fluid.sh), the Claude Code for infrastructure.
What does that mean?
Fluid is a terminal agent that does work on production infrastructure (VMs, K8s clusters, etc.) by making sandbox clones of the infrastructure for AI agents to work on, allowing the agents to run commands, test connections, and edit files, and then generate infra-as-code, like an Ansible playbook, to be applied to production.
Why not just use an LLM to generate IaC?
LLMs are great at generating Terraform, OpenTofu, Ansible, etc. but bad at guessing how production systems work. By giving access to a clone of the infrastructure, agents can explore, run commands, test things before writing the IaC, giving them better context and a place to test ideas and changes before deploying.
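For a sense of the artifact involved, here is a minimal playbook of the sort an agent might emit after experimenting in a sandbox (a hand-written sketch, not actual Fluid output; host group and package are made up):

```yaml
# Hypothetical output: ensure nginx is installed and running on the web hosts.
- hosts: web
  become: true
  tasks:
    - name: Ensure nginx is installed
      ansible.builtin.package:
        name: nginx
        state: present
    - name: Ensure nginx is enabled and running
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

The point is that the reviewed, idempotent playbook is what touches production, not the agent's exploratory shell commands.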
I got the idea after seeing how much Claude Code has helped me work on code, I thought "I wish there was something like that for infrastructure", and here we are.
Why not just provide tools, skills, MCP server to Claude Code?
Mainly safety. I didn't want CC to SSH into a prod machine from where it is running locally (real problem!). I wanted to lock down the tools it can run to be only on sandboxes while also giving it autonomy to create sandboxes and not have access to anything else.
Fluid gives access to a live output of commands run (it's pretty cool) and does this via ephemeral SSH certificates. Fluid provides tools for creating IaC and requires human approval for creating sandboxes on hosts with low memory/CPU and for accessing the internet or installing packages.
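The ephemeral-certificate idea can be sketched with stock OpenSSH tooling (identities, principals, and lifetimes here are made up for illustration; this is not Fluid's actual implementation):

```shell
# Create a CA and an agent keypair (throwaway, passphrase-less for the demo).
ssh-keygen -t ed25519 -f sandbox_ca -N '' -C 'sandbox CA'
ssh-keygen -t ed25519 -f agent_key -N '' -C 'agent'
# Sign the agent's public key with the CA, valid for only 10 minutes:
ssh-keygen -s sandbox_ca -I agent-session -n root -V +10m agent_key.pub
# Inspect the resulting certificate: principals and validity window.
ssh-keygen -L -f agent_key-cert.pub
```

Sandbox hosts that trust the CA (via `TrustedUserCAKeys` in sshd_config) accept the cert until it expires, so access cleans itself up with no key rotation.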
I greatly appreciate any feedback or thoughts you have, and I hope you get the chance to try out Fluid!
Why would you not put a description like this on your actual website? Your homepage does not explain anything about what this actually does. Are you really expecting infrastructure engineers to install your app with a bash command after only providing the following information?
True. Tried to make it simpler but clearly not a good enough job!
It reads like a blog post, not a landing page
> By giving access to a clone of the infrastructure, agents can explore, run commands, test things before writing the IaC, giving them better context and a place to test ideas and changes before deploying.
And you thought the costs for burning tokens was high... let's amp it up by spinning up a bunch of cloud infra and let the agents fumble about.
DevOps is my gig, I use agents extensively, I would never do this. This is so wasteful
An agent that runs things in remote sandboxes to set things up doesn’t really fit with Infrastructure as Code.
Lately I have been setting up Pulumi stacks in ephemeral AWS accounts managed by AWS Organizations and working on a Kubernetes cluster locally with Tilt. So far, Claude is pretty good with those things. It seems to have pretty good knowledge of Pulumi, basic knowledge of Tilt, and good knowledge of Kubernetes. It’s a little out of date on some things and needs reminding to RTFM, but it can get a lot done by itself. If it were a real point of friction, a cheat sheet (sorry, “skill”) would be enough to solve the majority of issues.
The example you provide seems to be more along the lines of SSHing into remote boxes and setting things up manually. That’s not really helpful when you want to work on repeatable infra. You try to distinguish yourself from generating Terraform etc., but actually that’s what’s valuable in my experience.
So how is this different from deploying claude code on a VM and letting it run? You can sandbox it in any of the dozen ways already available.
What’s the differentiator?
This allows the agent to make any changes in a production clone vs agents running on a production VM. For example, you wouldn't want claude editing crucial config on the chance it brings everything down vs letting it do in a cloned environment where it can test whatever.
One allows middleman rent-seeking and the other does not so much.
And how is this different than just pointing Terraformer at your existing infrastructure and rebuilding it in another account? That is assuming your company is standing complicated infra up by hand, and if they are, your entire “DevOps” team or whoever is responsible needs to be fired.
This is exciting. But I had to read and check everything twice to figure it out, as some have already commented. A strong feedback loop is the ultimate unlock for AI agents, and having twins is exactly the right approach.
YOOO thanks niko! Currently reworking lots of wording to make it easier to understand!
You might want to remove that `.DS_Store` from the root of the repo and add `.DS_Store` to your global git ignore.
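For reference, the usual one-time fix (the `~/.gitignore_global` path is just a convention; any path works):

```shell
# Point git at a global ignore file and add the macOS clutter to it.
git config --global core.excludesFile ~/.gitignore_global
echo '.DS_Store' >> ~/.gitignore_global
# Then, inside the repo, untrack the already-committed copy without
# deleting it locally:
#   git rm --cached .DS_Store
```

After that, `.DS_Store` stays out of every repo on the machine, not just this one.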
> I didn't want CC to SSH into a prod machine from where it is running locally (real problem!). I wanted to lock down the tools it can run to be only on sandboxes while also giving it autonomy to create sandboxes and not have access to anything else.
This is already the modern way to run infra. If you're running simple apps, why are you even spinning up VMs? Container platforms make this so easy.
So... I already tell Claude Code to do this. Just run kubectl for me please and figure out why my helm chart is broken.
Scary? A little but it's doing great. Not entirely sure why a specialized tool is needed when the general purpose CLI is working.
I've noticed a lot of LLM-based tools that are essentially this sort of thing. Just a slightly more specific prompt wrapper around the core capability that can already do the thing. It's so bad.
That has been the case this entire time. The "ChatGPT-wrapper" startups were little more than a webapp frontend for ChatGPT with a clever prompt.
Lol, that does sound a little scary, but if it works it works. Mainly I built this to prevent any chance that changes affect production. It is meant to be used at scale (say, hundreds of VMs) rather than on one. From a safety perspective, running Claude Code with just a watchful eye would not fly in my environment, which is why I built something like this.
More power to you! Good luck!
Same. I’ve had good results with read only accounts / tokens and let the agent have at it. Also works with terraform, aws cli, etc.
One does not need a new/separate tool to do any of this, just include it in your agents instructions.
Yeah. The times I have let claude off the read-only leash, it's gone fine for me too (with stern warnings not to do anything stupid, and a close eye). But that's not really solving the same problem as this project, I guess. From what I can see this is using a safer and more reproducible method (and not k8s native, so it feels a little foreign to me).
Opus 4.5 is pretty good about following instructions to not do anything destructive, but Gemini 3 Flash actively disregards my advice and just starts running commands. Definitely recommend setting up default-readonly access for stuff like this and requiring some kind of out-of-band escalation process for when you need to do writes/destroys.
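One way to wire up default-readonly access in Kubernetes is a dedicated service account bound to the built-in `view` ClusterRole (names below are hypothetical; note that `view` deliberately excludes reading Secrets):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: agent-readonly
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: agent-readonly-view
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
  - kind: ServiceAccount
    name: agent-readonly
    namespace: default
```

The agent gets a kubeconfig for this service account only; writes and deletes then have to go through the normal out-of-band escalation path.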
In Zed I just have it auto approve everything, macOS will scream if "Zed" tries to escape the folder its in anyway.
I do this but make sure to only have readonly/nondestructive access. It's extremely cool how well it works.
I do the same. I was thinking about creating read-only kubeconfigs for him to make sure it can't do bad stuff but with a good SKILL.md, it works perfectly.
Him! That settles the Turing test debate.
I keep it read-only and GitOps-driven, and find it's really good and feels pretty safe to have it PR fixes. I run it with no permission checks.
Yeah, I'm telling it to use aws cli to spin up instances, configure them, start servers, read cw logs etc.
Making clones of production isn't trivial. Is your app server clone going to connect to your production database? Is it going to spin up your whole stack? Seems a bit naive.
A better approach is to have AI understand how prod is built and make the changes there instead of having AI inspect it and figure out how to apply one off changes.
Models are already very good at writing IaC.
Clever solution. I think ops (like this) and observability will be pretty hot markets for a while soon. The code is quite cheap now, but actually running it and keeping it running still requires some amount of background. I've had a number of acquaintances ask me how they can get their vibe coded app available for others to use.
I really like this idea. I do a lot of kubernetes ops with workloads I'm unfamiliar with (and not directly responsible for) and often give claude read access in order to help me debug things, including with things like a grafana skill in order to access the same monitoring tools humans have. It's saved me dozens of hours in the last months - and my job is significantly less frustrating now.
Your method of creating ansible playbooks makes _tons_ of sense for this kind of work. I typically create documentation (with claude) for things after I've worked through them (with claude) but playbooks is a very, very clever move.
Something similar, but as an auditable, controllable Kubernetes operator, would be pretty welcome.
The real problem is just the volatility for the employees. Unless the Board of Directors/owners punish downtime, you risk a dark pattern where uptime is just a nice-to-have and any expertise gets replaced with the next kid out of college + Claude.
So you really need customers to react. And this isn't theoretical - people have already lost their jobs and there's really, really good people in the market available right now.
Thanks! Kubernetes is the next infrastructure primitive I want to support, and I'm glad you like it. If you have any questions or ideas, lmk!
Is this a real product? This is a solved problem.
First, I’m personally never going to create infrastructure in the console. I’m going to use IaC from the get-go. That means I can reproduce my infra in another account easily.
Second, if I did come across an environment where this was already the case, there are tools for both Terraform and CloudFormation that can reverse your infra into reproducible IaC.
After that, let Claude go wild in my sandbox account with a reasonably scoped IAM role with temporary credentials
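"Reasonably scoped" might look roughly like this (services, actions, and the wildcard resource are purely illustrative; tighten to your sandbox account's needs):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "SandboxReadMostly",
      "Effect": "Allow",
      "Action": ["ec2:Describe*", "s3:Get*", "s3:List*"],
      "Resource": "*"
    }
  ]
}
```

Paired with short-lived credentials from `aws sts assume-role --duration-seconds 900`, the agent's access also expires on its own.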
> LLMs are great at generating Terraform, OpenTofu, Ansible, etc. but bad at guessing how production systems work.
Sorry, that last part is absolutely not the case in my experience. IaC also uses the API to inquire about the infrastructure, and there are existing import/export tools around it, so I’m not exactly sure what you gain by insisting on abandoning it. IaC also has the benefit of being reusable and committable.
People try to write plausible copy, then come to HN to learn it's not often reality
It's largely because every devops situation is a snowflake and humans love to generalize. Turns out we don't all have the same problems. I haven't seen a startup that's been successful in devops at a level above the HCL/YAML layer.
It always makes me smile when you get some random domain with good-looking CSS telling you: here... Just curl this script and execute it :)

So this is a client/server thing to control KVM via libvirt and provision SSH keys to allow LLM agent access to the VMs?
How does the Ansible export work? Do the agents hack around inside the VM and then write a playbook from memory, or are all changes made via Ansible?
If Ansible playbooks are the artifact, what features does Fluid offer over just having agents iterate on an Ansible codebase and letting Ansible drive provisioning?
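For context, the artifact in question would be an ordinary playbook. A hypothetical minimal sketch of what an agent-exported one might look like (hosts, tasks, and names are illustrative, not fluid's actual output):

```yaml
# Hypothetical export: what the agent observed itself doing in the sandbox,
# replayed as idempotent Ansible tasks.
- name: Reproduce sandbox changes
  hosts: web
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present
    - name: Raise worker_connections
      ansible.builtin.lineinfile:
        path: /etc/nginx/nginx.conf
        regexp: '^\s*worker_connections'
        line: "    worker_connections 4096;"
      notify: Reload nginx
  handlers:
    - name: Reload nginx
      ansible.builtin.service:
        name: nginx
        state: reloaded
```

Whether the agent makes all changes through Ansible or reconstructs a playbook after hacking around interactively is exactly the distinction the question above is asking about.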
Hey Collin!
Interesting idea. A few things:
- The website says less than your comment here. I want to try it but have no idea how destructive it can be.
- You need to add/mention how to run in a read-only (RO) mode.
- Always explain destructive actions.
A few weeks ago I had to debug K8s on GCP GDC bare metal. Claude Code helped me tons, but... I had to recreate the whole cluster the next day because the agent ran too fast and deleted things it should not have deleted, or at least it should have told me the full impact first. So some harness would be nice.
Hey! Yes, I updated the website with some more of my comments.
- RO mode would be a good idea.
- Agreed on explaining destructive actions. The only (possibly) destructive action is creating the sandbox on the host, and that asks the user's permission if the host doesn't have enough resources. Right now it supports VMs with KVM, and it will not let you create a sandbox if the host doesn't have enough RAM or CPUs.
- The Kubernetes example is exactly what this is built for: giving AI access is dangerous, and there is always a chance of it messing something up. Thanks for the comment!
agreed, the repo readme is far more informative than the website
Ops person here.
I'm already using LLMs to generate things and I'm not sure what this adds. The demo isn't really doing it for me, but maybe I'm the wrong target for it. (What is running on that server? You don't know. Build your cattle properly!)
Maybe this is better for one-man-band devs trying to get something running without caring beyond "it's running."
Hey no problem! I'll work on the demo more. I discuss this in my comment here: https://news.ycombinator.com/reply?id=46889704&goto=item%3Fi...
and on the website: https://fluid.sh
But fluid lets AI investigate, explore, run commands, and edit files in a production-cloned sandbox. LLMs are great at writing IaC, but they won't get the right context just by generating an Ansible playbook. They need a place to run commands safely and test changes before writing the IaC. Much like a human, hence the sandbox.
Every product needs a killer feature
This sounds like a uniquely good way to accidentally spend infinity money on AWS
I use Pulumi for work, and their AI solution (Pulumi Neo) works amazingly well in troubleshooting cloud issues. It's informed of the cloud state and recent changes right from their platform, which is pretty amazing. Compared to using Azure CoPilot for the same purposes, Pulumi Neo was faster in generating responses, and these responses were actionable and solved my issues. CoPilot was laughably useless comparably.
This general idea is exactly why I love nix. The immutability of it is powerful. It can be useful for both running your agents in a certain environment AND your agents are useful at writing your nix config. I expand on this in a blog post here https://jamesst.one/posts/agents-nix
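As a concrete illustration of that immutability point, pinning an agent's toolchain can be a few lines of nix. A minimal sketch; the package choices are arbitrary assumptions, not from the linked post:

```nix
# shell.nix: run the agent inside `nix-shell` so its toolchain is
# declared, reproducible, and trivially disposable.
{ pkgs ? import <nixpkgs> {} }:
pkgs.mkShell {
  packages = [ pkgs.ansible pkgs.kubectl pkgs.jq ];
}
```

The same file works in both directions: it constrains what the agent can touch, and it is small enough that the agent can safely propose edits to it.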
> curl -fsSL https://fluid.sh/install.sh | bash
what could go wrong..
Great idea! A few weeks ago a non-technical client of mine decided to optimize his AWS infra bill with the help of AI. The costs went down significantly along with the application.
This is really cool. I don't want to think about infra tbh, I just want to build. Is there a world where an on-prem version of this exists? I buy a box, install a shell script, and it just works?
Yo, fluid is built with on-prem in mind, specifically VMs. That's my initial use case for it. I am currently working on a remote version of fluid: instead of a CLI tool, it would be more of a Codex/Claude Code app with a UI, where you can install a server and then command hundreds of agents at once to work on infrastructure. Is this what you had in mind?
A small suggestion: all those 'v run_command' blocks in the example flow could show the command that was run.
It's pretty cool. What would be cooler is to have it as an MCP server... and then use Claude Code.
Profiles, hooks, and skills for CC will solve these concerns. CI/CD with manual approval + CC will work even better. Infra is code, same as anything else.
Isn't Claude Code for Infrastructure just...Claude Code?
Hey, thanks for the comment. I answer this question in more depth on the website https://fluid.sh or this comment: https://news.ycombinator.com/reply?id=46889704&goto=item%3Fi...
This lets AI work on cloned production sandboxes instead of running on production instances. Yes, you can sandbox Claude Code on a production box, but there it cannot safely test potentially production-breaking changes. Sandboxes give the AI that flexibility, letting it safely test changes and reproduce them via IaC like Ansible playbooks.
Can't we just use Claude Code straight up?
This will make some amazing memes. 'Sorry I caused a $100,000 bill. I've made the right changes this time to scale appropriately.'
Next month - 'Sorry I caused a $200,000 bill...'
An infrastructure tool's primary installation method should NOT be curl | sh
It should be. This is the least friction way to do so as server Linux operating systems still have not agreed on a common application format / package manager.
> It should be. This is the least friction way to do so as server Linux operating systems still have not agreed on a common application format / package manager.
Nowhere in your response did you mention security.
Unfortunately there is not a standardized way to securely install something.
or reproducibility
And after all that, the shell script only does
go install github.com/aspectrr/fluid.sh/fluid/cmd/fluid@latest
!
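For what it's worth, the usual mitigation is to download first, verify, and read before executing. A sketch of that flow, using a local file to stand in for the download (fluid.sh doesn't publish checksums as far as I can tell, so the out-of-band hash here is an assumption about what a publisher would ship):

```shell
set -euo pipefail

# Stand-in for: curl -fsSLo install.sh https://fluid.sh/install.sh
printf 'echo "installer ran"\n' > install.sh

# The publisher would ship this hash out-of-band (release notes, website).
expected=$(sha256sum install.sh | awk '{print $1}')

# Verify before executing; sha256sum -c exits nonzero on any mismatch.
echo "${expected}  install.sh" | sha256sum -c -

# Only now, after also reading the script, run it.
bash install.sh
```

This doesn't help if the publisher's site itself is compromised, but it does defeat a one-time tampered download and gives you a chance to actually read what you're about to run.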
Please at least write the README.md by yourself. It's excessively lengthy.
What's wrong with just using Claude Code for infrastructure? Works great tbh.
I wish, for my work it would be a safety nightmare. I left a comment on this topic. https://news.ycombinator.com/reply?id=46889704&goto=item%3Fi...
this makes sense. like giving AI a lab bench instead of just asking it to guess
This is the most plausible tool for vibe infra I can think of
I love how the landing page is straight to the point and has zero marketing BS. It achieves the opposite of AI-written text, while still being polished.
I'm working towards this for actual infrastructure, for serving up AI compute.
"install kimi 2.5 on a 4x mi300x vm and connect the endpoint to opencode, shut it down in 4 hours"
We're getting close.
this is not the way to do devops, we have IaC, reviews, and promotion for a reason
it's clear infra-level decisions are well beyond what LLMs/agents are capable of today; this area is too high-risk. devops is slow to adopt new tooling because of its role and nature
wow, you downvoted me.
this is still devops. we use cloud-init to set up the vm.
i run the underlying hardware infrastructure and we've automated the provisioning such that we have an api that can start/stop compute at will. even bare metal.
the point of this is that the current $/token model is awful, especially if you're using a lot of tokens. it should be $/minute. pay for what you use.
tokens are a rough proxy for usage over time, so I am paying for what I use, less than running a TPU pod myself, required for the models I use, i.e. I don't saturate the compute so it's cheaper to pay-go
> wow, you downvoted me.
1. No, I commented; it is not even possible to downvote a reply to your own comment. It seems other people must disagree with what you said or how you said it.
2. It's against HN guidelines to talk about your downvotes, especially making claims about who has done it
The most likely reason for your downvotes is promoting your own (incomplete) project under someone else's. What did you hope to bring to the conversation?
About 90% of HN is now AI shit at any given time. I can't fucking take this shit. Can you losers talk about anything else.
As are 95% of YC funded companies
https://docs.google.com/spreadsheets/d/1Uy2aWoeRZopMIaXXxY2E...
I don’t remember where I got this link from
Huge conflict of interest there huh
FUCK NO. Who in their right mind would let an LLM connect to prod?
Maybe at a greenfield startup. Where I work this idea wouldn't be entertained for a millisecond.
Many places have "dev", "test" "prod"... but IMHO you need "sandpit" as well.
From an ops point of view as orgs get big enough, dev wraps around to being prod-like... in the sense that it has the property that there's going to be a lot of annoyed people whose time you're wasting if you break things.
You can take the approach of having more guard rails and controls to stop people breaking things but personally I prefer the "sandpit" approach, where you have accounts / environments where anything goes. Like, if anyone is allowed to complain it's broken, it's not sandpit anymore. That makes them an ok place to let agents loose for "whole system" work.
I see tools like this as a sort of alternative / workaround.
Sandpit should be a personal (often local, if possible) dev environment. The reason people get mad about dev being broken for long periods of time is that they cannot use dev to test their changes if your code (that they depend on) is broken in dev for long periods of time.
Agreed on all points. Local loops are faster and safer wherever possible.
But particularly for devops / systems focused work, you lose too much "test fidelity" if you're not integrating against real services / cloud.
There’s no sandpit, only prod and dev, and you’re not allowed to break prod. Your developers work in partitions of prod. Dev is used for DR and other infra testing.
Well that’s just - dumb
Wanna elaborate?
Account vending machines, where every dev can spin up their own account, are a thing, and still under the control of some type of guardrails.
Hey, I get it. I don't want LLMs on prod at all. I made this to let agents connect to production cloned sandboxes, not production itself. I hope this helps your concerns, but I understand either way. Lmk with any other questions.
What’s a production cloned sandbox? Take my comment as feedback that the landing page is anaemic
For example, if you had an on-prem footprint with thousands of VMs, a production-cloned sandbox would be a clone of a VM that lets AI safely make changes, install packages, etc.
Yeah, working on the landing page. Feel free to ask any other questions!
why does it have to connect to prod in order to be useful?
I think you would be very surprised at a) how useful it would be and b) how lax prod can be depending on the company culture and stakes.