NathanFlurry a day ago

The #1 problem with Kubernetes is that it's not something that "Just Works." There's a very small subset of engineers who can stand up services on Kubernetes without having it fall over in production – not to mention actually running & maintaining a Kubernetes cluster on your own VMs.

In response, there's been a wave of "serverless" startups because the idea of running anything yourself has become understood as (a) a time sink, (b) incredibly error prone, and (c) very likely to fail in production.

I think a Kubernetes 2.0 should consider what it would look like to have a deployment platform that engineers can easily adopt and feel confident running themselves – while still keeping the core a small-ish orchestrator with strong primitives.

I've been spending a lot of time building Rivet to scratch my own itch of an orchestrator & deployment platform that I can self-host and scale trivially: https://github.com/rivet-gg/rivet

We currently advertise as the "open-source serverless platform," but I often think of the problem as "what does Kubernetes 2.0 look like." People are already adopting it to push the limits into things that Kubernetes would traditionally be good at. We've found the biggest strong point is that you're able to build roughly the equivalent of a Kubernetes controller trivially. This unlocks features like more complex workload orchestration (game servers, per-tenant deploys), multitenancy (vibe coding per-tenant backends, LLM code interpreters), metered billing per tenant, more powerful operators, etc.

  • stuff4ben a day ago

    I really dislike this take and I see it all the time. Also I'm old and I'm jaded, so it is what it is...

    Someone decides X technology is too heavy-weight and wants to just run things simply on their laptop because "I don't need all that cruft". They spend time and resources inventing technology Y to suit their needs. Technology Y gets popular and people add to it so it can scale, because no one runs shit in production off their laptops. Someone else comes along and says, "damn, technology Y is too heavyweight, I don't need all this cruft..."

    "There are neither beginnings nor endings to the Wheel of Time. But it was a beginning.”

    • rfrey a day ago

      I'm a fellow geriatric and have seen the wheel turn many times. I would like to point out, though, that most cycles bring something new, or at least make different mistakes, which is a kind of progress.

      A lot of times stuff gets added to simple systems because it's thought to be necessary for production systems, but as our experience grows we realize those additions were not necessary, were in the right direction but not quite right, were leaky abstractions, etc. Then when the 4-year-experienced Senior Developers reinvent the simple solution, those additions get stripped out - which is a good thing. When the new simple system inevitably starts to grow in complexity, it won't include the cruft that we now know was bad cruft.

      Freeing the new system to discover its own bad cruft, of course. But maybe also some new good additions, which we didn't think of the first time around.

      I'm not a Kubernetes expert, or even a novice, so I have no opinions on necessary and unnecessary bits and bobs in the system. But I have to think that container orchestration is a new enough domain that it must have some stuff that seemed like a good idea but wasn't, some stuff that seemed like a good idea and was, and lacks some things that seem like a good idea after 10 years of working with containers.

      • motorest 20 hours ago

        > I'm not a Kubernetes expert, or even a novice, so I have no opinions on necessary and unnecessary bits and bobs in the system. But I have to think that container orchestration is a new enough domain that it must have some stuff that seemed like a good idea but wasn't, some stuff that seemed like a good idea and was, and lacks some things that seem like a good idea after 10 years of working with containers.

        I've grown to learn that the bulk of the criticism directed at Kubernetes does not in reality reflect problems with Kubernetes. Instead, it underlines that the critics themselves are the problem, not Kubernetes. I mean, they mindlessly decided to use Kubernetes for tasks and purposes that made no sense, proceeded to be frustrated by the way they misuse it, and blame Kubernetes as the scapegoat.

        Think about it for a second. Kubernetes is awesome in the following scenario:

        - you have a mix of COTS bare metal servers and/or vCPUs that you have lying around and you want to use them as infrastructure to run your jobs and services,

        - you want to simplify the job of deploying said jobs and services to your heterogeneous ad-hoc cluster including support for rollbacks and blue-green deployments,

        - you don't want developers to worry about details such as DNS and networking and topologies.

        - you want to automatically scale up and down your services anywhere in your ad-hoc cluster without having anyone click a button or worry too much if a box dies.

        - you don't want to be tied to a specific app framework.

        If you take the ad-hoc cluster of COTS hardware out of the mix, odds are Kubernetes is not what you want. It's fine if you still want to use it, but odds are you have a far better fit elsewhere.

        • darkwater 18 hours ago

          > - you don't want developers to worry about details such as DNS and networking and topologies.

          Did they need to know this before Kubernetes? I've been in the trade for over 20 years and the typical product developer never cared a bit about it anyway.

          > - you don't want to be tied to a specific app framework.

          Yes and no. K8s (and docker images) does help you deploy different languages/frameworks more consistently, but the biggest factor against this is in the end still organizational rather than purely technical. (This in an average product company with average developers, not a super-duper SV startup with world-class top-notch talent where each dev is fluent in at least 4 different languages and stacks).

          • motorest 18 hours ago

            > Did they need to know this before Kubernetes?

            Yes? How do you plan to configure an instance of an internal service to call another service?

            > I've been in the trade for over 20 years and the typical product developer never cared a bit about it anyway.

            Do you work with web services? How do you plan to get a service to send requests to, say, a database?

            This is a very basic and recurrent use case. I mean, one of the primary selling points of tools such as Docker compose is how they handle networking. Things like Microsoft's Aspire were developed specifically to mitigate the pain points of this use case. How come you believe that this is not an issue?

            • darkwater 16 hours ago

              You just call some DNS that is provided by sysadmins/ops. The devs don't know anything about it.

              • haiku2077 14 hours ago

                I used to be that sysadmin, writing config to set that all up. It was far more labor intensive than today where as a dev I can write a single manifest and have the cluster take care of everything for me, including stuff like configuring a load balancer with probes and managing TLS certificates.
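
                To give a concrete sense of what I mean, here is a rough sketch of the TLS/load-balancing part of such a manifest. It assumes an nginx ingress controller and cert-manager are already installed in the cluster, and the app name and hostname are made up:

                    apiVersion: networking.k8s.io/v1
                    kind: Ingress
                    metadata:
                      name: my-app
                      annotations:
                        # cert-manager watches this annotation and provisions/renews the certificate
                        cert-manager.io/cluster-issuer: letsencrypt
                    spec:
                      ingressClassName: nginx
                      tls:
                        - hosts:
                            - my-app.example.com
                          secretName: my-app-tls
                      rules:
                        - host: my-app.example.com
                          http:
                            paths:
                              - path: /
                                pathType: Prefix
                                backend:
                                  service:
                                    name: my-app
                                    port:
                                      number: 80

                The probes live on the Deployment's container spec (readinessProbe/livenessProbe); the cluster wires the rest together.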

                • darkwater 14 hours ago

                  Nobody is denying that. But GP was saying that now with k8s developers don't need to know about the network. My rebuttal is that devs never had to do that. Now maybe even Ops people can ignore some of that part because many more things are automated or work out of the box. But the inner complexity of SDNs inside k8s is in my opinion higher than managing the typical star topology + L4 routing + L7 proxies you had to manage yourself back in the day.

                  • dilyevsky 4 hours ago

                    Devs never had to do that because Moore's Law was still working, the internet was relatively small, and so most never had to run their software on more than one machine outside of some scientific use-cases. Different story now.

                  • motorest 14 hours ago

                    > But GP was saying that now with k8s developers don't need to know about the network. My rebuttal is that devs never had to do that.

                    The only developers who never had to know about the network are those who do not work with networks.

                  • otterley 12 hours ago

                    I think a phone call analogy is apt here. Callers don’t have to understand the network. But they do have to understand that there is a network; they need to know to whom to address a call (i.e., what number to dial); and they need to know what to do when the call doesn’t go through.

        • ethbr1 15 hours ago

          > Kubernetes is awesome in the following scenario: [...]

          Ironically, that looks a lot like when k8s is managed by a dedicated infra team / cloud provider.

          Whereas in most smaller shops that erroneously used k8s, management fell back on the same dev team also trying to ship a product.

          Which I guess is reasonable: if you have a powerful enough generic container orchestration system, it's going to have enough configuration complexity to need specialists.

          (Hence why the first wave of not-k8s were simplified k8s-on-rails, for common use cases)

        • louiskottmann 20 hours ago

          I suspect it's a learning thing.

          Which is a shame really because if you want something simple, learning Service, Ingress and Deployment is really not that hard and rewards years of benefits.

          Plenty of PaaS providers will run your cluster for cheap so you don't have to maintain it yourself, like OVH.

          It really is an imaginary issue with terrible solutions.

        • mgkimsal 16 hours ago

          > they mindlessly decided to use Kubernetes for tasks and purposes that made no sense...

          Or... were instructed that they had to use it, regardless of the appropriateness of it, because a company was 'standardizing' all their infrastructure on k8s.

      • overfeed 21 hours ago

        > it won't include the cruft that we now know was bad cruft.

        There's no such thing as "bad cruft" - all cruft is features you don't use but that are (or were) in all likelihood critical to someone else's workflow. Projects transform from minimal and lightning fast to bloated, one well-reasoned PR at a time; someone will try to use a popular project and figure "this would be perfect, if only it had feature X or supported scenario Y", multiplied by a few thousand PRs.

    • NathanFlurry a day ago

      I hope this isn't the case here with Rivet. I genuinely believe that Kubernetes does a good job for what's on the tin (i.e. container orchestration at scale), but there's an evolution that needs to happen.

      If you'll entertain my argument for a second:

      The job of someone designing systems like this is to decide what are the correct primitives and invest in building a simple + flexible platform around those.

      The original cloud primitives were VMs, block devices, LBs, and VPCs.

      Kubernetes became popular because it standardized primitives (pods, PVCs, services, RBAC) that containerized applications needed.

      Rivet's taking a different approach of investing in three different primitives based on how most organizations deploy their applications today:

      - Stateless Functions (a la Fluid Compute)

      - Stateful Workers (a la Cloudflare Durable Objects)

      - Containers (a la Fly.io)

      I fully expect to raise a few hackles claiming these are the "new primitives" for modern applications, but our experience shows it's solving real problems for real applications today.

      Edit: Clarified "original cloud primitives"

      • motorest a day ago

        > Rivet's taking a different approach of investing in three different primitives based on how most organizations deploy their applications today:

        I think your take only reflects buzzword-driven development, and makes no sense beyond that point. A "stateless function" is at best a constrained service which supports a single event handler. What does that buy you over plain old vanilla Kubernetes deployments? Nothing.

        To make matters worse, it doesn't seem that your concept was thought all the way through. I mean, two of your concepts (stateless functions and stateful workers) have no relationship with containers. Cloudflare has been telling everyone who listens for years that they based their whole operation on tweaking the V8 engine to let multiple tenants run their code in as many V8 isolates as they want and need. Why do you think you need containers to run a handler? Why do you think you need a full blown cluster orchestrating containers just to run a function? Does that make sense to you? It sounds like you're desperate to shoehorn the buzzword "Kubernetes" next to "serverless" in a way that serves absolutely no purpose beyond being able to ride a buzzword.

        • eddythompson80 21 hours ago

          I don't disagree with the overall point you're trying to make. However, I'm very familiar with the type of project this is (seeing as I implemented a similar one at work 5 years ago), so I can answer some of your questions regarding "How does one arrive at such an architecture".

          > Why do you think you need containers to run a handler?

          You don't, but plenty of people don't care and ask for this shit. This is probably another way of saying "buzzword-driven" as people ask for "buzzwords". I've heard plenty of people say things like

                 We're looking for a container native platform
                 We're not using containers yet though.
                 We were hoping we can start now, and slowly containerize as we go
          
          or

                 I want the option to use containers, but there is no business value in containers for me today. So I would rather have my team focus on the code now, and do containers later
          
          
          These are actual real positions by actual real CTOs commanding millions of dollars in potential contracts if you just say "ummm, sure.. I guess I'll write a Dockerfile template for you??"

          > Why do you think you need a full blown cluster orchestrating containers just to run a function?

          To scale. You need to solve the multi-machine story. Your system can't be a single node system. So how do you solve that? You either roll up your sleeves and go learn how Kafka or Postgres does it for their clusters, or you let Kubernetes do most of that hard work and deploy your "handlers" on it.

          > Does that make sense to you?

          Well... I don't know. These types of systems (of which I have built 2) are extremely wasteful and bullshit by design. A design that there will never be a shortage of demand for.

          It's a really strange pattern too. It has so many gotchas on cost, waste, efficiency, performance, code organization, etc. Whenever you look, whoever built these things either has a system that's very limited in functionality, or they have slowly reimplemented what a "Dockerfile" is, but "simpler", you know. It's "simple" because they know the ins and outs of it.

          • motorest 20 hours ago

            > You don't, but plenty of people don't care and ask for this shit. This is probably another way of saying "buzzword-driven" as people ask for "buzzwords".

            That's a fundamental problem with the approach OP is trying to sell. It's not solving any problem. It tries to sell a concept that is disconnected from technologies and real-world practices, requires layers of tech that solve no problem nor have any purpose, and doesn't even simplify anything at all.

            I recommend OP puts aside 5 minutes to go through Cloudflare's docs on Cloudflare Workers that they released around a decade ago, and get up to speed on what it actually takes to put together stateless functions and durable objects. Dragging Kubernetes to the problem makes absolutely no sense.

            • kentonv 14 hours ago

              Where did Nathan say he's using Kubernetes? I think I missed something. His comment describes a new alternative to Kubernetes. He's presenting stateless functions and stateful actors as supplementing containers. He knows all about Cloudflare Workers -- Rivet is explicitly marketed as an alternative to it.

              It feels like you didn't really read his comment yet are responding with an awful lot of hostility.

        • conradev 20 hours ago

          I currently need a container if I need to handle literally anything besides HTTP

          • motorest 18 hours ago

            > I currently need a container if I need to handle literally anything besides HTTP

            You don't. A container only handles concerns such as deployment and configuration. Containers don't speak HTTP either: they open ports and route traffic at an OSI layer lower than HTTP's.

            • conradev 11 hours ago

              Yes! All I was trying to say:

              Containers can contain code which open arbitrary ports using the provided kernel interface whereas serverless workers cannot. Workers can only handle HTTP using the provided HTTP interface.

              I don’t need a container, sure, I need a system with a network sockets API.

              • mdaniel 8 hours ago

                FWIW, Lambda takes the opposite of your assertion: there are function entrypoints and the HTTP or gRPC or Stdin is an implementation detail; one can see that in practice via the golang lambda "bootstrap" shim <https://pkg.go.dev/github.com/aws/aws-lambda-go@v1.49.0/lamb...> which is invoked by the Runtime Interface Emulator <https://github.com/aws/aws-lambda-runtime-interface-emulator...>

                I don't have the links to Azure's or GCP's function emulation framework, but my recollection is that they behave similarly, for similar reasons

                • conradev 6 hours ago

                  Oh yes! I was thinking about the V8 isolate flavor of stateless functions (Cloudflare, Fastly, etc). I had forgotten about the containerized Linux microVM stateless functions (Lambda, Cloud Run, etc). They have everything, and my favorite use is https://github.com/stanfordsnr/gg

                  Funnily enough, the V8 isolates support stdio via WASM now

      • lovehashbrowns a day ago

        I think it’s great that kubernetes standardized primitives, and IMO its best “feature” is its declarative nature and how easy it is to figure out what other devs did for an app without digging through documentation. It’s the easiest thing to go through a cluster and “reverse engineer” what’s going on. One of the legacy apps I’m migrating to kubernetes right now has like 20 different deployment scripts that all do different things to get a bunch of drupal multi sites up and running correctly, whereas the kubernetes equivalent is a simple deployment helm chart where the most complicated component is the dockerfile. How does Rivet handle this? If I give 100 devs the task of deploying an app there, do they get kinda fenced into a style of development that’s then simple to “reverse engineer” by someone familiar with the platform?

    • adrianmsmith a day ago

      It’s also possible for things to just be too complex.

      Just because something’s complex doesn’t necessarily mean it has to be that complex.

      • mdaniel a day ago

        IMHO, the rest of that sentence is "be too complex for some metric within some audience"

        I can assure you that trying to reproduce kubernetes with a shitload of shell scripts, autoscaling groups, cloudwatch metrics, and hopes-and-prayers is too complex for my metric within the audience of people who know Kubernetes

      • wongarsu a day ago

        Or too generic. A lot of the complexity is from trying to support all use cases. For each new feature there is a clear case of "we have X happy users, and Y people who would start using it if we just added Z". But repeat that often enough and the whole thing becomes so complex and abstract that you lose those happy users.

        The tools I've most enjoyed (including deployment tools) are those with a clear target group and vision, along with leadership that rejects anything that falls too far outside of it. Yes, it usually doesn't have all the features I want, but it also doesn't have a myriad of features I don't need

      • motorest a day ago

        > It’s also possible for things to just be too complex.

        I don't think so. The original problem that the likes of Kubernetes solves is still the same: set up a heterogeneous cluster of COTS hardware and random cloud VMs to run and automatically manage the deployment of services.

        The problem, if there is any, is that some people adopt Kubernetes for something Kubernetes was not designed to do. For example, do you need to deploy and run a service in multiple regions? That's not the problem that Kubernetes solves. Do you want to autoscale your services? Kubernetes might support that, but there are far easier ways to do that.

        So people start to complain about Kubernetes because they end up having to use it for simple applications such as running a single service in a single region from a single cloud provider. The problem is not Kubernetes, but the decision to use Kubernetes for an application where running a single app service would do the same job.

      • supportengineer a day ago

        Because of promo-driven, resume-driven culture, engineers are constantly creating complexity. No one EVER got a promotion for creating LESS.

    • colechristensen a day ago

      I've also been through the wheel of complexity a few times and I think the problem is different: coming up with the right abstraction is hard and generations of people repeatedly make the same mistakes even though a good abstraction is possible.

      Part of it comes from new generations not understanding the old technology well enough.

      Part of it comes from the need to remake some of the most base assumptions, but nobody has the guts to redo Posix or change the abstractions available in libc. Everything these days is a layer or three of abstractions on top of unix primitives coming up with their own set of primitives.

    • naikrovek a day ago

      Just write the stuff you need for the situation you’re in.

      This stupid need we have to create general purpose platforms is going to be the end of progress in this industry.

      Just write what you need for the situation you’re in. Don’t use kubernetes and helm, use your own small thing that was written specifically to solve the problem you have; not a future problem you might not have, and not someone else’s problem. The problem that you have right now.

      It takes much less code than you think it will, and after you’ve done it a few times, all other solutions look like enormous Rube Goldberg machines (because that’s what they are, really).

      It is 1/100th of the complexity to just write your own little thing and maintain it, compared to running things in Kubernetes and maintaining that monster.

      I’m not talking about writing monoliths again. I’m talking about writing only the tiny little bits of kubernetes that you really need to do what you need done, then deploying to that.

      • eddythompson80 20 hours ago

        > I’m not talking about writing monoliths again. I’m talking about writing only the tiny little bits of kubernetes that you really need to do what you need done, then deploying to that.

        Don't limit yourself like that. A journey of a thousand miles begins with a single step. You will have your monolith in no time

        Re-implement the little bits of kubernetes you need here and there. A script here, an env var there, a cron or a daemon to handle things. You'll have your very own marvelous creation in no time. Which is usually the perfect time to jump to a different company, or replace your 1.5 year old "legacy system". Best thing about it? No one really understands it but you, which is really all that matters.

        • naikrovek 9 hours ago

          You and I do things differently I guess.

          The things like this that I write stay small. It is when others take them over from me that those things immediately bloat and people start extending them with crap they don’t need because they are so used to it that they don’t see it as a problem.

          I am allergic to unnecessary complexity and I don’t think anyone else that I have ever worked with is. They seem drawn to it.

    • RattlesnakeJake a day ago

      See also: JavaScript frameworks

      • danudey a day ago

        Rivet may be the best of both worlds, because it's not only another complex project built to simplify the complexity of doing complex things, but you also get to have to write all your management in TypeScript.

  • themgt a day ago

    The problem Kubernetes solves is "how do I deploy this" ... so I go to Rivet (which does look cool) docs, and the options are:

    * single container

    * docker compose

    * manual deployment (with docker run commands)

    But erm, realistically how is this a viable way to deploy a "serverless infrastructure platform" at any real scale?

    My gut response would be ... how can I deploy Rivet on Kubernetes, either in containers or something like kube-virt, to run this serverless platform across a bunch of physical/virtual machines? How is docker compose a better, more reliable/scalable alternative to Kubernetes? So alternatively you sell a cloud service, but ... that's not a Kubernetes 2.0. If I was going to self-host Rivet I'd convert your docs so I could run it on Kubernetes.

    • NathanFlurry a day ago

      Our self-hosting docs are very rough right now – I'm fully aware of the irony given my comment. It's on our roadmap to get them up to snuff within the next few weeks.

      If you're curious on the details, we've put a lot of work to make sure that there's as few moving parts as possible:

      We have our own cloud VM-level autoscaler that's integrated with the core Rivet platform – no k8s or other orchestrators in between. You can see the meat of it here: https://github.com/rivet-gg/rivet/blob/335088d0e7b38be5d029d...

      For example, Rivet has an API to dynamically spin up a cluster on demand: https://github.com/rivet-gg/rivet/blob/335088d0e7b38be5d029d...

      Once you start the Rivet "seed" process with your API key, everything from there is automatic.

      Therefore, self-hosted deployments usually look like one of:

      - Plugging your cloud API token into Rivet for autoscaling (recommended)

      - Fixed # of servers (hobbyist deployments that were manually set up, simple Terraform deployments, or bare metal)

      - Running within Kubernetes (usually because it depends on existing services)

  • kachapopopow a day ago

    This is simply not true, maintaining a k3s (k8s has a few gotchas) cluster is very easy, especially with k3s auto upgrade, as long as you have proper eviction rules (maybe pod disruption). Ceph can be tricky, but you can always opt for lemon or longhorn which are nearly zero maintenance.

    There's thousands of helm charts available that allow you to deploy even the most complicated databases within a minute.

    Deploying your own service is also very easy as long as you use one of the popular helm templates.

    Helm is by no means perfect, but it's great when you set it up the way you want. For example I have full code completion for values.yaml by simply having "deployment" charts which bundle the application database(s) and application itself into a single helm chart.
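
    Roughly what one of those "deployment" charts looks like, as a sketch with made-up names and versions:

        # Chart.yaml of an umbrella "deployment" chart
        apiVersion: v2
        name: my-app-deployment
        version: 0.1.0
        dependencies:
          - name: postgresql              # the application database
            version: "15.x.x"
            repository: https://charts.bitnami.com/bitnami
          - name: my-app                  # the application's own chart
            version: "1.2.3"
            repository: https://charts.example.com

    One way to get the values.yaml completion is to ship a values.schema.json next to it, which Helm validates against and YAML-aware editors can pick up.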

    You can't just "jump into" kubernetes like you can with many serverless platforms, but spending a week banging your head to possibly save hundreds of thousands in a real production environment is a no-brainer to me.

    • darqis 17 hours ago

      But this also isn't true. Everything in k8s has caveats. You can't deploy a Percona MySQL DB on ARM for instance. Various operators have various issues which require manual intervention. It's all pretty much a clusterfuck. Then debugging why a service works locally with a systemd service but not on k8s is also time intensive and difficult. And the steadily changing features and frequent version bumps. It's a full time job. And many helm charts aren't that great to begin with.

      And what about when a deployment hangs and can't be deleted but still allocates resources? This is a common issue.

      • kachapopopow 9 hours ago

        I use bitnami helm charts exclusively and they have never failed me. ARM is a caveat in itself, but I had a really good experience with k3s.

    • nabeards 17 hours ago

      Per the K3s web site and their docs, they don’t call out that it’s good for bare metal production. That tells me it’s not built for heavy workloads, and is instead for dev environments and edge compute.

      • kachapopopow 9 hours ago

        The default k3s configuration is not scalable which is especially bad for etcd.

        Etcd can be replaced with postgres, which solves the majority of issues, but does require self-management. I've seen k3s clusters with 94 nodes chug away just fine in the default configuration though.

    • anentropic 16 hours ago

      I think all the caveats and choices in your opening sentence rather undercut the idea that the parent comment's point "simply" isn't true...

      • kachapopopow 9 hours ago

        My point is that you can choose to overcomplicate it.

  • jeswin a day ago

    If you're using k8s as a service where you've outsourced all of the cluster maintenance (from any of the cloud vendors), which part do you see as super complex? The configuration and specs are vast, but you might only need to know a very small subset of it to put services into production.

    You can deploy to k8s with configuration that's just a little more complex than docker-compose. Having said that, perhaps for a majority of apps even that "a little more" is unnecessary - and docker-compose was what the long tail actually wanted. That docker-compose didn't get sustained traction is unfortunate.
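
    For a sense of scale, here's a minimal sketch of what I mean (image, port and names are placeholders):

        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: web
        spec:
          replicas: 2
          selector:
            matchLabels:
              app: web
          template:
            metadata:
              labels:
                app: web
            spec:
              containers:
                - name: web
                  image: registry.example.com/web:1.0.0   # placeholder image
                  ports:
                    - containerPort: 8080
                  readinessProbe:            # keep the pod out of rotation until it answers
                    httpGet:
                      path: /healthz
                      port: 8080
        ---
        apiVersion: v1
        kind: Service
        metadata:
          name: web
        spec:
          selector:
            app: web
          ports:
            - port: 80
              targetPort: 8080

    That's one `kubectl apply -f` away from roughly what a docker-compose service gives you, plus restarts, scaling and rolling updates.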

    • motorest a day ago

      > If you're using k8s as a service where you've outsourced all of the cluster maintenance (from any of the cloud vendors), which part do you see as super complex?

      This point bears repeating. OP's opinion and baseless assertions contrast with reality. Even with COTS hardware, nowadays we have Kubernetes distributions which are trivial to set up and run, to the point that they only require installing a package to get a node running and running a single command to register the newly provisioned node in a cluster.

      I wonder what's OP's first-hand experience with Kubernetes.

      • 15155 a day ago

        When Kubernetes was in its infancy, it was quite difficult to do the PKI and etcd setup required to run a cluster - and managed services didn't exist.

        Now, in every thread, people replay arguments from close to a decade ago that reflect conditions 90% of companies wouldn't face today.

        • motorest 20 hours ago

          > When Kubernetes was in its infancy, it was quite difficult to do the PKI and etcd setup required to run a cluster - and managed services didn't exist.

          I agree, but it's important to stress the fact that things like etcd are only a pivotal component if you're putting together an ad-hoc cluster of unreliable nodes.

          Let that sink in.

          Kubernetes is a distributed system designed for high reliability, and depends on consensus algorithms to self-heal. That's a plausible scenario if you're running a cluster of cheap unreliable COTS hardware.

          Is that what most people use? Absolutely not.

          • DasIch 19 hours ago

            All hardware is unreliable and it's only a question of scale whether the probability of failure gets high enough that you need self-healing.

            • motorest 17 hours ago

              > All hardware is unreliable (...)

              The whole point is that none of the main cloud providers runs containers directly on bare metal servers, and even their own VPS are resilient to their own hardware failures.

              Then there's the whole debate on whether it's a good idea to put together a Kubernetes cluster running on, say, EC2 instances.

        • eddythompson80 20 hours ago

          Yeah, it's as if someone is saying they can't run linux because compiling takes forever, dealing with header files and dependencies is a pain, plus I use bluetooth headphones.

  • hosh a day ago

    It's been my experience that nothing in infra and ops will ever "just work". Even something like Heroku will run into scaling issues, and into questions of how much you are willing to pay for it.

    If people's concern is that they want a deployment platform that can be easily adopted and used, it's better to understand Kubernetes as the primitives on top of which the PaaS that people want can be built.

    Having said all that, Rivet looks interesting. I recognize some of the ideas from the BEAM ecosystem. Some of the appeal to me has less to do with deploying at scale, and more to do with resiliency and local-first.

  • jekwoooooe a day ago

    This is just so not true. You can literally launch Talos and have an N node cluster in about 10 minutes, 9 of which are waiting for downloads, configs, etc., and 1 minute of configuration. It’s just that easy.

    • motorest a day ago

      > This is just so not true. You can literally launch Talos and have an N node cluster in about 10 minutes, 9 of which are waiting for downloads, configs, etc., and 1 minute of configuration. It’s just that easy.

      I agree, OP's points contrast with reality. Anyone can whip up a Kubernetes cluster with COTS hardware or cheap VMs within minutes. Take Canonical's microK8s distribution. Do a snap install to get a Kubernetes node up and running, and run a command to register it in a cluster. Does this pass nowadays as rocket science?

      With established cloud providers it's even easier.

      • andreasmetsala 17 hours ago

        > Take Canonical's microK8s distribution. Do a snap install to get a Kubernetes node up and running, and run a command to register it in a cluster. Does this pass nowadays as rocket science?

        That’s only after you compile your Ubuntu kernel and all the software. Don’t even get me started on bad sectors on the floppy discs they give out at conferences!

        • motorest 14 hours ago

          > That’s only after you compile your Ubuntu kernel and all the software.

          No, that's simply wrong at so many levels.

    • darqis 17 hours ago

      And how many cloud or VPS providers allow you to bring your own OS? And also split vda into multiple disks? I'll tell you how many: zero, unless you're willing to pay a premium, and I mean vastly overpriced.

      • vbezhenar 14 hours ago

        Every cloud provider allows you to bring your own OS and configure disks in any way you want.

        Maybe you're thinking about the cheapest VPS possible, driven by something like cPanel. Those are not cloud providers. But usually you wouldn't choose them for a reliable service, because their whole model is overselling.

      • jekwoooooe 14 hours ago

        Vultr, GCP, AWS... what are you talking about?

  • wvh 17 hours ago

    My take is that Kubernetes is too simple. It leaves too much open and tries to keep everything pluggable, in an attempt to be all things to all people and, more importantly, to all companies so it can be the de facto standard. So typically when you set up Kubernetes, you are left with figuring out how to get metrics, logging, alerting and whatever else on top of the core you have. There are a lot of extra decisions to make, and none of this stuff is as interoperable and fool-proof as it should be.

    Maybe the problem (and strength) of Kubernetes is that it's design by committee or at least common-denominator agreement so everybody stays on board.

    Any more clearly defined project would likely not become a de facto industry standard.

    • davewritescode 16 hours ago

      I agree with you 100% as someone who's developed on and run many Kubernetes clusters over the years. Picking and choosing what to use can be daunting.

      That said, the path to ease of use usually involves making sound decisions ahead of time for users and assuming 99% will stay on that paved path. This is how frameworks like Spring and Rails became ubiquitous.

  • firesteelrain a day ago

    We run a few AKS clusters and they do just work. We rarely do any maintenance on them other than the automatic updates. The containers run 24/7 with no issues. It’s pretty amazing

  • arkh 20 hours ago

    What I'd like to see is something for which adding capacity is plug&play: connect your machine to some network, boot it, and automagically it will appear in your farm's UI to become a node. Like how plugging a USB stick into your PC gives you some GB of storage.

  • teaearlgraycold a day ago

    For my needs as an engineer working on your standard simple monolithic-ish app I want a system that can’t do everything. Start with the ability to host a few services as simple as possible. I want a Heroku/Render level of complexity here. Spin up a db, web workers with simple scaling rules, background workers etc. Once you’ve perfected the design see if you can add anything else in to expand the capabilities. If you can’t do that without making it high maintenance and incomprehensible then don’t. It will forever be loved as the best way to host simple apps.

fideloper a day ago

"Low maintenance", welp.

I suppose that's true in one sense - in that I'm using EKS heavily, and don't maintain cluster health myself (other than all the creative ways I find to fuck up a node). And perhaps in another sense: It'll try its hardest to run some containers no matter how many times I make it OOMkill itself.

Buttttttttt Kubernetes is almost pure maintenance in reality. Don't get me wrong, it's amazing to just submit some yaml and get my software out into the world. But the trade off is pure maintenance.

The workflows to setup a cluster, decide which chicken-egg trade-off you want to get ArgoCD running, register other clusters if you're doing a hub-and-spoke model ... is just, like, one single act in the circus.

Then there's installing all the operators of choice from https://landscape.cncf.io/. I mean that page is a meme, but how many of us run k8s clusters without at least 30 pods running "ancillary" tooling? (Is "ancillary" the right word? It's stuff we need, but it's not our primary workloads).

A repeat circus is spending hours figuring out just the right values.yaml (or, more likely, hours templating it, since we're ArgoCD'ing it all, right?)

> As an aside, I once spent HOURS figuring out how to (incorrectly) pass boolean values around from a Secrets Manager Secret, to a k8s secret - via External Secrets, another operator! - to an ArgoCD ApplicationSet definition, to another values.yaml file.

And then you have to operationalize updating your clusters - and all the operators you installed/painstakingly configured. Given the pace of releases, this is literally pure maintenance that is always present.

Finally, if you're autoscaling (Karpenter in our case), there's a whole other act in the circus (wait, am I still using that analogy?) of replacing your nodes "often" without downtime, which gets fun in a myriad of interesting ways (running apps with state is fun in kubernetes!)

So anyway, there's my rant. Low fucking maintenance!

  • aljgz a day ago

    "Low Maintenance" is relative to alternatives. In my experience, any time I was dealing with K8s I needed much lower maintenance to get the same quality of service (everything from [auto]scaling, to faileover, deployment, rollback, disaster recovery, DevOps, ease of spinning up a completely independent cluster) compared to not using it. YMMV.

    • SOLAR_FIELDS a day ago

      It's yet another argument, like many arguments against Kubernetes, that essentially boils down to "Kubernetes is complicated!"

      No, deployment of a distributed system itself is complicated, regardless of the platform you deploy it on. Kubernetes is only "complicated" because it can do all of the things you need to do to deploy software, in a standard way. You can simplify by not using Kube, but then you have to hand roll all of the capabilities that Kube just gives you for free. If you don't need a good hunk of those capabilities, you probably don't need Kube.

      • bigstrat2003 a day ago

        I think the reason why people (correctly) point out that Kubernetes is complicated is because most people do not need a distributed system. People reach for k8s because it's trendy, but in truth a lot of users would be better off with a VM that gets configured with Chef/etc and just runs your software the old fashioned way.

        • zomiaen a day ago

          K8s starts to make sense when you want to provide a common platform for a multitude of application developers to work on. Once you can understand it was born from Google's Borg and what problems they were trying to solve with both, the complexity behind it makes a lot more sense.

        • vbezhenar 14 hours ago

          Most people actually do need a distributed system.

          They want their system to be resilient to hardware failures. So when the server inevitably goes down some day, they want their website to continue to work. Very few people want their website to go down.

          They want their system to scale. So when the sudden rise of popularity hits the load balancer, they want their website to continue to work.

          In the past, the price to run a distributed system was too high, so most people accepted the downsides of running a simple system.

          Nowadays the price to run a distributed system is so low, that it makes little sense to avoid it anymore, for almost any website, if you can afford more than $50/month.

          • aljgz 11 hours ago

            Very well put. I add to that: with an E2E solution, you need to learn things before you deploy your system (though that still doesn't guarantee you do everything properly), but without that, it's possible to deploy the system and then learn what can go wrong only when it actually happens. Now if you ask someone who has a half-baked distributed system, they still don't know all the failure modes of their system. I've seen this in mission-critical systems.

            But in a company that had properly reliable infrastructure, any system that moved to the new infra based on K8s needed much less maintenance, had much more standardized DevOps (which allowed people from other teams to chime in when needed), and had far fewer mistakes. There was no disagreement that K8s streamlined everything.

      • danpalmer a day ago

        Exactly, there are a lot of comparisons that aren't apples to apples. If you're comparing kubernetes to a fixed size pool of resources running a fixed set of applications each with their own fixed resources, who cares? That's not how most deployments work today.

        One could make the argument that deployments today that necessitate K8s are too complex, I think there's a more convincing argument there, but my previous company was extremely risk averse in architecture (no resumé driven development) and eventually moved on to K8s, and systems at my current company often end up being way simpler than anyone would expect, but at scale, the coordination without a K8s equivalent would be too much work.

      • bigfatkitten a day ago

        Kubernetes is often chosen because the requirement for resume and promo pack padding has taken precedence over engineering requirements.

        Organisations with needs big and complex enough for something like Kubernetes are big enough to have dedicated people looking after it.

      • nostrebored a day ago

        When choosing distributed systems platforms to work with, k8s vs. rolling your own orchestration isn’t the decision anyone is making. It’s k8s vs cloud vendors that want your money in exchange for the headaches.

        • SOLAR_FIELDS a day ago

          Honestly running your own control plane is not that much harder than using something like EKS or GKE. The real complexity that the grandparent was talking about is all the tweaking and configuration you have to do outside of the control plane. Eg the infrastructure and deployments you’re building on top of Kubernetes and all of the associated configuration around that. In other words, whether you use EKS or hand roll your own kube you still have to solve node auto scaling. Load balancing. Metrics/observability. DNS and networking. Ingress. Etc etc etc.

  • ljm a day ago

    I’ve been running k3s on hetzner for over 2 years now with 100% uptime.

    In fact, it was so low maintenance that I lost my SSH key for the master node and I had to reprovision the entire cluster. Took about 90 mins including the time spent updating my docs. If it was critical I could have got that down to 15 mins tops.

    20€/mo for a k8s cluster using k3s, exclusively on ARM, 3 nodes 1 master, some storage, and a load balancer with automatic dns on cloudflare.

    • verst a day ago

      How often do you perform version upgrades? Patching of the operating system of the nodes or control plane etc? Things quickly get complex if application uptime / availability is critical.

    • Bombthecat a day ago

      Yeah, as soon as you got your helm charts and node installers.

      Installing is super fast.

      We don't do backups of the cluster, for example, for that reason (except databases etc.); we just reprovision the whole cluster.

      • motorest a day ago

        > Yeah, as soon as you got your helm charts and node installers.

        I believe there's no need to go that path for most applications. A simple kustomize script already handles most of the non-esoteric needs.
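
        For instance, a kustomization.yaml along these lines covers a lot of ground (file and image names are made up):

            # kustomization.yaml
            apiVersion: kustomize.config.k8s.io/v1beta1
            kind: Kustomization
            namespace: my-app
            resources:
              - deployment.yaml
              - service.yaml
            images:
              - name: my-app
                newTag: "1.0.1"        # bump per release/environment
            configMapGenerator:
              - name: my-app-config
                literals:
                  - LOG_LEVEL=info

        Then `kubectl apply -k .` is the whole deploy step.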

        Sometimes things are simple if that's how we want them to be.

    • busterarm a day ago

      People talking about the maintenance pain of running Kubernetes have actual customers and are not running their whole infrastructure on 20€/mo.

      Anecdotes like these are not helpful.

      I have thousands of services I'm running on ~10^5 hosts and all kinds of compliance and contractual requirements to how I maintain my systems. Maintenance pain is a very real table-stakes conversation for people like us.

      Your comment is like pure noise in our space.

      • 15155 a day ago

        > People talking about the maintenance pain of running Kubernetes have actual customers and are not running their whole infrastructure on 20€/mo.

        You're right: they're using managed Kubernetes instead - which covers 90% of the maintenance complexity.

        • busterarm a day ago

          We are, and it's not 90%. Maybe 75%.

          But nowhere near the reductive bullshit comment of "there's no maintenance because my $20/mo stack says so".

      • abrookewood a day ago

        That's an incredibly self-centred comment. People use Kubernetes for a whole range of reasons - not all of them will align with your own needs.

  • zzyzxd a day ago

    But you are not talking about maintaining Kubernetes, you are talking about maintaining a CI/CD system, a secret management system, some automation to operate databases, and so on.

    Instead of editing some YAML files, in the "old" days these software vendors would've asked you to maintain a cronjob, ansible playbooks, systemd unit, bash scripts...

    • eddythompson80 20 hours ago

      Yeah, they are basically DIY-ing their own "cloud" in a way, which is what kubernetes was designed for.

      It's indeed a lot of maintenance to run things this way. You're no longer just operationalizing your own code, you're also operating (as you mentioned) a CI/CD, secret management, logging, analytics, storage, databases, cron tasks, message brokers, etc. You're doing everything.

      On the other hand (if you're not doing anything super esoteric or super cloud specific), migrating kubernetes-based deployments between clouds has always been super easy for me. I'm currently managing a k3s cluster that's running a nodepool on AWS and a nodepool on Azure.

      • otterley 12 hours ago

        I’m a little confused by the first paragraph of this comment. Kubernetes wasn’t designed to be an end-to-end solution for everything needed to support a full production distributed stack. It manages a lot of tasks, to be sure, but it doesn’t orchestrate everything that you mentioned in the second paragraph.

        • eddythompson80 8 hours ago

          > Kubernetes wasn’t designed to be an end-to-end solution for everything needed to support a full production distributed stack.

          I'll admit I know very little about the history of kubernetes before ~2017, BUT 2017-present kubernetes is absolutely designed/capable of being your end to end solution for everything.

          Take the random list I made and the meme page at:

          - CI/CD [github, gitlab, circleci]: https://landscape.cncf.io/guide#app-definition-and-developme...

          - secret management [IAM, SecretsManager, KeyVault]: https://landscape.cncf.io/guide#provisioning--key-management

          - logging & analytics [CloudWatch, AppInsights, Splunk, Tablue, PowerBI] : https://landscape.cncf.io/guide#observability-and-analysis--...

          - storage [S3, disks, NFS/SMB shares]: https://landscape.cncf.io/guide#runtime--cloud-native-storag...

          - databases: https://landscape.cncf.io/guide#app-definition-and-developme...

          - cron tasks: [Built-in]

          - message brokers: https://landscape.cncf.io/guide#app-definition-and-developme...

          The idea is to wrap cloud provider resources in CRDs. So instead of creating an AWS ELB or an Azure SLB, you create a Kubernetes service of type LoadBalancer. Then kubernetes is extensible enough for each cloud provider to swap what "service of type LoadBalancer" means for them.
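
          Concretely, the same few lines of manifest provision whatever the provider's load balancer happens to be (names are made up):

              apiVersion: v1
              kind: Service
              metadata:
                name: my-app
              spec:
                type: LoadBalancer     # becomes an ELB on EKS, an SLB on AKS, etc.
                selector:
                  app: my-app
                ports:
                  - port: 443
                    targetPort: 8443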

          For higher abstraction services (SaaS-like ones mentioned above) the idea is similar. Instead of creating an S3 bucket or an Azure Storage Account, you provision CubeFS on your cluster (so now you have your own S3 service), then you create a CubeFS bucket.

          You can replace all the services listed above, with free and open source (under a foundation) alternatives. As long as you can satisfy the requirements of CubeFS, you can have your own S3 service.

          Of course you're now maintaining the equivalent of github, circleci, S3, ....

          Kubernetes gives you a unified way of deploying all these things regardless of the cloud provider. Your target is Kubernetes, not AWS, Microsoft or Google.

          The main benefit (to me) is with Kubernetes you get to choose where YOU want to draw the line of lock-in vs value. We all have different judgements after all

          Do you see no value in running and managing kafka? maybe SQS is simple enough and cheap enough that you just use it. Replacing it with a compatible endpoint is cheap.

          Are you terrified of building your entire event based application on top of SQS and Lambda? How about Kafka and ArkFlow?

          Now you obviously trade one risk for another. You're trading the risk of vendor lock-in with AWS, but at the same time, just because ArkFlow is free and open source doesn't mean that it'll be as well maintained in 8 years as AWS Lambda is going to be. Maybe, maybe not. You might have to migrate to another service.

          • otterley 8 hours ago

            > Of course you're now maintaining the equivalent of github, circleci, S3, ....

            On this we agree. That's a nontrivial amount of undifferentiated heavy lifting--and none is a core feature of K8S. You are absolutely right that you can use K8S CRDs to use K8S as the control plane and reduce the number of idioms you have to think about, but the dirty details are in the data plane.

            • eddythompson80 7 hours ago

              Yeah, but you significantly increase your chances of getting the data plane working if you are always using the same control plane. The control plane is setting up an S3 bucket for you. That bucket could come from AWS, CubeFS, Backblaze, you don't care. S3 is a simple protocol, but the same goes for more complex ones.

              > and none is a core feature of K8S

              The core feature of k8s is "container orchestration", which is extremely broad. Whatever you can run by orchestrating containers, you can run on it, which is everything. The other core feature is extensibility and abstraction. So to me CRDs are as core to kubernetes as anything else really. They are such a simple concept that custom vs built-in is only a matter of availability and quality sometimes.

              > That's a nontrivial amount of undifferentiated heavy lifting

              Yes it is. Like I said, the benefit of kubernetes is it gives you the choice of where you wanna draw that line. Running and maintaining GitHub, CircleCI and S3 is a "nontrivial amount of undifferentiated heavy lifting" to you. The equation might be different for another business or organization. There is a popular "anti-corporation, pro big-government" sentiment on the internet today, right? Would it make sense for, say, an organization like the EU to take a hard dependency on GitHub or CircleCI? Or should they contract OVH and run their own GitHub and CircleCI instances?

              People always complain about vendor lock-in, closed source services, bait and switch with services, etc. With Kubernetes, you get to choose what your anxieties are, and manage them yourself.

              • otterley 2 hours ago

                > You significantly increase your chances of getting the data plane working if you are always using the same control plane.

                That is 100% not true and why different foundational services have (often vastly) different control planes. The Kubernetes control plane is very good for a lot of things, but not everything.

                > People always complain about vendor lock-in, closed source services, bait and switch with services, etc. With Kubernetes, you get to choose what your anxieties are, and manage them yourself.

                There is no such thing as zero switching costs (even if you are 100% on premise). Using Kubernetes can help reduce some of it, but you can't take a mature complicated stack running on AWS in EKS and port it to AKS or GKE or vice versa without a significant amount of effort.

                • eddythompson80 17 minutes ago

                  Well you know, we went from not knowing that kubernetes can orchestrate everything, to arguing "k8s best practices" for portability, so there is room for progress.

                  The reality is yes, nothing is zero switching costs. There are plenty of best practices for how to utilize k8s for least-headache migrations. It's very doable and I see it done all the time.

  • turtlebits a day ago

    Sounds self inflicted. Stop installing so much shit. Everything you add is just tech debt and has a cost associated, even if the product is free.

    If autoscaling doesn't save more $$ than the tech debt/maintenance burden, turn it off.

    • ozim a day ago

      I agree with your take.

      But I think a lot of people are in a state where they need to run stuff the way it is because “just turn it off” won’t work.

      Like a system that, after years on k8s, is coupled to its quirks. People not knowing how to set up and run stuff without k8s.

  • pnathan a day ago

    vis-a-vis running a roughly equivalent set of services cobbled together, it's wildly low maintenance to the point of fire and forget.

    you do have to know what you're doing and not fall prey to the "install the cool widget" trap.

  • globular-toast 21 hours ago

    It all depends what you are doing. If you just want to run simple webservers then it's certainly lower maintenance than having a fleet of pet servers named after Simpsons characters to run them.

    The trouble is you then start doing more. You start going way beyond what you were doing before. Like you ditch RDS and just run your DBs in cluster. You stop checking your pipelines manually because you implement auto-scaling etc.

    It's not free, nobody ever said it was, but could you do all the stuff you mentioned on another system with a lower maintenance burden? I doubt it.

    What it boils down to is running services has maintenance still, but it's hopefully lower than before and much of the burden is amortized across many services.

    But you definitely need to keep an eye on things. Don't implement auto-scaling unless you're spending a lot of your time manually scaling. Otherwise you've now got something new to maintain without any payoff.

  • Spivak a day ago

    The fundamental tension is that there is real complexity to running services in production that you can't avoid/hide with a better abstraction. So your options are to deal with the complexity (the k8s case) or pay someone else to do it (one of the many severless "just upload your code" platforms).

    You can hide the ops person, but you can't remove them from the equation, which is what people seem to want.

otterley a day ago

First, K8S doesn't force anyone to use YAML. It might be idiomatic, but it's certainly not required. `kubectl apply` has supported JSON since the beginning, IIRC. The endpoints themselves speak JSON and grpc. And you can produce JSON or YAML from whatever language you prefer. Jsonnet is quite nice, for example.
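
For example, here's a minimal sketch (using client-go's typed API structs; the Deployment itself is hypothetical) of producing a manifest from a real programming language and emitting JSON, with no YAML anywhere:

    package main

    import (
      "encoding/json"
      "os"

      appsv1 "k8s.io/api/apps/v1"
      corev1 "k8s.io/api/core/v1"
      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func int32Ptr(i int32) *int32 { return &i }

    func main() {
      dep := appsv1.Deployment{
        TypeMeta:   metav1.TypeMeta{APIVersion: "apps/v1", Kind: "Deployment"},
        ObjectMeta: metav1.ObjectMeta{Name: "web"},
        Spec: appsv1.DeploymentSpec{
          Replicas: int32Ptr(3),
          Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"app": "web"}},
          Template: corev1.PodTemplateSpec{
            ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"app": "web"}},
            Spec: corev1.PodSpec{
              Containers: []corev1.Container{{Name: "web", Image: "nginx:1.27"}},
            },
          },
        },
      }
      // Emit a JSON manifest the API server accepts directly.
      if err := json.NewEncoder(os.Stdout).Encode(&dep); err != nil {
        panic(err)
      }
    }

Pipe the output straight into `kubectl apply -f -` and you never touch YAML.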

Second, I'm curious as to why dependencies are a thing in Helm charts and why dependency ordering is being advocated, as though we're still living in a world of dependency ordering and service-start blocking on Linux or Windows. One of the primary idioms in Kubernetes is looping: if the dependency's not available, your app is supposed to treat that as a recoverable error and try again until the dependency becomes available. Or crash, in which case the ReplicaSet controller will restart the app for you.

You can't have dependency conflicts in charts if you don't have dependencies (cue "think about it" meme here), and you install each chart separately. Helm does let you install multiple versions of a chart if you must, but woe be unto those who do that in a single namespace.

If an app truly depends on another app, one option is to include the dependency in the same Helm chart! Helm charts have always allowed you to have multiple application and service resources.

  • vbezhenar 14 hours ago

    > Or, crash, in which case, the ReplicaSet controller will restart the app for you.

    This does not work well enough. Right now I have an issue where Keycloak takes a minute to start, and a dependent service which crashes on start without Keycloak takes like 5-10 minutes to start, because the ReplicaSet controller starts to throttle it, so it waits for minutes for nothing even after Keycloak has started. Eventually it works, but I don't want to wait 10 minutes if I could wait 1 minute.

    I think the proper way to solve this issue is an init container which waits for the dependency to be up before passing control to the main container. But I'd prefer for Kubernetes to allow me to explicitly declare start dependencies. My service WILL crash if that dependency is not up, so what's the point of even trying to start it, just to throttle it a few tries later?

    A dependency is a dependency. You can't just close your eyes and pretend it does not exist.
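
    A minimal sketch of that init-container workaround, assuming a hypothetical in-cluster Service named `keycloak` with a readiness endpoint (names, port and path are illustrative):

        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: dependent-service
        spec:
          replicas: 1
          selector:
            matchLabels:
              app: dependent-service
          template:
            metadata:
              labels:
                app: dependent-service
            spec:
              initContainers:
                - name: wait-for-keycloak
                  image: curlimages/curl:8.8.0   # any small image with curl works
                  command: ["sh", "-c"]
                  args:
                    - until curl -sf http://keycloak:8080/health/ready; do echo waiting; sleep 2; done
              containers:
                - name: app
                  image: example.com/dependent-service:1.0   # hypothetical image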

    • otterley 13 hours ago

      I’d contend that you’re optimizing for initial deployment speed rather than overall resilience. Backing off with increasing delays before retrying a dependent service call is a best practice for production services. The fact that you’re seeing this behavior on initial rollout is inconvenient, but it’s also self healing. It might take a bit longer than you like, but if you’re really that impatient, there are workarounds like the one you described.

  • cbarrick a day ago

    Big +1 to dependency failures being recoverable.

    I was part of an outage caused by a fail-closed behavior on a dependency that wasn't actually used and was being turned down.

    Dependencies among servers are almost always soft. Just return a 500 if you can't talk to your downstream dependency. Let your load balancer route around unhealthy servers.

  • Arrowmaster a day ago

    You say "supposed to." That's great when you're building your own software stack in-house, but how much software is available that can run on Kubernetes yet was created before it existed? Somebody figured out it could run in Docker, and then later someone realized it's not that hard to make it run in Kubernetes because it already runs in Docker.

    You can make an opinionated platform that does things how you think is the best way to do them, and people will do it how they want anyway with bad results. Or you can add the features to make it work multiple ways and let people choose how to use it.

    • otterley a day ago

      The counter argument is that footguns and attractive nuisances are antithetical to resilience. People will use features incorrectly that they may never have needed in the first place; and every new feature is a new opportunity to introduce bugs and ambiguous behaviors.

  • delusional a day ago

    > One of the primary idioms in Kubernetes is looping

    Indeed, working with kubernetes I would argue that the primary architectural feature of kubernetes is the "reconciliation loop". Observe the current state, diff a desired state, apply the diff. Over and over again. There is no "fail" or "success" state, only what we can observe and what we wish to observe. Any difference between the two is iterated away.

    I think it's interesting that the dominant "good enough technology" of mechanical control, the PID feedback loop, is quite analogous to this core component of kubernetes.
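
    A toy sketch of that shape (nothing Kubernetes-specific; real controllers are driven by watches and work queues rather than a sleep, but the observe/diff/apply loop is the same):

        package main

        import (
          "fmt"
          "time"
        )

        func main() {
          // Desired and actual state reduced to name -> replica counts.
          desired := map[string]int{"web": 3, "worker": 2}
          actual := map[string]int{"web": 1}

          for i := 0; i < 3; i++ { // a real controller loops forever
            for name, want := range desired {
              if have := actual[name]; have != want {
                fmt.Printf("reconciling %s: have %d, want %d\n", name, have, want)
                actual[name] = want // "apply the diff"
              }
            }
            time.Sleep(100 * time.Millisecond)
          }
          fmt.Println("converged:", actual)
        }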

    • p_l a day ago

      PID feedback loop, OODA loop, and blackboard systems (AI design model) are all valid metaphors that k8s embodies, with first two being well known enough that they were common in presentations/talks about K8s around 1.0

    • globular-toast 19 hours ago

      What you're describing is a Controller[0]. I love the example they give of a room thermostat.

      But the principle applies to other things that aren't controllers. For example a common pattern is a workload which waits for a resource (e.g. a database) to be ready before becoming ready itself. In a webserver Pod, for example, you might wait for the db to become available, then check that the required migrations have been applied, then finally start serving requests.

      So you're basically progressing from a "wait for db" loop to a "wait for migrations" loop, then to a "wait for web requests" loop. The final loop will cause the cluster to consider the Pod "ready", which will then progress the Deployment rollout etc.
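
      For example, that final "ready" loop usually surfaces to the cluster as a readiness probe; a minimal sketch (image, path and port are illustrative):

          apiVersion: v1
          kind: Pod
          metadata:
            name: web
          spec:
            containers:
              - name: web
                image: example.com/web:1.0   # hypothetical image
                ports:
                  - containerPort: 8080
                readinessProbe:              # Pod only counts as Ready, and the rollout only proceeds, once this passes
                  httpGet:
                    path: /healthz           # handler returns 200 only after the db and migration checks succeed
                    port: 8080
                  periodSeconds: 5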

      [0] https://kubernetes.io/docs/concepts/architecture/controller/

    • tguvot a day ago

      I developed a system like this (with a reconciliation loop, as you call it) some years ago. There is most definitely a failed state (for multiple reasons), but as part of the "loop" you can have logic to fix things up in order to bring it back to the desired state.

      We had integrated monitoring/log analysis to correlate failures with "things that happened".

  • eddythompson80 20 hours ago

    > One of the primary idioms in Kubernetes is looping: if the dependency's not available, your app is supposed to treat that as a recoverable error and try again until the dependency becomes available.

    This is gonna sound stupid, but people see the initial error in their logs and freak out. Or your division's head sees your demo and says "Absolutely love it. Before I demo it though, get rid of that error". Then what are you gonna do? Or people keep opening support tickets saying "I didn't get any errors when I submitted the deployment, but it's not working. If it wasn't gonna work, why did you accept it"

    You either do what one of my colleagues does, and add some weird-ass logic of "store error logs and only display them if they fire twice (no, three, 4? scratch that, 5 times) with a 3-second delay in between, except for the last one, that can take up to 10 seconds; after that, if this was a network error, sleep for another 2 minutes, and at the very end make sure to have a `logger.Info("test1")`".

    Or you say "fuck it" and introduce a dependency order. We know that it's stupid, but...

    • otterley 13 hours ago

      This sounds like an opportunity to educate your colleagues and introduce a higher level of functionality to your deployment mechanisms. There’s a difference between Kubernetes stating a deployment of a given component is successful and the CI/CD pipeline confirming the entire application’s deployment is successful. The former frequently happens before—sometimes long before—the latter does. If the boss is seeing errors, it’s because the deployment hasn’t finished and someone or something is falsely suggesting to them that it is.

  • LudwigNagasena a day ago

    YAML is a JSON superset. Of course anything that supports YAML supports JSON.

    • bboreham a day ago

      But it’s also true on the output side. Kubectl doesn’t output yaml unless you ask it to. The APIs only support json and protobuf, not yaml.

  • darqis 17 hours ago

    OMFG really "don't have to use yaml, json also works" are you seriously bringing that argument right now?

    You're sniping at some absolutely irrelevant detail no one, absolutely no one cares about at all. Unbelievable

    • otterley 12 hours ago

      Nearly 1/5th of the article is dedicated to criticizing YAML as the de facto language people use to work with it, and implicitly blaming Kubernetes for this fault.

    • mardifoufs 10 hours ago

      What? It's one of the most often repeated arguments against kubernetes. Even in the article, that this entire thread is about, yaml is mentioned repeatedly.

pm90 2 days ago

Hard disagree with replacing yaml with HCL. Developers find HCL very confusing. It can be hard to read. Does it support imports now? Errors can be confusing to debug.

Why not use protobuf, or similar interface definition languages? Then let users specify the config in whatever language they are comfortable with.

  • vanillax a day ago

    Agree, HCL is terrible. K8s YAML is fine. I have yet to hit a use case that can't be solved with its types. If you are doing too much, perhaps a ConfigMap is the wrong choice.

    • ofrzeta a day ago

      It's just easier to shoot yourself in the foot with no proper type support (or enforcement) in YAML. I've seen Kubernetes updates fail when the version field was set to 1.30 and it got interpreted as the float 1.3. Sure, someone made a mistake, but the config language should/could stop you from making it.

      • XorNot a day ago

        Yeah, but that's on the application schema. If it's a version string, why is it accepting floats?

      • ikiris a day ago

        so use json?

        • Too a day ago

          It allows the exact same problem. version: 1.30 vs version: "1.30".

          • Sayrus 17 hours ago

            Helm supports typing the values passed to a chart, but most charts don't provide a schema. Most fields within out-of-the-box Kubernetes resources use well-defined types.

            As much as Helm is prone to some types of errors, it will validate the schema before doing an install or upgrade, so `version: 1.30` won't apply but `version: "1.30"` will.
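
            For example, a minimal values.schema.json at the chart root (the chart itself is hypothetical) is enough to make the unquoted `version: 1.30` fail at install/upgrade time:

                {
                  "$schema": "https://json-schema.org/draft-07/schema#",
                  "type": "object",
                  "required": ["version"],
                  "properties": {
                    "version": {
                      "type": "string",
                      "description": "must be quoted in values.yaml, e.g. \"1.30\", or Helm rejects it as a number"
                    }
                  }
                }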

  • geoctl 2 days ago

    You can very easily build and serialize/deserialize HCL, JSON, YAML or whatever else you can come up with outside Kubernetes, on the client side itself (e.g. kubectl). This actually has nothing to do with Kubernetes itself.

    • Kwpolska a day ago

      There aren’t that many HCL serialization/deserialization tools. Especially if you aren’t using Go.

      • demosthanos a day ago

        Given that we're apparently discussing an entire k8s 2.0 based on HCL, that hardly seems like a barrier. You'd have needed to write the HCL tooling to get the 2.0 working anyway.

  • dilyevsky a day ago

    Maybe you know this but Kubernetes interface definitions are already protobufs (except for crds)

    • cmckn a day ago

      Sort of. The hand-written go types are the source of truth and the proto definitions are generated from there, solely for the purpose of generating protobuf serializers for the hand-written go types. The proto definition is used more as an intermediate representation than an “API spec”. Still useful, but the ecosystem remains centered on the go types and their associated machinery.

      • dilyevsky a day ago

        Given that i can just take generated.proto and ingest it in my software then marshal any built-in type and apply it via standard k8s api, why would I even need all the boilerplate crap from apimachinery? Perfectly happy with existing rest-y semantics - full grpc would be going too far

  • dochne a day ago

    My main beef with HCL is a hatred for how it implements for loops.

    Absolutely loathsome syntax IMO

    • mdaniel a day ago

      It was mentioned pretty deep in another thread, but this is just straight up user hostile

        variable "my_list" { default = ["a", "b"] }
        resource "whatever" "something" {
          for_each = var.my_list
        }
      
        The given "for_each" argument value is unsuitable: the "for_each" argument must be a map, or set of strings, and you have provided a value of type tuple.
      • Spivak a day ago

        This is what you get if you want a declarative syntax. HCL doesn't have any concept of a for loop, and can't have any concept of a for loop— it's a data serialization format. Terraform can be transformed to and from JSON losslessly. The looping feature is implemented in Terraform.

        Ansible's with_items: and loop: do the exact same thing with YAML.

        • garethrowlands 18 hours ago

          Declarative languages can absolutely loop. Otherwise functional languages wouldn't be a thing.

          HCL's looping and conditionals are a mess but they're wonderful in comparison to its facilities for defining and calling functions.

          • ziggure an hour ago

            The concept of repeated operations executing with respect to some condition is always going to be a bit different in declarative / functional constructs than in imperative ones. A purely functional language is never going to have the equivalent of: for (...) a++;

            Good functional languages like Clojure make something like this awkward and painful to do, because it doesn't really make sense in a functional context.

        • mdaniel 11 hours ago

          I think you missed the error message; I wasn't whining about for_each syntax masquerading as if it was a property of the resource, I was pointing out the moronic behavior of attempting to iterate over a list of strings and being given the finger

          It is claiming the provided thing is not a set of strings, it's a tuple.

          The fix is

              for_each = toset(var.my_list)
          
          but who in their right mind would say "oh, I see you have a list of things, but I only accept unordered things"
          • Spivak 11 hours ago

            It's not about them being ordered, it's about them being unique. You're generating a resource for each element and that resource has a unique name. The Stack Overflow way to iterate over a list is to use toset(), but more often what people want is

                zipmap(range(length(var.my_list)), var.my_list)
            
            where you get {0 => item, 1=>item} and your resources will be named .0, .1, .2. I get the annoyance about the type naming but in HCL lists and tuples are the same thing.
            • mdaniel 8 hours ago

              > It's not about them being ordered it's about them being unique

              Ok, so which one of ["a", "b"] in my example isn't unique?

              And I definitely, unquestionably, never want resources named foo.0 and foo.1 when the input was foo.alpha and foo.beta

              • Spivak 5 hours ago

                I'm confused by your question, lists don't enforce uniqueness—you can do ["a", "a"] and it's perfectly valid. Sets and the key values of hash maps can't have repeating elements. You can for_each over lists that come from a data block so you can't statically know if a list is effectively a set.

                If you don't want .0, .1 then you agree with Terraform's design because that's the only way you could for_each over an arbitrary list that might contain duplicate values. Terraform making you turn your lists into sets gets you what you want.

    • SOLAR_FIELDS a day ago

      I just tell people to avoid writing loops in Terraform if it can at all be avoided. And if you do iterate, make sure it's a keyed dictionary instead of an array/list-like data structure, as in the sketch below.
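
      A sketch of that advice with a hypothetical bucket resource, keyed by a stable name rather than a list index:

          variable "buckets" {
            type = map(string)
            default = {
              logs   = "acme-logs"
              assets = "acme-assets"
            }
          }

          resource "aws_s3_bucket" "this" {
            for_each = var.buckets
            bucket   = each.value
            # addresses become aws_s3_bucket.this["logs"] / ["assets"],
            # so reordering or removing one entry never renames the others
          }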

  • acdha a day ago

    > Developers find HCL very confusing. It can be hard to read. Does it support imports now? Errors can be confusing to debug.

    This sounds a lot more like “I resented learning something new” than anything about HCL, or possibly confusing learning HCL simultaneously with something complex as a problem with the config language rather than the problem domain being configured.

    • aduwah a day ago

      The issue is that you do not want a dev learning HCL, the same way you don't want your SRE team learning Next.js and React out of necessity.

      The ideal solution would be an abstraction that is easy to use and does not require learning a whole new concept (especially one as ugly as HCL). Learning HCL is also just the tip of the iceberg: then comes sinking into the dependencies between components, outputs read from a bunch of workspaces, etc. It is simply wasted time to have devs keeping up with the whole Terraform heap that SREs manage and keep evolving under the hood. That dev time is better spent creating features.

      • acdha a day ago

        Why? If they can’t learn HCL, they’re not going to be a successful developer.

        If your argument is instead that they shouldn’t learn infrastructure, then the point is moot because that applies equally to every choice (knowing how to indent YAML doesn’t mean they know what to write). That’s also wrong as an absolute position but for different reasons.

        • aduwah a day ago

          You don't see my point. The ideal solution would be something that can be learned easily by both the dev and the infra side without having to boil the ocean on either. Something like protobuf was mentioned above, which is a better idea than HCL imho.

          • acdha a day ago

            I understand but do not agree with your point. I think two things are being conflated: the difficulty of learning HCL itself and the domain complexity of what almost everyone is learning HCL to do. I’ve seen people bounce off of Terraform and make lengthy rants about how HCL is hard, write thousands of lines of code in their preferred Real Programming Language™, and only eventually realize that the problem was that they thought they were learning one thing (IaC) but were actually learning two things at the same time (IaC and AWS/GCP/etc.) without taking time to formalize their understanding of both.

            I have no objections to protobufs, but I think that once you’re past a basic level of improvements over YAML (real typing, no magic, good validation and non-corrupting formatters) this matters less than managing the domain complexity. YAML is a bad choice for anything but text documents read by humans because it requires you to know how its magic works to avoid correct inputs producing unexpected outputs (Norway, string/float confusion, punctuation in string values, etc.) and every tool has to invent conventions for templating, flow control, etc. I think HCL is better across the board but am not dogmatic about it - I just want to avoid spending more time out of people’s lives where they realize that they just spent two hours chasing an extra space or not quoting the one value in a list which absolutely must be quoted to avoid misinterpretation.

            • garethrowlands 19 hours ago

              Terraform is good overall but HCL just isn't a very good language.

              Given that its main purpose is to describe objects of various types, its facilities for describing types are pretty weak and its facilities for describing data are oddly inconsistent. Its features for loops and conditionals are weak and ad-hoc compared with a regular programming language. Identifier scope / imports are a disaster and defining a function is woeful.

              Some of this is influenced by Terraform's problem domain but most of it isn't required by that (if it was required, people wouldn't have such an easy time doing their Terraform in python/go/rust).

              Despite this, Terraform is overall good. But there's a reason there's a small industry of alternatives to HCL.

          • slyall a day ago

            I can't understand how somebody would learn to maintain a config written in protobuf. At least HCL is actually readable.

          • eddythompson80 19 hours ago

            In what universe is protobuf simple? Also what do you even mean by protobuf here?

            The actual format is binary and I'm not expecting people to pass around binary blobs that describe their deployment configuration.

            To serialize it, you're writing code in some language... then what? You just want deployment-as-code? Because that seems like a separate paradigm.

          • Too a day ago

            Say what? Protobuf may be type-safe, but it is far from a configuration language; can you even refer to variables or do loops? Unless you are thinking about serializing it with builders in code, which takes away the benefit of declarative configuration. It's also not readable without tooling. As the parent said, this really sounds like a refusal to learn something new.

    • lmm a day ago

      > This sounds a lot more like “I resented learning something new” than anything about HCL, or possibly confusing learning HCL simultaneously with something complex as a problem with the config language rather than the problem domain being configured.

      What would you expect to be hearing different if HCL really was just an objectively bad language design?

    • XorNot a day ago

      HCL can be represented as JSON and thus can be represented as YAML. Why did we need another configuration language, one I'll end up generating with a proper programming language anyway for any serious usage?

      • danudey a day ago

        The example in the blog post shows full-on logic structures (if this then that else this), which... no thank you.

        I don't think I can take a JSON or YAML document and export it to an HCL that contains logic like that, but at the same time we shouldn't be doing that either IMHO. I very much do not want my configuration files to have side effects or behave differently based on the environment. "Where did this random string come from?" "It was created dynamically by the config file because this environment variable wasn't set" sounds like a nightmare to debug.

        YAML is bad for a great many reasons and I agree with the author that we should get rid of it, but HCL is bad for this use case for the same reasons - it does too much. Unfortunately, I'm not sure if there's another configuration language that makes real honest sense in this use case that isn't full of nightmarish cruft also.

        • garethrowlands 19 hours ago

          Any of the modern configuration languages would be better. They're all very sensitive to the problems of side effects and environment-specific behaviour - it's a defining characteristic. For example, pkl.

  • programd a day ago

    Let me throw this out as a provocative take:

    In the age of AI arguments about a configuration language are moot. Nobody is going to hand craft those deployment configs anymore. The AIs can generate whatever weird syntax the underlying K8s machinery needs and do it more reliably than any human. If not today then probably in 3 months or something.

    If you want to dream big for K8s 2.0 let AI parse human intent to generate the deployments and the cluster admin. Call it vibe-config or something. Describe what you want in plain language. A good model will think about the edge cases and ask you some questions to clarify, then generate a config for your review. Or apply it automatically if you're an edgelord ops operator.

    Let's face it, modern code generation is already heading mostly in that direction. You're interacting with an AI to create whatever application you have in your imagination. You still have to guide it to not make stupid choices, but you're not going to hand craft every function. Tell the AI "I want 4 pods with anti-affinity, rolling deployment, connected to the existing Postgres pod and autoscaling based on CPU usage. Think hard about any edge cases I missed." and get on with your life. That's where we're heading.

    I'm emotionally prepared for the roasting this comment will elicit so go for it. I genuinely want to hear pro and con arguments on this.

    • lmm a day ago

      All the more reason you need a concise, strict representation of your actual config. If you're trusting an AI to translate fuzzy human intention into a machine protocol, there's less need for fuzzy human affordances in that machine protocol, and you should use one that plays to the strengths of a machine representation - one that's canonical and unambiguous.

    • eddythompson80 19 hours ago

      If you want to dream big, why not give that post to AI and let it parse it to generate K8s 2.0, then you can get it to generate the application first, then the deployment. You can even have it feature-ask itself in a continuous loop.

  • zoobab 19 hours ago

    Please not HCL. I have headaches from using HCL for Terraform.

  • znpy a day ago

    > Hard disagree with replacing yaml with HCL.

    I see some value instead. Lately I've been working on Terraform code to bring up a whole platform in half a day (aws sub-account, eks cluster, a managed nodegroup for karpenter, karpenter deployment, ingress controllers, LGTM stack, public/private dns zones, cert-manager and a lot more) and I did everything in Terraform, including Kubernetes resources.

    What I appreciated about creating Kubernetes resources (and Helm deployments) in HCL is that it's typed and has a schema, so any IDE capable of talking to an LSP (language server protocol - I'm using GNU Emacs with terraform-ls) can provide meaningful auto-completion as well as proper syntax checking (I don't need to apply something to see it fail; Emacs, via the language server, can already tell me that what I'm writing is wrong).

    I really don't miss having to switch between my ide and the Kubernetes API reference to make sure I'm filling each field correctly.
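
    A sketch of what that looks like with the hashicorp/kubernetes provider (names and image are illustrative); terraform-ls can complete and validate every block here against the provider schema:

        resource "kubernetes_deployment" "web" {
          metadata {
            name = "web"
          }
          spec {
            replicas = 2
            selector {
              match_labels = { app = "web" }
            }
            template {
              metadata {
                labels = { app = "web" }
              }
              spec {
                container {
                  name  = "web"
                  image = "nginx:1.27"
                }
              }
            }
          }
        }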

    • NewJazz a day ago

      I do something similar except with pulumi, and as a result I don't need to learn HCL, and I can rely on the excellent language servers for e.g. Typescript or python.

      • garethrowlands 18 hours ago

        It's disappointing that your terraform experience with Typescript or python is better than your experience with HCL. HCL should really be better than it is.

    • wredcoll a day ago

      But... so do yaml and json documents?

      • lmm a day ago

        > so do yaml and json documents?

        Not really. There are optional ways to add schema but nothing that's established enough to rely on.

        • wredcoll an hour ago

          I mean, I don't want to get into a debate about semantics, but when I open config.json files or kubernetes.yaml files in neovim, I get tab-completed fields and values as well as inline errors about unsupported entries. That seems to qualify in my book.

  • s-zeng 17 hours ago

    Obligatory mentions of cuelang and dhall here. I've used both for kubernetes and other large configs in multiple software teams and both are significantly more of a joy to work with than HCL or yaml

  • dangus a day ago

    Confusing? Here I am working on the infrastructure side thinking that I'm working with a baby configuration language for dummies who can't code when I use HCL/Terraform.

    The idea that someone who works with JavaScript all day might find HCL confusing seems hard to imagine to me.

    To be clear, I am talking about the syntax and data types in HCL, not necessarily the way Terraform processes it, which I admit can be confusing/frustrating. But Kubernetes wouldn’t have those pitfalls.

    • icedchai a day ago

      Paradoxically, the simplicity itself can be part of the confusion: the anemic "for loop" syntax, the crazy conditional expressions to work around the lack of "if" statements; combine this with "count" and you can get some weird stuff. It becomes a flavor all its own.

    • mdaniel a day ago

      orly, what structure does this represent?

        outer {
          example {
            foo = "bar"
          }
          example {
            foo = "baz"
          }
        }
      
      it reminds me of the insanity of toml

        [lol]
        [[whut]]
        foo = "bar"
        [[whut]]
        foo = "baz"
      
      only at least with toml I can $(python3.13 -c 'import tomllib, sys; print(tomllib.loads(sys.stdin.read()))') to find out, but with hcl too bad
      • eddythompson80 19 hours ago

        The block config syntax isn't that confusing. Nginx has had it for years. This is the same as

            server {
                location / { 
                  return 200 "OK"; 
                }
                location /error {
                  return 404 "Error";
                }
            }
            server {
                location / {
                  return 200 "OK";
                }
                location /error {
                  return 404 "Error";
                }
            }
        
        But because we know that nginx config is a configuration language for a server, where we define the behavior we want by creating a set of resources (servers in this example, but "example"s or "whut"s in yours), the block form maps naturally to JSON like this:

            {
              "servers": [
                {
                  "locations": [
                    {
                      "key": "/",
                      "return": {
                         "status": 200,
                         "content": "OK"
                       }
                    },
                    {
                      "key": "/error",
                      "return": {
                         "status": 404,
                         "content": "error"
                      }
                    }
                  ]
                },
                {
                  "locations": [
                    {
                      "key": "/",
                      "return": {
                         "status": 200,
                         "content": "OK"
                       }
                    },
                    {
                      "key": "/error",
                      "return": {
                         "status": 404,
                         "content": "error"
                      }
                    }
                  ]
                }
              ]
            }
        • mdaniel 10 hours ago

          I am glad that you don't find the syntax confusing, but in my 25 years of monkeying with computer stuff, [] have been the canonical "I have a list of things" characters for as long as I can remember, so I sure am glad that Hashicorp decided that just repeating the key name is how one indicates list membership.

          I did see the "inspired by nginx" on hcl's readme, but that must be why you and Mitchell like hcl so much because I loathe nginx's config with all my heart

          • eddythompson80 10 hours ago

            I was just saying HCL didn't invent block config. Perhaps nginx, Caddy, Lisp, F#'s Computation Expressions all primed me for HCL style syntax so it looks pretty clear to me. The key is understanding the value of that "repeating the key name", because it doesn't just become a keyname, it becomes a constructor and makes the syntax much simpler.

            At the end of the day HCL isn't that much more complex than something like YAML. Saying "you don't want your developers learning HCL" is like saying "you don't want your developers learning YAML; they send an email to the SRE guys, and the SRE guys are the YAML experts."

      • XorNot a day ago

        Oh thank god I'm not the only one who sees it.

        People complain about YAML then go and use TOML.

      • dangus 12 hours ago

        That isn’t even valid HCL.

        • mdaniel 11 hours ago

          Uh-huh, I think you are secretly supporting my argument of "who can possibly decipher this insanity?!" because https://github.com/tmccombs/hcl2json believes that it is valid

              $ hcl2json <<FOO | jq -r '"has \(length) keys"'
                outer {
                  example {
                    foo = "bar"
                  }
                  example {
                    foo = "baz"
                  }
                }
              FOO
              has 1 keys
          • dangus 11 hours ago

            Why do I care what some random guy’s HCL to JSON conversion tool thinks is valid?

            Terraform converts HCL to JSON in a different way that makes a lot more sense, and hcldec provided by Hashicorp lets you define your own specification for how you want to use it.

            I would like to point out that perfect compatibility with JSON is not a goal nor a requirement of a decent configuration language.

ziggure 2 hours ago

What I've learned from the comments here is that k8s is essentially already perfect, YAML is actually awesome, and any criticism of k8s just proves the ignorance of the critic. I guess this means k8s 2.0 should look exactly like k8s 1.0.

mountainriver a day ago

We have started working on a sort of Kubernetes 2.0 with https://github.com/agentsea/nebulous -- still pre-alpha

Things we are aiming to improve:

* Globally distributed
* Lightweight: can easily run as a single binary on your laptop while still scaling to thousands of nodes in the cloud
* Tailnet as the default network stack
* Bittorrent as the default storage stack
* Multi-tenant from the ground up
* Live migration as a first-class citizen

Most of these needs were born out of building modern machine learning products, and the subsequent GPU scarcity. With ML taking over the world though this may be the norm soon.

  • hhh a day ago

    Wow… Cool stuff, the live migration is very interesting. We do autoscaling across clusters across clouds right now based on pricing, but actual live migration is a different beast

  • Thaxll a day ago

    This is not Kubernetes; this is a custom-made solution to run GPUs.

    • nine_k a day ago

      Since it still can consume Kubernetes manifests, it's of interest for k8s practitioners.

      Since k8s manifests are a language, there can be multiple implementations of it, and multiple dialects will necessarily spring up.

    • mountainriver a day ago

      Which is the future of everything, and which Kubernetes does a very bad job at.

      • mdaniel a day ago

        You stopped typing; what does Kubernetes do a bad job at with relation to scheduling workloads that declare they need at least 1 GPU resource but should be limited to no more than 4 GPU resources on a given Node? https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus...

        • mountainriver 9 hours ago

          Because they aren't consistently available in one region, and Kubernetes can't scale globally. If you don't believe me, go try to train LLMs on Kubernetes with on demand resources, good luck!

          • mdaniel 8 hours ago

            Oh, ok, sorry, I didn't realize that we were conflating Kubernetes, a container orchestration control plane, with your favorite cloud provider not putting in enough GPU hardware orders per region. My fault

  • znpy a day ago

    > * Globally distributed

    Non-requirement?

    > * Tailnet as the default network stack

    That would probably be the first thing I look to rip out if I ever was to use that.

    Kubernetes assuming the underlying host only has a single NIC has been a plague for the industry, setting it back ~20 years and penalizing everyone that's not running on the cloud. Thank god there are multiple CNI implementations.

    Only recently with Multus (https://www.redhat.com/en/blog/demystifying-multus) some sense seem to be coming back into that part of the infrastructure.

    > * Multi-tenant from the ground up

    How would this be any different from kubernetes?

    > * Bittorrent as the default storage stack

    Might be interesting, unless you also mean seeding public container images. Egress traffic is crazy expensive.

    • mountainriver a day ago

      >> * Globally distributed

      > Non-requirement?

      It is a requirement because you can't find GPUs in a single region reliably and Kubernetes doesn't run on multiple regions.

      >> * Tailnet as the default network stack

      > That would probably be the first thing I look to rip out if I ever was to use that.

      This is fair, we find it very useful because it easily scales cross clouds and even bridges them locally. It was the simplest solution we could implement to get those properties, but in no way would we need to be married to it.

      >> * Multi-tenant from the ground up

      > How would this be any different from kubernetes?

      Kubernetes is deeply not multi-tenant; anyone who has tried to build a multi-tenant solution on top of kube has dealt with this. I've done it at multiple companies now, and it's a mess.

      >> * Bittorrent as the default storage stack

      > Might be interesting, unless you also mean seeding public container images. Egress traffic is crazy expensive.

      Yeah, egress cost is a concern here, but it's lazy, so you don't pay for it unless you need it. This seemed like the lightest solution to sync data when you do live migrations cross-cloud. For instance, I need to move my dataset and ML model to another cloud, or just replicate it there.

      • senorrib a day ago

        > Kubernetes doesn't run on multiple regions

        How so? You can definitely use annotations in nodes and provision them in different regions.

        • mountainriver 9 hours ago

          It's not really built to be globally scalable, Mesos was built to be globally scalable. I don't know of any major hosted kubernetes provider that does this.

        • znpy 9 hours ago

          The user you're responding to probably doesn't really understand how Kubernetes works, or the trade-offs etcd has made in order to be "distributed".

          But you're right. You can launch a node pretty much anywhere, as long as you have network connectivity (and you don't even need full network connectivity, a couple of open TCP ports are enough).

          It's not really recommended (due to latency), but you can also run the control-plane across different regions.

    • stackskipton a day ago

      What is use case for multiple NICs outside bonding for hardware failure?

      Every time I’ve had multiple NICs on a server with different IPs, I’ve regretted it.

      • mdaniel a day ago

        I'd guess management access, or the old school way of doing vLANs. Kubernetes offers Network Policies to solve the risk of untrusted workloads in the cluster accessing both pods and ports on pods that they shouldn't https://kubernetes.io/docs/concepts/services-networking/netw...

        Network Policies are also defense in depth. Since another Pod would need to know its sibling Pod's name or IP to reach it directly, the correct boundary for such things is not to expose management toys in the workload's Service, but rather to create a separate Service that just exposes those management ports.

        Akin to:

          interface Awesome { String getFavoriteColor(); }
          interface Management { void setFavoriteColor(String value); }
          class MyPod implements Awesome, Management {}
        
        but then only make either Awesome, or Management, available to the consumers of each behavior
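
        A minimal sketch of the NetworkPolicy side of that idea (labels and port are illustrative): only pods labeled as consumers of the "Awesome" behavior may reach that port, and the management port simply isn't listed:

            apiVersion: networking.k8s.io/v1
            kind: NetworkPolicy
            metadata:
              name: allow-awesome-only
            spec:
              podSelector:
                matchLabels:
                  app: my-pod
              policyTypes: ["Ingress"]
              ingress:
                - from:
                    - podSelector:
                        matchLabels:
                          role: awesome-consumer
                  ports:
                    - port: 8080   # the "Awesome" port; the management port is not exposed here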
      • znpy a day ago

        A NIC dedicated to SAN traffic, for example. People who are serious about networked storage don't run their storage I/O on the same NIC where they serve traffic.

    • nine_k a day ago

      > Non-requirement

      > the first thing I look to rip out

      This only shows how varied the requirements are across the industry. One size does not fit all, hence multiple materially different solutions spring up. This is only good.

      • znpy a day ago

        > One size does not fit all, hence multiple materially different solutions spring up.

        Sooo… like what kubernetes does today?

  • mdaniel a day ago

    heh, I think you didn't read the room given this directory https://github.com/agentsea/nebulous/blob/v0.1.88/deploy/cha...

    Also, ohgawd please never ever do this ever ohgawd https://github.com/agentsea/nebulous/blob/v0.1.88/deploy/cha...

    • mountainriver a day ago

      Why not? We can run on Kube and extend it to multi-region when needed, or we can run on any VM as a single binary, or just your laptop.

      If you mean Helm, yeah I hate it but it is the most common standard. Also not sure what you mean by the secret, that is secure.

      • mdaniel a day ago

        Secure from what, friend? It's a credential leak waiting to happen, to say nothing of the need to now manage IAM Users in AWS. That is the 2000s way of authenticating with AWS, and reminds me of people who still use passwords for ssh. Sure, it works fine, until some employee leaves and takes the root password with them

        • mountainriver 9 hours ago

          Secure credentials are a leak waiting to happen? Scope them, encrypt them and they are secure. We also support workload identity in a lot of places but often just using API keys is simpler

mrweasel 2 days ago

What I would add is "sane defaults", as in unless you pick something different, you get a good enough load balancer/network/persistent storage/whatever.

I'd agree that YAML isn't a good choice, but neither is HCL. Ever tried reading Terraform? Yeah, that's bad too. Inherently we need a better way to configure Kubernetes clusters, and changing out the language only does so much.

IPv6, YES, absolutely. Everything Docker, container and Kubernetes should have been IPv6-only internally from the start. Want IPv4? That should be handled by a special-case ingress controller.

  • zdw 2 days ago

    Sane defaults is in conflict with "turning you into a customer of cloud provider managed services".

    The longer I look at k8s, the more I see it as "batteries not included" around storage, networking, etc., with the result being that the batteries come with a bill attached from AWS, GCP, etc. K8s is less an open source project and more a way to encourage dependency on these extremely lucrative gap-filler services from the cloud providers.

    • JeffMcCune a day ago

      Except you can easily install calico, istio, and ceph on used hardware in your garage and get an experience nearly identical to every hyper scaler using entirely free open source software.

      • zdw a day ago

        Having worked on on-prem K8s deployments, yes, you can do this. But getting it to production grade is very different than a garage-quality proof of concept.

        • mdaniel a day ago

          I think OP's point was: how much of that production-grade woe is the fault of Kubernetes versus, sure, it turns out booting up a PaaS from scratch is hard as nails. I think k8s's pluggable design also blurs that boundary in most people's heads. I can't think of the last time the control plane shit itself, whereas everyone and their cousin has a CLBO story for the component controllers installed on top of k8s.

          • zdw a day ago

            CLBO?

            • mdaniel a day ago

              Crash Loop Back Off

    • Too a day ago

      You could have sane defaults for something running locally. Take Jenkins as an example: all you do is start it up and give your agents an SSH key. That's it, you get a distributed execution environment. Not saying Jenkins does what K8s does, but from an operational standpoint, it shouldn't have to be much different.

      I invite anyone to try reading https://kubernetes.io/docs/setup/production-environment/. It starts with "there are many options". Inside every option there are even more options and considerations, half of which nobody cares about. Why would I ever care which container runtime or network stack is used? It's unnecessary complexity. Those are edge cases; give something sane out of the box.

  • ChocolateGod a day ago

    I find it easier to read Terraform/HCL than YAML, for the simple fact that it doesn't rely on me trying to process invisible characters.

  • doctoboggan a day ago

    > What I would add is "sane defaults"

    This is exactly what I love about k3s.

nunez a day ago

I _still_ think Kubernetes is insanely complex, despite all that it does. It seems less complex these days because it's so pervasive, but complex it remains.

I'd like to see more emphasis on UX for v2 for the most common operations, like deploying an app and exposing it, then doing things like changing service accounts or images without having to drop into kubectl edit.

Given that LLMs are it right now, this probably won't happen, but no harm in dreaming, right?

  • Pet_Ant a day ago

    Kubernetes itself contains so many layers of abstraction. There are pods, which is the core new idea, and it's great. But now there are deployments, and rep sets, and namespaces... and it makes me wish we could just use Docker Swarm.

    Even Terraform seems to live on just a single-layer and was relatively straight-forward to learn.

    Yes, I am in the middle of learning K8s so I know exactly how steep the curve is.

    • jakewins a day ago

      The core idea isn’t pods. The core idea is reconciliation loops: you have some desired state - a picture of how you’d like a resource to look or be - and little controller loops that indefinitely compare that to the world, and update the world.

      Much of the complexity then comes from the enormous amount of resource types - including all the custom ones. But the basic idea is really pretty small.

      I find terraform much more confusing - there’s a spec, and the real world.. and then an opaque blob of something I don’t understand that terraform sticks in S3 or your file system and then.. presumably something similar to a one-shot reconciler that wires that all together each time you plan and apply?

      • vrosas a day ago

        Someone saying "This is complex but I think I have the core idea" and someone responding "That's not the core idea at all" is hilarious and sad. BUT ironically, what you just laid out about TF is exactly the same - you just manually trigger the loop (via CI/CD) instead of having a thing waiting for new configs to be loaded. The state file you're referencing is just a cache of the current state, and TF reconciles the old and new state.

        • jauco a day ago

          Always had the conceptual model that terraform executes something that resembles a merge using a three way diff.

          There’s the state file (base commit, what the system looked like the last time terraform succesfully executed). The current system (the main branch, which might have changed since you “branched off”) and the terraform files (your branch)

          Running terraform then merges your branch into main.

          Now that I’m writing this down, I realize I never really checked if this is accurate, tf apply works regardless of course.

          • mdaniel a day ago

            and then the rest of the owl is working out the merge conflicts :-D

            I don't know how to have a cute git analogy for "but first, git deletes your production database, and then recreates it, because some attribute changed that made the provider angry"

      • mdaniel a day ago

        > a one-shot reconciler that wires that all together each time you plan and apply?

        You skipped the { while true; do tofu plan; tofu apply; echo "well shit"; patch; done; } part since the providers do fuck-all about actually, no kidding, saying whether the plan could succeed

      • jonenst a day ago

        To me the core of k8s is pod scheduling on nodes, networking ingress (e.g. nodeport service), networking between pods (everything addressable directly), and colocated containers inside pods.

        Declarative reconciliation is (very) nice but not irreplaceable (and actually not mandatory, e.g. kubectl run xyz)

        • jakewins a day ago

          After you’ve run kubectl run, and it’s created the pod resource for you, what are you imagining will happen without the reconciliation system?

          You can invent a new resource type that spawns raw processes if you like, and then use k8s without pods or nodes, but if you take away the reconciliation system then k8s is just an idle etcd instance

  • throwaway5752 a day ago

    I've come to think that it is a case of "the distinctions between types of computer programs are a human construct" problem.

    I agree with you on a human level. Operators and controllers remind me of COM and CORBA, in a sense. They are highly abstract things that are intrinsically so flexible that they allow judgement (and misjudgement) in design.

    For simple implementations, I'd want k8s-lite: more opinionated and less flexible, something which doesn't allow for as much shooting oneself in the foot. For very complex implementations, though, I've felt the existing abstractions to be limiting. There is a reason why a single cluster is sometimes the basis for cell boundaries in cellular architectures.

    I sometimes wonder if one single system - kubernetes 2.0 or anything else - can encompass the full complexity of the problem space while being tractable to work with by human architects and programmers.

  • NathanFlurry a day ago

    [flagged]

    • stackskipton a day ago

      Ops type here, after looking at Rivet, I've started doing The Office "Dear god no, PLEASE NO"

      Most people are looking for a container management runtime with an HTTP(S) frontend that will handle automatic certificates from Let's Encrypt.

      I don't want Functions/Actors or require this massive suite:

      FoundationDB: Actor state

      CockroachDB: OLTP

      ClickHouse: Developer-facing monitoring

      Valkey: Caching

      NATS: Pub/sub

      Traefik: Load balancers & tunnels

      This is just swapping Kubernetes cloud lock-in (with KEDA and some other more esoteric operators) for Rivet Cloud lock-in. At least Kubernetes is slightly more portable than this.

      Oh yea, I don't know what Clickhouse is doing with monitoring but Prometheus/Grafana suite called, said they would love for you to come home.

    • coderatlarge a day ago

      where is that in the design space relative to where goog internal cluster management has converged to after the many years and the tens of thousands of engineers who have sanded it down under heavy fire since the original borg?

johngossman 2 days ago

Not a very ambitious wishlist for a 2.0 release. Everyone I talk to complains about the complexity of k8s in production, so I think the big question is: could you do a 2.0 with sufficient backward compatibility that it could be adopted incrementally, and make it simpler? Backward compatibility almost always means complexity increases, as the new system does its new things and all the old ones.

  • herval a day ago

    The question is always what part of that complexity can be eliminated. Every “k8s abstraction” I’ve seen to date either only works for a very small subset of stuff (eg the heroku-like wrappers) or eventually develops a full blown dsl that’s as complex as k8s (and now you have to learn that job-specific dsl)

    • mdaniel a day ago

      Relevant: Show HN: Canine – A Heroku alternative built on Kubernetes - https://news.ycombinator.com/item?id=44292103 - June, 2025 (125 comments)

      • herval a day ago

        Yep, that's the latest of a long lineage of such projects (one of which I worked on myself). Others include Kubero, Dokku, Porter, kr0, etc. There was a moment back in 2019 when every big tech company was trying to roll out its own K8s DSL (I know of Twitter, Airbnb, WeWork, etc.).

        For me, the only thing that really changed was LLMs - chatgpt is exceptional at understanding and generating valid k8s configs (much more accurately than it can do coding). It's still complex, but it feels I have a second brain to look at it now

        • programd a day ago

          Maybe that should be the future of K8s 2.0. Instead of changing the core of the beast tweak it minimally to get rid of whatever limitations are annoying and instead allocate resources to put a hefty AI in front of it so that human toil is reduced.

          At some point you won't need a fully dedicated ops team. I think a lot of people in this discussion are oblivious to where this is heading.

          • mdaniel a day ago

            > At some point you won't need a fully dedicated ops team

            I think programmers are more likely to go extinct before that version of reality materializes. That's my secret plan on how to survive the alleged AI apocalypse: AI ain't running its own data flow pipelines into its own training job clusters. As a non-confabulation, I give you https://status.openai.com. They have one of every color, they're collecting them!

jitl a day ago

I feel like I’m already living in the Kubernetes 2.0 world because I manage my clusters & its applications with Terraform.

- I get HCL, types, resource dependencies, data structure manipulation for free

- I use a single `tf apply` to create the cluster, its underlying compute nodes, related cloud stuff like S3 buckets, etc; as well as all the stuff running on the cluster

- We use terraform modules for re-use and de-duplication, including integration with non-K8s infrastructure. For example, we have a module that sets up a Cloudflare ZeroTrust tunnel to a K8s service, so with 5 lines of code I can get a unique public HTTPS endpoint protected by SSO for whatever running in K8s. The module creates a Deployment running cloudflared as well as configures the tunnel in the Cloudflare API.

- Many infrastructure providers ship signed well documented Terraform modules, and Terraform does reasonable dependency management for the modules & providers themselves with lockfiles.

- I can compose Helm charts just fine via the Helm terraform provider if necessary. Many times I see Helm charts that are just “create namespace, create foo-operator deployment, create custom resource from chart values” (like Datadog). For these I opt to just install the operator & manage the CRD from terraform directly, or via a thin Helm pass-through chart that just echoes whatever HCL/YAML I put in from Terraform values.

Terraform’s main weakness is orchestrating the apply process itself, similar to k8s with YAML or whatever else. We use Spacelift for this.

  • ofrzeta a day ago

    In a way it's redundant to have the state twice: once in Kubernetes itself and once in the Terraform state. This can lead to problems when resources are modified through mutating webhooks or similar. Then you need to mark your properties as "computed fields" or something like that. So I am not a fan of managing applications through TF. Managing clusters might be fine, though.

    • jitl 8 hours ago

      That's fair, and I think why more people don't go the Terraform route. Our setup is pretty simple which helps. We treat entire clusters more like namespaces, where a cluster will run a single main application and its support services as a form of "application level availability zone". We still get bin packing for all that stuff with each cluster, but maybe not MAXIMUM BIN PACKING that we'd get if we ran all the applications in a single big cluster, and there's some EKS cost paying for more clusters.

      We do sometimes have the mutating webhook stuff, for example when running 3rd party JVM stuff we tell the Datadog operator to inject JMX agents into applications using a mutating webhook. For those kinds of things we manage the application using the Helm provider pass-through I mentioned, so what Terraform stores in its state, diffs, and manages on change is the input manifests passed through Helm; for those resources it never inspects the Kubernetes API objects directly - it will just trigger a new helm release if the inputs change on the Terraform side or delete the helm release if removed.

      This is not a beautiful solution but it works well in practice with minimal fuss when we hit those Kubernetes provider annoyances.

    • theyinwhy 9 hours ago

      Exactly our experience hence we moved to plain K8s objects 6 years ago and never looked back.

benced a day ago

I found Kubernetes insanely intuitive coming from the frontend world. I was used to writing code that took in data and made the UI react to that - now I write code that the control plane uses to reconcile resources with config.

bigcat12345678 a day ago

》Ditch YAML for HCL

I maintained borgcfg 2015-2019

The biggest lesson k8s drew from borg is to replace bcl (borgcfg config language) with yaml (by Brian Grant)

Then this article suggests reversing that

Yep, knowledge not experienced is just fantasy

  • jitl 7 hours ago

    Just because borgcfg is painful (?) and YAML is a better option for the public, doesn't mean we can't continue and improve on YAML with more limited expression syntax. Terraform/HCL is often annoying because it doesn't give users tools like easy function declarations for re-use. You get one re-use abstraction: create a module, which is heavy-weight - needs a new directory, "provider version" declarations, a lot of boilerplate per input and output. In return this means the amount of creative shenanigans one needs to understand is typically pretty low.

  • Too 21 hours ago

    Interested to hear more about this. Care to elaborate more? Maybe there are already articles written about this?

  • simoncion a day ago

    I've spent many, many, many years working with YAML at $DAYJOB. YAML is just a really bad choice for configuration files that are longer than a handful of lines.

    The fact that YAML encourages you to never put double-quotes around strings means that (based on my experience at $DAYJOB) one gets turbofucked by YAML mangling input by converting something that was intended to be a string into another datatype at least once a year. On top of that, the whitespace-sensitive mode makes long documents extremely easy to get lost in, and hard to figure out how to correctly edit. On top of that, the fact that the default mode of operation for every YAML parser I'm aware of emits YAML in the whitespace-sensitive mode means that approximately zero of the YAML you will ever encounter is written in the (more sane) whitespace-insensitive mode. [0]
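    To make the mangling concrete (a minimal PyYAML sketch -- the keys are made up, the behavior is not):

      import yaml

      # PyYAML implements YAML 1.1, so unquoted scalars silently change type
      print(yaml.safe_load("country: NO"))     # {'country': False}
      print(yaml.safe_load("version: 1.20"))   # {'version': 1.2}
      print(yaml.safe_load("debug: off"))      # {'debug': False}
      print(yaml.safe_load('country: "NO"'))   # {'country': 'NO'} -- only quoting saves you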

    It may be that bcl is even worse than YAML. I don't know, as I've never worked with bcl. YAML might be manna from heaven in comparison... but that doesn't mean that it's actually good.

    [0] And adding to the mess is the fact that there are certain constructions (like '|') that you can't use in the whitespace-insensitive mode... so some config files simply can't be straightforwardly expressed in the other mode. "Good" job there, YAML standard authors.

geoctl 2 days ago

I would say k8s 2.0 needs:

1. gRPC/proto3-based APIs, to make controlling k8s clusters easier from any programming language instead of practically just Golang as is the case currently. This could also make dealing with k8s controllers easier and more manageable, even though it admittedly might complicate things at the API-server side when it comes to CRDs.

2. PostgreSQL, or a pluggable storage backend, by default instead of etcd.

3. A clear identity-based, L7-aware, ABAC-based access control interface that can be implemented by CNIs, for example.

4. userns applied by default.

5. An easier pluggable per-pod CRI system where microVM- and container-based runtimes can easily co-exist based on the workload type.

  • jitl a day ago

    All the APIs, including CRDs, already have a well described public & introspectable OpenAPI schema you can use to generate clients. I use the TypeScript client generated and maintained by Kubernetes organization. I don’t see what advantage adding a binary serialization wire format has. I think gRPC makes sense when there’s some savings to be had with latency, multiplexing, streams etc but control plane things like Kubernetes don’t seem to me like it’s necessary.
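    The generated clients all look about the same; here's the same idea sketched with the official Python client (the namespace is just an example, it's whatever your kubeconfig points at):

      from kubernetes import client, config

      # Client generated from the same OpenAPI schema mentioned above
      config.load_kube_config()
      v1 = client.CoreV1Api()
      for pod in v1.list_namespaced_pod(namespace="default").items:
          print(pod.metadata.name, pod.status.phase)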

    • geoctl a day ago

      I haven't used CRDs myself for a few years now (probably since 2021), but I still remember developing CRDs was an ugly and hairy experience to say the least, partly due to the flaws of Golang itself (e.g. no traits like in Rust, no macros, no enums, etc...). With protobuf you can easily compile your definitions to any language with clear enum, oneof implementations, you can use the standard protobuf libraries to do deepCopy, merge, etc... for you and you can also add basic validations in the protobuf definitions and so on. gRPC/protobuf will basically allow you to develop k8s controllers very easily in any language.

      • mdaniel a day ago

        CRDs are not tied to golang in any way whatsoever; <https://www.redhat.com/en/blog/writing-custom-controller-pyt...> and <https://metacontroller.github.io/metacontroller/guide/create...> are two concrete counter-examples, with the latter being the most "microservices" extreme. You can almost certainly implement them in bash if you're trying to make the front page of HN

        • geoctl a day ago

          I never said that CRDs are tied to Golang. I said that the experience of compiling CRDs to Golang types (with controller-gen back then, or whatever is being used these days) was simply ugly, partly due to the flaws of the language itself. What I mean is that gRPC can standardize the process of compiling both k8s's own resource definitions as well as CRDs, to make the process of developing k8s controllers in any language simply much easier. However, this will probably complicate the logic of the API server, which has to understand and decode the binary protobuf resource serializations compared to the current text-based JSON representations.

    • ofrzeta a day ago

      I think the HTTP API with OpenAPI schema is part of what's so great about Kubernetes and also a reason for its success.

    • znpy a day ago

      > have a well described public & introspectable OpenAPI schema you can use to generate clients.

      Last time I tried loading the openapi schema in the swagger ui on my work laptop (this was ~3-4 years ago, and i had an 8th gen core i7 with 16gb ram) it hang my browser, leading to tab crash.

      • mdaniel a day ago

        Loading it in what? I just slurped the 1.8MB openapi.json for v1.31 into Mockoon and it fired right up instantly

  • dilyevsky a day ago

    1. The built-in types are already protos. Imo gRPC wouldn't be a good fit - it would actually make the system harder to use.

    2. Already achievable today via kine [0].

    3. Couldn't you build this today via a regular CNI? Cilium NetworkPolicies and others basically do this already.

    4,5 probably don't require 2.0 - can be easily added within existing API via KEP (cri-o already does userns configuration based on annotations)

    [0] - https://github.com/k3s-io/kine

    • geoctl a day ago

      Apart from 1 and 3, probably everything else can be added today if the people in charge have the will to do that, and that's assuming that I am right and these points are actually that important to be standardized. However the big enterprise-tier money in Kubernetes is made from dumbing down the official k8s interfaces especially those related to access control (e.g. k8s own NetworkPolicy compared to Istio's access control related resources).

liampulles 18 hours ago

The whole reason that Kubernetes is driven off YAML (or JSON) is that the YAML is a source of truth of what the user intention is. Piping HCL, which has dynamically determined values, directly to the k8s API would make it harder to figure out what the desired state was at apply time when you are troubleshooting issues later.

The easy solution here is to generate the YAML from the HCL (or from helm, or whatever other abstraction you choose) and to commit and apply the YAML.

More broadly, I think Kubernetes has a bit of a marketing problem. There is a core 20% of the k8s API which is really good and then a remaining 80% of niche stuff which only big orgs with really complex deployments need to worry about. You likely don't need (and should not use) that cloud native database that works off CRDs. But if you acknowledge this aspect of its API and just use the 20%, then you will be happy.

jillesvangurp 21 hours ago

> What would a Kubernetes 2.0 look like

A lot simpler hopefully. It never really took off but docker swarm had a nice simplicity to it. Right idea, but Docker Inc. mismanaged it.

Unfortunately, Kubernetes evolved into a bit of a monster. Designed to be super complicated. Full of pitfalls. In need of vast amounts of documentation, training, certification, etc. Layers and layers of complexity, hopelessly overengineered. I.e. lots of expensive hand holding. My rule of thumb with technology is that if the likes of Red Hat, IBM, etc. get really excited: run away. Because they are seeing lots of dollars for exactly the kind of stuff I don't want in my life.

Leaving Kubernetes 2.0 to the people that did 1.0 is probably just going to lead to more of the same. The people behind it need it to be convoluted and hard to use. That's how they make money. If it was easy, they'd be out of business.

darkwater 2 days ago

I totally dig the HCL request. To be honest I'm still mad at Github that initially used HCL for Github Actions and then ditched it for YAML when they went stable.

  • carlhjerpe a day ago

    I detest HCL, the module system is pathetic. It's not composable at all and you keep doing gymnastics to make sure everything is known at plan time (like using lists where you should use dictionaries) and other anti-patterns.

    I use Terranix to make config.tf.json which means I have the NixOS module system that's composable enough to build a Linux distro at my fingertips to compose a great Terraform "state"/project/whatever.

    It's great to be able to run some Python to fetch some data, dump it in JSON, read it with Terranix, generate config.tf.json and then apply :)

    • jitl a day ago

      What’s the list vs dictionary issue in Terraform? I use a lot of dictionaries (maps in tf speak), terraform things like for_each expect a map and throw if handed a list.

      • carlhjerpe a day ago

        Internally a lot of modules cast dictionaries to lists of the same length because the keys of the dict might not be known at plan time or something. The Terraform AWS VPC module does this internally for many things.

        I couldn't tell you exactly, but modules always end up either not exposing enough or exposing too much. If I were to write my module with Terranix I can easily replace any value in any resource from the module I'm importing using "resource.type.name.parameter = lib.mkForce "overridenValue";" without having to expose that parameter in the module "API".

        The nice thing is that it generates "Terraform"(config.tf.json) so the supremely awesome state engine and all API domain knowledge bound in providers work just the same and I don't have to reach for something as involved as Pulumi.

        You can even mix Terranix with normal HCL since config.tf.json is valid in the same project as HCL. A great way to get started is to generate your provider config and other things where you'd reach to Terragrunt/friends. Then you can start making options that makes resources at your own pace.

        The terraform LSP sadly doesn't read config.tf.json yet so you'll get warnings regarding undeclared locals and such but for me it's worth it, I generally write tf/tfnix with the provider docs open and the language (Nix and HCL) are easy enough to write without full LSP.

        https://terranix.org/ says it better than me, but by doing it with Nix you get programmatic access to the biggest package library in the world to use at your discretion (build scripts to fetch values from weird places, run impure scripts with null_resource or its replacements) and an expressive functional programming language where you can do recursion and stuff; you can use derivations to run any language to transform strings with ANY tool.

        It's like Terraform "unleashed" :) Forget "dynamic" blocks, bad module APIs and hacks (While still being able to use existing modules too if you feel the urge).

        • Groxx a day ago

          Internally... in what? Not HCL itself, I assume? Also I'm not seeing much that implies HCL has a "plan time"...

          I'm not familiar with HCL so I'm struggling to find much here that would be conclusive, but a lot of this thread sounds like "HCL's features that YAML does not have are sub-par and not sufficient to let me only use HCL" and... yeah, you usually can't use YAML that way either, so I'm not sure why that's all that much of a downside?

          I've been idly exploring config langs for a while now, and personally I tend to just lean towards JSON5 because comments are absolutely required... but support isn't anywhere near as good or automatic as YAML :/ HCL has been on my interest-list for a while, but I haven't gone deep enough into it to figure out any real opinion.

        • jitl a day ago

          I think Pulumi is in a similar spot, you get a real programming language (of your choice) and it gets to use the existing provider ecosystem. You can use the programming language composition facilities to work around the plan system if necessary, although their plans allow more dynamic stuff than Terraform.

          The setup with Terranix sounds cool! I am pretty interested in build system type things myself, I recently wrote a plan/apply system too that I use to manage SQL migrations.

          I want to learn Nix, but I think that like Rust, it's just a bit too wide/deep for me to approach on my own time without a tutor/co-worker or forcing function like a work project to push me through the initial barrier.

          • carlhjerpe a day ago

            Yep it's similar, but you bring all your dependencies with you through Nix rather than a language specific package manager.

            Try using something like devenv.sh initially just to bring tools into $PATH in a distro agnostic & mostly-ish MacOS compatible way (so you can guarantee everyone has the same versions of EVERYTHING you need to build your thing).

            Learn the language basics after it brings you value already, then learn about derivations and then the module system which is this crazy composable multilayer recursive magic merging type system implemented on top of Nix, don't be afraid to clone nixpkgs and look inside.

            Nix derivations are essentially Dockerfiles on steroids, but Nix language brings /nix/store paths into the container, sets environment variables for you and runs some scripts, and all these things are hashed so if any input changes it triggers automatic cascading rebuilds, but also means you can use a binary cache as a kind of "memoization" caching thingy which is nice.

            It's a very useful tool, it's very non-invasive on your system (other than disk space if you're not managing garbage collection) and you can use it in combination with other tools.

            Makes it very easy to guarantee your DevOps scripts runs exactly your versions of all CLI tools and build systems and whatever even if the final piece isn't through Nix.

            Look at "pgroll" for Postgres migrations :)

            • jitl a day ago

              pgroll seems neat but I ended up writing my own tools for this one because I need to do somewhat unique shenanigans like testing different sharding and resource allocation schemes in Materialize.com (self hosted). I have 480 source input schemas (postgres input schemas described here if you're curious, the materialize stuff is brand new https://www.notion.com/blog/the-great-re-shard) and manage a bunch of different views & indexes built on top of those; create a bunch of different copies of the views/indexes striped across compute nodes, like right now I'm testing 20 schemas per whole-aws-instance node, versus 4 schemas per quarter-aws-node, M/N*Y with different permutations of N and Y. With the plan/apply model I just need to change a few lines in TypeScript and get the minimal changes to all downstream dependencies needed to roll it out.

        • mdaniel a day ago

          Sounds like the kustomize mental model: take code you potentially don't control, apply patches to it until it behaves like you wish, apply

          If the documentation and IDE story for kustomize was better, I'd be its biggest champion

          • carlhjerpe a day ago

            You can run Kustomize in a Nix derivation with inputs from Nix and apply the output using Terranix and the kubectl provider, gives you a very nice reproducible way to apply Kubernetes resources with the Terraform state engine, I like how Terraform makes managing the lifecycle of CRUD with cascading changes and replacements which often is pretty optimal-ish at least.

            And since it's Terraform you can create resources using any provider in the registry to create resources according to your Kubernetes objects too, it can technically replace things like external-dns and similar controllers that create stuff in other clouds, but in a more "static configuration" way.

            Edit: This works nicely with Gitlab Terraform state hosting thingy as well.

          • darkwater 21 hours ago

            Well kustomize is IMO where using YAML creates the biggest pain. The patches thing is basically unreadable after you add more than 2-3 of them. I understand that you can also delete nodes, which is pretty powerful, but I really long for the "deep merge" Puppet days.

    • Spivak a day ago

      HCL != Terraform

      HCL, like YAML, doesn't even have a module system. It's a data serialization format.

mdaniel 2 days ago

> Allow etcd swap-out

From your lips to God's ears. And, as they correctly pointed out, this work is already done, so I just do not understand the holdup. Folks can continue using etcd if it's their favorite, but mandating it is weird. And I can already hear the butwhataboutism yet there is already a CNCF certification process and a whole subproject just for testing Kubernetes itself, so do they believe in the tests or not?

> The Go templates are tricky to debug, often containing complex logic that results in really confusing error scenarios. The error messages you get from those scenarios are often gibberish

And they left off that it is crazypants to use a textual templating language for a whitespace sensitive, structured file format. But, just like the rest of the complaints, it's not like we don't already have replacements, but the network effect is very real and very hard to overcome

That barrier of "we have nicer things, but inertia is real" applies to so many domains, it just so happens that helm impacts a much larger audience

ra7 a day ago

The desired package management system they describe sounds a lot like Carvel's kapp-controller (https://carvel.dev/kapp-controller/). The Carvel ecosystem, which includes its own YAML templating tool called 'ytt', isn't the most user friendly in my experience and can feel a bit over-engineered. But it does get the idea of Kubernetes-native package management using CRDs mostly right.

bionhoward 12 hours ago

Would rust be a lot better than HCL to replace yaml? Just learning Kubernetes and want to say I hope Kubernetes 2.0 uses a “real” programming language with a decent type system.

A big benefit could be for the infrastructure language to match the developer language. However, knowing software, reinventing something like Kubernetes is a bottomless pit type of task, best off just dealing with it and focusing on the Real Work (TM), right?

akdor1154 a day ago

I think the yaml / HCL and package system overlap..

I wouldn't so much go HCL as something like Jsonnet, Pkl, Dhall, or even (inspiration, not recommendation) Nix - we need something that allows a schema for powering an LSP, with enough expressivity to avoid the need for Helm's templating monstrosity, and ideally with the ability for users to override things that library/package authors haven't provided explicit hooks for.

Does that exist yet? Probably not, but the above languages are starting to approach it.

  • pas a day ago

    k8s could provide a strongly-typed DSL, like cdk8s+

    it's okay to be declarative (the foundation layer), but people usually think in terms of operations (deploy this, install that, upgrade this, assign some vhost, filter that, etc.)

    and even for the declarative stuff, a builder pattern works well; it can be super convenient, sufficiently terse, and typed, and easily composed (and assembled via normal languages, not templates)
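    e.g. something in this spirit -- a hypothetical fluent builder sketched in Python purely for illustration, not cdk8s+'s actual API:

      class Deployment:
          """Hypothetical fluent builder -- illustrating the shape, not a real library."""

          def __init__(self, name):
              self.name = name
              self._image = "scratch"
              self._replicas = 1

          def image(self, ref):
              self._image = ref
              return self

          def replicas(self, n):
              self._replicas = n
              return self

          def render(self):
              labels = {"app": self.name}
              return {
                  "apiVersion": "apps/v1",
                  "kind": "Deployment",
                  "metadata": {"name": self.name, "labels": labels},
                  "spec": {
                      "replicas": self._replicas,
                      "selector": {"matchLabels": labels},
                      "template": {
                          "metadata": {"labels": labels},
                          "spec": {"containers": [{"name": self.name, "image": self._image}]},
                      },
                  },
              }

      manifest = Deployment("web").image("nginx:1.27").replicas(3).render()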

    ...

    well. anyway. maybe by the time k8s2 rolls around Go will have at least normal error handling.

rwmj 2 days ago

Make there be one, sane way to install it, and make that method work if you just want to try it on a single node or single VM running on a laptop.

  • mdaniel 2 days ago

    My day job makes this request of my team right now, and yet when trying to apply this logic to a container and cloud-native control plane, there are a lot more devils hiding in those details. Use MetalLB for everything, even if NLBs are available? Use Ceph for storage even if EBS is available? Definitely don't use Ceph on someone's 8GB laptop. I can keep listing "yes, but" items that make doing such a thing impossible to troubleshoot because there's not one consumer

    So, to circle back to your original point: rke2 (Apache 2) is a fantastic, airgap-friendly, intelligence community approved distribution, and pairs fantastic with rancher desktop (also Apache 2). It's not the kubernetes part of that story which is hard, it's the "yes, but" part of the lego build

    - https://github.com/rancher/rke2/tree/v1.33.1%2Brke2r1#quick-...

    - https://github.com/rancher-sandbox/rancher-desktop/releases

jcastro 2 days ago

For the confusion around verified publishing, this is something the CNCF encourages artifact authors and their projects to set up. Here are the instructions for verifying your artifact:

https://artifacthub.io/docs/topics/repositories/

You can do the same with just about any K8s related artifact. We always encourage projects to go through the process but sometimes they need help understanding that it exists in the first place.

Artifacthub is itself an incubating project in the CNCF, ideas around making this easier for everyone are always welcome, thanks!

(Disclaimer: CNCF Staff)

  • calcifer a day ago

    > We always encourage projects to go through the process but sometimes they need help understanding that it exists in the first place.

    Including ingress-nginx? Per OP, it's not marked as verified. If even the official components don't bother, it's hard to recommend it to third parties.

    • aritashion 19 hours ago

      As far as I'm aware, ingress-nginx is just the de facto ingress, but not actually an official/reference implementation?

      • mdaniel 10 hours ago

        I think that's about 20% of the complaints in this thread: there isn't an official/reference implementation of just about any of the components, because it matters a great deal what functionality you already have lying around versus needing to bring with you

dhorthy 11 hours ago

Overall I like this but confused about this part

“Yaml doesn’t enforce types but HCL does”

Is the same schema-based validation that is 1) possible client-side with HCL and 2) enforced server-side by k8s not also trivial to enforce client-side in an IDE?

aranw a day ago

YAML and Helm are my two biggest pain points with k8s and I would love to see them replaced with something else. CUE for YAML would be really nice. As for replacing Helm, I'm not too sure really. Perhaps with YAML being replaced by CUE maybe something more powerful and easy to understand could evolve from using CUE?

hosh a day ago

While we're speculating:

I disagree that YAML is so bad. I don't particularly like HCL. The tooling I use doesn't care though -- as long as I can still specify things in JSON, I can generate (not template) what I need. It would be more difficult to generate HCL.

I'm not a fan of Helm, but it is the de facto package manager. The main reason I don't like Helm has more to do with its templating system. Templated YAML is very limiting, when compared to using a full-fledged language platform to generate a datastructure that can be converted to JSON. There are some interesting things you can do with that. (cdk8s is like this, but it is not a good example of what you can do with a generator).

On the other hand, if HCL allows us to use modules, scoping, and composition, then maybe it is not so bad after all.

  • hamishwhc 16 hours ago

    > cdk8s is like this, but it is not a good example of what you can do with a generator

    As a die-hard fan of cdk8s (and the AWS CDK), I am curious to hear about this more. What do you feel is missing or could be done better?

    • hosh 9 hours ago

      I used TypeScript cdk8s, or tried to. Manipulating the objects was unwieldy.

      I wrote https://github.com/matsuri-rb/matsuri ... I have not really promoted it. I tried cdk8s because the team I was working with used TypeScript and not Ruby, and I thought cdk8s would have worked well, since it generates manifests instead of templating them.

      Matsuri takes advantage of language features in Ruby not found in TypeScript (and probably not Python) that allow for composing things together. Instead of trying to model objects, it is based around constructing a hash that is then converted to JSON. It uses fine-grained method overriding to allow for (1) mixins, and (2) configuration from default values. The result is that with very little ceremony, I can get something to construct the manifest I need. There was a lot of extra ceremony and boilerplate needed to do anything in the TypeScript cdk8s.

      While I can use class inheritance with Matsuri, over the years, I had moved away from it because it was not as robust as mixins (compositions). It was quite the shock to try to work with Typescript cdk8s and the limitations of that approach.

      The main reason I had not promoted Matsuri is because this tool is really made for people who know Ruby well ... but that might have been a career mistake not to try. Instead of having 10 years to slowly get enough support behind it, people want something better supported such as cdk8s or Helmfiles.

ExoticPearTree 18 hours ago

I wish for k8s 2.0 to be less verbose when it comes to deploying anything on it.

I want to be able to say in two or five lines of YAML:

- run this as 3 pods with a max of 5

- map port 80 to this load balancer

- use these environment variables

I don't really care if it's YAML or HCL. Moving from YAML to HCL is going to be an endless issue of "I forgot to close a curly bracket somewhere" versus "I missed an indent somewhere".

  • kachapopopow 18 hours ago

    Helm does that, kinda. I use the bitnami base chart for my own deployments and it works pretty well.

  • edude03 14 hours ago

    knative might be what you're looking for

d4mi3n a day ago

I agree with the author that YAML as a configuration format leaves room for error, but please, for the love of whatever god or ideals you hold dear, do not adopt HCL as the configuration language of choice for k8s.

While I agree type safety in HCL beats that of YAML (a low bar), it still leaves a LOT to be desired. If you're going to go through the trouble of considering a different configuration language anyway, let's do ourselves a favor and consider things like CUE[1] or Starlark[2] that offer either better type safety or much richer methods of composition.

1. https://cuelang.org/docs/introduction/#philosophy-and-princi...

2. https://github.com/bazelbuild/starlark?tab=readme-ov-file#de...

  • mdaniel a day ago

    I repeatedly see this "yaml isn't typesafe" claim but have no idea where it's coming from since all the Kubernetes APIs are OpenAPI, and thus JSON Schema, and since YAML is a superset of JSON it is necessarily typesafe

    Every JSON Schema aware tool in the universe will instantly know this PodSpec is wrong:

      kind: 123
      metadata: [ {you: wish} ]
    
    I think what is very likely happening is that folks are -- rightfully! -- angry about using a text templating language to try and produce structured files. If they picked jinja2 they'd have the same problem -- it does not consider any textual output as "invalid" so jinja2 thinks this is a-ok

      jinja2.Template("kind: {{ youbet }}").render(youbet=True)
    
    I am aware that helm does *YAML* sanity checking, so one cannot just emit whatever crazypants yaml they wish, but it does not then go one step further to say "uh, your json schema is fubar friend"
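    And indeed, any off-the-shelf validator flags it immediately -- here with the jsonschema library and a toy fragment standing in for the real PodSpec schema:

      from jsonschema import validate, ValidationError

      schema = {  # toy fragment, not the full Kubernetes OpenAPI document
          "type": "object",
          "properties": {
              "kind": {"type": "string"},
              "metadata": {"type": "object"},
          },
      }

      try:
          validate(instance={"kind": 123, "metadata": [{"you": "wish"}]}, schema=schema)
      except ValidationError as e:
          print(e.message)  # e.g. "123 is not of type 'string'"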

nikisweeting a day ago

It should natively support running docker-compose.yml configs, essentially treating them like swarm configurations and "automagically" deploying them with sane defaults for storage and network. Right now the gap between compose and full-blown k8s is too big.

  • mdaniel a day ago

    So, what I'm hearing is that it should tie itself to a commercial company, who now have a private equity master to answer to, versus an open source technology run by a foundation

    Besides, easily half of this thread is whining about helm for which docker-compose has no answer whatsoever. There is no $(docker compose run oci://example.com/awesome --version 1.2.3 --set-string root-user=admin)

    • arccy 21 hours ago

      docker-compose -f $(curl example.com/v1.2.3/awesome-compose.yaml)

      helm is an anti pattern, nobody should touch it with a 10 foot pole

  • ethan_smith 21 hours ago

    The compose-to-k8s mapping is deceptively complex due to fundamental differences in networking models, volume handling, and lifecycle management - tools like Kompose exist but struggle with advanced compose features.

  • Too 21 hours ago

    Replace docker with podman and you can go the other way around, run a pod manifest locally.

smetj 16 hours ago

Kubernetes has all the capabilities to address the needs of pretty much every architecture/design out there. You only need one architecture/design for your particular use case. Nonetheless, you will have to carry all that weight with you, although you will never use it.

moondev a day ago

I was all ready to complain about HCL because of the horrible ergonomics of multiline strings, which would be a deal breaker as a default config format. I just looked though and it seems they now support it in a much cleaner fashion.

https://developer.hashicorp.com/terraform/language/expressio...

This actually makes me want to give HCL another chance

lukaslalinsky a day ago

What I'd like to see the most is API stability. At small scale, it's extremely hard to catch up with Kubernetes releases and the whole ecosystem is paced around those. It's just not sustainable running Kubernetes unless you have someone constantly upgrading everything (or you pay someone/something to do it for you). By the years, we should have a good idea for a wide range of APIs that are useful and stick with those.

solatic a day ago

I don't get the etcd hate. You can run single-node etcd in simple setups. You can't easily replace it because so much of the Kubernetes API is a thin wrapper around etcd APIs like watch that are quite essential to writing controllers and don't map cleanly to most other databases, certainly not sqlite or frictionless hosted databases like DynamoDB.
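For instance, the list-then-watch loop that practically every controller is built on (a minimal sketch with the official Python client; the namespace is just an example):

  from kubernetes import client, config, watch

  config.load_kube_config()
  v1 = client.CoreV1Api()

  # Controllers are built around streams like this, which lean on the watch semantics above
  w = watch.Watch()
  for event in w.stream(v1.list_namespaced_pod, namespace="default"):
      print(event["type"], event["object"].metadata.name)  # ADDED / MODIFIED / DELETED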

What actually makes Kubernetes hard to set up by yourself are a) CNIs, in particular if you intend to avoid cloud-provider-specific CNIs, support all networking (and security!) features, and still have high performance; b) all the cluster PKI with all the certificates for all the different components, which Kubernetes made an absolute requirement because, well, production-grade security.

So if you think you're going to make an "easier" Kubernetes, I mean, you're avoiding all the lessons learned and why we got here in the first place. CNI is hardly the naive approach to the problem.

Complaining about YAML and Helm is dumb. Kubernetes doesn't force you to use either. The API server anyway expects JSON at the end. Use whatever you like.

  • mdaniel a day ago

    > I don't get the etcd hate.

    I'm going out on a limb to say you've only ever used hosted Kubernetes, then. A sibling comment mentioned their need for vanity tooling to babysit etcd and my experience was similar.

    If you are running single node etcd, that would also explain why you don't get it: you've been very, very, very, very lucky never to have that one node fail, and you've never had to resolve the very real problem of ending up with just two etcd nodes running

    • solatic a day ago

      No, I ran three-node etcd clusters for years that Kops set up. Kops deployed an etcd-operator to babysit them and take care of backups and the like. It was set-and-forget. We had controlled disruptions all the time as part of Kubernetes control plane updates, no issues.

      And you know... etcd supports five-node clusters, precisely to help support people who are paranoid about extended single node failure.

mikeocool a day ago

How about release 2.0 and then don’t release 2.1 for a LONG time.

I get that in the early days such a fast paced release/EOL schedule made sense. But now something that operates at such a low level shouldn’t require non-security upgrades every 3 months and have breaking API changes at least once a year.

brikym a day ago

The bit about Helm templating resonated with me. Stringly typed indentation hell.

dzonga a day ago

I thought this would be written along the lines of an LLM going through your code - spinning up a Railway-style config file, then say having tf for a few of the manual dependencies etc. that can't be easily inferred.

& getting automatic scaling out of the box etc. - a more simplified flow rather than wrangling YAML or HCL

in short, imagine if k8s was a 2-3 (max 5) line docker-compose-like file

dijit 2 days ago

Honestly; make some blessed standards easier to use and maintain.

Right now running K8S on anything other than cloud providers and toys (k3s/minikube) is disaster waiting to happen unless you're a really seasoned infrastructure engineer.

Storage/state is decidedly not a solved problem, debugging performance issues in your longhorn/ceph deployment is just pain.

Also, I don't think we should be removing YAML, we should instead get better at using it as an ILR (intermediate language representation) and generating the YAML that we want instead of trying to do some weird in-place generation (Argo/Helm templating) - Kubernetes sacrificed a lot of simplicity to be eventually consistent with manifests, and our response was to ensure we use manifests as little as possible, which feels incredibly bizarre.

Also, the design of k8s networking feels like it fits ipv6 really well, but it seems like nobody has noticed somehow.

  • zdc1 2 days ago

    I like YAML since anything can be used to read/write it. Using Python / JS / yq to generate and patch YAML on-the-fly is quite nifty as part of a pipeline.

    My main pain point is, and always has been, helm templating. It's not aware of YAML or k8s schemas and puts the onus of managing whitespace and syntax onto the chart developer. It's pure insanity.

    At one point I used a local Ansible playbook for some templating. It was great: it could load resource template YAMLs into a dict, read separately defined resource configs, and then set deeply nested keys in said templates and spit them out as valid YAML. No helm `indent` required.
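    The plain-Python version of that trick is only a few lines (the file name and values here are placeholders):

      import yaml

      with open("deployment.yaml") as f:
          manifest = yaml.safe_load(f)

      # Set deeply nested keys without ever counting spaces
      manifest["spec"]["template"]["spec"]["containers"][0]["image"] = "registry.example.com/app:1.2.3"
      manifest.setdefault("metadata", {}).setdefault("labels", {})["team"] = "platform"

      print(yaml.safe_dump(manifest, sort_keys=False))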

    • pm90 2 days ago

      yaml is just not maintainable if you’re managing lots of apps for eg a midsize company or larger. Upgrades become manual/painful.

      • p_l a day ago

        The secret is to not manually edit YAML, ever.

        It's the "make break glass situation little easier" option, not the main mechanism.

        Use a programming language, a dedicated DSL, hell a custom process involving CI/CD that generates json manifests. A bit of time with jsonnet gets you a template that people who never wrote jsonnet (and barely ever touched k8s manifests) can use to setup new applications or modify existing ones in moments.

  • lawn 2 days ago

    k3s isn't a toy though.

    • dijit 2 days ago

      * Uses Flannel bi-lateral NAT for SDN

      * Uses local-only storage provider by default for PVC

      * Requires entire cluster to be managed by k3s, meaning no freebsd/macos/windows node support

      * Master TLS/SSL Certs not rotated (and not talked about).

      k3s is very much a toy - a nice toy though, very fun to play with.

      • bigstrat2003 a day ago

        None of those things make it a toy. It is in fact a useful tool when you want to run kubernetes in a small environment.

        • dijit 21 hours ago

          Nothing wrong with toys, they’re meant to be played with.

          If you deployed this to production though, the problem is not that it's a toy: it's you not understanding what technical trade-offs they are making to give you an easy environment.

          I can tell you’re defensive though, might be good to learn what I mean instead of trying to ram a round peg into a square hole.

          • lawn 16 hours ago

            You have a weird definition of a toy.

            k3s is definitely fine to run in production, hence it's not a toy. You just have to understand the tradeoffs they've made, when you should change the defaults, and where it's a reasonable choice compared to other alternatives.

            A toy implementation of Kubernetes would be something like a personal hobby project made for fun or to learn, that comes with critical bugs and drawbacks. That's not what k3s is.

            • dijit 15 hours ago

              k3s is not fine to run in production.

              Do not listen to this person, and do not buy any products of theirs where they have operational control of production platforms.

zug_zug 2 days ago

Meh, imo this is wrong.

What Kubernetes is missing most is a 10 year track record of simplicity/stability. What it needs most to thrive is a better reputation of being hard to foot-gun yourself with.

It's just not a compelling business case to say "Look at what you can do with kubernetes, and you only need a full-time team of 3 engineers dedicated to this technology at the cost of a million a year to get bin-packing to the tune of $40k."

For the most part Kubernetes is becoming the common tongue, despite all the chaotic plugins and customizations that interact with each other in a combinatoric explosion of complexity/risk/overhead. A 2.0 would be what I'd propose if I was trying to kill Kubernetes.

  • candiddevmike 2 days ago

    Kubernetes is what happens when you need to support everyone's wants and desires within the core platform. The abstraction facade breaks and ends up exposing all of the underlying pieces because someone needs feature X. So much of Kubernetes' complexity is YAGNI (for most users).

    Kubernetes 2.0 should be a boring pod scheduler with some RBAC around it. Let folks swap out the abstractions if they need it instead of having everything so tightly coupled within the core platform.

    • selcuka 2 days ago

      > Let folks swap out the abstractions if they need it instead of having everything so tightly coupled within the core platform.

      Sure, but then one of those third party products (say, X) will catch up, and everyone will start using it. Then job ads will start requiring "10 year of experience in X". Then X will replace the core orchestrator (K8s) with their own implementation. Then we'll start seeing comments like "X is a horribly complex, bloated platform which should have been just a boring orchestrator" on HN.

    • echelon 2 days ago

      Kubernetes doesn't need a flipping package manager or charts. It needs to do one single job well: workload scheduling.

      Kubernetes clusters shouldn't be bespoke and weird with behaviors that change based on what flavor of plugins you added. That is antithetical to the principle of the workloads you're trying to manage. You should be able to headshot the whole thing with ease.

      Service discovery is just one of many things that should be a different layer.

      • KaiserPro 2 days ago

        > Service discovery is just one of many things that should be a different layer.

        hard agree. Its like jenkins, good idea, but its not portable.

        • 12_throw_away a day ago

          > Its like jenkins

          Having regretfully operated both k8s and Jenkins, I fully agree with this, they have some deep DNA in common.

    • sitkack 2 days ago

      Kubernetes is when you want to sell complexity, because complexity makes money and naturally gets you vendor lock-in even while being ostensibly vendor neutral. Never interrupt the customer while they are foot-gunning themselves.

      Swiss Army Buggy Whips for Everyone!

      • wredcoll 2 days ago

        Not really. Kubernetes is still wildly simpler than what came before, especially accounting for the increased capabilities.

        • cogman10 2 days ago

          Yup. Having migrated from a puppet + custom scripts environment and terraform + custom scripts. K8S is a breath of fresh air.

          I get that it's not for everyone, I'd not recommend it for everyone. But once you start getting a pretty diverse ecosystem of services, k8s solves a lot of problems while being pretty cheap.

          Storage is a mess, though, and something that really needs to be addressed. I typically recommend people wanting persistence to not use k8s.

          • mdaniel 2 days ago

            > Storage is a mess, though, and something that really needs to be addressed. I typically recommend people wanting persistence to not use k8s.

            I have actually come to wonder if this is actually an AWS problem, and not a Kubernetes problem. I mention this because the CSI controllers seem to behave sanely, but they are only as good as the requests being fulfilled by the IaaS control plane. I secretly suspect that EBS just wasn't designed for such a hot-swap world

            Now, I posit this because I haven't had to run clusters in Azure nor GCP to know if my theory has legs

            I guess the counter-experiment would be to forego the AWS storage layer and try Ceph or Longhorn but no company I've ever worked at wants to blaze trails about that, so they just build up institutional tribal knowledge about treating PVCs with kid gloves

            • wredcoll a day ago

              Honestly this just feels like kubernetes just solving the easy problems and ignoring the hard bits. You notice the pattern a lot after a certain amount of time watching new software being built.

              • mdaniel a day ago

                Apologies, but what influence does Kubernetes have over the way AWS deals with attach and detach behavior of EBS?

                Or is your assertion that Kubernetes should be its own storage provider and EBS can eat dirt?

                • wredcoll a day ago

                  I was tangenting, but yes, kube providing no storage systems has led to a lot of annoying 3rd party ones

          • oceanplexian a day ago

            > Yup. Having migrated from a puppet + custom scripts environment and terraform + custom scripts. K8S is a breath of fresh air.

            Having experience in both the former and the latter (in big tech) and then going on to write my own controllers and deal with fabric and overlay networking problems, not sure I agree.

            In 2025 engineers need to deal with persistence, they need storage, they need high performance networking, they need HVM isolation and they need GPUs. When a philosophy starts to get in the way of solving real problems and your business falls behind, that philosophy will be left on the side of the road. IMHO it's destined to go the way as OpenStack when someone builds a simpler, cleaner abstraction, and it will take the egos of a lot of people with it when it does.

            • wredcoll a day ago

              > simpler, cleaner abstraction

              My life experience so far is that "simpler and cleaner" tends to be mostly achieved by ignoring the harder bits of actually dealing with the real world.

              I used Kubernetes' (lack of) support for storage as an example elsewhere, it's the same sort of thing where you can look really clever and elegant if you just ignore the 10% of the problem space that's actually hard.

        • KaiserPro 2 days ago

          the fuck it is.

          The problem is k8s is both a orchestration system and a service provider.

          Grid/batch/tractor/cube are all much much more simple to run at scale. More over they can support complex dependencies. (but mapping storage is harder)

          but k8s fucks around with DNS and networking, disables swap.

          Making a simple deployment is fairly simple.

          But if you want _any_ kind of ci/cd you need flux, any kind of config management you need helm.

          • JohnMakin 2 days ago

            > But if you want _any_ kind of ci/cd you need flux, any kind of config management you need helm.

            Absurdly wrong on both counts.

          • jitl a day ago

            K8s has swap now. I am managing a fleet of nodes with 12TB of NVMe swap each. Each container gets (memory limit / node memory) * (total swap) swap limit. No way to specify swap demand on the pod spec yet so needs to be managed “by hand” with taints or some other correlation.
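            To make that formula concrete (the numbers below are illustrative, not our actual node sizes):

              node_memory_gib = 768
              total_swap_gib = 12 * 1024        # 12 TiB of NVMe swap on the node
              container_limit_gib = 16

              swap_limit_gib = (container_limit_gib / node_memory_gib) * total_swap_gib
              print(swap_limit_gib)             # 256.0 GiB of swap for this container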

            • wredcoll a day ago

              What does swap space get you? I always thought of it as a it of an anachronism to be honest.

              • mdaniel a day ago

                The comment you replied to cited 12TB of NVMe, and I can assure you that 12TB of ECC RAM costs way more than NVMe

woile a day ago

What bothers me:

- it requires too much RAM to run in small machines (1GB RAM). I want to start small but not have to worry about scalability. docker swarm was nice in this regard.

- use KCL lang or CUE lang to manage templates

nuker 13 hours ago

No one yet mentioned AWS ECS and Fargate?

  • mdaniel 10 hours ago

    Because I can't run AWS ECS nor Fargate on my laptop

    Also, unless something has radically changed recently, ECS == Amazon Managed Docker (one can see a lot of docker-compose inspired constructs: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/... ) which, as this thread is continuously debating, reasonable people can differ on whether that's a limitation or a simplification

jerry1979 a day ago

Not sure where buildkit is at these days, but k8s should have reproducible builds.

fragmede a day ago

Instead of yaml, json, or HCL, how about starlark? It's a stripped down Python, used in production by bazel, so it's already got the go libraries.
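For a taste of what that buys you over raw YAML -- functions, defaults, and loops (the names below are made up; this is valid Starlark and plain Python):

  def namespace(team, cpu_limit = "4"):
      return [
          {"apiVersion": "v1", "kind": "Namespace", "metadata": {"name": team}},
          {
              "apiVersion": "v1",
              "kind": "ResourceQuota",
              "metadata": {"name": team + "-quota", "namespace": team},
              "spec": {"hard": {"limits.cpu": cpu_limit}},
          },
      ]

  manifests = [m for team in ["payments", "search", "ml"] for m in namespace(team)]
  manifests += namespace("batch", cpu_limit = "32")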

  • mdaniel a day ago

    As the sibling comment points out, I think that would be a perfectly fine helm replacement, but I would never ever want to feed starlark into k8s apis directly

  • fjasdfwa a day ago

    kube-apiserver uses a JSON REST API. You can use whatever serializes to JSON. YAML is the most common and already works directly with kubectl.

    I personally use TypeScript since it has unions and structural typing with native JSON support but really anything can work.

    • mdaniel a day ago

      Fun fact, while digging into the sibling comment's complaint about the OpenAPI spec, I learned that it actually advertises multiple content-types:

        application/json
        application/json;stream=watch
        application/vnd.kubernetes.protobuf
        application/vnd.kubernetes.protobuf;stream=watch
        application/yaml
      
      which I presume all get coerced into protobuf before being actually interpreted

vbezhenar 16 hours ago

I have some experience with Kubernetes, both managed and self-maintained.

So here are my wishes:

1. Deliver Kubernetes as a complete immutable OS image. Something like Talos. It should auto-update itself.

2. Take opinionated approach. Do not allow for multiple implementations of everything, especially as basic as networking. There should be hooks for integration with underlying cloud platform, of course.

3. The system must work reliably out of the box. For example, kubeadm clusters are not set up properly when it comes to memory limits. You could easily make a node unresponsive by eating memory in your pod.

4. Implement built-in monitoring. Built-in centralized logs. Built-in UI. Right now, a kubeadm cluster is not usable. You need to spend a lot of time installing Prometheus, Loki, Grafana, configuring dashboards, configuring every piece of software. Those are very different pieces of software from different vendors. It's a mess. It requires a lot of processing power and RAM to work. It should not be like that.

5. Implement user management, with usernames and passwords. Right now you need to set up keycloak, configure oauth authentication, complex realm configuration. It's a mess. It requires a lot of RAM to work. It should not be like that.

6. Remove certificates, keys. Cluster should just work, no need to refresh anything. Join node and it stays there.

So basically I want something like Linux. Which just works. I don't need to set up Prometheus to look at my 15-min load average or CPU consumption. I don't need to set up Loki to look at logs, I have journald which is good enough for most tasks. I don't need to install CNI to connect to network. I don't need to install Keycloak to create user. It won't stop working, because some internal certificate has expired. I also want lower resources consumption. Right now Kubernetes is very hungry. I need to dedicate like 2 GiB RAM to master node, probably more. I don't even want to know about master nodes. Basic Linux system eats like 50 MiB RAM. I can dedicate another 50 MiB to Kubernetes, rest is for me, please.

Right now it feels that Kubernetes was created to create more jobs. It's a very necessary system, but it could be so much better.

Dedime a day ago

From someone who was recently tasked with "add service mesh" - make service mesh obsolete. I don't want to install a service mesh. mTLS or some other form of encryption between pods should just happen automatically. I don't want some janky ass sidecar being injected into my pod definition ala linkerd, and now I've got people complaining that cilium's god mode is too permissive. Just have something built-in, please.

  • ahmedtd a day ago

    Various supporting pieces for pod-to-pod mTLS are slowly being brought into the main Kubernetes project.

    Take a look at https://github.com/kubernetes/enhancements/tree/master/keps/..., which is hopefully landing as alpha in Kubernetes 1.34. It lets you run a controller that issues certificates, and the certificates get automatically plumbed down into pod filesystems, and refresh is handled automatically.

    Together with ClusterTrustBundles (KEP 3257), these are all the pieces that are needed for someone to put together a controller that distributes certificates and trust anchors to every pod in the cluster.

  • mdaniel a day ago

    For my curiosity, what threat model is mTLS and encryption between pods driving down? Do you run untrusted workloads in your cluster and you're afraid they're going to exfil your ... I dunno, SQL login to the in-cluster Postgres?

    As someone who has the same experience you described with janky sidecars blowing up normal workloads, I'm violently anti service-mesh. But, cert expiry and subjectAltName management is already hard enough, and you would want that to happen for every pod? To say nothing of the TLS handshake for every connection?

darqis 17 hours ago

Sorry, but k8s is not low maintenance. You have to keep the host systems updated and you have to keep the containers updated. And then you have paradigm changes like Ingress falling out of favor and the emergence of the Gateway API. k8s is very time intensive, that's why I am not using it. It adds complexity and overhead. That might be acceptable for large organizations, but not for the single dev or small company.

0xbadcafebee a day ago

> Ditch YAML for HCL

Hard pass. One of the big downsides to a DSL is it's linguistic rather than programmatic. It depends on a human to learn a language and figuring out how to apply it correctly.

I have written a metric shit-ton of terraform in HCL. Yet even I struggle to contort my brain into the shape it needs to think of how the fuck I can get Terraform to do what I want with its limiting logic and data structures. I have become almost completely reliant on saved snippet examples, Stackoverflow, and now ChatGPT, just to figure out how to deploy the right resources with DRY configuration in a multi-dimensional datastructure.

YAML isn't a configuration format (it's a data encoding format) but it does a decent job at not being a DSL, which makes things way easier. Rather than learn a language, you simply fill out a data structure with attributes. Any human can easily follow documentation to do that without learning a language, and any program can generate or parse it easily. (Now, the specific configuration schema of K8s does suck balls, but that's not YAML's fault)

> I still remember not believing what I was seeing the first time I saw the Norway Problem

It's not a "Norway Problem". It's a PEBKAC problem. The "problem" is literally that the user did not read the YAML spec, so they did not know what they were doing, then did the wrong thing, and blamed YAML. It's wandering into the forest at night, tripping over a stump, and then blaming the stump. Read the docs. YAML is not crazy, it's a pretty simple data format.

> Helm is a perfect example of a temporary hack that has grown to be a permanent dependency

Nobody's permanently dependent on Helm. Plenty of huge-ass companies don't use it at all. This is where you proved you really don't know what you're talking about. (besides the fact that helm is a joy to use compared to straight YAML or HCL)

AtlasBarfed 11 hours ago

In my opinion we should start at the command line.

If you want to run a program on a computer, the most basic way is to open a command line and invoke the program.

And that executes it on one computer, number of CPUs TBD.

But with modern networking primitives and foundations, why can I not open a command line and have a concise way of orchestrating and execution of a program across multiple machines?

I have tried several times to do this writing utility code for Cassandra. I got in my opinion very temptingly close to being able to do this.

Likewise with docker, vagrant, and yes, kubernetes, with their CLI interfaces for running commands on containers, the CLI fundamentals are also there.

Others taking a shot at this are SaltStack, Ansible and those types of things, but they really seem to be concerned mostly with enterprise contracts rather than the core of pure CLI execution.

Security is really a pain in the ass when it comes to things like this. Your CLI prompt has a certain security assurance with it. You've already logged in.

That's a side note. One of the frustrations I started running into as I was doing this is the enterprise obsession with requiring a manual login / TOTP code to access anything. Holy hell do I have to jump through hoops in order to automate things across multiple machines when they have TOTP barriers for accessing them.

The original Kubernetes kind of handwaved a lot of this away by forcing the removal of jump boxes, assuming a flat network plane, etc.

recursivedoubts a day ago

please make it look like old heroku for us normies

brunoborges 21 hours ago

I'm not surprised, but somewhat disappointed that the author did not mention Java EE Application Servers. IMO one of the disadvantages of that solution compared to Kubernetes was that they were Java-specific and not polyglot. But everything else about installation, management, scaling, upgrading, etc., was quite well done, especially for BEA/Oracle WebLogic.

smetj 17 hours ago

Hard no to HCL!

Yaml is simple. A ton of tools can parse and process it. I understand the author's gripes (long files, indentation, type confusions) but then I would even prefer JSON as an alternative.

Just use better tooling that helps you address your problems/gripes. Yaml is just fine.

Too a day ago

> "Kubernetes isn't opinionated enough"

yes please, and then later...

> "Allow etcd swap-out"

I don't have any strong opinions about etcd, but man... can we please just have one solution, neatly packaged and easy to deploy.

When your documentation is just a list of abstract interfaces, conditions or "please refer to your distribution", no wonder that nobody wants to maintain a cluster on-prem.

cyberax a day ago

I would love:

1. Instead of recreating the "gooey internal network" anti-pattern with CNI, provide strong zero-trust authentication for service-to-service calls.

2. Integrate with public networks. With IPv6, there's no _need_ for an overlay network.

3. Interoperability between several K8s clusters. I want to run a local k3s controller on my machine to develop a service, but this service still needs to call a production endpoint for a dependent service.

  • mdaniel a day ago

    To the best of my knowledge, nothing is stopping you from doing any of those things right now. Including, ironically, authentication for pod-to-pod calls, since that's how Service Accounts work today. That even crosses the Kubernetes API boundary thanks to IRSA and, if one were really advanced, any OIDC-compliant provider that would trust the OIDC issuer in Kubernetes. The eks-anywhere distribution even shows how to pull off this stunt from your workstation by publishing the JWKS to S3 or some other publicly resolvable HTTPS endpoint.

    I am not aware of any reason why you couldn't connect directly to any Pod, which necessarily includes the kube-apiserver's Pod, from your workstation except for your own company's networking policies
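
    As a sketch of the "trust the OIDC issuer in Kubernetes" point: once the JWKS is published at a resolvable HTTPS endpoint, any external service can verify a projected service-account token with a stock JWT library. The issuer URL, JWKS path, and audience below are placeholders, not EKS specifics:

        # Sketch: verify a projected service-account token against the
        # cluster's published JWKS (the jwks_uri from the issuer's OIDC
        # discovery document). Issuer, JWKS URL, and audience are placeholders.
        import jwt                   # PyJWT
        from jwt import PyJWKClient

        ISSUER = "https://oidc.example.com/my-cluster"   # hypothetical issuer
        JWKS_URL = f"{ISSUER}/openid/v1/jwks"            # wherever the JWKS lives
        AUDIENCE = "my-service"                          # must match the aud claim

        def verify_sa_token(token):
            key = PyJWKClient(JWKS_URL).get_signing_key_from_jwt(token)
            return jwt.decode(token, key.key, algorithms=["RS256"],
                              audience=AUDIENCE, issuer=ISSUER)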

    • cyberax a day ago

      Service accounts are _close_ to what I want, but not quite: they aren't seamless enough for service-to-service calls.

      > I am not aware of any reason why you couldn't connect directly to any Pod, which necessarily includes the kube-apiserver's Pod, from your workstation except for your own company's networking policies

      I don't think I can create a pod with EKS that doesn't have any private network?

      • p_l a day ago

        That's an EKS problem, not a k8s problem.

        K8s will happily let you run a pod with host network, and even the original "kubenet" network implementation allowed directly calling pods, if your network wasn't more damaged than a brain after multiple strokes combined with a headshot from a tank cannon (aka most enterprise networks, in my experience).

      • mdaniel a day ago

        In addition to what the sibling comment pointed out about that being an EKS-ism, yes, I know for 100% certainty that VPC-CNI will allocate Pod IP addresses from the Subnet's definition, which includes public IP addresses. We used that to circumvent the NAT GW tax, since all the Pods had Internet access without themselves being Internet accessible. Last I heard, one cannot run Fargate workloads in a public subnet for who-knows-why reasons, but that's the only mandate I'm aware of for the public/private delineation.

        And, if it wasn't obvious: VPC-CNI isn't the only CNI, nor even the best CNI, since the number of ENIs that one can attach to a Node varies based on its instance type, which is just stunningly dumb IMHO. Using an overlay network allows all Pods that can fit upon a Node to run there.

znpy a day ago

I'd like to add my points of view:

1. Helm: make it official, ditch the text templating. The helm workflow is okay, but templating text is cumbersome and error-prone. What we should be doing instead is patching objects (see the sketch after this list). I don't know how, but I should be setting fields, not making sure my values contain text that is correctly indented (how many spaces? 8? 12? 16?)

2. Can we get a rootless kubernetes already, as a first-class citizen? This opens a whole world of possibilities. I'd love to have a physical machine at home where I'm dedicating only an unprivileged user to it. It would have limitations, but I'd be okay with it. Maybe some setuid-binaries could be used to handle some limited privileged things.
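
Regarding point 1, a rough sketch of "set fields, don't template text": treat the manifest as data and overlay values on it, instead of splicing indented strings into a template. Plain Python dicts stand in here for whatever patch mechanism a Helm successor would actually use:

    # Sketch of "set fields, don't template text": the manifest is data, so
    # overlay the values you care about and serialize once at the end.
    # The Deployment below is a stand-in, not a real chart.
    import copy
    import yaml

    base = {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": "web"},
        "spec": {
            "replicas": 1,
            "template": {
                "spec": {
                    "containers": [{"name": "web", "image": "nginx:1.27"}],
                },
            },
        },
    }

    def overlay(manifest, values):
        out = copy.deepcopy(manifest)
        out["spec"]["replicas"] = values["replicas"]
        out["spec"]["template"]["spec"]["containers"][0]["image"] = values["image"]
        return out

    print(yaml.safe_dump(overlay(base, {"replicas": 3, "image": "nginx:1.28"})))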

tayo42 2 days ago

> where k8s is basically the only etcd customer left.

Is that true? Is no one really using it?

I think one thing k8s would need is some obvious answer for stateful systems (at scale, not MySQL at a startup). I think there are some ways to do it? Where I work, basically everything is on k8s, but all the databases are on their own crazy special systems to support, because they insist it's impossible and costs too much. I work in the worst of all worlds now, supporting this.

Re: the comments that k8s should just schedule pods: Mesos with Aurora or Marathon was basically that. If people wanted that, those would have done better. The biggest users of Mesos switched to k8s.

  • haiku2077 2 days ago

    I had to go deep down the etcd rabbit hole several years ago. The problems I ran into:

    1. etcd did an fsync on every write and required all nodes to complete a write to report a write as successful. This was not configurable and far higher a guarantee than most use cases actually need - most Kubernetes users are fine with snapshot + restore an older version of the data. But it really severely impacts performance.

    2. At the time, etcd had a hard limit of 8GB. Not sure if this is still there.

    3. Vanilla etcd was overly cautious about what to do if a majority of nodes went down. I ended up writing a wrapper program to automatically recover from this in most cases, which worked well in practice.

    In conclusion there was no situation where I saw etcd used that I wouldn't have preferred a highly available SQL DB. Indeed, k3s got it right using sqlite for small deployments.

    • nh2 2 days ago

      For (1), I definitely want my production HA databases to fsync every write.

      Of course configurability is good (e.g. for automated fast tests you don't need it), but safe is a good default here, and if somebody sets up a Kubernetes cluster, they can and should afford enterprise SSDs where fsync of small data is fast and reliable (e.g. 1000 fsyncs/second).

      • _bohm a day ago

        Most database systems are designed to amortize fsyncs when they have high write throughput. You want every write to be fsync'd, but you don't want to actually call fsync for each individual write operation.
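
        A toy sketch of that amortization (group commit): append writes to a batch and fsync once per batch instead of once per write. The path and batch size here are made up:

            # Toy group-commit sketch: a write is durable only once the batch
            # it belongs to has been flushed and fsync'd, so one fsync is
            # amortized over many writes.
            import os

            class GroupCommitLog:
                def __init__(self, path, batch_size=64):
                    self._f = open(path, "ab")
                    self._batch_size = batch_size
                    self._pending = 0

                def write(self, record: bytes):
                    self._f.write(record + b"\n")
                    self._pending += 1
                    if self._pending >= self._batch_size:
                        self.flush()

                def flush(self):
                    self._f.flush()
                    os.fsync(self._f.fileno())  # one fsync for the whole batch
                    self._pending = 0

            log = GroupCommitLog("/tmp/wal.log")
            for i in range(1000):
                log.write(b"entry-%d" % i)
            log.flush()  # make the tail durable too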

      • haiku2077 a day ago

        > I definitely want my production HA databases to fsync every write.

        I didn't! Our business DR plan only called for us to restore to an older version with short downtime, so fsync on every write on every node was a reduction in performance for no actual business purpose or benefit. IIRC we modified our database to run off ramdisk and snapshot every few minutes which ran way better and had no impact on our production recovery strategy.

        > if somebody sets up a Kubernetes cluster, they can and should afford enterprise SSDs where fsync of small data is fast and reliable

        At the time one of the problems I ran into was that public cloud regions in southeast asia had significantly worse SSDs that couldn't keep up. This was on one of the big three cloud providers.

        1000 fsyncs/second is a tiny fraction of the real world production load we required. An API that only accepts 1000 writes a second is very slow!

        Also, plenty of people run k8s clusters on commodity hardware. I ran one on an old gaming PC with a budget SSD for a while in my basement. Great use case for k3s.

    • dilyevsky a day ago

      1 and 2 can be overridden via flag. 3 is practically the whole point of the software.
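
      Concretely, a sketch of the flags in question (values are illustrative, and the no-fsync flag is only sane for test or throwaway clusters):

          # Sketch: a local, single-node etcd with both overrides.
          # --unsafe-no-fsync skips fsync (test clusters only, never prod);
          # --quota-backend-bytes raises the default ~2 GiB backend quota
          # (here to 8 GiB). Assumes the etcd binary is on PATH.
          import subprocess

          subprocess.run([
              "etcd",
              "--data-dir", "/tmp/etcd-test",
              "--unsafe-no-fsync",
              "--quota-backend-bytes", str(8 * 1024**3),
          ])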

      • haiku2077 a day ago

        With 3 I mean that in cases where there was an unambiguously correct way to recover from the situation, etcd did not automatically recover. My wrapper program would always recover from those situations. (It's been a number of years and the exact details are hazy now, though.)

        • dilyevsky a day ago

          If the majority of the quorum is truly down, then you're down. That is by design. There's no good way to recover from this without potentially losing state, so the system correctly does nothing at this point. Sure, you can force it into a working state with external intervention, but that's up to you.

          • haiku2077 a day ago

            Like I said I'm hazy on the details, this was a small thing I did a long time ago. But I do remember our on-call having to deal with a lot of manual repair of etcd quorum, and I noticed the runbook to fix it had no steps that needed any human decision making, so I made that wrapper program to automate the recovery. It wasn't complex either, IIRC it was about one or two pages of code, mostly logging.

  • dilyevsky a day ago

    That is decidedly not true. A number of very large companies use etcd directly for various needs.

jeffrallen a day ago

Simpler?

A guy can dream anyway.

fragmede a day ago

Kubernetes 2.0 would just be kubernetes with batteries included

anonfordays a day ago

So odd seeing all the HCL hate here. It's dead simple to read, much more so than YAML. It grew out of Mitchell's hatred for YAML. If your "developers" are having problems with HCL, it's likely a skills issue.

  • Too 21 hours ago

    Yeah. I don't get the amount of hate here either. It's super natural to read and has very few concepts to learn: just resources, lists, sets, dictionaries, variables, the ternary operator, function calls, and templates. Paired with an LSP and syntax highlighting, it gets even easier to see the boundaries between code and variables. The two different ways to write dicts are a bit strange and for_each has some gotchas, but honestly not a big deal, especially not in comparison to YAML, which has nothing even close and even more invisible footguns. Seeing comments like "it does not have if-statements": duh, it's not imperative, for many good reasons. Not understanding that, yeah, it's clearly a skills issue.

    • mystifyingpoi 15 hours ago

      > Duh, it's not imperative, for many good reasons

      You can be declarative and still have imperative constructs hidden behind pure functions, let's say. That's how Ansible does it: it is super useful to define a custom filter that's called like "{{ myvariable | my_filter }}", but underneath there is a bunch of imperative Python data-structure wrangling (without visible side effects, of course, just in-memory stuff), with arbitrary code. Doing the same in HCL is, I believe, impossible in the general case.
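
      For concreteness, a minimal sketch of such a filter plugin; the my_filter name and its behavior are made up, but the FilterModule plumbing is the standard Ansible mechanism:

          # filter_plugins/my_filters.py -- makes "{{ myvariable | my_filter }}"
          # work. Arbitrary imperative Python inside, but pure from the
          # template's point of view: value in, value out, no side effects.
          def my_filter(value):
              # made-up example: deduplicate and normalize a list of names
              return sorted({str(v).strip().lower() for v in value})

          class FilterModule(object):
              def filters(self):
                  return {"my_filter": my_filter}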

jonenst a day ago

What about kustomize and kpt? I'm using them (instead of helm), but:

* kpt is still not 1.0

* both kustomize and kpt require complex setups to programmatically generate configs (even for simple things like replicas = replicas * 2)
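
For comparison, the kind of transform being described is small once you treat the manifest as plain data; a sketch, assuming a deployment.yaml sitting next to the script:

    # Sketch: the "replicas = replicas * 2" transform as plain data
    # manipulation. Assumes a deployment.yaml in the current directory;
    # writes the transformed documents to stdout.
    import sys
    import yaml

    with open("deployment.yaml") as f:
        docs = list(yaml.safe_load_all(f))

    for doc in docs:
        if doc and doc.get("kind") == "Deployment":
            doc["spec"]["replicas"] = doc["spec"].get("replicas", 1) * 2

    yaml.safe_dump_all(docs, sys.stdout)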

fatbird 2 days ago

How many places are running k8s without OpenShift to wrap it and manage a lot of the complexity?

  • raincom a day ago

    OpenShift, if IBM and Red Hat want to milk the license and support contracts. There are other vendors that sell k8s: Rancher, for instance. SUSE bought Rancher.

  • jitl a day ago

    I’ve never used OpenShift nor do I know anyone irl who uses it. Sample from SF where most people I know are on AWS or GCP.

    • p_l a day ago

      OpenShift is somewhat popular as a turnkey solution for on-premise Kubernetes, if you can stomach Red Hat pricing.

    • coredog64 a day ago

      You can always go for the double whammy and run ROSA: Red Hat OpenShift on AWS.

1oooqooq a day ago

systemd, but distributed. and with config files redone from scratch (and ideally not in yaml)

  • mdaniel a day ago

    I'll bite: what is the systemd primitive for secret injection? for declaring which EBS volume it needs mounted where? (I don't mean Mount= I mean a conceptual MountVolumeByTag[Name=my-pgdata-0]=/var/lib/postgres/data ) Oh, that's another great question: what is the systemd primitive for making 3 copies of a service if you find that one copy isn't getting it done?

    Maybe all of those "implementation details" are what you meant by "config files redone" and then perhaps by the follow-on "simple matter of programming" to teach C how to have new behaviors that implement those redone config files

    • arccy 21 hours ago

      secret injection: systemd-creds

      the other parts don't exist because that's really only useful for distributed systems, and systemd is not that right now
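
      A sketch of what consuming systemd-creds looks like from the service side: with LoadCredential= in the unit file, systemd exposes each credential as a file under $CREDENTIALS_DIRECTORY (the credential name and path below are illustrative):

          # Sketch: a service started with, say,
          #   LoadCredential=db-password:/etc/secrets/db
          # finds the secret as a file under $CREDENTIALS_DIRECTORY, readable
          # only by that unit. Credential name and path are illustrative.
          import os
          from pathlib import Path

          def read_credential(name):
              cred_dir = os.environ["CREDENTIALS_DIRECTORY"]  # set by systemd
              return Path(cred_dir, name).read_text().strip()

          db_password = read_credential("db-password")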

    • 1oooqooq a day ago

      those are available in machine files.

  • 15155 a day ago

    How is a systemd service file functionally different from a Pod spec?

    • 1oooqooq a day ago

      exactly. now add the missing distributed part.

Bluecobra 2 days ago

[flagged]

  • esafak 2 days ago

    Where's the schwartz when you need it?

  • neepi 2 days ago

    Surely it’s

    Kubernetes 2.0: the search for more complexity by trying to get rid of it.

    • mulmen a day ago

      Bluecobra made a Spaceballs reference.

moomin a day ago

Let me add one more: give controllers/operators a defined execution order. Don’t let changes flow both ways. Provide better ways for building things that don’t step on everyone else’s toes. Make whatever replaces helm actually maintain stuff rather than just splatting it out.

  • clvx a day ago

    This is a hard no for me. This is the whole point of the reconciliation loop: you can just push something to the API/etcd and eventually it will become ready when all the dependencies exist. Now, rejecting manifests because CRDs don't exist yet is a different discussion. I'm down to have a cache of manifests waiting to be deployed until the CRD exists, but if the CRD isn't deployed, then a garbage-collection-like tool removes them from the cache. This is what fluxcd and argocd already do, in a way, but I would like to have it natively.

    • moomin 15 hours ago

      I don't think what I'm proposing would prevent that? It would mean you needed things to be DAGs, but it's cyclic dependencies that mess you up anyway.