dig1 4 hours ago

> The irony is that this may be a $0 revenue user for Grafana Labs.

Why is that ironic? Since Mimir is open-source, $0 revenue users are expected. AFAIK, Grafana Labs relies heavily on Go, TypeScript, and Linux without necessarily being a top financial contributor to any of those projects. They could have kept Mimir proprietary like Splunk, but whether that would have attracted the same level of adoption or community contribution is another matter.

codeduck 3 hours ago

> given Prometheus’s widespread adoption and proven reliability in diverse environments.

I have used Prometheus a lot. Reliable is not a word I would associate with it.

  • hagen1778 3 hours ago

    What do you use instead of Prometheus?

    • codeduck 3 hours ago

      Given a choice, VictoriaMetrics. It has proven itself time and time again at scale, and requires a very low support investment.

  • pahae 2 hours ago

I set up a fairly large Prom-based architecture which I later migrated to VictoriaMetrics (VM), so I think I can chime in here.

    Both Prom and VM are exceptionally stable in my opinion, even at _very_ large scale. There were times when I had a single, not-overly-large (Prom, later VM) instance scrape 2 million samples/s without any issues, on top of fairly spiky query loads.

However, if something does go wrong, the single most impactful difference between VM and Prom is simply the difference in startup time. Prometheus with 2TB of metrics takes _forever_ to start up, since it has to replay its write-ahead log before serving. We're talking up to 2 hours on SSD while VM just... starts.

    • porridgeraisin 1 hour ago

Yeah, at previous work we used both as well. The transition from Prom to VM was "ongoing", and from the time I joined to the time I left we did parallel writes to both. Never faced issues with either. If I remember correctly, we wrote from services to a Kafka queue first, and then a consumer took that and pushed it to (both) metrics endpoints.

jameson 4 hours ago

Curious why the team chose Grafana Mimir over VM cluster?

awoimbee 4 hours ago

Directly emitting metrics using OTLP instead of having the OTel receiver scrape the metrics endpoint is interesting. I never made that move because the Prometheus metrics endpoint works and is so simple, and it's what most projects (e.g. Kubernetes) use.
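
To illustrate, the two ingestion paths being compared can both be expressed in an OpenTelemetry Collector config. This is only a sketch: the `app:8080` scrape target and the Mimir push URL are placeholders, not anything from the article.

```yaml
receivers:
  # Path 1: applications push metrics directly over OTLP (no scraping)
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318
  # Path 2: the collector scrapes a Prometheus /metrics endpoint
  prometheus:
    config:
      scrape_configs:
        - job_name: app
          scrape_interval: 15s
          static_configs:
            - targets: ["app:8080"]  # placeholder target

exporters:
  # Forward everything to a Prometheus-remote-write-compatible backend
  # (e.g. Mimir); the endpoint below is a placeholder.
  prometheusremotewrite:
    endpoint: http://mimir/api/v1/push

service:
  pipelines:
    metrics:
      receivers: [otlp, prometheus]
      exporters: [prometheusremotewrite]
```

With pull-based scraping the collector controls the sample interval and adds `up`/staleness semantics for free; with OTLP push, each service decides when and what to export.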