lfittl 1 day ago

It's worth reading this follow-up LKML post by Andres Freund (who works on Postgres): https://lore.kernel.org/lkml/yr3inlzesdb45n6i6lpbimwr7b25kqk...

  • jeffbee 1 day ago

    Funny how "use hugepages" is right there on the table and 99% of users ignore it.

    • bombcar 1 day ago

      I’m absolutely flabbergasted by the performance left on the table; even by myself - just yesterday I learned Gentoo’s emerge can use git and be a billion times faster.

      • globular-toast 1 day ago

        The time spent by emerge is utterly dwarfed by the time spent to build the packages, so who cares? Maybe it's different if installing a binary system, but I don't think most people are doing that.

        • LtdJorge 1 day ago

          When using multiple overlays, emerge-webrsync is ungodly slower compared to git.

        • bombcar 22 hours ago

          If you can emerge in 2.86s user you can do it right before you emerge world, meaning it's all "done in one interaction" (even if the actual emerge takes an hour - you don't have to look at it).

          Whereas if emerge is taking 5-10 minutes, you have to remember to come back to it, or script it.

  • TacticalCoder 1 day ago

    AIUI in that thread they're saying "0.51x" the perf on a 96-core arm64 machine and they're also saying they cannot reproduce it on a 96-core amd64 machine.

    So it's not going to affect everybody both running PostgreSQL and upgrading to the latest kernel. Conditions seem to be: arm64, shitloads of cores, kernel 7.0, current version of PostgreSQL.

    That is not going to be 100% of the installed PostgreSQL DBs out there in the wild when 7.0 lands in a few weeks.

    • master_crab 1 day ago

      For production Postgres, I would assume it’s close to almost no effect?

      If someone is running postgres in a serious backend environment, I doubt they are using Ubuntu or even touching 7.x for months (or years). It’ll be some flavor of Debian or Red Hat still on 6.x (maybe even 5?). Those same users won’t touch 7.x until there has been months of testing by distros.

      • crcastle 1 day ago

        Ubuntu is used in many serious backend environments. Heroku runs tens of thousands (if not more) of Ubuntu instances on its fleet. Or at least it did through the teens and early 2020s.

        https://devcenter.heroku.com/articles/stack

        • nine_k 1 day ago

          Do they upgrade to the new LTS the day it is released?

          • crcastle 1 day ago

            Not historically.

            • rvnx 1 day ago

              And they are right. A lot of junior sysadmins believe that newer = better.

              But the reality is:

                a) may get irreversible upgrades (e.g. new underlying database structure) 
                b) permanent worse performance / regression (e.g. iOS 26)
                c) added instability
                d) new security issues (litellm)
                e) time wasted migrating / debugging
                f) may need rewrite of consumers / users of APIs / sys calls
                g) potential new IP or licensing issues
              

              etc.

              The few good reasons to upgrade something are:

                a) new features provide genuine comfort or performance upgrade (or... some revert)
                b) there is an extremely critical security issue
                c) you do not care about stability because reverting is uneventful and production impact is nil (e.g. Claude Code)
              

              but 99% of the time, if it ain't broke, don't fix it.

              https://en.wikipedia.org/wiki/2024_CrowdStrike-related_IT_ou...

              • miki123211 1 day ago

                On the other hand, I suspect LLMs will dramatically decrease the window between a vulnerability being discovered and that vulnerability being exploited in the wild, especially for open-source projects.

                Even if the vulnerability itself is discovered through other means than by an LLM, it's trivial to ask a SOTA model to "monitor all new commits to project X and decide which ones are likely patching an exploitable vulnerability, and then write a PoC." That's a lot easier than finding the vulnerability itself.

                I won't be surprised if update windows (for open source networked services) shrink to ~10 minutes within a year or two. It's going to be a brutal world.

              • gjvc 1 day ago

                all fair points, on the other hand, as a general rule, isn't it important to stay on currently-supported versions of pieces of software that you run?

                ymmv, but in my experience projects like postgresql which have been reliable, tend to continue to be so.

              • mr_toad 1 day ago

                Too often I see IT departments use this as an excuse to only upgrade when they absolutely have to, usually with little to no testing in advance, which leaves them constantly being back-footed by incompatibility issues.

                The idea of testing new versions of software in advance (versions they’ll be forced to use eventually) never seems to occur to them, or they spend so much time fighting fires they never get around to it.

          • sakjur 1 day ago

            Ubuntu's upgrade tools wait until the .1 release for LTSes, so your typical installation would wait at least half a year.

        • rixed 1 day ago

          There is serious as in "corporate-serious" and serious as in "engineer-serious".

          • zbentley 22 hours ago

            I’ve seen more 5k+-core fleets running Ubuntu in prod than not, in my career. Industries include healthcare, US government, US government contractor, marketing, finance.

            • rixed 18 hours ago

              In other words, those industries that used to run Windows before?

              • zbentley 26 minutes ago

                I'd say about 2/3 of the places I've worked started on Linux without a Windows precedent other than workstations. I can't speak for the experience of the founding staff, though; they might have preferred Ubuntu due to Windows experience--if so, I'm curious as to why/what those have to do with each other.

                That said, Ubuntu in large production fleets isn't too bad. Sure, other distros are better, but Ubuntu's perfectly serviceable in that role. It needs talented SRE staff making sure automation, release engineering, monitoring, and de/provisioning behave well, but that's true of any you-run-the-underlying-VM large cloud deployment.

      • pmontra 1 day ago

        A customer of mine is running on Ubuntu 22.04 and the plan is to upgrade to 26.04 in Q1 2027. We'll have to add performance regression to the plan.

        • wongogue 1 day ago

          Are you running ARM servers?

    • MBCook 1 day ago

      So perhaps this is a regression specifically in the arm64 code, or said differently maybe it’s a performance bug that has been there for a long time but covered up by the scheduler part that was removed?

      • db48x 1 day ago

        Could be either of those, or something else entirely. Or even measurement error.

        • jeltz 1 day ago

          Turns out the amd machine had huge pages enabled, and after disabling those the regression was there on amd too. So arm vs amd was a red herring.

          Of course it's not a nice regression, but you should not run PostgreSQL on large servers without huge pages enabled, so this regression will only hurt people who have a bad configuration. That said, I think these bad configurations are common out there, especially in containerized environments where the one running PostgreSQL may not have the ability to enable huge pages.

          • whizzter 1 day ago

            Still, that huge a regression affecting multiple platforms doesn't sound too neat. Did they narrow down the root cause?

          • db48x 1 day ago

            Yes, I had a good laugh at that. It might technically be a regression, but not one that most people will see in practice. Pretty weird that someone at Amazon is bothering to run those tests without hugepages.

            • scottlamb 1 day ago

              I doubt they explicitly said "I'll run without huge pages, which is an important AWS configuration". They probably just forgot a step. And "someone at Amazon" describes a lot of people; multiply your mental probability tables accordingly.

              • db48x 22 hours ago

                The number of people at Amazon is pretty much irrelevant; the org is going to ensure that someone is keeping an eye on kernel performance, but also that the work isn’t duplicative.

                Surely they would be testing the configuration(s) that they use in production? They’re not running RDS without hugepages turned on, right?

                • scottlamb 11 hours ago

                  > The number of people at Amazon is pretty much irrelevant; the org is going to ensure that someone is keeping an eye on kernel performance, but also that the work isn’t duplicative.

                  I'd guess they have dozens of people across, say, a Linux kernel team, a Graviton hardware integration team, an EC2 team, and an Amazon RDS for PostgreSQL team who might at one point or another run a benchmark like this. They probably coordinate to an extent, but not so much that only one person would ever run this test. So yes, it is duplicative. And they're likely intending to test the configurations they use in production, yes, but people just make mistakes.

                  • db48x 7 hours ago

                    True; to err is human. But it is weird that they didn’t just fire up a standard RDS instance of one or more sizes and test those. After all, it’s already automated; two clicks on the website gets you a standard configuration and a couple more get you a 96c graviton cpu. I just wonder how the mistake happened.

                    • menaerus 6 hours ago

                      You're assuming that they ran the workload with huge-pages disabled unintentionally.

                      • db48x 3 hours ago

                        No… I’m assuming that they didn’t use the same automation that creates RDS clusters for actual customers. No doubt that automation configures the EC2 nodes sanely, with hugepages turned on. Leaving them turned off in this benchmark could have been accidental, but some accident of that kind was bound to happen as soon as the tests use any kind of setup that is different from what customers actually get.

                        • menaerus 45 minutes ago

                          You're again assuming that having huge pages turned on always brings the net benefit, which it doesn't. I have at least one example where it didn't bring any observable benefit while at the same time it incurred extra code complexity, server administration overhead, and necessitated extra documentation.

      • adrian_b 1 day ago

        The following messages concluded that using huge pages mitigates the regression, while not using huge pages reproduces it.

    • zamalek 1 day ago

      It was later reproduced on the same machine without huge pages enabled. PICNIC?

      • anarazel 1 day ago

        Yes, I did reproduce it (to a much smaller degree, but it's just a 48c/96t machine). But it's an absurd workload in an insane configuration. Not using huge pages hurts way more than the regression due to PREEMPT_LAZY does.

        With what we know so far, I expect that there are just about no real world workloads that aren't already completely falling over that will be affected.

        • pgaddict 1 day ago

          So why does it happen only without hugepages? Is the extra overhead / TLB pressure enough to trigger the issue in some way? Or is it because the regular pages get swapped out (which hugepages can't be)?

          • anarazel 1 day ago

            I don't fully know, but I suspect it's just that due to the minor faults and TLB misses there is terrible contention on the spinlock, regardless of PREEMPT_LAZY, when using 4k pages (that's easily reproducible). Which is then made worse by preempting more with the lock held.

    • torginus 1 day ago

      It's a huge issue with ARM-based systems that hardly anyone uses or tests things on them (in production).

      Yes, Macs going ARM has been a huge boon, but I've also seen crazy regressions on AWS Graviton (compared to how it's supposed to perform), on .NET (and node as well), which frankly I have no expertise or time for digging into.

      Which was the main reason we ultimately cancelled our migration.

      I'm sure this is the same reason why it's important to AWS.

      • p_l 1 day ago

        Macs are actually part of the pain point with ARM64 Linux, because Linux arm64 server distros tend to use 64 kB pages while Apple's hardware supports only 4 and 16 kB, and it causes non-trivial bugs at times (funnily enough, I first encountered that in a database company...)

  • justinclift 1 day ago

    Note that it's not just a single post; there's further information in the rest of the thread. :)

    • adrian_b 1 day ago

      Yes, and in the following messages the conclusion was that the regression is mitigated when using huge pages.

      • jeltz 1 day ago

        Which you always should use anyway if you can.

        • justinclift 23 hours ago

          Hmmm, it's not always that clear cut.

          For example, Redis officially advised people to disable it due to a latency impact:

          https://redis.io/docs/latest/operate/oss_and_stack/managemen...

          Pretty sure Redis even outputs a warning to the logs upon startup when it detects hugepages are enabled.

          Note that I'm not a Redis expert, I just remember this from when I ran it as a dependency for other software I was using.

          • fabian2k 22 hours ago

            That's transparent huge pages, which are also not the setting recommended for PostgreSQL.

          • jeltz 13 hours ago

            1) That is about transparent huge pages which is a different thing and 2) it is always clear cut for PostgreSQL. If you can you should always use huge pages (the non-transparent kind).
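            Concretely, "the non-transparent kind" is configured at both the OS and Postgres level — a sketch, where the page count is purely illustrative and must be sized to your shared_buffers:

            ```ini
            # /etc/sysctl.conf -- reserve explicit 2 MiB huge pages at the OS level.
            # Illustrative value (~8 GiB worth); size to shared_buffers plus overhead.
            vm.nr_hugepages = 4200

            # postgresql.conf -- "on" makes the server refuse to start if the huge
            # pages can't be allocated; the default "try" silently falls back to
            # 4 kB pages and can mask exactly the misconfiguration discussed here.
            huge_pages = on
            ```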

      • devchix 16 hours ago

        This seems bad; Splunk advises you to turn off THP due to its small read/write characteristics: https://help.splunk.com/en/splunk-enterprise/release-notes-a...

        Bad because as of Splunk 10.x, Splunk bundles postgres to integrate with their SOAR platform. Parenthetically, this practice of bundling stuff with Splunk is making vuln remediation a real pain. Splunk bundles its own python, mongod, and now postgres, instead of doing dependency checking. They're going to have to keep doing it as long as they release a .tgz and not just an RPM. The most recent postgres vuln is not fixed in Splunk.

        • menaerus 6 hours ago

          Huge pages and THP are not the same thing.

  • aftbit 1 day ago

    >If this somehow does end up being a reproducible performance issue (I still suspect something more complicated is going on), I don't see how userspace could be expected to mitigate a substantial perf regression in 7.0 that can only be mitigated by a default-off non-trivial functionality also introduced in 7.0.

    • cr125rider 1 day ago

      They said the magic words to get Linus to start flipping tables. Never break userspace. Unusably slow is broken.

  • anal_reactor 1 day ago

    > Maybe we should, but requiring the use of a new low level facility that was introduced in the 7.0 kernel, to address a regression that exists only in 7.0+, seems not great.

    Completely right. This sounds like a communication failure. Maybe Linux maintainers should pick a few applications that have "priority support" and problems with these applications are also problems with Linux itself. Breaking Postgres is a serious regression.

    Reminds me of a situation where Fedora couldn't be updated if you had Wine installed and one side of the argument was "user applications are user problem" while the other was "it's Wine, like come on".

    • falcor84 1 day ago

      I for one liked the old and simple WE DO NOT BREAK USERSPACE attitude.

      https://linuxreviews.org/WE_DO_NOT_BREAK_USERSPACE

      • reisse 1 day ago

        Not sure it is true anymore. I've encountered a few userspace breaks in io_uring, at least.

      • gcr 1 day ago

        Performance regressions are different from ABI incompatibilities. If the kernel refused to do any work that slowed down any userspace program, the pace would go a lot slower.

        • shadowgovt 1 day ago

          Or be a lot uglier. See: Microsoft replacing its own API surfaces with binary-compatible representations to work around companies like Adobe adding "perf improvements" like bypassing the kernel-provided object constructors, because it saved them a few cycles to just hard-code the objects they wanted and memcpy them into existence.

          • cogman10 1 day ago

            Microsoft's whole "Let's just ship all the dlls" attitude is a big part of the reason a windows install is like 300GB now.

            Eventually you'd expect that something has to give.

  • fxtentacle 1 day ago

    .. which confirms all of my stereotypes. Looks like the AWS engineer who reported it used a m8g.24xlarge instance with 384 GB of RAM, but somehow didn't know or care to enable huge pages. And once they're enabled, the performance regression disappears.

    • bushbaba 1 day ago

      Because such settings aren’t obvious to those not familiar with them. LLMs should make discoverability easier though

      • perrygeo 1 day ago

        Honest question: what's the value of running the benchmark and reporting a performance regression if the author is not familiar with basic operation of the software? I'd argue that not understanding those settings disqualifies you from making statements about it.

        • cogman10 23 hours ago

          The performance was reduced without a settings change. That is still a regression even if huge pages mitigates the problem.

          I'd be curious to know if there's still a regression with hugepages turned on in older kernels.

          If you are benchmarking something and the only changed variable between benchmarks is the kernel, that is useful information. Even if your environment isn't correctly setup.

monocasa 1 day ago

I feel like using spinlocks in user space at all without kernel support like rseq is just asking for weird performance degradations.

  • jcalvinowens 1 day ago

    > I feel like using spinlocks in user space at all without kernel support like rseq is just asking for weird performance degradations.

    Yeah, exactly. "Doctor, help, somebody replaced my wooden hammer with a metal one, and now I can't hit myself in the face with it as many times."

    If you use spinlocks in userspace, you're gonna have a bad time.

    • mgaunard 1 day ago

      Most people looking for performance will reach for the spinlock.

      The expectation is that the kernel should somehow detect applications that are spinning, and avoid preempting them early.

      • IshKebab 1 day ago

        Well that seems like an unreasonable expectation no? Also isn't the point of spinlocks that they get released before the kernel does anything? Otherwise you could just use a futex... Which maybe you should do anyway...

        https://matklad.github.io/2020/01/04/mutexes-are-faster-than...
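        For anyone following along, the futex primitive under discussion is just a pair of raw syscalls — a minimal Linux-only sketch (wrapper names are mine, error handling omitted):

        ```c
        // Minimal futex wait/wake wrappers (Linux-only sketch). glibc exposes
        // no wrapper for futex(2), so raw syscall() is the usual approach.
        #define _GNU_SOURCE
        #include <linux/futex.h>
        #include <sys/syscall.h>
        #include <unistd.h>
        #include <stdatomic.h>

        // Sleep only while *addr still equals expected -- the kernel re-checks
        // the value under its own lock, which is what prevents lost wakeups.
        // Returns -1 with errno EAGAIN immediately if the value already changed.
        static long futex_wait(atomic_int *addr, int expected) {
            return syscall(SYS_futex, addr, FUTEX_WAIT, expected, NULL, NULL, 0);
        }

        // Wake up to nwaiters threads sleeping on addr; returns how many woke.
        static long futex_wake(atomic_int *addr, int nwaiters) {
            return syscall(SYS_futex, addr, FUTEX_WAKE, nwaiters, NULL, NULL, 0);
        }
        ```

        A mutex built on this spins briefly in userspace and only enters the kernel (futex_wait) when contended, which is the contrast with pure spinlocks the thread is drawing.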

        • mgaunard 17 hours ago

          The scheduling is based on how much the LWP made use of its previous time slices. A spinning program clearly is using every cycle it's given without yielding, and so you can clearly tell preemption should be minimized.

      • silon42 21 hours ago

        If you are spinning so long that it requires preemption, you're doing something wrong, no?

        • jcalvinowens 21 hours ago

          It doesn't matter, it's a long tail thing: on average user spinlocks can work, and even appear to be beneficial on benchmarks (for many reasons, Andy alludes to some above). But if you have enough users, some of them will experience the apocalyptic long tail, no matter what you do: that's why user spinlocks are unacceptable. RSEQ is the first real answer for this, but it's still not a guarantee: it is not possible to disable SCHED_OTHER preemption in userspace.

          If I make something 1% faster on average, but now a random 0.000001% of its users see a ten-second stall every day, I lose.

          It is tempting to think about it as a latency/throughput tradeoff. But it isn't that simple, the unbounded thrashing can be more like a crash in terms of impact to the system.

          • mgaunard 17 hours ago

            Well, you can always pin to a core and move other threads out of that core.

            That's what you'd do if manually scheduling. Ideally the dynamic scheduler would do that on its own.

            • jcalvinowens 15 hours ago

              Sure. But if you squint even that isn't good enough, you'll still take interrupts on that core in the critical section sometimes when somebody else wants the lock.

              The other problem with spin-wait is that it overshoots, especially with an increasing backoff. Part of the overhead of sleeping is paid back by being woken up immediately.

              When it's made to work, the backoff is often "overfit" in that very slight random differences in kernel scheduler behavior can cause huge apparent regressions.

  • jeltz 1 day ago

    PostgreSQL is old and had to support kernels which did not support futexes. But, yes, maybe PostgreSQL should stop doing so now that kernels do.

  • anarazel 1 day ago

    I really dislike the use of spinlocks in postgres (and have been replacing a lot of uses over time), but it's not always easy to replace them from a performance angle.

    On x86 a spinlock release doesn't need a memory barrier (unless you do insane things) / lock prefix, but a futex based lock does (because you otherwise may not realize you need to futex wake). Turns out that that increase in memory barriers causes regressions that are nontrivial to avoid.

    Another difficulty is that most of the remaining spinlocks are just a single bit in a larger 8-byte atomic. Futexes still don't support anything but 4 bytes (we could probably get away with using it on a part of the 8 byte atomic with some reordering) and unfortunately postgres still supports platforms with no 8 byte atomics (which I think is supremely silly), and the support for a fallback implementation makes it harder to use futexes.

    The spinlock triggering the contention in the report was just stupid and we only recently got around to removing it, because it isn't used during normal operation.

    Edit: forgot to add that the spinlock contention is not measurable on much more extreme workloads when using huge pages. A 100GB buffer pool with 4KB pages doesn't make much sense.
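    The store-vs-RMW asymmetry described above can be sketched with C11 atomics — a simplified illustration, not the actual Postgres code:

    ```c
    // Simplified contrast of the two release paths (not Postgres code).
    #include <stdatomic.h>

    // Spinlock release: a plain release store suffices. On x86 this
    // compiles to an ordinary MOV -- no LOCK prefix, no full barrier.
    void spin_unlock(atomic_int *lock) {
        atomic_store_explicit(lock, 0, memory_order_release);
    }

    // Futex-based release: the unlocker must learn atomically whether a
    // waiter registered itself, so it needs a read-modify-write (XCHG on
    // x86, which is implicitly locked) before deciding to futex_wake().
    #define F_LOCKED  1
    #define F_WAITERS 2
    int futex_unlock_needs_wake(atomic_int *lock) {
        int old = atomic_exchange_explicit(lock, 0, memory_order_release);
        return (old & F_WAITERS) != 0;   // caller calls futex_wake() if set
    }
    ```

    The extra locked RMW on every release is the "increase in memory barriers" referred to above.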

    • anarazel 1 day ago

      Addendum big enough to warrant a separate post: the fact that the contention is on a spinlock, rather than a futex, is unrelated to the "regression".

      A quick hack shows the contended performance to be nearly indistinguishable with a futex based lock. Which makes sense: non-PI futexes don't transfer the scheduler slice to the lock owner, because they don't know who the lock owner is. Postgres' spinlocks use randomized exponential backoff, so they don't prevent the lock owner from getting scheduled.

      Thus the contention is worse with PREEMPT_LAZY, even with non-PI futexes (which is what typical lock implementations are based on), because the lock holder gets scheduled out more often.

      Probably worth repeating: This contention is due to an absurd configuration that should never be used in practice.
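      For readers unfamiliar with randomized exponential backoff, a rough sketch of the acquire path (constants and structure invented for illustration; this is not the actual Postgres s_lock code):

      ```c
      // Sketch of a spinlock acquire with randomized exponential backoff
      // (illustrative constants; not the actual Postgres implementation).
      #include <stdatomic.h>
      #include <stdlib.h>
      #include <time.h>

      void backoff_spin_lock(atomic_int *lock) {
          unsigned cap_us = 1;   // backoff cap in microseconds, grown exponentially
          while (atomic_exchange_explicit(lock, 1, memory_order_acquire)) {
              // Sleep a random duration up to the cap. Actually sleeping
              // (instead of pure spinning) lets a preempted lock holder get
              // scheduled, which is why these locks don't starve the owner.
              struct timespec ts = { 0, ((rand() % cap_us) + 1) * 1000L };
              nanosleep(&ts, NULL);
              if (cap_us < 1000) cap_us *= 2;   // cap the backoff near 1 ms
          }
      }

      void backoff_spin_unlock(atomic_int *lock) {
          atomic_store_explicit(lock, 0, memory_order_release);
      }
      ```

      The randomization spreads waiters out so they don't all retry in lockstep on the same cache line.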

      • menaerus 6 hours ago

        Contention doesn't exist in older kernel versions even with huge-pages disabled, no?

        • anarazel 3 hours ago

          The contention does exist in older kernels and is quite substantial.

          • menaerus 51 minutes ago

            You said

            > Maybe we should, but requiring the use of a new low level facility that was introduced in the 7.0 kernel, to address a regression that exists only in 7.0+, seems not great.

            ... so that leaves me confused. My understanding is that the regression is triggered with the 7.0+ kernel and can be mitigated with huge pages turned on.

            My question therefore was how come this regression hasn't been visible with huge pages turned off with older kernel versions? You say that it was but I can't find this data point.

            • anarazel 27 minutes ago

              > ... so that leaves me confused. My understanding is that the regression is triggered with the 7.0+ kernel and can be mitigated with huge pages turned on.

              It gets a bit worse with preempt_lazy - for me just 15% or so - because the lock holder is scheduled out a bit more often. But it was bad before.

              > My question therefore was how come this regression hasn't been visible with huge pages turned off with older kernel versions? You say that it was but I can't find this data point.

              I mean it wasn't a regression before, because this is how it has behaved for a long time.

              This workload is not a realistic thing that anybody would encounter in this form in the real world. Even without the contention - which only happens the first time the buffer pool is filled - you lose so much by not using huge pages with a 100gb buffer pool that you will have many other issues.

              We (postgres and me personally) were concerned enough about potential contention in this path that we did get rid of that lock half a year ago (buffer replacement selection has been lock free for close to a decade, just unused buffers were found via a list protected by this lock).

              But the performance gains we saw were relatively small, we didn't measure large buffer pools without huge pages though.

              And at least I didn't test with this many connections doing small random reads into a cold buffer pool, just because it doesn't seem that interesting.

    • amluto 23 hours ago

      > On x86 a spinlock release doesn't need a memory barrier (unless you do insane things) / lock prefix, but a futex based lock does (because you otherwise may not realize you need to futex wake).

      Now you've gotten me wondering. This issue is, in some sense, artificial: the actual conceptual futex unlock operation does not require sequential consistency. What's needed is (roughly, anyway) a release operation that synchronizes with whoever subsequently acquires the lock (on x86, any non-WC store is sufficient) along with a promise that the kernel will get notified eventually (and preferably fairly quickly) if there was a non-spinning sleeper. But there is no requirement that the notification occur in any particular order wrt anything else except that the unlock must be visible by the time the notification occurs [0]; there isn't even a requirement that the notification not occur if there is no futex waiter.

      I think that, in common cache coherence protocols, this is kind of straightforward -- the unlock is a store-release, and as long as the cache line ends up being written locally, the hardware or ucode or whatever simply [1] needs to check whether a needs-notification flag is set in the same cacheline. Or the futex-wait operation needs to do a super-heavyweight barrier to synchronize with the releasing thread even though the releasing thread does not otherwise have any barrier that would do the job.

      One nasty approach that might work is to use something like membarrier, but I'm guessing that membarrier is so outrageously expensive that this would be a huge performance loss.

      But maybe there are sneaky tricks. I'm wondering whether CMPXCHG (no lock) is secretly good enough for this. Imagine a lock word where bit 0 set means locked and bit 1 set means that there is a waiter. The wait operation observes (via plain MOV?) that bit 0 is set and then sets bit 1 (let's say this is done with LOCK CMPXCHG for simplicity) and then calls futex_wait(), so it thinks the lock word has the value 3. The unlock operation does plain CMPXCHG to release the lock. The failure case would be that it reports success while changing the value from 1 to 0. I don't know whether this can happen on Intel or AMD architectures.

      I do expect that it would be nearly impossible to convince an x86 CPU vendor to commit to an answer either way.

      (Do other architectures, e.g. the most recent ARM variants, have an RMW release operation that naturally does this? I've tried, and entirely failed AFAICT, to convince x86 HW designers to add lighter weight atomics.)

      [0] Visible to the remote thread, but the kernel can easily mediate this, effectively for free.

      [1] Famous last words. At least in ossified microarchitectures, nothing is simple.

      • anarazel 22 hours ago

        > > On x86 a spinlock release doesn't need a memory barrier (unless you do insane things) / lock prefix, but a futex based lock does (because you otherwise may not realize you need to futex wake).

        > Now you've gotten me wondering. This issue is, in some sense, artificial: the actual conceptual futex unlock operation does not require sequential consistency. What's needed is (roughly, anyway) an release operation that synchronizes with whoever subsequently acquires the lock (on x86, any non-WC store is sufficient) along with a promise that the kernel will get notified eventually (and preferably fairly quickly) if there was a non-spinning sleeper. But there is no requirement that the notification occur in any particular order wrt anything else except that the unlock must be visible by the time the notification occurs [0]; there isn't even a requirement that the notification not occur if there is no futex waiter.

        Hah.

        > ... > But maybe there are sneaky tricks. I'm wondering whether CMPXCHG (no lock) is secretly good enough for this. Imagine a lock word where bit 0 set means locked and bit 1 set means that there is a waiter. The wait operation observes (via plain MOV?) that bit 0 is set and then sets bit 1 (let's say this is done with LOCK CMPXCHG for simplicity) and then calls futex_wait(), so it thinks the lock word has the value 3. The unlock operation does plain CMPXCHG to release the lock. The failure case would be that it reports success while changing the value from 1 to 0. I don't know whether this can happen on Intel or AMD architectures.

        I suspect the problem isn't so much the lock prefix, but that the non-futex spinlock release just is a store, whereas a futex release has to be a RMW operation.

        I'm talking out of my ass here, but my guess is that the reason for the performance gain of the plain-store-is-a-spinlock-release on x86 comes from being able to do the release via the store buffer, without having to wait for exclusive ownership of the cache line. Due to being a somewhat contended simple spinlock, often embedded on the same line as the to-be-protected data, it's common for the line not to be in modified ownership anymore at release.

        • amluto 21 hours ago

          > I suspect the problem isn't so much the lock prefix, but that the non-futex spinlock release just is a store, whereas a futex release has to be a RMW operation.

          > I'm talking out of my ass here, but my guess is that the reason for the performance gain of the plain-store-is-a-spinlock-release on x86 comes from being able to do the release via the store buffer, without having to wait for exclusive ownership of the cache line.

          I don’t think so. The CPU is pretty good about hiding that kind of latency — reading a contended cache line and doing a correctly predicted branch shouldn’t stall anything after it.

          But LOCK and MFENCE are quite expensive.

      • adrian_b 7 hours ago

        Using LOCK CMPXCHG or even plain CMPXCHG does not make sense unless it is done in a loop that tests the success of the operation.

        Implementing locks does not need these kinds of loops, which can greatly increase the overhead; it needs only loops of simple loads, for detecting changes, or the invocation of FUTEX_WAIT, which is equivalent to that.

        Besides loops that wait for changes, any kind of lock may be implemented with atomic read-modify-write instructions (e.g. on x86 XCHG, LOCK XADD, LOCK BTS and so on, and equivalent instructions on Armv8.1-A or later ISAs) that are not used in loops, so they have predictable overhead. For example, a futex may be used by a thread that waits for multiple events, if the other threads use a locked bit-test-and-set on the futex variable to signal the occurrence of an event, where each event is assigned to a distinct bit.

        CMPXCHG and the equivalent load-linked/store-conditional are really needed far less often than some people use them. The culprit is a widely-quoted research paper showing that these instructions are more universal than simple atomic fetch-and-operation instructions, since they allow implementing lock-free algorithms. But the fact that they can do more does not mean they should be used when their extra power is not necessary, because that is paid for dearly with non-deterministic overhead.

        A simple atomic instruction has an overhead much greater than an access to the L1 cache or the L2 cache, but typically the overhead is similar to that of a simple access to the L3 cache and significantly lower than the overhead of a simple access to the main memory, which remains the most expensive operation in modern CPUs.

        Moreover, while mutual exclusion can be implemented reasonably efficiently with locks, it is also used far more often than necessary. It is possible to implement shared buffers or message queues that use neither mutual exclusion nor optimistic access that may need to be retried (a.k.a. lock-free access); instead, they use dynamic partitioning of the shared resource, allowing concurrent accesses without interference.

        Organizing the cooperation between threads around shared buffers/message queues is frequently much better than using mutual exclusion, which stalls all contending threads, serializing their execution, and also much better than lock-free access, which may need an unpredictable number of retries when contention is high.

        • amluto 1 hour ago

          You are misunderstanding me, which is perhaps understandable, since I’m talking about the minutiae of x86, not locking in general.

          When unlocking a futex-backed mutex, one needs to do two things. First, one needs to actually unlock it: this is a store-release in modern lingo, and on x86 almost any store instruction has the correct ordering semantics. Second, one needs to determine whether to call futex_wake, which is conceptually just reading a flag “is someone waiting” and then branching on the result. The problem is that the load needs to be ordered after (or at least not before) the store.

          x86 provides two main ways to do this, MFENCE and LOCK. For whatever reason, at least Intel has tried pretty hard to optimize LOCK, and it’s often the case that LOCKed operations on a hot cache line are faster than MFENCE. (I have benchmarked this, and Linux uses this trick.)

          My point is that the specific algorithm of unlocking a futex-backed mutex does not require the full ordering semantics of MFENCE or LOCK. And my secondary observation is that x86 has some non-LOCKed RMW instructions, one of which is plain CMPXCHG. Unlocked CMPXCHG is much faster than LOCK anything or MFENCE — I’ve benchmarked it. There are also the flag outputs from operations like ADD. And I’m speculating that maybe some of these instructions are secretly actually ordered strongly enough for futex unlock.

    • jcalvinowens 21 hours ago

      That 64-bit atomic in the buffer head with flags, a spinlock, and refcounts all jammed into it is nasty. And there are like ten open coded spin waits around the uses... you certainly have my empathy :)

      This got me thinking about 64-bit futexes again. Obviously that can't work with PI... but for just FUTEX_WAIT/FUTEX_WAKE, why not?

      Somebody tried a long time ago, it got dropped but I didn't actually see any major objection: https://lore.kernel.org/lkml/20070327110757.GY355@devserv.de...

      • anarazel 20 hours ago

        > That 64-bit atomic in the buffer head with flags, a spinlock, and refcounts all jammed into it is nasty.

        Turns out to be pretty crucial for performance though... Not manipulating them with a single atomic leads to way way worse performance.

        For quite a while it was a 32bit atomic, but I recently made it a 64bit one, to allow the content lock (i.e. protecting the buffer contents, rather than the buffer header) to be in the same atomic var. For one, that's nice for performance: it's e.g. very common to release a pin and a lock at the same time, and there are more fun perf things we can do in the future. But the real motivation was work on adding support for async writes - an exclusive locker might need to consume an IO completion for an in-flight write that is preventing it from acquiring the lock. And that was hard to do with a separate content lock and buffer state...

        > And there are like ten open coded spin waits around the uses... you certainly have my empathy :)

        Well, nearly all of those are there to avoid needing to hold a spinlock, which, as lamented a lot around this issue, doesn't perform that well when really contended :)

        We're on our way to barely ever need the spinlock for the buffer header, which then should allow us to get rid of many of those loops.

        > This got me thinking about 64-bit futexes again. Obviously that can't work with PI... but for just FUTEX_WAIT/FUTEX_WAKE, why not?

        It'd be pretty nice to have. There are a lot of cases where one needs more lock state than one can really encode into 32 bits.

        I'm quite keen to experiment with the rseq time slice extension stuff. Think it'll help with some important locks (which are not spinlocks...).

        • jcalvinowens 20 hours ago

          > Turns out to be pretty crucial for performance though...

          I don't doubt it. I just meant nasty with respect to using futex() to sleep instead of spin, I was having some "fun" trying.

          I can certainly see how pushing that state into one atomic would simplify things, I didn't really mean to question that.

          > We're on our way to barely ever need the spinlock for the buffer header, which then should allow us to get rid of many of those loops.

          I'm cheering you on. I hadn't looked at this code before and it's been fun looking through some of the recent work on it.

          > It'd be pretty nice to have. There are lot of cases where one needs more lock state than one can really encode into a 32bit lock state.

          I've seen too much open coded spinning around 64-bit CAS in proprietary code, where it was a real demonstrable problem, and similar to here it was often not straightforward to avoid. I confess to some bias because of this experience ("not all spinlocks...") :)

          I remember a lot of cases where FUTEX_WAIT64/FUTEX_WAKE64 would have been a drop-in solution, that seems compelling to me.

dsr_ 1 day ago

Nobody sensible runs the latest kernel; nobody running PG in production should be afraid of setting a non-default value, either at boot time or via sysctl. So this will, most likely, become another step in building a PG database server (turn off preemption if your kernel is 7.0 or later and PG is pre-whatever-version).

At worst it might become a permanent part of building a PG server and a FAQ... but if it affects one thing this badly, it will affect others.

  • stingraycharles 1 day ago

    That may be the case, but it’s still not a great situation to be in and one has to wonder: if PostgreSQL is affected, what else is?

    • bombcar 1 day ago

      That's the big thing - PSQL will be tested, noticed, and fixed (and likely have a version that handles 7.0 by the time it's in common use).

      But other software won't and may not even be noticed, except as a (I hate using the term) enshittification.

      Better to introduce the "correct way" in 7.0 but not regress the old (translate the "correct" into the old if necessary) - and then in 8.0 or some future release implement the regression.

      • stingraycharles 1 day ago

        Exactly, this is how it's usually done. As the developer on the mailing list mentions, introducing in 7.0 both a new low-level construct and a performance regression that requires said construct to mitigate is not great. You need a grace period in which both the old and the new approach are fast.

  • Meekro 1 day ago

    > Nobody sensible runs the latest kernel

    From the article: "Linux 7.0 stable is due out in about two weeks. This is also the kernel version powering Ubuntu 26.04 LTS to be released later in April."

    Unfortunately, lots of people will be running it in less than a month. At the moment, it'll take a kernel patch (not a sysctl) to undo this-- hopefully something changes.

    • Neywiny 1 day ago

      Not nobody but not everybody upgrades to the newest distros immediately. That's the advantage of LTS. I've even found that a lot of programs have poorer support on 24.04 than 22.04 due to security changes, so I'm fine sticking with 22.04 as my main dev system.

      • justinclift 1 day ago

        > ... not everybody upgrades to the newest distros immediately.

        While that's true, for new deployments the story is often "deploy on the latest release of things available at the time".

        So, there will probably be a substantial deployment of new projects / testing projects using the Linux 7.0 kernel along with the latest available software packages in a few weeks.

        • Maxion 1 day ago

          I would argue it's mainly inexperienced devs who deploy on the very latest. Once you get some more years under your belt you realize the value of LTS versions, even if you don't get the shiniest shiny.

          • yunohn 1 day ago

            > kernel version powering Ubuntu 26.04 *LTS*

            • josh-sematic 1 day ago

              Yes it’s LTS but the point is that the LTS system has overlapping support so you can wait on an older LTS for a bit before upgrading to a newer one. And it’s somewhat prudent to do so if you value stability highly, because often a few new issues will be discovered and patched after LTS goes live for a bit.

      • esafak 1 day ago

        That's the advantage of LTS? 24.04 is the LTS, not the one you use, 22.04.

        • SoftTalker 1 day ago

          22.04 is also an LTS release, supported for another year still.

          https://ubuntu.com/about/release-cycle

          We're just now looking at moving production machines to 24.04.

          • apelapan 1 day ago

            If you are on a maintenance contract with Ubuntu, 22.04 is supported until 2032.

            If it aint broken, don't fix it.

        • cortesoft 1 day ago

          All even number .04 releases are LTS in Ubuntu

      • vasco 1 day ago

        Someone said "its fine nobody uses this" and someone else gave the world's biggest slam dunk of "Ubuntu in 1 month" and your reply is that "not everyone does it". How far from the point can you be!

        In the Linux world this is the worst possible scenario, distro with the largest adoption, LTS.

        • ndsipa_pomu 1 day ago

          Not trying to downplay the importance of this, but the LTS versions aren't until the first point release, so 26.04.1 (typically six months or so after the release).

          • electroly 21 hours ago

            Is that true? I haven't heard that before. Do you have a link?

            Here's how they announced 24.04.0. It says LTS and doesn't mention anything about LTS coming in the .1 release: https://canonical.com/blog/canonical-releases-ubuntu-24-04-n...

            • ndsipa_pomu 21 hours ago

              I can't find any link, so I think I'm getting mixed up between what they consider LTS and when the upgrade tool starts prompting to upgrade. If you're on the 24.04 LTS, then you don't get prompted to upgrade until 26.04.1

    • 999900000999 1 day ago

      Depends on your shop.

      As someone with a heavy QA/DevOps background, I don't think we have enough details.

      Is it only ARM64 ? How many ARM64 PG DBs are running 96 cores?

      However...

      This is the most popular database in the world. Odds are this will affect a bunch of other lesser-known applications.

    • teekert 1 day ago

      I think most people on enterprise-y systems wait for (at least) 26.04.1. The upgrade window is 3 years starting now when on 24.04 (which is supported until ~2029-04-30), and 1 year when on 22.04; hardly anyone switches immediately.

    • tankenmate 1 day ago

      Not necessarily;

      ```
      $ grep PREEMPT_DYNAMIC /boot/config-$(uname -r)
      CONFIG_PREEMPT_DYNAMIC=y
      CONFIG_HAVE_PREEMPT_DYNAMIC=y
      CONFIG_HAVE_PREEMPT_DYNAMIC_CALL=y
      ```

      if your kernel has CONFIG_PREEMPT_DYNAMIC then you can go back to the pre 7.0 default by adding preempt=none to your grub config. I haven't seen any plans by Ubuntu to drop CONFIG_PREEMPT_DYNAMIC from the default kernel config.

      • tankenmate 1 day ago

        actually I just checked - yeah, Ubuntu would have to add the "none" option back by building the kernel with `CONFIG_PREEMPT_NONE=y` so that it can be selected at boot.

  • bombcar 1 day ago

    We need some sensible people running the latest and greatest or we won't catch things like this.

  • cwillu 1 day ago

    The option to set PREEMPT_NONE was removed for basically all platforms.

  • Seattle3503 1 day ago

    If you're running in a docker container you share the host kernel. You might not have a choice.

longislandguido 1 day ago

Anyone check to see if Jia Tan has submitted any kernel patches lately?

  • rs_rs_rs_rs_rs 1 day ago

    They don't need to, there's about a billion bugs they can exploit.

cperciva 1 day ago

This makes me feel better about the 10% performance regression I just measured between FreeBSD 14 and FreeBSD 15.0.

  • db48x 1 day ago

    Heh. Did they at least add useful features to balance out that cost?

    • cperciva 21 hours ago

      FreeBSD 15 has lots of useful features! And better performance on other benchmarks; I just need to track down what's going wrong with this particular one.

      • db48x 7 hours ago

        Certainly could be worse!

bob1029 1 day ago

I'm struggling a bit with why we need all these fancy dynamic preemption modes. Is this about hyperscalers shoving more VMs per physical machine? What does a person trying to host a software solution gain from this kernel change?

If a user wants to spin in an infinite loop all day every day, I don't see the problem with that. Even if the spinning will provably never do any useful work.

  • ponco 1 day ago

    More throughput WITHOUT huge tail latency, is my understanding. A user above posted this link https://lwn.net/Articles/994322/ which goes into the background. My mental model is "give the kernel more explicit information" and it will be able to make better decisions.

teleforce 1 day ago

Does PostgreSQL 18's performance increase from the latest asynchronous I/O and smarter query planning with improved parallelism offset this performance hit? [1]

"Enhanced and smarter parallelisation; initial benchmarks indicate up to 40% faster analytical queries".

[1] PostgreSQL 18 released: Key features & upgrade tips:

https://www.baremon.eu/postgresql-18-released-key-features-u...

Deeg9rie9usi 1 day ago

Once again phoronix shot out an article without doing further research or letting the mail thread in question cool down. The follow-up mails make clear that the issue is more or less a non-issue, since the benchmark is wrong.

  • adrian_b 1 day ago

    The follow-up mails conclude that the regression happens only when huge pages are not used.

    While using huge pages whenever possible is the right solution and this should be enough for PostgreSQL, perhaps there are applications that cannot use huge pages and which are affected by the regression.

    So I do not think that it is right to just ignore what happened.

    • Deeg9rie9usi 1 day ago

      I agree with you. The lurid headlines of phoronix.com just annoy me...

    • scottlamb 1 day ago

      > While using huge pages whenever possible is the right solution and this should be enough for PostgreSQL, perhaps there are applications that cannot use huge pages and which are affected by the regression.

      It will be more interesting to talk about those applications if and when they are found. And I wouldn't assume the solutions are limited to reverting this change, starting to use the new spinlock time-slice extension mechanism, and enabling huge pages.

      It sounds like using 4K pages with 100G of buffer cache was just the thing that made this spinlock's critical section become longer than PostgreSQL's developers had seen before. So when trying to apply the solution to some hypothetical other software that is suddenly benchmarking poorly, I'd generalize from "enable huge pages" to "look for other differences between your benchmark configuration and what the software's authors tested on".

      • justinclift 23 hours ago

        > It will be more interesting to talk about those applications if and when they are found.

        Redis recommend disabling hugepages: https://redis.io/docs/latest/operate/oss_and_stack/managemen...

        ---

        Actually, looks like they changed the log warning to be more specific, as it's just the "always" setting which seems to cause Redis grief?

        https://github.com/redis/redis/issues/3895

        • scottlamb 11 hours ago

          By "those applications" I'm talking about other applications affected by this regression. There are several apps in addition to Redis that recommend limiting the transparent huge page configuration. (Some of them recommend using explicit huge pages instead.) But it's quite possible none of them are affected by this regression, as it may be particular to apps using spinlocks. (Certainly the new rseq API mentioned in the thread is targeted at spinlock users.) It seems equally possible to me that some spinlock-using app has a regression irrespective of huge pages.

galbar 1 day ago

It's not a good look to break userspace applications without a deprecation period where both old and new solutions exist, allowing for a transition period.

FireBeyond 1 day ago

Once upon a time, Linus would shout and yell about how the kernel should never "break" userspace (and I see in some places, some arguments of "It's not broken, it's just a performance regression" - personally I'd argue a 50% hit to performance of a pre-eminent database engine is ... quite the regression).

Now, the kernel engineer who introduced the brand new mechanism (introduced in Linux 7.0) for handling pre-emption says the "fix" is for Postgres to start using this new mechanism (I think the sister comment below links to what one of the Postgres engineers thinks of that, and I'm inclined mostly to agree).

  • bear8642 1 day ago

    > I'd argue a 50% hit to performance [...] is ... quite the regression

    Indeed! Especially if said regression happens to impact anything trade/market related...

  • perching_aix 1 day ago

    Entertaining perspective - I thought that the whole "it's not an outage it's a (horizontal or vertical) degradation" thing was exclusive to web services, but thinking about it, I guess it does apply even in cases like this.

  • quietsegfault 1 day ago

    This was my immediate thought - kernel doesn’t break software, or at least it didn’t used to.

  • MBCook 1 day ago

    It wouldn’t be the first time one of the other maintainers ran afoul of “Linus’s law“.

    He may simply be waiting until more is known on exactly what’s causing it.

  • arjie 1 day ago

    Well, the reason he'd yell about it is that someone did it. If no one ever did it, he'd never yell and we'd never have the rule. So one can only imagine that this is one of those things where someone has to keep holding the line rather than one of those things where you set some rule and it self-holds.

    Doubtless someone will have to do the yelling.

  • shakna 1 day ago

    Freund seems to suggest that hugepages is the right way to run a system under this sort of load - which is the fix.

    > Hah. I had reflexively used huge_pages=on - as that is the only sane thing to do with 10s to 100s of GB of shared memory and thus part of all my benchmarking infrastructure - during the benchmark runs mentioned above.

    > Turns out, if I disable huge pages, I actually can reproduce the contention that Salvatore reported (didn't see whether it's a regression for me though). Not anywhere close to the same degree, because the bottleneck for me is the writes.

    But, they can speak for themselves here [0].

    [0] https://news.ycombinator.com/item?id=47646332

anal_reactor 1 day ago

Can someone explain to me what's the problem? I have very little knowledge of Linux kernel, but I'm curious. I've tried reading a little, but it's jargon over jargon.

  • alienchow 1 day ago

    I'm not familiar with the jargon either, but based on some reading it comes down to how the latest kernel treats process preempts.

    Postgres uses spinlocks to protect shared memory in very critical paths. A spinlock is a loop that repeatedly tries to acquire a lock without sleeping, thus "spinning". Previous kernels could be built with PREEMPT_NONE, which tells the scheduler to let the running process complete its work before interrupting it. The latest kernel removed that option and now preempts spinning (and lock-holding) processes. So if a process holding a lock gets interrupted, all the other Postgres processes that need the same lock spin in place for far longer, leading to performance degradation.

    • anal_reactor 1 day ago

      Why does it only appear on arm64 and not x86?

      • adrian_b 1 day ago

        It was not architecture-related. Not using huge pages also reproduced the regression on x86.

        I do not know why using huge pages mitigates the regression, but it could be just because when the application uses huge pages it uses spinlocks much less frequently so the additional delays do not accumulate enough to cause a significant performance reduction.

        • tux3 1 day ago

          The problem is the spinlock being interrupted by a minor fault (you're touching a page of memory for the first time, and the kernel needs to set it up the first time it's actually used)

          If your pages are 1GB instead of 4kB, this happens much less often.

  • tijsvd 1 day ago

    From what I understand in the follow up: postgres uses shared memory for buffers. This shared memory is read by a new connection while locked.

    In postgres, connections are handled with a process fork, not a new thread. If such a fork reads memory for the first time, even memory that already exists, that causes a minor page fault, which traps into the kernel so it can update the memory mapping tables.

    The operation under lock is only a few instructions, but if it takes longer than expected, then that causes lock contention. Regression in the kernel handling minor faults?

    The whole thing is then made worse because it's a spinlock, causing all waiting processes to contend over the cpus which adds to kernel processing.

    Mitigated by using huge pages, which dramatically reduces the number of mapping entries and faults. I reckon that it could also be mitigated in postgres by pre-faulting all shared memory early?

up2isomorphism 1 day ago

Not sure why people have to upgrade to the newest major kernel version as soon as it is released.

  • conradludgate 1 day ago

    It's the performance team's job to test these things. Doesn't mean they're going to deploy it immediately.

    Someone should be testing these things and reporting regressions

  • jeltz 1 day ago

    If nobody tests and reports these things when the version is released the regression would not be fixed when people start using it in production.

dmitrygr 22 hours ago

And this is exactly why we need the old Linus. Someone needs to yell “we do not break user space“

carlsborg 1 day ago

Perhaps in due time we will see workload specific forks of Linux maintained by a team of agents