>The task before the Muon g-2 Theory Initiative is to solve these dilemmas and update the 2020 data-driven SM prediction. Two new publications are planned. The first will be released in 2025 (to coincide with the new experimental result from Fermilab). This will describe the current status and ongoing body of work, but a full, updated SM prediction will have to wait for the second paper, likely to be published several years later.
A bit unsatisfying: basically, the Muon g-2 Theory Initiative, which gave us the 2020 prediction that turned out to be wildly different from the FNAL measurement, is going to publish an updated version of the prediction after FNAL releases its final result.
it means the Theory Initiative will have a fixed target to aim for when working out their final SM prediction
> it means the Theory Initiative will have a fixed target to aim for when working out their final SM prediction
Sure, and this breaks some level of independence between these two workstreams -- it seems unlikely that the Theory Initiative will publish a number that diverges further from the experimental side.
But on the other hand, we're going to be in a world where there are two theoretical estimates, one based partially on empirical methods and one based on lattice methods, and these are going to diverge. So the obvious next task for the theory group is to (a) explain why these diverge and (b) explain why the lattice method is the more accurate one. Which likely will lead to more work for the experimentalists to explain why the inputs to the empirical methods didn't generalize.
Plenty to still learn here.
But if you observe the observation, does it change both observations? Or just yours? (I'm kidding.)
Fermilab is doing some amazing work.
We have references: https://cerncourier.com/a/new-muon-g-2-result-bolsters-earli...
Essentially all of the theory research (specifically, lattice QCD calculations) since the previous white paper in 2020 has been conducted blinded, and at any rate, the deadline to be included in the theory average has already passed. It would take an act of extraordinary brashness to fudge the numbers now.
Since nobody with expertise is speculating about the reason, here is some speculation from somebody without expertise:
The data-driven approach yields an accurate SM prediction, but the universe and lattice methods agree on a "bad" SM result, due to an undiscovered fact about measure theory and the correct interpretation of path integrals that the implementations of lattice QCD accidentally get right. Since the limit taken by lattice methods has never been proven to equal what we think the right interpretation of path integrals should be, the former could be right when the latter is erroneous.
Former physicist here. Your speculation about path integrals and measure theory isn't quite right.
Lattice QCD methods are definitely proven to be equivalent to the continuum theory when you take the appropriate limits. This isn't some mathematical mystery - Wilson's groundbreaking work in the 70s established this, and it's been rigorously developed since.
The actual g-2 puzzle is much more straightforward: we have competing measurements from different sources:
1. The Fermilab Muon g-2 experiment in Illinois directly measured the muon's anomalous magnetic moment.
2. The "data-driven" prediction used e+e- collision measurements from various experiments (BaBar at SLAC, KLOE in Italy, etc.) to calculate what the Standard Model predicts for g-2. This initially disagreed with Fermilab's measurement.
3. Lattice QCD calculations (like those from the BMW collaboration) produced theoretical predictions that matched Fermilab's experimental result.
4. More recently, new e+e- measurements from the CMD-3 experiment at VEPP-2000 in Russia (2023) are bringing the data-driven approach closer to both the lattice calculations and Fermilab's experimental values.
So it's really just a question of which experimental measurements of e+e- collisions are most accurate/resolve any experimental differences, not some exotic mathematical issue with QFT formulations.
https://bigthink.com/starts-with-a-bang/calculation-solves-m...
This article was a lot clearer for me
I appreciate the source, but a website that advertises Michio Kaku is deeply disappointing.
It doesn't. I know that, without reading the piece, because nothing has broken the standard model.
Enjoyed reading this and thank you for sharing.
Anyone know what's inside those tubes? Thinking of building this with a few younger ones, and I want to understand any risks should those tubes break and something escape.
You're not where you think you are.
> However, these calculations did not take into account the effects of “radiative corrections” – the continuous emission and re-absorption of short-lived “virtual particles” (see box) by the electron or muon – which increases g by about 0.1%.
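That ~0.1% figure is easy to sanity-check: the leading radiative correction is Schwinger's famous α/2π term from 1948, where α is the fine-structure constant. A quick back-of-the-envelope sketch (constants are approximate):

```python
import math

# Fine-structure constant (approximate value)
alpha = 1 / 137.035999

# Schwinger's leading-order QED correction: a = (g - 2) / 2 = alpha / (2*pi)
a_schwinger = alpha / (2 * math.pi)

# g shifts from the Dirac value of 2 to 2 * (1 + a)
g = 2 * (1 + a_schwinger)

print(f"a = {a_schwinger:.6f}")                       # ~0.001161
print(f"relative increase in g: {100 * a_schwinger:.3f}%")  # ~0.116%, i.e. "about 0.1%"
```

The full SM prediction adds many higher-order QED, electroweak, and hadronic terms on top of this; the hadronic ones are where the data-driven vs. lattice disagreement lives.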
Our universe is a subtle mess.
yeah... who designed this thing?
Weren't you guys supposed to save these stories for Muonday Mondays? Weekend's too long of a wait?
I miss Topological Tuesdays or Turing Thursdays
I did some googling. Does this help?
https://azure.microsoft.com/en-us/blog/quantum/2025/02/19/mi...
It's worth continuing. It's a well-put-together summary article. Yeah, the cupcake analogy is contrived -- but it's as good an analogy as any other to make it clear that the topic of the article is the difference between a theoretical prediction and an experimental measurement.
Couldn't read behind the undismissable cookie banner that took up 90% of the screen. One big button that said "accept all and close". I'd rather just close, thank you.
I wonder why I didn't get it, on Chrome with UBO