Particle mystery: physicists confirm the muon is more magnetic than predicted (sciencemag.org)
560 points by furcyd on April 7, 2021 | 292 comments



The Quanta write-up is a bit more neutral on this announcement. There is a computational result that was not included in the theoretical value the experiment was benchmarked against. Once that work is reviewed, this difference may yet fade back into oblivion.

https://www.quantamagazine.org/muon-g-2-experiment-at-fermil...


To clarify, for those not familiar with this topic, this experiment is making measurements at such exquisite precision that even the calculations for the theoretical prediction are extremely non-trivial and require careful estimation of many many pieces which are then combined. Which is to say that debugging the theoretical prediction is (almost) as hard as debugging the experiment. So I would expect the particle physics community to be extremely circumspect while the details get ironed out.

The Quanta article explains it quite nicely. To quote their example of what has happened in the past:

> ”A year after Brookhaven’s headline-making measurement, theorists spotted a mistake in the prediction. A formula representing one group of the tens of thousands of quantum fluctuations that muons can engage in contained a rogue minus sign; fixing it in the calculation reduced the difference between theory and experiment to just two sigma. That’s nothing to get excited about.”


If the theoretical prediction can't be calculated until the experiment is done that motivates the choices of what and what not to approximate, is it really a prediction?


> If the theoretical prediction can't be calculated until the experiment is done that motivates the choices of what and what not to approximate, is it really a prediction?

Let me make that more meta.

If a theory is unable to predict a particular key value, is it still a theory?

This is not a hypothetical question. The theory being tested here is the Standard Model. The Standard Model in principle is entirely symmetric with regards to a whole variety of things that we don't see symmetry in. For example the relative mass of the electron and the proton.

But, you ask, how can it be that those things are different? Well, for the same reason that we find pencils lying on their side rather than perfectly balanced around the point of symmetry on the tip. Namely that the point of perfect symmetry is unstable, and there are fields setting the value of each asymmetry that we actually see. Each field is carried by a particle. Each particle's properties reflect the value of the field. And therefore the theory has a number of free parameters that can only be determined by experiment, not theory.

In fact there are 19 such parameters. https://en.wikipedia.org/wiki/Standard_Model#Theoretical_asp... has a table with the complete list. And for a measurement as precise as this experiment requires, the uncertainty of the values of those parameters is highly relevant to the measurement itself.


This is unequivocally the best explanation of this I've ever heard, including from university professors. You are very good at this.


That was beautifully explained thank you


That’s a good (and profound) question, not deserving of downvotes.

It turns out that the simplified paradigmatic “scientific method” is a very bad caricature of what actually happens on the cutting edge when we’re pushing the boundaries of what we understand (not just theory, but also experimental design). Even on the theoretical front, the principles might be well-understood, but making predictions requires accurately modeling all the aspects that contribute to the actual experimental measurement (and not just the simple principled part). In that sense, the border between theory and experiment is very fuzzy, and the two inevitably end up influencing each other; that is fundamentally unavoidable.

Unfortunately, it would require more effort on my part to articulate this, and all I can spare right now is a drive-by comment. Steven Weinberg has some very insightful thoughts on the topic, both generally and specifically in the context of particle physics, in his book “Dreams of a final theory” (chapter 5).

If you don’t have access to the book, in a pinch, you could peruse some slides that I made for a discussion: https://speakerdeck.com/sivark/walking-through-weinbergs-dre...


Philosopher Larry Laudan had a tripartite view. He proposed IIRC convergent processes between better (and more complete) measurements, better (and more complete) models and theory, and better instrumentation. Thus, one could also include a fourth term perhaps: improving technology.


Thanks for the pointer. That sounds vaguely like a view I've been toying with. I'll be interested to see if his version of it is more rigorous than mine.


Sometimes it's like unit tests, where you might get the test itself wrong at first, but that still helps you get closer and write better tests.


I have never thought of science as writing unit tests for the universe before, but I really like this analogy.


I'm fond of using this analogy in the other direction: "tests are experiments, types are proofs".

(To be more precise, static types are propositions that the type checker tries to prove, but that's not as catchy.)


That's what the Duhem-Quine thesis in the philosophy of science is about. The thesis is that "it is impossible to test a hypothesis in isolation, because an empirical test of the hypothesis requires one or more auxiliary/background assumptions/hypotheses".


Not exactly. Analytic solutions to simple problems will produce as many predictions as you want from them, and you can test them in a year, two years, or a century from then. These highly approximated calculations, in contrast, will come out one way or the other, depending on how many of which terms you add (this is especially common in quantum chemistry) - and nobody will decide on the "right" way to choose terms until they have an experiment to compare it against. That means that they aren't predicting outcomes, they're rationalizing outcomes.


Of course, that's how two rival paradigms (research programs) 'rationalize' their own testing/outcomes.


it's not good to cherry-pick paragraphs from the whole article.

> But as the Brookhaven team accrued 10 times more data, their measurement of the muon’s g-factor stayed the same while the error bars around the measurement shrank. The discrepancy with theory grew back to three sigma by the time of the experiment’s final report in 2006.


No, the essence of my point is that the number of sigmas is meaningless when you have a systematic error — in either the experiment or the theoretical estimate — all that the sigmas tell you is that the two are mismatched. If a mistake could happen once, a similar mistake could easily happen again, so we need to be extremely wary of taking the sigmas at face value. (Eg: the DAMA experiment reports dark matter detections with over 40 sigma significance, but the community doesn’t take their validity too seriously)

Any change in the theoretical estimates could in principle drastically change the number of sigmas mismatch with experiment in either direction (but as the scientific endeavor is human after all, typically each helps debug the other and the two converge over time).


“A similar mistake could happen again”

“Similar” is doing a lot of work there - what counts as similar basically dictates whether error correction has any future-proofing benefit or none at all.


The systematic errors enter the sigma calculation, don't they?


Are you asking are systematic errors "priced-in"/"automatically represented" or are they hidden inside the sigma calculation?

Systematic errors can easily remain hidden. The faster-than-light neutrino result had 6-sigma confidence[0], but 4 other labs couldn't reproduce it. In the end it was attributed to fiber-optic timing errors.

So if you don't know you have a systematic error, then you can very easily get great confidence in fundamentally flawed results.

[0] https://en.wikipedia.org/wiki/Neutrino#Superluminal_neutrino...


No. As written in another comment, imagine trying to determine whether two brands of cake mix have the same density by weighing them. If you always weigh one of the brands in a glass bowl, but the other one in a steel bowl, you'll get an enormous number of sigmas, but in reality you've only proven that steel is heavier than glass.


It cannot, because here we’re talking about “unknown unknowns”.


> it's not good to cherry-pick paragraphs from the whole article

Isn't that exactly what you just did?

There's nothing wrong with showing only small quotes, the problem would be cherry picking them in a way that leads people to draw incorrect conclusions about the whole.


Which is what I demonstrated the parent poster did.


They were using a quote from the article to support their own point, not stating that it represented the article's overall conclusion.


That new alternative approach is considered substantially less reliable by most experts.

https://mobile.twitter.com/dangaristo/status/137982536595107...

From Gordan Krnjaic at Fermilab:

> if the lattice result [new approach] is mathematically sound then there would have to be some as yet unknown correlated systematic error in many decades worth of experiments that have studied e+e- annihilation to hadrons

> alternatively, it could mean that the theoretical techniques that map the experimental data onto the g-2 prediction could be subtly wrong for currently unknown reasons, but I have not heard of anyone making this argument in the literature

https://mobile.twitter.com/GordanKrnjaic/status/137984412453...


In the Scientific American article also currently linked on the front page a scientist & professor* at an Italian university is quoted as saying something along the lines of “this is probably an error in the theoretical calculation”. Would this be what the professor was referring to?

Edit: I’m not entirely sure whether they’re a professor, but here’s the exact quote

> “My feeling is that there’s nothing new under the sun,” says Tommaso Dorigo, an experimental physicist at the University of Padua in Italy, who was also not involved with the new study. “I think that this is still more likely to be a theoretical miscalculation.... But it is certainly the most important thing that we have to look into presently.”



On the BMW collaboration with the lattice qcd computational estimate -

This is a pre-print https://arxiv.org/abs/2002.12347

This is the link to the Nature publication: https://www.nature.com/articles/s41586-021-03418-1


As someone who has worked in fields that use lattice calculations (on the experimental side), the new calculation is interesting, but I would not say it’s particularly convincing yet. Lattice calculations are VERY difficult, and are not always stable.

I am not questioning whether they did their work well or not, just pointing out that in high energy physics and high energy nuclear physics, many times our experimental results are significantly better constrained and also undergo significantly more testing via reproduction of results by other experiments than our theory counterparts’ work.

Is it possible that all of our previous experiments have had some sort of correlated systematic error in them? Unlikely, but yes. Is it more likely that this lattice calculation may be underestimating its errors? Much more likely. Another interesting option is that one of the theoretical calculations was actually done slightly wrong. My first guess would be the lattice result, since it’s newer, but both procedures are complicated, so it could be either.


I am not sure I follow the logic. The new computation aligns with the experiment.

Why is it more likely for it to be wrong than the calculation that shows the theory deviating from the experiment?


The old calculation relies on older experimental results that have been verified by multiple experiments - so if the older value is wrong, it means either the calculation was done wrong (possible), or the experiments all have had a significant correlated systematic error that has never been caught (also possible). However, I’d say both of those things are relatively unlikely, when compared to the probability of some small error in a new paper that was just released that uses a new method that involves lattice calculations. This is all a balance of probabilities argument, but from my experience in the field, I’d say it’s more likely that any errors in calculation or missed systematics would be in the new paper.

However, I’m an experimentalist who has worked close to a lot of this stuff, not an actual theorist, so I’d love to get a theorist’s interpretation as well.


I'm a lattice QCD practitioner. What I'll say is that the BMW collaboration isn't named that by coincidence---they're a resource-rich, extremely knowledgeable, cutting-edge group that is the envy of many others.

They're also cut-throat competitive, which is very divisive. Grad students and postdocs are forced to sign NDAs to work on the hot stuff. That's insane.

What's worse, from my point of view (as an actual LQCD practitioner) is: they're not very open about the actual details of their computation. It's tricky, because they treat their code as their 'secret sauce'. (Most of the community co-develops at least the base-level libraries; BMW goes it alone.)

OK, so they don't want to share their source code; that's fine. But they ALSO don't want to share any of their gauge configurations (read: monte carlo samples) because they're expensive to produce and can be reused for other calculations. So it'd be frustrating to share your own resource-intensive products and have someone else scoop you with them. I disagree with that, but I get it at least.

My biggest problem, and the one that I do not understand, is their reluctance to share the individual measurements they've made on each Monte Carlo sample. Then, at least, a motivated critic could develop their own statistical analysis (even if they can't develop their whole from-scratch computation).

Because of the structure and workflow of a LQCD calculation it's very difficult to blind. So, the only thing I know to do is to say "here are all the inputs, at the bit-exact level, to our analysis, here are our analysis scripts, here's the result we get; see if you agree."

This is the approach my collaborators and I took when we published a 1% determination of the nucleon axial coupling g_A [Nature 558, 91-94 (2018)]: we put the raw correlation functions as well as scripts on github https://github.com/callat-qcd/project_gA and said "look, here's literally exactly what we do; if you run this you will get the numbers in the paper." It's not great because our analysis code isn't the cleanest thing in the world (we're more interested in results than in nice software engineering). But at least the raw data is right there, we tell you what each data set is, and you're free to analyze it.

BMW does nothing of the sort. They (meaning those with power to dictate how the collaboration operates) seem to not want to adopt principles of nothing-up-my-sleeve really-honestly-truly open science. So their results need to be treated with care. That said, they themselves are extremely rigorous, top-notch scientists. They want you to trust them. Not that you shouldn't. Trust---but verify. That's currently not possible. I bet they're vindicated. But I can't check for myself.


> OK, so they don't want to share their source code; that's fine.

No, it is not. It is the exact reason why their results are not trustworthy.

Publish the code, let it be checked by the peers.

Closed source code has no place in science and most journals now rightly demand open code for the publications.


I appreciate the absolutist position. However, everybody agrees on what code _must do_. If you need to solve a (massive) system of linear equations (as happens often in LQCD) you can then take your alleged solution and plug it in and check. A variety of those sorts of things prevent you from doing anything too wrong. If you screw up gauge invariance, for example, you will get 0. There are agreed-upon small examples. Plus other benchmarks---they computed the hadronic spectrum. They computed the splitting between the proton's mass and the neutron's mass.

If you spend hundreds or thousands of man-hours optimizing, for example, assembler for a communications-intensive, highly parallel linear solve, it's fair to be reluctant to give it away. If you do, others will get the glory (publications / funding). Some people do share [e.g. this solver library for the BlueGenes: https://www2.ph.ed.ac.uk/~paboyle/bagel/Bagel.html]. Most people are happy to let others do the hard work of building low-level libraries. But they COULD decide to write custom software that'd go faster. If their custom software reproduces results that community-standard libraries produce, that's not nothing.
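
To make the "plug it in and check" point concrete, here is a toy sketch of mine (plain NumPy standing in for a proprietary, highly optimized solver; the sizes are purely illustrative): however the solution was produced, anyone can verify it independently from the residual.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((1000, 1000))   # stand-in linear system (real LQCD solves are far larger and sparser)
    b = rng.standard_normal(1000)

    x = np.linalg.solve(A, b)               # pretend this came out of someone's closed-source solver

    # Independent check: plug the alleged solution back in and measure the residual.
    rel_residual = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
    print(f"relative residual: {rel_residual:.2e}")   # ~1e-12 means the solution checks out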


I think this clarifies a misunderstanding I had from your original comment.

It sounds like the "secret sauce" for this collaboration includes a set of numerical libraries. They would get relatively little funding, few publications ("glory", as you say), and at best be reduced to a citation (if people remember to cite their libraries) if all they did was improve the backbone of lattice QCD with better software.

So instead they keep it internal. It's a bit sad that there's so little glory in writing better numerical libraries, but it's a common problem across the sciences (and in the open source community in general) so I can believe they'd be reluctant to share.


> It sounds like the "secret sauce" for this collaboration includes a set of numerical libraries.

Indeed. There are really only a limited set of (physics) choices when making these libraries. As long as the discretization you pick goes to QCD in the continuum limit, you can make whatever choices you want. Some choices lead to faster convergence, or easier numerics, or better symmetry, or whatever---at that point it's a cost/benefit analysis. But if your discretization ('lattice action') is in the QCD universality class ('has the right continuum limit') you're guaranteed to get the right answer as long as you can extrapolate to the continuum.

> It's a bit sad that there's so little glory in writing better numerical libraries.

Agreed, but physics departments (by and large) award tenure for doing physics, not for doing computer science. It's hard to get departments to say "yes, your expertise in optimizing GPU code is enough to get you on the tenure track".

> It's a common problem across the sciences. [...] I can believe they'd be reluctant to share.

The larger community does center around common codes. The biggest players are

USQCD: http://usqcd-software.github.io/
quda: http://lattice.github.io/quda/
grid: https://github.com/paboyle/Grid/

but there are others, and there are private codes (like BMW's) too.

As part of the SciDAC program and now exascale initiative the DOE does fund a few software-focused national lab jobs. But not many.


Saying they treat their code as secret sauce is a pretty damning accusation for scientists. I've seen a few other cases where relatively closed groups of otherwise top-notch scientists claim an interesting discovery [1,2], and then turn out to be wrong. It rarely ends anyone's career and the fiasco tends to fade in a few years, but it leads to a bit of a media circus, for better or worse, and is mostly distracting for the field as a whole.

I know nothing about this collaboration, but if what you say is true this isn't good science.

[1]: https://www.math.columbia.edu/~woit/wordpress/?p=3643

[2]: https://en.wikipedia.org/wiki/DAMA/NaI


I really want to stress: it's excellent science, and that's why they hold their code tightly. You can say 'no, a true scientist publishes everything' but---says you.

As someone in the field let me assure you: everything, of course, is more complicated than you make it out to be. I understand the absolutist position. But in a world of finite and ever-shrinking resources (grants, positions, etc.) it's fair to try to push your advantage. If funding were plentiful, adopting standards of publish-every-line-or-it-doesn't-count would be fair. People would have plenty of time and resources to get that done. As it stands there are basically no incentives to behave that way and being strapped for human resources puts the issue at the bottom of the list compared to actually getting results.


I'm not an absolutist, and don't want to come off as one. I'm just not in lattice QCD :)

What degree of data sharing is considered normal there? Across experimental physics it varies a lot: astronomers are often required by the funding agencies to make the data public, whereas particle physics experiments have traditionally shared very little (although pressure from funding agencies has started to change this too).

Given the ways you described this collaboration, my questions are:

- As an experimental physicist, when will I be able to believe them? Do we wait around for someone else to cook up a batch of similar secret sauce to confirm the result? Will they release their gauge configurations after some embargo period? Or should we believe them just because they are top-notch? I've seen top-notch groups like this fall before, so it seems quite reasonable if experiments aren't citing them now.

- Should funding agencies be attaching more importance to openness in science? From what you describe (and sorry if I'm misinterpreting you) there is very little incentive to share things that would make their results far more useful. Of course nothing is simple, but I've seen collaborations reverse their stance on open data overnight in response to a bit of pressure from the people writing the pay checks.


> Do we wait around for someone else to cook up a batch of similar secret sauce to confirm the result?

It took you folks 20 years to redo the experiment. Independent lattice calculations have already been underway for some time; I would expect (but I won't promise, not working on the topic myself and not having any particular insider information) results on the year-or-two timescale.

> Will they release their gauge configurations after some embargo period?

BMW probably will not do this. In their recent Nature paper they do say that upon request they'll give you a CPU code, BUT what they provide is a nerfed CPU code that produces the same numbers, rather than their performant production code. ... Annoying.

> Or should we believe them just because they are top-notch?

Well, maybe? Why do you believe the theory initiative's determination of the vacuum polarization or the hadronic light-by-light? Somehow it's more sensible to back out those things by fitting experimental data than by doing a direct QCD calculation? There are no free parameters in a QCD calculation, but fitting... well, give me a fifth parameter and I can wiggle the elephant's trunk.

> I've seen top-notch groups like this fall before, so it seems quite reasonable if experiments aren't citing them now.

I think it's wrong not to hedge the experimental results and it's wrong not to cite them, but I understand why experimentalists wouldn't take their result as final either.


As a particle physicist (no longer working in the field, sadly), this is one of the more exciting results in a long time. Muon g-2 has been there, in some form or another, for debate and model building for many years (taken somewhat seriously for 15+?), waiting for better statistics and confirmation. At over 4 sigma this is much more compelling than it has ever been, and the best potential sign of new (non-Standard Model) physics.

I'm not current on which models people like to use to explain this result, but it has been factored in (or ignored, if you didn't trust it) in particle physics model building and phenomenology for years. This result makes it much more serious and something I imagine all new physics models (say, for dark matter or other collider predictions or tensions in data) will be using.

Whether or not anything interesting is predicted, theoretically, from this remains to be seen. I don't know off hand if it signals anything in particular, as the big ideas, like supersymmetry, are a bit removed from current collider experiments and aren't necessarily tied to g-2 if I remember correctly.


re "what models people like to explain": There was some good discussion of lepton universality violation at the end of the announcement talk.

tl;dr - electrons and muons are leptons, but what if they don't interact with photons the same way? (ie the rules of physics aren't universal to all leptons)


There was a nice explanation of the finding in comic format from APS & PhD Comics: https://physics.aps.org/articles/v14/47


Let me say that this is the best thing that I have ever seen in science: people using art to explain extremely complex findings that might change the future a bit. I laughed at 'I don't know you anymore'.

When I was younger, I read cyberpunk comics quite a lot. They present a vision of the future that is improbable, but in many ways they get stuff right. Imagine aligning this with real-world science. Imagine hearing from a superhero how his powers came to him. Imagine having a scientist's name in the movie credits.

It doesn't need to make everything scientifically accurate, but explaining the fundamentals can engage more people to enter science.

Yesterday I was watching a new movie from Netflix called 'Hacker'. The movie is awful, but it starts by showing how Stuxnet supposedly worked, and that is pretty awesome. This is cool because I know the fundamentals of Stuxnet.

If they break the 4th wall and show something that could happen for real, it could bring more emotions to the movie.


I used to read the Cartoon Guide to... books as a kid: https://www.amazon.com/Cartoon-Guide-Physics/dp/0062731009. They were great.


Cartoon History of the Universe is probably the best "nonfiction" comic ever made. (it's not inaccurate but it's kind of psychedelic and retells more than one religious founding text as if it actually happened)


Best part that most people don't realize... There are 3 parts to it... All massive.

I still remember finding part 1 in the used books store with my dad around the age of 10-11 for like $2. Now I'm in my early 30's and all 3 parts are just a handful of books away from my physics and philosophy books on my book shelf :)


I’m a huge fan of the Cartoon History, but I think I’d have to give the prize to Maus for best nonfiction comics. Second runner up would probably be Understanding Comics.


The problem with Understanding Comics is that most comic readers are sensible enough to know that American style comics are bad, so they all read manga instead. Most of the books about that aren’t translated though there is Even a Monkey Can Draw Manga.


Today, No Starch Press has a series of Manga Guide to... books, which are pretty great.

https://nostarch.com/catalog/manga


The Japanese originals have more topics.


> They explain a vision of the future that is improbable

We're currently heading into cyberpunk in basically every aspect except for the anarchy. More like totalitarian cyberpunk. It remains to be seen whether tech gives us the means for a semblance of anarchy, but I'm not getting my hopes up.


Economix, a comic book explanation of basic economics, is the only book on economics I have ever read.

It seemed biased but still covered the basics well, I thought, not that I'm a good judge.


Which cyberpunk comics? Give us some recommendations please. :)


not op but I recommend the Nikopol trilogy by Enki Bilal


The mystery here is why that comic image that is inlined into the page loads so slowly, but if you click on it while it is loading, you get a pop-up which shows the whole darn thing almost instantly, at what looks like the same resolution, even as the in-line one is still loading.

Spooky quantum effect, there!


NoScript lets you peek at a parallel universe in which the image loads pretty much instantly.

I didn't feel the need to click anything.


The creation of new particles, is that bremsstrahlung?? I’m trying to find more info on it.


Bremsstrahlung is not the creation of virtual particles, though it does involve a virtual photon. It is rather the radiation of (real) photons by electrons when they suddenly "decelerate" (i.e. collide with other charged particles). In fact the name "bremsstrahlung" means "braking radiation," if memory serves.


> when they suddenly "decelerate" (i.e. collide with other charged particles)

I think it'd be more accurate to say "interact" instead of "collide" – the electron could still be far away from the charged particle. More generally, bremsstrahlung also occurs when an electron's velocity vector (not necessarily its modulus) changes, i.e. when the electron changes direction, like in a synchrotron.

> In fact the name "bremsstrahlung" means "braking radiation," if memory serves.

That's correct :)


Basically, the change of momentum for the electron sheds some of the energy used to accelerate it.


It is also important to note that, due to experimental constraints and the nature of quantum mechanics, different possible processes interfere with each other.

eg: (a+b)^2 = a^2 + b^2 + 2ab

That 2ab is an interference term so a different process can get mixed in (quantum mechanically speaking). And we may not experimentally be able to disentangle it.
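
(A side note of mine, not the parent's wording: in amplitude language the same algebra reads

    P = |A1 + A2|^2 = |A1|^2 + |A2|^2 + 2 Re(A1* A2)

where A1 and A2 are the complex amplitudes of the two processes; it is exactly that cross term that mixes them in a way no detector can cleanly separate.)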


Also concisely covered in Fermilab's Youtube channel: https://www.youtube.com/watch?v=ZjnK5exNhZ0


why did they move the magnet from Brookhaven to Chicago?


From what I understand, the magnet is extremely specialized and it would cost millions more to manufacture a second one than to ship the existing one. As to why Fermilab: scientists had exhausted the capabilities of the particle accelerator at Brookhaven, and Fermilab already possessed the equipment to generate more intense muon beams.


All are correct! Also making a new magnet would take at least 3-5 more years.


The NYT sort of explained that repeating the experiment at Brookhaven would have cost a lot of money but wouldn't have resulted in an increase in accuracy that was worth that amount of money. Presumably other equipment exists at Fermilab that made the move cost-effective compared to other options.


Oh, so it's a bit like electron screening, but with virtual particles? Fine-structurally neat!


What's the symbol that looks like a b fell over?


Lowercase Sigma


Just to expand a bit, the sigma symbol is a standard symbol used to indicate the standard deviation of a measurement, and standard deviation is roughly a measure of how much variation there is within a data set (and consequently how confident you can be in your measurement). So when they say that the theoretical result is now 4.2 sigma (units of standard deviation) away from the experimental result instead of 2.7 sigma, that is because the new experiment provided more precise data that scientists could use to lower the perceived variance.

Assuming that there were no experimental errors, you can use the measure of standard deviation to express roughly what % chance a measurement is due to a statistical anomaly vs. a real indication that something is wrong.

To put some numbers to this, a measurement 1 sigma from the prediction would mean that there is roughly an 84% chance that the measurement represented a deviation from the prediction and a 16% chance that it was just a statistical anomaly. Similarly:

> 2 sigma = 97.7%/2.3% chance of deviation/anomaly

> 3 sigma = 99.9%/0.1% chance of deviation/anomaly

> 4.2 sigma = 99.9987%/0.0013% chance of deviation/anomaly

Which is why this is potentially big news since there is a very small chance that the disagreements between measurement and prediction are due to a statistical anomaly, and a higher chance that there are some fundamental physics going on that we don't understand and thus cannot predict.

edit: Again, this assumes both that there were no errors made in the experiment (it inspires confidence that they were able to reproduce this result twice in different settings) and that there were no mistakes made in the prediction itself, which, as another commenter mentions elsewhere, is a nontrivial task in and of itself.
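
For the curious, those percentages are just one-sided Gaussian tail areas; here is a quick sketch of mine (assuming SciPy is available) that reproduces them. (The 3-sigma tail is really about 0.13%, which the list above rounds to 0.1%.)

    from scipy.stats import norm

    for sigma in (1, 2, 3, 4.2):
        p_anomaly = norm.sf(sigma)      # one-sided tail: chance of a fluctuation at least this large
        print(f"{sigma} sigma -> {100 * (1 - p_anomaly):.4f}% / {100 * p_anomaly:.4f}%")

    # 1 sigma   -> ~84.13%   / 15.87%
    # 2 sigma   -> ~97.72%   /  2.28%
    # 3 sigma   -> ~99.87%   /  0.13%
    # 4.2 sigma -> ~99.9987% /  0.0013%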


> a measurement 1 sigma from the prediction would mean that there is roughly a 84% chance that the measurement represented a deviation from the prediction and a 16% chance that it was just a statistical anomaly.

No, this is a p-value misinterpretation. Sigma has to do with the probability that, if the null hypothesis were true, the observed data would be generated. It does not reflect the probability that any hypothesis is true given the data.


Hm, I was not being particularly precise with my language because I was trying to make my explanation easily digestible, but please correct me if I'm wrong.

The null hypothesis is that there are no new particles or physics and the Standard Model predicts the magnetic moment of the muon. A 4.2 sigma result means that, given this null-hypothesis prediction, the chance that we would have observed data at least this extreme is ~0.0013% (the chance this was a statistical anomaly). Since this is a vanishingly small chance (assuming no experimental errors), we can reasonably reject the hypothesis that the Standard Model wholly predicts the magnetic moment of the muon.


> Again, this assumes both that there were no errors made in the experiment

This is worth repeating a lot when explaining sigma (even in a great and comprehensive explanation such as yours): Statistical anomalies are only relevant when the experiment itself is sound.

Imagine you are trying to see whether two brands of cake mix have different density (maybe you want to get a good initial idea whether they could be the same cake mix). You can do this by weighing the same amount (volume) of cake mix repeatedly, and comparing the mean value for weight measurements of either brand. That works well, but it totally breaks down if you consistently use a glass bowl for one brand, and a steel bowl for the other brand. You will get very high units of sigma, but not because of the cake mix.


Nitpick: it assumes that there were no systematic errors. If (say) you switch randomly between steel and glass bowls, your results will still be valid, just with a much wider (worse) standard deviation than you could have gotten otherwise (or a much greater number of measurements needed for a given accuracy, due to Shannon/noise-floor issues).


Yes, that's entirely my point, hence why I said "consistently" using one type of bowl for one brand. That's a systematic error, but since this was supposed to be educational, I preferred explaining the error instead of using terminology that basically implies knowledge already.
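
To make the glass-vs-steel point concrete, here is a small simulation sketch of mine (illustrative numbers only): a consistent bowl difference shows up as an absurd "significance", while randomly mixing bowls merely widens the scatter without biasing the comparison.

    import numpy as np

    rng = np.random.default_rng(42)
    n = 10_000
    true_weight = 500.0     # grams of cake mix; both brands truly identical
    noise = 1.0             # random measurement scatter (grams)
    bowl_offset = 5.0       # a steel bowl weighs 5 g more than a glass bowl (systematic)

    def significance(a, b):
        # difference of means in units of its standard error (a rough "sigma")
        se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        return abs(a.mean() - b.mean()) / se

    # Case 1: brand A always weighed in glass, brand B always in steel -> systematic error
    brand_a = true_weight + rng.normal(0, noise, n)
    brand_b = true_weight + bowl_offset + rng.normal(0, noise, n)
    print("consistent bowls:", round(significance(brand_a, brand_b)), "sigma")   # hundreds of sigma

    # Case 2: a bowl chosen at random for every measurement -> unbiased, just noisier
    brand_a = true_weight + rng.choice([0.0, bowl_offset], n) + rng.normal(0, noise, n)
    brand_b = true_weight + rng.choice([0.0, bowl_offset], n) + rng.normal(0, noise, n)
    print("random bowls:", round(significance(brand_a, brand_b), 1), "sigma")    # order 1 sigma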


This sounds like the hypothesized "subtle matter" proposed by Dr. Klaus Volkamer [1]?

- still looking for a better link than the Book… I’ll update this later

[1] https://amzn.to/3mvvsWW


But if muons are inanimate, why would they be affected by this hypothesised “subtle matter” which makes up the soul of living things?


Here is a paper [1] from 1994 where the results of weighing thermodynamically closed reactions are "interpreted to reveal the existence of a heretofore unknown kind of non-bradyonic, cold dark matter with two different forms of interaction with normal matter".

[1] http://klaus-volkamer.de/wp-content/uploads/2014/11/1994-Vol...


Maybe the muons are hitting the angels at a good fraction of the speed of light and the difference is the angel-splat. Maybe FERMI can contract Dr. Klaus to come up with an experiment to measure the angel-goo and true the difference right up. Thanks for the link to an 'authoritative source'. :-)


Absolutely. There's this paper from 1999 [1], "Experimental Evidence of a New Type of Quantized Matter with Quanta as Integer Multiples of the Planck Mass", about how the weight of a closed system with a chemical reaction changes, violating the conservation of mass.

[1] https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.1....


"Weightable soul". Sounds like a con-man, who wants only the most foolish of marks to make his job as easy as possible, and hence begins his script "I am about to hoax you...but I have something very important to tell you" - and those that remain after that are proven suckers and can be taken to any sort of ride.


I totally agree. It'd be great to have a peer review of his papers [1][2] and either confirm something interesting or just shut him up.

It seems like all he was initially doing in the '80s was digging into the 2 out of 10 experiments from Landolt that failed to confirm conservation of mass.

[1] http://klaus-volkamer.de/wp-content/uploads/2014/11/1994-Vol...

[2] https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.1....


lol


Alexey Petrov, quoted in the article, subbed in to teach one day in my quantum mechanics class :) It was the first day we were being introduced to the theory of scattering, and I will never forget his intro. He asked the class, “what is scattering?”, waited a moment, and then threw a whiteboard marker against the wall, and answered his own question: “that’s scattering”. Lots of times, physics classes can be so heavy on math that it’s hard to even remember that you’re trying to describe the real world sometimes, and moments like that were always very memorable to me, because it helped remind me I wasn’t just solving equations for the hell of it :)


My favorite example of this was during a lecture on waveguides, when Michael Schick picked up the section of cylindrical metal pipe he was using to motivate the cylindrical-waveguide problem at hand, looked at the class through the pipe, and said, "clearly, it admits higher-order modes."

That little episode brought great joy to this experimentalist's heart.


I have a theory about how well educated the mass of humans are, could be and should be.

Bear with me.

Roughly 2000 years ago, the number of people who could do arithmetic and writing was < 1% of the population. By 200 years ago it was maybe what 10%?

Now it is 95% of the world population, and 99.9% of the 'Western' world.

Let's say that Alexey Petrov is about as highly educated and trained as any human so far. (A physics PhD represents pretty much 25 years of full-time, full-on education.) But most of us stop earlier, say at 20 years, and many have less full-on education, perhaps not doing an hour a day of revision or whatever.

But imagine we could build the computing resources, the smaller class sizes, the gamification, whatever, that meant that each child was pushed as far as they could get (maybe some kind of mastery learning approach) - not as far as they can get if the teacher is dealing with 30 other unruly kids, but actually as far as their brain will take them.

Will Alexey be that much further ahead when we do this? Is Alexey as far ahead as any human can be? Or can we go further - how much further? And if every kid leaving university is as well trained as an astronaut, is capable of calculus and vector multiplication, will that make a difference in the world today?


You can't really manufacture geniuses, right?

I'm "smart" relative to the general population, but you could have thrown all the education in the world at me and I'd never have become Alexey Petrov.

I have a hunch that the Alexey Petrovs -- the upper 0.001% or whatever -- of the world do tend to get recognized and/or carve out their own space.

I think the ones who'd benefit from your plan would be... well, folks like me. I mean, I did fine I guess, but surely there are millions as smart as me and smarter than me who fell through the cracks in one way or another.

I suspect fairly quickly we'd run into some interesting limits.

For example, how many particle physicists can the world actually support? There are already more aspiring particle physicists than jobs or academic positions. Throwing more candidates at these positions would raise the bar for acceptance, but it's not like we'd actually get... hordes of additional practicing particle physicists than we have now. We'd also have to invest in more LHC-style experimental opportunities, more doctorate programs, and so on.

Obviously, you can replace "particle physicist" with other cutting-edge big-brain vocation. How many top-tier semiconductor engineers can the world support? I mean, there are only so many cutting-edge semiconductor fabs, and the availability of top-tier semiconductor engineers is not the limiting factor preventing us from making more.

There are also cultural issues. A lot of people just don't trust the whole "establishment" for science and learning these days. Anti-intellectualism is a thing. You can't throw education at that problem when education itself is seen as the problem.


> ...will that make a difference in the world today?

It will make a huge difference, and no difference at all. It will probably help us solve all of our current problems. And then it will also introduce a whole new brand of problems which will be sources of crises that generation will deal with. What you read on news will change, but the human emotional response to those news will be very similar to today's.


Most people demonstrate pretty clearly that they don’t have the aptitude for serious physics. A substantial number of people can’t get past freshman classes, and that’s true even for the top few percent of high school students.


That doesn't necessarily mean that the content is the problem. 200 years ago you could probably say the same thing about "basic algebra" instead of "serious physics".


I agree wholeheartedly. We would live in an exceptional world. The obstacle preventing this is greed and exploitation of people who are born into low income situations. Rising out is the exception, not the rule. Affording many years of education is simply not an option for some. I wish it were, but this is another issue.


The evidence is quite clear that going to college doesn’t actually improve life outcomes very much at all. We mistakenly thought it did for a while, but what was actually happening is the people who were going to college were smart and very likely to succeed anyway.


Everyone being as trained as an astronaut would definitely make a difference, if only because they would appreciate the importance of science, technology, innovation... And not believe stupid conspiracy theories about vaccines.


Not all trained astronauts follow scientific consensus about everything.

https://en.wikipedia.org/wiki/Edgar_Mitchell#Post-NASA_caree...


An old professor of mine loved the "Throw something at the blackboard" technique. Great way to get the class potheads to wake up


how many potheads did you have in your quantum mechanics class?


Hmm, probably about a third of my graduate-level QED class and considerably fewer in my undergraduate QM, but you'd be surprised at the crossover between potheads and high-level physics.


I don't see any reason that would cause them to be mutually exclusive.


Is this trying to imply that it would be surprising for a pothead to take a quantum mechanics class? Cause, having hung out with plenty of physicists, that wouldn’t surprise me too much... :P


It was an algorithms class. But I'm 100% certain there was at least one ;)


The joke I have heard is that Physics students are either shut-ins or party animals, either way they're both microdosing something or other...


Personally I had grown out of that habit a semester or two before undergrad QM (though "Modern Physics" and "Experimental Physics" were another story...) but there were still some hangers on. Maybe 1-3 in a class of 20-25? Neither the norm nor unheard of. From that point on the statistics were probably about the same in grad school.


That article is https://www.bbc.com/news/56643677.

(The comment was posted to https://news.ycombinator.com/item?id=26726981 before we merged the threads.)


I have the opposite experience. Physics classes were always the most interactive and practical. But then again, I only ever studied up to undergrad-level physics.


Would have been an even more impressive example with a dusty chalkboard eraser - you'd be able to see the scattering.


that's super cool! i've always been able to connect the work in physics class to some physical system except for when i studied quantum mechanical density matrices. still have no idea what those are about :)


I love that kind of practical example.


Only 4.2 sigmas. ;)

That is really a lot. It's less than the official arbitrary threshold of 5 sigmas to proclaim a discovery, but it's a lot.

In the past, experiments with 2 or 3 sigmas were later classified as flukes, but AFAIK no experiment with 4 sigmas has "disappeared" later.


Oh sweet summer physicist, what do you know of reality? Reality is for the markets, lovely mathy person, when a one-in-a-million chance comes every month, and investment portfolios lie scattered over the floor like corpses on a battlefield. Reality is for when your mortgage and the kid's school fees are riding on it, and quantitative strategies are born and die with the fads of last summer's interns' pet projects.

In some domains 7-sigma events come and go - statistics is not something to be used to determine possibility in the absence of theory. If you go shopping you will buy a dress; just because it's a pretty one doesn't mean that it was made for you.


Neutrinos faster than light had 6 sigma.

It just shows probabilistic significance. Confirmation by independent research teams helps eliminate calculation and execution errors.


As I recall, FTL neutrinos were the result of experimental error, not chance; and so are outside the scope of what sigma screen for.


In scope for the context of this thread, though: your GP claimed that 4 sigmas means “it’ll probably pan out as being real”; your parent provided a 6-sigma counterexample.


> your GP claimed that 4 sigmas means "it'll probably pan out as being real"

No they didn't; they claimed that 4 sigmas means it will probably turn out to be something other than statistical noise. They made no claims about "it's real" versus "it's a systematic, non-statistical error".

See also https://www.explainxkcd.com/wiki/index.php/2440


"It's 99.99% significant, if we assume the 10% case that we haven't fucked up somewhere."


Or the title of this topic as it is right now is misleading. It says they’ve confirmed the stronger magnetic field, i.e. it was either predicted elsewhere or seen elsewhere. The latter would build confidence in the testing apparatus.


That's the point.

At the time it was a very significant result, just like this one.

Turned out someone hadn't plugged a piece of equipment in right and it was very precisely measuring that flaw in the experiment.

You can't look at any 8 sigma result and just state that it must necessarily be true. Your theory may be flawed or you may not understand your experiment and you just have highly precise data as to how you've messed something else up.


It's probably worth saying that even "chance" is still a little misleading, in the sense that the quantification of that chance is still done by the physicists and therefore can be biased.


Isn't the existence of experimental error also something you can model as a probability?


This is the second separate experiment giving a similar value.


That does help a lot!

Of course, this is still not good enough. But the nice thing about things that are real is they eventually stand up to increasing levels of self-doubt and 3rd party verification... it’s an extraordinary result (because, of course, the Standard Model seems to be sufficient for just about everything else... so any verified deviation is extraordinary), and so funding shouldn’t be a problem.

A decent heuristic: Real effects are those that get bigger the more careful your experiment is (and the more times it is replicated by careful outsiders), not smaller.


The use of a secret frequency source not known to the experimenters is also a very good way to deal with potential bias.


"Separate" for slightly small values of separate. It's the same measurement approach, and using many components from the first experiment, so there could be correlated errors. But they made many fundamental improvements to the experiment, so it's great to see that the effect hasn't gone away.


The primary shared component is the ring/yoke. I worked in the same lab as a substantial team of g-2 scientists for the last decade and watched them come to this result. The level of re-characterization of the properties of the entire instrument was extremely extensive. If anything, one should regard the lessons that they have learned along the way as providing extra insight into the properties of the original BNL measurement.

To use a car analogy: This is as if you took someone's prize-winning race car, kept the moderately-priceless chassis, installed upgraded components in essentially every other sense (remove the piston engine, install a jet engine, remove the entire cockpit and replace with modern avionics, install entirely new outer shell, replace the tires with new materials that are two-decades newer...), put the car through the most extensive testing program anyone has ever performed on a race car, filled the gas tank with rocket fuel, and took it back to Le Mans.

I believe that the likelihood of a meaningful ring-correlated systematic, while still possible, is quite low in this case. The magnetic-field mapping, shimming, and monitoring campaigns, in particular, should give people confidence that any run-to-run correlated impact of the ring ought to be very small.


Ideally they have all their fiber optic cables screwed on tight at Fermilab.




There is a nice video explanation from PBS at https://youtu.be/O4Ko7NW2yQo


PBS, man. Just steadily and reliably educating everyone for years now. Good shit.


SpaceTime (that channel) in general is of impeccable quality and production value. Definitely worth subscribing.


Worth the patreon contribution also.


Every time I see news like this, it just reminds me of the Three-Body Problem books and the extremely unique Sophons in them.


Amusingly - fittingly for our times - in the same issue of the exact same journal (Nature), another paper has been published indicating that the prior, much-hyped discrepancy might be due to the theory having been applied inaccurately in the past. When computed with the new method, the experimental and theoretical values align far more closely.

So now all that matters is what kind of article you want to write: a sensationalist one to get eyeballs, or a realistic one that is far less exciting. Thus the exact same discovery can be presented via two radically different headlines:

BBC goes with "Muons: 'Strong' evidence found for a new force of nature" https://www.bbc.com/news/56643677

> "Now, physicists say they have found possible signs of a fifth fundamental force of nature"

ScienceDaily says: "The muon's magnetic moment fits just fine" https://www.sciencedaily.com/releases/2021/04/210407114159.h...

> "A new estimate of the strength of the sub-atomic particle's magnetic field aligns with the standard model of particle physics."

There you have it, the mainstream media is not credible even when they attempt to write about a physics experiment ...


What were the times when journalism was better?


This is an incredibly complicated and abstract subject, yet you have somehow managed to boil it down into a sweeping generalization about the basics of media and reporting. Masterfully done.


I highly recommend the YouTube channel PBS Space Time's coverage of this, it's informative, well organized, and accessible even to someone like me who doesn't have any background in physics.


I can't wait for PBS Spacetime to tell me what to think about this.


They already did, 15 minutes ago: https://www.youtube.com/watch?v=O4Ko7NW2yQo

For those who do not know - PBS Spacetime is a YouTube channel hosted by astrophysicist Matt O'Dowd, aimed at casual physics enthusiasts, without oversimplifying the underlying physics too much.


Fermilab has a channel as well describing it. https://www.youtube.com/watch?v=ZjnK5exNhZ0


Am I the only one who barely understands anything from that show?

Every episode I hear a dozen barely explained confusing terms with quantum this and higgs-field that.

I feel like they care more about impressing me with how complicated this stuff is than they do about actually teaching me much. Maybe I'm just not the target audience :(


Binge-watching it from older to recent videos helps a lot. Physics is complex, and explaining it without reference to prior knowledge is not possible in a single 20-minute video, even for Matt. They could do a complete layman format in every video, but that bullshit is already in abundance on YouTube, and it only solidifies misconceptions and the lack of understanding. You are the target audience, but it is more like a real learning curve than weekly entertainment. You have to begin from the beginning.

My own impression of SpaceTime is that they are consistent and chronological. I wouldn’t understand that math on my own nor make any inferences, but conceptually everything is pretty clear to me.


There are a lot of quantum mechanics episodes from 1-2 years ago that cause my eyes to just glaze over from all of the math and technical terms. However I feel like the newer episodes are much better at explaining things to the casual viewer rather than math nerds.


Honestly, I sometimes have the same problem, but I think it is because there is just so much background that you have to grok before you understand the discussion being had. And as that is not part of my knowledge domain, I haven't spent enough time to pick it up, as it will probably never affect my daily life.


No. I imagine lots of children don’t understand anything said on that show as well.


Nature seems to have this interesting property of always increasing in perceived complexity.


Wouldn't it be amazing if the universe developed more and more characteristics as you look for them? Or even, if it's pushed to create something when you do?

Infinite playground.


Gödel kind of proved that about mathematics.


That sounds wild, do you have a link where I can read more about this, or is wikipedia fine to learn about it?


https://en.m.wikipedia.org/wiki/G%C3%B6del%27s_incompletenes...

TL;DR: you can have a mathematics (any consistent system strong enough to do arithmetic) that never proves a falsehood, but then there are true statements it cannot prove. Or you can have one that settles every possible question, but then some of its answers are wrong and you cannot tell which. Choose.

This drove the mathematicians of the early 20th century, who had hoped to create 'one mathematics to rule them all', to despair. Of course you can have several disjoint mathematics, each one for the problem you like.


Oh, cool!


Axioms are the foundational assumptions from which formal systems of mathematics are built. Some systems of axioms are unable to prove the truth or falsity of some statements within that system. But you can add such statements to your set of axioms to form a new, larger formal system, which in turn has other indeterminate statements, and so on, thus building, in GP’s terms, an infinite playground of mathematics.

Book recommendation: Gödel, Escher, Bach by Douglas Hofstadter.


If there was a single force in the beginning, there might be more forces branching out in the future of the universe, who knows.


We're evolutionarily optimized for understanding slow, macro-scale, somewhat low-energy things.

Of course we'll perceive things as complex when we move outside of that regime.


The less mysterious reformulation is that humans are better at finding less mysterious relationships.


Sometimes I think about this half-baked theory where physical laws don't exist until they are discovered. Once you catch physics with its pants down, it must maintain those constraints or have its bluff called.


sounds a lot like Sheldrake's theory of 'physical habits' - he describes it as things being quite random the first time and becoming more likely to follow the same patterns the more often they're followed.


Just my mind voice,

Knowledge - expands, Space exploration knowledge - expands, Sub atomic exploration - expands, (muon and we may even find its sub atomic particles as well) Space - expands, Number series - expands, Fibonacci - expands.

Be warned when something expands, it's a trap.

Science expands external knowledge and shrinks self-knowledge, Spirituality shrinks external-knowledge and expands self-knowledge.

Be warned when something expands. Be warned when something shrinks.

E=mc^2

where c is not just the speed of light, c is the speed of space expansion as well.

Mass expands to form energy (star)

Energy shrinks to form mass (black hole)


I wonder where the limit is to what our minds can comprehend. It's fascinating that we got this far, since the brain didn't evolve to study physics.


Maybe there isn't anything like "fundamental laws" and they are all emergent patterns, like we are, and in other places in the Universe the "fundamental laws" are completely different. In that case, the hermetics had a point when they talked about infinite divisibility.


The NYTimes[0] article takes a more measured tone and reports a roughly 1-in-40,000 chance of a statistical fluke, which is about 4.2 sigma.

Even 5-sigma (and 6-sigma) results have disappeared upon independent testing, so more testing is needed.

[0] https://www.nytimes.com/2021/04/07/science/particle-physics-...


Physics noob question: is there any physical framework that does away with the concept of "force"?

I know a bit about how it is reconceptualized as space-time deformation in the context of general relativity, but that's about it.

It just seems like one of those inherently anthropocentric concepts that (potentially) holds us back from exploring something different?


Lagrangian mechanics is equivalent to Newtonian mechanics, but doesn't involve force https://en.wikipedia.org/wiki/Lagrangian_mechanics

The idea of replacing a 'gravitational force' with spacetime curvature gave us General Relativity; extending this same idea to electromagnetism gives us Kaluza-Klein theory https://en.wikipedia.org/wiki/Kaluza%E2%80%93Klein_theory

The current state of the art is Quantum Field Theory (of which the Standard Model is an example) https://en.wikipedia.org/wiki/Quantum_field_theory

In QFT, "particles" and "forces" are emergent phenomena (waves of excitation in the underlying fields, and the couplings/interactions/symmetries of those fields). QFT tends to be modelled using Lagrangian mechanics too.


I still need someone to ELI5 to me how space curvature model explains the attraction between two bodies that have a delta-v of 0.


An attempt at a true ELI5 is the bodies exist in what we know as spacetime, not as separate independent concepts of space and time which we perceive from our day to day experience, so we have to know a bit about the difference. Chiefly in spacetime everything always travels the same "speed" (c, the universal speed limit) and it's just a matter of how much of that speed appears as "traveling through space" and how much appears as "traveling through time". When 2 bodies warp spacetime it causes changes in the way each body's spacetime speed is distributed causing them to accelerate towards each other.

The ELI15 version is think about vectors in our normal concept of 3D space first, if I told you a body was always moving at 100 meters per second and it was 100% in the horizontal direction you'd say there was 0 meters per second in the vertical direction. Now say something curves this geometry a little bit, the body will still be traveling at 100 meters per second but now a tiny bit of that speed may appear to manifest in the vertical direction and a tiny bit less appear to manifest in the horizontal direction. Same general story with spacetime except the math is a lot more complex leading to some nuance in how things actually change.

The ELI20 version should you want to understand how to calculate the effects yourself is probably best left to this 8 part mini series rather than me https://youtu.be/xodtfM1r9FA and the 8th episode recap actually has a challenge problem to calculate what causes a stationary satellite to fall to the sun (in an idealized example) that exactly matches your question.
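
For anyone who wants the formulas behind that picture (standard GR, added for reference): the "everything moves at the same spacetime speed" statement is the normalization of the four-velocity, and the "warping redistributes the components" statement is the geodesic equation,

    g_{\mu\nu}\,\frac{dx^\mu}{d\tau}\frac{dx^\nu}{d\tau} = -c^2
    \qquad\text{and}\qquad
    \frac{d^2 x^\mu}{d\tau^2} + \Gamma^\mu_{\alpha\beta}\,\frac{dx^\alpha}{d\tau}\frac{dx^\beta}{d\tau} = 0

The first fixes the magnitude of the four-velocity (in the -+++ signature convention); in the second, the Christoffel symbols Γ, built from derivatives of the metric, are what shuffle that fixed magnitude between the time and space components as an object moves.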


That's the best explanation I've ever heard. I'd like to know if it really is mathematically rigorous. If so, bravo.


It's 1:1 with the relations in the equations up until the analogy of warped Euclidean space changing the vector. At that point the description is functionally very similar, but relativity follows very different (though in some ways analogous) mathematical mechanics for how the vector changes.

The "spacetime speed vector" is more formally the four-velocity and it's true that the norm of this 3 component space 1 component time vector is strictly tied to c. At the same time the four-velocity doesn't actually mathematically behave like a euclidean vector space vector where you can just add another like vector describing the effects of the warping and call it a day. In reality you have to run it through the metric tensor first (some function for the given instance that describes the geometry of warped spacetime) to get things in a coordinate space that is usable. Once you have that you actually have to run it through the geodesic equation to see what the acceleration will be as using the mapped four-vector alone will only tell you about the current velocity components in your coordinate space not the effect of the spacetime warping on something in them. These kinds of differences are the bits I swept under the rug as "nuance in how things actually change" but the net concept of the four-vector shifting components due to the warping of spacetime as an object moves along its world line is 100% the net result.

Also I can't really take credit for the method of explanation, just some of the simplified wording. I do find this explanation not only infinitely more accurate but actually easier to understand than the damn rubber sheet analogies or even improved/3D space warping analogies as they still leave out the time portion of the spacetime gradient which actually plays a bigger role in these examples.


It's spacetime curvature. This is an important distinction, because although you can zero out the spatial component of your 4-vector you can't also zero out the time component.

Apparently you can think of the gravitational force as arising from time gradients [1]. Time flows slower closer to the planet, so if your arm is pointing towards the planet then your arm is advancing slightly slower in a particular way and this creates a situation where your arm wants to pull away from you; an apparent force.

1: https://www.youtube.com/watch?v=UKxQTvqcpSg


A common framework for explaining spacetime gravitation is the rubber sheet with a heavy ball, showing that other objects on the sheet fall towards the ball. This is really flawed because it explains gravity using gravity.

Instead, you keep the rubber sheet and the single ball. Instead of placing other objects on the curved rubber, project (using a projector if you want) a straight line (from a flat surface) down onto the rubber. If you trace the projection of the line onto the rubber, you'll notice that it is no longer straight - it curves with the rubber (especially if you subsequently flatten the rubber out). That's a world line[1]. That's the direction of movement that an object would see as its "momentum" - but it wouldn't actually follow the world line, as the world line changes when the object moves.

To build a geodesic (the actual orbit/movement of the object), you need to move along the world line and then build a new one, repeatedly. I haven't completely figured out the instructions to build a geodesic in this analogy, but seeing/imagining the curved world line should be enlightening:

There is no attraction.

[1]: https://en.wikipedia.org/wiki/World_line#World_lines_in_gene...


Think of your velocity vector as having a time component. The magnitude of this vector is c, so when you are at rest, you're moving full speed through time. When you accelerate, you shift some of this speed into the spatial dimensions. This is also why time passes more slowly for moving objects. Gravity also has this effect because not only is space curved, but space-time is curved. This means what would normally be a straight path through time is partially warped into the spatial dimensions when you encounter such a curvature.


There is no such attraction; in the same way, the question doesn't quite make sense as posed. The delta-v has to do with the net force, what's actually happening, but this "attraction" is described as "what if you took away one of the forces affecting this".

For a curvature based model, the delta v being 0 means that the gradients around each body are equal to each other, but that doesn't say anything about what's causing those gradients.

To find this "attraction", you have to calculate the curvature while leaving some sources out


Imagine a 2D sheet that is weighed down by steel balls. It'll be curved because of the weights. Now put a grain of sand on it and it'll start rolling according to the sheet's curvature. That's the attraction between bodies for you.


This is a good video explaining just that! https://www.youtube.com/watch?v=wrwgIjBUYVc


They don't. You're only thinking in three (spatial) dimensions. Time is more fundamental than you think.


This was done as ELIPhD by Raychaudhuri.

Essentially for any given spacetime we can calculate out geodesics for any freely-falling object; it's just the path these objects follow through spacetime unless otherwise disturbed. Here we're interested in such objects that couple only to gravitation. These "test objects" do not radiate at all, not even when brought into contact with each other, and they don't absorb radiation. They don't attract electromagnetically, or feel electromagnetic attraction, and they don't feel such repulsion either. They also don't feel the weak or strong interactions. So they're always in free-fall -- always in geodesic motion -- because they can't "land" on anything.

We take one further step into fiction and prevent these test objects from generating curvature themselves. You can fill flat spacetime with them, and spacetime will stay flat. This is completely unphysical, but it's a handy property for exploring General Relativity.

If we put such an object into flat spacetime, we can use it to define a set of spacetime-filling extended Cartesian coordinates, where we add time to the Cartesian x, y, and z labels. We set things up so that the object is always at x=0, y=0, z=0, but can be found at t < 0 and t=0 and t > 0. The units are totally arbitrary. You can use SI units of seconds and metres, or seconds and light-seconds, or microseconds and furlongs: for our purposes it doesn't matter.

We can introduce another such object offset a bit, so that it is found initially at t=0,x=200,y=0,z=0. Again, the units are unimportant, it only matters that the second object is not at the same place as the first. This object is set up to always be at y=0,z=0.

In perfectly flat spacetime, these two objects, for t=anything, will be found at x=0 and x=200 respectively, and always at y=0, z=0.

They do not converge, ever, not in the past or in the future. They also do not diverge. The choice of coordinates doesn't matter any more than the choice of units; we could change the picture to keep the second object always at x=200, and the first will move from x=0. Or we can let them both wander back and forth along x, but with constant separation. But let's stick with our first choice of holding the first particle at the spatial origin at all times.

Now, what happens if we give the first object a little bit of stress-energy (you can think of that as mass in this setup)?

The geodesics generated now are not those of flat spacetime, but rather much closer to those of Schwarzschild. We have perturbed flat spacetime with the nonzero mass.

The first object, if we keep it always at x=0,y=0,z=0 now causes the second object to be on a new geodesic that is x != 200 at different times. Depending on the relationship between the "central mass" at the origin and distance x=200, the geodesic evolution of x for all t for the second object might look like an elliptical, circular, or hyperbolic trajectory [1].

If on the other hand we give both objects the same mass, we end up calculating out geodesics that focus. There will be at least one time t > 0 where the test objects will occupy the same point in spacetime, t=?,x=?,y=0,z=0. (This is called a "caustic").

Raychaudhuri showed that caustics are highly generic[2]: to avoid them you need electromagnetic repulsion (which would require a global charge imbalance, not a feature of our universe); strong gravitational radiation (not a feature of our universe except perhaps in the extremely early universe); or a metric expansion of space (which is a feature of our universe, and leads to large volumes in which geodesics diverge, avoiding caustics, and small volumes in which geodesics converge such that caustics are only avoided by non-gravitational interactions).

This is the General Relativistic picture of masses attracting each other: objects follow geodesics unless shoved off them (by e.g. electromagnetic interaction), or until they "land" on something; in most physically plausible spacetimes there are generically intersecting geodesics and most things find themselves on one; and so close approaches, collisions, mergers, and so forth are practically inevitable.

Lastly, consider an https://en.wikipedia.org/wiki/Accelerometer . A calibrated one in free-fall anywhere should always report "0"; dropping the same out of an airplane should show a slight upwards acceleration imparted by collisions with the air, and then a big upwards one upon contact with the surface. These collisions with air molecules and water or ground molecules shove the falling accelerometer off its geodesic. An accelerometer resting on the ground or on the airplane will show an acceleration somewhere around 10 m/s^2 in SI units: it is being pushed off free-fall by interactions.

Two accelerometers freely-falling in flat spacetime will eventually collide with one another thanks to the focusing theorem. Only as they collide will the accelerometers show nonzero.

Finally, you can even experiment with this yourself: install https://phyphox.org/ on a modern smartphone and rest it on the floor, take it with you into an elevator, jump up and down, or throw it a long way (try not to break it, and try to avoid it rotating much while in the air) and you'll see that when in flight it registers a near-zero acceleration, but a substantial acceleration when in your hand as you wind up and throw, and a substantial acceleration when it lands. While in the air your phone is in practically-geodesic-motion.

It's this property of free-fall -- the absence of acceleration, even if one is orbiting or falling straight towards some massive object -- that is at the root of Einstein's gravitation, and which distinguishes it from Newton's gravity. It is formalized into the https://en.wikipedia.org/wiki/Equivalence_principle .

Although your thrown phone and the Earth are interacting gravitationally, neither the phone nor the planet feels a "pull" towards one another during the phone's flight, or during a parachutist's drop. The geodesics generated around the freely-falling Earth and (effectively) freely-falling phone just lead to greater radial motion by the phone.

- --

Definitely not ELI5:

[1] https://en.wikipedia.org/wiki/Hyperbolic_trajectory

[2] https://en.wikipedia.org/wiki/Raychaudhuri_equation#Focusing...
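
For reference, the equation behind the focusing argument above: for a congruence of timelike geodesics with expansion θ, shear σ and vorticity ω, the Raychaudhuri equation reads

    \frac{d\theta}{d\tau} = -\tfrac{1}{3}\theta^2 - \sigma_{\mu\nu}\sigma^{\mu\nu} + \omega_{\mu\nu}\omega^{\mu\nu} - R_{\mu\nu}u^\mu u^\nu

With zero vorticity and matter satisfying the strong energy condition (so R_{\mu\nu}u^\mu u^\nu \ge 0), the right-hand side is at most -\theta^2/3, which drives θ to minus infinity in finite proper time: that divergence is the caustic described above.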


Lagrangian mechanics gets a bit ugly if you want to include friction.


It's a good question.

One thing you find in modern physics is that ideas are often named according to some mathematical analogue to classical physics. You start thinking about forces by imagining a ball being kicked, and after boiling away the conceptual baggage you realise it's all about the exchange of energy.

It turns out that energy exchange is one of the most fundamental mechanisms that drive nature, so it makes sense that this same mathematics appears in deeper theories. Unlike in classical physics, the symbols in quantum equations don't represent simple numbers; they're usually quite complicated and subtle, actually, but remarkably these equations share many properties with their classical counterparts. To be fair, this could just be because phenomena that differ completely from classical physics are incomprehensible to us.

So an electron "spin", at least mathematically, is governed by equations that are remarkable similar to classical equations of angular momentum and so on. Force is in the same category and really just means "fundamental interaction".


Yes, very much so. Forces are not really a thing in the Standard Model. There are symmetry groups attached to spacetime which lead to exchanges of gauge bosons which 'create' forces.


Aren't forces in the Standard Model just fields whose quanta are the gauge bosons (force-carrying particles)?


Well yes. I would say it as: forces correspond to the bosonic fields (except perhaps the Higgs, not sure if that can be regarded as a force), which do not 'take up space' as fermionic fields do.

But the point I was making is just that modern physics has already done away with the concept of Force, as in, things pushing each other from afar. It is quite a bit more complicated (and yet more elegant) than that.


Gravity isn’t a force in general relativity.

However other forces such as the strong nuclear and the electroweak are forces in theories such as the standard model.

Grand Unification theories often try to turn gravity into a force; this is where mediating particles such as the graviton come into play, but these attempts aren't very successful yet.

It may be that gravity isn't a force at all and is just an emergent phenomenon of the geometric properties of spacetime. Or it could be both: two distinct phenomena that cause attraction between massive objects, dominated on larger scales by the geometry of spacetime and on quantum scales by a mediated force with its own field and quanta (particles).


> Gravity isn’t a force in general relativity.

More importantly, GR has nothing to say about forces at all.


> It just seems like one of those inherently anthropocentric concepts that (potentially) holds us back from exploring something different?

This is something I struggle with.

I know that physics originated from an experimental framework. We observe phenomena, then we try to come up with explanations for said phenomena, formulate hypotheses, then test them. That is fine.

But this breaks down when the 'fundamental forces' are involved. What _is_ a force? All the explanations I've ever seen (apart from gravity) seem to treat a 'force' as an atomic concept. They will describe what a force 'does', but not what it 'is'. Maybe that's something unknowable, but it bothers me.

F* magnets, how do they work.


At its essence, in the modern understanding, a force is an emergent phenomenon arising from the fact that a world (a spacetime filled with your particles) where two particles of opposite charge seem to move towards each other is more probable than a world where they don't.

This sounds silly, but it's exactly the root cause in the current understanding, and shoehorning the word "force" into "force-carrying particles" is a stretch and causes this confusion. It's true that there would be no electromagnetic force without photons. But photons and their like are not the only way a "force" arises. For example, the Pauli exclusion principle can be seen as a "force", and it arises with just electrons, without any photons.



Isn't quantum field theory kinda like that in that "forces" are actually just the effects of the fields interacting? (Not a physicist, so...)


Somewhat tangential, but Newton has been made fun of because he suggested the apparently "magical" idea that forces could act at a distance...


I'd have to brush up on my quantum mechanics, but IIRC they don't have the concept of "force" ?

(F=ma being replaced by Schrodinger's equation.)


Why is it an anthropocentric concept? Did you never place anything on a scale, or have a wire rip from a weight hanging on it?


Of course. The point is that interpreting that as a "force" is anthropomorphization ("this physical thing is "pushing"/"pulling" this").


string theory?


Science is a never ending series of incorrect observations, each disqualifying the penultimate while asserting the ultimate is axiomatic.

When you're young you get excited each time a new breakthrough is happening. If you manage to grow up, you get tired of the pattern, and the signal to noise ratio starts to look like a good statistical P value.


That is very true.

Knowledge – expands,

Space exploration knowledge – expands,

Sub atomic exploration – expands, (muon and we may even find its sub atomic particles as well)

Space – expands,

Number series – expands,

Fibonacci – expands.

Science expands external knowledge and shrinks self-knowledge, Spirituality shrinks external-knowledge and expands self-knowledge.

Be warned when something expands. Be warned when something shrinks.

E=mc^2

where c is not just the speed of light, c is the speed of space expansion as well.

Mass expands to form energy (science)

Energy shrinks to form mass (spirituality)


Is this the same thing that this 2016 article is about? Or is it a new finding with a similar conclusion?

https://www.nature.com/news/has-a-hungarian-physics-lab-foun...


It's unrelated


> "The concordance shows the old result was neither a statistical fluke nor the product of some undetected flaw in the experiment, says Chris Polly, a Fermilab physicist and co-spokesperson for the g-2 team. “Because I was a graduate student on the Brookhaven experiment, it was certainly an overwhelming sense of relief for me,” he says."

A committed scientist should worry about having such feelings, even though it is very human. It represents a possible source of non-independence of tests and of scientific bias.


With 19 free parameters in the standard model, can't they fit any experimental result by adjusting a "constant"?


Sure, they can fit any single experimental result that way; they can probably fit any 19 experimental results that way. But in general, if you freely adjust a constant to fit one experiment, it will stop fitting other experiments.
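
As a toy illustration of that point (nothing to do with the actual Standard Model fits, just the general statistics of free parameters): with as many free parameters as data points you can always fit perfectly, but the tuned parameters then generally fail on an independent measurement. A minimal sketch, assuming NumPy:

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.array([0.0, 1.0, 2.0, 3.0])
    y_obs = np.sin(x) + 0.05 * rng.standard_normal(4)   # four noisy "experiments"

    # Four free parameters (a cubic) reproduce the four data points exactly...
    coeffs = np.polyfit(x, y_obs, deg=3)
    print(np.polyval(coeffs, x) - y_obs)                 # residuals ~ 1e-16

    # ...but the same tuned parameters miss a fifth, independent "experiment".
    print(np.polyval(coeffs, 4.0), np.sin(4.0))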


Do we need TDD for particle physics, so CI could run tests on which experiments break when merging a theory?


Yes. This would be extremely helpful for experimentalists who spend a lot of their time pointing out, for example, that one's new theory can't violate the equivalence principle by very much at all. Similarly, it would be helpful for people planning new experiments to know whether or not their proposed experiment will probe new ground (i.e. CERN's anti-hydrogen experiments are of intense value for spectroscopic studies, but existing experiments [1] show that antimatter, at the 10^-8 level or better, obeys the equivalence principle and therefore will reliably fall in every experiment of which CERN is capable.).

As a sibling comment points out, it is difficult to implement the markup that spans the space of all possible theories. Kostelecky's parametric Standard Model Extension offers one avenue to do so.

One could implement such a test as a checklist, too, which might already make a difference.

[1] https://arxiv.org/pdf/1207.2442.pdf
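
To make the checklist idea concrete, an entry could be as simple as a unit test comparing a model's prediction against a published measurement band. A purely illustrative sketch (the numbers are the approximate 2021 muon g-2 world-average values, and the prediction values are hypothetical stand-ins for whatever the model actually computes):

    # Approximate 2021 experimental world average of the muon anomaly a_mu.
    A_MU_EXP = 116592061e-11
    A_MU_EXP_ERR = 41e-11

    def consistent_with_experiment(predicted_a_mu, predicted_err, n_sigma=5.0):
        """Check a model's prediction against the measurement within n_sigma."""
        combined_err = (predicted_err**2 + A_MU_EXP_ERR**2) ** 0.5
        return abs(predicted_a_mu - A_MU_EXP) <= n_sigma * combined_err

    def test_my_new_theory_respects_g_minus_2():
        # Stand-in values (roughly the Standard Model prediction); a real
        # check would call the model's own calculation here.
        assert consistent_with_experiment(116591810e-11, 43e-11)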


Are you volunteering to write the YAML for it? =) Should be pretty much trivial! Exercise left to the reader.


That's done by hand. I guess you could automate it. Maybe we'll see that some time in the next century.


My understanding is that with the Lagrangian approach, the free parameters are not all interacting with each other, because they are part of different terms. This means a change to a free parameter doesn't necessarily break experiments.


The point is that there are now 10s-100s of experiments that have been reported to very good precision (obviously not all to the extra-ordinary precision of this measurement). There are no longer any “free parameters” in the SM, in the sense that each one has been constrained by at least one experiment by now. Also, in complicated processes like this one, multiple parameters could make an effect on the observed value, such as the fermion masses. (Not saying the fermion masses actually affect g-2, it’s been a few years since I’ve done any QED, so my memory is a little cloudy :) )


ah, well it will be interesting to see how the theorists resolve this!


“With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.” - John von Neumann


Genuine question from ignorance. Is this related to this work at CERN? https://www.theguardian.com/science/2021/mar/23/large-hadron...


Yes and no. They are two very different experimental situations: the magnetic moment is measured at rest (well, in an accelerator, but the rest frame is defined by the muon), while the R_K anomaly is in a collision. On the other hand, as a theorist the immediate thing one thinks about is lepton universality, the idea that the only difference between an electron and a muon is its mass, being violated. So there will be a lot of work this year on trying to explain both results at the same time.


Maybe. There are plenty of attempts to explain g-2 and LFUV in B decays in one go.

But really there's no way to know for sure yet.


Why is it called a particle accelerator when it's possible for the particle to go both ways: forwards and backwards?

Don't they measure the difference in the direction of the particle as determined by its virtual cloud? more energy emitted from the cloud = forward and vice-versa.

Loved the explanation and accompanying illustrations.


Very interesting indeed. But if one reads it again, everything depends on the idea of quantum field theory, e.g. "like all charged particles it interacts with its own field to create virtual particles", etc. It is hard to understand for an old guy who learned basic physics a long time ago.


What is everyone's favorite book on quantum mechanics? (I would love to understand more about the 3 generations of matter.)


Quantum Mechanics and the three generations of matter are slightly different. Quantum Mechanics is like Newton's laws at small scales, in that if you know what things are like at time t, and you know all the potentials (forces), it tells you how they evolve. It also tells you what states are physically allowed (e.g. only certain energies for electrons orbiting an atom). You can study QM for years without any real look at the standard model, which is where the three generations come from.

If you want an undergraduate class in QM, edX has MIT's classes on line:

https://learning.edx.org/course/course-v1:MITx+8.04.1x+3T201...

If you want a textbook, Griffiths's "Introduction to Quantum Mechanics" is the standard answer. It's very much a "shut up and calculate" book; you'll learn how to compute expected values of commutators without much intuition for what they mean.

Update: Others point out Griffiths's "Introduction to Elementary Particles"; read their recommendations, it sounds like the way to go.

If you don't want to spend 12 hours a week for 3 months and still not have learned much about the 3 generations, then ... I don't know, maybe QED: The Strange Theory of Light and Matter? I don't know if it covers the 3 generations, but it only assumes high school math, yet gets into the quantum version of electricity and magnetism.


thx


Did you want a QM text or a text on the Standard Model?


QM


A great intro is Sean Carroll's youtube series "The Biggest Ideas in the Universe". https://www.youtube.com/playlist?list=PLrxfgDEc2NxZJcWcrxH3j...


I keep listening to that while falling asleep, but the moments where I'm still awake are quite informative


Sakurai, but it won't help you understand the 3 generations of matter because we don't understand why there are 3 generations at all. If you just want to learn particle physics, you can do worse than just reading the review sections of the PDG (pdg.lbl.gov)

And it's probably not a great beginner's text, even though it's really good.


I would not start Sakurai without at least doing some of an undergrad book first, to get the basic concepts.


Sakurai is very clear, IMO, but requires a better understanding of linear algebra than a typical undergraduate text. But if you know linear algebra well, QM is pretty straightforward...


How much physics do you know? How much math? Griffiths's Introduction to Elementary Particles covers the Standard Model at an undergrad level... and is great. To understand the three generations at a higher level you need a lot of math (you need to know what a Lie algebra is, and Noether's theorem).


I do not use math or physics on a daily basis, but have an MS in Applied Math, and a lot of classes in EE.


You might also check out Perkins's Introduction to High Energy Physics, which also ties in the experimental techniques.


Griffiths is a good book then (as well as his intro to QM).


Mine is Sakurai's "Modern Quantum Mechanics." But it sounds like you're really asking which book would be good for you to learn about quantum mechanics and also the Standard Model of particle physics.


I would not just throw someone into Sakurai starting from scratch.


Ballentine's book is a good introduction to a lot of quantum physics (you will need mathematics), and to really understand particle physics you need even more mathematics.


As a layperson I really enjoyed Brian Greene's Fabric of the Cosmos. It is a great read, and the chapters on quantum mechanics are captivating.


As others have noted, it sounds like what you're really interested in is particle physics. In that case, I'd recommend Griffiths's "Introduction to Elementary Particles", which would be accessible to someone with an undergraduate level knowledge of physics. But you could probably get away with knowing less, depending on your background.


Actually, just for high energy physics you do not really need quantum mechanics; I think Griffiths's 'Introduction to Elementary Particles' was pretty good. You might want to look more into special relativity first.


I seem to recall that Feynman said that we don't understand why there are three generations, and that it's embarrassing that we don't. It means we don't really know what's going on.


This one is just what you need:

Sudbery, A. (1986): Quantum Mechanics and the Particles of Nature: An Outline for Mathematicians.


Cohen-Tannoudji, Sakurai.


Knowledge - expands, Space exploration knowledge - expands, Sub atomic exploration - expands, (muon and we may even find its sub atomic particles as well) Space - expands, Number series - expands, Fibonacci - expands.

Be warned when something expands, you can never reach.

It's not the destination, it's the journey :D


I read the headline wrong and came here to find out how mutton is magnetic at all.


hahaha


It'll be a huge victory for lattice-QCD if the computational result is true.


The muon, discovered in 1937, was the first non-standard matter particle and the second transient particle found. It led to new physics then and continues to suggest there is new physics.


Honestly, I feel sorry for particle physicists... Their entire gig is spending billions on fancy equipment and hoping to observe something unexpected. If they see exactly what they expected to see, all that effort was basically wasted. Also, a lot of "discoveries" turn out to be equipment miscalibration - remember those particles which supposedly moved faster than light a few years back? It always struck me as an odd way to do science.


From a physicist's standpoint, not seeing something unexpected is not a waste at all.


From a physicist's standpoint, always being right is disheartening.

I think that every physicist hopes to see something that does not match, because then the fantastic work begins.

I did not see anything like this during my studies, PhD and short career and moved to industry. I terribly miss the teaching, though.


Is there a way you can continue to teach in some capacity?


This is something I have had in mind for some time. I have a great job, but it takes all my "professional" time; the rest is for my family and hobbies.

I am still 10-12 years away from official retirement and until then I doubt I will have the time. Taking into account the seniority of my position, I am quite confident that I could teach afterwards at a good school, something I would do even for free.


Can you expand on that? I was under the impression that many thought of it as a waste (Sabine Hossenfelder comes to mind, for example).


> Sabine Hossenfelder

Hossenfelder has a lot of... unique takes in the physics world, I don't think she should be used as a general barometer of the field.


Yea, some people are disappointed; some of the more interesting and exciting moments in physics are when we find out we're wrong, but not always. E.g. I will never forget the time and place I heard about the preliminary detection of primordial B-modes by BICEP (which turned out to be dust contamination) -- that was a predicted detection from canonical inflation models, as the Higgs was a standard prediction from the standard model (also a pretty exciting moment).

Not seeing something when we "expect" to not see anything (from the perspective of certain models) might be more boring, but it's definitely not a "waste" (again speaking purely from a physicist's standpoint).

We know the standard model is incomplete, but where and how are not well known. Not seeing evidence for new physics rules out certain models, and places upper/lower limits on others. It's progress either way.


Some do, I'm sure. However, if we see something unexpected and it turns out to be true, that means our ideas of physics are fundamentally wrong. While correcting our understanding is good in the long term, in the meantime a lot of the real world depends on us being right, and until we correct the theory, who knows what will work. I'd hate to find our margin of safety on nuclear bombs was too small and that it is only luck that they haven't all blown up in their silos over the years.


I assume it helps trim off the branches of research that become unviable with the new evidence.


Theorizing a phenomenon and having experimental evidence of a phenomenon are very different things.


The quantum mechanics approach is to get a good idea about what happens for everything under a certain energy level.

They keep building bigger machines to fill out the parts that don't have a definition yet.

Anything that specifies what happens at the next band of energy levels is a success, whether it yields new particles, or rules them out at that energy level.

There's some destination in approaching the most energy-dense states, like describing the mechanics that were active during the Big Bang period.


> Honestly feel sorry for particle physicists... Their entire gig is spending billions on fancy equipment, and hoping that observe something unexpected.

This isn’t the way I would frame it. No one will fund billions on fancy equipment for unexpected results, and no one is flipping a coin expecting something other than heads/tails. The usual course is that there is some theoretical expectation/justification of a result, however we then need to build the experimental capacity to see if it is true.


I think learning to observe anything at such small scales as a routine matter will increase understanding of all kinds of other things we look at. There are folks riding on their coattails, and folks riding on those folks' coattails.

But yeah, it's the long game.


The Structure of Scientific Revolutions by Thomas Kuhn lays all this out pretty clearly. The work of "normal science" is to make predictions based on established models and test them until you find something that breaks, then you have a "paradigm shift" that creates a new model.

https://en.wikipedia.org/wiki/The_Structure_of_Scientific_Re...


Remember, you can't solve the halting problem.

This is progress. Sometimes science takes two steps back and one step forward. Sometimes that one step is bigger than you realized. And it wasn't backwards, it was projecting into a different spacial dimension. Or something.

The point is, this is probably good news, honestly.


Could you explain what you mean by halting problem in this context?


Two steps back, but the new step forward is in a better direction.


> If they see exactly what they expected to see

Why? Validating a hypothesis is quite valuable.


It's actually not at all. Or more precisely, no one treats it as valuable. If you repeatedly fail to reject H0, your career is doomed to mediocrity.


>the strong force and the weak force.

Is there a reason we're leaving "nuclear" off these forces' names now?


I think this would be misleading once you dive deeper into particle physics. The strong interaction is really »the interaction mediated by gluons between color-charged things«.

• Gluons interact with gluons, without the need for quarks.

• Many (almost all) bound quark states are not found in nuclei; only uud (protons) and udd (neutrons) are. But all the mesons (e.g. the pion) and a whole lot of other baryons (xis and sigmas and what have you) exist as well.

To put this into perspective, it feels a bit like calling electromagnetic interaction the »chemical interaction«, because chemistry is explained for the most part by the interaction of electrons. But that would leave out a lot of different ways matter can interact, like Bremsstrahlung, positrons, proton/proton repulsion, and all that.


They aren't tied to the nucleus of the atom in any way. It's just that they were discovered in phenomena involving the atomic nucleus.


I have indeed often seen the names referred to without the term "nuclear".


This might be a new variant of: You can tell how old a national lab is by what they study in the "physics" division.


Weird. This must have changed in the past 10 years or so, since I've been out of college.


This (very important) paper from 1967 calls them "weak interaction" and "strong interaction": https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.19...

Putting the word "nuclear" in the middle seems to just be done in textbooks and classrooms.


It's something you never get used to. As you get older, this will just keep happening. We used to put commas before the last item in a list back in like the stone ages when I was in school. My SAT score looked really lame for a bit of time when those suddenly changed.


I understand the grumpy old person archetype now. I feel like I've been one for a long time, but it's really hitting home over the past decade.


Can anyone explain in layman's terms why this is important?


Electrons and muons are very similar. We can measure the magnetic moment, make some calculations, and calculate a number g. If they were perfectly ideal particles, then g would be exactly 2, so it's interesting to measure g-2.

The real particles have a lot of virtual particles that appear around them and are impossible to detect directly. It's like a cloud of more electrons, positrons, photons, and other particles.

They are impossible to detect directly, but they slightly affect the results of experiments, so when you go to a lab and measure g, you don't get exactly 2.

We have a very good model for all the virtual particles that appear around them, i.e. the electrons, positrons, photons, and other particles. It's called the "Standard Model". (But I don't like the name.)

We can use the "Standard Model" to calculate the correction to g for an electron, and the theoretical calculation agrees with the experiments up to the current precision level.

We[1] can use the "Standard Model" to calculate the correction to g for a muon, and the theoretical calculation does not agree with the experiments!!!

The disagreement is very small, and there is still a small chance that it is a fluke, but people are optimistic and think that if they continue measuring they can become confident enough that it is not a fluke.

[1] Actually not me, this is not my research area, but I know a few persons that can.

---

Back to your question:

> Why is this important?

If the theoretical calculation and the experimental value disagree, it means that the "Standard Model" is wrong. Physicists would be very happy to prove that it is wrong, because then they can study variants of this experiment and try to improve the model. (And be famous, and get a Nobel prize.)

Physicists are very worried because they are afraid that the "Standard Model" is so good that to prove it wrong they would need to build a device as big as the Solar System. (And then they can't be famous, and the Nobel prizes will go to work in other areas.)

If this result is "confirmed", the idea is to add a new particle to the "Standard Model" and get the "Standard Model II". (IIRC it already has a few corrections, so we will probably just keep calling the new version the "Standard Model".)

It's difficult because the new particle must change the predictions for this experiment without changing the predictions for other experiments too much. It may take a few years or decades to find the new theoretical particle that matches the experiments.

If you are pessimistic, the new particle will be useful only to explain a small correction that is only relevant in very accurate experiments in the lab, or inside a big star, or other unusual events.

If you are optimistic, in 100 years every moron on Earth will have in their pocket a device that uses this new particle for something amazing.

Or perhaps something in between. Nobody has any clue about this.
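
To put rough numbers on the above (standard values, quoted from memory, so treat them as approximate): what is actually compared is the anomaly a = (g-2)/2. The leading correction, computed by Schwinger in 1948, is

    a \approx \frac{\alpha}{2\pi} \approx 0.00116

and the 2021 comparison was roughly a_mu(exp) ≈ 0.00116592061 versus a_mu(SM) ≈ 0.00116591810, a difference of about 2.5 x 10^-9. The disagreement only shows up around the ninth decimal place, which is why both the experiment and the theoretical calculation have to be done so carefully.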


If you take the current sum of all human knowledge and calculate something called g, and then subtract two, you get something different from the real value of g-2. Therefore, we have identified something that lies beyond the sum of all human knowledge. That's kind of the whole idea behind being a physicist, so understandably anyone remotely related to this area is pretty excited.

If you are wondering, "why does this one single number matter so much, who cares if we didn't know it before," it is because it hints at a great new theory that could change everything. Nobody knows what theory, but in the past small discrepancies in fundamental measurements have been the seeds of great theories.


From another comment, there's this PBS Space Time video on Youtube.

https://www.youtube.com/watch?v=O4Ko7NW2yQo


3D point clouds and x-rays! More research can be done on low-cost devices. It puts LiDAR to shame but there are also great privacy implications. Muon tomography: https://en.wikipedia.org/wiki/Muon_tomography


Extremely precise measurements of the muon magnetic moment are not going to be useful for those applications.


Can anyone recommend any pop-sci books? I haven't taken a science class since high school, and that is barely remembered. I'm mostly interested in getting philosophically up to date on the state of matter(?), its different types, and how these objects interact.


Recommended up thread, but Feynman's QED: The Strange Theory of Light and Matter [0] is fantastic and very accessible. It's not particularly "up to date" (dating back to 1985), but it's not obsolete.

[0] https://en.wikipedia.org/wiki/QED:_The_Strange_Theory_of_Lig...


My personal favorite:

- Thirty Years that Shook Physics: The Story of Quantum Theory

Other great books:

- The Theory Of Everything

- The Quark and the Jaguar

- Six Easy Pieces


The only update I got since high school was that electrons aren't on concrete orbitals around the nucleus, but that there is a probability distribution saying that they are likely somewhere around the area where the concrete "orbital" concept is usually drawn.

That and quantum shenanigans, but that comes down to "we can't transport information faster than light."


Just mention pilot wave theory and someone on this site might reply with a very detailed explanation of quantum mechanics.

https://en.wikipedia.org/wiki/Pilot_wave_theory


And the gluon is the opposite


I’m getting a “faster than light neutrinos” feeling about this one


Hahah, all this for what, some 10^(-6) or 10^(-5) discrepancy?! What about this age-old 10^120 discrepancy that everyone seems to be just fine about... https://en.wikipedia.org/wiki/Cosmological_constant_problem
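
For anyone wondering where the 10^120 comes from, a crude back-of-envelope (orders of magnitude only): the naive QFT estimate of the vacuum energy density scales like the fourth power of the cutoff, conventionally taken at the Planck scale, while the observed dark-energy density corresponds to an energy scale of a few meV.

    # Very rough, orders of magnitude only.
    planck_energy_eV = 1.2e28       # ~1.2e19 GeV
    dark_energy_scale_eV = 2.3e-3   # ~2.3 meV, the observed dark-energy scale
    print((planck_energy_eV / dark_energy_scale_eV) ** 4)   # ~7e122

The commonly quoted "~120 orders of magnitude" comes from exactly this kind of estimate; the precise exponent depends on where you put the cutoff.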


People aren't "just fine" about dark energy. It is an entire field of study in physics/astronomy. A problem there is that we are quite stuck; some future experiments might tell us something (if it has changed over time for instance), but theoretically there aren't any stand out answers or ones that can see experimental confirmation soon.


Please read up on Dark Energy. It is quite fascinating that people do not make the connection. Dark Energy was invented because the theory does not predict enough energy, meaning observation requires there to be WAAY more energy than the theory predicts!

This vacuum catastrophe is completely different! The theory does predict WAAAAY, I mean WAAAAAAAAYYYYYY more energy than what was found in observations.


Perhaps I should have mentioned I am (was) a theoretical physicist and have worked a little on dark energy. But perhaps you meant the comment to others in general.


Must be background radiation day at HN.


This is not a collider experiment, so it doesn't have that particular failure mode.


It’s simple. The universe is electromagnetic. The Bose-Einstein condensate is the aether in most dense form. Everything evaporates into lower densities by means of rotation via the torus and vortices. Everything is pressure finding equilibrium spread throughout densities in fluid. Easy to reason about. The sun is hollow and incompressible aether inside, which is why it’s cold. The surface is electromagnetic activated by the currents spread throughout the galaxy. Every sun is like a lamp. Every sun is a plasmoid. Outer space is least dense form of the aether. Sound makes matter.

Fun!


Is there legitimate support for this hollow sun theory, or is this a fringe theory like flat earth and so on?


I don't know about hollow suns and stuff, but what he said in the beginning has some reasoning behind it.

Check some of this guy's papers: https://file.scirp.org/Html/8-2180368_91083.htm


The EM universe hypothesis is not much better than the flat Earth one. Similar levels of agreement with observed behavior.


Not much is published that makes sense, unfortunately. The EM universe hypothesis has its flaws because it's not united on first principles connecting the aether and primitive constructs like the torus and hyperboloid. It's dangerous for many easy-to-reason-about considerations. Most people's lives are reputation-based and confined to incentives of pay that prevent such topics being published or discussed. Hard problem.



