The Quanta article explains it quite nicely. To quote their example of what has happened in the past:
> “A year after Brookhaven’s headline-making measurement, theorists spotted a mistake in the prediction. A formula representing one group of the tens of thousands of quantum fluctuations that muons can engage in contained a rogue minus sign; fixing it in the calculation reduced the difference between theory and experiment to just two sigma. That’s nothing to get excited about.”
Let me make that more meta.
If a theory is unable to predict a particular key value, is it still a theory?
This is not a hypothetical question. The theory being tested here is the Standard Model. The Standard Model is, in principle, entirely symmetric with regard to a whole variety of things in which we don't actually see symmetry; for example, the relative mass of the electron and the proton.
But, you ask, how can it be that those things are different? Well, for the same reason that we find pencils lying on their side rather than perfectly balanced around the point of symmetry on the tip. Namely that the point of perfect symmetry is unstable, and there are fields setting the value of each asymmetry that we actually see. Each field is carried by a particle. Each particle's properties reflect the value of the field. And therefore the theory has a number of free parameters that can only be determined by experiment, not theory.
In fact there are 19 such parameters. https://en.wikipedia.org/wiki/Standard_Model#Theoretical_asp... has a table with the complete list. And for a measurement as precise as this experiment requires, the uncertainty of the values of those parameters is highly relevant to the measurement itself.
It turns out that the simplified paradigmatic “scientific method” is a very bad caricature of what actually happens on the cutting edge when we’re pushing the boundaries of what we understand (not just theory, but also experimental design). Even on the theoretical front, the principles might be well understood, but making predictions requires accurately modeling all the aspects that contribute to the actual experimental measurement (not just the simple principled part). In that sense, the border between theory and experiment is very fuzzy, and the two inevitably end up influencing each other; that is fundamentally unavoidable.
Unfortunately, it would require more effort on my part to articulate this, and all I can spare right now is a drive-by comment. Steven Weinberg has some very insightful thoughts on the topic, both generally and specifically in the context of particle physics, in his book “Dreams of a final theory” (chapter 5).
If you don’t have access to the book, in a pinch, you could peruse some slides that I made for a discussion: https://speakerdeck.com/sivark/walking-through-weinbergs-dre...
(To be more precise, static types are propositions that the type checker tries to prove, but that's not as catchy.)
> But as the Brookhaven team accrued 10 times more data, their measurement of the muon’s g-factor stayed the same while the error bars around the measurement shrank. The discrepancy with theory grew back to three sigma by the time of the experiment’s final report in 2006.
Any change in the theoretical estimates could in principle drastically change the number of sigmas mismatch with experiment in either direction (but as the scientific endeavor is human after all, typically each helps debug the other and the two converge over time).
“Similar” is doing a lot of work there - what constitutes similar basically dictates if error correction has any future proofing benefits or none at all.
Systematic errors can easily remain hidden. The faster-than-light neutrino had 6-sigma confidence, but 4 other labs couldn't reproduce the results. In the end it was attributed to fiber optic timing errors.
So if you don't know you have a systematic error, then you can very easily get great confidence in fundamentally flawed results.
Isn't that exactly what you just did?
There's nothing wrong with showing only small quotes, the problem would be cherry picking them in a way that leads people to draw incorrect conclusions about the whole.
From Gordan Krnjaic at Fermilab:
> if the lattice result [new approach] is mathematically sound then there would have to be some as yet unknown correlated systematic error in many decades worth of experiments that have studied e+e- annihilation to hadrons
> alternatively, it could mean that the theoretical techniques that map the experimental data onto the g-2 prediction could be subtly wrong for currently unknown reasons, but I have not heard of anyone making this argument in the literature
Edit: I’m not entirely sure whether they’re a professor, but here’s the exact quote
> “My feeling is that there’s nothing new under the sun,” says Tommaso Dorigo, an experimental physicist at the University of Padua in Italy, who was also not involved with the new study. “I think that this is still more likely to be a theoretical miscalculation.... But it is certainly the most important thing that we have to look into presently.”
This is a pre-print
This is the link to the Nature publication: https://www.nature.com/articles/s41586-021-03418-1
Why is it more likely for it to be wrong than the calculation that shows the theory deviating from experiment?
However, I’m an experimentalist who has worked close to a lot of this stuff, not an actual theorist, so I’d love to get a theorist’s interpretation as well.
They're also cut-throat competitive, which is very divisive. Grad students and postdocs are forced to sign NDAs to work on the hot stuff. That's insane.
What's worse, from my point of view (as an actual LQCD practitioner) is: they're not very open about the actual details of their computation. It's tricky, because they treat their code as their 'secret sauce'. (Most of the community co-develops at least the base-level libraries; BMW goes it alone.)
OK, so they don't want to share their source code; that's fine. But they ALSO don't want to share any of their gauge configurations (read: monte carlo samples) because they're expensive to produce and can be reused for other calculations. So it'd be frustrating to share your own resource-intensive products and have someone else scoop you with them. I disagree with that, but I get it at least.
My biggest problem, and the one that I do not understand, is their reluctance to share the individual measurements they've made on each Monte Carlo sample. Then, at least, a motivated critic could develop their own statistical analysis (even if they can't redo the whole from-scratch computation).
Because of the structure and workflow of a LQCD calculation it's very difficult to blind. So, the only thing I know to do is to say "here are all the inputs, at the bit-exact level, to our analysis, here are our analysis scripts, here's the result we get; see if you agree."
This is the approach my collaborators and I took when we published a 1% determination of the nucleon axial coupling g_A [Nature 558, 91-94 (2018)]: we put the raw correlation functions as well as scripts on github https://github.com/callat-qcd/project_gA and said "look, here's literally exactly what we do; if you run this you will get the numbers in the paper." It's not great because our analysis code isn't the cleanest thing in the world (we're more interested in results than in nice software engineering). But at least the raw data is right there, we tell you what each data set is, and you're free to analyze it.
BMW does nothing of the sort. They (meaning those with power to dictate how the collaboration operates) seem to not want to adopt principles of nothing-up-my-sleeve really-honestly-truly open science. So their results need to be treated with care. That said, they themselves are extremely rigorous, top-notch scientists. They want you to trust them. Not that you shouldn't. Trust---but verify. That's currently not possible. I bet they're vindicated. But I can't check for myself.
No, it is not. It is the exact reason why their results are not trustworthy.
Publish the code, let it be checked by the peers.
Closed source code has no place in science and most journals now rightly demand open code for the publications.
If you spend hundreds or thousands of man hours optimizing, for example, assembler for a communications-intensive highly-parallel linear solve, it's fair to be reluctant to give it away. If you do, others will get the glory (publications / funding). Some people do [ eg. this solver library for the BlueGenes https://www2.ph.ed.ac.uk/~paboyle/bagel/Bagel.html ]. Most people are happy to let others do the hard work of building low-level libraries. But they COULD decide to write custom software that'd go faster. If their custom software reproduces results that community-standard libraries produce, that's not nothing.
It sounds like the "secret sauce" for this collaboration includes a set of numerical libraries. They would get relatively little funding, few publications ("glory", as you say), and at best be reduced to a citation (if people remember to cite their libraries) if all they did was improve the backbone of lattice QCD with better software.
So instead they keep it internal. It's a bit sad that there's so little glory in writing better numerical libraries, but it's a common problem across the sciences (and in the open source community in general) so I can believe they'd be reluctant to share.
Indeed. There are really only a limited set of (physics) choices when making these libraries. As long as the discretization you pick goes to QCD in the continuum limit, you can make whatever choices you want. Some choices lead to faster convergence, or easier numerics, or better symmetry, or whatever---at that point it's a cost/benefit analysis. But if your discretization ('lattice action') is in the QCD universality class ('has the right continuum limit'), you're guaranteed to get the right answer as long as you can extrapolate to the continuum.
> It's a bit sad that there's so little glory in writing better numerical libraries.
Agreed, but physics departments (by and large) award tenure for doing physics, not for doing computer science. It's hard to get departments to say "yes, your expertise in optimizing GPU code is enough to get you on the tenure track".
> It's a common problem across the sciences. [...] I can believe they'd be reluctant to share.
The larger community does center around common codes. The biggest players are
but there are others, and there are private codes (like BMW's) too.
As part of the SciDAC program and now exascale initiative the DOE does fund a few software-focused national lab jobs. But not many.
I know nothing about this collaboration, but if what you say is true this isn't good science.
As someone in the field let me assure you: everything, of course, is more complicated than you make it out to be. I understand the absolutist position. But in a world of finite and ever-shrinking resources (grants, positions, etc.) it's fair to try to push your advantage. If funding were plentiful, adopting standards of publish-every-line-or-it-doesn't-count would be fair. People would have plenty of time and resources to get that done. As it stands there are basically no incentives to behave that way and being strapped for human resources puts the issue at the bottom of the list compared to actually getting results.
What degree of data sharing is considered normal there? Across experimental physics it varies a lot: astronomers are often required by the funding agencies to make the data public, whereas particle physics experiments have traditionally shared very little (although pressure from funding agencies has started to change this too).
Given the ways you described this collaboration, my questions are:
- As an experimental physicist, when will I be able to believe them? Do we wait around for someone else to cook up a batch of similar secret sauce to confirm the result? Will they release their gauge configurations after some embargo period? Or should we believe them just because they are top-notch? I've seen top-notch groups like this fall before, so it seems quite reasonable if experiments aren't citing them now.
- Should funding agencies be attaching more importance to openness in science? From what you describe (and sorry if I'm misinterpreting you) there is very little incentive to share things that would make their results far more useful. Of course nothing is simple, but I've seen collaborations reverse their stance on open data overnight in response to a bit of pressure from the people writing the pay checks.
It took you folks 20 years to redo the experiment. Independent lattice calculations have already been underway for some time; I would expect (but I won't promise, not working on the topic myself and not having any particular insider information) results on the year-or-two timescale.
> Will they release their gauge configurations after some embargo period?
BMW probably will not do this. In their recent Nature paper they do say that upon request they'll give you a CPU code, BUT what they provide is a nerfed CPU code that produces the same numbers, rather than their performant production code. ... annoying.
> Or should we believe them just because they are top-notch?
Well, maybe? Why do you believe the theory initiative's determination of the vacuum polarization or the hadronic light-by-light? Somehow it's more sensible to back out those things by fitting experimental data than by doing a direct QCD calculation? There are no free parameters in a QCD calculation, but fitting... well, give me a fifth and I can wiggle the elephant's trunk.
> I've seen top-notch groups like this fall before, so it seems quite reasonable if experiments aren't citing them now.
I think it's wrong not to hedge the experimental results and it's wrong not to cite them, but I understand why experimentalists wouldn't take their result as final either.
I'm not current on what models people like to explain this result, but it has been factored in (or ignored if you didn't trust it) in particle physics model building and phenomenology for years. This result makes it much more serious and something I imagine all new physics models (say for dark matter or other collider predictions or tensions in data) will be using.
Whether or not anything interesting is predicted, theoretically, from this remains to be seen. I don't know off hand if it signals anything in particular, as the big ideas, like supersymmetry, are a bit removed from current collider experiments and aren't necessarily tied to g-2 if I remember correctly.
tl;dr - electrons and muons are leptons, but what if they don't interact with photons the same way? (ie the rules of physics aren't universal to all leptons)
When I was younger, I remember reading cyberpunk comics quite a lot. They lay out a vision of the future that is improbable, but in many ways they get stuff right. Imagine aligning this with real-world science. Imagine hearing from a superhero how his powers came to him. Imagine having a scientist's name in the movie credits.
It doesn't need to make everything scientifically accurate, but explaining the fundamentals can engage more people to enter science.
Yesterday I was watching a new movie from Netflix called 'Hacker'. The movie is awful, but it starts by showing how Stuxnet is supposed to work, and that is pretty awesome. This is cool because I know the fundamentals of Stuxnet.
If they break the 4th wall and show something that could happen for real, it could bring more emotions to the movie.
I still remember finding part 1 in the used books store with my dad around the age of 10-11 for like $2. Now I'm in my early 30's and all 3 parts are just a handful of books away from my physics and philosophy books on my book shelf :)
which are pretty great.
We're currently heading into cyberpunk in basically every aspect except for the anarchy. More like totalitarian cyberpunk. It remains to be seen whether tech gives us the means for a semblance of anarchy, but I'm not getting my hopes up.
It seemed biased but still covered the basics well, I thought, not that I'm a good judge.
Spooky quantum effect, there!
I didn't feel the need to click anything.
I think it'd be more accurate to say "interact" instead of "collide" – the electron could still be far away from the charged particle. More generally, bremsstrahlung also occurs when an electron's velocity vector (not necessarily its modulus) changes, i.e. when the electron changes direction, like in a synchrotron.
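For a quantitative handle, here's the classical (non-relativistic) Larmor formula, a standard result I'm adding for context rather than something from the comment above: an accelerated charge radiates power

```latex
P = \frac{q^{2} a^{2}}{6 \pi \varepsilon_{0} c^{3}}
```

which depends only on the magnitude of the acceleration, so changing direction at constant speed (synchrotron-style) radiates just as surely as braking does.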
> In fact the name "bremsstrahlung" means "braking radiation," if memory serves.
That's correct :)
eg: (a+b)^2 = a^2 + b^2 + 2ab
That 2ab is an interference term so a different process can get mixed in (quantum mechanically speaking). And we may not experimentally be able to disentangle it.
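In the quantum-mechanical version, the a and b above are complex amplitudes for two processes leading to the same final state, so the measured rate picks up a cross term (a sketch of the generic textbook form, not anything specific to g-2):

```latex
P = \left| \mathcal{A}_{1} + \mathcal{A}_{2} \right|^{2}
  = \left| \mathcal{A}_{1} \right|^{2} + \left| \mathcal{A}_{2} \right|^{2}
  + 2\,\mathrm{Re}\!\left( \mathcal{A}_{1}^{*} \mathcal{A}_{2} \right)
```

That last term is the interference piece: it can be negative as well as positive, and nothing in the measurement tags it separately from the other two.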
Assuming that there were no experimental errors, you can use the measure of standard deviation to express roughly what % chance a measurement is due to a statistical anomaly vs. a real indication that something is wrong.
To put some numbers to this, a measurement 1 sigma from the prediction would mean that there is roughly a 84% chance that the measurement represented a deviation from the prediction and a 16% chance that it was just a statistical anomaly. Similarly:
> 2 sigma = 97.7%/2.3% chance of deviation/anomaly
> 3 sigma = 99.9%/0.1% chance of deviation/anomaly
> 4.2 sigma = 99.9987%/0.0013% chance of deviation/anomaly
Which is why this is potentially big news since there is a very small chance that the disagreements between measurement and prediction are due to a statistical anomaly, and a higher chance that there are some fundamental physics going on that we don't understand and thus cannot predict.
edit: Again, this assumes both that there were no errors made in the experiment (it inspires confidence that they were able to reproduce this result twice in different settings) and that there were no mistakes made in the prediction itself, which as another commenter mentions elsewhere, is a nontrivial task in and of itself.
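For reference, the percentages quoted above are one-sided Gaussian tail areas. A minimal sketch (my addition; it assumes scipy is available) that reproduces them:

```python
# Convert a significance in sigma to the one-sided Gaussian tail probability,
# i.e. the chance of a fluctuation at least this large if only statistics were at play.
from scipy.stats import norm

for sigma in (1, 2, 3, 4.2, 5):
    p_fluke = norm.sf(sigma)  # survival function = 1 - CDF
    print(f"{sigma:>4} sigma: {100 * (1 - p_fluke):9.5f}% deviation / {100 * p_fluke:.5f}% anomaly")
```

(Keep in mind the caveat in the next comment about what such numbers do and don't mean.)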
No, this is a p-value misinterpretation. Sigma has to do with the probability that, if the null hypothesis were true, the observed data would be generated. It does not reflect the probability that any hypothesis is true given the data.
The null hypothesis is that there are no new particles or physics and the Standard Model predicts the magnetic charge of a muon. A 4.2 sigma result means that given this null hypothesis prediction, the chances that we would have observed the given data is ~0.0013% (chance this was a statistical anomaly). Since this is a vanishingly small chance (assuming no experimental errors), we can reasonably reject the hypothesis that the Standard Model wholly predicts the charge of a muon.
This is worth repeating a lot when explaining sigma (even in a great and comprehensive explanation such as yours): Statistical anomalies are only relevant when the experiment itself is sound.
Imagine you are trying to see whether two brands of cake mix have different density (maybe you want to get a good initial idea whether they could be the same cake mix). You can do this by weighing the same amount (volume) of cake mix repeatedly, and comparing the mean value for weight measurements of either brand. That works well, but it totally breaks down if you consistently use a glass bowl for one brand, and a steel bowl for the other brand. You will get very high units of sigma, but not because of the cake mix.
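A tiny simulation of that failure mode (my own illustration with made-up numbers, assuming numpy): the two "brands" are identical, but one is always weighed in a bowl whose extra weight never gets tared out, and the apparent significance is enormous.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100                  # measurements per brand
true_weight = 500.0      # grams of mix per scoop, identical for both brands
noise = 2.0              # random scale-reading noise (grams)
bowl_offset = 15.0       # untared extra weight of the steel bowl (grams)

brand_a = true_weight + rng.normal(0.0, noise, n)                # glass bowl, tared correctly
brand_b = true_weight + bowl_offset + rng.normal(0.0, noise, n)  # steel bowl, never tared

diff = brand_b.mean() - brand_a.mean()
stderr = np.sqrt(brand_a.var(ddof=1) / n + brand_b.var(ddof=1) / n)
print(f"difference = {diff:.2f} g  ->  {diff / stderr:.0f} sigma, with zero real difference")
```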
- still looking for a better link than the Book… I’ll update this later
Seems like all he was initially doing in the '80s was digging into the 2 out of 10 experiments from Landolt that failed to confirm conservation of mass.
That little episode brought great joy to this experimentalist's heart.
Bear with me.
Roughly 2000 years ago, the number of people who could do arithmetic and writing was < 1% of the population. By 200 years ago it was maybe what 10%?
Now it is 95% of the world population, and 99.9% of 'Western' world.
Let's say that Alexey Petrov is about as highly educated and trained as any human so far. (A physics PhD represents pretty much 25 years of full-time, full-on education.) But most of us stop earlier, say 20 years, and many have less full-on education, perhaps not doing an hour a day of revision or whatever.
But imagine we could build the computing resources, the smaller class sizes, the gamification, whatever, that meant that each child was pushed as far as they could get (maybe some kind of Mastery learning approach ) - not as far as they can get if the teacher is dealing with 30 other unruly kids, but actually as far as their brain will take them.
Will Alexey be that much further ahead when we do this?
Is Alexey as far ahead as any human can be? Or can we go further - how much further? And if every kid leaving university is as well trained as an Astronaut, is capable of calculus and vector multiplication, will that make a difference in the world today?
I'm "smart" relative to the general population, but you could have thrown all the education in the world at me and I'd never have become Alexey Petrov.
I have a hunch that the Alexey Petrovs -- the upper 0.001% or whatever -- of the world do tend to get recognized and/or carve out their own space.
I think the ones who'd benefit from your plan would be... well, folks like me. I mean, I did fine I guess, but surely there are millions as smart as me and smarter than me who fell through the cracks in one way or another.
I suspect fairly quickly we'd run into some interesting limits.
For example, how many particle physicists can the world actually support? There are already more aspiring particle physicists than jobs or academic positions. Throwing more candidates at these positions would raise the bar for acceptance, but it's not like we'd actually get... hordes more practicing particle physicists than we have now. We'd also have to invest in more LHC-style experimental opportunities, more doctorate programs, and so on.
Obviously, you can replace "particle physicist" with other cutting-edge big-brain vocation. How many top-tier semiconductor engineers can the world support? I mean, there are only so many cutting-edge semiconductor fabs, and the availability of top-tier semiconductor engineers is not the limiting factor preventing us from making more.
There are also cultural issues. A lot of people just don't trust the whole "establishment" for science and learning these days. Anti-intellectualism is a thing. You can't throw education at that problem when education itself is seen as the problem.
It will make a huge difference, and no difference at all. It will probably help us solve all of our current problems. And then it will also introduce a whole new brand of problems which will be sources of crises that generation will deal with. What you read on news will change, but the human emotional response to those news will be very similar to today's.
(The comment was posted to https://news.ycombinator.com/item?id=26726981 before we merged the threads.)
That is really a lot. It's less than the official arbitrary threshold of 5 sigmas to proclaim a discovery, but it's a lot.
In the past, experiments with 2 or 3 sigmas were later classified as flukes, but AFAIK no experiment with 4 sigmas has "disappeared" later.
In some domains 7 sigma events come and go - statistics is not something to be used to determine possibility in the absence of theory. If you go shopping you will buy a dress, just because it's a pretty one doesn't mean that it was made for you.
It just shows probabilistic significance. Confirmation by independent research teams helps eliminate calculation and execution errors.
No they didn't; they claimed that 4 sigmas means it will probably turn out to be something other than statistical noise. They made no claims about "it's real" versus "it's a systematic, non-statistical error".
See also https://www.explainxkcd.com/wiki/index.php/2440
At the time it was a very significant result, just like this one.
Turned out someone hadn't plugged a piece of equipment in right and it was very precisely measuring that flaw in the experiment.
You can't look at any 8 sigma result and just state that it must necessarily be true. Your theory may be flawed or you may not understand your experiment and you just have highly precise data as to how you've messed something else up.
Of course, this is still not good enough. But the nice thing about things that are real is they eventually stand up to increasing levels of self-doubt and 3rd party verification... it’s an extraordinary result (because, of course, the Standard Model seems to be sufficient for just about everything else... so any verified deviation is extraordinary), and so funding shouldn’t be a problem.
A decent heuristic: Real effects are those that get bigger the more careful your experiment is (and the more times it is replicated by careful outsiders), not smaller.
To use a car analogy: This is as if you took someone's prize-winning race car, kept the moderately-priceless chassis, installed upgraded components in essentially every other sense (remove the piston engine, install a jet engine, remove the entire cockpit and replace with modern avionics, install entirely new outer shell, replace the tires with new materials that are two-decades newer...), put the car through the most extensive testing program anyone has ever performed on a race car, filled the gas tank with rocket fuel, and took it back to Le Mans.
I believe that the likelihood of a meaningful ring-correlated systematic, while still possible, is quite low in this case. The magnetic-field mapping, shimming, and monitoring campaigns, in particular, should give people confidence that any run-to-run correlated impact of the ring ought to be very small.
So now all that matters is what kind of article you want to write. A sensationalist one to get eyeballs, or a realistic one that is far less exciting. Thus the exact same discovery can be presented via two radically different headlines:
BBC goes with "Muons: 'Strong' evidence found for a new force of nature"
> "Now, physicists say they have found possible signs of a fifth fundamental force of nature"
ScienceDaily says: "The muon's magnetic moment fits just fine"
> "A new estimate of the strength of the sub-atomic particle's magnetic field aligns with the standard model of particle physics."
There you have it, the mainstream media is not credible even when they attempt to write about a physics experiment ...
For those who do not know - PBS Spacetime is a YouTube channel hosted by astrophysics Ph.D. Matt O'Dowd, aimed at casual physics enthusiasts, without oversimplifying the underlying physics too much.
Every episode I hear a dozen barely explained confusing terms with quantum this and higgs-field that.
I feel like they care more about impressing me with how complicated this stuff is than they do about actually teaching me much. Maybe I'm just not the target audience :(
My own impression of SpaceTime is that they are consistent and chronological. I wouldn’t understand that math on my own nor make any inferences, but conceptually everything is pretty clear to me.
5 sigma results have disappeared (even 6-sigma) upon independent testing, so more testing is needed.
TLDR: you can have a mathematics that always gives true answers (but that cannot answer everything). Or you can have a mathematics that can answer every possible question (but some answers are wrong, and you do not know which). Choose.
This drove the mathematicians of the early 20th century to despair; they had hoped to create 'one mathematics to rule them all'. Of course you can have several disjoint mathematics, each one for the problem you like.
Book recommendation: Gödel, Escher, Bach by Douglas Hofstadter.
Of course we'll perceive things as complex when we move outside of that regime.
Knowledge - expands,
Space exploration knowledge - expands,
Sub atomic exploration - expands, (muon and we may even find its sub atomic particles as well)
Space - expands,
Number series - expands,
Fibonacci - expands.
Be warned when something expands, it's a trap.
Science expands external knowledge and shrinks self-knowledge,
Spirituality shrinks external-knowledge and expands self-knowledge.
Be warned when something expands.
Be warned when something shrinks.
where c is not just the speed of light, c is the speed of space expansion as well.
Mass expands to form energy (star)
Energy shrinks to form mass (black hole)
When you're young you get excited each time a new breakthrough is happening. If you manage to grow up, you get tired of the pattern, and the signal to noise ratio starts to look like a good statistical P value.
Mass expands to form energy (science)
Energy shrinks to form mass (spirituality)
I know a bit about how it is reconceptualized as space-time deformation in the context of general relativity, but that's about it.
It just seems like one of those inherently anthropocentric concepts that (potentially) holds us back from exploring something different?
The idea of replacing a 'gravitational force' with spacetime curvature gave us General Relativity; extending this same idea to electromagnetism gives us Kaluza-Klein theory https://en.wikipedia.org/wiki/Kaluza%E2%80%93Klein_theory
The current state of the art is Quantum Field Theory (of which the Standard Model is an example) https://en.wikipedia.org/wiki/Quantum_field_theory
In QFT, "particles" and "forces" are emergent phenomena (waves of excitation in the underlying fields, and the couplings/interactions/symmetries of those fields). QFT tends to be modelled using Lagrangian mechanics too.
The ELI15 version: think about vectors in our normal concept of 3D space first. If I told you a body was always moving at 100 meters per second, and it was 100% in the horizontal direction, you'd say there was 0 meters per second in the vertical direction. Now say something curves this geometry a little bit: the body will still be traveling at 100 meters per second, but now a tiny bit of that speed may appear to manifest in the vertical direction and a tiny bit less in the horizontal direction. Same general story with spacetime, except the math is a lot more complex, leading to some nuance in how things actually change.
The ELI20 version, should you want to understand how to calculate the effects yourself, is probably best left to this 8-part mini series rather than me: https://youtu.be/xodtfM1r9FA . The 8th episode recap actually has a challenge problem to calculate what causes a stationary satellite to fall toward the sun (in an idealized example), which exactly matches your question.
The "spacetime speed vector" is more formally the four-velocity and it's true that the norm of this 3 component space 1 component time vector is strictly tied to c. At the same time the four-velocity doesn't actually mathematically behave like a euclidean vector space vector where you can just add another like vector describing the effects of the warping and call it a day. In reality you have to run it through the metric tensor first (some function for the given instance that describes the geometry of warped spacetime) to get things in a coordinate space that is usable. Once you have that you actually have to run it through the geodesic equation to see what the acceleration will be as using the mapped four-vector alone will only tell you about the current velocity components in your coordinate space not the effect of the spacetime warping on something in them. These kinds of differences are the bits I swept under the rug as "nuance in how things actually change" but the net concept of the four-vector shifting components due to the warping of spacetime as an object moves along its world line is 100% the net result.
Also I can't really take credit for the method of explanation, just some of the simplified wording. I do find this explanation not only infinitely more accurate but actually easier to understand than the damn rubber sheet analogies or even improved/3D space warping analogies as they still leave out the time portion of the spacetime gradient which actually plays a bigger role in these examples.
Apparently you can think of the gravitational force as arising from time gradients. Time flows slower closer to the planet, so if your arm is pointing towards the planet then your arm is advancing slightly slower in a particular way, and this creates a situation where your arm wants to pull away from you; an apparent force.
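To make the "time gradient" statement quantitative in the weak-field limit (a standard result I'm adding, not from the comment): the rate of a local clock relative to a distant one is set by the Newtonian potential Φ, and the gradient of that rate reproduces the familiar acceleration.

```latex
\frac{d\tau}{dt} \approx 1 + \frac{\Phi}{c^{2}},
\qquad
\vec{g} = -\nabla \Phi \approx -c^{2}\, \nabla\!\left( \frac{d\tau}{dt} \right)
```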
Instead, you keep the rubber sheet and the single ball. Instead of placing other objects on the curved rubber, project (using a projector if you want) a straight line (from a flat surface) down onto the rubber. If you trace the projection of the line onto the rubber, you'll notice that it is no longer straight - it curves with the rubber (especially if you subsequently flatten the rubber out). That's a world line. That's the direction of movement that an object would see as its "momentum" - but it wouldn't actually follow the world line, as the world line changes when the object moves.
To build a geodesic (the actual orbit/movement of the object), you need to move along the world line and then build a new one, repeatedly. I haven't completely figured out the instructions to build a geodesic in this analogy, but seeing/imagining the curved world line should be enlightening:
There is no attraction.
For a curvature based model, the delta v being 0 means that the gradients around each body are equal to each other, but that doesn't say anything about what's causing those gradients.
To find this "attraction", you have to calculate the curvature while leaving some sources out
Essentially for any given spacetime we can calculate out geodesics for any freely-falling object; it's just the path these objects follow through spacetime unless otherwise disturbed. Here we're interested in such objects that couple only to gravitation. These "test objects" do not radiate at all, not even when brought into contact with each other, and they don't absorb radiation. They don't attract electromagnetically, or feel electromagnetic attraction, and they don't feel such repulsion either. They also don't feel the weak or strong interactions. So they're always in free-fall -- always in geodesic motion -- because they can't "land" on anything.
We take one further step into fiction and prevent these test objects from generating curvature themselves. You can fill flat spacetime with them, and spacetime will stay flat. This is completely unphysical, but it's a handy property for exploring General Relativity.
If we put such an object into flat spacetime, we can use it to define a set of spacetime-filling extended Cartesian coordinates, where we add time to the Cartesian x, y, and z labels. We set things up so that the object is always at x=0, y=0, z=0, but can be found at t < 0 and t=0 and t > 0. The units are totally arbitrary. You can use SI units of seconds and metres, or seconds and light-seconds, or microseconds and furlongs: for our purposes it doesn't matter.
We can introduce another such object offset a bit, so that it is found initially at t=0,x=200,y=0,z=0. Again, the units are unimportant, it only matters that the second object is not at the same place as the first. This object is set up to always be at y=0,z=0.
In perfectly flat spacetime, these two objects, for t=anything, will be found at x=0 and x=200 respectively, and always at y=0, z=0.
They do not converge, ever, not in the past or in the future. They also do not diverge. The choice of coordinates doesn't matter any more than the choice of units; we could change the picture to keep the second object always at x=200, and the first will move from x=0. Or we can let them both wander back and forth along x, but with constant separation. But let's stick with our first choice of holding the first particle at the spatial origin at all times.
Now, what happens if we give the first object a little bit of stress-energy (you can think of that as mass in this setup)?
The geodesics generated now are not those of flat spacetime, but rather much closer to those of Schwarzschild. We have perturbed flat spacetime with the nonzero mass.
The first object, if we keep it always at x=0,y=0,z=0, now causes the second object to be on a new geodesic, one where x != 200 at different times. Depending on the relationship between the "central mass" at the origin and the distance x=200, the geodesic evolution of x for all t for the second object might look like an elliptical, circular, or hyperbolic trajectory.
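A minimal numerical sketch of that last statement (my own illustration, not the commenter's calculation): in the weak-field, slow-motion limit the Schwarzschild geodesics reduce to Newtonian motion around the central mass, and integrating that already shows the elliptical / circular / escaping behaviour depending on the second object's initial speed. All units and numbers below are arbitrary.

```python
import numpy as np

GM = 1.0        # gravitational parameter of the "first object" (arbitrary units)
r0 = 200.0      # initial separation, echoing the x=200 setup above
v_circ = np.sqrt(GM / r0)   # speed giving a circular orbit at r0

def integrate(v_tangential, steps=100_000, dt=0.1):
    """Leapfrog-integrate a test particle starting at (r0, 0) with velocity (0, v)."""
    pos = np.array([r0, 0.0])
    vel = np.array([0.0, v_tangential])
    for _ in range(steps):
        acc = -GM * pos / np.linalg.norm(pos) ** 3   # Newtonian-limit acceleration
        vel = vel + 0.5 * dt * acc
        pos = pos + dt * vel
        acc = -GM * pos / np.linalg.norm(pos) ** 3
        vel = vel + 0.5 * dt * acc
    return pos

for label, v in [("elliptical", 0.7 * v_circ),
                 ("circular  ", 1.0 * v_circ),
                 ("hyperbolic", 1.5 * v_circ)]:
    final = integrate(v)
    print(f"{label}: distance from origin after integration = {np.linalg.norm(final):.1f}")
```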
If on the other hand we give both objects the same mass, we end up calculating out geodesics that focus. There will be at least one time t > 0 where the test objects will occupy the same point in spacetime, t=?,x=?,y=0,z=0. (This is called a "caustic").
Raychaudhuri showed that caustics are highly generic: to avoid them you need electromagnetic repulsion (which means a global charge imbalance, which is not a feature of our universe); strong gravitational radiation (which is not a feature of our universe except perhaps in the extremely early universe); or a metric expansion of space (which is a feature of our universe, and leads to large volumes in which geodesics diverge, avoiding caustics, and small volumes in which geodesics converge such that caustics are only avoided by non-gravitational interactions).
This is the General Relativistic picture of masses attracting each other: objects follow geodesics unless shoved off them (by e.g. electromagnetic interaction), or until they "land" on something; in most physically plausible spacetimes there are generically intersecting geodesics and most things find themselves on one; and so close approaches, collisions, mergers, and so forth are practically inevitable.
Lastly, consider an https://en.wikipedia.org/wiki/Accelerometer . A calibrated one in free-fall anywhere should always report "0"; dropping the same out of an airplane should show a slight upwards acceleration imparted by collisions with the air, and then a big upwards one upon contact with the surface. These collisions with air molecules and water or ground molecules shove the falling accelerometer off its geodesic. An accelerometer resting on the ground or on the airplane will show an acceleration somewhere around 10 m/s^2 in SI units: it is being pushed off free-fall by interactions.
Two accelerometers freely-falling in flat spacetime will eventually collide with one another thanks to the focusing theorem. Only as they collide will the accelerometers show nonzero.
Finally, you can even experiment with this yourself: install https://phyphox.org/ on a modern smartphone and rest it on the floor, take it with you into an elevator, jump up and down, or throw it a long way (try not to break it, and try to avoid it rotating much while in the air) and you'll see that when in flight it registers a near-zero acceleration, but a substantial acceleration when in your hand as you wind up and throw, and a substantial acceleration when it lands. While in the air your phone is in practically-geodesic-motion.
It's this property of free-fall -- the absence of acceleration, even if one is orbiting or falling straight towards some massive object -- that is at the root of Einstein's gravitation, and which distinguishes it from Newton's gravity. It is formalized into the https://en.wikipedia.org/wiki/Equivalence_principle .
Although your thrown phone and the Earth are interacting gravitationally, neither the phone nor the planet feels a "pull" towards one another during the phone's flight, or during a parachutist's drop. The geodesics generated around the freely-falling Earth and (effectively) freely-falling phone just lead to greater radial motion by the phone.
Definitely not ELI5:
One thing you find in modern physics is that ideas are often named according to some mathematical analogue to classical physics. You start thinking about forces by imagining a ball being kicked, and after boiling away the conceptual baggage you realise it's all about the exchange of energy.
It turns out that energy exchange is one of the most fundamental mechanisms that drives nature so it makes sense that this same mathematics appears in deeper theories. Unlike classical physics the symbols in quantum equations don't represent simple numbers, they're usually quite complicated and subtle actually but remarkably these equations share many properties with their classical counterparts. To be fair this could just be that phenomena that differ completely from classical physics are incomprehensible to us.
So an electron "spin", at least mathematically, is governed by equations that are remarkable similar to classical equations of angular momentum and so on. Force is in the same category and really just means "fundamental interaction".
But the point I was making is just that modern physics has already done away with the concept of Force, as in, things pushing each other from afar. It is quite a bit more complicated (and yet more elegant) than that.
However, other forces, such as the strong nuclear and electroweak interactions, are forces in theories such as the Standard Model.
Grand unification theories often try to turn gravity into a force; this is where mediating particles such as the graviton come into play, but these attempts aren't very successful yet.
It may be that gravity isn't a force at all and is just an emergent phenomenon arising from the geometric properties of spacetime. Or it could be both: basically two distinct phenomena that cause attraction between massive objects, where on larger scales it's primarily dominated by the geometry of spacetime and on quantum scales by a mediated force with its own field and quanta (particles).
More importantly, GR has nothing to say about forces at all.
This is something I struggle with.
I know that physics originated from an experimental framework. We observe phenomena, then we try to come up with explanations for said phenomena, formulate hypothesis, then test them. That is fine.
But this breaks down when the 'fundamental forces' are involved. What _is_ a force? All the explanations I've ever seen (apart from gravity) seem to treat a 'force' as an atomic concept. They will describe what a force 'does', but not what it 'is'. Maybe that's something unknowable, but it bothers me.
F* magnets, how do they work.
This sounds silly but it's exactly the root cause in the current understanding and shoehorning in the word "force" in "force-carrying particles" is a stretch and causes this confusion. It's true that there would be no electromagnetic force without the photons. But photons and their likes are not the only way a "force" arises. For example, the Pauli exclusion principle can be seen as a "force" and it arises without photons with just electrons.
(F=ma being replaced by Schrodinger's equation.)
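To spell that parenthetical out (standard textbook statements, my addition): the time-dependent Schrödinger equation takes over the dynamical role, and Ehrenfest's theorem shows how an F = ma-like relation survives for expectation values.

```latex
i\hbar\, \frac{\partial}{\partial t}\, \psi(\mathbf{x}, t)
  = \left[ -\frac{\hbar^{2}}{2m} \nabla^{2} + V(\mathbf{x}) \right] \psi(\mathbf{x}, t),
\qquad
\frac{d}{dt} \langle \mathbf{p} \rangle = \langle -\nabla V \rangle
```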
A committed scientist should worry about having such feelings, even though it is very human. It represents a possible source of non-independence of tests and of scientific bias.
Don't they measure the difference in the direction of the particle as determined by its virtual cloud? More energy emitted from the cloud = forward, and vice versa.
Loved the explanation and accompanying illustrations.
Be warned when something expands, you can never reach.
It's not the destination, it's the journey :D
But really there's no way to know for sure yet.
As a sibling comment points out, it is difficult to implement the markup that spans the space of all possible theories. Kostelecky's parametric Standard Model Extension offers one avenue to do so.
One could implement such a test as a checklist, too, which might already make a difference.
If you want an undergraduate class in QM, edX has MIT's classes on line:
If you want a textbook, Griffiths's "Introduction to Quantum Mechanics" is the standard answer. It's very much a "shut up and calculate" book; you'll learn how to compute expected values of commutators without much intuition for what they mean.
Update: Others point out Griffiths's "Introduction to Elementary Particles"; read their recommendations, sounds like the way to go.
If you don't want to spend 12 hours a week for 3 months and still not have learned much about the 3 generations, then ... I don't know, maybe QED: The Strange Theory of Light and Matter? I don't know if it has the 3 generations, but it only assumes high school math, yet gets into the quantum version of electricity and magnetism.
And it's probably not a great beginner's text, even though it's really good.
Sudbery, A. (1986): Quantum Mechanics and the Particles of Nature: An Outline for Mathematicians.
I think that every physicist hopes to see something that does not match and then a fantastic work begins.
I did not see anything like this during my studies, PhD and short career and moved to industry. I terribly miss the teaching, though.
I am still 10-12 years away from official retirement, and until then I doubt I'll have the time. Taking into account the seniority of my position, I am quite confident that I could teach afterwards at a good school, something I would do even for free.
Hossenfelder has a lot of... unique takes in the physics world, I don't think she should be used as a general barometer of the field.
Not seeing something when we "expect" to not see anything (from the perspective of certain models) might be more boring, but it's definitely not a "waste" (again speaking purely from a physicist's standpoint).
We know the standard model is incomplete, but where and how are not well known. Not seeing evidence for new physics rules out certain models, and places upper/lower limits on others. It's progress either way.
They keep building bigger machines to fill out the parts that don't have a definition yet.
Anything that specifies what happens at the next band of energy levels is a success, whether it yields new particles, or rules them out at that energy level.
There's some destination in approaching the most energy-dense states, like describing the mechanics that were active during the Big Bang period.
This isn’t the way I would frame it. No one will fund billions on fancy equipment for unexpected results, and no one is flipping a coin expecting something other than heads/tails. The usual course is that there is some theoretical expectation/justification of a result, however we then need to build the experimental capacity to see if it is true.
But yeah, it's the long game.
This is progress. Sometimes science takes two steps back and one step forward. Sometimes that one step is bigger than you realized. And it wasn't backwards, it was projecting into a different spatial dimension. Or something.
The point is, this is probably good news, honestly.
Why? Validating a hypothesis is quite valuable.