Yes, fermions. But that cool laser light show you saw that one time? Bosons.
And the others? Are they with Keanu now?
Obligatory xkcd.
Please, don’t mind me. I’m still agog that Rutherford didn’t even hypothesize the neutron until less than a hundred years ago. So much, so fast!
In 1920, also less than a hundred years ago, the National Academy of Sciences convened a symposium in Washington which, among other things, debated conflicting views about the size of the galaxy and the nature of mysterious, vaguely spiral-shaped nebulous objects that were then being seriously discussed for the first time. There was a wild theory that these things might be other “island universes” – other galaxies – but no one knew for sure, and it was thought that if one of the prevailing views about the huge size of our own galaxy was true, it would pretty much rule out the multiple-galaxy hypothesis. Our whole view of the cosmos was barely in its infancy. So much, so fast, indeed.
I do think that correct, if extremely basic, layman’s explanations are possible for most or all discoveries. That said, if you can’t come up with a correct explanation, please don’t waste our time with an incorrect one. I get a little bit ragey every time I hear someone describe the Higgs field as molasses, with mass due to the field’s viscosity, for instance. No, that’s not just oversimplified; that’s completely, utterly wrong. It doesn’t help anyone understand anything; if anything, it makes understanding even harder, precisely because it’s so wrong. And if there’s one person in a million in your audience who is inspired to go into science and actually starts working in particle physics, somewhere along the way some poor professor or hapless grad student is going to have to knock all the wrong ideas out of that kid’s head before they can even begin to teach the right ones.
Chronos: You know what makes me angry? People teaching the Uncertainty Principle as if it had anything whatsoever to do with the measurement effect.
The Uncertainty Principle has nothing to do with the measurement effect! Bouncing photons off electrons and so on has absolutely fuck-all relevance to the Heisenberg Uncertainty Principle!
For everyone who just got confused, angry, and otherwise ready to throw feces: How do we know this? How can I say something like the above with such certainty? Well, it’s simple, but you need to know some things to understand the simplicity.
First, quantum physics is very closely related to the physics of waves. Electrons are mathematically similar to sound waves in some very important ways, one of which is that the Fourier transform applies to both. Therefore, I’ll explain the basic ideas in terms of sound, and then transition back to electrons and photons and such at the end.
The Fourier transform is an immensely practical tool, so the best way to describe it is to describe the problem it solves. Sound, as you may or may not care, is recorded by measuring how loud it is at a microphone many times a second. (44,100 times a second for CD-quality sound, 8,000 times a second for telephone-quality sound, and so on.) This information about loudness as a function of time is called the “time domain representation” of the sound. If all you care about is playback, that’s all you need. It’s all a CD stores, for example, and it’s essentially what a lossless format like FLAC stores. (Lossy formats like MP3 and Ogg Vorbis actually store frequency-domain information, which is exactly where the Fourier transform comes in.)
However, if you want to make the resulting file as small as possible, you need to take advantage of the fact that humans only hear within a certain range of frequencies, so you can remove all sounds outside that range and the result will still sound good to people. How do I do that? Time-domain information doesn’t seem to tell me anything about how loud the sound is at a given frequency.
Oh, right. Context. The Fourier transform transforms time-domain information into frequency-domain information, so I can apply it to the raw digital input file, remove the extraneous high- and low-frequency stuff, and apply the inverse transform to get the result back in a time-domain form. Easy-peasy, and fairly fast, what with the Fast Fourier Transform existing and what not.
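To make that concrete, here’s a minimal sketch of that transform-edit-inverse-transform trick in Python with NumPy. This is my toy illustration, not how a real codec works (real lossy encoders use psychoacoustic models, not simple bin-zeroing), and the signal is a made-up mix of three sine waves:

```python
import numpy as np

rate = 44100                      # samples per second (CD quality)
t = np.arange(rate) / rate        # one second of timestamps

# A fake "recording": an audible 440 Hz tone, plus a 30 Hz rumble
# and a 19 kHz whine that we want to throw away.
signal = (np.sin(2 * np.pi * 440 * t)
          + 0.5 * np.sin(2 * np.pi * 30 * t)
          + 0.5 * np.sin(2 * np.pi * 19000 * t))

spectrum = np.fft.rfft(signal)                    # time domain -> frequency domain
freqs = np.fft.rfftfreq(len(signal), 1 / rate)    # frequency (Hz) of each bin

spectrum[(freqs < 100) | (freqs > 15000)] = 0     # zero out rumble and whine
filtered = np.fft.irfft(spectrum)                 # back to the time domain
```

After the round trip, `filtered` is just the 440 Hz tone again: the transform let us edit the sound by frequency even though we only ever recorded loudness over time.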
So… if time-domain information looks like all the instants of loudness which go into driving a speaker coil to reproduce a given sound, what does frequency-domain information look like? All of the pure tones which, when added together, give a specific complex sound. For example, a Satanically-tuned piano’s A-above-middle-C is mostly 440 Hz, so in the frequency domain it will look like a nice big peak at 440 Hz, with some smaller peaks, because pianos don’t generate pure tones and they don’t make a noise forever and ever.
But what frequency does a single sharp hit on a drum have? It is a simple pulse of sound which only occurs once; the time-domain graph looks like a single sharp spike with no real repetition. It… well… doesn’t really have one frequency, and its frequency-domain form reflects that. In a simple example, that graph would look like a single peak of maximum height surrounded by peaks of gradually decaying height; ideally, you’d have an infinite number of peaks, reflecting the fact that a single pulse is made up of infinitely many frequencies. The frequency of this sound is not well-defined, and the infinite answer reflects that.
So let’s ask the other question: When did a long tone occur? Well, again, there is no single answer. It happened over a span of time, as opposed to at a single defined instant. The time-domain graph would look like a sine wave, or some other periodic (repetitive) function, repeating itself for a certain length. The time this sound happened is not well-defined. Its frequency-domain graph, OTOH, would look like the graph of the piano note I mentioned above.
So you see a trade-off: the better-defined the time a sound happened is, the worse-defined its frequency is; and the better-defined its frequency is, the worse-defined the time it occurred is. Because of this relationship, time and frequency form a “conjugate pair”, which is a phrase I stole directly from quantum physics, so you know what time it is now.
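You can actually watch this trade-off happen numerically. Here’s a hedged sketch (my own toy RMS measure of “spread”, nothing from the thread) comparing a sustained tone against a one-millisecond click:

```python
import numpy as np

rate = 44100
t = np.arange(rate) / rate

def spectral_spread(sound):
    """RMS width of the frequency-domain energy distribution, in Hz."""
    power = np.abs(np.fft.rfft(sound)) ** 2
    freqs = np.fft.rfftfreq(len(sound), 1 / rate)
    mean = np.sum(freqs * power) / np.sum(power)
    return np.sqrt(np.sum((freqs - mean) ** 2 * power) / np.sum(power))

# A sustained 440 Hz tone: well-defined frequency, ill-defined time.
tone = np.sin(2 * np.pi * 440 * t)

# A single 1 ms click: well-defined time, ill-defined frequency.
click = np.zeros_like(t)
click[:44] = 1.0

print(spectral_spread(tone))   # tiny: all the energy sits in one narrow peak
print(spectral_spread(click))  # huge: the energy is smeared across the band
```

Shorten the click further and its spread grows; lengthen the tone’s window and its spread shrinks. That reciprocal squeeze is the whole content of the conjugate-pair relationship.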
As it turns out, in quantum physics, momentum plays the role of frequency and position plays the role of time when you apply the Fourier transform to a wave-function. (Also, in case you didn’t know, quantum physicists mostly work with momentum rather than velocity.) Therefore, position and momentum are a conjugate pair, and that is where the Uncertainty Principle comes from: a state with a sharply defined position cannot have a sharply defined momentum, and vice versa, for exactly the same reason a drum hit has no single frequency.
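And the quantum version is the same math. A sketch, assuming units where ħ = 1 and a Gaussian wave packet (the minimum-uncertainty case): the momentum-space wave-function is just the Fourier transform of the position-space one, and the product of the two spreads comes out to ħ/2.

```python
import numpy as np

# Position-space Gaussian wave packet, units with hbar = 1.
N = 4096
x = np.linspace(-50, 50, N, endpoint=False)
dx = x[1] - x[0]
sigma = 2.0
psi = np.exp(-x**2 / (4 * sigma**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)      # normalize

def spread(coord, density, step):
    """Standard deviation of a normalized probability density."""
    mean = np.sum(coord * density) * step
    return np.sqrt(np.sum((coord - mean)**2 * density) * step)

dx_spread = spread(x, np.abs(psi)**2, dx)        # position uncertainty

# Momentum-space wave-function = Fourier transform of psi.
phi = np.fft.fftshift(np.fft.fft(psi))
k = np.fft.fftshift(np.fft.fftfreq(N, dx)) * 2 * np.pi
dk = k[1] - k[0]
phi /= np.sqrt(np.sum(np.abs(phi)**2) * dk)      # renormalize

dp_spread = spread(k, np.abs(phi)**2, dk)        # momentum uncertainty

print(dx_spread * dp_spread)   # -> approximately 0.5, i.e. hbar/2
```

Make `sigma` smaller (sharper position) and `dp_spread` grows to compensate; the product never drops below 1/2. No photons were bounced off anything to get that number: it falls straight out of the Fourier transform.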
I remember when I first learned this. It was as if a formalism had reached off the page and slapped me in the face.
The one that annoys me is when chaos theory is presented either as simple randomness, or as if the energy of a butterfly flapping its wings somehow becomes the energy of a hurricane :smack:
But someone should point out that physics isn’t special in this regard. For example, I’m a software engineer, and I take a deep breath before reading any story in the general news media on, say, AI, violent video games, or hacking.
Sites dedicated to programming or gaming generally get things right, even if they give only a broad overview. The analogue in physics would be something like the BBC Science Hour, which is largely accurate even if it doesn’t go into subjects in depth.
No shit, I tried making Rum out of a Higgs Field once - do not recommend.
To be fair, Derleth, the observer-effect formulation at least gives qualitatively correct results for uncertainty, and the observer effect is indeed one of a great many manifestations of the Uncertainty Principle. That at least makes it better than the viscosity-as-mass explanation, which goes all the way back to the Aristotelian notion that heavy things want to be at rest. But yeah, the Fourier transform picture is far more fundamental, and a far greater help to understanding, than the observer effect.
If you replace “eight year old” with “intelligent and eager adult”, then “utter garbage” is way off the mark. This assumes a useful definition of “explain”, of course. For instance, you can’t require that the listener be able to go write a textbook on the subject afterwards, and you can’t constrain the explainer to five hundred words. But someone who has a deep understanding of the material and its connections should be able to move an intelligent and eager adult’s understanding forward, at least a bit.
This seems overly cynical. I rag on bad science writing regularly, but I would rather there be bad writing than no writing. (Obviously good writing trumps both.) This is partly for selfish reasons: if folks aren’t excited by science, then funding will suffer. But more immediately: the balance of harm and good is still toward “good”. Most people won’t read the article anyway, so they don’t matter. Some might read it and change from being entirely ignorant of the issue to having a wrong impression of the issue. Some might read it and seek out additional information. Some might read it and, having been warned by, say, a message board to take things with a grain of salt, come away with an appreciation for the complexity and excitement of the issue. Some might be motivated to study science. And sometimes the article itself might even be good!
Is science writing often terrible? Yes. Does science writing improve folks’ understanding of science, on balance? Yes. Personally, I wouldn’t throw away the second out of disappointment over the first. But I would rally against the first.
I can accept that. It’s not really consistent with other things I’ve studied, but I’m the farthest thing from a physicist - I can’t even understand relativity - so I’ll take your word for it.
With other things, the better I understand it, the easier it is to explain. And there are always better, clearer ways to explain things, if you work at it.
Correction: You don’t yet understand relativity, but you probably can. It’s not nearly as difficult as it’s made out to be. Really, it’s all just about rotations.
Actually, I suspect that what is wrong with a lot of science reporting is simply that the science they are reporting on is, in fact, not very good science: it is mostly either stuff that is going to turn out to be wrong, or it is fairly trivial (just a small piece of a very large puzzle), or both. It is not going to change our view of the universe, but given the nature of news media, both the reporters and the scientists who brief them feel the need to build up every experimental result or theoretical speculation as something Earth-shattering. Science does not really (or hardly ever) advance via big breakthroughs, but puts together a picture of things, piece by piece, relatively slowly, and the significance of particular experimental findings or new theoretical ideas, even when they are particularly important, only emerges over time.
Proper science popularization/journalism ought to base itself on review articles that sum up the current state of an area of research, not on the latest experimental results and theoretical speculations, which almost always have a very narrow focus (and, as I said, mostly turn out to be wrong, trivial, or both).
Correction: Most of it is good science, including the fairly large part that will turn out to be wrong. It’s just not great science.
And a lot of scientific advances come from scientists pursuing long shots. Even the scientist who comes up with the idea thinks it’s probably not going to pan out… but if it does, it’ll be huge. You don’t want to build a career out of these long shots, but most scientists seem to have one or two.
Well, that depends on what you mean by “good”, I suppose. I wasn’t meaning to imply that it shouldn’t be done, or that it is necessarily done badly, just that what you find, even if you are doing science very well, is very often not going to be something worth keeping. You have to pan through a whole lot of mud, not to mention iron pyrite, to find any gold, but if you aren’t prepared to do that, you aren’t going to find any gold at all.
Anyway, whichever way you define “good” in this context, I think my point about science journalism remains sound. It is mostly the wrong stuff, the wrong publications, that get reported on. It should not be at the level of the latest experimental reports and the latest theoretical speculations; it should be at the level of literature reviews, or even, in some cases, textbooks (if the textbook is innovative, rather than just a rehash of other ones).
Yeah, I agree with the larger point. It feels like a lot of science journalism consists of an editor noticing that the paper has some space that needs to be filled, so the science journalist goes over to the local university and asks, “Hey, so what are you guys working on?” And the answer will usually be pretty routine.
That’s the first time I’ve heard somebody say that.