Thanks, I didn’t remember that one off the top of my head.
And us poor kids would have recently learned about phases of matter, too. By the time we got to lessons on radiocarbon dating, we would have some educational exposure to liquid carbon compounds (soft-drink fizz!), solid & gaseous carbons (mon- & dioxides, global warming), diamonds, graphite, char/coal carbons, carbon-fiber golf clubs, buckyballs. And also all life as we know it. At least I don’t think the nano-phene allotropes were around yet.
Later engineering coursework covering physics, materials, crystals, electrochemistry, and semiconductors would help clear up what’s going on down there, and I probably still have gaping holes. Or perhaps more like thinly covered pitfalls upon which I confidently stride. *beckons* Follow me this way, folks!
I think what I was saying is that I came to the same conclusion as @CookingWithGas below and would have continued misunderstanding but for higher education.
One of the mysteries (or have they figured it out?) is half-life: how long it takes for half the atoms in a sample of a particular isotope to decay. It’s unclear why this happens; it’s purely random and unpredictable which nucleus will self-destruct, yet statistically the decay happens at a regular, predictable rate.
I guess the main reason is that radioactivity is physics, not chemistry. Chemistry is (generally) about how atoms bind: the process and its properties. For this, isotopes are generally irrelevant. Since the number of protons defines the element, regardless of the number of neutrons, that essentially defines the valence, how it reacts to combine with other elements, the energy of the reaction, etc.
The only difference is that the extra neutrons change the weight/mass of the molecules or compounds formed. This tiny difference is what is used to purify one isotope from another. Generally, there is no chemical, reactive difference to separate them.
It’s unclear only in the same sense that anything in the world is unclear, in that no matter how much you explain, you’re always still able to ask “Why?”.
It’s not really that mysterious. Exponential decay follows directly from nuclei having no “memory.” And shouldn’t that be the default? That there isn’t some little mechanism inside the nucleus that records how long it’s been around? It would be more mysterious if there was such a little mechanism.
Aside from that, it’s based on probability. Quantum mechanics is a little mysterious but once you accept that particles can tunnel through an energy barrier, the rest follows pretty directly.
Anything that decays with a constant probability that is independent of its age has a half-life. This is not true of most everyday objects. For example, a car will eventually break down, but a 20 year old car is more likely to break down in the next year than a 1 year old car is, so cars do not have a half-life. If you start with 1000 brand new cars, you’ll find that very few break down in the first year, perhaps a few more will break down in the second year, but as time goes on, each car has a higher and higher probability of breaking down.
It’s not easy to find everyday examples of things that decay with a constant probability independent of their age. As an example, take flipping a coin. Let’s say if it comes up tails, it has “decayed”. Obviously the probability of flipping tails is independent of what has happened to the coin in the past. So take a bag of 1000 coins. Flip them all, and you’ll find that about 500 have “decayed”. Remove the tail coins and one minute later, flip the remaining 500 head coins. About 250 will be tails (decayed). Keep repeating by removing the decayed coins and flipping the remainder once a minute, and about half the coins will decay each flip. The ensemble has a half-life of one minute.
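The coin-flip ensemble is easy to simulate. Here’s a quick Python sketch (the function name and the seed are mine, just for illustration):

```python
import random

def decay_rounds(n_coins=1000, p_decay=0.5, rounds=5, seed=42):
    """Simulate the coin-flip ensemble: each round, every surviving
    coin 'decays' with probability p_decay, independent of its age."""
    rng = random.Random(seed)
    survivors = [n_coins]
    alive = n_coins
    for _ in range(rounds):
        # each surviving coin gets an independent flip
        alive = sum(1 for _ in range(alive) if rng.random() >= p_decay)
        survivors.append(alive)
    return survivors

# Roughly half the coins survive each round, e.g. ~1000, ~500, ~250, ...
print(decay_rounds())
```

Each round plays the role of one minute; the population halves (statistically) per round no matter how many rounds have already passed.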
Stability is straightforward to know based on the mass of the initial nucleus and the masses of any viable set of decay products, and these masses are in turn easy to determine very precisely by mass spectrometry, at least for things in the “long-lived” category. (“Viable” here means a candidate set of products that respects all the relevant quantum number requirements.) In other words, it’s known what is stable or not. Estimating the actual lifetimes themselves is a much trickier endeavor that does require the messy intranuclear details.
This applies to things that can be practically obtained and can thus have their masses determined.
I’m not sure that has ever happened in the modern era. There are isotopes that are known to have very long lifetimes, and there is every once in a while a new experimental triumph in observing one of the rare decay modes. But in none of these was the experimental result, I would expect, an observational surprise.
The mass defect can’t be positive. If it was, that would mean the assembled atom would have more total mass-energy than its constituent particles, which would mean that it could (and therefore would) fly apart spontaneously.
What’s happening is that the quoted mass of your typical nuclide is the mass of a neutral atom of that isotope. The mass of a proton and an electron combined is 1.007825 Da (in a neutral atom the numbers of protons and electrons have to be the same, so we can group them together); and the mass of a neutron is 1.008665 Da. That means for your typical neutral atom with mass number A, the mass of the constituent particles will be about 0.8% greater than A daltons. But because of the mass defect, the total mass of the bound atom is less than the total mass of its constituents; and for many nuclides, the mass defect is big enough that the total mass of the bound atom is less than A daltons.
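For a concrete check, here’s the arithmetic for carbon-12 (6 protons, 6 neutrons, 6 electrons) in a few lines of Python (the mass values are the standard tabulated ones, rounded):

```python
M_H = 1.007825  # Da, mass of a neutral hydrogen atom (proton + electron)
M_N = 1.008665  # Da, mass of a free neutron

def constituent_mass(Z, N):
    """Total mass of the unbound constituents of a neutral atom
    with Z protons (+ Z electrons) and N neutrons."""
    return Z * M_H + N * M_N

parts = constituent_mass(6, 6)   # mass of the pieces, unbound
defect = parts - 12.0            # carbon-12 is exactly 12 Da by definition
print(f"constituents: {parts:.5f} Da, "
      f"mass defect: {defect:.5f} Da ({100 * defect / 12.0:.2f}% of A)")
```

The unbound pieces come to about 12.099 Da, so the binding energy has “carried off” roughly 0.8% of the mass, which matches the figure above.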
The example that I know of is Bismuth-209, which had its half-life estimated to be about 4.6 × 10^19 years but was not actually observed to decay until 2003, when its half-life was found to be closer to 2.0 × 10^19 years.
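To see why it took until 2003: a half-life that long means even a mole of bismuth produces only a handful of decays per day. A rough Python estimate (constants rounded; one mole is about 209 g of Bi-209):

```python
import math

T_HALF_YR = 1.9e19    # measured Bi-209 half-life, years (2003 result)
SEC_PER_YR = 3.156e7  # seconds in a year
N_A = 6.022e23        # atoms in one mole

# Decay constant lambda = ln(2) / t_half, in 1/s
lam = math.log(2) / (T_HALF_YR * SEC_PER_YR)
activity = lam * N_A  # decays per second in one mole
print(f"{activity:.1e} decays/s, about {activity * 86400:.0f} decays/day per mole")
```

So roughly 60 decays per day in a fist-sized sample, buried in background radiation, which is why it took a very sensitive low-background detector to spot.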
My default assumption would be that if a big atom emits a particle, that would have some relation to its external environment, either as direct cause or catalyst affecting something in the atom’s internal structure. Instead, radioactive decay is something like a Poisson process, which aggregates to a half-life, if I understand things correctly. Mysterious.
More ignorant questions; I have zillions:
Wiki seems to think radioactivity is related to the weak force, so why are we talking about the strong force? Heck, how do these 4 forces work?
Gravity - operates on mass of particle. Right?
Electromagnetism - So… positive attracts negative? Then why don’t hydrogen electrons just make their merry way over to the proton? What stops them?
Which forces decline with the square of the distance?
Such catalyzed reactions can and do occur, but it’s not the main thing people mean with “radioactive decay”. (Electron capture is the one exception where an external particle is needed yet it’s semantically grouped into radioactive decay. This is natural since electrons are generally around a nucleus. There are more obscure cases that come up in niche settings, though, that don’t get inclusion in a list of radioactive decay “types”.)
If you have an isolated single object that can decay (like a radioactive nucleus), and you watch it closely for a fixed interval of time, there is some specific probability that it will decay during that time interval. If it does not decay, then you can restart your stopwatch and watch it for another same-sized time interval. The key fact here is that the object has the same decay probability in the new time interval as it had in the original one. In other words, how likely this thing is to decay per time interval does not depend on the past. Given that a particle or nucleus doesn’t have a memory, it makes sense that the behavior is independent of the past. And the fact that the outcome in any time interval is probabilistic stems from the intrinsically probabilistic aspects of quantum mechanics.
This underlying probability picture is all that is needed to get, through just math: the Poisson process; that a large collection of objects behaving this way leads to various (negative) exponential behaviors; that a half-life can be defined; etc.
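In formula terms, a constant decay probability per unit time (call it λ) gives a survival fraction e^(−λt), with half-life ln 2 / λ. A short sketch using carbon-14’s conventional 5730-year half-life:

```python
import math

def fraction_remaining(t_years, t_half=5730.0):
    """Survival fraction for memoryless (exponential) decay.
    5730 years is the conventional carbon-14 half-life."""
    lam = math.log(2) / t_half     # decay constant, per year
    return math.exp(-lam * t_years)

print(fraction_remaining(5730))    # one half-life: fraction is ~0.5
print(fraction_remaining(11460))   # two half-lives: fraction is ~0.25
```

This is exactly the relationship radiocarbon dating runs in reverse: measure the surviving fraction, solve for t.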
Some radioactivity is related to the weak force, but not all. The strong force is always around since the strong force is what binds the nucleus together (i.e., holds the protons and neutrons together) and thus determines how stable that configuration of protons and neutrons is. If there is another configuration that would be more stable, then the nucleus can decay if there is a possible physical process to get from configuration A to configuration B.
If the transition requires changing a neutron to a proton or vice versa, the weak force is required. If not, then not.
Example 1: 238U is one possible way to stick 92 protons and 146 neutrons together. Another way is to make two separate objects: 234Th and 4He. This latter pair of objects is much more stable than the original 238U, so 238U can undergo “alpha” decay – essentially spitting out the 4He piece and becoming 234Th. The ejected 4He nucleus is called an alpha particle. The weak force does not enter this story in any direct way.
Example 2: 14C is a possible isotope of carbon, and it’s unstable. You might imagine that it could just spit out a neutron or two to become the stable 13C or 12C, but spitting out free neutrons actually costs energy rather than saving it, since those original bound neutrons are at least somewhat happily bound in the 14C. Otherwise, 14C wouldn’t even be a thing at all. This is in contrast to the alpha decay case, since breaking off two neutrons and two protons to make 4He can be an energy savings given that 4He is so strongly bound.
So, since you can’t spit out a (free) neutron or two due to energy cost, maybe you can change a (bound) neutron to a (bound) proton and be in a better configuration? That transition can happen via the weak-force-enabled “beta” decay, wherein a neutron becomes a proton plus an electron and an antineutrino. The electron zips away and is called the “beta” particle in this context; the antineutrino also zips away; and the proton replaces the neutron in the nucleus. In this example, it means 14C has decayed into 14N. The total number of protons+neutrons is unchanged, but it’s now 7+7 instead of 6+8, and this configuration – including any energy spent for the zipping away particles – is a better overall energy situation than the original 14C.
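As a sanity check on that energy bookkeeping, the energy released (the Q-value) in 14C → 14N can be computed from the standard tabulated neutral-atom masses (values below are the usual published ones, rounded):

```python
M_C14 = 14.0032420   # Da, neutral-atom mass of carbon-14
M_N14 = 14.0030740   # Da, neutral-atom mass of nitrogen-14
DA_TO_MEV = 931.494  # energy equivalent of one dalton, MeV

# For beta-minus decay, using neutral-atom masses makes the emitted
# electron's mass cancel, so Q is just the atomic mass difference:
q_mev = (M_C14 - M_N14) * DA_TO_MEV
print(f"Q = {q_mev * 1000:.0f} keV, shared by the electron and antineutrino")
```

That ~156 keV is the well-known maximum energy of the beta particle from carbon-14: positive, so the decay goes, but tiny compared to nuclear binding energies.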
For the level of discussion here, yes. (There are caveats, but ignorable ones in everyday life.)
This was a big question at the start of the 20th century, and it (among other things) led to the development of quantum mechanics. Some features: The electrons aren’t moving around along some path and can’t be said to exist “at a point”. Rather, each electron’s position is fundamentally a probability cloud around the nucleus. In a very real sense, this is what “electrons making their way to the proton” ends up as. There is no sense in which they can get “closer”. (More can be said, but that’s a starting point.)
(1) Gravitational attraction and (2) electrical attraction/repulsion between charged particles. To go a step deeper, the inverse square law here applies particularly in regimes where relativity and quantum mechanics can be ignored. When those aspects of physics can’t be ignored, the inverse square law may no longer be relevant. The strong force and weak force rear up in more intrinsically relativistic and quantum regimes, and thus their behavior is described in very different terms.
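One neat consequence of both being inverse-square: the ratio of electric to gravitational attraction between a proton and an electron is the same at every distance, since the r² cancels. A back-of-the-envelope in Python (standard constants, rounded):

```python
K_E = 8.988e9     # Coulomb constant, N m^2 / C^2
G   = 6.674e-11   # gravitational constant, N m^2 / kg^2
E   = 1.602e-19   # elementary charge, C
M_P = 1.673e-27   # proton mass, kg
M_E = 9.109e-31   # electron mass, kg

# F_electric / F_gravity for a proton-electron pair; r^2 cancels out
ratio = (K_E * E**2) / (G * M_P * M_E)
print(f"electric / gravitational ~ {ratio:.1e}")
```

The electric force wins by a factor of about 10^39, which is why gravity is utterly irrelevant inside atoms.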
The wiki article on atomic orbitals is nice, even if the underlying science is baffling.
“Cross-sections of atomic orbitals of the electron in a hydrogen atom at different energy levels. The probability of finding the electron is given by the color, as shown in the key at upper right.”
“Oh a probability distribution? Uniform? Normal (aka Gauss)?”
“It’s distributed as a superposition of complex number functions.”
Quantum mechanics is, uh, non-trivial, but the first step in understanding it is that it’s a generalization of ordinary probability.
Normal probability assigns a number to each of some possible outcomes. A few rules apply: the number must always be between 0 and 1, and the numbers must always sum to 1. For example, each side of a fair coin has probability 0.5. If the coin is biased, perhaps it’s 0.4 and 0.6 instead–but those still sum to 1.
The problem is that you are very limited in what transformations you can apply while maintaining the constraints. You can permute the values (say, by renumbering the sides of a die). You can take the probability from one outcome and move it to another. You can merge two outcomes by adding the probabilities together–with the constraint that since the probabilities are always 0-1, the summed probability must be greater than the individual ones. That’s about it.
What if you could do more? QM says: ok, we won’t work with probabilities directly. We’ll work with something called an amplitude. You can always convert an amplitude to a probability by taking the absolute value squared (that’s just normal squaring for real numbers, but multiplying by the conjugate for complex numbers). You don’t usually mess with the probabilities, just the amplitudes, but when you perform a measurement, the outcomes occur according to those probabilities.
Why is this interesting? Well, perhaps the biggest factor is that because amplitudes can be negative, when you sum them together you can get 0, or some number less than the originals. That’s what makes interference work: a +0.707 amplitude sums with a -0.707 amplitude to make 0. If you tried to measure the probabilities individually, you wouldn’t find anything weird–because they each square to +0.5, they are separately like a coin flip. But combined they sum to 0, which is impossible for normal probabilities.
Since amplitudes can be complex, you can also multiply by i, which also doesn’t affect the probabilities. An amplitude of i corresponds to a probability of 1 (i times its conjugate, −i). So multiplying by i doesn’t change the observed probability, but it does change how the amplitudes add together amongst themselves.
The normal rules of probability still hold. All of the amplitudes squared still have to sum to 1. That might sound like a tricky constraint to maintain, but it’s not. Because the set of amplitudes acts as a vector in some N-dimensional space, you can rotate the vector any way you like and the length will stay the same–that is, the amplitudes squared will still sum to 1. Just like if you have a meter stick, you can point it in any direction you like, and the position of the tip (measured as XYZ relative to the origin) will always be constrained to X² + Y² + Z² = 1. Just Pythagoras, really.
Permutation also works, as with normal probability. Really you can do almost anything that maintains the total of 1. It’s just that amplitudes give you much more flexibility and power than probabilities.
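To make the interference point concrete, here’s a tiny Python sketch using complex numbers as amplitudes (the ±0.707 numbers are just the example from above):

```python
# Two amplitudes that each look like a fair coin flip on their own
# (each squares to probability ~0.5) but cancel when combined.
a1 = 2**-0.5        # +0.7071...
a2 = -(2**-0.5)     # -0.7071...

def prob(amplitude):
    """Born rule: probability = |amplitude|^2."""
    return abs(amplitude)**2

print(prob(a1), prob(a2))   # each ~0.5 separately
print(prob(a1 + a2))        # amplitudes cancel: probability 0.0

# Multiplying by i leaves the probability alone but changes interference:
a3 = 1j * a1
print(prob(a3))             # still ~0.5
print(prob(a1 + a3))        # ~1.0 now, since |1 + i|^2 / 2 = 1
```

Note that probabilities alone could never do this: 0.5 plus 0.5 can’t give you 0.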
I think that one source of confusion is referring to the four fundamental “forces”. I think it’s more useful to think of them as interactions. In any sort of interaction at all, you have some set of particles going in, and some set of particles going out. In most of the sorts of interactions we’re familiar with, the set of particles coming out of the interaction is the same as the set that goes in, or at least correspond very closely. When a baseball bat hits a baseball, there might be a few little bits of the ball that stick to the bat, or bits of the bat that stick to the ball, or bits of either that fly off, but for the most part, you have one ball and one bat going into the interaction, and one ball and one bat going out of the interaction. In cases like this, the most interesting thing that happens is that the momentums of the particles change, and forces cause changes in momentum, so we refer to the interaction as a force.
Even when the particles don’t stay basically the same, like in a bomb exploding (i.e., one bomb going in, a bunch of pieces of shrapnel going out), we can still say that the shrapnel was the pieces of the bomb, and then look at the change in momentum of all of those bomb-pieces.
On the subatomic scale, though, quite often the particles that go into an interaction don’t correspond at all to the particles that come out. You might, for instance, have an interaction where a neutron goes in, and a proton, electron, and antineutrino come out. These are not pieces of the neutron: A neutron is not composed of a proton, an electron, and an antineutrino. They’re completely new particles. In an interaction like this, you can’t talk about how particular particles’ momentums change, because the particular particles don’t even continue to exist. And anyway, when the particles coming out are completely different than the ones going in, the momentum isn’t even the most interesting thing going on: You’re usually more interested in what those particles even are. That particular one is an example of the weak interaction.
Circling back, if a single hydrogen proton is positive and the electron is negative, why doesn’t the cloud collapse into the nucleus? What holds it back? The internet seems to think the answer is, “Kinetic energy”. Here are 2 alternatives from Reddit:
Here is another answer in addition to the others: consider the exclusion principle. If all the electrons were to fall to the nuclear center, they would have the same energy and the same quantum numbers, and the exclusion principle forbids fermions (like electrons) from doing this.
There is also the uncertainty principle. If the electrons were at the center of the nucleus their position would be more defined, which would increase the uncertainty in momentum, leading to a higher energy than if they were more distant in orbitals.
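The uncertainty-principle answer can be turned into a back-of-the-envelope estimate: take the kinetic energy of an electron confined to radius r as roughly ħ²/(2mr²) (from p ~ ħ/r), add the Coulomb potential, and minimize. This is the standard textbook estimate, not a real QM calculation, but it lands right on the Bohr radius and −13.6 eV:

```python
HBAR = 1.0546e-34   # reduced Planck constant, J s
M_E  = 9.109e-31    # electron mass, kg
K_E  = 8.988e9      # Coulomb constant, N m^2 / C^2
E_CH = 1.602e-19    # elementary charge in C (numerically also J per eV)

def energy(r):
    """Estimated total energy at confinement radius r:
    uncertainty-principle kinetic term plus Coulomb potential."""
    return HBAR**2 / (2 * M_E * r**2) - K_E * E_CH**2 / r

# Setting d(energy)/dr = 0 gives r = hbar^2 / (m k e^2):
r_min = HBAR**2 / (M_E * K_E * E_CH**2)
print(f"r_min ~ {r_min:.2e} m (the Bohr radius), "
      f"E ~ {energy(r_min) / E_CH:.1f} eV")
```

Squeezing the electron tighter than ~0.5 ångström costs more kinetic energy than the Coulomb attraction pays back, which is the quantitative version of “kinetic energy holds it up.”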
ETA: Article-length treatment of the question; I’ll read it later.
I imagine you can think of neutrons as being a catalyst. Put U235 near a bunch of neutrons and you get a lot of fission. Absent something like that, the “external environment” is generally not energetic enough or near enough to the nucleus to disrupt the strong nuclear force. What remains is just the statistical probability of the nucleus breaking down.
What causes the death? H+ is essential to chemiosmosis, an essential step in energy harnessing in all life forms. I guess that ions twice as heavy do not work as well. (OTOH Na+ ions can substitute for H+ in chemiosmosis, IIUC.)