I’m guessing I have a very fundamental misunderstanding of light waves but here goes (thanks in advance):
1. Is it correct that the intensity of the photon is the absolute value of the amplitude? So two waves meeting at their troughs would be just as intense as two waves meeting at their peak, and y=0 would be darkness? If so, what is the qualitative/quantitative difference between the photon at its peak vs. at its trough?
If you shoot a single photon at the screen so it doesn’t interfere with itself, shouldn’t it make a difference what the value of y (of the sine curve) was when it hit the screen as with the double slit experiment? Presumably the photon is still acting like a wave until the screen collapses the wave function, correct?
Even if the photons are sparse enough to arrive at the slit one at a time, an interference pattern still develops.
An important version of this experiment involves single particles. Sending particles through a double-slit apparatus one at a time results in single particles appearing on the screen, as expected. Remarkably, however, an interference pattern emerges when these particles are allowed to build up one by one (see the adjacent image). This demonstrates the wave–particle duality, which states that all matter exhibits both wave and particle properties: The particle is measured as a single pulse at a single position, while the wave describes the probability of absorbing the particle at a specific place on the screen.[28] This phenomenon has been shown to occur with photons, electrons,[29] atoms, and even some molecules.
The interference pattern only shows when there are two slits that form two new concentric waves (that’s at least how it’s described). When there’s one slit you only see a particle on the screen. In the case of no screen or one slit, what creates interference?
In the case of a single slit, the photon still interferes with itself (it has a non-zero wavelength!) and produces diffraction.
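As a numeric sketch of that single-slit diffraction (not from the thread; the slit width and wavelength below are made-up illustrative values), the Fraunhofer intensity goes as sinc² of a sin(θ)/λ:

```python
import numpy as np

# Fraunhofer single-slit diffraction: I(theta) ∝ sinc^2(a sin(theta) / lambda),
# where numpy's sinc(x) = sin(pi*x)/(pi*x). All values are illustrative.
a = 10e-6            # slit width in meters (assumed)
wavelength = 500e-9  # wavelength in meters (assumed, green light)

theta = np.linspace(-0.2, 0.2, 2001)  # angles in radians
intensity = np.sinc(a * np.sin(theta) / wavelength) ** 2

# Central maximum at theta = 0; first dark fringe where a*sin(theta) = lambda.
print(intensity.max())                      # 1.0 at the center
print(round(np.arcsin(wavelength / a), 4))  # ~0.05 rad to the first zero
```

So even one slit produces bright and dark fringes; it just takes two slits to get the familiar closely spaced double-slit pattern on top of this envelope.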
A particle travelling in empty space from one point to another has an amplitude something like \displaystyle\frac{e^{ipr/\hbar}}{r}, where r is the distance from point 1 to point 2. So it has a phase, but not really “peaks” or “troughs”. The spot of light on the screen will not suddenly vanish if you move the paper a tiny bit closer or farther away.
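To make that concrete, here's a toy calculation (my own illustration, with made-up path lengths in arbitrary units) summing two amplitudes of that e^{ikr}/r form and squaring the modulus:

```python
import numpy as np

# Two-path interference from complex amplitudes e^{ikr}/r: each path contributes
# a phase, and only the *relative* phase of the two paths matters.
k = 2 * np.pi / 1.0  # wavenumber for wavelength 1.0 (assumed units)

def amplitude(r):
    return np.exp(1j * k * r) / r

# Path difference of half a wavelength -> destructive interference:
psi = amplitude(100.0) + amplitude(100.5)
print(abs(psi) ** 2)   # nearly zero: the two contributions cancel

# Path difference of a full wavelength -> constructive interference:
psi = amplitude(100.0) + amplitude(101.0)
print(abs(psi) ** 2)   # close to (1/100 + 1/101)^2
```

Note that shifting *both* path lengths by the same small amount barely changes the result, which is the point about the spot not vanishing when you move the paper.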
I’m not a QP, but a photon isn’t like a particle riding a classical wave, moving between different states. There’s a reason the illustration of a single photon is a squiggly line: it’s better described as a wave packet.
To expand on this, remember that photons are electro-magnetic radiation. When the electric component is zero, the magnetic component is at a peak.
Another interesting fact is that phase and number are a Heisenberg pair, like position and momentum. Measurements of one member of a pair affect the other member. And this is essentially what a slit is doing: making a measurement of the phase. A very precise measurement of the phase, which makes the number of particles completely undetermined as the photon passes through the slits.
No, actually, the electric and magnetic parts hit zero at the same points (assuming a vacuum). There are two derivatives in the induction equations, not one.
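A quick numeric check of that claim (my own sketch; the field values are arbitrary): for a vacuum plane wave, E = E0 cos(kx − ωt) and B = (E0/c) cos(kx − ωt) share the same phase factor, so they vanish at the same points:

```python
import numpy as np

# Vacuum plane wave: E and B carry the *same* cosine, so they are in phase
# and hit zero together. E0, k, and the snapshot time are illustrative.
c = 3e8
E0, k = 1.0, 2 * np.pi
w = k * c
x = np.linspace(0.0, 2.0, 1001)
t = 0.0

E = E0 * np.cos(k * x - w * t)
B = (E0 / c) * np.cos(k * x - w * t)

# B is just E scaled by 1/c everywhere, zeros included:
print(np.allclose(E / E0, B * c / E0))  # True
```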
I should clarify that Feynman’s example of what a typical amplitude looks like is only meant to illustrate a particle going through one or more slits and experiencing interference; it does not include things like polarization of light or electron spin. https://www.feynmanlectures.caltech.edu/III_03.html
Well, the modulus squared. The amplitude can be complex. If the amplitude is i, then the probability is i \cdot (-i) = 1, not i \cdot i = -1.
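In code (a trivial check, but it makes the point): the probability is the amplitude times its complex conjugate, not the amplitude squared.

```python
# Probability = |amplitude|^2 = amplitude * (its complex conjugate).
amp = 1j  # amplitude of i, as in the example above

prob = (amp * amp.conjugate()).real
print(prob)              # 1.0, since i * (-i) = 1

print((amp * amp).real)  # -1.0: naively squaring gives a nonsense "probability"
```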
Also, there is no such thing as the intensity of a photon. There’s a certain probability that you’ll detect a photon in a specific area, and we can say that overall light intensity is proportional to photon density, but an individual photon is all or nothing.
My guess is that it would be undetectable. You’d lose some infinitesimal fraction of the incoming photons, but not enough to notice the difference.
Then again, I suspect a massive (decaying) photon would break a whole lot more physics than the thermal distribution of the cosmic background radiation, so maybe you would notice it as all of your subatomic particles lose integrity and you dissolve into a warm gas.
Why would subatomic particles lose integrity if photons have a very small mass? By undetectable, do you mean with current technology, or theoretically?
I just meant what Chronos wrote–if photons have mass, then pretty much everything we know about physics is wrong, including how all of the other fundamental forces work. I certainly wouldn’t press a button labeled “give all photons a tiny mass”. It could do a lot of things, almost all of them bad.
A dedicated experiment could potentially reveal a massive photon, but it’s not going to be easy. You’d have to start with an incredibly intense light source, and put it next to a particle detector–probably a neutrino detector. And then somehow detect the slight excess of neutrinos coming from the beam as the photons decay. Ideally, you’d actually detect both particles coming out (it would have to be two or more in order to conserve momentum). Correlating the detections would give you better confidence that the decay came from the beam. But neutrinos are so hard to detect that that will never happen. Even worse if they decay to whatever dark matter is made from (like neutrinos, but even more elusive).
I don’t understand why a slightly massive photon would imply our understanding of physics is wrong. GR didn’t wreck our understanding of Newtonian Physics.
Because it’s moving at the speed of light, and things with mass can’t do that as they require infinite energy to do so. ← This was extremely simplified, but close enough.