Laser Light Dispersion

100 miles is not an exaggeration at all, as you would know if you had read my post from Feb. 16. If anything, it’s an underestimate. From that post, for a Gaussian beam with an exit beam width of 31 cm, and for visible light at 600 nm wavelength, the beam at 100 miles is only about 1.6 times as wide. For the MIRACL laser, the wavelength is about 10 times longer, so you’d only need to be about 30 miles away (for a 31 cm diameter Gaussian beam). This is why I said it was too simple a criterion. I’d never think “significantly farther” than 1 foot meant 30 miles.
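
For anyone who wants to check those numbers, here’s a quick sketch using the standard Gaussian-beam spreading formula. Treating the 31 cm figure as the full waist diameter (2*w0) and the beam as diffraction-limited are my assumptions, not something spelled out above.

```python
import math

def width_ratio(diameter_m, wavelength_m, distance_m):
    """Return w(z)/w0 for an ideal Gaussian beam with waist diameter 2*w0."""
    w0 = diameter_m / 2.0                    # waist radius
    z0 = math.pi * w0**2 / wavelength_m      # Rayleigh length
    return math.sqrt(1.0 + (distance_m / z0)**2)

mile = 1609.34  # meters per mile

# 31 cm beam, 600 nm light, 100 miles downrange
print(width_ratio(0.31, 600e-9, 100 * mile))   # ~1.6, matching the figure above

# Same exit diameter, wavelength 10 times longer (roughly MIRACL's regime)
print(width_ratio(0.31, 6e-6, 30 * mile))      # already about 4 times as wide at 30 miles
```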

100 feet is nowhere near far enough. If your point source approximation were valid at 100 feet, then the beam would be about 5,000 times wider at 100 miles (100 miles is roughly 528,000 feet, so the distances differ by a factor of about 5,300), and it would be useless against satellites. So your statement that “for similar purposes of blowing things out of the sky, a point source is a very good approximation” is incorrect.

As for laser beams usually being smaller than that, it doesn’t matter whether you generate a 1 foot diameter beam directly or start with a smaller beam and expand it with lenses, so this is irrelevant.

It is too clear, and so it is hard to see.

I just took some time to look over the equations again, ZenBeam, and I retract my previous statement; 100 feet is a slight understatement. The inverse-square law begins to show its effects at 500 feet, and by 1 mile it has pretty much taken over. My apologies for tossing that number out without prior calculations, but my wild guess was still closer to the truth. I’m not going to check where you got your equations and how you did the math, but I simply don’t see how we can have such a large discrepancy between our results. If the MIRACL beam had a 1 mrad divergence, it would be 760+ times as wide at a distance of 100 miles, while its intensity would have dropped by a factor in the million range; and that’s just what you get with simple geometry. My guess is you have badly underestimated the divergence of a real-life laser beam, or you might have plugged the wrong numbers into the wrong formula.
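
Just to show where a figure like that comes from, here is the simple geometry, assuming a full divergence angle of 1 mrad. The 0.21 m starting diameter is my own assumption (roughly MIRACL-sized); the exact width ratio depends strongly on that choice.

```python
full_divergence = 1e-3          # radians (the 1 mrad figure quoted above)
distance = 100 * 1609.34        # 100 miles, in meters
start_diameter = 0.21           # meters -- an assumed starting size

spread = full_divergence * distance              # ~161 m of added width
final_diameter = start_diameter + spread
width_ratio = final_diameter / start_diameter    # ~770x for this starting size
intensity_drop = width_ratio**2                  # ~6e5 -- the same order as "the million range"

print(width_ratio, intensity_drop)
```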

By the way, my point about the rarity of 1 foot beams is very much relevant to the topic at hand, at least in the direction you seem to be steering it. Trying to enlarge a beam with a lens system is highly problematic, since you have to deal with a whole bunch of obstacles beyond simple beam divergence.

P.S. The MIRACL beam was in fact useless against satellites; I said they did a test, I didn’t say they succeeded in any way :smiley:

If you’d post what equations you’re using, that would help find the discrepancy.

Perhaps this is part of it. Where did you get that MIRACL has a 1 mrad divergence? Stephen’s post on Feb 16 says a *typical* beam divergence of 1.1 mrad, but MIRACL isn’t typical. From Frolix’s post, the full divergence angle (in radians) is Theta = (4 * Wavelength) / (pi * Beam Diameter). MIRACL has wavelength = 6E-6 m. On a MIRACL site I found, they said they converted their 3 by 21 cm beam to a 14 cm square, although I think they may still put this through more optics; the site wasn’t clear on this. Using diameter = 14 cm, I get 0.055 mrad. Regardless, this formula is valid only when you are in the far field. You can’t use it to determine when you are in the far field.
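
Plugging the numbers above into that formula, read as a full angle of 4 * Wavelength / (pi * Beam Diameter) (that reading is mine):

```python
import math

wavelength = 6e-6   # meters, the MIRACL wavelength used above
diameter = 0.14     # meters, the 14 cm beam treated as a round aperture

theta = 4 * wavelength / (math.pi * diameter)   # full divergence angle, radians
print(theta * 1e3)   # ~0.055 mrad, as stated above
```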

I’ve posted the equations I’ve been using, and the parameters I’ve been assuming. I hate saying this, but I’ve checked several times, and I haven’t made any mistakes. If you don’t like the assumptions, I’ve provided formulas so you can plug in whatever wavelength and beam diameter you’d like. If that’s too much trouble, tell me what wavelength and beam diameter you’d like to use, and I’ll tell you how far away you have to be to be in the far field for that case.

By the way, the 31 cm diameter I was using came from a crude estimate of the beam diameter you’d need to hit a target 100 miles away with a 6E-7 m wavelength laser without too much dispersion, not from MIRACL.
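
If anyone wants to try their own numbers, here is the sort of far-field estimate I mean. Using both the antenna-style distance of about 2*D^2/wavelength and the Gaussian-beam Rayleigh length is my choice of yardsticks; the thread itself only cites the D^2/wavelength criterion.

```python
import math

def far_field_estimates(diameter_m, wavelength_m):
    """Two rough yardsticks for where the far field begins."""
    antenna_far_field = 2 * diameter_m**2 / wavelength_m               # antenna rule of thumb
    rayleigh_length = math.pi * (diameter_m / 2.0)**2 / wavelength_m   # Gaussian-beam z0
    return antenna_far_field, rayleigh_length

# Example: the 31 cm, 600 nm beam discussed above
print(far_field_estimates(0.31, 600e-9))   # both come out at well over 100 km
```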

It is too clear, and so it is hard to see.

Yipes! I go away for a day and I come back to find that I’m “wrong.” No indication that my interlocutor even understood my argument or my point; I’m just wrong. But then, perhaps I was a bit hasty, and I didn’t really make my subtext clear. I was trying to accomplish two things.

First, I was trying to address the original question, which I understood as “does a collimated beam–a laser for example–obey the inverse square law?” My answer was that, to the extent that a laser beam is, in fact, collimated, it obeys the inverse square law as it is commonly understood, provided that you recognize that the “point source” involved is virtual and that you correctly identify where it is. In a perfectly collimated beam, the virtual point source is an infinite distance “upstream” in the beam. This is basic geometric optics. It is also easily demonstrated by passing a laser beam through a converging lens and noting that the beam focuses to a point. If you were foolish enough to look into the business end of a laser with a telescope (or even without one) you could see the virtual point source way out there at (or near) infinity.

The complication, as ZenBeam has been at pains to point out, is that describing the collimation of real laser beams is harder than that: there is the beam waist effect, and for most practical exit apertures, Fraunhofer diffraction plays an important role in shaping the beam profile as a function of distance along the optical axis. More on this in a moment.

My second objective (and at this I simply failed) was to bring out that the inverse square law is a property of light itself, not a property of some kinds of sources of light. There are no exceptions to the inverse square law. Hence my initial remark that laser light obeys the inverse square law.

Rigorously put, the inverse square law states that the power density of light propagating along a ray varies with displacement along the ray as the inverse of the square of the radius of curvature of the wavefront. The most obvious application of this principle is to describe the falloff of intensity from a “point source.” Hence the simplified definition cited in elementary textbooks and encyclopedia articles. The light bulb is the canonical example of a source which is supposed to obey the inverse square law–but it will come as no surprise to the erudite followers of this thread that it does so only approximately, and then only at distances large compared to the diameter of the bulb. However (fanfare, please) you can predict very accurately the power density as a function of distance by using the rigorous statement of the inverse square law in an appropriately detailed calculation.

So yes, ZB, if you get a really tiny photometer probe and run it down the axis of a laser beam, and record the output as a function of distance from the center of the laser instrument, the output coupler, or the beam waist point, or wherever, you won’t get Eo/z^2. At least not until you get way out past the Rayleigh length. My point is that you will get Eo/R^2 where R is the local radius of curvature of the wavefront. That value of R corresponds to the position of the virtual point source.

So what’s the value of R? Well, to respond to your challenge, ZB: in a system with no waist, z0=infinity, and the equation is satisfied when R = z0. This is the simple case I started with.

In a system with a waist, it’s a bit more complicated, and we’re getting beyond my expertise. But it looks like, on axis, the local radius of curvature is negative inside the waist point, passes through infinity (i.e., a plane wavefront) at the waist point, and becomes positive outside. Thus, the value of R changes with z, and rather dramatically.
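
Here’s a small numerical sketch of that behavior, using the textbook Gaussian-beam expressions R(z) = z + z0^2/z and on-axis intensity I(z) = I0 / (1 + (z/z0)^2). Treating those as the right formulas for this discussion is my assumption, not something derived in the thread.

```python
z0 = 1.0  # Rayleigh length, arbitrary units

def R(z):
    """Wavefront radius of curvature; blows up (plane wavefront) as z -> 0."""
    return z + z0**2 / z

def on_axis_intensity(z):
    """On-axis intensity, normalized to 1 at the waist."""
    return 1.0 / (1.0 + (z / z0)**2)

for z in (0.1, 1.0, 10.0, 100.0):
    # Far beyond z0, R(z) ~ z and I(z) ~ (z0/z)^2, so the intensity does fall
    # off roughly as 1/R^2 out there -- but near the waist it clearly does not.
    print(z, R(z), on_axis_intensity(z), 1.0 / R(z)**2)
```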

But what really scares me about this analysis is this: my understanding of the origin of the beam waist is that it arises as a cavity effect of the standing wave pattern inside a laser cavity with non-planar endplates. The inverse square law applies to propagating radiation, so you have to be careful how you apply it in a standing wave situation. I’m also not entirely sure how a waist is formed outside a cavity. ZB, the equation you use for the beam power density looks to me like the inverse square of the beam width for the waist region of a confocal cavity. How general is it? My (friendly) challenge to you is to derive the equation. I don’t have time to do the research to do it myself, and you seem to have expertise in lasers.

None of this treats the question of diffraction effects. The beam emerging from the end coupler (or beam waist) of a laser is subject to Fraunhofer diffraction at the aperture. The far-field divergence is due to this diffraction as well as emergence from the waist.

But enough. You can wake up now.

Zor, it’s nice to hear from you again.

I’m going to pass on that challenge. My college days are over. :slight_smile: I will point you to my source (pun intended), which is *Fields and Waves in Communication Electronics* by Ramo, Whinnery and Van Duzer. I believe my copy is the 2nd Edition, not the 3rd. My field is electromagnetics, not lasers specifically. The particular formulas in my posts above are for a Gaussian beam, but the criterion for the far field, being farther than about D^2/wavelength, is necessary in general for all antennas.

Good call. R = z + z0^2/z.
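
For anyone following along, that is just the usual textbook form of the Gaussian-beam wavefront curvature rearranged (identifying it with the textbook expression is my gloss, not something stated above):

```latex
R(z) = z\left[1 + \left(\frac{z_0}{z}\right)^{2}\right] = z + \frac{z_0^{2}}{z},
\qquad R(z) \to \infty \ \text{as } z \to 0 \ \text{(plane wavefront at the waist)},
\qquad R(z) \approx z \ \text{for } z \gg z_0 .
```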

This sounds reasonable, although I’m trying to visualize what happens when R becomes smaller than a wavelength (say for a dipole FM radio antenna). I’m not sure a wavefront is even clearly definable near the dipole. Is this a high frequency approximation? (Although the OP (remember him? :wink: ) was talking about lasers in particular, so that would certainly be valid.)

It is too clear, and so it is hard to see.

All right, who has a copy of Hecht’s *Optics* handy? :wink: I suppose Geezer summed it up pretty well though; it all comes down to the initial divergence angle if we don’t consider a lens system. I did a quick search through the websites of several laser manufacturers and also the technical manuals I have nearby. The common full beam divergence of the lasers we have today is about 1 mrad, with 0.8 mrad being the lowest (a UV laser, by the way). Just a guess, but the limiting factor is probably diffraction rather than the natural divergence of a Gaussian beam. Note that this is just the divergence you get right out of the exit window. You can significantly decrease the divergence of the beam with a lens system, but then what are we debating over? If we go back to the original post and put a lens system on top of our laser, then yes, we can limit the inverse square intensity loss. If we don’t, then we’ll just have a big flashlight. Now, what are the conditions again?
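
To put that ~1 mrad figure in perspective, here is the simple geometric spread of an off-the-shelf beam with no extra collimating optics. The 5 mm starting diameter is my assumption for a typical small laser, not a number from this thread.

```python
full_divergence = 1e-3      # radians (~1 mrad, as quoted above)
start_diameter = 0.005      # meters -- assumed typical exit diameter

for distance in (100.0, 1000.0, 160934.0):   # 100 m, 1 km, 100 miles
    diameter = start_diameter + full_divergence * distance
    print(f"{distance:>9.0f} m : beam roughly {diameter:.1f} m across")
```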