Is all water curved?

This logic doesn’t follow at all.
Heck, if that worked, we should also infer that objects cannot have a shape at all, since they’re all made of atoms, which do not have a clear, consistent shape.

Cool—then we can talk turkey. Also, I didn’t mean to patronize you (or anyone else for that matter).

Well, the brute-force method for this is to have a calibrated, known-straight-to-some-precision edge in the photo. You write a transfer function to “straighten” the known-straight line, and then you can measure the horizon’s curvature with a reasonable degree of certainty—and you can quantify that (un)certainty.

This is how adaptive optics works: you use a known-circular body (a “guide star”) near the body of interest, correct that known body to circular, and then get a much clearer image of the thing you’re trying to see. You can also use a laser to light up the thin layer of sodium atoms in the upper atmosphere, creating a synthetic guide star.

This is usually applied to correct atmospheric distortion—:notes:do not twinkle, little star:musical_note:—but would work fine for less transient distortions.

The math is largely in the transfer function I mentioned. But at one extreme—taking an image of the earth from the moon—you can calculate the curvature of the earth with no lens corrections at all. All we’re doing is moving the camera closer to the subject.
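
If it helps, here’s a toy sketch of that transfer-function idea in Python. The single-parameter radial “division model,” the search bounds, and all the names are just illustrative choices on my part, not any particular library’s API:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy model: a single radial "division model" distortion about the image center.
def undistort(pts, k1, center):
    d = pts - center
    r2 = (d ** 2).sum(axis=1, keepdims=True)
    return center + d / (1.0 + k1 * r2)

def line_residual(k1, pts, center):
    # Deviation of the undistorted points from their best-fit straight line.
    u = undistort(pts, k1, center)
    resid = np.polyfit(u[:, 0], u[:, 1], 1, full=True)[1]
    return float(resid[0]) if resid.size else 0.0

def measure_horizon(edge_pts, horizon_pts, center):
    # 1) Solve for the k1 that makes the known-straight edge come out straightest.
    k1 = minimize_scalar(line_residual, bounds=(-1e-6, 1e-6), method="bounded",
                         args=(edge_pts, center)).x
    # 2) Apply the same correction to the horizon and fit a parabola; a nonzero
    #    quadratic term is residual curvature of the horizon, in pixel units.
    horizon = undistort(horizon_pts, k1, center)
    curvature = np.polyfit(horizon[:, 0], horizon[:, 1], 2)[0]
    return k1, curvature
```

A real calibration would fit several radial and tangential terms, but the structure is the same: solve for the correction that straightens the reference, then apply it to the horizon.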

Gotcha. Again, I didn’t intend to patronize you. In that case, we’re talking primarily about the computational deconvolution I described above.

Totally. Your point is well taken; I didn’t mean for my nitpick to overwhelm it. Cheers!

Missed the edit window, but you asked about the math that determines how sensitive this method would be, and that’s a good question.

That would depend on the resolution of the sensor, the diffraction limit of the lens, the inherent distortion in the lens, its FOV, the transfer function itself, and a host of other factors.

Obviously, I haven’t computed this, but it should be pretty easy to figure out, at least to a first approximation.
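
For instance, the diffraction piece alone is quick to estimate (all numbers below are made up for illustration):

```python
import math

# Rayleigh criterion: diffraction-limited angular resolution of the lens,
# and the radius of the Airy pattern's first dark ring at the sensor.
wavelength_m = 550e-9      # green light
aperture_m   = 0.0357      # e.g. a 50 mm lens at f/1.4
focal_m      = 0.050
pixel_m      = 4.0e-6      # a typical ~4 micron sensor pixel

theta_rad = 1.22 * wavelength_m / aperture_m   # smallest resolvable angle
blur_m    = theta_rad * focal_m                # Airy radius at the sensor

print(theta_rad, blur_m / pixel_m, "pixels of diffraction blur")
```

Folding in the sensor resolution, the lens’s measured distortion, and the transfer function’s residuals would give you a real error budget.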

Problem is, in any real picture of the Earth, the only person who’d be able to find a “known, true straight line” would be the flat-Earther.

The other problem is that, for an image that contains multiple known, true straight lines, there is, in general, no transfer function that will straighten all of them.

I appreciate the informative reply – theoretically, it all seems quite possible to me; it’s just that in my real-world use, I find the lens correction tools available to me through widely available commercial software to be visually imperfect. If, say, I’m shooting at an upward angle at a level, straight roof with my Nikon 24-70 f/2.8G lens (so the best 24-70 they offer) at the wider end of the focal lengths, even when I apply lens correction, there’s still a slight distortion.

Now, this IS complicated by the fact that this is a variable focal length lens, and I assume the distortion varies by focal length – you would know better than I, since you really seem to know the nitty-gritty about this. Perhaps with a prime (fixed focal length) lens the distortion correction could be more accurately applied, though I still wonder whether it can be completely generalized or whether it varies slightly from lens to lens due to manufacturing tolerances. (Though I guess it probably wouldn’t vary much.)

Thank you for the technical knowledge.

Yep. I’m a Lightroom user too, and I’ve seen what you just described. More below.

Yes, a prime makes things easier. Even primes, though, have manufacturing tolerances.

The issue isn’t that Lightroom-style lens corrections can’t be completely generalized—it’s that those corrections solve a slightly different problem than deconvolution does. Even with a prime lens, you have the issue of “focus breathing,” where the distortion changes as the lens elements move relative to one another.

Barrel/pincushion correction gets you in the ballpark, but sample-to-sample variation between lenses is still a thing, just as you suggest. Deconvolution can be applied to any lens at any focal configuration—which is both its strength and its weakness.

If you apply it to a given lens/focal length/focus position, you’ve still got atmospheric variations changing at effectively 10-20 Hz. Its specificity is its superpower and its kryptonite.

Of course there is: for a radially symmetrical lens, one function would cover it. And where to source these perfect, radially symmetrical lenses? Why, they’re hand-ground from cross-linked unicorn tears by a team of spherical cows (from Switzerland, natch).

You added the qualifier “in general,” which kinda implies that you’re looking for a grand unified lens transfer function. You’re right—there isn’t one.

But if you include, say, a calibrated granite slab or other “truly” straight line in the foreground, parallel to and a few pixels away from the horizon, you’ll be close enough for government work. :slight_smile:

If you view a photograph taken with a pinhole camera with your eye at the same distance from the photo as the pinhole was from the focal plane (well, from wherever the sensitive material is, given that there is really no such thing as a focal plane with a pinhole), you will see exactly the same image geometry as your eye would have seen observing the actual subject. We almost always look at photographs from much too far away.

If you work out where the effective optical centre was for any given photograph and place an eye there, there is a sudden and dramatic change in perception of the image. Even without stereopsis we get a clear perception of depth, as all the perspective cues in the picture suddenly snap into their correct relative locations.
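
The arithmetic is just a proportionality; here’s a trivial sketch (the 50 mm and 6x figures are made up for illustration):

```python
# The correct eye-to-print distance is the pinhole-to-film distance
# scaled by the print's enlargement factor.
def viewing_distance_mm(pinhole_to_film_mm: float, enlargement: float) -> float:
    return pinhole_to_film_mm * enlargement

# e.g. a negative made 50 mm behind the pinhole, enlarged 6x for printing,
# should be viewed from 300 mm for correct perspective.
print(viewing_distance_mm(50.0, 6.0))  # -> 300.0
```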

As for worries about lens aberrations in a photograph: within the limits of diffraction, a pinhole can be a remarkable device. Just a little impractical.

I was assuming ideal, radially-symmetric optics. It still can’t be done. Imagine a scene with a pair of railroad rails, both straight lines: Look down the rails one way, and they converge at the horizon. Look down them the other way, and they converge at the opposite horizon. What kind of transformation are you going to apply to two lines that converge at two points, to make them both straight?

Is it straight, or is it parallel to the horizon? The whole point of this thought experiment is that the horizon isn’t a straight line.

First, I want to apologize to those of you who are metric diehards. I’m an old fart and I still use inches. I was just thinking of something, and I think I sprained something in my brain. Say I take a piece of stable material, 16 inches square and 1 inch thick. In the center is a 12-inch-diameter machined depression, 1/4 inch deep at the edge. The machined depression matches the general curvature of the earth at the equator, which would mean the center of the depression is shallower than the edges.

Two questions: What would be the difference between the center and the edge? And if water does follow the curvature of the earth, would that mean that if the depression is filled with water to the top, the water would be the same depth across the entire 12-inch depression?

[Emphasis mine]
I already conceded that a single, generalized transform doesn’t exist, so I’m not sure what your point is with this example. I failed to state my assumptions in a rigorous way, but more on that below.

You’ve got me dead to rights. I wrote “parallel” when I meant “tangential.” It was a foolish oversight on my part. Peer review would have caught such an egregious error, and I suppose it kind of did here too.

I tried to emphasize that I wasn’t being especially rigorous by throwing in the bit about spherical cows and cross-linked unicorn tears. The bit about being “good enough for government work” was another clue. But yeah, you nailed my semantic oversight.

And you’re not wrong to make that point—discussions like these require clear and consistent definitions and terminology. I was a little sloppy and also didn’t fully state my assumptions, including:

  • the “guide star” line and the horizon are tangential to one another (or nearly so).

  • both lines are in planes approximately parallel to the plane of the sensor.

  • the sensor itself is perfectly planar.

I could list many more implicit assumptions in my little scenario, but I see no need to beat a dead horse here.

No, there is no generalized transform for multiple straight lines in a single image, but we’d already established that. Yes, I made a small-but-arguably-important slip and used one term when I meant to use another. I agree that the difference between “parallel” and “tangential” is germane to the problem at hand. I should have been more precise.

But the immediate question (as I understood it) was “can lens distortion be deconvoluted enough that a photograph could be used to measure the curvature of the ocean horizon to some currently-undefined-but-subjectively reasonable degree of accuracy?”

I maintain that the answer is yes, and I’ve tried to articulate why I think so. Pulykamel has effectively said, “ok, I see that it might be possible to measure this way, but with what kind of precision?”

I think that’s a valid and interesting question. YMMV.

If I understand your scenario, you wouldn’t have a depression (concave-up) but rather an arced protrusion (concave-down), and the arc radius would be centered at the center of the earth.

In that case, yes, you’d theoretically have a constant depth of water across the protrusion. If the protrusion is cylindrical through the depth of the plate (as opposed to spherical), the water would be infinitesimally deeper at the center of the plate than at its front or back edges.

But in the real world and at the scale you mentioned, the water’s behavior will be dominated by its meniscus and other surface tension effects. Water’s viscosity would probably matter too at this scale, if you could keep it from evaporating immediately.

I may or may not do the trig tomorrow, but the height of the protrusion compared to the left/right edges of the plate may be best measured in angstroms. That is a metric unit, but we can use scientific notation to keep inches workable. :slight_smile:

Ah, I see. I thought that when you conceded the lack of a single, generalized transform, that you were just conceding that it wouldn’t exist because of the possibility of asymmetric optics.

But I think I understand what you’re describing now. If the world were a flat, infinite plane, then the horizon would be a straight line, and so you would be able to take another straight line, such as the edge of a ruler, and line it up exactly with the horizon. Even if our choice of lens and angle of view and so on distorted that particular straight line, we would still see that, in the photograph, the ruler exactly lines up with the horizon.

On the real, spherical Earth, however, the horizon won’t be exactly straight, and so it won’t be possible to line up the ruler with the horizon: if the ends of the ruler line up with the horizon, then there will be a slight gap between the two in the middle.

I’m not sure how effective an argument this would be against a flat-Earther, however, since the flat-Earther could argue that you simply used a defective (curved) ruler in your staged photograph. One could, of course, take this into account, but I think it would end up being much more difficult than other, simpler proofs of the curvature of the Earth.

As I mentioned in an earlier post, my back of the envelope calculation was about 1 inch over 1000 feet.
So over 1 foot? Presumably 1/1000th of an inch. But this is a convex shape, so the edges would be 1/2000 of an inch lower than the center. As others mention, surface-tension effects probably override this. Crustal density anomalies, small thermal currents, evaporation processes, etc., may also distort it.

A more specific example against a flat earth can be found in the boundary between Manitoba and Saskatchewan in Canada. It looks like a straight line on the map, but zoom in on Google Earth and the spherical geometry (lines of latitude get shorter as you go further north) means the border is actually a series of progressive jogs, as the fixed-size rural mile-square sections take up more minutes of longitude the further north you go.
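
A quick sketch of the effect (the radius and sample latitudes are just illustrative):

```python
import math

# How many arcminutes of longitude a one-mile east-west section spans
# at various latitudes (spherical Earth; radius in miles is approximate).
R = 3959.0
for lat_deg in (49, 55, 60):
    miles_per_deg_lon = math.radians(1.0) * R * math.cos(math.radians(lat_deg))
    print(lat_deg, round(60.0 / miles_per_deg_lon, 3), "arcmin of longitude per mile")
```

A border assembled from fixed mile-square sections therefore has to jog sideways periodically to stay near its nominal meridian.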

You could use the formula here to find the difference in height, but I expect the angle would be too small for most calculators to get anything other than 1 for cos a. If we use the small-angle approximation 1 - cos a ≈ a^2/2 (I’ve no idea how valid that would be), we’d get the following:

The distance of 6 inches from the edge to the center is equal to an angle of 0.0000014 degrees (assuming the Earth’s circumference is 1,577,756,570 inches).

Using the small-angle approximation above, the formula in the link, and a radius of the Earth of 251,095,700 inches, we get a difference in depth of 0.00025 inches, or a quarter of a thousandth of an inch.
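
In script form, keeping the angle in radians throughout (which matters at angles this small, and pulls the answer down toward the angstrom scale mentioned upthread):

```python
import math

# Same arithmetic, using the figures from this post.
R = 251_095_700.0   # Earth's radius in inches
d = 6.0             # edge-to-center distance across the dish, inches

a = d / R           # subtended angle in radians (~2.4e-8 rad, or ~0.0000014 degrees)

# 1 - cos(a) underflows double precision at angles this tiny, so use the
# equivalent identity 1 - cos(a) = 2*sin(a/2)**2 for a stable exact value.
sagitta_exact  = 2 * R * math.sin(a / 2) ** 2
sagitta_approx = d * d / (2 * R)   # small-angle form, R*a**2/2

print(sagitta_exact, sagitta_approx)  # both ~7.2e-8 inches, i.e. tens of angstroms
```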

The back of my envelope with a lot of rounding for mental math came up with about 1/2000 or 0.0005, so I see we’re in the same ballpark.

Not so fast - in a recent discussion about this very phenomenon, it was persuasively argued that this is an optical illusion and that the stars below the celestial equator are also circling the Northern celestial pole.
Apparent motion of stars in the night sky - General Questions - Straight Dope Message Board

I meant to concede the whole thing, but in retrospect, I see why you initially read it as you did. Frankly, I could have been clearer, and I sincerely wish I’d been so.

Yes, that’s basically my argument. The idea is that after applying the transform that makes the “truly” straight reference line straight, we can then measure the curvature of the horizon to a reasonable degree of accuracy.

It’s likely not the most precise way to measure this, but it does work. And I was only trying to answer the question, “is this a workable option?”

I agree. This argument isn’t persuasive to that kind of person. It’s just one of many alternate proofs that flat-earthers reject.

All stars in either hemisphere are circling both poles. Or more precisely, they’re circling the axis that passes through both poles.