I had always thought that the zoom function with a wheel mouse was weird and annoying in Google Maps… it seemed impossible to centre the image and then zoom in. It always zoomed off centre. Why does it do that?
It always bugs me when programs don’t zoom toward the pointer like that. A while back I wrote a real-time Mandelbrot set renderer that worked that way, and it made zooming in and out fun and easy. Most other programs at the time just zoomed to the center of the screen, which is painful when you want to zoom in on some interesting off-center point.
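For anyone who wants to implement it, the trick is just to keep the point under the cursor mapped to the same plane coordinate while you rescale. Here’s a minimal Python sketch of that math for a Mandelbrot-style view; the names (View, zoom_at, and so on) are made up for illustration, not taken from any particular program.

```python
# Minimal sketch of zoom-toward-the-pointer, as described above.
# View state: the complex-plane coordinate at the window center plus a
# scale (plane units per pixel). All names here are illustrative.

from dataclasses import dataclass

@dataclass
class View:
    center: complex   # plane coordinate shown at the window center
    scale: float      # plane units per screen pixel
    width: int        # window size in pixels
    height: int

def screen_to_plane(view: View, px: float, py: float) -> complex:
    """Map a pixel position to its complex-plane coordinate."""
    return view.center + complex(px - view.width / 2,
                                 py - view.height / 2) * view.scale

def zoom_at(view: View, px: float, py: float, factor: float) -> View:
    """Zoom by `factor` (>1 zooms in) while keeping the point under the
    cursor fixed on screen: shrink the scale, then move the center so the
    cursor still maps to the same plane coordinate."""
    anchor = screen_to_plane(view, px, py)
    new_scale = view.scale / factor
    new_center = anchor + (view.center - anchor) / factor
    return View(new_center, new_scale, view.width, view.height)

# Example: one wheel click zooming in 25% around pixel (100, 150).
v = View(center=-0.5 + 0j, scale=0.005, width=800, height=600)
v2 = zoom_at(v, 100, 150, 1.25)
# The plane coordinate under the cursor is unchanged by the zoom:
assert abs(screen_to_plane(v, 100, 150) - screen_to_plane(v2, 100, 150)) < 1e-12
```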
But if you just want to zoom in/out on the center of the screen, rather than centered on the mouse pointer, I think you can use the + and - keys to do that.
Now, perhaps someone can answer some Google Map questions for me:
I also have an Apple laptop that I’m only slightly familiar with, so I could use some tips on how to use it. Is there any equivalent for the mouse wheel there? When I use Google Maps (or Google Earth, which I do a lot), I can zoom in/out with the + and - keys, which zoom on the screen center. But I want to zoom on the pointer, just like with a mouse wheel. How do I do that with an Apple laptop? (Bearing in mind that there is no right button either, and no mouse-wheel equivalent that I know of.)
There are other web sites that use the Google Maps API too (although possibly an older version). I often look at vfrmap.com, which displays aeronautical charts. It works substantially the same way.
I think you make a pinch gesture using thumb and forefinger on the trackpad (well, actually any two fingers - move them apart to zoom in and closer together to zoom out).
Another question: Does anyone know how to print out a black and white readable map? [I knew how to do it a few years ago but the current ones I try to print out are unreadable.]
Now that that’s answered, can someone tell me why satellite view quality took a giant leap backwards?
The sat view images used to be pictures. There were artifacts where the images were joined, but all in all the images were clear and crisp. Now they’ve done something so that the trees look like the best polygonal game images 1990 had to offer. And other portions of the images are all wavy and fuzzy.
It’s a 3D rendering. If you pan the map to and fro, you’ll notice that from certain positions you can see one side of a building but not the other - and then from other positions you can see the other side but not the first. Just as if you were actually hovering some distance above the ground.
The 3D aspect becomes more obvious if you are looking at a very tall building (e.g. the Willis Tower in Chicago) and zoom down until you’re just above the rooftop.
Look for the 2D/3D toggle button at bottom right, and it’ll even let you go to non-vertical view angles.
Whoa, that’s wild. They’ve made the “satellite” images 3D. It works a lot better on buildings than on trees, but I’m surprised that they were able to make it work at all. That’s a lot of computing power, even for Google.
They’re probably using graphics processors for the structure-from-motion processing (i.e., converting image sequences to 3D geometry). Plenty of horsepower there. As it turns out, GPUs are well-suited for both forward and inverse rendering.
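If you’re curious what “converting image sequences to 3D geometry” boils down to, here’s a toy two-view triangulation in Python/NumPy: given where the same point lands in two images taken by cameras with known projection matrices, the standard linear (DLT) method recovers the point’s 3D position. The camera matrices and numbers below are made up for illustration; a real structure-from-motion pipeline does this for millions of matched points and also has to estimate the camera poses themselves.

```python
# Toy two-view triangulation (the core geometric step of structure-from-motion).
# Given the same point seen at pixel (u, v) in two images with known 3x4
# projection matrices, recover its 3D position with the linear DLT method.
# The cameras and the point below are invented purely for illustration.

import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Solve A*X = 0 for the homogeneous 3D point X (least squares via SVD)."""
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.array([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                      # dehomogenize

def project(P, X):
    """Project a 3D point into an image with projection matrix P."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two cameras: identical intrinsics, the second shifted one unit along x.
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])

point = np.array([0.3, -0.2, 5.0])           # ground-truth 3D point
recovered = triangulate(P1, P2, project(P1, point), project(P2, point))
print(recovered)                             # ~ [0.3, -0.2, 5.0]
```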
…And I think I’ve figured it out. They don’t actually have the 3D data stored (or at least, not all of it), but do the 3D rendering on the fly, as it’s requested by a user. I was just tooling around a bit in Yellowstone Canyon, and it was constantly adjusting the elevation profile as I moved.
Next question is whether they store that information once generated, and whether they’re doing the computations on their own computers or the user’s.
No, that’s just level-of-detail optimization. They definitely have the full set of 3D geometry and textures already on their servers. But it takes a while to stream it to your system, so they start with low-detail meshes and increase the detail as more data comes in. Same thing as every open-world game nowadays, but more obvious because it comes over a network link instead of an SSD.
Oh, and to clear up something, the rendering is done entirely locally (using your computer’s graphics chip). Google sends you the mesh and texture data, and then your system renders that into a scene. Once downloaded, you can zoom and rotate without downloading anything extra.
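For what it’s worth, here’s a rough sketch (in Python, with invented names - not Google’s actual code) of the streaming level-of-detail pattern being described: the client always renders the best mesh it has already downloaded for each tile and keeps requesting the next, finer level in the background, so the scene sharpens as data arrives.

```python
# Rough sketch of streaming level-of-detail: render what you have,
# keep asking for finer meshes in the background. Names are invented.

from dataclasses import dataclass, field

@dataclass
class Tile:
    downloaded_levels: list = field(default_factory=list)  # meshes, coarse -> fine
    max_level: int = 4                                      # finest level the server has

    def best_available(self):
        """Return the finest mesh that has already arrived; never block on the network."""
        return self.downloaded_levels[-1] if self.downloaded_levels else None

    def wants_more(self):
        return len(self.downloaded_levels) < self.max_level

def frame(tiles, draw, fetch_next_level):
    """One render pass: draw the best mesh each tile has, then queue finer ones."""
    for tile in tiles:
        mesh = tile.best_available()
        if mesh is not None:
            draw(mesh)                       # stand-in for the real GPU draw call
        if tile.wants_more():
            fetch_next_level(tile)           # async request for the next LOD level

# Tiny usage example with stand-in callbacks: each frame renders what it has
# and "downloads" one more level, so detail increases frame by frame.
tiles = [Tile(downloaded_levels=["coarse mesh"]), Tile()]
for _ in range(3):
    frame(tiles, draw=print,
          fetch_next_level=lambda t: t.downloaded_levels.append("finer mesh"))
```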
OK, that’s…something. Sometimes it’s useful, I can see that. But the straight down “classic” views are so much worse than they used to be.
Sometimes it seems that the world is regressing. We used to have all this cool stuff that we no longer have. Maybe when it gets improved, it’ll be awesome and I’ll wonder why we didn’t always have it. But now, I can’t even recognize which car is parked in front of my house. It’s just a misshapen blob that looks like the aftermath of a transporter accident.
Last year I was walking in my town and came across the Google Maps car. It was stopped at a stop sign in the crosswalk, so I walked around it. I figured sooner or later I might end up on Google Maps, and sure enough, about two months later there I was. I was in several images as I approached the car from the side, walked around it, and went off to the other side. They blurred my face.
What’s funny is I just checked again and I’m not there anymore. That means they updated their maps twice in one year, whereas before that my town had had the same images for something like 8 years.