If you go into Street View anywhere, your cursor becomes a white disc while it’s on the ground and a white rectangle wherever it hits a building. It can even do this across plazas and such, and won’t turn into a rectangle until it reaches the buildings all the way on the other side. Not only that, it knows the planes that the buildings’ front facades lie on, and it follows their angles around corners and such.
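If you’re wondering how that could work under the hood: presumably each panorama ships with a small set of fitted planes, and the cursor just ray-casts against them. Here’s a minimal sketch in TypeScript; every type and function name is my own invention, not anything from the real Street View API.

[code]
// Hypothetical sketch: cursor shape from a ray cast against per-panorama planes.
type Vec3 = { x: number; y: number; z: number };

interface Plane {
  normal: Vec3;    // unit normal of the fitted surface
  d: number;       // offset: dot(normal, p) = d for points p on the plane
  isGround: boolean;
}

function dot(a: Vec3, b: Vec3): number {
  return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Cast a ray from the camera through the cursor; return the nearest plane it hits.
function pickPlane(origin: Vec3, dir: Vec3, planes: Plane[]): Plane | null {
  let best: { t: number; plane: Plane } | null = null;
  for (const plane of planes) {
    const denom = dot(plane.normal, dir);
    if (Math.abs(denom) < 1e-6) continue;   // ray parallel to the plane
    const t = (plane.d - dot(plane.normal, origin)) / denom;
    if (t <= 0) continue;                   // intersection is behind the camera
    if (!best || t < best.t) best = { t, plane };
  }
  return best ? best.plane : null;
}

// Ground plane -> white disc; facade plane -> rectangle tilted to match the
// plane's normal, which is why it follows facades around corners.
[/code]

That would also explain the plaza behavior: the ray just keeps going until the first facade plane it can hit.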
I don’t know the exact technology involved, but I recall reading that Google Street View cars are equipped with scanners of some kind (maybe IR-based?) that can pick out roads from buildings. That would also explain its relatively short radius on the maps, since the sensors only work up to a certain distance.
Take a look at this scene. It seems to me that the following things wouldn’t happen if it were map-based:
[ul]
[li]It somewhat follows the roof of the brown barn building. Even if they had parcel maps with buildings on them, would those really show roof slope?[/li]
[li]The left half of the barn gets cut off altogether, perhaps because a sensor was blocked by the cars in front of it.[/li]
[li]The side of the blue building isn’t recognized, perhaps because the sensor is blocked by that bush?[/li]
[li]If you compare the “Kokte Ranch” sign in front of the blue building with the part of the building left of its door, it’s clear that there are at least two planes involved. And directly behind the sign, Street View confuses the plane of the sign with the plane of the rest of the house.[/li]
[/ul]
And also take a look at this other scene. Move your cursor around the building and driveway and you’ll see that the planes follow neither; instead they seem to be based, rather inaccurately, on the hedges in front.
This is of course all just speculation, not a foregone conclusion, but if it IS a sensor like Red Barchetta suggested, it’d be neat to see what kind of technology they use. Something like Kinect?
Based on your examples, they may have tied the data from plat maps into their database. Note that outbuildings aren’t highlighted, so it can’t be all image-based.
Now I don’t know. After playing with it, I noticed it not only changed the plane of the square but also accounted for bump-outs in buildings; the angle changed at less than right angles for the bump-outs. That makes it look like an image-recognition setup.
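If it really is working from raw laser returns rather than plat maps, something like iterative RANSAC plane fitting would account for the bump-outs: each facade segment collects its own inlier points and gets its own plane, at whatever angle the points dictate. A rough sketch of that idea (entirely my guess at the approach; none of this is Google’s actual code):

[code]
// Peel planes out of a laser point cloud, one facade (or bump-out) at a time.
type P = [number, number, number];

function sub(a: P, b: P): P { return [a[0] - b[0], a[1] - b[1], a[2] - b[2]]; }
function dot(a: P, b: P): number { return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]; }
function cross(a: P, b: P): P {
  return [a[1] * b[2] - a[2] * b[1], a[2] * b[0] - a[0] * b[2], a[0] * b[1] - a[1] * b[0]];
}
function normalize(a: P): P {
  const len = Math.hypot(a[0], a[1], a[2]);
  return [a[0] / len, a[1] / len, a[2] / len];
}

// RANSAC: repeatedly fit a plane to 3 random points and keep the fit that
// explains the most points within `tol` meters.
function ransacPlane(pts: P[], iters = 200, tol = 0.05) {
  let best = { n: [0, 0, 1] as P, d: 0, inliers: [] as number[] };
  for (let i = 0; i < iters; i++) {
    const pick = () => pts[Math.floor(Math.random() * pts.length)];
    const [a, b, c] = [pick(), pick(), pick()];
    const n = normalize(cross(sub(b, a), sub(c, a)));
    if (!isFinite(n[0])) continue;           // degenerate (collinear) sample
    const d = dot(n, a);
    const inliers: number[] = [];
    pts.forEach((p, j) => { if (Math.abs(dot(n, p) - d) < tol) inliers.push(j); });
    if (inliers.length > best.inliers.length) best = { n, d, inliers };
  }
  return best;
}

// Extract planes until too few points remain. A bump-out's points survive the
// main facade's extraction and come back as their own, non-right-angle plane.
function extractPlanes(pts: P[], minInliers = 100) {
  const planes: { n: P; d: number }[] = [];
  let remaining = pts.slice();
  while (remaining.length >= minInliers) {
    const fit = ransacPlane(remaining);
    if (fit.inliers.length < minInliers) break;
    planes.push({ n: fit.n, d: fit.d });
    const drop = new Set(fit.inliers);
    remaining = remaining.filter((_, j) => !drop.has(j));
  }
  return planes;
}
[/code]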
That makes sense. Now I wonder what all the different square sizes and shades mean. Scanning down various streets, it looks like maybe the lasers were confused by trees, because the shading and size of the squares change.
They didn’t always have this feature, and now that they do, it’s actually easier to click to go to a specific place in Street View. Before, when everything was a 2D panorama, you could only vaguely double-click to say “go in that direction an unspecified distance,” and it would scoot you toward it, sometimes too close, sometimes too far. Now, because they can display distance and perspective and all that, you can actually specify “take me to this building here” or “go to this part of the road” right from Street View, instead of having to zoom out to map view and re-place Pegman.
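That precision falls out naturally once you have depth: the click ray now terminates at an actual 3D point, so “take me there” can become “move to the panorama nearest that point.” A sketch of the idea (again, invented names, not the real API):

[code]
type V = [number, number, number];

// With a hit distance t from the ray cast, the click resolves to a real point.
function worldPoint(origin: V, dir: V, t: number): V {
  return [origin[0] + dir[0] * t, origin[1] + dir[1] * t, origin[2] + dir[2] * t];
}

// "Take me to this building": jump to whichever panorama sits closest to the
// clicked point, instead of stepping an unspecified distance forward.
function nearestPano(target: V, panos: { id: string; pos: V }[]) {
  let best = panos[0];
  let bestDist = Infinity;
  for (const p of panos) {
    const d = Math.hypot(
      p.pos[0] - target[0], p.pos[1] - target[1], p.pos[2] - target[2]);
    if (d < bestDist) { bestDist = d; best = p; }
  }
  return best;
}
[/code]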
Street View isn’t necessarily the end game for Google. Knowing which visual areas map to roads and which to buildings might be helpful for self-driving cars or many other future navigational products Google’s working on.
I get the sense that Google’s general philosophy is to collect as much data from as many sources as quickly as possible, and figure out what the hell to actually *do* with it later.