self driving cars and speed limits

I don’t think the revenue stream from speeding tickets is that large - if we look at your cite, local and state governments gain $6.2 billion per year from traffic tickets, but local and state governments took in:

$497.6 billion in property taxes in calendar year 2014
$348.2 billion in sales taxes
$349.7 billion in individual income taxes
$57 billion in corporate income taxes

(source)

For a grand total of almost $1.3 trillion in 2014 (assuming the traffic ticket number was good for 2014.)
That means citations account for half a percent (0.5%) of local and state incomes from those five sources (not counting other sources of income like fees.)
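Just to show the arithmetic behind that half-percent figure (a quick sketch; the category labels are mine, and the figures are the 2014 numbers quoted above, in billions of dollars):

```python
# Sanity check of the post's arithmetic (all figures in billions of dollars,
# from the 2014 state/local revenue numbers quoted above).
revenues = {
    "property taxes": 497.6,
    "sales taxes": 348.2,
    "individual income taxes": 349.7,
    "corporate income taxes": 57.0,
    "traffic tickets": 6.2,
}

total = sum(revenues.values())                      # all five sources combined
ticket_share = revenues["traffic tickets"] / total  # tickets as a fraction

print(f"Total: ${total:,.1f}B")
print(f"Traffic tickets: {ticket_share:.2%} of the total")
```

Which comes out right around the half-percent claimed.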

Traffic tickets are really insignificant income in the big picture, but some small municipalities may be affected if they don’t have much in the way of tax revenue (usually tiny towns known for their speed traps - many of which exist on a questionable financial basis anyway.)

FTR: I am one of the biggest self driving car fanboys/apologists around here.

Self-driving cars do have major issues with some types of inclement weather, especially snow: it tends to create what visually appear to be physical obstacles that any car could actually plow right through, and it blankets roads, concealing the edges of the road and/or the lane markings the car needs to keep itself on the road. Surveys captured by hundreds of other cars during clear weather may eventually help augment navigation in inclement weather, but for now there are known issues with poor-weather driving.

Yes, there are problems with self-driving cars in the snow… but humans in those same conditions have even bigger problems.

Boomp!

This just in! Cop pulls over Google self-driving car, finds no driver to ticket, Don Melvin, CNN, Nov. 13, 2015.

Story highlights:
[ul][li] Car going too slow – 24 mph in 35 zone; several cars backed up behind it.[/li][li] No driver, so cop talks to the passenger.[/li][li] Cop decides that car wasn’t driving illegally anyway (wtf?) so no ticket. (The ticket would have been for impeding traffic.)[/li][li] Google acknowledges cars capped at 25 mph; questions whether any cop would have stopped any live driver for that.[/li][/ul]My thoughts:
[ul][li] Interesting that Googlemobile recognizes when a cop is trying to pull it over. (Is that how it works? Or did the passenger have to tell the car to stop? Article doesn’t say.)[/li][li] Once self-driving cars become common, we will need some whole new paradigm about how traffic law enforcement will be handled. Note, Lord Feldon pointed this out way back in Post #3.[/li][li] Perhaps Google will program their cars to recognize when several cars are backed up behind it and respond by pulling over to let them pass. Kind of surprising they haven’t done something like this long ago.[/li][/ul]

Talk about 21st Century problems!

:smiley:

There will be a new regulation requiring self-driving cars to be self-ticketing when necessary.

Here’s a thought: Google cars collect and record extensive data – all of the inputs provided from all of the car’s sensors, and all the driving actions taken by the car. After-the-fact, they upload this to their servers for analysis. (They use the data to run simulations, and they can also tweak the data to create all kinds of scenarios and run simulations on those.) (ETA: I learned all this from some videos I found on YouTube.)

With all that data being collected by these cars, we could expect “The State” to demand access to that data too. Cars may be required to upload their driving data to some State driving-enforcement office – possibly even in real-time – and cars’ behavior might be monitored that way.

This. And, as any number of futurists have already pointed out, we’ll have a new generation of high-tech roadways and other infrastructure along with our self-driving cars (now to be called Bulletmobiles), all integrated.

Here is a photo gallery of ten of the world’s most dangerous roads, many of them highly :eek: . It will be interesting to see when self-driving Googlemobiles can handle these.

The combination of the recent article and the article that no doubt inspired the thread is of note.

In this recent case, what was the issue? The car was driving too slowly for traffic (not committing any traffic violation, per the linked article), but it caused a backup and possibly, therefore, less safe conditions.

Under what conditions would a Google car drive over the speed limit, and why?

I hear Google’s marketing logic for capping the speed right now … but by their own previously stated logic, that has the vehicle doing something that may present a danger, and it limits the value of the many trial miles by having the vehicle operate below traffic-flow velocities.

As to the question of the OP … if an owner gets a ticket for the car breaking the law (could happen if it’s the last car in a line of speeders, so the one picked off by the highway cop … you never want to be the first or the last in a line speeding on the highway!), then I would strongly suspect that the manufacturer would cover the consequences rather than have it presented in the media as a product flaw. And as already pointed out … when most cars are autonomous, speed limits actually will be followed, and followed much more consistently than they are now, likely with cars packed more closely together using V2V communication.

I’m all in favor of driverless cars as I hate driving, but if you think there’s going to be some new infrastructure to facilitate them, think again. The U.S can’t be bothered to properly maintain or upgrade the decades- or even centuries-old infrastructure it already has. In fact, the very reason we are moving to driverless cars at all is because we don’t have and won’t build the mass transportation that ought to be moving large numbers of people.

More likely it will be a gradual process as part of ongoing maintenance. As traffic lights get upgraded, include emitters that communicate when the light is going to change, and the speed limit (in development, using fast-blinking LEDs as emitters). As major arterials and highways are repaved, embed piezoelectric-powered lane-marker emitters (just riffing in my own head), at least at intersections and the approaches to them, that signal through slush just fine.

Vehicle-to-Vehicle (V2V) would be of more importance and can be included in cars at all levels: from emitter-only (speed, proximity, and lane-change warnings to other vehicles), to warning signals based on received information, to automatic accident avoidance, to ad hoc car-trains and fully automated driving vehicles.

The problem, of course, is that the challenges to automated vehicles are greatest when the technology is just emerging and most immature. An autonomous vehicle coming out now has to be able to handle most cars around it behaving unpredictably and erratically (the human-driver factor), and without information coming to it from infrastructure or other vehicles. Given that, the system-wide performance of automated vehicles will improve dramatically over time even if the technology remains stagnant, simply as a function of gradually fewer human bad actors on the road and more communicating vehicles. It’s getting over that initial hump, the activation energy so to speak, that is the tough part.

Are the freeways in the U.S. designed to be safe at 120 MPH (considering the degree of banking on curves and all…) ?

So you set the speed limit at 120 on the straightaways and 50 on the curves. Human drivers would get frustrated with the constant speed changes, but robots don’t care.

I’m a bit silly sometimes… I didn’t even think of that! Thanks!

I still wonder what the maximum safe speed of a banked curve on an average U.S. interstate is.
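For a ballpark answer, the standard banked-curve physics gives v_max = sqrt(g·r·(μ + tan θ)/(1 − μ·tan θ)), where θ is the bank angle and μ is the usable side-friction. The radius, superelevation, and friction numbers below are illustrative assumptions on my part, not actual interstate design values:

```python
import math

def max_curve_speed_mph(radius_m, bank_frac, mu):
    """Max speed (mph) on a banked curve before sliding outward.

    Standard circular-motion result:
        v^2 = g * r * (mu + tan(theta)) / (1 - mu * tan(theta)),
    where bank_frac = tan(theta) is the superelevation (rise over run).
    """
    g = 9.80665  # m/s^2
    v_ms = math.sqrt(g * radius_m * (mu + bank_frac) / (1 - mu * bank_frac))
    return v_ms * 2.23694  # convert m/s to mph

# Illustrative inputs (assumptions, not official design values): a fairly
# gentle curve of 900 m radius, 6% superelevation, and a conservative
# side-friction factor of 0.15 (design practice uses low friction numbers
# for passenger comfort, well below the tire's physical grip limit).
print(round(max_curve_speed_mph(radius_m=900, bank_frac=0.06, mu=0.15)))
```

With those assumed inputs you land somewhere in the high 90s mph, which suggests typical interstate curves have margin above the posted limits but not 190-mph margin.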

The only kind of speed limits that make sense even for (experienced) human drivers are the advisory signs on some curves, because they inform you of the safe speed for the part of the curve you can’t see. Speed limits on parts of the road you can see clearly make no sense at all. If you’re competent to drive a car at all, you can easily assess the maximum safe speed. If you can’t be trusted to do that, you certainly shouldn’t be trusted to know when to turn left – and shouldn’t have a driver’s license. There are plenty of situations – boats, airplanes, some roads – where people pilot craft with no general speed limit, just the requirement not to wreck, with advisories where the best speed isn’t obvious.

General speed limits for a machine that can solve for the trajectories of all the moving objects nearby to millisecond and centimeter precision are even sillier. Just program the computer to not hit things, or allow itself to be hit, to keep its acceleration below human comfort levels, and otherwise to go as fast as possible.
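A toy version of that rule: the car's speed is bounded by whatever it can stop from, at a comfortable braking rate, within the distance it has verified to be clear. From constant-deceleration kinematics, v = sqrt(2·a·d). The comfort-deceleration figure here is my assumption, not anyone's published spec:

```python
import math

def max_comfortable_speed_mph(clear_distance_m, comfort_decel_ms2=2.5):
    """Fastest speed from which the car can stop within its clear sensing
    distance using only passenger-comfortable deceleration.

    From v^2 = 2*a*d (constant deceleration a over distance d). The
    2.5 m/s^2 comfort limit is an assumed figure, roughly a firm but
    gentle brake; dry-pavement emergency braking can be 3-4x harder.
    """
    v_ms = math.sqrt(2 * comfort_decel_ms2 * clear_distance_m)
    return v_ms * 2.23694  # m/s to mph

for d in (50, 200, 800):  # meters of verified-clear road ahead
    print(f"{d:4d} m clear -> {max_comfortable_speed_mph(d):5.1f} mph")
```

Note how the "speed limit" falls out of sensing range and comfort rather than a posted number: clear sight lines of a few hundred meters already permit well over any posted limit.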

Naturally to do this successfully, you need the computer to have a reliable model of other driver behaviour, including human drivers if they’re still around. For example, it needs to know that a human driver lacking radar vision won’t realize he shouldn’t pull out of a driveway in a pouring rainstorm because a robot car a quarter mile away is traveling at 190 MPH. The robot needs to know it can’t count on humans coping with speeds that high. Of course, this is necessary anyway, and the main reason I think robot cars will never actually happen, except in very restricted situations. If you had the insight to write a very reliable model of human driving behaviour, good for almost any individual human, you’re wasting your time programming cars – you could quite easily program robots to be psychiatrists, nurses, EMTs, and probably US Senators. You’re halfway to solving strong AI.

The carbon-based passenger units would get carsick.

Hey, it could be worse. You could be trying to teach humans to understand other humans.