In Australia there are both financial penalties and demerit point penalties for traffic offences. If you are caught by a camera, the penalty notice is sent to the vehicle owner, who can then have the offence transferred to the actual driver if they were not the one driving. For cars owned by companies, particularly hire cars, this happens for thousands of offences, sometimes with the same offence being transferred more than once. The same process could work for the financial component: nominate the real driver and your account isn't debited, or if it already has been, it can be credited back.
This kind of scheme would make little difference to the average driver; it would be just like keeping a credit in your tollway account before using the tollway. For the recidivist offender who inevitably loses their license and runs out of credit, it again makes little difference: they will just continue driving around without a valid license. There are already huge numbers of unlicensed and uninsured drivers on the roads.
We currently live in a society where it’s legal to drive pre-WW2 cars with the equipment that was originally installed, even though such cars would not be legal if manufactured today.
Such attitudes and laws will have to change drastically before this point has the slightest chance of coming true.
Probably, but there will be so many cameras in the future that a tracking device won't be necessary.
There are already a number of ways to get fined without knowing who the driver was at the time: red light cameras, speed cameras, and toll collection infractions, for example. Parking tickets go way back, of course. In all of these cases, the ticket goes to the registered owner, who is responsible for the 'administrative violation.'
I do, and it would call on the manufacturer/programmer to justify the car's behavior, which could perhaps cause the law to be challenged. Further, I feel there will be a fund set aside by the manufacturers to pay for any AI accidents; it would not be the person's fault, and we would move away from the model of finding someone to blame. That would be justified once AI driving is proven safer than human driving, because even if it is flawed, it would still be the safest option we have.
This is a massive ethical challenge when it comes to AI driving. It’s a kind of trolley problem.
Imagine you’re driving down a busy road and somebody runs in front of your car. Braking is impossible. You have three options: hit and probably kill the pedestrian; swerve left into oncoming traffic, putting yourself and other drivers in danger; or swerve right onto the sidewalk, putting large numbers of other pedestrians in danger. Do you kill the jaywalking idiot because he’s breaking the law? Do you swerve into traffic because the vehicle’s safety features will protect drivers more than pedestrians? Do you honk the horn wildly and hope people on the sidewalk are paying better attention? What if the jaywalker is pushing a baby carriage? What if oncoming traffic is a Proud Boys parade? What if the sidewalk is actually a sidewalk cafe full of nuns?
A human might act purely on instinct, with no time to really consider the various options and their potential consequences. A computer will have plenty of time to weigh the details, and it will act according to its programming. Somebody, somewhere, will have to make conscious decisions about when to prioritize certain lives over others.
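To make that concrete, here is a purely hypothetical sketch in Python of what "acting according to its programming" could amount to: the car scores each predicted outcome with weights its designers chose. None of these field names or weight values come from any real manufacturer's system; the point is only that someone has to pick the numbers.

```python
from dataclasses import dataclass

# Hypothetical predicted outcome of one maneuver. The fields and the
# weights below are illustrative assumptions, not any vendor's policy.
@dataclass
class Outcome:
    action: str
    pedestrian_harm: float   # 0.0 (none) .. 1.0 (fatal)
    occupant_harm: float
    third_party_harm: float

# Choosing these weights is the ethical decision "somebody, somewhere"
# has to make: whose harm counts for how much.
WEIGHTS = {"pedestrian": 1.0, "occupant": 1.0, "third_party": 1.0}

def total_cost(o: Outcome) -> float:
    return (WEIGHTS["pedestrian"] * o.pedestrian_harm
            + WEIGHTS["occupant"] * o.occupant_harm
            + WEIGHTS["third_party"] * o.third_party_harm)

def choose_action(options: list[Outcome]) -> Outcome:
    # The car simply picks whichever maneuver minimizes the weighted cost.
    return min(options, key=total_cost)

if __name__ == "__main__":
    options = [
        Outcome("brake straight", 0.9, 0.1, 0.0),
        Outcome("swerve left",    0.1, 0.6, 0.5),
        Outcome("swerve right",   0.2, 0.1, 0.7),
    ]
    print(choose_action(options).action)
```

With equal weights the sketch swerves left; nudge the occupant weight up and it brakes straight instead. The code is trivial; the weights are the trolley problem.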