
The Ethics Behind Self-Driving Cars



The future is here: self-driving cars will soon be among us, driving on our motorways. Companies like Mercedes, BMW and Tesla have already released, or will soon release, features that allow the car to largely drive itself, and Google started testing its self-driving cars on Californian roads last summer.

Part of the technology that comes with self-driving cars is a system that determines how the car reacts in potentially dangerous situations. Imagine a person falls into the road right in front of a self-driving car, and the only two options are for the car to swerve into a traffic barrier, potentially killing the passengers, or to keep going and potentially kill the person in the road – what would it do? This raises serious ethical questions. Human drivers make these judgements instinctively in the moment, but artificial intelligence has to be told in advance how to decide. For this reason, the programmers behind this technology will have to choose whether to define explicit rules for how the car must behave in each potentially dangerous situation, or to rely on general driving rules and hope for the best.

Google has given more details about how its self-driving cars will handle potentially dangerous situations, stating that the initial idea is for the cars to “choose” to hit the smallest object. Hitting the smallest object is, of course, an ethical decision: it’s a choice to protect the passengers by minimizing their crash damage. It could also be seen, though, as shifting risk onto pedestrians or the occupants of smaller cars. However, the leader of Google’s self-driving car project at the time, Chris Urmson, described more sophisticated rules to the LA Times last summer, saying that “Our cars are going to try hardest to avoid hitting unprotected road users: cyclists and pedestrians. Then after that they’re going to try hard to avoid moving things.”
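To make the kind of priority ordering Urmson describes concrete, here is a minimal, purely illustrative sketch in Python. It is not Google’s actual code or algorithm; the object categories, the sizes, and the “smallest object” tie-breaker are assumptions made for the example, and a real system would involve far more than a single ranking function.

```python
# Illustrative sketch only: a toy ranking of unavoidable collision targets,
# loosely following the ordering described in the article (avoid unprotected
# road users hardest, then other moving things, then static obstacles),
# with "hit the smallest object" as a tie-breaker. All values are invented.

from dataclasses import dataclass
from enum import IntEnum


class Priority(IntEnum):
    """Lower value = avoided more strongly as a collision target."""
    UNPROTECTED_ROAD_USER = 0   # pedestrians, cyclists
    MOVING_OBJECT = 1           # other vehicles in motion
    STATIC_OBSTACLE = 2         # barriers, parked cars, debris


@dataclass
class Obstacle:
    label: str
    priority: Priority
    size: float  # rough footprint in square metres (assumed unit)


def choose_collision_target(obstacles: list[Obstacle]) -> Obstacle:
    """Pick the least-bad target when a collision cannot be avoided.

    Prefer static obstacles over moving objects over unprotected road
    users, and among equals prefer the smallest object.
    """
    return min(obstacles, key=lambda o: (-o.priority, o.size))


if __name__ == "__main__":
    scene = [
        Obstacle("pedestrian", Priority.UNPROTECTED_ROAD_USER, 0.5),
        Obstacle("delivery van", Priority.MOVING_OBJECT, 10.0),
        Obstacle("traffic barrier", Priority.STATIC_OBSTACLE, 3.0),
    ]
    print(choose_collision_target(scene).label)  # -> traffic barrier
```

Even a toy like this makes the ethical point plain: someone has to write down the ordering, and whoever writes it is deciding, in advance, whose risk counts for more.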

While the ethics of how self-driving cars should behave in dangerous situations continues to be debated, the United Arab Emirates government has purchased 200 Tesla self-driving cars. Although the cars will initially be used in “Autopilot” mode, which requires a human driver, they will come equipped with the hardware needed for full self-driving capability.

With self-driving cars’ imminent arrival on our roads, how do you think the ethics behind this technology should be handled?