Killer Traffic

Columbus, Ohio (not pictured above, I’ll get to that later) recently received a $50 million grant to become the United States Department of Transportation (USDOT)’s ‘Smart City’. That money is going toward developing a road system for driverless cars. They will have to build up an entire infrastructure – charging stations (the cars are going to be electric), traffic signals, probably a few GPS relays – to make this work. The USDOT is not alone in putting up the money, either: combining the grant with other committed public and private sources, the total comes to about $140 million. With self-driving cars, more people could get on the move, especially in urban centers like Columbus.

Of course, every new development has its share of problems, and this – cool though it may be – is no different. Driverless cars are great, but don’t forget that you will be sharing the road not just with other cars, but with pedestrians as well. A six-part study recently published in Science outlines some of the concerns regarding driverless cars, especially in scenarios where harming pedestrians is unavoidable. This is especially a problem in dense, built-up urban centers. See the picture leading this blog entry? That’s a street crossing near Shibuya Station in Tokyo, Japan. Every time you see pictures or movies of thousands of people crossing a major intersection, surrounded by a glass canyon lit up with colorful adverts, it’s very likely this location. Now, imagine you’ve got a driverless car: something’s gone wrong with the brakes, or it’s somehow received the wrong instructions, and it can’t slow down in time. What’s the car to do? This is your typical ‘no-win situation’ (or, if you’re a fan of Star Trek, your typical Kobayashi Maru). Should the car protect its passengers at the cost of the pedestrians, or the reverse?

Here’s a summary of what the authors found with their participants:

  • People tend to think it’s “more moral” if the car would “sacrifice” the passenger rather than hit multiple pedestrians, for example by steering itself into a wall (I’m using the authors’ term ‘sacrifice’, which doesn’t necessarily mean killing the passenger outright)
  • The more pedestrians’ lives at stake, the more likely people were to want the car to sacrifice the passenger (the fewer pedestrians, the less likely)
  • If they were riding with family members, people would much rather the car protect the passengers
  • They want this tech for other cars, but not their own
  • Sacrificing the passenger may seem moral, but the participants didn’t want this legally enforced

This isn’t so much a problem of tech as it is a problem of morals. Humans do tend to be self-serving, which isn’t a surprise, since humans do what they need to survive. A lot of artificial intelligence runs on logic, and moral questions can’t easily be solved with cold logic (this is as far as I’m willing to take this philosophical conversation, mainly because I’m really not good at it). Think about the movie 2001: A Space Odyssey. The on-board computer, HAL 9000 (what’s the statute of limitations on spoilers?), starts killing off the astronauts one by one so that they won’t find out the true nature of the mission, which Mission Control was withholding anyway. Terrible, yes, but it is logical, and it was somehow programmed into HAL 9000’s personality. Since driverless cars are running on programs, which are really just complex logic when it comes right down to it, what’s the best option for both passengers and pedestrians? And how do you program things like morals and ethics, which don’t always have clear-cut answers, using logic and mathematics, which usually demand clear answers?
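To see why this is hard, consider how crude the logic looks once you actually write it down. This is a toy sketch of my own, not anything from the study or from real autonomous-vehicle software: a bare utilitarian head-count, where the function name and the scoring are made-up assumptions. The paper’s whole point is that people disagree about whether a rule like this should exist at all, let alone be legally enforced.

```python
# Toy illustration only: a crude utilitarian rule for an unavoidable crash.
# Real moral reasoning is not a one-line comparison -- that's the problem.

def choose_action(pedestrians_at_risk: int, passengers: int) -> str:
    """Return 'swerve' (sacrifice the passengers, e.g. into a wall)
    or 'stay' (continue toward the pedestrians)."""
    harm_if_stay = pedestrians_at_risk   # harm falls on the pedestrians
    harm_if_swerve = passengers          # harm falls on the passengers
    # Pick whichever action puts fewer people at risk; ties favor the passengers.
    return "swerve" if harm_if_swerve < harm_if_stay else "stay"

print(choose_action(pedestrians_at_risk=5, passengers=1))  # swerve
print(choose_action(pedestrians_at_risk=1, passengers=1))  # stay
```

Notice how many questions a sketch like this simply ignores: whether the passenger is a child, whether the pedestrians were jaywalking, whether a head-count is even the right metric. Every one of those would need its own number, and the study suggests nobody agrees on what the numbers should be.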

For now, let’s keep our eyes on Columbus, Ohio. A lot of theories on modern society and artificial intelligence could very well play out in front of our eyes, and I’m hoping for the best. The human experiment continues.

I will admit I’m a little more serious in this post, but it is a serious topic, after all. I’ll be funny (OK, ‘smart-alecky’) next time, I promise.

Bonnefon J-F, Shariff A, Rahwan I (2016) The social dilemma of autonomous vehicles. Science 352(6293): 1573–1576. doi:10.1126/science.aaf2654

Image credit: ‘shibuya crossing japan tokyo 2012 night’ Wikimedia Commons (Creative Commons license, user: chensiyuan)



  1. My brother brings up a valid problem with the law: who’s at fault when there’s a crash? The passenger? Or will Google pay the bill when the lawsuit happens?


    1. Honestly, I don’t think the conversation’s arrived at legality and liability yet. If they figure out the moral and ethical calculus behind this, the law would probably be the next step.


    1. Oh, I’ll talk about the tech, because it’s a question of “what can we do”. Legality is clearly the next step – “what *should* we do”. I’ll save that for people with more expertise in law than me – I’m just a fan of infrastructure.

