Autonomous vehicles are seemingly all the rage in today's tech headlines. Tech companies like Tesla and Google just won't give up, will they?
For what it's worth, the likelihood of so-called "self-driving" cars taking off is slim. Sure, tech moguls say they're the next big thing, just like 5G, the Internet of Things, "smart meters", and the multitude of other tech disasters sprouting up across the country, mostly financed with misappropriated public funds. But why should you believe them? Given that autonomous vehicles would most likely necessitate V2V, or "vehicle-to-vehicle", communications using high-frequency millimeter waves, it's safe to say that if too many of these ever get out on the road, it will be anything but safe.
The only real thing autonomous vehicles have going for them is the "safety" umbrella, which is really more an extension of our stupidity than a valid point in its own right. If people paid attention to the road, traffic fatalities would drop dramatically. If alcohol and mobile phones stayed out of vehicles, the highways would be much safer places!
Autonomous vehicles raise more than just health concerns, though. There are serious ethical implications that people must confront if they are even going to seriously consider unleashing these vehicles of destruction onto the roads.
MIT's Media Lab recently explored some of the moral dilemmas posed by artificial intelligence, which would play a large role in the realm of autonomous vehicles. After all, an autonomous vehicle must be able to make the call when it comes to the safety of its occupants and its surroundings. Many ethical questions are ambiguous, and so, at least seemingly, is one of the questions that MIT researchers explored: who should die in a collision in which an autonomous vehicle is involved? We, as humans, have a moral compass that guides us in these types of dilemmas. Artificial intelligence, no matter how "intelligent", comes down to 1s and 0s at the end of the day, and has no such moral compass. MIT researchers explored whether people felt an autonomous vehicle should hit a young or an elderly person in order to save its occupants.
According to ZDNet, "we agreed across the world that sparing the lives of humans over animals should take priority; many people should be saved rather than few, and the young should be preserved over the elderly." Anyone else see a problem here?
Patricia Burke sums it up perfectly in her article exploring this issue:
If the engineering behind self-driving cars can result in the possibility of a careening car's intelligence deciding whether to hit the elderly lady in the crosswalk or the baby in the carriage on the sidewalk, we need to go back to the drawing board. — The "Artificial" of Artificial Intelligence and MIT's "Moral Machine"
I can see why most humans would agree that sparing the lives of humans over those of animals is paramount; it's a moral judgment most of us share. But what about blanket statements like "the young should be preserved over the elderly"?
Answers to these questions tended to be heavily culturally influenced. If a culture values youth more than age, why not go with that? The problem is that when decisions like these are programmed, no differentiation is made. Should an autonomous vehicle strike an elderly person even if it's the president of the United States? Even if it's Paul McCartney? Even if it's your grandma? Will autonomous vehicle manufacturers program certain "exceptions" into the "no-kill" list, in effect saying "everyone else is fair game"? These are serious ethical issues with which we must grapple if there is to be any conversation about the future of autonomous vehicles. Consider the toy sketch below.
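To make the point concrete, here's a toy sketch in Python (entirely hypothetical; no manufacturer has published anything like it, and every name in it is invented for illustration) of what a blanket "preserve the young" rule looks like once it's reduced to 1s and 0s. Notice that the rule sees an age and nothing else.

```python
# A purely hypothetical illustration, not anyone's actual code: a blanket
# "preserve the young" rule, hard-coded. Note what's missing: there is no
# field for WHO the person is, only a number.

from dataclasses import dataclass

@dataclass
class Pedestrian:
    age: int  # the only attribute the rule ever consults

def choose_victim(a: Pedestrian, b: Pedestrian) -> Pedestrian:
    """Return the pedestrian the vehicle 'sacrifices' under a blanket rule
    that always preserves the younger person. No exceptions, no context."""
    return a if a.age >= b.age else b

# A president, a Beatle, or your grandma all reduce to the same integer.
print(choose_victim(Pedestrian(age=70), Pedestrian(age=8)))  # Pedestrian(age=70)
```

The point of the sketch is that any "exception" would have to be explicitly coded in, which is precisely the problem: whoever writes the exception list is deciding, in advance, who is fair game.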
Here's another question: should an autonomous vehicle, if forced to choose, decide to kill nearby pedestrians or kill its own occupants?
American culture is highly individualistic. We are more self-interested and self-motivated than people in many other cultures, particularly in Europe and the East. So it would make sense for an autonomous vehicle to act to save its owner; it would be only natural, right?
Again, here we have a serious moral dilemma. Who are autonomous vehicle users to say that their lives are so important that they should be automatically spared, and that whoever is nearby must die because of their decision to use a self-driving car?
It's selfish is what it is.
On a moral level, there's no right answer to this question. But from a perspective of justice and "what's right", there is a clear one.
There are people who want self-driving cars and there are people who don't. Generalizations are something we like to avoid, but as a general trend, younger people tend to be more comfortable with autonomous vehicles — after all, they already let technology run (and perhaps ruin) their lives, don't they?
In other words, you're more likely to find a college student roaming town in an autonomous vehicle than a senior citizen. Makes sense, right? Young people today avoid responsibility like the plague, embracing all sorts of meaningless, purposeless technology and then getting addicted to it. Autonomous vehicles are, in all practicality, just another such fad. (Okay, enough bashing young people now.)
From a justice perspective, it makes little sense for autonomous vehicles to be programmed to target bystanders or pedestrians. After all, they're completely innocent, disentangled from the whole situation. Why should they be punished? Wouldn't it make more sense for a self-driving car to, if it has no alternative, kill its occupants instead?
It sounds extreme, outlandish, even. Of course, naturally. But put emotion aside and truly think about this analytically: if there are people who are willing to entrust their lives to an autonomous vehicle tasked with making moral decisions it cannot actually make, then they should be willing to pay the price if they have, indeed, misplaced their trust.
Why should innocent bystanders or pedestrians, who never advocated for or embraced this technology and who may have been warning others about it all along, be collateral damage?
To be clear, we're not advocating that anyone should die; that would be inhumane. But the stark reality of the matter is that people die, and people die in traffic accidents. And if a choice must be made, it's only fair that the people who believed in the technology, backed it, asked for it, and bought it should pay the consequences when it becomes necessary. There, I said it — if you are willing to trust an autonomous vehicle enough to use one, you should be willing to be the first victim when a potentially fatal decision must be made.
Logically, we believe this is a perfectly fair guideline. So take note, autonomous vehicle manufacturers: if you're so confident in your products, then program them to kill their owners, not innocent bystanders! That's right, kill your customers! (Of course, that'll never happen, because the people who make autonomous vehicles are as selfish as the people who use them!)
And while we're not saying that young people are worth less than old people, young people are more likely to support autonomous vehicles, so if one malfunctions, why not target a young person? Old people aren't asking for this technology; why should they be punished when it has faults?
Don't like this line of reasoning? Neither do we — it's just another reason why autonomous vehicles will likely never (and should not) become reality.
Driving is, relative to other things we do on a daily basis, an incredibly dangerous activity. A person would think nothing of jumping the curb to avoid hitting a child who darts into the road unexpectedly; a self-driving car would have to think about it. If you think that we can safely, humanely, and ethically entrust this demanding responsibility to an embedded computer (one that, inevitably, will be wirelessly connected and prone to hacking), then think again. Computers are amazing and powerful tools, but there are some things in life you just gotta do yourself.