|
I'd rather it spent its cycles slowing the car.
You'll never get very far if all you do is follow instructions.
|
|
|
|
|
I believe the assumption is that it is beyond that - the accident is going to happen.
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair.
Those who seek perfection will only find imperfection
nils illegitimus carborundum
me, me, me
me, in pictures
|
|
|
|
|
I think this is a spurious situation, arising from our innate tendency to anthropomorphise the 'robot'.
I don't believe any robot car will ever* be programmed to make this sort of decision in this way. A car will never be able to know
who the passengers of another car are, for privacy reasons. They will be (are?) programmed to do everything possible
to safely avoid a collision. If the anti-collision routines of both cars cannot avoid a collision, the severity of the crash should be vastly
diminished (via braking, evasive action, etc., applied faster than any human could react).
On some very rare occasions (barring programming errors) a serious crash will be unavoidable, and will occur.
A car will never* make any decision about the people riding in it, or in any other vehicle.
* at least until a sentient AI is created.
|
|
|
|
|
Yeah, I think that was pretty much already said.
|
|
|
|
|
Yes and no.
Yes, because a rational and impartial program will be better at judging the odds and finding the 'solution' with the least loss, most of the time, especially when that solution has to be found within a split second! Humans cannot make such a decision as quickly, because when you're forced to react, the subconscious takes over, and it will always try to preserve your own, personal life, no matter how many other lives are at stake! I'm not sure how I could live with the knowledge that my own survival cost the lives of a hundred other people, especially if some of them were friends or relatives!
No, because it is humans who ultimately write the programs that make these decisions. Humans make errors, but it takes software and computers to turn such errors into catastrophes! Besides, what makes us think nobody will go ahead and manipulate that software to their own benefit, or worse, to cause catastrophic mass accidents?
The optimist in me wants to believe that the benefit of the former will outweigh the risk of the latter. But the realist tells me that one day a single incident will make me regret it.
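The "judging the odds" idea above can be sketched as a simple expected-loss comparison. This is a hypothetical Python illustration with made-up numbers, not how any real car is programmed: each option carries an estimated probability of a fatal outcome and a count of lives at risk, and an impartial program just picks the minimum.

```python
# Hypothetical split-second choice: each option carries an estimated
# probability of a fatal outcome and a number of lives at risk.
# (Illustrative numbers only.)
options = {
    "stay_course": {"p_fatal": 0.9, "lives_at_risk": 4},
    "swerve": {"p_fatal": 0.5, "lives_at_risk": 1},
}

def expected_loss(option):
    """Expected number of lives lost for one option."""
    return option["p_fatal"] * option["lives_at_risk"]

# An impartial program simply picks the option with the lowest expected loss,
# regardless of whose lives are involved.
choice = min(options, key=lambda name: expected_loss(options[name]))
print(choice)  # swerve
```

A human under pressure cannot evaluate even this trivial calculation in the fraction of a second available; a program can, which is the "yes" half of the argument.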
|
|
|
|
|
Who would drive a car that can 'decide' to kill you?
What if the driver of the family of four is pretty sharp today and could have dodged your car at the last split second? Too late, your car has already thrown you off a cliff...
A car might be able to predict what is going to happen if everything stays as it is now (that is, other drivers will not speed up, slow down, make a turn, etc.), but it cannot predict what others will do and what the consequences of their actions will be.
It's an OO world.
public class SanderRossel : Lazy<Person>
{
public void DoWork()
{
throw new NotSupportedException();
}
}
|
|
|
|
|
"Damned cars, that was our second kamikaze blowing up the parking lot".
Veni, vidi, vici.
|
|
|
|
|
This is really interesting, and has already been debated (to some extent) with the Law Zero[^] added to Asimov's initial three Laws.
Practically, there is a huge difference in the information required to fulfill Law Zero versus Law One: you can easily evaluate the facts for one person or a carful of people, but for humanity? Maybe one of the people killed because of the AI's decision would have had a big influence on humanity's destiny (because he was a researcher, or a dictator, etc...)
So we see that all four laws are required for the decision to be the fairest possible, but Law Zero cannot be easily implemented. This law would also be the one required to properly answer the question in your post.
~RaGE();
I think words like 'destiny' are a way of trying to find order where none exists. - Christian Graus
Entropy isn't what it used to.
|
|
|
|
|
Indeed though I think everyone is overthinking this. The bots will do everything to prevent an accident and I doubt that they would ever be given the power to decide if the occupants of car a will live and those of car b die. Still, it's fun to discuss the possibilities.
|
|
|
|
|
I think car technology will improve safety long before any AI is able to decide about one's fate, so odds are that the situation of having to make the choice will never arise.
|
|
|
|
|
Since we humans can't cope with the thought of letting a computer, in this case a car, decide whether a living creature survives or not, why should it be allowed to choose whether a few more lives are more important than a few less? It will reach the (international) news anyway, blaming the computer for its actions.
So, let it just gather all the information on the crash, sit back and act like a 3D camera, making sure it is 100% a human's fault that someone died. My answer is no.
|
|
|
|
|
I'm surprised that nobody mentioned Asimov so far (at least AFAIK, nobody has).
I believe that the poll is misleading (particularly the part that says "especially if I paid for it" - that's just crap to drive people to pick the suicide choice as the "morally correct" one).
The two choices set as possible outcomes to the question posed to the robot are:
1. Kill the occupant(s) only.
2. Possibly kill the occupant(s) and occupant(s) of other bot-car(s) as well
If the three laws apply, then both of these choices would be rejected immediately as violating the first law (actively killing the occupants, or by doing nothing - i.e. inaction - possibly kill others). The bot-car would probably try to steer away from ALL oncoming traffic, and ALL oncoming traffic would probably try to steer away from the bot-car. In the end all bot-cars would actively try to save their occupants and the occupants of the other bot-cars first, and themselves (i.e. the bots) second.
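The rejection logic described above can be sketched as a simple First Law filter. This is a hypothetical Python illustration of the argument, not a real control system: any candidate action that harms a human, actively or through inaction, is discarded, leaving only evasive action.

```python
# Hypothetical candidate actions, flagged by whether they would harm a human
# (actively, or through inaction). Both poll choices carry that flag.
candidates = [
    {"name": "kill_occupants_only", "harms_humans": True},
    {"name": "risk_other_cars_too", "harms_humans": True},
    {"name": "steer_away_from_all_traffic", "harms_humans": False},
]

# A First Law filter rejects every action that violates the law, so the
# bot-car is left with evasive action only.
allowed = [c["name"] for c in candidates if not c["harms_humans"]]
print(allowed)  # ['steer_away_from_all_traffic']
```

Under this framing the poll's dilemma never reaches the decision stage: both of its options are filtered out before any trade-off between lives is considered.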
Φευ! Εδόμεθα υπό ρηννοσχήμων λύκων!
(Alas! We're devoured by lamb-guised wolves!)
|
|
|
|
|
Interesting problem. I wonder what the person in the car that's about to slam into the SUV loaded with the family with 4 kids would do if given the choice?
Along with Antimatter and Dark Matter they've discovered the existence of Doesn't Matter which appears to have no effect on the universe whatsoever!
Rich Tennant 5th Wave
|
|
|
|
|
Ok car. Drive over the cliff.
Are you sure?
Ah, too late...
If I had purchased a 'smart' car that was stupid enough to get into such a situation, I would ask for my money back. That's assuming I survived the crash.
I may not last forever but the mess I leave behind certainly will.
|
|
|
|
|
I better stop kicking the tires.
|
|
|
|
|
"Save the girl!"
I doubt we'll ever be able to program all factors that should be considered into that equation of who should die and who is worth preserving. Worse, as soon as that gets programmed into cars, someone somewhere will abuse it by deciding that their life is more valuable than N others and force that to get written into the programming. I don't so much mean individuals, as classes of people -- should we preserve doctors over McDonalds clerks, or political leaders over soldiers?
No, cars (or robots in general) should not make these kinds of value-of-human-life decisions. They're better left to us humans, who will make them with incomplete information and totally subjectively, just like we've always done.
We can program with only 1's, but if all you've got are zeros, you've got nothing.
|
|
|
|
|
patbob wrote: No, cars (or robots in general) should not make these kinds of value-of-human-life decisions.
Why? What difference does it make?
In any case, I believe it will happen as more and more cars become 'intelligent', and especially when they no longer require human intervention. You get in and tell it where you want to go, sit back and read a book or watch a movie.
The reality is that, except under the most randomly freakish conditions, there are unlikely to be any more vehicular accidents once the bots take charge.
|
|
|
|
|
Accidents are pretty chaotic things, involving not only physics, but imperfectly maintained machines that behave unpredictably when under stress, and humans who make split-second decisions and behave unpredictably when under stress. Given this, no machine can reliably determine the outcome, so how can it know whether some action it can make would truly reduce the human injury quotient of an accident?
|
|
|
|
|
It is supposed to be hypothetical, and is more about whether they should be allowed to do that rather than whether they actually could.
|
|
|
|
|
As some of you know, or should know, this is the First Law of Robotics as stated by Isaac Asimov in his I, Robot series of books. This has absolutely nothing to do with the Will Smith <i>I, Robot</i> movie of a few years ago. This is one of three laws. The second and third I am a little hazy about, but one says that a robot may protect itself so long as doing so does not interfere with the First Law. In other words, the robots are programmed to be subservient to human life, even if it means destroying themselves. I like these laws. I fear that the drones that are now used to kill purported terrorists will eventually be changed from being under human control to being autonomous, and that can lead to all sorts of disasters. By the way, I just reread Asimov's book, "Caves of Steel", and found it to be a fresh and realistic portrayal of the future, even by today's standards, although it did not include PCs and cell phones. It assumed interstellar travel without any mention of the means by which this is done, à la Star Trek warp drive. I recommend the book, and the others of his Robot series, to all.
|
|
|
|
|
Whilst I am a lifelong fan of Asimov this has nothing to do with the three laws.
The bot in the car only needs to decide how to mitigate the upcoming crash to make sure that as few people as possible are injured or killed. It does not attempt to make judgments about the people; it only knows that it should minimize loss of life. It can communicate with the bot in the other car so as to determine the best course of action to take in the last second or less prior to the crash.
Given that it can calculate the extent of the damage and loss of life it has to make a decision, in concert with the other bot, as to the best course of action. That is all.
It is not relevant that the people in one car may be children or the others may be pensioners.
IMO, this is really no different from allowing the accident to complete and hoping that chance will preserve life and limb: this possibly gives the occupants a better chance. At least some of them.
Note this is a very unlikely situation. Given that cars are controlled by bots, even under freakish circumstances, they will probably have sufficient time to ensure that the damage is minimal and that all of the occupants will survive.
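The joint decision described above can be sketched in a few lines. This is a hypothetical Python illustration with made-up numbers: each bot has a short list of maneuvers it can still execute, the pair share their predictions, and together they pick the combination with the lowest total expected casualties.

```python
from itertools import product

# Hypothetical maneuvers each car's bot can still execute before impact,
# and the expected casualties for every combination of choices.
# (Illustrative numbers only.)
maneuvers_a = ["brake", "swerve_left"]
maneuvers_b = ["brake", "swerve_right"]
expected_casualties = {
    ("brake", "brake"): 3,
    ("brake", "swerve_right"): 1,
    ("swerve_left", "brake"): 2,
    ("swerve_left", "swerve_right"): 0,
}

# The two bots, in concert, agree on the joint action that minimizes
# total loss of life - no judgments about who the people are.
best = min(product(maneuvers_a, maneuvers_b), key=expected_casualties.get)
print(best)  # ('swerve_left', 'swerve_right')
```

Note that nothing in this sketch looks at who the occupants are; the only input is the predicted casualty count, which is exactly the point being made above.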
|
|
|
|
|
mark merrens wrote: It is not relevant that the people in one car may be children or the others may be pensioners.
I can see, in fact, that this could be relevant as this technology matures (assuming the nay-sayers don't have their way and ban it on the grounds that a machine doesn't have an immortal soul).
I imagine that the risk of injury or death will be different depending on the size/weight/age of the occupants, and could be taken into account.
Also, I can imagine a world where the occupants religion could be taken into account.
Religious sinners would be saved first, as their death will result in an eternity of pain.
Atheists next - as their death is terminal.
Devout (if that's the right word) believers last - as they're going to a better place anyway.
|
|
|
|
|
I think we should give cars the ability to leap-frog before worrying about smarts.
|
|
|
|
|
Thanks for posting that link, it's a fascinating topic.
If it is not programmed this way and robotic cars become common, it is guaranteed that someday a car will decide "Oh dear, I will collide with that car in front of me causing damage and perhaps injuring the occupant. Look, there is a nice soft crowd of people on the sidewalk, they will cushion the blow nicely".
And the next fun question is, when that happens who is liable? The driver who was watching TV on his phone while his car drove him? The car manufacturer? The company that provided the software? The programmer who wrote that particular subroutine after reading a poll on Wired that said don't risk the driver?
|
|
|
|
|
I hope self-driving cars are smarter than that and avoid the collision altogether. Anyway, the only fair way to decide this is with a coin: if it's heads the owner survives, if it's tails, he/she doesn't...
Seriously, if not even a human is able to make such a decision, I don't see why a robot should.
After ruminating on this a bit, I came to realize that the issue as stated is pretty binary, but what's really interesting is what happens if we add more variables than just life/death - say, disability or quality of life. For example: the robot can tell that maneuver X will save the 4 little girls standing on the street and kill the driver, but doing so will leave the girls disabled (one will have her leg broken, another will be thrown against a wall with a protrusion at the level of the lower spine, etc.), while if it kills the 4 little girls, the driver will survive largely unscathed. Should the robot be able to make such a decision? What would be the correct one?
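Moving beyond binary life/death means scoring each predicted injury, not just counting deaths. A hypothetical Python sketch, with an entirely made-up severity scale (1.0 = death, 0.0 = unharmed), shows how that changes the comparison:

```python
# Hypothetical severity weights for predicted injuries.
# (Illustrative scale only: 1.0 = death, 0.0 = unharmed.)
SEVERITY = {"death": 1.0, "spinal_injury": 0.8, "broken_leg": 0.3, "unharmed": 0.0}

# Predicted outcomes for 5 people (driver + 4 girls) under each choice,
# loosely following the example above.
outcomes = {
    "maneuver_x": ["death", "broken_leg", "spinal_injury", "broken_leg", "broken_leg"],
    "hit_the_girls": ["unharmed", "death", "death", "death", "death"],
}

def total_harm(injuries):
    """Sum of severity weights over everyone involved."""
    return sum(SEVERITY[i] for i in injuries)

best = min(outcomes, key=lambda name: total_harm(outcomes[name]))
print(best)  # maneuver_x
```

Of course, the hard part is not the arithmetic but the weights: who gets to decide that a spinal injury is "worth" 0.8 of a death? That is exactly the question the post is raising.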
|
|
|
|
|