|
"Save the girl!"
I doubt we'll ever be able to program every factor that should be considered into that equation of who should die and who is worth preserving. Worse, as soon as that gets programmed into cars, someone somewhere will abuse it by deciding that their life is more valuable than N others and force that into the programming. I don't so much mean individuals as classes of people -- should we preserve doctors over McDonald's clerks, or political leaders over soldiers?
No, cars (or robots in general) should not make these kinds of value-of-human-life decisions. They're better left to us humans, who will make them with incomplete information and totally subjectively, just like we've always done.
We can program with only 1's, but if all you've got are zeros, you've got nothing.
|
|
|
|
|
patbob wrote: No, cars (or robots in general) should not make these kinds of value-of-human-life decisions.
Why? What difference does it make?
In any case, I believe it will happen as more and more cars become 'intelligent', and especially when they no longer require human intervention. You get in and tell the car where you want to go, then sit back and read a book or watch a movie.
The reality is that, except under the most randomly freakish conditions, there are unlikely to be any more vehicular accidents once the bots take charge.
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair.
Those who seek perfection will only find imperfection
nils illegitimus carborundum
me, me, me
me, in pictures
|
|
|
|
|
Accidents are pretty chaotic things, involving not only physics, but imperfectly maintained machines that behave unpredictably when under stress, and humans who make split-second decisions and behave unpredictably when under stress. Given this, no machine can reliably determine the outcome, so how can it know whether some action it can make would truly reduce the human injury quotient of an accident?
We can program with only 1's, but if all you've got are zeros, you've got nothing.
|
|
|
|
|
It is supposed to be hypothetical, and it is more about whether they should be allowed to do that than whether they actually could.
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair.
Those who seek perfection will only find imperfection
nils illegitimus carborundum
me, me, me
me, in pictures
|
|
|
|
|
As some of you know, or should know, this is the First Law of Robotics as stated by Isaac Asimov in his collection <i>I, Robot</i>. It has absolutely nothing to do with the Will Smith <i>I, Robot</i> movie of a few years ago. It is one of three laws. I am a little hazy about the second and third, but one says that a robot may protect itself so long as that does not interfere with the First Law. In other words, the robots are programmed to be subservient to human life, even if it means destroying themselves. I like these laws. I fear that the drones now used to kill purported terrorists will eventually be changed from being under human control to being autonomous, and that can lead to all sorts of disasters. By the way, I just reread Asimov's "Caves of Steel" and found it to be a fresh and realistic portrayal of the future, even by today's standards, although it did not include PCs and cell phones. It assumed interstellar travel without any mention of the means by which this is done, a la Star Trek's warp drive. I recommend the book, and the others in his Robot series, to all.
|
|
|
|
|
Whilst I am a lifelong fan of Asimov, this has nothing to do with the three laws.
The bot in the car only needs to decide how to mitigate the upcoming crash so that as few people as possible are injured or killed. It does not attempt to make judgments about the people; it only knows that it should minimize loss of life. It can communicate with the bot in the other car to determine the best course of action in the last second or less before the crash.
Given that it can calculate the extent of the damage and loss of life, it has to make a decision, in concert with the other bot, as to the best course of action. That is all.
It is not relevant that the people in one car may be children or the others may be pensioners.
IMO, this is really no different to letting the accident run its course and hoping that chance will preserve life and limb; this approach possibly gives the occupants a better chance. At least some of them.
Note this is a very unlikely situation. Given that cars are controlled by bots, even under freakish circumstances, they will probably have sufficient time to ensure that the damage is minimal and that all of the occupants will survive.
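Sketched very roughly in code, the rule described above might look like the following. Everything here is invented for illustration: the maneuver names and casualty estimates are hypothetical, and a real system would get them from crash-prediction models and from negotiation with the other car's bot.

```python
# Hypothetical sketch of the "minimize loss of life" rule.
# Maneuver names and casualty estimates are made up for illustration.

def choose_maneuver(options):
    """Pick the maneuver with the lowest estimated casualties.

    `options` maps a maneuver name to the estimated total casualties
    across *all* vehicles involved (the combined figure the two bots
    would agree on). Ties are broken by maneuver name, so the two bots
    deterministically reach the same answer.
    """
    return min(options, key=lambda m: (options[m], m))

# Example: three candidate actions agreed between the two cars.
options = {
    "brake_straight": 2,  # estimated casualties if both brake in lane
    "swerve_left": 1,     # estimated casualties if this car swerves
    "swerve_right": 3,
}
best = choose_maneuver(options)
```

The point being that the bot never looks at who the people are, only at the expected headcount of casualties per option.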
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair.
Those who seek perfection will only find imperfection
nils illegitimus carborundum
me, me, me
me, in pictures
|
|
|
|
|
mark merrens wrote: It is not relevant that the people in one car may be children or the others may be pensioners.
I can see, in fact, that this could be relevant as this technology matures (assuming the nay-sayers don't have their way and ban it on the grounds that a machine doesn't have an immortal soul).
I imagine that the risk of injury or death will differ depending on the size/weight/age of the occupants, and could be taken into account.
Also, I can imagine a world where the occupants religion could be taken into account.
Religious sinners would be saved first, as their death will result in an eternity of pain.
Atheists next, as their death is terminal.
Devout (if that's the right word) believers last, as they're going to a better place anyway.
|
|
|
|
|
I think we should give cars the ability to leap-frog before worrying about smarts.
|
|
|
|
|
Thanks for posting that link, it's a fascinating topic.
If it is not programmed this way and robotic cars become common, it is guaranteed that someday a car will decide "Oh dear, I will collide with that car in front of me causing damage and perhaps injuring the occupant. Look, there is a nice soft crowd of people on the sidewalk, they will cushion the blow nicely".
And the next fun question is, when that happens who is liable? The driver who was watching TV on his phone while his car drove him? The car manufacturer? The company that provided the software? The programmer who wrote that particular subroutine after reading a poll on Wired that said don't risk the driver?
|
|
|
|
|
I hope self-driving cars are smarter than that and avoid the collision altogether. Anyway, the only fair way to decide this is with a coin: if it's heads the owner survives, if it's tails, he/she doesn't...
Seriously, if not even a human is able to make such a decision, I don't see why a robot should.
After ruminating on this a bit, I came to realize that the issue as stated is pretty binary, but what's really interesting is what happens if we add more variables than just life/death, such as disability or quality of life. For example, the robot might be able to tell that maneuver X will save the four little girls standing in the street and kill the driver, but will leave the girls disabled (one will have her leg broken, another will be thrown against a wall with a protrusion at the level of her lower spine, etc.), while killing the four little girls will leave the driver largely unscathed. Should the robot be able to make such a decision? What would be the correct one?
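To make the non-binary version concrete: instead of counting deaths, each person's predicted outcome could get a severity weight, and the maneuver with the lowest total expected harm wins. The weights and scenario below are invented purely for illustration; picking them is exactly the value judgment being debated here.

```python
# Hypothetical sketch of weighting outcomes beyond live/die.
# The severity weights are made up for illustration only.

HARM_WEIGHTS = {
    "unharmed": 0.0,
    "minor_injury": 0.1,
    "disabling_injury": 0.6,
    "death": 1.0,
}

def expected_harm(outcomes):
    """Sum severity weights over every person's predicted outcome."""
    return sum(HARM_WEIGHTS[o] for o in outcomes)

def choose_maneuver(maneuvers):
    """`maneuvers` maps a name to the list of predicted per-person outcomes.
    Ties are broken by name for determinism."""
    return min(maneuvers, key=lambda m: (expected_harm(maneuvers[m]), m))

# The scenario from the post: save the four girls (driver dies, girls
# disabled) versus save the driver (girls die, driver unharmed).
maneuvers = {
    "save_pedestrians": ["death"] + ["disabling_injury"] * 4,
    "save_driver": ["unharmed"] + ["death"] * 4,
}
```

With these particular weights the robot would swerve to save the girls (total harm roughly 3.4 versus 4.0), but nudge `disabling_injury` higher and the answer flips, which is rather the point.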
|
|
|
|
|
It is a pain to install. Really. After trying more than 4 different "Download Managers" and "SDK Images" (all available from the Samsung download page), I eventually got something that seems to do what it should do: install the damn SDK.
How hard is it to write a decent installer for an SDK? I mean, even Microsoft managed to get their VS installer to run smoothly.
Edit: Managed to install it. Hope the missing Intel Virtualization won't be an issue. And I'm not even surprised that they present me with yet another Eclipse variant...
As it is already 00:30 around here I'm going to sleep now.
I will never again mention that Dalek Dave was the poster of the One Millionth Lounge Post, nor that it was complete drivel.
How to ask a question
modified 13-May-14 18:30pm.
|
|
|
|
|
Ouch. That doesn't bode well for Samsung
|
|
|
|
|
Installed it, see my update.
You might want to tell them that they *really* need to improve the error messages in their so called "Download Manager".
Website "http://" not found doesn't go well with developers using the software
I will never again mention that Dalek Dave was the poster of the One Millionth Lounge Post, nor that it was complete drivel.
How to ask a question
|
|
|
|
|
I have seen that from a number of download managers. Usually the program failed to read the URL from some source and went with the default.
What do you get when you cross a joke with a rhetorical question?
|
|
|
|
|
Am I the only programmer who gets upset when clueless coworkers invade personal space? Surely I can't be the only one, right?
Jeremy Falcon
|
|
|
|
|
Wow! You have personal space?
I'm impressed!
CQ de W5ALT
Walt Fair, Jr., P. E.
Comport Computing
Specializing in Technical Engineering Software
|
|
|
|
|
|
I have to go with Walt on this, I'm jealous.
Try NOT responding when 2 creatures from another team start screeching at each other in Chinese directly behind your chair.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
Mycroft Holmes wrote: Try NOT responding when 2 creatures from another team start screeching at each other in Chinese directly behind your chair.
You got me beat man. That would drive me up a wall.
Jeremy Falcon
|
|
|
|
|
Distance doesn't matter; on a conference call today two or even three of my cow-orkers were chewing gum -- right into their elephanting microphones. I had to take my headset off.
You'll never get very far if all you do is follow instructions.
|
|
|
|
|
OK, so chewing gum is not an issue here in Singapore; however, they have no issue with sniffing, and they refuse to use a handkerchief/tissue, so getting that on a conference call can really curl your toes.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
Oh, I bet.
You'll never get very far if all you do is follow instructions.
|
|
|
|
|
No, you're not. And frankly, the more I work at home, alone, the more I can't stand working near other people. No, let me correct that. The more I can't stand working with people.
Marc
|
|
|
|
|
Marc Clifton wrote: The more I can't stand working with people.
That's just it though. I love people. It's just that some people in IT or the "office life" have like zero freaking clue. You say "piss off" and they think "nah, let's stalk this guy and get in his space." Cuz ya know, that's how friends are made.
Jeremy Falcon
|
|
|
|
|
Jeremy Falcon wrote: Cuz ya know, that's how friends are made.
I could have fun with that.
Marc
|
|
|
|
|