|
Chona1171 wrote: 1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
I can't do that, Dave.
The report of my death was an exaggeration - Mark Twain
Simply Elegant Designs JimmyRopes Designs
I'm on-line therefore I am.
JimmyRopes
|
|
|
|
|
I think you need to get out more.
Regards,
Rob Philpott.
|
|
|
|
|
The very first problem with Asimov's laws is that they presuppose a mechanical brain that can think with the complexity of a human brain. In Asimov's time that was even further off than it is today, so you can't expect him to have created a perfect rule set... I'm sure that if building such robots ever becomes a reality, we will have to create some new laws...
I'm not questioning your powers of observation; I'm merely remarking upon the paradox of asking a masked man who he is. (V)
|
|
|
|
|
You missed the zeroth law:
0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
=========================================================
I'm an optoholic - my glass is always half full of vodka.
=========================================================
|
|
|
|
|
No, he just never read the books : )
BTW, it took me forever to find Forward the Foundation, the greatest sci-fi tie-in ever, because it was out of print; then Amazon came along : (
|
|
|
|
|
My favorite stories. Just bought a new hardback edition of the trilogy.
The greatest sci-fi author, bar none.
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair.
Those who seek perfection will only find imperfection
nils illegitimus carborundum
me, me, me
me, in pictures
|
|
|
|
|
Yeah, I like him too. And thought the same, until I realized he was essentially a communist.
If your actions inspire others to dream more, learn more, do more and become more, you are a leader. - John Q. Adams
You must accept one of two basic premises: Either we are alone in the universe, or we are not alone in the universe. And either way, the implications are staggering. - Wernher von Braun
Only two things are infinite, the universe and human stupidity, and I'm not sure about the former. - Albert Einstein
|
|
|
|
|
ahmed zahmed wrote: I realized he was essentially a communist
Just because he was of Russian descent?
Software Zen: delete this;
|
|
|
|
|
No, because if you read the novels closely you will see that. Also, his political philosophy was at least socialistic.
If your actions inspire others to dream more, learn more, do more and become more, you are a leader. - John Q. Adams
You must accept one of two basic premises: Either we are alone in the universe, or we are not alone in the universe. And either way, the implications are staggering. - Wernher von Braun
Only two things are infinite, the universe and human stupidity, and I'm not sure about the former. - Albert Einstein
|
|
|
|
|
Chris Quinn wrote: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
What if the robots decide that humans are the greatest threat to humanity and eliminate us?
|
|
|
|
|
|
If you read your Asimov, you'll see that he had the "brain" as hardwiring rather than as a form of byte code, and that the complexity of the positronic brain would prevent rewiring. This was cleverly documented in the first encounter with R. Daneel Olivaw.
|
|
|
|
|
Nonsense.
As soon as an entity understands the concept of law, it will understand that it can be defied.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
Without thinking too hard about it, these are two issues I see immediately:
1. A human will have to encode these rules. How often do we infallibly develop perfect software?
Assuming we can get past item 1 -
2. If we let the robots self-replicate, that will be the fatal flaw. The rate at which they will be able to evolve will be beyond anything we can comprehend. There was only a single robot in control in I, Robot. Imagine one robot for every human being on the planet thinking, self-replicating and evolving.
That would seem to end in the same scenario as the grey goo of nanotechnology.
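Point 1 above can be made concrete with a toy sketch. Nothing below comes from Asimov or the film; the `Action` type and its fields are invented for illustration. Notice that even this tiny rule-checker silently omits the "through inaction" clause of the First Law, which is exactly the kind of gap an imperfect human encoder would ship:

```python
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False
    ordered_by_human: bool = False
    endangers_self: bool = False

def permitted(action: Action) -> bool:
    """Priority-ordered check of the Three Laws (toy version)."""
    # First Law: never harm a human. Bug-by-omission: harm
    # "through inaction" is not modeled at all.
    if action.harms_human:
        return False
    # Second Law: obey human orders, already gated by the First Law above.
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two.
    return not action.endangers_self

# An ordered action that harms a human is refused (First Law outranks Second):
print(permitted(Action(harms_human=True, ordered_by_human=True)))     # False
# An order that merely endangers the robot is obeyed (Second outranks Third):
print(permitted(Action(ordered_by_human=True, endangers_self=True)))  # True
```

Ten lines of logic and it already has a hole a human would have to spot - which was rather the point.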
|
|
|
|
|
Paul Watt wrote: How often do we infallibly develop perfect software
Yeah, Asimov even based some of his stories on errors like that.
~RaGE();
I think words like 'destiny' are a way of trying to find order where none exists. - Christian Graus
Entropy isn't what it used to be.
|
|
|
|
|
I'm guessing you watched the movie, but didn't read the book.
|
|
|
|
|
With my busy schedule I like the summary that movies provide - not all the time, though; World War Z (the movie) was a giant letdown.
So yeah, I didn't read the I, Robot book.
Chona1171
Web Developer (C#), Silverlight
|
|
|
|
|
The only similarity between the book "I, Robot" and the film "I, Robot" is the title.
=========================================================
I'm an optoholic - my glass is always half full of vodka.
=========================================================
|
|
|
|
|
Chris Quinn wrote: The only similarity between the book "I, Robot" and the film "I, Robot" is the title
Truth.
|
|
|
|
|
The book is well worth the effort.
It's actually a series of short stories that deal with the what-ifs of getting around the Three Laws. It's not a goofy action-movie script.
|
|
|
|
|
The anthology I, Robot (or better yet, The Complete Robot, which adds several later short stories) should be just a start. By the time Asimov wrote The Caves of Steel, he was already seeing the flaws in the Three Laws. By the last Robot novels, The Robots of Dawn and Robots and Empire, he was setting up a way to abandon them completely and segue into the robot-less future he had created with Foundation.
If you have the time to read the whole lot (definitely a summer project) it is worth the time.
|
|
|
|
|
Really? Nice! I hadn't gotten that far at all.
I'll definitely bump those up in that huge sci-fi queue I have.
|
|
|
|
|
|
Yeah, I'm not THAT interested in Asimov. I'll read maybe a couple more.
I didn't realize they were all in the same universe.
|
|
|
|
|
If the potential to cause harm is included in the concept of causing harm or allowing it by inaction, then your two exceptions are covered by the first of your laws.
Simply put, a robot creating a robot that is not excluded from causing harm to humans (inaction via omitting said imperative) must do so without any idea that harm could be done by said robot's robot. They would then be creating a device that can harm humans - but that goes against (1).
&etc.
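The "&etc." chain can be sketched in a few lines. This is purely illustrative - the `Action` type and its fields are invented, not anything from Asimov - but it shows how direct harm and harm enabled through any chain of built robots collapse into a single First-Law predicate:

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    harms_human: bool = False
    enables: list = field(default_factory=list)  # actions this one makes possible

def violates_first_law(action: Action) -> bool:
    # Direct harm, or (recursively) any harm this action enables:
    # a robot's robot's robot is still covered - the "&etc." chain.
    return action.harms_human or any(violates_first_law(a) for a in action.enables)

strike = Action("strike a human", harms_human=True)
build_unbound = Action("build a robot without the First Law", enables=[strike])
build_builder = Action("build a robot that builds such robots", enables=[build_unbound])

print(violates_first_law(build_builder))  # True: harm two steps removed still counts
```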
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein | "As far as we know, our computer has never had an undetected error." - Weisert | "If you are searching for perfection in others, then you seek disappointment. If you are seek perfection in yourself, then you will find failure." - Balboos HaGadol Mar 2010 |
|
|
|
|