The Laws of Robotics
The fear of robots developing rogue AI has gripped mankind ever since robotic possibilities were first imagined. It has influenced generations of media and entertainment, as well as urban legends. The celebrated science-fiction writer Isaac Asimov proposed laws that he felt must be built into robotic intelligence for the preservation of mankind.
Watch this video on the complications of such laws and respond to the questions.
1. What was the gist of Rob Miles’s message? Why don’t the laws of robotics work?
2. Using your own ethics and beliefs, explain what you think is the most effective way to program robots to never harm a human. What defines humanity? Be sure to address the complications that Rob discussed in the video.
1. The gist of Mr. Miles's message was that Asimov's Laws of Robotics are too open-ended. If one tries to define "human" or "harm" for the purposes of Asimov's laws, an entire can of legal and ethical worms is opened. It's a nightmare to tell a robot what is and is not human, and what does or does not constitute "harm".
2. Well, in my opinion, "human" covers anything that has the capacity to design a robot, and is therefore a "creator", plus anything the creator does not designate as a "target". An entity-scanning system and a digital species "database" would be useful in this case. "Harm" would probably constitute anything that causes physical or emotional pain or the termination of life, but unfortunately that gets complicated for a completely different reason, since you would have to encode some notion of "pain". Maybe some sort of context-based action-softening matrix would be useful here.
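The commenter's proposal can be sketched in a few lines of code. This is only a toy illustration of the idea, not a real robotics safety mechanism: the species database, the creator/target classification, and the "action-softening" formula below are all invented for the example.

```python
# Toy sketch of the "entity scanning + species database" idea.
# SPECIES_DB and the softening formula are invented for illustration.

SPECIES_DB = {"human", "dog", "cat"}  # species the robot must never target


def classify(entity_species: str) -> str:
    """Return 'protected' or 'target' based on the species database."""
    return "protected" if entity_species in SPECIES_DB else "target"


def soften_action(force: float, context_risk: float) -> float:
    """Context-based 'action-softening': scale force down as risk rises.

    context_risk is 0.0 (safe) to 1.0+ (dangerous); at full risk the
    action is suppressed entirely.
    """
    return force * max(0.0, 1.0 - context_risk)


print(classify("human"))          # protected
print(classify("drone"))          # target
print(soften_action(10.0, 0.8))   # roughly 2.0: high risk, gentle action
print(soften_action(10.0, 1.5))   # 0.0: too risky, action suppressed
```

Even this toy version shows the open-ended-ness Rob Miles describes: everything hinges on who fills in the database and who assigns the risk scores.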
1. Asimov's Laws aren't taken very seriously in the robotics/AI development community because the general consensus is that they simply do not work (according to Rob Miles). This failure is attributed to the fictional nature of the novels in which these laws both appear and fail. The definitions rest on ethics and human morality, which most think cannot currently be formalized. Even today, we don't have a solid, conclusive answer for what is, and more importantly, what is not human.
2. I can't imagine how it would be possible to prevent injury to a human under all circumstances without removing the potential to do harm completely. It gets complicated when the word "harm" is involved, too, because that is far from specific. Not only is this task extremely difficult, it would have to be re-adapted and modified for every different situation.
1.) They don't work because the laws are too vague for the robot: they don't specify what an actual "human" is, or whether an action does actual "harm" to that human. The robot then can't tell whether it is breaking a rule. Because the laws are so vague, and because we're all different, they don't work.
2.) There is no way. Some people are born without certain traits, so no single trait can define a human: whatever trait we pick, something other than a human might have it too, and would then count as human. We couldn't stop that thing if it attacked or harmed humanity, so really it's impossible.
1. Asimov's Laws of Robotics, in Mr. Miles's opinion, are open-ended and too vague. Rob believes they are not taken seriously in the robotics world. On top of that, "human" and "harm" cannot be easily separated and defined.
2. There will always be complications, and accidents happen, but I could never see a total outbreak of harm caused by robots. Anything has the possibility of hurting someone, and as long as there are restrictions in the programming of the robots, I don't see how harm could break out. Humanity comes with being human and with our relation to the world; we were placed on this world to create things, and as long as we have the power to take our creations out of this world if complications arise, there is no issue.
1. It doesn’t work because there are too many variables to deal with. The AI would not be able to determine whether a person is human or not, and some AI would not work because it could end up being wrong. Some jobs will still need humans, now and forevermore.
2. The simplest way to program a robot not to attack humanity is to give each person a chip that identifies them as human. You cannot define "human", and you cannot give an AI a database of every person; that is way too much information. Most of the things people want AI for would not work.
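The chip idea above reduces to a registry lookup. Here is a minimal sketch, with the chip registry and the IDs entirely invented for illustration; it also exposes the obvious flaw in the scheme, which the logic comments point out.

```python
# Toy sketch of the "identify humans by an issued chip" proposal.
# The registry and chip IDs are invented for illustration.
from typing import Optional

REGISTERED_CHIPS = {0xA1B2, 0xC3D4}  # chip IDs issued to humans


def is_human(chip_id: Optional[int]) -> bool:
    """A chip reading that matches the registry marks the entity as human."""
    return chip_id in REGISTERED_CHIPS


def safe_to_act(chip_id: Optional[int]) -> bool:
    """Refuse any action that could affect a registered human."""
    return not is_human(chip_id)


print(is_human(0xA1B2))    # True: registered chip detected
print(safe_to_act(0xA1B2)) # False: action blocked near a human
print(safe_to_act(None))   # True -- the flaw: an unchipped human
                           # reads as "no chip", so the robot acts anyway
```

The last case is exactly the definitional gap Rob Miles warns about: the rule protects "chip carriers", not humans.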
1. The reason it doesn't work is that the technology isn't there yet for us to define for the robot what is and is not human. Furthermore, removing the potential to hurt humans would mean basically removing the robot's main functions. If a robot has the power to move large objects, it can harm a human.
2. I do think restraints should still be a part of creating AI, although I don't think we should stop progressing altogether. Humans and AI should work together in harmony, with humans always watching over the AI and guiding it to success.
1) They do not work due to the large number of variables present; the artificial intelligence wouldn't be able to determine what is and isn't human. Its judgments derive only from what the creator puts into it.
2) As long as robots exist, there will always be the possibility that they harm humans. Malfunctions happen, hacking could happen; lots of things make the word "never" an impossibility. I'll leave it at "I'll believe it when I see it"; to me it's highly improbable.
1. He basically talks about how Asimov's Laws of Robotics leave holes, like what counts as human and what counts as harm, and there are many more holes like that which would need to be covered.
2. I would define what harming is and what humans are: humans being the species, and harming being anything that lessens the health of a human, or of humans.
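This definition of harm ("anything that lessens the health of a human") can be written as a simple predicate. The 0-100 health scale and the effect table below are invented stand-ins; a real system would need an accurate model of how every action affects health, which is the hard part the definition hides.

```python
# Toy encoding of "harm = any action that lessens a human's health".
# The health scale and the action-effect table are invented.

def predict_health(health: int, action: str) -> int:
    """Stand-in model of how an action changes health (clamped to 0-100)."""
    effects = {"assist": +5, "ignore": 0, "shove": -20}
    return max(0, min(100, health + effects.get(action, 0)))


def is_harmful(health: int, action: str) -> bool:
    """Harm: predicted health after the action is lower than before."""
    return predict_health(health, action) < health


print(is_harmful(80, "shove"))   # True
print(is_harmful(80, "assist"))  # False
print(is_harmful(80, "ignore"))  # False -- note: inaction never counts
                                 # as harm under this definition
```

The last line hints at a gap Asimov's First Law explicitly tries to close ("or, through inaction, allow a human being to come to harm"): a before/after health comparison never flags doing nothing.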
1. He states that the theory is flawed and that the creation of feelings and human-like movement is complicated.
2. I think the most efficient way is not to. That's such a touchy subject that it can lead to so many horrible possibilities. I believe we don't yet have the progress and information needed to go that far with robotics.
1. The messages that the laws are trying to convey are too vague. If we were to try and define certain words like "human" or "harm," we'd have to take an ethical stand on almost everything. Everyone has their own interpretation of what is human and what it means to harm.
2. The way I see to prevent robots from harming humans is to make sure they have no way to harm humans. As long as we don't make robots designed to act like humans, nothing in their programming should make them want to hurt humans in the first place.
1) The main problem with Asimov's laws of robotics is that one first has to define "human" and "harm". Language is a human creation, and programming full natural language into a robot is impossible given its nature and the nature of robots, so these two words would have to be specifically defined; but who should we trust to properly define them?
2) The best way to prevent robots from harming humans would be to limit the creation of virtual and artificial intelligence, and, where they are created, to program them specifically to identify humans and avoid causing *physical* harm to a person. That would of course require a universal definition of "human", which is a difficult concept.
1. Rob Miles proposed that describing the idea of "human" or "pain" to a robot is impossible. We can easily describe it to one another because we have sentient consciousness, but putting this description into code would involve an infinite number of variables that could constantly change or disappear altogether. Our brains are too imperfect to explain how complex they are.
2. I share a similar opinion to Rob Miles: I think it is simply impossible to perfectly replicate the human brain. Even if we could, I still don't think it would stop robots from harming us; in fact, I think it would increase those odds. The human brain grows and changes constantly; if that were true of any other being, it too would have ideas and thoughts that change. Maybe the idea of not hurting someone would change. For robots to know not to harm humans, they need to know what a human is and what harming is, and those ideas are too complex to be perfectly replicated, and too imperfect to prevent harm from happening.
1) Rob describes how instilling a value of life in a robot is impossible, because a robot doesn't have sentient consciousness. It is far too difficult to build computers comparable to our brains due to their sheer complexity.
2. I think he is underestimating the true power of AI. Several breakthroughs in the technology are increasing the potential AI holds in our modern world. We are also producing increasingly powerful computers capable of intense processing, somewhat like a human brain; for example, quantum computers. Researchers are creating AI that is capable of learning how to walk, and of completely independently learning to beat a video game perfectly.
1) He says that robots cannot read human interactions: a robot won't understand what it means to hurt a human, or recognize that a human has been harmed. They simply can't do that, or develop a sense of feelings. He also says it is far too much to specify everything that should happen if the robot cannot complete an action.
2) My method is to strap pillows onto the harmful parts of the robot, and to make it refuse any harmful commands or anything that would do damage. The kind of code I would use is safety code, like the kind they use on kids' toys: mainly just to play with the robot. :^)