Artificial intelligence researchers often idealize Isaac Asimov's Three Laws of Robotics as the signpost for robot-human interaction. But some robotics experts say that the concept could use a practical makeover to recognize the current limitations of robots.
Self-aware robots like those that inhabit Asimov's stories, as well as "2001: A Space Odyssey" and "Battlestar Galactica," remain a distant prospect. Today's robots still lack any real autonomy to make their own decisions or adapt intelligently to new environments.
But danger can arise when humans push robots beyond their current limits of decision-making, experts warn. That can lead to mistakes and even tragedies on factory floors and in military operations, when humans forget that all legal and ethical responsibility still rests on the shoulders of Homo sapiens.
"The fascination with robots has led some people to try retreating from responsibility for difficult decisions, with potentially bad consequences," said David Woods, a systems engineer at Ohio State University.
Woods and a fellow researcher proposed revising the Three Laws to emphasize human responsibility over robots. They also suggested that Earth-bound robot handlers could take a hint from NASA when it comes to robot-human interaction.