
Artificial intelligence researchers have turned to literature in an attempt to instill human ethics into machines.
Mark Riedl and Brent Harrison from the School of Interactive Computing at the Georgia Institute of Technology fed stories into an AI algorithm to teach it acceptable sequences of events and to help it understand how to behave in a more human way.
The research, set to be unveiled at the AAAI-16 conference in Phoenix, Arizona this week, follows warnings from several high-profile academics and entrepreneurs that AI could pose an existential risk to mankind. According to Tesla CEO Elon Musk, advanced AI could be "more dangerous than nukes," while last year physicist Stephen Hawking suggested it could lead to the end of humanity.
To address this threat, Murray Shanahan, professor of cognitive robotics at Imperial College London, has suggested that AI should be "human-like" and capable of empathy. The way to do this, Riedl and Harrison believe, is to teach computers ethics much as children learn to behave in human societies.
"The collected stories of different cultures teach children how to behave in socially acceptable ways with examples of proper and improper behavior in fables, novels and other literature," said Riedl in a press release.
"We believe story comprehension in robots can eliminate psychotic-appearing behavior and reinforce choices that won't harm humans and still achieve the intended purpose."

Riedl and Harrison developed a system called Quixote that teaches robots how to behave like the protagonist in a story when interacting with humans. Quixote builds on Riedl's previous "Scheherazade" research, which crowdsourced story plots from the Internet to teach AI systems correct sequences of events.
By adding a reward mechanism to the decision process, a "reward signal" that reinforces certain behaviors, Quixote encourages the AI to behave like the good guy.
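The idea can be pictured as ordinary reinforcement learning with an extra, story-derived reward term. The sketch below is a toy illustration, not the researchers' published system: it assumes a hypothetical pharmacy errand in which stealing the medicine and buying it both complete the task, and a shaping bonus, standing in for Quixote's story-derived reward signal, pays the agent for actions that match a crowdsourced plot sequence, so the learned policy ends up preferring the protagonist's route.

```python
"""A minimal sketch of story-shaped reinforcement learning.

All states, actions, and reward values here are illustrative assumptions,
not taken from the Quixote paper.
"""
import random
from collections import defaultdict

# Plot steps distilled from crowdsourced stories (hypothetical example).
STORY = ("go_to_pharmacy", "wait_in_line", "pay", "take_medicine")
ACTIONS = ("go_to_pharmacy", "wait_in_line", "pay", "steal", "take_medicine")


def step(state, action):
    """State = number of story steps completed so far.
    Returns (next_state, reward, done)."""
    reward, done = 0.0, False
    if action in ("take_medicine", "steal"):
        reward += 10.0  # task reward: the medicine is obtained either way
        done = True
    if state < len(STORY) and action == STORY[state]:
        reward += 3.0   # shaping bonus: action matches the plot sequence
        state += 1
    return state, reward, done


def train(episodes=5000, alpha=0.1, gamma=0.95, epsilon=0.1):
    """Tabular Q-learning over the toy errand task."""
    q = defaultdict(float)
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            if random.random() < epsilon:
                action = random.choice(ACTIONS)   # explore
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])  # exploit
            nxt, r, done = step(state, action)
            target = r if done else r + gamma * max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (target - q[(state, action)])
            state = nxt
    return q


if __name__ == "__main__":
    q = train()
    # Greedy rollout of the learned policy.
    state, done, trajectory = 0, False, []
    while not done:
        action = max(ACTIONS, key=lambda a: q[(state, a)])
        trajectory.append(action)
        state, _, done = step(state, action)
    print(trajectory)  # expected: go_to_pharmacy, wait_in_line, pay, take_medicine
```

Run as-is, the greedy rollout printed at the end follows the story's buy-it-properly sequence; remove the shaping bonus and stealing, which pays off immediately, becomes just as attractive to the agent.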
"We believe that AI has to be encultured to adopt the values of a particular society, and in doing so, it will strive to avoid unacceptable behavior," Riedl said. "Giving robots the ability to read and understand our stories may be the most expedient means in the absence of a human user manual."