Robots Can Now Read Better Than Humans, Putting Millions of Jobs at Risk

Luka, a picture-book-reading robot on display during the Consumer Electronics Show at the Las Vegas Convention Center on January 11. Alex Wong/Getty Images

An artificial intelligence algorithm has outperformed humans in a reading comprehension test for the first time, potentially putting millions of customer service jobs at risk of automation.

The AI algorithm, developed by Chinese retail giant Alibaba, outscored humans on the Stanford Question Answering Dataset (SQuAD), a reading comprehension benchmark consisting of more than 100,000 questions.

Using natural-language processing, the machine-learning model developed by Alibaba’s Institute of Data Science and Technologies edged out human competitors with a score of 82.44 versus 82.305, the company said.

According to the researchers, the landmark result could accelerate the technology’s adoption in roles typically performed by humans, since the algorithm can deliver precise answers to questions when supplied with large bodies of text from resources like Wikipedia.

“That means objective questions such as ‘What causes rain’ can now be answered with high accuracy by machines,” Luo Si, chief scientist for natural language processing at the Institute of Data Science and Technologies, said in a statement.

“The technology underneath can be gradually applied to numerous applications, such as customer service, museum tutorials and online responses to medical inquiries from patients, decreasing the need for human input in an unprecedented way,” Si said.


Researchers have previously used books to teach human ethics to AI, with one team from the School of Interactive Computing at the Georgia Institute of Technology feeding stories to an algorithm in 2016.

The exercise allowed the AI to learn acceptable sequences of events, as well as to better understand human behavior.

“The collected stories of different cultures teach children how to behave in socially acceptable ways with examples of proper and improper behavior in fables, novels and other literature,” said Mark Riedl, one of the researchers involved in the study.

“We believe that AI has to be encultured to adopt the values of a particular society, and in doing so, it will strive to avoid unacceptable behavior,” Riedl added. “Giving robots the ability to read and understand our stories may be the most expedient means in the absence of a human user manual.”
