British Police Searching for Illegal Nude Photos Keep Catching Desert Screensavers Instead

A general view shows dunes in the Liwa Oasis, southwest of the Emirati capital, Abu Dhabi, on December 1, 2015. KARIM SAHIB/AFP/Getty Images

British cops hope that within two or three years, artificial intelligence will be able to spare humans the horrific trauma of poring over tens of thousands of images of potential child abuse. The Metropolitan Police's digital forensics department can already scan successfully for things like guns or drugs, but according to the Telegraph it still has some work to do when it comes to spotting child pornography.

"Sometimes it comes up with a desert and it thinks its an indecent image or pornography,” Mark Stokes, the Met's head of digital and electronics forensics, told the Telegraph. "For some reason, lots of people have screen-savers of deserts and it picks it up thinking it is skin color."

This was reasonably funny as a story arc on Arrested Development, when photos purported to show an Iraqi landscape turned out to be a pair of testicles. But the sooner AI becomes sophisticated enough to separate images of sexual abuse from images of the desert, the better. Human bodies are a good deal less uniform than more easily recognizable shapes like guns, and past attempts to flag images of child abuse with AI, as Gizmodo explained, have turned up not just sand dunes but everything from dogs to donuts.

The question of where to store the images is no less complicated. As the Telegraph explained, keeping them with a cloud provider such as Apple leaves them potentially vulnerable to hacking. Still, Stokes told the paper that providers like Amazon or Microsoft might ultimately be best suited for the job, given their colossal security budgets.

"We have been working on the terms and conditions with cloud providers, and we think we have it covered," Stokes told the Telegraph. That's not exactly a guarantee, however. The 2014 Apple cloud hack led to the widespread invasion of privacy among celebrities whose personal photos became publicly accessible.

The potential for AI to monitor graphic content has been a fraught subject for years. Such systems can work in part by scanning videos for tags associated with things like child porn, as Wired explained earlier this year.
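In its simplest form, tag-based screening amounts to checking a video's metadata against a watch list. The snippet below is a hedged sketch of that idea; the tag names and matching rule are illustrative assumptions, not any platform's real moderation pipeline.

```python
# Illustrative sketch of tag-based screening; the tag list and matching
# rule are assumptions, not any platform's actual moderation pipeline.
SUSPECT_TAGS = {"placeholder_banned_tag", "another_banned_tag"}

def flag_by_tags(video_tags):
    """Return True if any of a video's metadata tags appear on the watch list."""
    return bool(SUSPECT_TAGS & {tag.lower() for tag in video_tags})

# A freshly uploaded video whose tags never match the list is not flagged,
# which is why tag scanning alone misses unlabeled new material.
print(flag_by_tags(["vacation", "beach"]))  # False
```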

In 2016, an interdisciplinary team of academics developed a tool called Identifying and Catching Originators in P2P Networks, or iCOP. The idea was to design something that could flag illicit content as soon as it was uploaded, rather than waiting for it to be matched against databases of already known child porn.

“The existing tools match the files that are being shared through existing databases, but our program detects new data,” computational linguist and project leader Claudia Peersman told Vocativ. “If you look at peer-to-peer networks, images are shared at a pace that’s just not feasible for any human to go through [manually].”
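The contrast Peersman draws is between matching files against a catalogue of known material and classifying brand-new material. Below is a simplified sketch of the database-matching side; real systems rely on perceptual hashes such as PhotoDNA, and the plain SHA-256 lookup and placeholder digest here are assumptions used only to show why new files slip through.

```python
# Simplified sketch of database matching, the approach Peersman contrasts
# iCOP with. Real systems use perceptual hashes (e.g. PhotoDNA); plain
# SHA-256 and the placeholder digest below are stand-in assumptions.
import hashlib

KNOWN_HASHES = {
    # digests of files already catalogued by investigators (placeholder value)
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def is_known_illegal(path):
    """Return True only if the file's digest already appears in the catalogue."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in KNOWN_HASHES

# A newly created image has no entry in KNOWN_HASHES, so it sails through;
# iCOP's goal was to classify such new material at upload time instead.
```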

A separate Wired report from 2014 found that on the dark web, the portion of the web that search engines can't index, 80 percent of searches were related to the sexual abuse of children. In recent years, large online platforms like Facebook have introduced AI algorithms specifically for flagging child porn or nudity. YouTube recently took down more than 150,000 videos following a BuzzFeed News investigation, and according to the Washington Post, the video-sharing platform will rely on more humans, not software, to help monitor illegal and abusive content.