Will AI Become Self-Aware? 'Black Mirror' Season 4 Premiere 'USS Callister' Explores the Human Ego

One of the reasons Black Mirror is so thrilling—and terrifying—is that every episode of Netflix's tech-focused riff on The Twilight Zone is built on a situation that feels theoretically possible, if not inevitable. And that's certainly true for the fourth-season premiere, "USS Callister"—a smart Star Trek spoof that finds horror in artificial intelligence, invasive breaches of privacy and male egotism.

The episode follows self-obsessed gaming engineer Robert Daly, who devises a way to feed his coworkers' DNA into his code, then layers a Star Trek-esque mod onto his company's game. Inside his digital starship, he tortures synthetic versions of his colleagues by turning them into hulking sci-fi monsters, banishing them to the outer planets his ship explores and putting them through emotional hell. But the artificially intelligent avatars are self-aware, and they eventually band together to escape their creator's influence by killing themselves.

Nanette's digital copy reacts to being trapped on the USS Callister. Netflix

"USS Callister" is a challenging (and funny) 71 minutes of television, and with a happy ending, it's one of the most uplifting Black Mirror episodes to date. It's also great science-fiction, forcing us to confront how we use and interact with technology while raising questions about where our Silicon Valley overlords are leading us. Some, most notably Elon Musk, warn about the doomsday potential of artificial intelligence. Listen to Jeff Bezos, Mark Zuckerberg, and Bill Gates, though, and you'll hear a lot of big talk about the glorious future of humanity in a world of AI.

From shop-floor robots to self-driving cars to Zuck's personal assistant, AI has certainly made major strides in a very short amount of time. But how close are we to creating AI that recognizes itself and has an identity? Just how unethical is the "Callister" game? And what will keep real-life coders from creating something similar? What exactly is life, and is it possible to code a self-aware mind?

The cast of 'Black Mirror' Season 4 Episode 1. Netflix

According to Dr. Melanie Mitchell, a computer science professor at Portland State University, the nightmare put forth in Black Mirror is absolutely possible, though artificial intelligence is nowhere near sentience.

"AI cannot currently 'understand' or 'know' things in a human sense, the way those terms are commonly used," Mitchell told Newsweek. "It's becoming more important, though, as AI becomes increasingly visible in our lives to define these terms so that we know what we're talking about before we ask it of AI."

"USS Callister" gives us a taste of what the future could hold. The digital clones of Daly's colleagues demonstrate machine learning by growing as the test the limits of their existence. Take Nanette Cole (Cristin Milioti), the newest "character" integrated into the game. She "thinks" she's the real deal at first. But once the copy learns new information it "understands" that it's a copy. Not only is digital-Nanette self-aware, it's conscious of the idea that its real self has more worth. Real-Nanette is "alive" in a way that digital Nanette isn't, so that means the AI system is sentient but still recognizes that human life is somehow more important.

Jesse Plemons as Robert Daly in "USS Callister." Netflix

This isn't exactly groundbreaking as far as sci-fi goes—films like Metropolis, Blade Runner, Ex Machina and Her have explored the same ethics and themes—yet it feels more potent in Black Mirror. Still, we're a ways off from AI becoming as sophisticated as what we see in "USS Callister." One major reason why, according to Mitchell, is that current systems are missing a crucial component: common sense, which "comes from experience and human context."

Mitchell says some engineers are attempting to "teach" AI systems common sense, but a commonly understood "truth" often comes with its own biases. Those biases, or bugs, can make AI systems faulty, even dangerous. "USS Callister" points out that coding skills and a genius-level IQ may make an engineer hyper-capable, but intellect doesn't always mean depth of character. Mitchell says we're already seeing the damage that biased, imperfect humans like Robert Daly can do as they create AI systems.

Still from 'Black Mirror' Season 4 Episode 1, "USS Callister." Netflix

One major area of concern is racial bias—from facial recognition software trained primarily on Caucasian faces (which ultimately devalues people of color) to an AI system used in a 2016 court case to assess risk, which flagged black inmates as possible re-offenders at twice the rate of white convicts. But AI can be created with gender bias, too. Mitchell says hiring systems that suggest applicants for positions can pose problems if those suggestions are based on coded gender associations. Groups like Microsoft's Fairness, Accountability, Transparency and Ethics in AI (FATE) team are working to rid AI systems of gender bias, but the process will take a while.
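To see how that can happen, here's a minimal, hypothetical sketch; the dataset, feature names and numbers are invented for illustration, and this is not how any real hiring system works. It shows how a model that never sees gender directly can still learn to penalize a feature that merely correlates with it:

```python
# Toy illustration (not any real hiring system): a model trained on
# historically skewed outcomes learns to penalize a gender-correlated
# proxy feature, even though gender itself is never an input.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per candidate:
# [years_of_experience, attended_womens_college]
X = np.array([
    [5, 0], [6, 0], [4, 0], [7, 0],   # group without the proxy feature
    [5, 1], [6, 1], [4, 1], [7, 1],   # equally qualified, with the proxy
])
# Skewed historical hiring decisions: 1 = hired, 0 = rejected.
y = np.array([1, 1, 1, 1, 0, 1, 0, 0])

model = LogisticRegression().fit(X, y)

# Two candidates with identical experience, differing only in the proxy:
print(model.predict_proba([[6, 0], [6, 1]])[:, 1])
# The second candidate scores lower purely because of the proxy feature,
# reproducing the bias baked into the historical labels.
```

Because the past decisions in the training data were skewed, the model learns to treat the proxy feature as evidence against a candidate; no programmer had to write a biased rule for the bias to appear.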

And then there's the chance that unsophisticated AI could be fatal. Mitchell explains that the technology powering self-driving cars is currently too susceptible to being hacked or tricked by people manipulating street signs. "Placing a simple sticker on a street sign wouldn't trick a human being, but they can hack self-driving AI into 'thinking' a stop sign actually says yield," Mitchell said.
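The sticker attack Mitchell describes belongs to a family of tricks researchers call adversarial examples. Here's a toy sketch of the underlying idea using the fast gradient sign method; the tiny untrained network and random image are stand-ins for a real sign classifier, so treat it as an illustration of the technique rather than a working attack:

```python
# Sketch of an adversarial perturbation via the fast gradient sign
# method (FGSM). A real attack would target a trained sign classifier;
# here an untrained toy network stands in for one.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))  # stop vs. yield
image = torch.rand(1, 3, 32, 32, requires_grad=True)            # stand-in "stop sign"
true_label = torch.tensor([0])                                   # class 0 = stop

# Compute how the loss changes with respect to each input pixel.
loss = F.cross_entropy(model(image), true_label)
loss.backward()

# Nudge every pixel a tiny step in the direction that increases the loss.
epsilon = 0.1
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

print("before:", model(image).argmax(dim=1).item())
print("after: ", model(adversarial).argmax(dim=1).item())
# With a suitable epsilon, the prediction can flip even though the two
# images look nearly identical to a human.
```

The per-pixel change is tiny, which is why a human glancing at a doctored sign sees nothing wrong while the classifier's answer can flip entirely.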

In other words, we don't really have to worry about the horrors of "USS Callister" quite yet. The episode is based on technological advances so far off in the future that stressing out about it is almost like "worrying about overpopulation on Mars," Mitchell says. "It's scary, sure, but people working in AI have scarier issues to contend with right now."

Is that supposed to make us feel better?