The Aliens Have Landed—But They're Not Smart Enough to Take Over | Opinion

Alpha Mini robots that use artificial intelligence dance at the Las Vegas Convention Center during CES 2019 in Las Vegas on January 10, 2019. DAVID MCNEW/AFP/Getty Images

The aliens have landed. And if you're shaking your head or looking behind you, think again: They're everywhere. They're in your phone, your television, your computer, and your car. Yet they did not arrive from outer space, but rather, they've come from the inner space of our own minds.

AIs—Artificial Intelligences—are the "aliens" of which we speak. Numerous words, sentences, paragraphs, pages, and books have been written about our AI constructs, and these testimonials run the gamut from Disneyesque optimism to the pessimism displayed in "The Terminator."

Should we be exhilarated or afraid? Push the limits or place checks and bounds?

This is not the first time humans have dealt with intelligences that were not human. Cats have an intelligence. So do monkeys and whales, though we don't fear these intelligences. No one thinks a housecat is smarter than they are, or fears that it is coming to take their job (unless they're a mouse catcher!).

But AI is the first intelligence that does not share in the phenomenon of evolution, the process that brought us humans into being. AI is the first intelligence to emerge from a vacuum, developed in the cloud rather than under one.

Fears of this "alien" form are also stoked when ominous headlines declare that AI will soon be "billions of times smarter than humans," or when it is suggested that humans must merge with their tech or be lost for good.

The reality, however, is actually quite the opposite.

A lack of natural evolution is a critical weak point for AI. While AI is superb at finding correlations, it is quite bad at understanding causation. Judea Pearl, who has won the Turing Award—computing's Nobel Prize—recently noted that AI excels at detecting associations, such as this: "Customers who bought toothpaste also bought a toothbrush."

But here we also see AI's clear limiting factor: it has trouble answering why a customer bought the toothbrush. Push a little further and its proficiency falls even more: "If a customer did not buy toothpaste, would she still buy a toothbrush?"
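To make Pearl's distinction concrete, here is a minimal sketch (ours, not drawn from the article) of what "detecting an association" means computationally. The transaction data and the `confidence` helper are invented for illustration; the point is that this kind of counting captures co-occurrence, not causation.

```python
# Toy market-basket data: each transaction is the set of items one customer bought.
# (Hypothetical data, for illustration only.)
transactions = [
    {"toothpaste", "toothbrush"},
    {"toothpaste", "toothbrush", "floss"},
    {"toothpaste"},
    {"toothbrush"},
    {"floss"},
]

def confidence(antecedent, consequent, baskets):
    """Estimate P(consequent | antecedent): of the baskets containing the
    antecedent item, what fraction also contain the consequent item?"""
    with_antecedent = [b for b in baskets if antecedent in b]
    if not with_antecedent:
        return 0.0
    return sum(consequent in b for b in with_antecedent) / len(with_antecedent)

# The association an AI can find: "customers who bought toothpaste
# also bought a toothbrush" -- 2 of the 3 toothpaste baskets.
print(confidence("toothpaste", "toothbrush", transactions))

# What no amount of such counting answers is Pearl's counterfactual:
# "If a customer did not buy toothpaste, would she still buy a toothbrush?"
# That question requires a causal model, not co-occurrence statistics.
```

Running the sketch prints roughly 0.667: a strong association, yet the numbers say nothing about whether one purchase caused the other.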

This is why we must think about intelligence as more than sheer "brain power." Raw power does not correlate directly with intelligence. A locomotive might have 50 times the horsepower of our cars, but in a car we can make a left or right turn any time we want or need to. Intelligence is about what you do with it, not how much of it you have.

It's pretty clear that AI will need the benefit of our evolutionarily acquired intelligence and reasoning to move forward into any kind of independence. But there's another question to consider: Will there ever come a time when humans trust AI enough to remove ourselves from the equation?

In our recent BioData Mining paper, "More Human with Human," we argued that in the biomedical field, human-absent systems may never become possible.

Simply, would you ever trust a robot cardiologist over a human one? When AI places an uninteresting ad in your social media feed or charts a driving route that's not the most direct, that's an annoyance. But if AI misdiagnoses cancer or misses something on an important medical test, that's a life.

Where medical AIs are concerned, issues of liability, insurance, legality, and so forth rear their heads very quickly. If an AI is found guilty of malpractice, will it be sent to prison?

We feel that AI and humans can, and already do, have a very beneficial partnership. That partnership is beginning to blossom in the field of medicine, as AI helps detect associations invisible to the naked human eye, and human clinicians put those patterns to work keeping patients healthy.

So, yes, the "aliens" have arrived. But world domination will have to wait until they puzzle out some questions about toothpaste and toothbrushes.

Moshe Sipper is a Professor of Computer Science at Ben-Gurion University, Israel, and a Visiting Professor at the University of Pennsylvania. He has won numerous awards and published close to 200 scientific papers, as well as three research books, three novels, and a science fiction anthology. Jason H. Moore is the Director of the Institute for Biomedical Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA.

The views expressed in this article are the authors' own.