
The Truth About AI: A Secular Ghost Story

 

Some of Facebook’s AIs invented their own language, one incomprehensible to humans, whereupon Facebook’s researchers panicked and pulled the plug. At least, this was the story I heard on a Vanity Fair podcast. The host seemed deeply disturbed by the thought of these alien, almost Lovecraftian beings taking shape under the blithe gaze of an amoral tech giant.

I thought it was probably nonsense — scientists spin the truth all the time. I guessed that the underlying reality was that Facebook scientists had designed a program to evolve some kind of communication protocol which, for whatever reason, became hard to understand; that, seeking attention, they’d played up the drama to an in-house publicist by glossing over the technical details; and that the publicist over-interpreted it to journalists, whose stories drifted still further from the facts, until the emerging narrative ended up frightening an innocent podcast host.

As it turned out, I was right about the technology, but wrong about how the story got inflated. The Facebook scientists had made a sober and unassuming blog post about their research, which journalists took up and inflated without further encouragement. This is one of the fundamental mechanisms of the so-called AI Renaissance, which is essentially a cycle of money, hype and fear. 

Here’s what’s actually going on with AI technology: deep learning has come into its own. Deep learning is a technology that learns to recognize categories from exemplars — it’s had noteworthy successes in learning what is and isn’t a picture of a cat, for instance, or what is and isn’t a winning chess position. Deep learning is an enhancement of neural networks, a learning technology that’s been around in some form for sixty-odd years. It is now benefiting from faster computers, better networking infrastructure, and vastly more data.
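Deep learning itself is far too involved to sketch in a few lines, but its core premise — inferring a category from labeled exemplars — can be illustrated with a much simpler learner. The sketch below is a toy nearest-centroid classifier, not anyone’s production system, and its two-number “features” are invented purely for illustration.

```python
# Toy illustration of learning categories from exemplars: a
# nearest-centroid classifier. Far simpler than deep learning,
# but the same premise — generalize from labeled examples.

def train(examples):
    """examples: list of (label, feature_vector) pairs."""
    sums, counts = {}, {}
    for label, vec in examples:
        if label not in sums:
            sums[label] = [0.0] * len(vec)
            counts[label] = 0
        sums[label] = [s + x for s, x in zip(sums[label], vec)]
        counts[label] += 1
    # Each category is summarized by the mean of its exemplars.
    return {label: [s / counts[label] for s in sums[label]]
            for label in sums}

def classify(centroids, vec):
    """Assign vec to the category with the nearest centroid."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(centroids[label], vec))

# Invented features: (ear pointiness, whisker count).
model = train([
    ("cat", [0.9, 12.0]),
    ("cat", [0.8, 10.0]),
    ("not-cat", [0.1, 0.0]),
    ("not-cat", [0.2, 1.0]),
])
print(classify(model, [0.85, 11.0]))  # → cat
```

A deep network replaces the hand-picked features and mean-taking with millions of learned parameters, but the contract is the same: exemplars in, category judgments out.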

Note that neural networks have essentially nothing to do with neurons. Both are structured as networks. “So maybe they’re the same!” neural network enthusiasts have sometimes reasoned. This is the sole basis of the name “neural network,” but a superficial similarity doesn’t imply a deep affinity.

There are threads in AI unrelated to deep learning, but none of them have ever really worked. Consider machine translation, as implemented in Google Translate. It’s good enough for translating simple things, and can convey a general sense of a text, but with anything nuanced or complicated it immediately falls apart: translated e-commerce websites are more or less usable, translated literature fails, translated poetry is unintentionally funny.

The state of the art in machine translation is to use statistical techniques to find roughly equivalent chunks of text in the source and target languages, and, lately, blending in deep learning to find higher order equivalences. There’s no real understanding or representation of the meaning of the text.

These limitations are inarguable and seemingly obvious, but many techies seem to be in a haze of futurist denial. I’ve spoken with Googlers who have the glazed intensity of the newly converted. They say things like, “Google has solved machine translation.” Such statements convey no useful information about the technology but do speak to how, especially for the younger employees, their affiliation with their company is a primary engine of meaning in their lives. “Working at Google is like making science fiction!” I’ve heard many Googlers enthuse.

Historically, AI researchers have been prone to self-pity. They complain that when a problem is unsolved it’s seen as an AI problem, but once a solution is found people say, “Oh, that’s not AI — that’s just an algorithm.” Fair enough, but that argument is at root insincere — there’s clearly a computational essence of cognition.

I once went to a Google AI Night where a Google researcher posited that maybe computer intelligence was fundamentally different from human intelligence. The best chess programs approach chess in different ways than the best human players, using much more computation and a deeper search instead of a human’s nuanced pattern recognition (or something like that; no one really knows how human chess masters think about chess, or how anyone thinks about anything).

AI’s prominence in the general culture has been growing, due to the noticeable technical developments but also because of the way we interpret that technology through the lens of science fiction. The two primary AI narratives are Pinocchio — e.g., Lt. Cmdr. Data longs to be a real boy! — and the Golem — e.g., the Terminator movies, the Matrix movies, and every movie that uses the phrase “it’s gone rogue!” Both narratives tacitly assume that an AI’s deep motivations would strongly resemble a human’s: it would either cherish the prospect of a genuine emotional life or else cherish the chance to crush humanity and build an empire. The Pinocchio narrative succeeds because it reassures audiences that, no matter the technological advances, their humanity has intrinsic and enviable value. The Golem narrative offers an implacable, superhuman, and amoral antagonist whom the human heroes can destroy without the least moral qualm.

Even if there is someday real AI, neither narrative is likely to play out.  An actual AI would probably regard human beings as utterly alien, and perhaps interesting on that basis, but not obvious objects of emulation. There’s also no clear reason why an AI would want an empire: hierarchical social primates hunger for political and military power as an outgrowth of our hard-coded impulse to be top monkey, but an AI is unlikely to be engaged by that particular ladder.

Robotics is finicky, and using programs to understand images is hard. Some years ago the head of research and development at a big tech company told me that, in programming autonomous cars, it turned out to be prohibitively difficult to determine whether a given image contained a stoplight, but, if the machine already knew where the stoplight was, it could easily determine whether it was red, yellow, or green. In the near-to-medium term any full-fledged AIs are likely to be disembodied, existing in largely virtual and informational worlds, and to regard the real world as a sort of phantom realm, present but hidden, and hard to reach directly, much as most people regard the distant servers which hold, say, their social media posts (which are… where, exactly? As long as it works, who cares?)

And yet, the media continues to worry about the threat of AI. To some extent, worrying about world-conquering AIs is a kind of ghost story for a secular age. It’s fun to be frightened, and in 2018 the nebulous malevolence in the dark reaches of the internet is more credible than dybbuks and djinni. And apocalyptic predictions get more clicks than more realistic headlines such as “It’s hard to say anything definite about AI,” “AI will probably be reasonably benign,” or “Real AI is probably a long way away from existing.”

There have even been calls for legislative limits on AI research, and for such research to be approached with thoughtfulness and caution. It’s hard to argue against thoughtfulness and caution. But as for legislation: I once asked my brother the yacht-broker what the difference was between a yacht and a mere boat. He said that if your mother-in-law asks, it’s a yacht, but if the IRS asks, it’s just a little old boat.  Similarly, if a venture capitalist asks about one’s project, then it’s definitely AI, but if the AI-police ask, then it’s just a regular old computer program.  It might come down to a modern Epimenides paradox: any program smart enough to contrive to be judged legal is too smart and thus illegal.

The mention of AI makes podcast hosts nervous but real AI remains chimerical.  People say it’s ten years away, but then again, they have been saying that for decades. There have been cycles of hype, exuberance and disappointment around AI before, and this is probably another. But, one day, sometime, real AI will arrive, and then we may know what the mind is, what thought is, and who we are.

 

Zachary Mason is a computer scientist specializing in artificial intelligence and a novelist. He lives in California.



from The Paris Review https://ift.tt/2LtQ1k2
