What A.I. Isn’t

Will it ever think like us?

As I study data science, I learn a little more about artificial intelligence each day.

I practice wielding the tools in my machine learning toolbox, and I read articles — and the more I learn, the more annoyed I get by what I read.

Piece after piece of journalism adopts the same breathless tone toward AI.

An article will begin by describing the algorithms behind a real achievement but will always take a leap toward a vision of the future.

Some day it will do more, they say: more than play Go; more than flip a burger; more than guide a missile.

Some day it will do everything that you can do.

I don’t want to hear another vision of the future.

I want to know the steps that will take us to that moment when our machines’ intelligence matches ours.

Start by thinking about our own thinking.

We know in a broad way that intelligence means more than just mastery of a set of skills or a system of knowledge.

We grow and adapt, we dream and create, we delight each other and surprise ourselves.

We cannot quantify the entirety of our own intelligence, and indeed we are only in the infancy of our study of the brain and the gut.

But we can quantify the intelligence of the machines that we build.

We know how to do this because we have painstakingly constructed each model, framework, and algorithm.

A human hiding in the apparatus

This is what we have done in machine learning.

The “machine” in machine learning is the computer — never a being with a consciousness, always something cold and assembled.

It does not “learn” in the way that we learn but it does get better at its work over time.

To build a regression model, for example, a machine seeks out the equation of the curve that best fits a set of data points.

(It’s the other way around from middle school algebra, in which we use an equation of a line to plot points.) The model with this equation produces errors, errors spur adjustments to parameters, better parameters improve performance.

The curve of the equation snuggles into place among the points.

The machine will predict any new point along the curve, but it will not create a new model for a new occasion, and it will not extrapolate the big picture from its results.
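To make that loop concrete, here is a minimal sketch of my own, with made-up data: a straight line y = w*x + b fit by plain gradient descent, its errors nudging the two parameters until the line settles among the points.

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 3.0 * x + 2.0 + rng.normal(0, 1, size=50)   # noisy points scattered around a known line

w, b = 0.0, 0.0                                 # the model's two parameters, unfit
lr = 0.02                                       # learning rate: how hard each error nudges
for _ in range(5000):
    pred = w * x + b
    error = pred - y                            # errors...
    w -= lr * (error * x).mean()                # ...spur adjustments to parameters...
    b -= lr * error.mean()
print(round(w, 2), round(b, 2))                 # ...until the line settles near slope 3, intercept 2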

Some algorithms predict a numerical output given inputs; others group like things together based on their features — this or that, dog or cat.

A simple rule of probability, Bayes’ Theorem (²), drives many algorithms that classify objects.

A machine takes what it knows about the likelihood of an event, adjusts for new information, and finds a new probability for that event.

We can train it under supervision, with labelled data, e.g. “Here are some cats, here are some dogs.” Or it can train without supervision, left to discover the categories on its own, e.g. “All these were sleek and scratchy; all those fetched and followed me.” Bayesian inference guides the machine when it recommends to us what to watch next (based on what we watched before), or when it spots us in a photo (based on all our photos it has seen). Its power to predict lies in its application of past experience.

When it has less prior experience to draw on, its predictions are proportionately less accurate.
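In code, the update is one line. A toy sketch, with numbers invented purely for illustration: how likely is “cat” once the machine notices pointy ears in the photo?

def posterior(prior, likelihood, evidence):
    # Bayes' Theorem (see note 2): P(A|B) = P(A) * P(B|A) / P(B)
    return prior * likelihood / evidence

p_cat = 0.3                 # prior: 30% of the photos it has seen contain a cat
p_ears_given_cat = 0.8      # likelihood: 80% of cat photos show pointy ears
p_ears = 0.4                # evidence: 40% of all photos show pointy ears
print(posterior(p_cat, p_ears_given_cat, p_ears))   # 0.6: the new, higher probability of "cat"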

In machines, what so often looks like intuition or inspiration turns out to be mathematical: exacting, linear, limited; models, not cognition.

When I think about an intelligent machine this way, with all its best ideas coming from its human creators, it reminds me of the Mechanical Turk, an automaton who faced off against human chess players from the 1770s until the 1850s.

His opponents saw him making moves in pursuit of a strategy he had ticking in his gearbox brain.

But it was the person stuffed into his apparatus who had the agency. (³)

Regression and Bayesian inference are the workhorse bits of machine learning, but we can think of the ultra-high-tech methodologies in similar terms.

The author of an effusive New York Times article about a “deep learning” algorithm called AlphaZero admits as much.

(⁴) AlphaZero learned chess, shogi, and Go by playing “against itself millions of times and learn[ing] from its mistakes. In a matter of hours, the algorithm became the best player, human or computer, the world has ever seen.” At a high level, then, it responded to errors by adjusting its parameters, which sounds like regression.

It applied past experience to new situations, which sounds Bayesian.

Of course deep learning is more complex than that and operates on more levels: its “learning” is many layers “deep.” Algorithms have never needed to explain themselves to us before, because we wrote them. AlphaZero is one of a new sort, too complex for us to understand yet unable to explain themselves. The article invites us to surrender our comprehension and skip to a future in which “AlphaZero has evolved into a more general problem-solving algorithm.” Having apparently gained the consciousness needed to describe its own methods, it would “seem to us like an oracle” and we would “sit at its feet.”
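That recipe, stripped of AlphaZero’s scale and self-play, can still be sketched in a few lines. This toy of mine is nothing like AlphaZero; it is only meant to show layered parameters being nudged by their errors, here a tiny two-layer network learning XOR.

import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)      # XOR: the classic toy problem

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)       # hidden layer
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)       # output layer

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)                         # the forward pass, layer by layer
    out = sigmoid(h @ W2 + b2)
    err = out - y                                    # errors...
    dW2, db2 = h.T @ err, err.sum(0)                 # ...flow backward through the layers...
    dh = (err @ W2.T) * (1 - h ** 2)
    dW1, db1 = X.T @ dh, dh.sum(0)
    for p, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        p -= 0.1 * g                                 # ...and spur adjustments to parameters
print(out.round(2).ravel())                          # should end up close to [0, 1, 1, 0]

AlphaZero’s network is vastly deeper and its errors come from games against itself, but the loop keeps this shape: predict, err, adjust, repeat.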

Machines don’t evolve on their own.

AlphaZero may have stunned its creators by its mastery of chess and Go but it was they, the creators, who equipped it before they set it loose.

The next step in deep learning will not come from a machine iterating on itself.

It will come when humans iterate on their previous work.

For now, we are still the agents hiding in the apparatus.

We still make the moves.

So much of the writing about the state of AI today is like this.

It starts by explaining some application of machine learning, its success, its limits.

But it always shifts to that same prediction: in the future, somehow, AI will be so much more.

It will save us, it will destroy us.

These articles skip the steps to get to that future.

The tone of the fact-based reporting shifts to something more like science fiction.

I ought to be thy Adam

For two hundred years (⁵) we have been predicting the future of AI by telling the same story over and over: we reach the pinnacle of our achievement when we build a machine in our image.

We give it more than just the competence to perform its tasks: agency, consciousness.

In some stories we do it on purpose, in some it’s a fluke.

We feel how God must have felt when he created us.

And just as we upset our Creator with our first choices, so our creation surprises us when it chooses to deceive us, or conquer us, or replace us.

The leap from reality to science fiction always comes at the moment when the machine, whether by design or by fate, does not get a regular rattling toolbox of algorithms for a brain.

It gets the unquantifiable, unpredictable, self-aware genius of a human.

The steps in between, where we make the breakthroughs that allow for this, get skipped, so these stories can never serve as how-to guides.

But they’re not meant to.

That leap in science fiction serves a different purpose.

Science fiction depicts the future to tell us something about our present.

(⁶) It makes that leap to the machine with human intelligence to tell us a story about how we treat each other.

Those whom we treat as less-than-human cry out to us.

My mind is equal to yours; I am as conscious as you; you would feel the pain I feel.

Our outward differences must fall away when we remember how alike we are.

Journalism does not depict the present through a vision of the future; it does not lock up its lessons in allegories.

It sparks us to act by showing us what is happening now.

So why can’t we shed this literary trope?

In 1993 Vernor Vinge prophesied the Singularity.

(⁷) He wrote that some time between 2005 and 2030 we should expect the first superhuman intelligence to “wake up” and invent its own successor, an AI even more intelligent than itself.

Humans won’t need to invent any more machines; the machines will self-improve at an “exponential runaway” pace.

Vinge’s essay is already dated in some ways.

He describes various “intelligence amplification” gadgets that we will stick on our bodies to aid our senses.

He predicts the Borg with some specificity but he misses the slim glass networked computers that we carry in our pockets.

And his methods for measuring intelligence are strange.

He asks us to “imagine running a dog mind at very high speed. Would a thousand years of doggy living add up to any human insight?”

But his warning about the danger of the Singularity is bold.

It is hard to discard.

He insists that “Its coming is an inevitable consequence of humans’ natural competitiveness and the possibilities inherent in technology.” (For “humans,” maybe, read “men.”) He tells us not to expect the incentives of the machines to align with our own.

They may not wish us well.

And he concludes by quoting Freeman Dyson as saying, “God is what mind becomes when it has passed beyond the scale of our comprehension.” A struggle, a race, violence, and God: this is as vivid as the best science fiction, and I think this explains why we can’t shed science fiction’s vision of the future.

Vinge shook us with this essay, and it’s been on our minds for twenty-five years.

Vinge’s Singularity can be plotted. (⁸) Computing power has increased exponentially in line with Moore’s Law. Also seen here, less plausibly, are a mouse brain and a human brain. Where’s the high-speed dog brain?

The plot looks alarming until one thinks a little harder about the measurements used.

It is fine to plot computing power; we know how to measure that.

But the brain power of a mouse, of a human?

Kevin Kelly tells us how to demolish the whole edifice of bad intelligence measurement.

(⁹) Think of the intelligence of different beings “not as a ladder but as an ecosystem spreading out,” with the mind of each species unique, evolved to specialize in different cognitive tasks.

It is imprecise to think of humans as “more” intelligent than, say, squirrels, who lack our capacity to program computers but who surpass us in constructing mental maps of stored food.

Our machines’ specialties, too, are different than ours.

And Kelly suggests that because our solutions to problems often do not resemble nature’s solutions, the intelligence we construct need not resemble our own.

Nature allowed animals to fly by flapping wings; our flying machines do not flap.

Deep learning neural networks need not mimic our own neural networks.

AI, then, may look more like an “alien intelligence” than a “superhuman intelligence” — different, not better or worse.

It will think but not how we think.

A billion years of not falling down

For us to report on the present state of AI, then, we must get the measurements right.

Think again about our own thinking.

Some parts of human cognition are much older than other parts, more optimized by evolution.

Our abilities to sense and move in the world are honed by up to a billion years of natural selection.

We do it without effort or awareness.

Abstract thought, on the other hand, is perhaps only one hundred thousand years old, and we spend our lives struggling to master it — in school, in jobs, in conversation.

According to Moravec’s Paradox (¹⁰), the tasks that require the least conscious effort for humans, like our sensorimotor tasks, are the hardest to engineer in machines.

(Try Googling “robots falling down.”) Those tasks which are hardest for us, like mathematical reasoning, turn out to be easy to teach machines.

So if we set aside the tasks that are “hard” to teach machines for a moment, we may ask: how easy are the easy tasks? Let’s use a functional definition to gauge how much intelligence one needs to reason, play chess, and so on.

Brandon Rohrer (¹¹) offers one: “generality,” the breadth of the tasks you can do, multiplied by “performance,” how well you do each of them.

We humans can do myriad tasks: carving, cooking, counting, cuckolding, caring, comparing: high generality.

We do it all well: high performance.
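Rohrer’s definition is simple enough to write down. In this sketch the scores are entirely invented; the point is only the arithmetic of breadth times depth.

def functional_intelligence(task_scores):
    # task_scores: how well each task is done, on a 0-to-1 scale (made-up numbers)
    generality = len(task_scores)                         # breadth: how many tasks
    performance = sum(task_scores.values()) / generality  # depth: how well, on average
    return generality * performance

chess_engine = {"chess": 0.99}
human = {"chess": 0.7, "cooking": 0.8, "driving": 0.9, "conversation": 0.9}
print(functional_intelligence(chess_engine))   # 0.99: superb at one thing
print(functional_intelligence(human))          # 3.3: good at many things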

We tend to build our machines with one task in mind.

We maximize their performance in that task by simplifying their world with a set of assumptions.

For example, the machine must assume that the outcomes of its actions are predictable: when its foot hits the ground, it won’t slip and have to re-balance its weight.

It must assume “ergodicity,” that it can sense everything it needs to in the moment: the circular object in the sky has no velocity; it is the sun, not a ball about to hit it in the face.

It must assume a short time horizon: it need not think several moves ahead, as in chess.

The tighter the assumptions, the higher the performance in the task.

If we loosen these assumptions, the machine may do more tasks but will be worse at each.

There has heretofore been a trade-off in AI between generality and performance, which Rohrer illustrates:

A few computer programs are better than humans at games like chess. They may be slightly better than humans at image classification and at playing video games. They’re worse at everything else. (via)

A program that plays a game like chess or Go can outperform humans in that task but can’t win other games.

Some machines can do more generalized work, like driving, which requires high sensitivity to conditions on the road, and high adaptability to foreign objects in motion, and up-to-date maps.

But they can’t yet drive as well as humans.

AlphaZero, that deep learning grand master AI, bucked this trend.

It improved on its predecessor, AlphaGo, in generality and in performance.

It played more games, and it played each game better.

AlphaZero moved “up” in generality and “rightward” in performance from AlphaGo. (via)

More humane than human

This is a real step forward.

This is what I’ve been looking for.

If we pause before hurtling toward a prediction of that future moment when the machines’ minds surpass ours, we can consider what this step took.

Our next machines to inch closer to functionally-defined human intelligence will need to improve their generality and performance, as AlphaZero did.

(e.g. Humanoid robots will need to climb stairs, open doors, and re-balance before tipping over. Self-driving cars will need to stop short. &c.) They may take advantage of models at the forefront of machine learning, as AlphaZero did with deep learning. (Or not. (¹²)) And, most important, it is humans who will create them.

In fact, no matter how far forward we extrapolate, we cannot remove the human creators from the advancement of artificial intelligence.

Human creators drove all the previous advances in AI.

Rather than assume that the machines will one day “wake up” on their own, it is more reasonable to expect that we will gently rouse them.

Whether the Singularity leads to our destruction or our salvation depends on us: we make the machines in our image, we are the agents hiding in the apparatus.

Some signs are not good.

We see our capacity for evil reflected in some of our intelligent machines already.

(¹³) But we may just as well spur the machines to act humanely if not humanly.

Rana el Kaliouby observes (¹⁴) that “Intelligence is not correlated with the desire to dominate. Testosterone is!” In predicting “inevitable competition,” Vernor Vinge may not have considered that we can choose which of our impulses to quell and which to model.

Other oracles will speak to us if we will listen.

In 1843 Ada Lovelace, a British mathematician, wrote some notes (¹⁵) on the possible uses of Charles Babbage’s design for a machine that could take inputs and give outputs, store memory, and crunch numbers.

Lovelace saw that it could do more: the logical symbols of the language of mathematics could express abstract entities, could write music.

Her insight and her proposed algorithm for using Babbage’s Analytical Engine to compute Bernoulli numbers made her the first computer programmer.
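Her Note G laid out, step by step, how the Engine would grind out Bernoulli numbers. What follows is not her algorithm, only a modern sketch of the same computation, using the standard recurrence on binomial coefficients.

from fractions import Fraction
from math import comb

def bernoulli(n):
    # Bernoulli numbers B_0..B_n as exact fractions, via sum_{j<=m} C(m+1, j) * B_j = 0
    B = [Fraction(0)] * (n + 1)
    B[0] = Fraction(1)
    for m in range(1, n + 1):
        B[m] = -sum(Fraction(comb(m + 1, j)) * B[j] for j in range(m)) / (m + 1)
    return B

print(bernoulli(8))   # B_1 = -1/2, B_2 = 1/6, B_4 = -1/30, ... (odd ones past B_1 are zero)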

But Lovelace had no illusions about artificial intelligence.

She wrote that Babbage’s Engine “has no pretensions whatever to originate anything.” (¹⁶) Victorian journalists may have rhapsodized about the genius in the program; they should have acknowledged the genius of the programmer.

From the first step, hers, to the present day, it is our genius that has lain behind each advance in AI.

Lovelace said, “There are in all extensions of human power . . . various collateral influences, besides the main and primary object attained.” As we take our next steps forward toward machines made more and more in our image, we will see in their actions our capacities for good or evil reflected.

We will know, when the moment comes, that we have wrought our own future.

Notes

1. Title image (via): Fritz Lang’s Metropolis, 1927. The first time I saw it, I got chills when the Machine-Man made its first tentative steps. Perhaps the word “robot” hadn’t yet made it over from its first usage, in Czech, in the play Rossum’s Universal Robots.

2. P(A|B) = P(A) * P(B|A) / P(B). The posterior probability of A given B is the prior probability of A times the likelihood of B given A, divided by the probability of the evidence, B. See via

3. A hilarious early AI scam. via

4. “One Giant Step for a Chess-Playing Machine,” Steven Strogatz. The article is good and the description of the workings of deep learning is lyrical, but the shift at the end to speculation about the future was all too familiar. via

5. Mary Shelley published Frankenstein in 1818. The doctor’s artificial, self-aware creature laments to his creator, “I ought to be thy Adam, but I am rather the fallen angel.” via But artificial creatures and automatons have appeared in folklore throughout history. via

6. Ursula K. Le Guin says of science fiction that “A lot of it is very much about what is happening on Earth right now.” But she also warns against intellectualizing it: “Any work of art consists of more than verbal thoughts.” She was, and is, an oracle, and we sit at her feet. Ursula K. Le Guin: Conversations on Writing, Le Guin and David Naimon.

7. “Technological Singularity,” Vernor Vinge, via

8. Graphic via

9. “The Myth of a Superhuman AI,” Kevin Kelly, via

10. “Moravec’s Paradox,” via

11. “Getting Closer to Human Intelligence Through Robots,” Brandon Rohrer, via

12. Deep learning may fall out of favor over the next few years as the AI research world is gripped by a new trend. “We analyzed 16,625 papers to figure out where AI is headed next,” Karen Hao, via

13. The subject of Weapons of Math Destruction, Cathy O’Neil.

14. She was one of only two women on a panel of seven “AI experts.” The AI field needs more diversity in its workforce, not just for its own sake but also to allow us to hear more points of view to help us stay clear-headed when we try to predict the future of AI. We’ve listened to men like Vinge long enough. Let’s listen to women. “5 Truths About Artificial Intelligence Everyone Should Know,” Rana el Kaliouby, via

15. “Stepping Out of Byron’s Shadow,” Jenny Uglow, via

16. (L) Sketch of the Analytical Engine Invented by Charles Babbage, Luigi Menabrea, translation with notes by Ada Augusta, Countess of Lovelace, via.
