AI is a Big Fat Lie

No.

It can get really, really good at certain tasks, but only when there’s the right data from which to learn.

For the object recognition discussed above, it learned to do that from a large number of example photos within which the target objects were already correctly labeled.

It needed those examples to learn to recognize those kinds of objects.

This is called supervised machine learning: learning from pre-labeled training data.

The learning process is guided or “supervised” by the labeled examples.

It keeps tweaking the neural network to do better on those examples, one incremental improvement at a time.

That’s the learning process.

And the only way it knows the neural network is improving or “learning” is by testing it on those labeled examples.

Without labeled data, it couldn’t recognize its own improvements, so it wouldn’t know to stick with each improvement along the way.

Supervised machine learning is the most common form of machine learning.
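To make that concrete, here is a minimal sketch of the supervise-and-tweak loop in Python. It is not a neural network, just a tiny linear classifier on invented numbers and labels, but it shows the same idea: the labeled answers are what tell the learning process whether each tweak helped.

```python
# A minimal sketch of supervised learning, using invented numbers and labels.
# This is a tiny linear classifier rather than a neural network, but the idea
# is the same: the labeled examples are what guide every tweak.

# Labeled training data: each example is (features, correct_label).
training_data = [
    ((2.0, 1.0), 1),
    ((1.5, 2.5), 1),
    ((-1.0, -0.5), 0),
    ((-2.0, 0.5), 0),
]

weights = [0.0, 0.0]  # start with an arbitrary, untrained model
bias = 0.0

def predict(features):
    """Predict 1 if the weighted sum is positive, otherwise 0."""
    score = weights[0] * features[0] + weights[1] * features[1] + bias
    return 1 if score > 0 else 0

def accuracy():
    """Test the model against the labeled examples: the 'supervision'."""
    correct = sum(1 for x, y in training_data if predict(x) == y)
    return correct / len(training_data)

# The learning loop: whenever the model gets a labeled example wrong, nudge
# the weights toward the correct answer. Without the labels, there would be
# no way to know which direction counts as an improvement.
learning_rate = 0.1
for _ in range(20):
    for features, label in training_data:
        error = label - predict(features)  # 0 when the prediction is right
        weights[0] += learning_rate * error * features[0]
        weights[1] += learning_rate * error * features[1]
        bias += learning_rate * error

print("accuracy on the labeled examples:", accuracy())
```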

Here’s another example.

In 2011, IBM’s Watson computer defeated the two all-time human champions on the TV quiz show Jeopardy.

I’m a big fan.

This was by far the most amazing thing I’ve seen a computer do – more impressive than anything I’d seen during six years of graduate school in natural language understanding research.

Here’s a 30-second clip of Watson answering three questions.

Watson on Jeopardy

To be clear, the computer didn’t actually hear the spoken questions but rather was fed each question as typed text.

But its ability to rattle off one answer after another – given the convoluted, clever wording of Jeopardy questions, which are designed for humans and run across any and all topics of conversation – feels to me like the best “intelligent-like” thing I’ve ever seen from a computer.

But the Watson machine could only do that because it had been given many labeled examples from which to learn: 25,000 questions taken from prior years of this TV quiz show, each with its own correct answer.

At the core, the trick was to turn every question into a yes/no prediction: “Will such-n-such turn out to be the answer to this question?” Yes or no.

If you can answer that question, then you can answer any question – you just try thousands of options out until you get a confident “yes.” For example, “Is Abraham Lincoln the answer to Who was the first president?” No. “Is George Washington?” Yes! Now the machine has its answer and spits it out.
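Here is a hypothetical sketch of that yes/no trick in Python. The confidence function below is a fake stand-in; Watson’s real confidence scores were learned from those 25,000 labeled questions. The point is only the shape of the loop: score every candidate and keep the most confident “yes.”

```python
# A hypothetical sketch of the yes/no candidate-scoring trick described above.
# The confidence function is a fake stand-in for a model that, in the real
# system, was trained on labeled question/answer pairs.

def confidence_answer_is_correct(question, candidate):
    """Stand-in for a learned yes/no predictor: 'Will this candidate turn out
    to be the answer to this question?' Faked here with a keyword check."""
    if "first president" in question.lower() and candidate == "George Washington":
        return 0.98  # a confident "yes"
    return 0.05      # everything else gets a weak "no"

def answer(question, candidates):
    """Try every candidate and return the one with the most confident 'yes'."""
    return max(candidates, key=lambda c: confidence_answer_is_correct(question, c))

candidates = ["Abraham Lincoln", "George Washington", "Thomas Jefferson"]
print(answer("Who was the first president?", candidates))  # George Washington
```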

And there’s another area of language use that also has plentiful labeled data: machine translation.

Machine learning gobbles up a feast of training data for translating between, say, English and Japanese, because there are tons of translated texts out there filled with English sentences and their corresponding Japanese translations.
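In other words, a parallel corpus is already labeled data in the supervised-learning sense: each source sentence is an input and its translation is the correct answer. A tiny illustration, with invented sentence pairs:

```python
# Invented English/Japanese sentence pairs, just to show the shape of the data:
# each pair is (input, correct answer), the same shape as any supervised task.
parallel_corpus = [
    ("Good morning.", "おはようございます。"),
    ("Thank you very much.", "どうもありがとうございます。"),
    ("Where is the station?", "駅はどこですか。"),
]

for english, japanese in parallel_corpus:
    print(f"input: {english}  ->  label: {japanese}")
```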

In recent years, Google Translate – which anyone can use online – swapped out the original underlying solution for a much-improved one driven by deep learning.

Go try it out – translate a letter to your friend or relative who has a different first language than you.

I use it a lot myself.

On the other hand, general competence with natural languages like English is a hallmark of humanity – and only humanity.

There’s no known roadmap to fluency for our silicon sisters and brothers.

When we humans understand one another, underneath all the words and somewhat logical grammatical rules is “general common sense and reasoning.” You can’t work with language without that very particular human skill.

Which is a broad, unwieldy, amorphous thing we humans amazingly have.

So our hopes and dreams of talking computers are dashed because, unfortunately, there’s no labeled data for “talking like a person.” You can get the right data for a very restricted, specific task, like handling TV quiz show questions, or answering the limited range of questions people might expect Siri to be able to answer.

But the general notion of “talking like a human” is not a well-defined problem.

Computers can only solve problems that are precisely defined.

So we can’t leverage machine learning to achieve the typical talkative computer we see in so many science fiction movies, like the Terminator, 2001’s evil HAL computer, or the friendly, helpful ship computer in Star Trek.

You can converse with those machines in English very much like you would with a human.

It’s easy.

Ya just have to be a character in a science fiction movie.

HAL, from the movie “2001: A Space Odyssey”

Now, if you think you don’t already know enough about AI, you’re wrong.

There is nothing to know, because it isn’t actually a thing.

There’s literally no meaningful definition whatsoever.

AI poses as a field, but it’s actually just a fanciful brand.

As a supposed field, AI has many competing definitions, most of which just boil down to “smart computer.”

I must warn you, do not look up “self-referential” in the dictionary.

You’ll get stuck in an infinite loop.

The Batcomputer

Many definitions are even more circular than “smart computer,” if that’s possible.

They just flat out use the word “intelligence” itself within the definition of AI, like “intelligence demonstrated by a machine.”

If you’ve assumed there are more subtle shades of meaning at hand, surprise – there aren’t.

There’s no way to resolve how utterly subjective the word “intelligence” is.

For computers and engineering, “intelligence” is an arbitrary concept, irrelevant to any precise goal.

All attempts to define AI fail to resolve its vagueness.

Now, in practice the word is often just – confusingly – used as a synonym for machine learning.

But as for AI as its own concept, most proposed definitions are variations of the following three:

This is a duck.

By the way, the points in this article also apply to the term “cognitive computing,” which is another poorly-defined term coined to allege a relationship between technology and human cognition.
