Towards a Quantitative Measure of Intelligence: Breaking Down One of the Most Important AI Papers of 2019, Part I

The ability of a system to play Go doesn’t mean it can understand Shakespeare or reason through economic problems.

As humans, we judge intelligence based on abilities such as analytical and abstract reasoning, memory, common sense and many others.

In the history of science, there have been two fundamental schools of thought that shaped specific definitions of intelligence.

   Throughout the history of science, there have been two dominant views of intelligence: the Darwinist view of evolution and Turing’s view of machine intelligence.

In Darwin’s theory of evolution, human cognition is the result of special-purpose adaptations that arose to solve specific problems encountered by humans throughout their evolution.

One of the clearest expressions of this theory was captured by AI legend Marvin Minsky when he outlined a task-centric definition of AI: “AI is the science of making machines capable of performing tasks that would require intelligence if done by humans.”

The evolutionary view of intelligence is directly related to a vision of the mind as a wide collection of vertical, relatively static programs that collectively implement intelligence.

For historical reasons, this vision of intelligence has become very influential in the field of AI, producing systems that are extremely effective at mastering individual tasks without showing any real signs of general intelligence.

A contrasting and somewhat complementary perspective to the Darwinist view of intelligence was pioneered by Alan Turing.

In a paper from 1959, Turing offered some interesting remarks on the characteristics of intelligence: “If we are ever to make a machine that will speak, understand or translate human languages, solve mathematical problems with imagination, practice a profession or direct an organization, either we must reduce these activities to a science so exact that we can tell a machine precisely how to go about doing them or we must develop a machine that can do things without being told precisely how.”

Turing’s vision of intelligence is inspired by British philosopher John Locke’s tabula rasa theory, which sees the mind as a flexible, adaptable, highly general process that turns experience into behavior, knowledge, and skills.

The evolution of AI has been deeply influenced by both Darwin’s and Turing’s theories of intelligence.

The current generation of AI models certainly focuses on specific tasks, but it also accumulates knowledge based on interactions with an environment and other agents.

The combination of the two foundational theories of intelligence gave rise to a key concept in modern AI: generalization.

   The notion of generalization is omnipresent in AI and, particularly, in modern deep learning algorithms.

Broadly speaking, generalization can be defined as “the ability to handle situations (or tasks) that differ from previously encountered situations”.

In its simplest form, generalization refers to how well AI models can apply knowledge acquired during training to a held-out test dataset.

In more ambitious forms, generalization refers to the ability of AI models to apply knowledge acquired performing a specific task to a completely different task.
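To make the simplest form concrete, here is a minimal sketch of measuring generalization as the gap between training and test accuracy. The dataset, model, and split ratio are illustrative assumptions on my part, not something prescribed by the paper.

    # A minimal sketch: quantifying generalization as the gap between
    # training and test accuracy. Dataset, model and split are assumptions.
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=42)

    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)

    train_acc = model.score(X_train, y_train)
    test_acc = model.score(X_test, y_test)

    # A small gap means the model handles new points from the same
    # distribution; it says nothing about transfer to different tasks.
    print(f"train accuracy:     {train_acc:.3f}")
    print(f"test accuracy:      {test_acc:.3f}")
    print(f"generalization gap: {train_acc - test_acc:.3f}")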

From a qualitative standpoint, there are several dimensions of generalization that are relevant in AI models:

I. Absence of Generalization: The notion of generalization as we have informally defined it above fundamentally relies on the related notions of novelty and uncertainty: a system can only generalize to novel information that could not be known in advance to either the system or its creator.

AI systems in which there is no uncertainty do not display generalization (see the toy sketch after this list).

II. Local Generalization, or “Robustness”: This is the ability of a system to handle new points from a known distribution for a single task or a well-scoped set of known tasks, given a sufficiently dense sampling of examples from the distribution (e.g. tolerance to anticipated perturbations within a fixed context).

III. Broad Generalization, or “Flexibility”: This is the ability of a system to handle a broad category of tasks and environments without further human intervention.

This includes the ability to handle situations that could not have been foreseen by the creators of the system.

This could be considered to reflect human-level ability in a single broad activity domain.

IV. Extreme Generalization: This describes open-ended systems with the ability to handle entirely new tasks that only share abstract commonalities with previously encountered situations, applicable to any task and domain within a wide scope.

This could be characterized as “adaptation to unknown unknowns across an unknown range of tasks and domains”.
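To make the lower levels of this taxonomy concrete, here is a toy contrast between levels I and II; both systems and their inputs are invented for this sketch rather than taken from the paper.

    import numpy as np

    # Level I, absence of generalization: a hard-coded lookup table.
    # Every input it handles is known in advance, so there is no
    # novelty and nothing to generalize to.
    answers = {(1, 1): 2, (2, 3): 5}

    def lookup_adder(a, b):
        return answers[(a, b)]  # raises KeyError on any unseen pair

    # Level II, local generalization: a linear rule fit on sampled
    # pairs that also handles new points from the same distribution.
    rng = np.random.default_rng(0)
    pairs = rng.integers(0, 100, size=(50, 2)).astype(float)
    sums = pairs.sum(axis=1)
    weights, *_ = np.linalg.lstsq(pairs, sums, rcond=None)

    def learned_adder(a, b):
        return float(np.dot(weights, [a, b]))

    print(lookup_adder(2, 3))     # 5: an input known in advance
    print(learned_adder(40, 17))  # ~57.0: a pair never seen in training

Neither system exhibits broad or extreme generalization: both remain confined to a single, narrowly scoped task.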

Interestingly enough, the different dimensions of generalization outlined above mirror the organization of human cognitive abilities as laid out by theories of the structure of intelligence in cognitive psychology.

Furthermore, we can use the previous taxonomy of generalization to create a hierarchical representation of intelligence, as shown in the following figure.

[Figure: a hierarchical representation of intelligence based on the taxonomy of generalization]

I think we can all agree that the current generation of AI systems is focused on task-specific and local intelligence, but it is also evolving rapidly.

Using the hierarchy outlined above, we can begin to sketch a framework for measuring intelligence across broad skills and general abilities.

This will be the subject of the second part of this article.

  Original.

Reposted with permission.
