A Curious Theory About the Consciousness Debate in AI

I recently started a new newsletter focused on AI education.

TheSequence is a no-BS (meaning no hype, no news, etc.) AI-focused newsletter that takes 5 minutes to read.

The goal is to keep you up to date with machine learning projects, research papers and concepts.

Please give it a try by subscribing below:

I was recently having a debate about strong vs. weak AI with one of my favorite new thinkers in this market, and it reminded me of something that I wrote over a year ago.

So I decided to dust it off and restructure those thoughts in a new article.

With all the technological hype about artificial intelligence (AI), I find it sometimes healthy to go back to its philosophical roots.

Of all the philosophical debates surrounding AI, none is more important than the weak vs. strong AI problem.

From a technological standpoint, I subscribe to the idea that we are one or two breakthroughs away from achieving some form of strong or general AI.

However, from a philosophical standpoint there are still several challenges that need to be reconciled.

Many of those challenges can be explained by an obscure theory pioneered by an Austro-Hungarian mathematician in the last century and by one of the leading areas of research in neuroscience.

In AI theory, weak AI is often associated with the ability of systems to appear intelligent while strong AI is linked to the ability of machines to think.

 By thinking I mean really thinking and not just simulated thinking.

This dilemma is often referred to as the “Strong AI Hypothesis”.

 In a world exploring with digital assistants and algorithms beating GO World Champions and Dota2 teams, the question of whether machines can act intelligently seems silly.

In constrained environments (e.g., medical research, Go, travel), we have been able to build plenty of AI systems that can act as if they were intelligent.

While most experts agree that weak AI is definitely possible, there is still tremendous skepticism when it comes to strong AI.

These questions have haunted computer scientists and philosophers since the publication of Alan Turing’s famous paper “Computing Machinery and Intelligence” in 1950.

The question also seems a bit unfair when most scientists can’t even agree on a formal definition of thinking.

To illustrate the confusion around the strong AI hypothesis, we can use some humor from the well-known computer scientist Edsger Dijkstra, who in a 1984 paper compared the question of whether machines can think with questions such as “can submarines swim?” or “can airplanes fly?”.

While those questions seem similar, most English speakers will agree that airplanes can, in fact, fly but submarines can’t swim.

Why is that? I’ll leave that debate to you and the dictionary 😉 The meta-point of this comparison is that, without a universal definition of thinking, it seems irrelevant to obsess about whether machines can think.

One of the main counterarguments to strong AI states that, essentially, it might be impossible to determine whether a machine can really think.

This argument has its basis in one of the most famous mathematical theorems of all time.

When we talk about the greatest mathematical theorems in history that have had a broad impact on our way of thinking, we need to reserve a place for Gödel’s Incompleteness Theorem.

In 1931, mathematician Kurt Gödel demonstrated that deduction has its limits by proving his famous incompleteness theorem.

Gödel’s theorem states that, in any consistent formal theory strong enough to do arithmetic (such as the systems underlying AI), there are true statements that have no proof within that theory.
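To make that statement a bit more concrete, here is a rough, informal rendering of the first incompleteness theorem; the notation below is standard logic shorthand I am adding for illustration, not something from the original argument.

```latex
% Rough statement of Gödel's first incompleteness theorem.
% F:   any consistent, effectively axiomatized formal theory that
%      can express elementary arithmetic.
% G_F: the "Gödel sentence" constructed for F.
\[
  \exists\, G_F \;:\; F \nvdash G_F
  \quad\text{and}\quad
  F \nvdash \neg G_F
\]
% Under the standard interpretation of arithmetic, G_F is true,
% so F contains a true statement that it cannot prove.
```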

The incompleteness theorem has long been used as an objection to strong AI.

The proponents of this objection argue that strong AI agents won’t be able to really think because they are limited by the incompleteness theorem, while human thinking clearly is not.

That argument has sparked a lot of controversy and has been rejected by many strong AI practitioners.

The most common counterargument from the strong AI school is that it is impossible to determine whether human thinking is subject to Gödel’s theorem, because any proof would require formalizing human knowledge, which we know to be impossible.

   My favorite argument in the strong AI debate is about consciousness.

Can machines really think or just simulate thinking? If machines are able to think in the future, that means they will need to be conscious (meaning aware of their state and actions), as consciousness is the cornerstone of human thinking.

The skepticism about strong AI has sparked arguments ranging from classic mathematical theory such as Gödel’s Incompleteness Theorem to pure technical limitations of AI platforms.

However, the main area of debate remains at the intersection of biology, neuroscience, and philosophy, and has to do with the consciousness of AI systems.

   There are many definitions and debates about consciousness.

Certainly enough to dissuade most sane people from pursuing the argument about its role in AI systems 😉 Most definitions of consciousness involve self-awareness, or the ability of an entity to be aware of its mental states.

Yet, when it comes to AI, self-awareness and mental states are not clearly defined either, so we can quickly start going down a rabbit hole.

In order to be applicable to AI, a theory of consciousness needs to be more pragmatic and technical and less, let’s say, philosophical.

My favorite definition of consciousness that follows these principles comes from the renowned physicist Michio Kaku, professor of theoretical physics at the City College of New York and one of the co-founders of string field theory.

A few years ago, Dr. Kaku presented what he called the “space-time theory of consciousness” to bring together the definitions of consciousness from fields such as biology and neuroscience.

In his theory, Dr. Kaku defines consciousness as follows: “Consciousness is the process of creating a model of the world using multiple feedback loops in various parameters (ex: temperature, space, time, and in relation to others), in order to accomplish a goal (ex: find mates, food, shelter).”

The space-time definition of consciousness is directly applicable to AI because it is based on the ability of the brain to create models of the world based not only on space (like animals) but also in relation to time (backwards and forwards).

From that perspective, Dr. Kaku defines human consciousness as “a form of consciousness that creates a model of the world and then simulates it in time, by evaluating the past to simulate the future.” In other words, human consciousness is directly related to our ability to plan for the future.
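To make the definition a bit more tangible, here is a minimal toy sketch (my own illustration, not anything from Dr. Kaku's work) of an agent that maintains a model of the world from a couple of feedback loops and then simulates candidate futures in time to pick the action that best accomplishes its goal. The class, parameter, and method names below are invented for the example.

```python
import random

# Toy illustration of the "space-time" definition of consciousness:
# an agent updates a world model from feedback loops (hypothetical
# parameters such as temperature and position), then simulates
# candidate futures and picks the action that best serves its goal.

class ToyAgent:
    def __init__(self, goal_position: float):
        self.goal_position = goal_position
        self.model = {"temperature": 20.0, "position": 0.0}  # current world model

    def sense(self, temperature: float, position: float) -> None:
        """Feedback loops: refresh the world model from new observations."""
        self.model["temperature"] = temperature
        self.model["position"] = position

    def simulate_future(self, action: float, steps: int = 5) -> float:
        """Roll the model forward in time under a candidate action."""
        position = self.model["position"]
        for _ in range(steps):
            position += action  # naive forward dynamics
        return abs(position - self.goal_position)  # predicted distance to goal

    def act(self) -> float:
        """Choose the action whose simulated future best accomplishes the goal."""
        candidates = [-1.0, 0.0, 1.0]
        return min(candidates, key=self.simulate_future)

agent = ToyAgent(goal_position=10.0)
agent.sense(temperature=22.5, position=random.uniform(-2, 2))
print("chosen action:", agent.act())
```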

In addition to its core definition, the space-time theory of consciousness distinguishes several types, or levels, of consciousness, which are described below.

Consciousness is one of the most passionately debated subjects in the AI community.

By AI consciousness, we are referring to the ability of an AI agent to be aware of its own “mental state”.

The previous part of this essay introduced a framework pioneered by the well-known physicist Dr. Michio Kaku to evaluate consciousness at four different levels.

In Dr. Kaku’s theory, Level 0 consciousness describes organisms such as plants that evaluate their reality based on a handful of parameters such as temperature.

Reptiles and insects exhibit Level 1 consciousness as they create models of the world using new parameters including space.

Level 2 consciousness involves creating models of the world based on emotions and the relationship to other species.

Mammals are the main group associated with Level 2 consciousness.

Finally, we have humans that can be classified at Level 3 consciousness based on models of the world that involve simulations of the future.
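As a quick recap, the four levels can be summarized in a small lookup structure. The snippet below is just my own shorthand summary of the framework described above, not part of Dr. Kaku's formulation.

```python
# Shorthand summary of the four consciousness levels described above.
# Field names are my own, added only for illustration.
CONSCIOUSNESS_LEVELS = {
    0: {"examples": "plants",            "models built from": "a few parameters (e.g., temperature)"},
    1: {"examples": "reptiles, insects", "models built from": "space (the surrounding environment)"},
    2: {"examples": "mammals",           "models built from": "emotions and relationships to others"},
    3: {"examples": "humans",            "models built from": "simulations of the future (planning in time)"},
}

for level, info in CONSCIOUSNESS_LEVELS.items():
    print(f"Level {level}: {info['examples']} -> {info['models built from']}")
```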

Based on Dr. Kaku’s consciousness framework, we can evaluate the level of consciousness of the current generation of AI technologies.

Most experts agree that AI agents today can be classified at Level 1 or very early Level 2 consciousness.

Ranking AI agents at Level 1 involves many factors including mobility.

Many AI agents today have been able to achieve mobility and develop models of their environment based on the space around them.

However, most AI agents have a lot of difficulty operating outside their constrained environment.

Space evaluation is not the only factor placing AI agents at Level 1 consciousness.

The number of feedback loops used to create models is another super important factor to consider.

Let’s use image analysis as an example.

Even the most advanced vision AI algorithms use a relatively small number of feedback loops to recognize objects.

If we compare those models with the cognitive abilities of insects and reptiles, they seem rather unsophisticated.

So yes, the current generation of AI technologies has the level of consciousness of an insect 😉

Steadily, some AI technologies have been exhibiting characteristics of Level 2 consciousness.

There are several factors contributing to that evolution.

AI technologies are getting better at understanding and simulating emotions, as well as at perceiving emotional reactions around them.

In addition to the evolution of emotion-based AI techniques, AI agents are getting more efficient at operating in group environments in which they need to collaborate or compete with each other in order to survive.

In some cases, the group collaboration has even resulted in the creation of new cognitive skills.

To see some recent examples of AI agents that have exhibited Level 2 consciousness we can refer to the work of companies such as DeepMind and OpenAI.

Recently, DeepMind conducted experiments in which AI agents needed to live in an environment with limited resources.

The AI agents showed different behaviors when resources were abundant than when they were scarce.

The behavior changed as the agents needed to interact with each other.
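As a very rough intuition for that kind of result (this is a toy sketch I made up, not DeepMind's actual environment or agents), imagine agents that switch from purely gathering behavior to more aggressive, competitive behavior once a shared resource pool drops below a scarcity threshold:

```python
import random

# Toy illustration (not DeepMind's experiment): agents drawing from a
# shared resource pool gather cooperatively while resources are abundant,
# and start competing (blocking rivals) once resources become scarce.

def step(agents, resources, scarcity_threshold=20):
    """One round: each agent picks a behavior based on perceived scarcity."""
    actions = {}
    for agent in agents:
        if resources > scarcity_threshold:
            actions[agent] = "gather"  # plenty for everyone
        else:
            actions[agent] = random.choice(["gather", "block_rival"])  # compete
    gathered = sum(1 for a in actions.values() if a == "gather")
    return actions, max(resources - gathered, 0)

agents = ["agent_a", "agent_b", "agent_c"]
resources = 30
for round_id in range(12):
    actions, resources = step(agents, resources)
    print(round_id, resources, actions)
```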

Another interesting example can be found in a recent OpenAI simulation experiment in which AI agents were able to create their own language using a small number of symbols in order to better coexist in their environment.

Pretty cool, huh?

These are still very early days for mainstream AI solutions, but enhancing the level of consciousness of AI agents is one of the most important goals of the current generation of AI technology stacks.

Level 2 consciousness is the next frontier!

At the moment, Level 3 consciousness in AI systems is still an active subject of debate.

However, recent systems such as OpenAI Five or DeepMind’s Quake III agents have clearly shown the ability of AI agents to do long-term planning and collaboration, so we might not be that far off.

The short, and maybe surprising, answer is YES.

Applying Dr. Kaku’s space-time theory of consciousness to AI systems, it is obvious that AI agents can exhibit some basic forms of consciousness.

Factoring in the capabilities of the current generation of AI technologies, I would place the consciousness of AI agents at Level 1 (reptiles and insects) or basic Level 2.

  Original.

Reposted with permission.
