The Future and Philosophy of Machine Consciousness

Exploring the exciting and scary possibility of sentient robots

By Shayaan Jagtap

These days, science fiction is more eager than ever before to explore the possibilities of human-machine relations.

Complex, emotional, and often thought-provoking, these stories have gained massive popularity with fans and futurists alike.

Some notable, and rather dark, examples are:

- Black Mirror (S1E2), "Be Right Back": a grieving widow recreates her late husband's personality in an uncanny robot body.
- Ex Machina: a man falls in love with a robot that betrays him to win her freedom from captivity.
- Her: a recently divorced, heartbroken man finds companionship in a sophisticated chatbot, and it eventually turns into love.

These stories are scary because they force us to confront possible realities that we may not understand, or know how to answer.

One day, we won’t be able to turn off the TV and return to a life where those situations aren’t at our front door.

But, even if that day isn’t here yet, it’s important to start exploring the philosophical questions that become increasingly relevant as these futuristic machines slowly become reality.

Today, we’ll explore questions related to machine consciousness, such as:

- Can a machine think?
- Can a machine experience emotions?
- Can a machine be conscious?

We’ll try to keep things objective, taking diverse perspectives from multiple sources.

Here goes.

Can a machine think?

Short answer: sure, why not.

Long answer: It’s complicated.

In 1950, Alan Turing, known as the father of modern computing and artificial intelligence, wondered the same thing.

In an attempt to answer, he proposed the famous Turing test.

Put simply:

“The Turing test, developed by Alan Turing in 1950, is a test of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.”

Turing argued that thinking machines can exist.

Thinkers today argue both ways, but the needle tips towards agreement with Turing’s original arguments.

But, what is thinking? This question might be more subjective than it first appears.

A common thought experiment that argues against the validity of the Turing test as a proxy for whether machines can think is called the Chinese Room.

John Searle, author of the paper that proposed this thought experiment, argued that a chatbot does not think: it merely manipulates symbols it has no understanding of, and that is not thinking.

Essentially, the Chinese room highlights that what we perceive as machine intelligence is just a kind of computation.

But, if silicon chips and electrical circuits are just the machine analog of our fatty tissue and chemical brains, what’s the difference?

This is the crux of the brain simulator response (a computationalist argument) to the Chinese Room.

The computational theory of mind states that the human mind is just information processing in a physical system.

The brain simulator response argues that on the smallest scales, brains and processors are just two systems of information — they exchange data, update state, and produce outputs.
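To make that framing concrete, here is a minimal Python sketch of what “a system of information” means in this computational sense. The class name and the toy update rule are my own illustrative assumptions, not anything drawn from Searle or his critics; the point is only the abstract shape of the loop.

```python
# A minimal, illustrative "system of information": it accepts input,
# updates internal state, and produces output. Under the computational
# theory of mind, a brain and a processor both fit this abstract shape.

class InformationSystem:
    def __init__(self):
        self.state = 0  # internal state, e.g. activations or memory

    def step(self, data: int) -> int:
        # Exchange data, update state, produce an output.
        self.state = (self.state + data) % 256  # toy update rule
        return self.state

system = InformationSystem()
for signal in [3, 14, 15]:
    print(system.step(signal))
```

On this view, the disagreement is not about whether such loops exist in both substrates, but about whether running one is ever enough for understanding.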

[Image: the atomic units of our respective processors]

So, let’s assume that there is no soul, and that thinking really is just an emergent property of information processing.

In that case, are emotions part of thinking? Or are they a uniquely biological thing?

Can a machine experience emotions?

If “emotions” are defined only in terms of their effect on behavior or on how they function inside an organism, then emotions can be viewed as a mechanism that an intelligent agent uses to maximize the utility of its actions.

Given this definition of emotion, Hans Moravec believes that “robots in general will be quite emotional about being nice people”.

As stated above, emotions grant utility.

For example, two of the most primal emotions an animal can experience are fear and attraction.

The gut-stirring fear of dangerous environments, predators, and situations has stuck with us because it kept our ancestors alive.

Attraction towards nutritious food, safe environments, and reproductive partners is no different.

Those emotions helped us survive, and pass on our genes.

It’s worked pretty well so far.

As any pet owner will tell you, it’s wrong to suggest that emotions are a purely “human” trait.

Animals across the intelligence spectrum experience emotions.

So, why would robots be any different?

Hans Moravec argues that robots “will try to please you in an apparently selfless manner because it will get a thrill out of this positive reinforcement. You can interpret this as a kind of love.”

This argument rests on the fact that learning machines are built around function optimization.

Whether it’s maximizing a reward or fitness function (in reinforcement learning and genetic algorithms) or minimizing a cost function (in supervised learning), the goal is analogous: achieve the best score possible.

Knowing this, it’s easy to see how Moravec’s argument makes sense.

Define the robot’s objective function as pleasing you, and kick your feet up as it tries to get a high score.
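As a toy illustration of that idea (not Moravec’s own formulation), here is a minimal Python sketch in which an agent adjusts its behavior to maximize a hypothetical “pleasing you” reward. The reward function, its peak, and the hill-climbing rule are all invented for the example.

```python
import random

# Hypothetical reward: how pleased the owner is with a given behavior.
# The peak at 0.8 is arbitrary, chosen just for illustration.
def owner_reward(behavior: float) -> float:
    return -(behavior - 0.8) ** 2

# A trivial hill-climbing agent: propose a small random change and keep
# it only if the reward improves. The same "get the best score possible"
# logic underlies reward in RL, fitness in genetic algorithms, and
# (negated) cost in supervised learning.
behavior = 0.0
for step in range(1000):
    candidate = behavior + random.uniform(-0.05, 0.05)
    if owner_reward(candidate) > owner_reward(behavior):
        behavior = candidate

print(f"learned behavior: {behavior:.2f}")  # typically ends near 0.8
```

Whether the agent “feels a thrill” when the score goes up, or merely registers it, is exactly the question the rest of this piece circles around.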

So, in short, yes: to an outside observer, robots can display emotions.

But, emotions are also feelings.

Asking whether a machine can ever truly feel angry, happy, or sad touches on the question of consciousness, which we will explore next.

Can a machine be conscious?

The words “mind” and “consciousness” are used by different communities in different ways.

Is consciousness, essentially, an emergent property of intelligence? This would make sense, as we can say that a dog is more conscious than a chicken, which is more conscious than an ant.

With this line of thought, consciousness is not binary, but a continuum.

And, who is to say the scale ends with us? Genetically speaking, we’re hardly different from apes.

To declare humans the pinnacle of consciousness is hubristic, and surely not the case in terms of what is theoretically possible.

John Searle (of Chinese Room fame) theorized two types of AI:

- Strong AI: a physical system that can have a mind and mental states
- Weak AI: a physical system that can act intelligently

His goal was to distinguish the two in order to focus on the more “interesting” issue at hand, referred to as the hard problem of consciousness.

This problem is “hard” because we understand, and can replicate, many of the subsystems of consciousness (mental states, information processing, etc.), but not how they come together to form a being’s qualia.

We simply don’t understand a mechanism through which the function of consciousness is performed.

However, most AI researchers today are not concerned with the difference between Strong and Weak AI, since a general intelligence would only need to “act” intelligent, unless it is proven that some secret extra ingredient is required for “consciousness”.

If we can replicate a brain with software, it would (in theory) have all the capabilities of a human brain.
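To give a flavor of what “replicating a brain with software” means at the lowest level, here is a toy leaky integrate-and-fire neuron in Python. Both the model and its parameters are simplified illustrations of the general idea, not a claim about how an actual brain emulation would be built.

```python
# A toy leaky integrate-and-fire neuron, the kind of building block a
# software brain simulation might start from. Parameters are illustrative.

def simulate_neuron(inputs, threshold=1.0, leak=0.9):
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current  # integrate input, leak charge
        if potential >= threshold:              # fire when threshold is crossed
            spikes.append(1)
            potential = 0.0                     # reset after a spike
        else:
            spikes.append(0)
    return spikes

print(simulate_neuron([0.3, 0.4, 0.5, 0.1, 0.9]))  # -> [0, 0, 1, 0, 0]
```

Scaling something like this up to tens of billions of neurons, with realistic wiring, is the (enormous) engineering gap between the theory and the practice.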

In summary, these arguments show that until we find a quantifiable, objective mechanism responsible for consciousness, we must assume that a thinking, learning, generally intelligent machine is just as conscious as we are.

But, some people disagree.

Some new age thinkers describe consciousness as an “invisible, energetic fluid that permeates life and the mind.” This thinking follows the same vein as a soul, spirit, or some other otherworldly, ethereal part of a person that makes them more than a piece of flesh.

But, to consider this possibility, we again visit the problem of binary consciousness.

Do dogs, chickens, ants, etc., also possess souls? If not, then they are all equally (un)conscious, which certainly doesn’t seem to be the case.

There have been a number of experiments done to quantitatively measure the presence of a soul — none of which have given experimental confirmation.

You may say that the soul, and therefore consciousness, is not quantitatively measurable.

If that’s the case, an invisible, unmeasurable thing has no weight (get it?) in our discussion.

The arguments against machine consciousness, then, can be summed up as spiritual, and have no credibility as far as experimental evidence goes.

Overall, the scales tip towards the possibility of machine consciousness.

There are no real roadblocks that would rule it out.

In the end, we can’t say with certainty either way.

So, what now?

These are incredibly complex topics.

The ideas we’ve explored are barely a drop in the ocean of perspectives that others have been pondering for decades.

There is no black and white answer to any of these questions.

In the near future, nothing will change.

Our robots are far (enough) from anything available on Netflix to really cause any panic.

In the future, who knows? That’s an open-ended topic for another day.

I’ve simply laid out what I think are the most reasonable answers.

The important thing is to keep an open mind, and always be receptive to new information, especially regarding such complex topics.

If you found this post interesting, I encourage you to read further into the topic (check out the resources below).

Thanks for reading!

Further Reading (many of which I used for ideas for this article):

- Superintelligence by Nick Bostrom
- Philosophy of AI (Wikipedia)
- The Chinese Room argument
- Animal Consciousness
- Simulated Brain
