Hybrid Humans and Conscious Robots

If we assume that a theory of mind developed in order to read the minds of others, the better to predict whether they were going to help or hinder us, then there would be a survival edge to be gained by foiling this adaptation and finding a way to deceive it.

If a conscious theory of mind can construct an alibi for the things I’m doing, even when that alibi is expressly untrue, this could give me a survival edge in a highly social environment.

In the game of Cops and Robbers, if you develop a truth detector, then I need to develop a better lie fabricator, and round and round it goes.

At our present juncture, fake news may be spurring a similar kind of Red Queen race.

With the aid of computers and tools such as Photoshop, false narratives are easy to create, and it has become ever more difficult to differentiate truth from fiction.

Spinning false narratives has never been easier, and as a result there is an increased need for countermeasures to unravel these yarns.

Such countermeasures are also likely to take the form of an algorithm, albeit one residing in a computer.

For instance, there are algorithms trained to differentiate real images from ones that have been retouched in Photoshop.
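To make this concrete, here is a generic sketch of how such a detector might be built; it is illustrative only, not any specific published system, and it assumes PyTorch and a labeled dataset of authentic and retouched images are available.

# A generic sketch of a retouching detector: a small convolutional
# network trained to label images as authentic (0) or retouched (1).
# Illustrative only; real forensic detectors look for subtler
# statistical traces of editing. Assumes PyTorch is installed.
import torch
import torch.nn as nn

class RetouchDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 2)  # authentic vs. retouched

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.classifier(h)

# One training step on a (hypothetical) labeled batch of images:
model = RetouchDetector()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.randn(8, 3, 128, 128)   # stand-in for real image data
labels = torch.randint(0, 2, (8,))     # 0 = authentic, 1 = retouched
loss = nn.functional.cross_entropy(model(images), labels)
loss.backward()
optimizer.step()

The details would differ in a production system, but the training loop is the same in spirit: show the network labeled examples and let it learn the telltale signs of editing.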

Increasingly, it may be necessary for individuals to adopt the use of such algorithms to differentiate fake news from truthful accounts.

In such a manner we may find ourselves put under a kind of evolutionary pressure to fold such forms of artificial intelligence into our biological wetware, relying upon them to keep us one step ahead of the lie mongers.

Where such back and forth measures are likely to end is difficult to say, but the addition of “hybrid” forms of consciousness to our present biology is not out of the question.

Already there have been strides toward creating a neural lace by Neuralink, the company backed by Elon Musk, which could potentially enable such forms of hybrid consciousness.

Another source of competitive pressure that could lead to hybrid forms of consciousness is coming from the marketplace.

There currently exists considerable competition to gain admittance to top-ranked academic institutions, such that students and job seekers avail themselves of drugs like Adderall and Ritalin to secure the grades and exam scores needed to win these highly coveted positions.

If an applicant whose consciousness was aided by a neural lace or a CRISPR-enabled genetic modification showed a propensity to win access to the best schools and jobs, then such adaptations would likely spread quickly through the entire population.

The Red Queen effect can give rise to curious and unforeseen consequences, and it would be wise to consider carefully the competitive pressures we are presently subjecting ourselves to, lest they paint us into corners we later find to be distinctly dystopian.

Some recent experiments involve what might be considered “inanimate consciousness,” including synthetic agents that possess a theory of mind.

A theory of mind has been one measure used in differentiating artificial forms of intelligence from human intelligence.

To understand this difference, consider a simple experiment called the ultimatum game.

In the ultimatum game, one player receives a hundred dollars (or other unit of value) and must decide how they wish to split it with a second player.

The second player can either reject the offer, in which case both players get nothing, or accept it, in which case the money is divided as player one specified.

If player one is perfectly rational and lacks a theory of mind, it will offer far less than humans do, on the assumption that the other player will accept any offer greater than zero, since receiving something is always better than receiving nothing.

Or so runs the logic.

Lacking a theory of mind, player one fails to take into account that the other player is likely to feel slighted if offered only a single dollar, and to reject the proposal outright.

While a purely rational agent would be better off accepting the dollar than rejecting it, humans frequently reject such lowball offers.

This has been a thorn in the side of economists and psychologists, because the entire dogma of utility theory is built on the presupposition that people maximize utility and would accept such lowball offers the way a computer playing the game would.
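A minimal sketch in Python makes the contrast concrete; the fairness threshold below is hypothetical, chosen only for illustration, not drawn from any experiment.

# A minimal sketch of the ultimatum game. A purely "rational"
# responder accepts any positive offer; a human-like responder
# rejects offers it perceives as unfair, forfeiting the money
# to punish the proposer.

POT = 100  # total amount to split

def rational_responder(offer: int) -> bool:
    # Something is always better than nothing.
    return offer > 0

def humanlike_responder(offer: int, fairness_threshold: float = 0.3) -> bool:
    # Rejects offers below a (hypothetical) fairness threshold.
    return offer >= fairness_threshold * POT

for offer in (1, 10, 30, 50):
    print(f"offer ${offer}: rational accepts={rational_responder(offer)}, "
          f"human-like accepts={humanlike_responder(offer)}")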

Interestingly, people with autism, a condition frequently described as “mind blindness” because those affected often show a marked inability to read the minds of others from their facial expressions, tend to behave much more like purely rational agents than neurotypical players do.

In such games, autistic players tend to expect others to accept lowball offers and are content to accept such offers themselves.

This evidence from autism supports the idea that humans deploy a theory of mind in many strategic environments, and that it shapes their decision making.

One reason advanced for humans rejecting lowball offers is that we are accustomed to playing iterated prisoner’s dilemma games, that is, games repeated many times that possess a solution in which both players, by cooperating, end up better off than had they both acted selfishly.

If, as a player in the ultimatum game, I assume we are living in a world of iterated prisoner’s dilemma situations, then I have a motive to punish you for making a selfish offer, so that the next time we play you will not make a lowball offer again.
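The classic tit-for-tat strategy captures this punishment logic. The sketch below uses the conventional prisoner’s dilemma payoff values, which are standard in the literature rather than taken from the text: cooperate on the first round, then mirror the opponent’s last move.

# A minimal sketch of an iterated prisoner's dilemma with tit-for-tat.
# Payoffs are the conventional ones: mutual cooperation beats mutual
# defection, but unilateral defection pays best in a single round.

PAYOFFS = {  # (my move, your move) -> (my payoff, your payoff)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def tit_for_tat(opponent_history: list[str]) -> str:
    # Cooperate on the first round, then copy the opponent's last move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history: list[str]) -> str:
    return "D"

def play(strategy_a, strategy_b, rounds: int = 10):
    a_hist, b_hist, a_score, b_score = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(b_hist), strategy_b(a_hist)
        pa, pb = PAYOFFS[(a, b)]
        a_score, b_score = a_score + pa, b_score + pb
        a_hist.append(a)
        b_hist.append(b)
    return a_score, b_score

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation pays best overall
print(play(tit_for_tat, always_defect))  # defection is punished from round two on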

How might one go about creating an algorithm that uses a theory of mind to guess at the other player’s strategy and thus achieve an outcome more similar to a human one? One of the groups making progress on this is OpenAI.

In collaboration with researchers at Oxford University, the OpenAI team created a reinforcement learning algorithm that maintains a theory of mind about the other player when updating its own strategy in iterated games (Foerster et al., 2018).

The secret ingredient in their recipe was adding a term to the reinforcement learning update that captures changes in the second player’s strategy, that is, a theory of the other player’s mind.

While the math behind this can get a bit hairy, the principle is simple enough: if I know that you are learning too, then I need to take changes in your strategy into account when I formulate my own.
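In simplified notation (a sketch of the idea; the exact formulation in Foerster et al., 2018 differs in its details), let V^1 and V^2 be the two players’ expected returns and \theta^1, \theta^2 their policy parameters. A naive learner simply ascends its own return, ignoring the fact that the opponent is learning:

\theta^1 \leftarrow \theta^1 + \alpha \, \nabla_{\theta^1} V^1(\theta^1, \theta^2)

An opponent-aware learner anticipates the opponent’s own gradient step, \Delta\theta^2 = \eta \, \nabla_{\theta^2} V^2(\theta^1, \theta^2), and instead ascends V^1 evaluated at the opponent’s next parameters, which to first order adds a cross term coupling the two players’ learning:

V^1(\theta^1, \theta^2 + \Delta\theta^2) \approx V^1(\theta^1, \theta^2) + (\Delta\theta^2)^\top \nabla_{\theta^2} V^1(\theta^1, \theta^2)

That extra term is the “theory of mind”: it tells player one how player two’s learning will shift the strategic landscape.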

Their approach seemed to work: in iterated prisoner’s dilemma games, the algorithm learned reciprocal strategies that resemble human play far more than the strategies of purely selfish agents do.

This could well mark an important step towards creating artificial secondary consciousness.

Assuming for a moment that consciousness is a theory of mind turned in upon itself, then it would be a logical first step to model a theory of mind turned outwards, as in the OpenAI experiment.

This certainly seems to be a precursor in nature to more complex theories of mind.

For instance, dogs arguably possess a simple theory of mind.

You can observe dogs scanning your features and gestures, trying to guess if you are about to refill their food bowl or take them for a walk.

While I know of no such attempts to modify reinforcement learning to include a theory of mind about one’s own actions, it may not be far off.

While initially such nascent forms of machine consciousness would hardly be recognizable as similar to our own, perhaps in time they would develop the colorful psychological nuances, such as personal narratives, shame, denial, and projection, that characterize our own conscious minds.

It is important to note that even were this to happen, it does not follow that synthetic consciousnesses would pose a danger to humans.

Their goal states might still be predetermined and their actions limited to virtual tasks in virtual worlds.

As we examine the implications of AI and reinforcement learning, it is important to organize and prioritize their repercussions.

While an uprising of conscious robots might pose an existential threat to humans in the distant future, there are likely to be more proximate challenges, for instance, those resulting from increased job automation leading to ballooning wealth disparities.

Only after navigating these near term hurdles are we likely to encounter the sensational horrors betokened in so many science fiction novels and films.
