When Identity Becomes an Algorithm

Those of a religious persuasion would likely add to this list the interests of the soul.

The resulting dialogue between these different entities can be fairly divisive.

And the locus of control between them is increasingly flexible.

Consider that parts of our extended phenotype are already developing interests of their own.

Tools that contain advertising are at the forefront of a new type of extended phenotype, one that introduces the potential for a conflict of interest between a tool and its user.

When a tool acts in direct obedience to the brain controlling it, there is no question of a conflict of interest.

A hammer, for instance, contains no interests of its own and is completely subservient to the person wielding it.

The Facebook App running on a smartphone is another matter entirely.

Your interests in using the Facebook App may depart from the motivations of the app.

Facebook’s business model works through advertising, so the motivation of the app is for you to click on one of the advertisements it displays to you.

As author Andrew Lewis has quipped, “if you’re not paying for the product, you are the product”.

While this may seem only a nuisance at the moment, it hides a darker subtext.

As the engineers behind such advertising platforms become more adept at manipulating you into clicking things you otherwise would not have, the chances increase that you will find yourself sidetracked into shopping or researching products when you meant to write a thoughtful note to a friend on their birthday.

This is a very important departure from tools of the past.

Previously, tools tended to remain in direct alignment with the interests of their user.

With the advent of embedded advertising, software becomes like a Trojan horse, hiding its own agenda.

The question of whose interests a tool is serving will increasingly be up for grabs.

The story of how this came to be is an interesting one, with important consequences.

Travel backwards in time to Silicon Valley in the early 1980s, buzzing with young programmers riding the cresting wave of the personal computing boom.

Many of these young visionaries belonged to the free software movement.

Contrary to public opinion though, programmers need to eat too, and these young idealists found themselves at a crossroads — price their software or go broke.

Then a solution emerged, enabled by the internet.

Software could remain free if it contained advertising.

This made way for the profit models of companies like Facebook and Google.

Ostensibly this was a great deal for everyone: the programmers got rich while distributing their wares without a price tag.

But as they say in Texas, “there ain’t no free barbecue”, and unbeknownst to many users of the software, a catch was lurking in this business model.

Advertising, coupled with interactive software, leads potentially towards addiction and behavior modification.

It is almost a mathematical certainty, guaranteed by the profit-maximization principle.

When a piece of software makes money through advertising, it now serves two masters — one being the user, and another being the people who are paying for ad space.

The interests of these parties are guaranteed to diverge.

If a company makes money by how frequently a user clicks on an ad rather than by how effective the product is, then the product’s real purpose becomes getting the user to click on ads rather than helping the user accomplish something useful.
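To make the divergence concrete, here is a minimal sketch in Python. Everything in it, the item fields, the numbers, and the two scoring functions, is a hypothetical assumption invented for illustration, not any real platform’s code; it merely shows how ranking a feed by expected ad revenue and ranking it by user utility can produce opposite orderings.

```python
# Illustrative sketch (hypothetical names and numbers throughout):
# a feed ranker whose two possible objectives diverge.

def user_utility(item, user_goal):
    """What the user came for: relevance to their stated goal."""
    return item["relevance_to"].get(user_goal, 0.0)

def expected_ad_revenue(item):
    """What pays the bills: click probability times ad value."""
    return item["click_probability"] * item["ad_value"]

feed = [
    {"name": "friend_birthday_post", "relevance_to": {"birthday_note": 1.0},
     "click_probability": 0.01, "ad_value": 0.1},
    {"name": "shoe_ad", "relevance_to": {},
     "click_probability": 0.30, "ad_value": 2.0},
]

goal = "birthday_note"
by_utility = sorted(feed, key=lambda i: user_utility(i, goal), reverse=True)
by_revenue = sorted(feed, key=expected_ad_revenue, reverse=True)

# The two masters produce opposite orderings: the user's ranking puts
# the friend's post first; the profit-maximizing ranking puts the ad first.
print([i["name"] for i in by_utility])   # ['friend_birthday_post', 'shoe_ad']
print([i["name"] for i in by_revenue])   # ['shoe_ad', 'friend_birthday_post']
```

Whenever a distracting item out-earns a useful one, the profit-maximizing ordering must differ from the user’s, which is the sense in which the divergence is close to a mathematical certainty.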

Like the interests of the gods and goddesses of old, the interests of a company like Facebook are entirely imaginary, existing only in the collective imaginations of humans.

But while such corporations may only exist as fictitious entities, their interests, once embedded in silicon chips, are very real indeed, and may conflict with the interests of the humans using the software.

This is an important point to note as we begin an inventory of our extended phenotype.

Consider that it is already difficult to identify the interests of one’s brain versus the interests of one’s genes.

While they are closely allied, they’re not identical.

Now that our tools are also beginning to have interests of their own, the confusion compounds.

As people identify more strongly with their non-biological extended phenotype, i.e., social media personas and the algorithms that run them, their interests will to a large degree be modulated by the interests and requirements of this extended phenotype.

The needs pertaining to maintaining one’s Facebook identity could in fact predominate over the interests of the genes.

We already have examples of this in video game players who, identifying so strongly with the goals of characters within the game, have forgotten to feed themselves or their families.

One of the most crucial questions to consider going forward is where we will come to place the locus of control for our identity.

Will the tool chest of the genes stage a comeback in the form of genetic technologies like CRISPR, reuniting us with our biological containers, or will we continue the long march towards a non-biological extended phenotype, outsourcing more and more decisions to computers while gradually replacing our biochemical algorithms with inorganic ones?

In this light, deep reinforcement learning would seem to be a tremendous leap forward for the inorganic extended phenotype, since it enables that phenotype to tackle problems that previously only our brains could solve.
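To give a flavor of the technique being named here, the following is a minimal tabular Q-learning loop, the simplest ancestor of deep reinforcement learning (which swaps the table for a neural network). The toy two-state “engagement” environment, with its states, actions, and rewards, is an assumption invented for this sketch, not a real system.

```python
import random

# Minimal tabular Q-learning sketch. The toy "engagement" environment
# (states, actions, rewards) is a hypothetical invented for illustration.

STATES = ["idle", "browsing"]
ACTIONS = ["show_update", "show_ad"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def step(state, action):
    """Hypothetical dynamics: updates draw the user in; ads pay off
    only once the user is already browsing."""
    if state == "idle":
        return ("browsing", 1.0) if action == "show_update" else ("idle", 0.0)
    return ("idle", 2.0) if action == "show_ad" else ("browsing", 0.5)

q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
state = "idle"
for _ in range(5000):
    # Epsilon-greedy: mostly exploit current value estimates,
    # occasionally explore at random.
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q[(state, a)])
    next_state, reward = step(state, action)
    # Q-learning update: move the estimate toward the observed reward
    # plus the discounted value of the best next action.
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
    state = next_state

# The learned policy: which action the agent now prefers in each state.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES})
```

No one tells the agent what to do; it discovers, purely from reward, a policy for steering its environment, which is exactly the capacity for decision making that the following passage credits with moral weight.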

What’s more, if human life is indeed a drama of decision making, as our art, religion, and even our legal system treat it, then it follows that machines possessed of reinforcement learning abilities are, in effect, moral agents.

Either something will have to shift in what we consider a moral agent to be, or we will have to expand our thinking and treatment of such machines to encompass moral agency.

Untangling the legal responsibilities and protections that follow from this chain of logic is likely to prove a daunting yet unavoidable task.

Right now these are mere fringe issues with little relevance outside academia, but they are almost certainly destined to become questions of enduring importance.

What’s more, the window for making meaningful progress on them is likely to be far shorter than previously believed.

Consider the increasingly common phenomenon of Facebook pages or email accounts that survive their owner’s death.

If such digital personas were endowed with reinforcement learning algorithms allowing them to continue responding and adapting to the stimuli they received through posts, messages, etc., perhaps in accordance with the style and goals laid down by their original user, then in a very real way one’s inorganic extended phenotype could persist long after one’s biological death.

Should such inorganic phenotypes be afforded any legal protection?

If we were to remove a person’s brain and keep it alive inside a computer such that it could continue to write messages and communicate, we might consider that person still alive in some sense and offer them some legal protection.

The comparison is not altogether unreasonable, and such bizarre questions of identity are likely to be thrust upon us much sooner than we realize.

While our present phenotype could be described as a kludge of biochemical algorithms assisting in the survival and replication of the species, our future phenotype is likely to resemble a kludge of inorganic algorithms, whose purpose and design will be far more variable than those dictated by the strict terms of evolutionary fitness.

While we currently give our extended phenotype a limited degree of autonomy, e.g., we may allow Google to generate automated responses to emails or schedule appointments for us, that autonomy is growing.

There is certainly a danger in offshoring too much of our decision making to faculties we do not wholly own.

If Google or Facebook owns large portions of our extended phenotype, then we must add to the locus of our decision making the interests and wants of the corporations owning our extended phenotype.

A careful inventory of one’s extended phenotype, and of the various interest groups whose influence or control it is under, is perhaps the most underexplored region of our education today.

When we walk into a friend’s house and ask for the wifi password, whose interests are we serving — those of our genes, our brain, or our extended phenotype in the form of the Facebook app?

By failing to realize that these are all different entities cohabiting within our extended phenotype, we easily fall under the thrall of the one with the loudest agenda.

Certainly, the brain and body must sign off on the order to ask for a wifi password, since they represent choke points in the decision process.

However, the real string puller may be the Facebook app when we find ourselves diverted to clicking on ads after we get online.

Like a fungus found in the Brazilian rainforest that spreads by invading an ant’s body and turning it into a zombie, marching the fungus to a new location before the ant dies, so we may find ourselves co-opted by manipulative software and reprogrammed to do its bidding.

This is certainly not the future we would hope for, but it may be the unintended consequence of certain profit models.

Going forward, it will be increasingly crucial to make a sincere accounting of our extended phenotype in order to avoid coming under the thrall of manipulative software that hijacks our bodies and pursues goals we would not otherwise have chosen.
