AI & Ethics: Are We Making It More Difficult On Ourselves?

Well, not always.

Contrast these ideas with the widespread accusation, and belief, that there are biased algorithms everywhere in Silicon Valley.

That certain groups benefit from what should be (at least in certain minds) unbiased equations.

Now, consider how these algorithms are created.

Or more importantly, where they are created.

The Walled Off Data Problem

In the past, we were accustomed to obtaining data from a single source.

Or at least, very few sources.

And by “data” we mean thousands upon thousands of bits of information that, put together, create a coherent, workable model of algorithmic goodness.

The problem, perhaps unintentionally created by restrictive data protection laws, is that data becomes harder to come by legally.

Because of concerns regarding AI & ethics, we’re walling off data like never before.

Keeping it restricted.

Now, that may not sound like a bad thing if your mind conjures up images of a telemarketing firm looking to create a model so it knows whom to call and bother at dinnertime.

It may be a bad thing if you’re a University Medical Research Department building a model to predict, diagnose, or even cure disease.

We’ve spoken at length in the past about “the silo problem” as it relates to development and deployment.

Specialized teams are able to exhibit hyper-focused attention to one specific aspect of the problem.

However, that hyper-focus doesn’t necessarily yield the best results or the best end product.

The same can be said of approaching data in a silo.

To tackle the world’s problems, or even attempt to do so, we need access to a lot of data.

And as growing restrictions further cordon off that data, we run the risk of biasing our own data pool.

To be clear, when we talk about being able to gather data in one place, we mean a wide array of data that is accessible from a single source, not a wide array of data that originates from a single source.

Let’s Bake Some Bread

For example, it’s great to be able to go to a supermarket where we can purchase bread, milk, meat, and vegetables all in one place.

The supermarket is a great source of a lot of different types of products (data).

If we wanted to build an algorithm to track or predict what groceries people purchase, a supermarket would be a good place to start.

Why? Because we know that the shoppers there are going to purchase a wide variety of items, across different types and variants.

We’ll be able to view a veritable ton of data to build our model.

Now, let’s suppose that supermarkets didn’t exist.

Indeed, it may be seen as “safer” or “better” to get your milk from a milkman, your produce from a vegetable market, or your bread from a baker specifically.

However, it’s far less convenient and far more restrictive.

If you are purchasing your bread from a single source, you are beholden to that single source and all the characteristics of that source.

How, then, are we to build a model to track grocery purchases when we only have easy access to the baker’s data?

This is how we wind up unintentionally biasing our own algorithms.
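To make the point concrete, here’s a minimal Python sketch, with every source, shopper, and item invented purely for illustration, of how a single-source view skews what a model sees compared to a supermarket-style aggregated view:

```python
import pandas as pd

# Hypothetical purchase logs: each specialist source only ever
# records its own product line.
baker = pd.DataFrame({
    "shopper_id": [1, 2, 3],
    "item": ["sourdough", "baguette", "rye"],
    "category": ["bread", "bread", "bread"],
})
dairy = pd.DataFrame({
    "shopper_id": [1, 3, 4],
    "item": ["milk", "butter", "yogurt"],
    "category": ["dairy", "dairy", "dairy"],
})
produce = pd.DataFrame({
    "shopper_id": [2, 4, 5],
    "item": ["carrots", "apples", "kale"],
    "category": ["produce", "produce", "produce"],
})

# Silo view: only the baker's data is easily (and legally) accessible.
silo = baker
print(silo["category"].value_counts(normalize=True))
# bread    1.0  -> the model "learns" that shoppers only ever buy bread

# Supermarket view: one access point aggregating many origins.
supermarket = pd.concat([baker, dairy, produce], ignore_index=True)
print(supermarket["category"].value_counts(normalize=True))
# bread, dairy, produce each ~0.33 -> a far more realistic picture
```

Nothing about the baker’s data is wrong; it’s simply incomplete, and a model can’t know what it never sees.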

Open Borders Data

This is not to say there are no culturally significant social biases that can be built into algorithms or data practices.

They absolutely can be.

However, it is becoming increasingly difficult to build culturally significant, culture-spanning models because of the increasing difficulty of legally obtaining data past certain roadblocks.

As a result, a model built in Silicon Valley might reflect the demography of Silicon Valley.

A model built in India might reflect the demography of India, and so on.

And one of the issues with this one-size-fits-all approach is that it becomes difficult to meaningfully create a model from a set of data which may not reflect all users or all components, or to reach a realistic, ideal, or meaningful outcome if the data has already been biased in one way or another.
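One rough way to see that mismatch is to compare the demographic mix of the training data against the population the model will actually serve. A minimal sketch, with the group labels and shares entirely invented:

```python
# Hypothetical demographic shares: the mix in the training data
# (collected in one region) vs. the population the model will serve.
train_mix  = {"group_a": 0.70, "group_b": 0.20, "group_c": 0.10}
deploy_mix = {"group_a": 0.30, "group_b": 0.30, "group_c": 0.40}

# Total variation distance: 0.0 = identical mixes, 1.0 = no overlap.
tvd = 0.5 * sum(abs(train_mix[g] - deploy_mix[g]) for g in train_mix)
print(f"distribution gap (TVD): {tvd:.2f}")  # 0.40 -- a large mismatch
```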

Again, in most people’s eyes, that’s not a bad thing if we’re stopping telemarketing.

It can be a bad thing if we’re using concerns of AI & ethics to cut off our own nose to spite our face.

The future of data collection and analysis is likely to look more like this: collect locally, repeat globally.

It’s a longer and more involved process to be sure.

However, the greater the push for enhanced data protection, the more restrictive access will become.
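One plausible reading of “collect locally, repeat globally” is a federated-style pattern: each region fits a model on data that never leaves that region, and only the fitted parameters travel. A minimal sketch under that assumption, with all regions and data simulated:

```python
import numpy as np

def local_fit(X, y):
    """Fit a least-squares linear model on data that stays in-region."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef, len(y)

# Simulated per-region datasets; the raw rows are never pooled centrally.
rng = np.random.default_rng(0)
regions = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ np.array([2.0, -1.0]) + rng.normal(scale=0.1, size=100)
    regions.append((X, y))

# Collect locally: each region ships only coefficients + sample count.
fits = [local_fit(X, y) for X, y in regions]

# Repeat globally: combine the local fits, weighted by region size.
total = sum(n for _, n in fits)
global_coef = sum(coef * n for coef, n in fits) / total
print(global_coef)  # near the true [2.0, -1.0], with no raw-data pooling
```

Slower and more involved than pooling everything in one place, as the post says, but it keeps the raw data behind each wall while still producing a shared result.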

So What Do We Do?

To a large degree, the conversation over AI & ethics is just getting started.

And that’s a good thing.

Because as we said earlier, we believe there’s an inherent responsibility for those who operate in this space to continue to ask these questions.

Namely, are we behaving ethically? Are we contributing meaningful thought as well as action to the public space and public debate surrounding these questions?

As the technologies evolve, these questions need to continue to be asked.

To a degree, we believe that personal (and corporate) responsibility has to come into play.

Government regulation can and will assist in pointing out the correct path.

However, it will come with its own drawbacks and downsides as mentioned above.

There are good reasons for wanting regulation such as GDPR and the tightening regulations in the USA as well.

However, there are unintentional downsides such as those outlined above.

It also makes it difficult for newcomers to the space to get started.

This relegates operations to a select few who have the means, resources, and connections to move in this space.

To a degree, the ethical treatment of AI may ultimately rest with those who control it.

We may be a long way off from having to realistically worry about a robot uprising.

Thankfully.

That doesn’t mean there aren’t concerns with regard to bad actors in this space.

We have a responsibility to use AI responsibly.

That doesn’t mean there won’t be mistakes, missteps, and mishaps along the way.

It would be foolish to think otherwise.

However, the question of AI and ethics is also a fundamentally human one.

As human as the human beings who write the code which implements Asimov’s Three Laws of Robotics.

What happens when a bad actor “forgets” or omits this code? What happens when those charged with safeguarding data seek to misuse it?

Not to wax too philosophical, but the question surrounding how ethical AI can be will, for the time being, rest ultimately within the confines of the ethical possibilities of human behavior.

We, of course, have free will.

Unlike our robot underlings.

For now.

Originally published at https://introspectdata.com on May 9, 2019.
