Ethical AI: EU’s New Guidelines and the Future of AI Trustworthiness

By Nathan Sykes

Artificial intelligence is now a part of our daily lives, and it represents an unprecedented combination of promise and potential harm.

You're almost certainly familiar with the sci-fi-tinged worst-case scenarios concerning malevolent AIs overtaking and replacing us.

However, the “black box” of AI development and behavior represents somewhat more mundane, though no less worrying, problems as well.

Whether the rollout of AI around the globe spells liberation or disaster is a question we can begin answering by introducing formal standards for how this technology is designed and deployed.

To that end, the European Union has issued a set of guidelines called the “Ethics Guidelines for Trustworthy AI.” The goal is to help EU member nations and their tech companies steer a course toward ethical and inclusive AI as we come to terms with the potential and pitfalls of this technology.

Here's a look at what these guidelines mean for the future of AI development.

The EU is not the first governing body in the world to lay out recommendations for the ethical development of artificial intelligence, although its efforts may be some of the most specific to date.

During the presidency of Barack Obama, the National Science and Technology Council — with participation from dozens of relevant government agencies — provided its own set of broad guidelines called “Preparing for the Future of Artificial Intelligence.”

The European Union's efforts appear to be somewhat more actionable, since they contain a checklist — a “practical assessment list” — for companies engaged in the development of artificial intelligence.

How were they created, and what do they say?

For a start, the EU collaborated with 52 experts on the subject of AI and drew on feedback from 500 members of the public who submitted comments.

It's important to note that these guidelines are, presently, not legally binding.

However, they do cover an impressive amount of ground in several major categories.

If the ultimate purpose of creating artificial intelligence is to improve human life on earth, these tenets seem like a solid foundation on which to build it.

The EU goes further by providing specific and actionable guidelines for current and future architects of AI systems.

It's worth noting that the EU's “Ethics Guidelines for Trustworthy AI” have not yet reached their final form.

The institution refers to the guidelines as a living document.

As such, it has issued an open invitation to technology companies and public advocacy groups to provide their own input and help shape future drafts.

The EU appears cognizant of the fact that, like AI itself, our rules for governing its development and use should be equally flexible and open to change as we learn more.

The checklist for trustworthy AI, in its current form, is a plain-English set of questions that any chief technology officer, CEO or member of the public should be able to understand.

Here's a small handful of them, lightly paraphrased for brevity.

It's not difficult to imagine some of the specific cases the EU's slate of experts had in mind as they drew up these guidelines.

Given the number of potential applications of AI in human life, there's an emerging sense of urgency when it comes to formulating common-sense guidelines — followed closely by enforceable laws — for how technology companies engage in the design of AI systems.

Tesla promises fully autonomous functionality in its cars by the end of 2019.

Elon Musk is on record saying his car company will accept liability in the event of an accident, provided the software made a mistake or a leap in logic.

This means our AI-powered driverless cars must collect a variety of data as they operate, and at all times while the vehicle is in motion.

In parts of the U.S., artificial intelligence is being actively explored as a means to predict the likelihood of offenders committing crimes again in the future.

Closer studies of the accuracy of these systems revealed that they assigned higher “crime likelihood scores” to Black defendants than to white defendants.

China's social credit system relies on artificially intelligent algorithms to judge citizens' creditworthiness and to grant or restrict privileges and rights based on their public behavior.

Artificial intelligence even has the potential to supplant the research of human geneticists and chemists on the hunt for life-saving medications and to help pharmaceutical companies bring drugs to market faster.

The focus of such efforts must be the greatest good rather than the greatest profitability.

The EU signals its belief that the public has a right to an accounting of the type and variety of data these systems gather from the world around them, the potential for human biases to infiltrate their governing algorithms, the explainability of the logic informing an AI system's decision-making process, and much more.

Other groups throughout the world are lending their own voices to this timely conversation.

It's a good sign for things to come, but governing bodies need to follow through by turning guidelines into laws to keep our innovators honest and protect the public from potential harm.

Bio: Nathan Sykes is a business and technology freelancer and blogger from Pittsburgh, PA.

To read his latest articles, check out his blog, Finding an Outlet.
