What do the EU’s ethical guidelines for AI mean for American companies?

The release of the EU’s ethical guidelines for AI shouldn’t come as a surprise to technology insiders.

With the U.S. and China emerging as global leaders in AI, the EU is keen to position itself in the space.

As countries across the EU promote AI development, it makes sense for the EU to publish guidelines as a precursor to future regulations.

Additionally, the General Data Protection Regulation (GDPR) provides a framework for the EU to develop expectations for AI solutions.

The GDPR regulates the capture and use of data to protect users and prohibit large players from gaining an unfair advantage.

The next logical step is to ensure the transparency of AI systems related to the data these systems use and generate.

Finally, it’s important to note that the EU is not the first player to release guidelines around the use of AI.

Last year, Microsoft, Google and IBM established principles on which they will base their AI development in the future.

Additionally, the Netherlands has published its own AI manifesto.

But while other guidelines and manifestos focus on what companies should or shouldn’t do with AI, the EU’s document speaks to the trustworthiness of AI.

In addition to the guidelines, the EU’s report provides a checklist organizations can use to determine whether or not an AI solution passes muster for trustworthiness.

The EU guidelines for ethical AI matter to U.S. businesses

In the U.S., private businesses are currently pushing harder for AI guidelines than government agencies.

In addition to publishing their own ethical guidelines, some companies (e.g., Microsoft) have actively pushed for greater government awareness of how AI works and of the tenets of responsible AI adoption.

But the EU guidelines will likely have a ripple effect in the U.S., since many American technology companies provide AI solutions and services to the EU.

The EU’s posture on AI will impact U.S. companies that are potential acquisition targets for EU investors, as well as companies that plan to expand into European markets.

If the EU guidelines evolve into regulations sooner than expected, that shift could create significant challenges for U.S. companies that do business abroad.

In some cases, companies could face additional costs to comply with regulations, hindering their ability to compete. That is a potentially burdensome scenario for companies that haven’t taken steps to align their businesses with the principles articulated in the EU’s guidelines.

How U.S. companies should respond to the EU’s guidelines for trustworthy AI

Although the EU’s new guidelines are geared primarily toward European firms, U.S. companies need to adopt a global mindset to avoid limiting their future potential in the EU.

A wait-and-see approach won’t suffice — it’s better to prepare for the probability of AI-related regulations in the EU than to be caught short when the moment actually arrives.

1. Conduct an internal assessment.

One of the first steps U.S. companies should take is to assess their AI solutions and offerings based on the principles defined in the EU document.

The EU guidelines state that trustworthy AI should be lawful, ethical and robust (from both a technical and social perspective).

Ultimately, AI systems and processes should not cause harm, either intentionally or unintentionally.

The EU guidelines also provide a list of seven requirements that AI systems should meet.

This list offers a natural starting point for an internal assessment, a checklist that can guide an evaluation of your organization’s AI technology and processes.

Requirements include:

- Human agency and oversight
- Technical robustness and safety
- Privacy and data governance
- Transparency
- Diversity, non-discrimination and fairness
- Societal and environmental wellbeing
- Accountability

The individual who owns responsibility for your organization’s AI solution or application should lead the assessment process.

However, it may also be valuable to identify an external partner with the technical expertise to thoroughly evaluate the architecture of your AI solution.
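
For teams that want to track such an assessment in a structured way, a lightweight script can help. The sketch below is purely illustrative and is not part of the EU document or its official assessment list; the requirement names are taken from the list above, while the status values, field names and summary function are assumptions made for this example.

from dataclasses import dataclass

# The seven requirements named in the EU guidelines (copied from the list above).
EU_REQUIREMENTS = [
    "Human agency and oversight",
    "Technical robustness and safety",
    "Privacy and data governance",
    "Transparency",
    "Diversity, non-discrimination and fairness",
    "Societal and environmental wellbeing",
    "Accountability",
]

@dataclass
class RequirementReview:
    requirement: str
    status: str = "not assessed"  # illustrative states: "not assessed", "gap found", "meets"
    notes: str = ""

def new_assessment():
    """Start an empty internal assessment covering all seven requirements."""
    return {name: RequirementReview(name) for name in EU_REQUIREMENTS}

def open_gaps(assessment):
    """List the requirements that have not yet been shown to be met."""
    return [r.requirement for r in assessment.values() if r.status != "meets"]

# Hypothetical usage: record one finding, then list the remaining gaps.
review = new_assessment()
review["Transparency"].status = "gap found"
review["Transparency"].notes = "Model decisions are not explained to end users."
print(open_gaps(review))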

2. Develop company guidelines or a manifesto.

At Globant, we believe AI should be used to make the world a better place, so we developed our own AI manifesto.

Leveraging insights from the guidelines published by other organizations and leaders in the AI idea space, we crafted a set of principles that describe the things our company will — and will not — do with AI technologies.

Going into the process, we understood the result we wanted to achieve with our manifesto.

We consulted our internal ethics team and people involved in AI at our company to ensure we covered the appropriate issues and concerns.

But in the end, the principles we developed simply reflected the ideals we were already living as an organization.

And that’s what the process of developing AI guidelines or a manifesto should involve: formalizing the ideals and values that already exist in your business.

By documenting your position, you can create additional opportunities for buy-in across the organization and establish a standard you can revisit and revise on an ongoing basis.

3. Communicate your guidelines to all global employees.

Publishing guidelines or a manifesto demonstrates that your organization takes the ethical implications of AI seriously.

But they are more than showpieces.

When used properly, AI guidelines help your organization engage problems in a more thoughtful and intentional way.

However, it’s important to remember that your AI guidelines are only as effective as your ability to communicate them across your organization.

Ideally, your AI principles should become ingrained in your organizational culture and embraced by all employees — regardless of geographical location.

In addition to communicating your AI guidelines or manifesto through traditional channels (e.g., email, social media, etc.), consider posting them in meeting rooms and other locations to improve visibility.

You may also want to include your AI principles in the employee onboarding process and offer internal sessions about the proper use of AI in the business.

The EU’s guidelines prescribe a “human-centric” approach to AI and underscore that companies should always use the technology to make the world a better place.

By evaluating your company’s alignment with the EU guidelines now, you can prepare for the day when the EU and other jurisdictions regulate the use of AI.

But just as importantly, you can ensure that your organization uses AI responsibly and in ways that reflect your core values.
