The AI Ethics Deficit — 94% of IT Leaders Call for More Attention to Responsible and Ethical AI Development

Key findings:

- 94% of IT leaders believe more attention needs to be paid to corporate responsibility and ethics in AI development
- 53% place primary responsibility on the organizations developing the AI systems, while 17% place responsibility on the individuals working on AI projects
- 50% believe organizations will take guidance and adhere to recommendations from independent expert advisory groups
- 87% of IT leaders believe AI development should be regulated to ensure it serves the best interests of business, governments, and citizens alike

Ethical and responsible AI development is a top concern for IT decision-makers (ITDMs), according to the research results, which found that 94% of ITDMs across the US and UK believe more attention needs to be paid to corporate responsibility and ethics in AI development.

 A further 87% of ITDMs believe AI development should be regulated to ensure it serves the best interests of business, governments, and citizens alike.

Who Bears Responsibility?

When asked where the ultimate responsibility lies to ensure AI systems are developed ethically and responsibly, more than half (53%) of ITDMs point to the organizations developing the AI systems, regardless of whether that organization is a commercial or academic entity.

However, 17% place responsibility with the specific individuals working on AI projects.

What’s striking is that respondents in the US are more than twice as likely as those in the UK to assign responsibility to individual workers (21% vs. 9%).

A similar number (16%) see an independent global consortium, comprised of representatives from government, academia, research institutions, and businesses, as the only way to establish fair rules and protocol to ensure the ethical and responsible development of AI.

A further 11% of ITDMs believe responsibility should fall to the governments in the countries where the AI systems are developed.

Independent Guidance and Expertise

Some independent regional initiatives providing AI support, guidance, and oversight are already taking shape, with the European Commission High-Level Expert Group on Artificial Intelligence being one such example.

ITDMs see expert groups like this as a positive step in addressing the ethical issues around AI.

Half of ITDMs (50%) believe organizations developing AI will take guidance and adhere to recommendations from expert groups like this as they develop their AI systems.

Additionally, 55% believe these groups will foster better collaboration between organizations developing AI.

However, Brits are more skeptical of the impact these groups will have.

15% of ITDMs in the UK stated that they expect organizations will continue to push the limits on AI development without regard for the guidance expert groups provide, compared with 9% of their American counterparts.

Furthermore, 5% of UK ITDMs indicated that guidance or advice from oversight groups would do little to drive ethical AI development unless it becomes enforceable by law.

A Call for Regulation

Many believe that ensuring ethical and responsible AI development will require regulation.

In fact, 87% of ITDMs believe AI should be regulated, with 32% noting that this should come from a combination of government and industry, while 25% believe regulation should be the responsibility of an independent industry consortium.

However, some industries are more open to regulation than others.

Almost a fifth (18%) of ITDMs in manufacturing oppose the regulation of AI, followed by 13% of those in the technology sector and 13% of those in the retail, distribution, and transport sector.

When asked why they reject regulation, respondents were nearly evenly split between the belief that regulation would slow down AI innovation and the view that AI development should be left to the discretion of the organizations creating AI programs.

Championing AI Innovation, Responsibly

“AI is the future, and it’s already having a significant impact on business and society,” commented Gaurav Dhillon, CEO at SnapLogic.

“However, as with many fast-moving developments of this magnitude, there is the potential for it to be appropriated for immoral, malicious, or simply unintended purposes.

We should all want AI innovation to flourish, but we must manage the potential risks and do our part to ensure AI advances in a responsible way.

Data quality, security, and privacy concerns are real, and the regulation debate will continue.

But AI runs on data — it requires continuous, ready access to large volumes of data that flows freely between disparate systems to effectively train and execute the AI system.

Regulation has its merits and may well be needed, but it should be implemented thoughtfully such that data access and information flow are retained.

Absent that, AI systems will be working from incomplete or erroneous data, thwarting the advancement of future AI innovation.”

About the research

The research was conducted by independent research house Vanson Bourne in February 2019 on behalf of SnapLogic.

A total of 300 IT decision-makers participated in the study, representing organizations with more than 1,000 employees across the United States and the United Kingdom.

Contributed by Daniel D. Gutierrez, Managing Editor and Resident Data Scientist for insideBIGDATA. In addition to being a tech journalist, Daniel is also a data science consultant, author, and educator, and sits on a number of advisory boards for various start-up companies.
