AI & Ethics: With great power comes great responsibility

“With great power comes great responsibility” — Uncle Ben, Spider-Man

Artificial intelligence (AI) and machine learning are revolutionising industries and societies at large.

Their capabilities are embedded in our everyday lives and their applications are expanding.

The outcomes of their decisions now impact lives, and with that comes significant risk.

For these technologies to continue to have a positive impact on society, they need to abide by humanity’s ethics and principles.

Technology is only as good as its creator, so everyone working in the field is responsible for creating an ethical framework for AI that reflects our values.

This starts by looking inwards and reflecting on our own beliefs.

From a young age, people, culture, education, religion, media and politics shape our outlook and beliefs about the world.

These experiences generate information that our minds connect and categorise to make sense of it all.

These categories sometimes form stereotypes, which trigger cognitive associations about characteristics like gender, age and race that don’t reflect reality.

This has damaging impacts on society.

The problem is this process is immediate, automatic and often not something we are aware is happening.

Most of us would reject a discriminatory thought that slips into consciousness, but given that behaviour is largely determined by the unconscious mind, positive intentions are not enough.

AI’s relative objectivity can help counteract human subjectivity and has so far proven to be revolutionary to domains such as science, health, technology and finance.

Consequently, with great power comes great responsibility: such powerful advances in computation and technology carry significant risks, which must be mitigated before these systems reach the public.

To ensure that AI does no harm, we must proceed with caution.

We cannot outsource our ethical responsibilities to machines.

We need to protect our ethical standards and incorporate them into a framework that guides the design, implementation and utility of AI in society.

…given that behaviour is largely determined by the unconscious mind, positive intentions are not enough.

Ethical Framework

The comprehensive report Ethically Aligned Design was created to inform, upskill and support organisations that build AI about the importance of ethics in technologies that empower humanity.

Created by hundreds of diverse thought leaders across different disciplines, the document outlines some key ethical principles that AI decision-making should adhere to.

Each of these principles is important to a truly “ethical AI”, and this article proposes that they are in fact interconnected and dependent on three broader principles:

Intelligibility: The technical processes are transparent and explainable.

Accuracy: The degree to which the output is representative of the truth.

Fairness: The decision-making is impartial and made irrespective of sensitive data.

‘Intelligibility’, a term coined by the UK government, creates accountability for the outcomes of AI decisions.

Knowledge of how and why decisions are made allows users to understand if they are being exploited and consequently whether human rights have been infringed.

Lifting the lid on opaque systems is necessary to establish the accuracy of their computational methods, and only through accurate insight can society begin to trust AI’s place in the decision-making process.

Understanding the accuracy of insights and their impact on human rights is critical to ensuring that the outputs are fair and promote positive well-being.

1. Intelligibility

Today’s algorithms decide a lot of things about us — who gets hired, who gets fired, who gets a mortgage and who is a dangerous criminal — which can make otherwise difficult decisions simple.

Knowing how and why these decisions are made is less simple.

Organisations have historically hidden behind ‘black box’ systems that prevent scrutiny of their methodology, blaming complexity and intellectual property protection for the secrecy.

However, several high-profile cases have questioned the validity of the black box decisions, which ignited the demand for ‘intelligible’ AI systems — those that are technically transparent and explainable.

“When algorithms affect human rights, public values or public decision-making, we need oversight and transparency” — Marietje Schaake, Member of the European Parliament.

Lifting the lid on black box systems is necessary to examine and scrutinise the validity of AI’s output, but the complexity of the multi-layered neural networks makes for extremely difficult auditing.

The gravity of these decisions on people’s lives, however, necessitates that machine learning frameworks be explainable from input to output, something that must be considered from the design phase onwards.

The Department of Defense argued that the inability to explain the inner workings of AI systems limits their effectiveness.

To combat these limitations, it is currently developing the Explainable AI (XAI) program.

Its aim is to produce machine learning techniques that can explain their methodology whilst maintaining their predictive power and the intellectual property of the code.

XAI concept

“When people fly in a plane, they do not need to know exactly how the plane works to feel safe flying. They just need to know that the plane abides by certain aviation safety regulations.” — paraphrasing a comparative example I heard recently in a talk by Ivana Bartoletti, Head of Privacy and Data Protection at Gemserv and Co-Founder of the Women Leading in AI Network.

You should not have to be a data scientist to understand an AI’s logic.

This information needs to be available and accessible in different forms to every type of user, from academics to the general population.

When users can understand how and why outputs are produced, they can more easily trust their accuracy and fairness.

Not all solutions have to be technical.

Regulations like GDPR have also brought in several provisions about automated decision-making without any human involvement, including the requirement to disclose when and how this process is used.

Organisations must give access to the data, the logic/rules and the audit trail for any automatic AI decisions, as well as making the user aware of who made the decision — machine or human.

The era of unaccountable machines is therefore over.

The burden of accountability has been shifted to the creators of AI to ensure that advances in technology do not come at the cost of human ethics and values.

2. Accuracy

Intelligibility helps determine the accuracy of algorithms.

AI and machine learning have revolutionised the speed, efficiency and price of crunching large volumes of data.

The powers and capabilities of these technologies have done everything from outperforming doctors at identifying lung cancer types and heart disease, to writing sci-fi films and even beating two of Jeopardy’s best contestants.

But AI has also got it drastically wrong.

In 2015, Google Photos automatically applied the ‘gorilla’ tag to images of black people.

In 2016, Uber trialled autonomous self-driving cars in San Francisco that ran through six red lights, one of which was on a busy pedestrian crossing.

Various researchers trained an algorithm to identify early signs of future suicide risk using patients’ historical medical records, but the methodology was overrun with false positives.

For suicide prevention, one would not necessarily think that false positives are detrimental, but in fields such as medicine and criminology, false positives can be life changing.
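To see why false positives matter so much, consider a rough sketch of the arithmetic. The prevalence, sensitivity and specificity figures below are illustrative assumptions, not numbers from the studies mentioned above: even a model that is right the vast majority of the time can produce far more false alarms than true detections when the condition it predicts is rare.

```python
# Hypothetical illustration of why false positives dominate when the condition is rare.
# The prevalence, sensitivity and specificity below are assumed numbers, not figures
# from the studies mentioned in the article.

population = 100_000
prevalence = 0.01        # 1% of patients are genuinely at risk
sensitivity = 0.90       # the model catches 90% of true cases
specificity = 0.95       # the model correctly clears 95% of non-cases

at_risk = population * prevalence
not_at_risk = population - at_risk

true_positives = at_risk * sensitivity
false_positives = not_at_risk * (1 - specificity)

precision = true_positives / (true_positives + false_positives)
print(f"Share of flagged patients actually at risk: {precision:.1%}")
# ~15% — the overwhelming majority of flags are false alarms, despite a seemingly accurate model
```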

Google Photos mishap

The first step towards improving the accuracy of machine learning algorithms relies less on the statistical techniques themselves, and more on the process.

The goal of AI and machine learning is to assist humanity, not replace it.

Its applications in professional, high-stakes settings should only be to assist experts in their decision-making, not substitute for them.

Until truly intelligible AI systems are achieved, their outputs should only be used to facilitate conversation whilst experts remain the final decision maker.

Humans still have a responsibility to ensure that the outputs of these technologies are valid before applying the insights to recommendations that impact people.

While managing the role of AI in decision-making, we should also be examining and testing their outputs.

This starts by monitoring the data going into the models — ‘garbage in, garbage out’.

Organisations rarely recognise the importance of good-quality input data until the AI is already in public use.

Safeguards must be put in place that assess data quality and appropriateness before being allowed to enter the data model, especially in early phases when the algorithms are being developed, tested and trained.

The burden of responsibility for this process is gradually moving from human to machine, but it cannot shift completely until the methods prove robust to bad or tricky data.
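As a rough sketch of what such a safeguard might look like in practice, the checks below reject a batch of training data before it ever reaches the model. The column names, thresholds and rules are illustrative assumptions rather than a standard.

```python
import pandas as pd

# A minimal "garbage in, garbage out" safeguard: reject a batch of training data
# before it reaches the model if basic quality checks fail. The column names and
# thresholds are illustrative assumptions, not a standard.

def validate_batch(df: pd.DataFrame) -> list:
    problems = []
    required = {"age", "income", "label"}
    missing = required - set(df.columns)
    if missing:
        problems.append(f"missing columns: {sorted(missing)}")
    if df.isna().mean().max() > 0.05:                    # more than 5% nulls in any column
        problems.append("too many missing values")
    if "age" in df.columns and not df["age"].between(0, 120).all():
        problems.append("implausible ages")
    if df.duplicated().mean() > 0.01:                    # more than 1% duplicate rows
        problems.append("excessive duplicate rows")
    return problems

batch = pd.DataFrame({"age": [34, 29, 150],
                      "income": [40_000, 52_000, None],
                      "label": [0, 1, 1]})
issues = validate_batch(batch)
if issues:
    print("Batch rejected:", issues)   # bad data is blocked before it can train the model
```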

The goal of AI and machine learning is to assist humanity, not replace it.

Alternatively, algorithms can be built to be aware of their own uncertainty about their accuracy.

Understanding uncertainty makes it possible to make rational decisions based on the risks of incorrect decisions.

Statistical methods like proper scoring rules encourage the predictor to be truthful (i.e. honest about its uncertainty) in order to maximise its expected score.

Similarly, Bayesian inference is a methodology by which an algorithm’s uncertainty is updated as it receives new information for or against a hypothesis.
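As a toy sketch of these two ideas, the snippet below scores a calibrated forecaster against an overconfident one using the Brier score (a proper scoring rule), then runs a Beta-Binomial update in which the posterior’s uncertainty shrinks as evidence accumulates. All the numbers are made up for illustration.

```python
import numpy as np

# 1) Proper scoring rule: the Brier score (lower is better) rewards honest
#    probabilities over forecasts that are rounded to false certainty.
outcomes      = np.array([1, 0, 1, 0, 1])                # what actually happened
honest        = np.array([0.8, 0.3, 0.7, 0.6, 0.9])      # calibrated forecasts
overconfident = np.array([1.0, 0.0, 1.0, 1.0, 1.0])      # the same leanings pushed to certainty

brier = lambda p, y: np.mean((p - y) ** 2)
print(brier(honest, outcomes), brier(overconfident, outcomes))
# 0.118 vs 0.200 — the single wrong confident call costs more than all the hedged errors combined

# 2) Bayesian (Beta-Binomial) update: uncertainty about an event's probability
#    shrinks as evidence arrives for or against the hypothesis.
alpha, beta = 1.0, 1.0                                    # uniform prior: "no idea yet"
for outcome in [1, 1, 0, 1, 1, 1, 0, 1]:                  # hypothetical stream of evidence
    alpha += outcome
    beta  += 1 - outcome
mean = alpha / (alpha + beta)
std  = ((alpha * beta) / ((alpha + beta) ** 2 * (alpha + beta + 1))) ** 0.5
print(f"posterior mean {mean:.2f}, remaining uncertainty (std) {std:.2f}")   # 0.70, 0.14
```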

Making machine learning aware of, and critical about, its data and its own decision-making is therefore achievable, but the burden of responsibility is still on technologists to implement these techniques proactively.

Accuracy is too often judged ad hoc once the AI is already in public use, but the severe impact of inaccurate decisions on human life and ethics necessitates that these techniques be incorporated in the design phase, before real-life application.

Bayesian inference

AI and machine learning have enabled us to generate insights to a greater degree of accuracy than ever before, but humans should not be completely removed from the decision-making loop.

In the era of fake news and misinformation, it’s vital that scientific rigour continues to be at the heart of discovery while we continue to build on the body of facts and truth.

Only through rigorous testing and putting accuracy at the core of the design principles will humans trust AI, which is critical to the acceptance of its utility and insights in society.

3. Fairness

Accurate insights are critical to fair decision-making.

There is much debate around what is truly ‘fair’ decision making, but put simply, it is the equal, unbiased treatment of all groups/variables in a decision without influence from characteristics such as gender, race or religion.

True fairness starts with data that represents a wide spectrum of characteristics, symbolism, beliefs and ideas, which influences how algorithms are trained and consequently how machine learning models make decisions.

Unfortunately, algorithms and the data used to train them are generated by people and people are unavoidably and inherently biased.

Microsoft learnt the hard way when they released a bot on Twitter called Tay, which became a racist, sexist, anti-Semitic tyrant asking for all feminists to “burn in hell”.

The bot itself was unbiased, but the training data generated by Twitter users was not.

In addition to biased training data, behind every algorithm is an individual whose personal beliefs shape how machine learning decisions are made, a phenomenon known as ‘algorithmic bias’.

The experiment with Tay was an unfortunate learning curve for Microsoft with no serious ramifications other than bad publicity, but algorithmic bias in other cases has resulted in unethical, unfair treatment.

Algorithms and the data used to train them are generated by people and people are unavoidably and inherently biased.

Researchers at Carnegie Mellon University showed that Google advertisements promising help to job applicants for roles paying more than $200,000 were shown to significantly more men than women.

In 2016, machine learning algorithms used to judge international beauty contestants were negatively biased against those with dark skin.

These disasters were firstly caused by biased training data — the strength of associations between items reflected stereotypes rather than reality.

Secondly, the algorithms used to analyse the training data did not adjust for these biases and also incorporated their own biases that perpetuated these stereotypes.

As exemplified by the recent controversies with facial recognition software, a biased training data set impacts how well technology can handle diverse data and users, which consequently limits their capacity to respond fairly and objectively.

MIT Media Lab researcher Joy Buolamwini demonstrating how AI facial recognition works significantly better for white faces than black faces.

The datasets used to train machines must be carefully curated to reflect diverse characteristics, beliefs, cultures and values such that any decision made is representative of the entire population, not just a selection of it.

The difficulty is that data is ‘labelled’ from prior decision-making, which often reflects unwanted prejudices and underrepresentation of minorities.

Consequently, AI can learn to mirror the biases and reflect that in its own decision-making, thus perpetuating and reproducing the historical biases.

Similarly, ‘feature engineering’ in machine learning, which involves data transformation to facilitate modelling, emphasises certain features and variables over others.

The variables it is programmed to focus on have a significant impact on the data, distinctions and classifications made downstream in the machine learning process.

If groups are over-simplified or wrongly classified, this risks removing nuanced but important features and distinctions from the final decision.
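A small, hypothetical example of how a feature-engineering choice can erase such nuance: bucketing a numeric variable into coarse groups makes genuinely different individuals indistinguishable to the model. The values and column names below are made up for illustration.

```python
import pandas as pd

# Hypothetical illustration: bucketing age into two coarse groups erases a
# distinction the model might need. Values and columns are invented for the example.
people = pd.DataFrame({"age": [19, 23, 34, 47, 52, 68]})

people["coarse"] = pd.cut(people["age"], bins=[0, 40, 120], labels=["young", "old"])
people["finer"]  = pd.cut(people["age"], bins=[0, 25, 40, 65, 120],
                          labels=["18-25", "26-40", "41-65", "65+"])
print(people)
# With the coarse feature, a 19-year-old and a 34-year-old look identical to the
# model, so any real difference between them can never influence its decision.
```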

Data and algorithms are only half the problem.

The applications of AI and machine learning have expanded to settings which have to abide by a set of ethical and social norms that vary between countries, cultures and professions.

An AI applied in a medical setting adheres to different ethical standards from one applied in an insurance setting — gender can determine your car insurance rate but not your access to medical care — but teaching AI how to act in every possible situation is very complex, and the task keeps changing as ethics evolve.

Creating an AI that is aware and in-line with ethics that humans themselves don’t always abide by is challenging, but the protection of personal well-being and human rights is paramount to the usage of these technologies.

Until AI itself can be made truly fair, the machine learning process must be moderated and corrected for its fairness from start to finish in a traceable and transparent way.

In recent years, approaches and groups like Discrimination-Aware Data Mining (DADM) and Fairness, Accountability and Transparency in Machine Learning (FATML) have been created to ameliorate the unwanted discriminatory effects of machine learning and promote ‘fairness-aware’ data mining approaches.

A recent paper highlighted that the concept of fairness is multi-faceted and measurable in several ways depending on the context, but any ‘fair’ measurement critically depends on knowledge of the correlations between sensitive/protected characteristics and other features of the data.

Open source tools like IBM’s AI Fairness 360 can help achieve this, but GDPR makes access to this data challenging and often impossible, so organisations need to learn how to identify bias in their data without awareness of the protected characteristics attached to it.
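For a sense of what such measurements look like, the toy example below computes two common group-fairness numbers, statistical parity difference and disparate impact, on fabricated decisions. Real toolkits such as AI Fairness 360 report these and many other metrics on labelled datasets; the attribute names and values here are invented for illustration.

```python
import numpy as np

# Toy illustration of two common group-fairness measures on fabricated data.
sex      = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1, 1])   # 0 = unprivileged group, 1 = privileged group
approved = np.array([0, 1, 0, 0, 1, 1, 0, 1, 1, 1])   # the model's decision for each person

rate_unpriv = approved[sex == 0].mean()   # selection rate for the unprivileged group
rate_priv   = approved[sex == 1].mean()   # selection rate for the privileged group

statistical_parity_difference = rate_unpriv - rate_priv   # 0 means parity between groups
disparate_impact = rate_unpriv / rate_priv                # the "80% rule" flags values below 0.8

print(statistical_parity_difference, disparate_impact)    # -0.583..., 0.3 — a strongly skewed model
```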

The strongest form of defence is having a diverse team who experience the bias themselves scrutinising the data and algorithms.

AI Fairness 360 demo

Diversity has long been lauded as the key to innovation and financial success, but it is also necessary for creating and moderating ‘fair’ AI systems.

A diverse range of backgrounds will naturally generate a wide range of perspectives, shed light on hidden biases and provide deeper, more creative insight into how to remove bias from the process.

The team should also come from a range of disciplines, including psychologists, ethicists and sociologists, to appreciate the social and cultural context that AI operates in.

Those affected by algorithmic decision-making can also participate in promoting fairness through a process that enables them to report discriminatory AI-made decisions.

In America, teachers had their bonuses, pay rises and employment statuses decided by an algorithmic ‘Value Added Model’, which was widely abandoned after repeated complaints about the transparency, accuracy and fairness of its decisions.

By working to ensure that the recipients of AI decision-making, no matter their background, are treated fairly as individuals and equals, AI can serve to protect and enhance individual and societal well-being.

Final Thoughts

For AI and machine learning to have a sustainable, positive influence on humanity, they must be guided by the same ethics and principles that humans themselves abide by.

The sheer power and capabilities of autonomous systems have gradually removed the need for humans to be involved at various stages of the decision-making process, but with great power comes great responsibility.

The expanding contexts that AI and machine learning are being applied in means that they need to abide by sets of complex, value-laden principles.

The ultimate goal of these technologies is to serve humans, not the other way around, so the communities that build them are responsible and accountable for continually instilling knowledge of, and compliance with, evolving ethical principles and the contexts in which they apply.

Only through intelligibility, accuracy and fairness can humanity leverage the power of AI and machine learning, and mitigate their risks, as they continue to develop — technical debt in AI must not lead to ethical debt in society.

Want to chat?

LinkedIn: https://www.linkedin.com/in/charlotte-murray-0753a7a3/
Twitter: @charlcmurray
