The Hidden Dangers in Algorithmic Decision Making

Math as a construct, along with its properties, exists as a product of human thought, which leaves it vulnerable to human subjectivity just like any other measure.

Training Data

We'll start with how algorithms are trained: machine learning algorithms learn from datasets chosen by their programmers. From this training data, they recognize and leverage patterns, associations, and correlations. For example, an algorithm can be trained to distinguish between a cat and a dog by being fed thousands of pictures of different cats and dogs. Classification is the easier task; applying an algorithm to a human judgement call is far more multifaceted. Consider AI in the criminal justice system, specifically assisting judges in deciding whether or not to grant parole to an offender. Engineers can feed in thousands of past decisions and cases, but all the algorithm can extract from them is the outcome of each decision. It does not possess the sentience to understand that humans are influenced by many variables, and that rationality is not always at the top of human decision-making. Worse, the recorded outcomes themselves are incomplete: we only observe whether an offender reoffends when parole was granted, never what a denied offender would have done. Computer scientists call this problem 'selective labelling' (a toy simulation at the end of this section makes it concrete). Human biases are learned over many years of societal integration, cultural accumulation, media influence, and more. All of these learned biases seep into the algorithms that learn from the data; just like humans, algorithms don't start off biased. However, if given a flawed dataset, they may end up that way.

Societal Reflection

Algorithms are taught to make predictions based on the information fed to them and the patterns they extract from that information. Given that humans exhibit all kinds of biases, a dataset representative of the environment can encode those biases as well. In this sense, algorithms are like mirrors: the patterns they detect reflect the biases that exist in our society, both explicit and implicit.

[Image: Tay, the Artificial Intelligence chatbot designed by Microsoft in 2016.]

Take Tay, the original Microsoft chatbot, for example. Tay was designed to simulate the tweets of a teenage girl, learning from interactions with Twitter users. In less than 24 hours, the internet saw Tay go from tweeting innocent things like "humans are super cool" to quite worrisome ones, such as "Hitler was right I hate the jews," simply by virtue of the surrounding tweets on the internet.
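The selective-labelling gap is easy to demonstrate with a short simulation. The sketch below is purely hypothetical: the group names, release thresholds, and risk distribution are invented for illustration, not drawn from any real parole data. It shows how a dataset that only records outcomes for released offenders can make one group look systematically lower-risk than it really is.

```python
import random

random.seed(42)

# Toy simulation of the 'selective labelling' problem described above.
# All group names, thresholds, and rates are hypothetical assumptions.

N = 100_000
cases = []
for _ in range(N):
    group = random.choice(["A", "B"])
    risk = random.random()            # true probability of reoffending
    # Biased historical decisions: judges apply a stricter release
    # threshold to group B, so fewer of its outcomes are ever recorded.
    threshold = 0.7 if group == "A" else 0.3
    released = (risk + random.gauss(0, 0.1)) < threshold
    reoffended = released and (random.random() < risk)
    cases.append((group, risk, released, reoffended))

for g in ("A", "B"):
    total = [c for c in cases if c[0] == g]
    seen = [c for c in total if c[2]]  # outcome observed only if released
    true_risk = sum(c[1] for c in total) / len(total)
    observed = sum(c[3] for c in seen) / len(seen)
    print(f"group {g}: {len(seen)}/{len(total)} outcomes observed, "
          f"true mean risk {true_risk:.2f}, observed reoffence rate {observed:.2f}")
```

By construction the two groups carry identical true risk, yet a model trained only on the observed outcomes would conclude that group B reoffends at less than half the rate of group A. The difference is entirely an artifact of who was historically released, which is exactly how a flawed dataset passes its bias on to the algorithm.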
