AI can be biased, studies reveal
AI has become a handy tool for simplifying our lives, and it is already a key player behind online giants like Google and Facebook. The technology has advanced to the point where it can even help pick the right medical treatment for a patient. However, studies have found that AI can also reproduce prejudice. While AI programs look impressive in their decision-making, there is ample evidence that they can mimic existing human biases.
The AI algorithm COMPAS, for example, has been found to be prejudiced against African Americans. Used by US courts, COMPAS is a program that helps decide an offender's sentence by forecasting the probability that the defendant will reoffend. In May 2016, ProPublica published the results of a study it had carried out to check whether COMPAS is prejudiced. The study showed that COMPAS predicted African American defendants would reoffend, and end up back in jail, at higher rates than they actually did. Follow-up research even suggested that the algorithm was no more accurate at this task than untrained internet users.
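ProPublica's core finding concerned error rates broken out by group: among defendants who did *not* go on to reoffend, African Americans were wrongly labeled "high risk" far more often. A minimal sketch of that kind of audit, using a tiny set of hypothetical records (the data below is invented for illustration, not ProPublica's dataset):

```python
# Each record: (group, labeled_high_risk, actually_reoffended).
# Hypothetical data chosen to illustrate the disparity, not real COMPAS output.
records = [
    ("black", True,  False),
    ("black", True,  False),
    ("black", False, False),
    ("black", True,  True),
    ("white", False, False),
    ("white", False, False),
    ("white", True,  False),
    ("white", True,  True),
]

def false_positive_rate(group):
    # Share of non-reoffenders in this group who were wrongly flagged high risk.
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

print(false_positive_rate("black"))  # 2 of 3 non-reoffenders flagged
print(false_positive_rate("white"))  # 1 of 3 non-reoffenders flagged
```

The point of auditing this way is that two groups can have similar *overall* accuracy while one group bears far more of the costly errors.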
Unjustifiable focus on certain minorities
The algorithm PredPol gained popularity as a program that predicts the time and location of future crimes. In 2016, however, the Human Rights Data Analysis Group published a study showing that PredPol could lead police officers to focus unwarrantedly on certain neighborhoods. When the group ran a simulation of PredPol on drug-related crimes in Oakland, the program repeatedly directed officers to neighborhoods with mostly minority residents. In another study, Suresh Venkatasubramanian of the University of Utah showed that the program can make such ethnic biases even worse, because it creates a "feedback loop": the program gets its inputs from police reports rather than actual crime rates, so the areas police already patrol generate the most data.
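The feedback loop is easy to see in a toy model. The sketch below is an invented illustration, not PredPol's actual algorithm: two districts have the *same* true crime rate, but crimes are only recorded where officers patrol, and the policy (assumed here) is to send most patrols to yesterday's recorded "hotspot."

```python
# Toy model of the feedback loop -- NOT PredPol's real algorithm.
# Districts A and B have the SAME true crime rate; a single early
# report in A (an assumption) is enough to snowball.
TRUE_RATE = 0.1                    # identical true crime rate per patrol
records = {"A": 1.0, "B": 0.0}     # one stray early report in district A

for day in range(30):
    hotspot = max(records, key=records.get)
    # Assumed policy: an 80/20 patrol split toward the current hotspot.
    patrols = {d: 80 if d == hotspot else 20 for d in records}
    for d in patrols:
        # Recorded crime tracks patrol presence, not true crime.
        records[d] += patrols[d] * TRUE_RATE

share_a = records["A"] / (records["A"] + records["B"])
print(round(share_a, 2))  # -> 0.8: one stray report became a persistent "hotspot"
```

Even though both districts are identical, district A ends up with about 80% of all recorded crime, and the data appears to "confirm" the allocation that produced it.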
Preference for white people
Law enforcement agencies these days use face recognition, but the technology has shown both race and gender bias. MIT's Joy Buolamwini has shown that popular gender-classification algorithms from Microsoft, IBM, and China's Megvii can identify an individual's gender from a photo ninety-nine percent of the time. However, that accuracy held only for white males; for black females, the systems misclassified gender as much as thirty-five percent of the time.
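What made Buolamwini's audit notable is the method: a single overall accuracy number can hide large gaps, so accuracy has to be computed per subgroup. A minimal sketch of that disaggregation, using invented prediction records (not her Gender Shades data):

```python
from collections import defaultdict

# Hypothetical records: (subgroup, true_label, predicted_label).
results = [
    ("lighter_male",   "male",   "male"),
    ("lighter_male",   "male",   "male"),
    ("lighter_male",   "male",   "male"),
    ("lighter_female", "female", "female"),
    ("lighter_female", "female", "female"),
    ("darker_male",    "male",   "male"),
    ("darker_male",    "male",   "male"),
    ("darker_female",  "female", "male"),
    ("darker_female",  "female", "male"),
]

totals = defaultdict(int)
correct = defaultdict(int)
for group, truth, pred in results:
    totals[group] += 1
    correct[group] += (truth == pred)

overall = sum(correct.values()) / len(results)
print(f"overall accuracy: {overall:.0%}")      # looks respectable in aggregate...
for group in totals:
    # ...but disaggregation exposes the group that bears all the errors
    print(f"{group}: {correct[group] / totals[group]:.0%}")
```

Here the overall figure hides the fact that one subgroup is misclassified every single time, which is exactly the pattern the benchmark studies reported.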
Prejudice against women
While twenty-seven percent of US CEOs are women, only eleven percent of Google image search results for "CEO" showed women, according to a study. This indicates that AI can be biased just as humans are. A study by Anupam Datta of Carnegie Mellon University likewise found that Google's ad system showed high-paying jobs to men far more often than to women.
FB’s translation system triggers a hullabaloo
In 2017, Israeli police arrested a Palestinian construction worker after he posted an FB status update with a photo of himself standing near a bulldozer, captioned with the Arabic phrase "yusbihuhum" ("good morning to them"). The phrase closely resembles a word meaning "attack them," and FB's translation system rendered it as "attack them." The police questioned the man on the assumption that he planned to use the bulldozer for an attack. FB later apologized for the error.