AI systems can reflect and aggravate social inequalities, producing outcomes that harm groups and individuals, particularly those who are already disadvantaged. Humans are by nature prejudiced and prone to error, and even when we make a deliberate attempt to act fairly, our biases may be so deeply embedded that we are unaware of their influence. In principle, justice should be “blind,” yet prejudice is evident in court systems all over the world. Computers and algorithms, by contrast, are emblems of precision and efficiency, expected to operate in an unbiased and consistent manner. It would therefore seem logical to develop AI algorithms and software to mitigate these human flaws, and a start has been made: AI is already used to assist courts in the expectation of delivering more equitable and less biased sentences.
Yet our trust in the impartiality of computer systems is misplaced. Because these systems are conceived and created by people and trained on human-supplied data, they absorb the prejudices of their programmers and designers, as well as those of the wider society in which they are built (Friedman & Nissenbaum, 1996). Friedman and Nissenbaum (1996) define a biased computer system as one that systematically and unfairly discriminates against certain individuals or groups in favour of others. This definition remains valid for AI applications, and there is already evidence that such applications can discriminate systematically and unfairly in important aspects of people’s lives. This matters because AI now influences decisions about access to services, employment, and financial assistance. If their spread remains uncontrolled and unregulated, prejudiced AI applications could become a major source of injustice and inequality in society.
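To make the notion of systematic and unfair discrimination concrete, the sketch below shows one common way an AI system’s decisions can be audited for group-level disparity. It is a minimal illustration, not a complete fairness audit: the data, group labels, and function names are all hypothetical, and the disparate-impact ratio is only one of several fairness metrics in use (ratios below roughly 0.8 are often flagged, following the “four-fifths rule” from US employment-discrimination guidance).

```python
# Minimal sketch: auditing a model's decisions for group-level disparity.
# All data, group labels, and function names here are hypothetical.
from collections import defaultdict

def selection_rates(decisions):
    """Rate of favourable outcomes per group.

    `decisions` is a list of (group, outcome) pairs, where outcome is
    1 for a favourable decision (e.g. a loan approval) and 0 otherwise.
    """
    totals = defaultdict(int)
    favourable = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favourable[group] += outcome
    return {g: favourable[g] / totals[g] for g in totals}

def disparate_impact(rates, privileged, protected):
    """Ratio of the protected group's selection rate to the privileged
    group's; ratios below ~0.8 are commonly flagged as adverse impact."""
    return rates[protected] / rates[privileged]

# Hypothetical audit data: the model favours group A far more often.
decisions = ([("A", 1)] * 80 + [("A", 0)] * 20 +
             [("B", 1)] * 45 + [("B", 0)] * 55)

rates = selection_rates(decisions)
print(rates)                              # {'A': 0.8, 'B': 0.45}
print(disparate_impact(rates, "A", "B"))  # 0.5625 -> flags a disparity
```

A check like this only detects unequal outcomes; deciding whether a given disparity is unjust, and what should be done about it, remains a human and legal judgement.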
What do you think?