Published March 19, 2024, by The Conversation.
Artificial intelligence’s capacity to process and analyze vast amounts of data has revolutionized decision-making processes, making operations in healthcare, finance, criminal justice, and other sectors of society more efficient and often more effective.
The pursuit of fairness in AI isn’t merely an ethical imperative but a requirement to foster trust, inclusivity, and the responsible advancement of technology. However, ensuring that AI is fair is a major challenge. On top of that, my research as a computer scientist who studies AI shows that attempts to ensure fairness in AI can have unintended consequences.
Why fairness in AI matters
Ethically, fairness is a cornerstone of building trust and acceptance of AI systems. People need to trust that AI decisions that affect their lives (for example, hiring algorithms) are made equitably. Socially, AI systems that embody fairness can help address and mitigate historical biases (for example, those against women and minorities), fostering inclusivity. Legally, embedding fairness in AI systems helps bring those systems into alignment with anti-discrimination laws and regulations around the world.
Unfairness can stem from two primary sources: the input data and the algorithms. Research has shown that input data can perpetuate bias in various sectors of society. For example, in hiring, algorithms processing data that reflect societal prejudices or lack diversity can perpetuate “like me” biases. These biases favor candidates who are similar to the decision makers or those already in an organization. When biased data are then used to train a machine-learning algorithm to aid a decision maker, the algorithm can propagate and even amplify these biases.
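To make this propagation concrete, here is a minimal, hypothetical sketch in Python. The data, the group penalty, and all numbers are invented for illustration: a model trained on historical hiring decisions that penalized one group learns to reproduce that penalty for equally qualified candidates.

```python
# Hypothetical sketch: biased historical decisions train a model that
# reproduces the bias. All data below are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)       # 0 = majority, 1 = minority
skill = rng.normal(0.0, 1.0, n)     # true qualification, identical across groups

# Historical hiring decisions apply an unfair penalty to group 1
hired = (skill - 0.8 * group + rng.normal(0.0, 0.5, n)) > 0

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Score two equally qualified candidates (skill = 0), one from each group:
# the model assigns the group-1 candidate a lower probability of being hired
candidates = np.array([[0.0, 0.0], [0.0, 1.0]])
print(model.predict_proba(candidates)[:, 1])
```

Simply deleting the group column would not reliably fix this, because real-world data usually contain proxy features that correlate with group membership.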
Why fairness in AI is hard
Fairness is inherently subjective, influenced by cultural, social, and personal perspectives. In the context of AI, researchers, developers, and policymakers often translate fairness to the idea that algorithms shouldn’t perpetuate or exacerbate existing biases or inequalities.
However, measuring fairness and building it into AI systems is fraught with subjective decisions and technical difficulties. Researchers and policymakers have proposed various definitions of fairness, such as demographic parity, equality of opportunity, and individual fairness.
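To see how these definitions can pull in different directions, consider a toy example with invented numbers (a hypothetical sketch, not data from any real system): the same set of selections satisfies demographic parity while violating equality of opportunity.

```python
# Toy, hand-constructed numbers showing two fairness definitions disagreeing
# on the same predictions. Nothing here comes from a real system.
import numpy as np

# y = 1 means truly qualified; pred = 1 means the algorithm selects the person
y_a    = np.array([1, 1, 1, 1, 0, 0, 0, 0])   # group A
pred_a = np.array([1, 1, 1, 1, 0, 0, 0, 0])
y_b    = np.array([1, 1, 0, 0, 0, 0, 0, 0])   # group B
pred_b = np.array([1, 0, 1, 1, 1, 0, 0, 0])

# Demographic parity compares selection rates: 0.5 vs. 0.5, so it is satisfied
print(pred_a.mean(), pred_b.mean())

# Equality of opportunity compares true positive rates among the qualified:
# 1.0 vs. 0.5, so it is violated
print(pred_a[y_a == 1].mean(), pred_b[y_b == 1].mean())
```

This tension is not an artifact of the example: researchers have shown that, outside of special cases, several of these criteria cannot be satisfied by the same classifier at the same time.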
Why the concept of algorithmic fairness is challenging
Fairness can’t be distilled into a single metric or guideline. It encompasses a spectrum of considerations including, but not limited to, equality of opportunity, treatment, and effect.
The multifaceted nature of fairness means that AI systems must be scrutinized at every level of their development cycle, from the initial design and data collection phases to their final deployment and ongoing evaluation. This scrutiny reveals another layer of complexity. AI systems are seldom deployed in isolation. They are used as part of often complex and important decision-making processes, such as making recommendations about hiring or allocating funds and resources, and are subject to many constraints, including security and privacy.

Unintended effects on fairness

In our study on network pruning, a method for making complex machine-learning models smaller and faster, we found that this process can unfairly affect certain groups. This happens because pruning might not account for how different groups are represented in the data and by the model, leading to biased outcomes.
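The following sketch illustrates the mechanism schematically. It uses synthetic data and a simple magnitude-pruning step as stand-ins; it is not the study’s actual code or datasets. A small network learns a rarer pattern for an underrepresented group, and zeroing the smallest weights tends to erase that pattern first.

```python
# Schematic sketch (synthetic data, stand-in pruning step): magnitude pruning
# can degrade accuracy unevenly when one group is underrepresented.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 5000
minority = rng.random(n) < 0.1                 # roughly 10% of samples
x = rng.normal(size=(n, 2))
# Majority outcomes depend on feature 0, minority outcomes on feature 1
y = np.where(minority, x[:, 1] > 0, x[:, 0] > 0).astype(int)
X = np.column_stack([x, minority])

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
model.fit(X, y)

def accuracy_by_group(m):
    pred = m.predict(X)
    return ((pred[~minority] == y[~minority]).mean(),
            (pred[minority] == y[minority]).mean())

print("before pruning:", accuracy_by_group(model))

# Magnitude pruning: zero out the smallest 80% of weights in each layer
for w in model.coefs_:
    cutoff = np.quantile(np.abs(w), 0.8)
    w[np.abs(w) < cutoff] = 0.0

# The minority group's accuracy typically falls further, because the weights
# that encode its rarer pattern tend to be smaller and are pruned first
print("after pruning:", accuracy_by_group(model))
```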
Making AI fair isn’t straightforward, and there are no one-size-fits-all solutions. It requires a process of continuous learning, adaptation, and collaboration. Given that bias is pervasive in society, I believe that people working in the AI field should recognize that it’s not possible to achieve perfect fairness and instead strive for continuous improvement.