I. Misgendering = Misrepresentation
Artificial intelligence is a constantly evolving field that affects millions of people around the world. One of its most visible applications is automatic gender recognition (AGR), a subfield of facial recognition that aims to infer a person's gender from images. Despite the existence of neural networks that perform this task efficiently, facial recognition systems remain flawed, manifesting gender bias and favouring some segments of society over others. The discussion around this bias has gained prominence in academia, the technology industry, and politics, highlighting how training datasets can reflect pre-existing inequalities in society.
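Stripped of its engineering, an AGR system amounts to a classifier fitted to binary gender labels over image features. The sketch below is a deliberately reduced, purely illustrative stand-in (synthetic vectors in place of real face embeddings, scikit-learn in place of whatever model a production system actually uses, all names invented for this example); its only purpose is to show that such a system can answer only within the categories it was trained on.

```python
# Illustrative reduction of an AGR-style pipeline (synthetic data only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-ins for face embeddings: one 128-dimensional vector per image.
n_images, dim = 1000, 128
X = rng.normal(size=(n_images, dim))

# The training labels encode gender as a binary category by design;
# this label scheme, not just the model, is where misgendering begins.
y = rng.integers(0, 2, size=n_images)

# The "recognition" step is an ordinary classifier fitted to those labels.
clf = LogisticRegression(max_iter=1000).fit(X, y)

# At inference time the model can only ever answer within the categories
# it was trained on, regardless of how the person identifies.
new_embedding = rng.normal(size=(1, dim))
print(clf.predict(new_embedding))  # -> 0 or 1, nothing else
```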
(De)coding Gender Bias addresses the social and cultural impacts of this phenomenon, seeking to deconstruct gender bias in facial recognition systems. To this end, it highlights the moments in the process where errors occur, revealing limitations, biases, flaws, fallacies, and vulnerabilities of AI that are not purely technical but rooted in social bias.
The project seeks to show how machine learning systems can reflect and perpetuate gender stereotypes, echoing Kate Crawford’s observation that “although technical systems present a veneer of objectivity and neutrality” they are often “designed to serve and intensify existing systems of power.” Following Pasquinelli and Joler, it takes Automatic Gender Recognition (AGR) as a case through which to illustrate two facets of machine learning simultaneously: its functioning and its failures. This involves enumerating its main components, as well as the wide range of errors, limitations, approximations, biases, faults, fallacies, and vulnerabilities inherent to its paradigm. This dual operation emphasizes that AI is not a monolithic paradigm of rationality but a complex architecture of adaptive techniques and strategies, whose boundaries extend beyond technical limitations and are intricately intertwined with human bias.
This project was developed by Joana Costa as part of the coursework for the classes Project II and Laboratory II, within the Master's program in Communication Design at FBAUL, during the academic year 2022/2023.
All texts used in this project are from the following references:
- OS KEYES (2018). The Misgendering Machines: Trans/HCI Implications of Automatic Gender Recognition. University of Washington, USA. https://ironholds.org/resources/papers/agr_paper.pdf
- JOLER & PASQUINELLI (2020). The Nooscope Manifested. https://nooscope.ai/
- MATTEO PASQUINELLI (2019). How a Machine Learns and Fails – A Grammar of Error for Artificial Intelligence. https://spheres-journal.org/contribution/how-a-machine-learns-and-fails-a-grammar-of-error-for-artificial-intelligence/
Machine learning learns nothing in the proper sense of the word, as a human does.
The quality of training data is the most important factor affecting the so-called ‘intelligence’ that machine learning algorithms extract.
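As a hedged illustration of this point (synthetic data and invented names, not drawn from any of the cited works), the sketch below trains the same classifier twice on the same features, once with labels that follow the toy ground truth and once with labels from a simulated, systematically biased annotator. Only the data changes, and with it everything the model “knows”.

```python
# The algorithm faithfully reproduces whatever its training labels encode,
# including a systematic annotation bias: that is all it has to learn from.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Toy ground truth: the label is simply the sign of the first feature.
def truth(X):
    return (X[:, 0] > 0).astype(int)

X_train = rng.normal(size=(2000, 5))
X_test = rng.normal(size=(1000, 5))

# Two label sets over the same training features: the ground truth,
# and a simulated annotator whose decision threshold is shifted.
y_clean = truth(X_train)
y_biased = (X_train[:, 0] > 1.0).astype(int)

model_clean = LogisticRegression(max_iter=1000).fit(X_train, y_clean)
model_biased = LogisticRegression(max_iter=1000).fit(X_train, y_biased)

print("trained on clean labels :", model_clean.score(X_test, truth(X_test)))
print("trained on biased labels:", model_biased.score(X_test, truth(X_test)))
# Same algorithm, same features: only the training data changed,
# and with it the so-called 'intelligence' the model extracts.
```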
Representation bias occurs when the development sample under-represents some part of the population, and the model subsequently fails to generalize well for a subset of the population on which it is used.
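The sketch below illustrates this definition with synthetic data (group names and numbers are invented for the example): a classifier trained on a sample that under-represents one subgroup, whose labelling rule also differs, scores noticeably lower on that subgroup at test time.

```python
# Representation bias in miniature: group B is scarce in the training
# sample, so the model generalizes poorly for group B at test time.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def make_group(n, shift):
    """Synthetic two-class data; `shift` changes the labelling rule per group."""
    X = rng.normal(size=(n, 5))
    y = (X[:, 0] + shift * X[:, 1] > 0).astype(int)
    return X, y

# Group A dominates the development sample; group B is under-represented.
Xa_tr, ya_tr = make_group(2000, shift=0.0)
Xb_tr, yb_tr = make_group(100, shift=2.0)

clf = LogisticRegression(max_iter=1000).fit(
    np.vstack([Xa_tr, Xb_tr]), np.concatenate([ya_tr, yb_tr])
)

# Balanced held-out sets, evaluated per group.
Xa_te, ya_te = make_group(1000, shift=0.0)
Xb_te, yb_te = make_group(1000, shift=2.0)

print("accuracy on group A:", clf.score(Xa_te, ya_te))
print("accuracy on group B:", clf.score(Xb_te, yb_te))
# Group B typically scores much lower: the model fits the population
# it was shown, not the population it is used on.
```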
To combat gender bias in algorithms, it is crucial to address the underlying societal factors that perpetuate such biases; technical fixes alone are not enough, and a comprehensive approach is essential for rectifying the issue.
Thank you for reading.