Good AI? Humanity, Technology & Justice
In this talk, Eleanor debunks some myths about AI, combating misinformation and illuminating some of the blind spots in our current understanding of what AI is and what it can do. She gives an overview of the key ethical issues and explains why we need to rethink our relationship to technology. Along the way, she explores the nature of quantification (“why some things can’t be counted”), the disquieting implications of AI’s encroachment on civil liberties and protests, and how feminist and anti-racist ideas are enriching AI development and regulation.
Artificial intelligence is not an ethereal “mind” but a network of materials, human labor, and decisions with real consequences. Research from Cambridge warns that the myth of technological neutrality dresses up old prejudices in new clothes. The key to “good” AI is therefore ethics, regulation, and social responsibility, not just rapid innovation.

AI has a body: from silicon to annotators

AI emerges from natural resources and manual labor, from single-crystal silicon to the thousands of people who label data. What we often call the objectivity of machines is in fact a set of subjective decisions about how to “simplify” the world into a database. The image of AI as a neutral technology is therefore more wishful thinking than fact.

In hiring, tools now in circulation claim to estimate a candidate’s personality from video, reading the face against the “Big Five” traits. A team from Cambridge showed that changing the contrast, brightness, or saturation of a photo changes the “personality” score: proof that the model measures the image, not the personality. Revived ideas of “cultural fit” recall old phrenology: the promise of simple answers to complex people.
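To make that perturbation experiment concrete, here is a minimal sketch of such a test in Python: the same photo is re-scored after small brightness, contrast, and saturation adjustments. The `score_personality` function is a hypothetical stand-in (a toy statistic over pixel values), not the Cambridge team’s method or any real hiring tool, and `candidate_photo.jpg` is an assumed input file; a real audit would query the vendor’s model the same way.

```python
# Sketch of a perturbation-sensitivity test: if low-level image edits shift
# the output, the model is reacting to pixels, not to the person.
from PIL import Image, ImageEnhance
import numpy as np


def score_personality(img: Image.Image) -> float:
    # Hypothetical stand-in scorer: a simple function of pixel statistics,
    # used here only to show how an image-coupled "trait score" behaves.
    arr = np.asarray(img.convert("RGB"), dtype=np.float32)
    return float(arr.mean() / 255.0)


def perturbation_test(path: str) -> None:
    img = Image.open(path).convert("RGB")
    baseline = score_personality(img)
    print(f"baseline score: {baseline:.4f}")

    # Same person, same frame; only low-level image properties change.
    enhancers = {
        "brightness": ImageEnhance.Brightness,
        "contrast": ImageEnhance.Contrast,
        "saturation": ImageEnhance.Color,  # PIL's saturation enhancer
    }
    for name, enhancer in enhancers.items():
        for factor in (0.8, 1.2):
            perturbed = enhancer(img).enhance(factor)
            score = score_personality(perturbed)
            print(f"{name} x{factor}: score {score:.4f} "
                  f"(delta {score - baseline:+.4f})")


if __name__ == "__main__":
    perturbation_test("candidate_photo.jpg")  # hypothetical input file
```

A stable measure of personality should be invariant to edits like these; any nonzero deltas reported by such a test indicate the score tracks the image rather than the candidate.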