Event overview
Thao Phan and Fabian Offert (paper 1) and Maya Indira Ganesh (paper 2).
Are some things (still) unrepresentable?
Thao Phan and Fabian Offert
“Are some things unrepresentable?” asks a 2011 essay by Alexander Galloway. It responds to a similarly titled, earlier text by the philosopher Jacques Rancière examining the impossibility of representing political violence, with the Shoah as its anchor point. How, asks Rancière, and to what extent, can political violence be represented? What visual modes, asks Galloway, can be used to represent the unrepresentable? In this talk, we examine two contemporary artistic projects that deal with this problem of representation in the age of artificial intelligence.
Exhibit.ai, the first project, was conceived by the prominent Australian law firm Maurice Blackburn and focuses on the experiences of asylum seekers incarcerated in one of Australia’s infamous “offshore processing centres”. It attempts to bring ‘justice through synthesis’, to mitigate forms of political erasure by generating an artificial record using AI imagery. Calculating Empires: A Genealogy of Power and Technology, 1500-2025, the second project, is a “large-scale research visualization” exploring the historical and political dependence of AI on systems of exploitation in the form of a room-sized flow chart.
On the surface, the two projects could not be more different: the first uses AI image generators to create photorealistic depictions of political violence as a form of nonhuman witnessing (Richardson), while the second uses more-or-less traditional forms of data visualization and information aesthetics to render visible the socio-technical ‘underbelly’ of artificial intelligence. And yet, as we argue, both projects construct a highly questionable representational politics of artificial intelligence, in which a tool that is itself unrepresentable for technical reasons becomes an engine of ethical and political representation.
‘I became a Woman of Colour in 2013’: De-centering Whiteness for Savarna-ness in thinking about technology and power.
Maya Indira Ganesh
This is a short and early conversation about re-configuring studies of AI and bias away from a positionality that centres Whiteness, chiefly because such a positionality obscures the axes of power and discrimination that matter in the lived realities of the global majority. Indian Dalit, Bahujan, and Adivasi (DBA) scholars have already developed a body of work showing the intersections of caste power, discrimination, and technology. As their work, and that of new Dalit intellectuals, argues, caste is not just another demographic marker to be ticked off in a bias mitigation toolkit. This is partly to do with how caste privilege and hierarchies present themselves and are experienced. But a more compelling reason is that we are still in need of radical and speculative approaches that might flip the script by drawing attention to Savarna privilege and how power works, rather than how oppression is experienced.
Dates & times
Date | Time |
---|---|
27 Mar 2025 | 5:00pm - 7:00pm |
Accessibility
If you are attending an event and need the College to help with any mobility requirements you may have, please contact the event organiser in advance to ensure we can accommodate your needs.