Asking the Wrong Questions about Generative AI: Emergent Ethics & Aesthetics in Machine Collaboration

Sarah Ciston considers how creators interested in using generative tools like ChatGPT and Midjourney should account for the hundreds of thousands of unwitting co-authors whose content was scraped from the web to train these proprietary models. Ciston presents alternative techniques that can make space for new aesthetics and ethics to emerge through community-centred approaches to machine learning: namely, conscientious dataset stewardship, small dataset curation, data sovereignty, and reimagining machine learning models from scratch.

Sarah Ciston

Sarah Ciston is a poet-programmer who loves building tools to bring intersectional approaches to machine learning and building community through accessible creative-critical coding. They are an AI Anarchies Fellow with the Akademie der Künste, a Mellon PhD Fellow in Media Arts and Practice at the University of Southern California, and an Associated Researcher at the Humboldt Institute for Internet and Society, as well as the author of "A Critical Field Guide to Working with Machine Learning Datasets" from the Knowing Machines research project. Their work has been featured in Leonardo Electronic Almanac; Ada: Journal of Gender, New Media, and Technology; and CITAR: Journal of Science and Technology of the Arts. They also lead Creative Code Collective, a student community for co-learning programming using approachable, interdisciplinary strategies. Sarah's projects include the Intersectional AI Toolkit, an interactive natural language processing (NLP) database to "rewrite" the inner critic, and a bot that tries to explain feminism to online misogynists. Sarah is currently developing a "queer love corpus" that experiments with alternative methods of conscientious data stewardship in order to counter large language models like ChatGPT.
