
RESPONSIBLE AI ART GUIDE

Machine learning tools provide the potential for unprecedented creative expression, but also the potential to cause unintended harms.

“Making AI Art Responsibly: A Field Guide” is an illustrated zine of questions and case studies that helps artists use AI techniques responsibly and with care.

We suggest that artists using AI consider themselves part of the broader responsible AI community. That means weighing factors such as the consent of people represented in your datasets, the labor behind model development and the pre-existing codebases and tools you build on, and the infrastructure and environmental costs of training machine learning models. We hope that artists will attend to the potential unintended harms of their work in domains like information security, misinformation, the environment, copyright, and biased and appropriative synthetic media.

We believe that by reflecting on what “responsible” means for their own creative work, artists can push forward best practices for all AI practitioners.

Doing so helps ensure they harness the expressive potential of AI responsibly.

The Responsible AI Art Field Guide has been featured by Gray Area, DISEÑA, and Mozilla Festival.

Read the full guide here: Making AI Art Responsibly: A Field Guide


Leibowicz, Saltz, and Coleman bring together interdisciplinary perspectives from media studies, human-computer interaction, AI research, and fine arts. They believe that when it comes to the rapidly evolving AI field, artists using AI techniques have a responsibility to engage in conversation with other disciplines, especially the responsible AI community. The authors create exploratory prompts that are not prescriptive, but instead guide audiences to reflect on their own practices in order to create work with more care and attention to broader societal impacts.

Claire Leibowicz

Claire Leibowicz leads the AI and Media Integrity program at the Partnership on AI. Claire holds a BA in Psychology and Computer Science from Harvard, Phi Beta Kappa and magna cum laude, and a Master’s in the Social Science of the Internet from the University of Oxford as a Clarendon Scholar.

Emily Saltz

Emily Saltz is a Research Consultant at the Partnership on AI studying misinformation. Before that, Emily led UX for The News Provenance Project at The New York Times. She has presented work on new media at Eyeo, CHI Play, WordHack, and more. She has a Master’s in Human-Computer Interaction from Carnegie Mellon University.

Lia Coleman

Lia Coleman is an artist, AI researcher, and educator who teaches machine learning artwork at Rhode Island School of Design. She has presented work at NeurIPS, New York University, Mozilla Festival, Gray Area, and Partnership on AI. Her writing on AI art has been published by Princeton Architectural Press and Neocha Magazine. She holds a BSc in Computer Science from Massachusetts Institute of Technology.


Watch Video