Red-teaming genAI raises not only methodological questions — how and when to red-team, who should participate, how results should be used — but also thorny conceptual ones: Whose interests are being protected? What counts as problematic model behavior, and who gets to define it? Is the public an object being secured, or a resource being used? In this report, Ranjit Singh, Borhane Blili-Hamelin, Carol Anderson, Emnet Tafesse, Briana Vecchione, Beth Duckles, and Jacob Metcalf offer a vision for red-teaming in the public interest: a process that goes beyond system-centric testing of already-built systems to consider the full range of ways the public can be involved in evaluating genAI harms.
The illustration shows hands from different backgrounds coming together to examine and evaluate a seemingly bottomless cubic hole, using traditional tools to measure something that is untraditional and not yet fully defined. Although collaboration is suggested, the image is less about collaboration than about the challenges of red-teaming itself.