Andrea Downing
They deployed Health AI on us. We’re bringing the rights & red teams.
ABSTRACT
AI is rapidly reshaping healthcare—from diagnostics to mental health chatbots to surveillance inside electronic health records (EHRs)—often without patient consent or clear oversight. The Patient AI Rights Initiative (https://lightcollective.org/patient-ai-rights/) lays out the first patient-authored ethical framework for Health AI. Now it’s time to test it like any other system: for failure, bias, and exploitability.
We’ll introduce the 7 Patient AI Rights and challenge participants to stress-test them through the lens of security research. Working in small groups, you’ll choose a Right and explore how it could break down in the real world.
Together, we’ll co-create early prototypes for a “Red Teaming Toolkit for Health AI” to evaluate Health AI systems based on the priorities of the people most impacted by them: patients.
This session is ideal for patient activists, engineers, bioethicists, and anyone interested in building accountable, rights-respecting AI systems from the outside in.
