Ishanu Chattopadhyay

Assistant Professor of Medicine

University of Chicago

DATA 

AI

ETHICS

Friday the 13th, October 2023

AI governance

A legal framework for ensuring that machine learning technologies protect fairness, transparency, privacy, human safety, and the confidentiality of personal records.

Fair housing laws prohibit financial officials from making loan decisions based on race, gender, or marital status.

 

Yet AI designers can inadvertently find proxies that approximate these characteristics, allowing information about protected categories to be incorporated without any explicit use of demographic background.
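As a hedged illustration of how such proxies slip in: the sketch below uses synthetic data and hypothetical feature names (a neighborhood income statistic standing in for ZIP-code information) to show that a loan model trained without any protected attribute can still produce approval rates that differ across the unseen groups.

```python
# Minimal sketch: a model trained WITHOUT a protected attribute can still
# encode it through a correlated proxy feature (here, a synthetic
# "zip_income" variable). All names and numbers are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (never shown to the model)
group = rng.integers(0, 2, n)

# Proxy: neighborhood income statistic that correlates with group membership
zip_income = rng.normal(50 + 15 * group, 10, n)

# A legitimate feature
credit_score = rng.normal(650, 50, n)

# Outcome used for training (itself shaped by historical patterns)
approve = (0.02 * credit_score + 0.05 * zip_income + rng.normal(0, 2, n)) > 16.5

X = np.column_stack([credit_score, zip_income])   # group is excluded
model = LogisticRegression(max_iter=1000).fit(X, approve)
pred = model.predict(X)

# Approval rates still differ by the (unseen) protected attribute
for g in (0, 1):
    print(f"group {g}: approval rate = {pred[group == g].mean():.2f}")
```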

  • It is difficult, and sometimes impossible, to measure fairness in general
  • It may be even more difficult to measure equity

Reported autism prevalence is lower among Black and Hispanic children.

 

It is not clear whether this under-diagnosis is due to biases in healthcare access or to an underlying biological mechanism.

 

Tools to make diagnosis more efficient might reflect this same "inequity".

Ian Cero, Peter A. Wyman, I. Chattopadhyay, Robert D. Gibbons, Predictive equity in suicide risk screening, Journal of the Academy of Consultation-Liaison Psychiatry, 2023. https://doi.org/10.1016/j.jaclp.2023.03.005

AI Equity & Fairness

Helping patients with suicidal ideation

Suicide is a major public health concern

1 death by suicide every 40 seconds

 

According to CDC data, in 2019 there were over 47,500 suicide deaths in the U.S., an age-adjusted rate of 13.9 per 100,000 individuals.

 

10th leading cause of death in the United States

Screening tests are increasingly common

Columbia-Suicide Severity Rating Scale (C-SSRS)

Patient Health Questionnaire-9 (PHQ-9)

Ask Suicide-Screening Questions (ASQ)

These screening tools are not meant to be diagnostic but rather to help identify individuals who may need further evaluation or intervention to prevent suicide.

Primary Care

Emergency Dept

School & Community

The increasing standardization of suicide risk screening suggests that predictive models must balance not only accuracy but also fairness across the different groups of people whose futures are being predicted.


The Ask Suicide-Screening Questions (ASQ) tool has high and equivalent sensitivity and specificity for suicide ideation across Black and White youth in the emergency department.


Different base rates (prevalence)

6.11 per 100,000 vs. 15.68 per 100,000
Non-Hispanic White vs. Black
(CDC 2019 data)

Uneven base rates

Mathematically unavoidable trade-off between model accuracy and fairness

Predictive disparity is likely caused by uneven base rates of the outcome being predicted*
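A minimal numerical sketch of why this is unavoidable: if sensitivity and specificity are held equal across two groups but prevalence differs, the positive predictive value necessarily differs. The sensitivity and specificity values below are hypothetical placeholders; only the base-rate disparity is taken from the slide above.

```python
# Sketch of the base-rate trade-off: with equal sensitivity and specificity,
# groups with different prevalence necessarily get different PPVs.
# Sensitivity/specificity values here are hypothetical placeholders.
def ppv(sensitivity, specificity, prevalence):
    tp = sensitivity * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    return tp / (tp + fp)

sens, spec = 0.95, 0.90                                   # assumed equal across groups
base_rates = {"group A": 6.11e-5, "group B": 15.68e-5}    # per-100,000 rates quoted above

for name, p in base_rates.items():
    print(f"{name}: prevalence {p:.2e} -> PPV {ppv(sens, spec, p):.4f}")
```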

Universal Screening for Suicidal Ideation / Attempts (UCM data)

Blacks: AUC ~90%
Non-Hispanic Whites: AUC ~88%

Assume you have $1,000,000 to allocate to the post-screening follow-up service.

Number of actual individuals helped (demographic breakdown at UCM):

  • Allocation by demographic breakdown at UCM: 15 + 25 = 40
  • Demographic breakdown + differential base rate (race-blind follow-up): 9 + 49 = 58
  • Equal-outcome allocation: 17 + 17 = 34

The Ethics Question

  • Distribute resources race-blind: 58 lives saved
  • Distribute resources to make outcomes equal: 34 lives saved
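The structure of this comparison can be sketched in a few lines. All counts and costs below are hypothetical stand-ins (not the UCM figures); the sketch only reproduces the logic of splitting a fixed follow-up budget either race-blind (in proportion to screen positives) or to equalize outcomes across groups.

```python
# Sketch of the allocation dilemma: a fixed follow-up budget can be split
# race-blind (proportional to expected positives) or to equalize outcomes
# across groups. All numbers below are hypothetical placeholders.
budget = 1_000_000          # total follow-up budget ($)
cost_per_followup = 1_000   # cost to follow up one screen-positive patient ($)

# Expected screen-positive patients per group (hypothetical counts)
positives = {"group A": 900, "group B": 300}
slots = budget // cost_per_followup          # 1000 follow-up slots

# Race-blind: allocate slots in proportion to each group's positives
total = sum(positives.values())
race_blind = {g: min(positives[g], round(slots * positives[g] / total))
              for g in positives}

# Equal-outcome: help the same number of individuals in every group,
# limited by the smallest group's positives and by the budget
per_group = min(min(positives.values()), slots // len(positives))
equal_outcome = {g: per_group for g in positives}

for name, alloc in [("race-blind", race_blind), ("equal outcome", equal_outcome)]:
    print(f"{name}: helped per group {alloc}, total helped {sum(alloc.values())}")
```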

The new frontier of predictive fairness in suicide prediction

  • Large scale and prospectively designed studies are needed to investigate the full scope of the problem and optimal alternatives, considering not only traditional cost measures but also screening mistakes and community stakeholders' preferences.
  • Suicide prevention research can be informed by progress in algorithmic fairness, such as predictive models constrained by a fairness budget (a minimal sketch follows this list) and survey methods to elicit desired fairness trade-offs from community members.
  • New best practices in predictive modeling of suicide risk should include optimization of both accuracy and fairness.
  • Practice guidelines for individual clinicians need to be developed based on prospective research studies, with caution against making ad hoc adjustments to screening and risk thresholds.
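One way to read "constrained by a fairness budget" is as a brute-force grid search over group-specific thresholds: maximize total detections subject to the true-positive-rate gap staying within a tolerance epsilon. The sketch below uses synthetic data and a toy risk score; it does not reproduce any of the cited studies.

```python
# Minimal sketch of a "fairness budget": choose group-specific thresholds that
# maximize detected cases while the TPR gap stays below epsilon.
# Data and the risk score are synthetic/hypothetical.
import numpy as np

rng = np.random.default_rng(1)

def simulate(n, prevalence):
    y = rng.random(n) < prevalence
    score = np.clip(0.6 * y + rng.normal(0.3, 0.15, n), 0, 1)  # toy risk score
    return y, score

groups = {"A": simulate(5000, 0.02), "B": simulate(5000, 0.05)}
thresholds = np.linspace(0.3, 0.9, 61)
epsilon = 0.05                      # fairness budget on the TPR gap

def tpr_and_tp(y, score, t):
    flagged = score >= t
    tp = np.sum(flagged & y)
    return tp / max(y.sum(), 1), tp

best = None
for ta in thresholds:
    for tb in thresholds:
        tpr_a, tp_a = tpr_and_tp(*groups["A"], ta)
        tpr_b, tp_b = tpr_and_tp(*groups["B"], tb)
        if abs(tpr_a - tpr_b) <= epsilon:
            cand = (tp_a + tp_b, ta, tb, tpr_a, tpr_b)
            if best is None or cand > best:
                best = cand

print("total TPs, thresholds, TPRs:", best)
```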

References

  • [1] Coley RY, et al. JAMA Psychiatry. 2021;78(7):726–34.
  • [2] Kearns M, Roth A. Oxford University Press; 2019.
  • [3] Wang X, et al. Manag Syst Eng. 2022;1(1):7.
  • [5] Kleinberg J, et al. ArXiv160905807 Cs Stat. 2016.
  • [8] Jung C, et al. arXiv. 2020.
  • [11] Zafar MB, et al. arXiv. 2017.
  • [12] Dwork C, et al. Proceedings of the 3rd Innovations in Theoretical Computer Science Conference. 2012.

Crime Prediction 

Predictive Technologies in Policing

  • Predictions cannot be self-fulfilling prophecies
  • Transparency
  • Explore the interaction between crime, social factors and enforcement
  • Ability to predict law enforcement response to crime as well

 

  • Analyze whether such responses reveal existing policy biases
  • High accuracy and actionable predictions
  • Probe existing inefficiencies and biases
  • Almost no assumptions about the nature and distribution of crime
  • No manual feature selection: minimizes implicit bias
  • Maximize transparency: de-identified data and open source code
  • Not based on hotspot detection

Performance verified in 8 US cities: Chicago, LA, Philadelphia, San Francisco, Detroit, Austin, Portland, Atlanta

  • Optimize resource allocation
  • Preventive interventions possible
  • Can test-drive different policies in high-fidelity simulation before deployment

Rotaru, V., Huang, Y., Li, T. et al. 

Event-level prediction of urban crime reveals a signature of enforcement bias in US cities. Nature Human Behaviour 6, 1056–1068 (2022).

https://doi.org/10.1038/s41562-022-01372-0

For every 10 crimes: 11 flags, 3 false, 2 missed (1 week in advance, within ~2 city blocks)
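Those headline figures map onto standard detection metrics; the arithmetic, using only the numbers quoted above, works out as follows.

```python
# Quick arithmetic check of the quoted performance: for every 10 crimes the
# model raises 11 flags, of which 3 are false, and 2 crimes are missed.
actual = 10
flags = 11
false_flags = 3
missed = 2

true_flags = flags - false_flags          # 8 correct flags
detected = actual - missed                # 8 of 10 crimes anticipated

precision = true_flags / flags            # ~0.73
recall = detected / actual                # 0.80

print(f"precision ~ {precision:.2f}, recall ~ {recall:.2f}")
```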

Infer cross-dependencies at different spatial and temporal scales
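The sketch below is a loose illustration of the idea of lagged cross-dependencies between locations, not the model of Rotaru et al.: on synthetic daily event counts, it measures how strongly one spatial tile's activity predicts another tile's activity a few days later.

```python
# Loose illustration only -- NOT the model from Rotaru et al. (2022).
# It shows the idea of a lagged cross-dependency between event-count series
# recorded in two spatial tiles, using synthetic data.
import numpy as np

rng = np.random.default_rng(7)
days = 365

# Synthetic daily event counts: tile B partially "follows" tile A with a 3-day lag
tile_a = rng.poisson(2.0, days)
tile_b = rng.poisson(0.5, days)
tile_b[3:] += rng.binomial(tile_a[:-3], 0.4)

def lagged_corr(x, y, lag):
    """Correlation between x[t] and y[t + lag]."""
    if lag == 0:
        return np.corrcoef(x, y)[0, 1]
    return np.corrcoef(x[:-lag], y[lag:])[0, 1]

for lag in range(0, 8):
    print(f"lag {lag} days: corr(tile A -> tile B) = {lagged_corr(tile_a, tile_b, lag):.2f}")
```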

Signature of enforcement inequity

Results corroborated by the signature observed in the raw data

The Problem of Free Will

3-day-ahead prediction, Jan 1 2019 to April 1 2019

Triangles: actual events; heatmap: predicted risk 3 days ahead

Triple homicide incident, Jan 7 2019

https://www.inquirer.com/crime/kensington-triple-shooting-homicide-philadelphia-police-20190107.html

Triangles: actual events; heatmap: predicted risk 3 days ahead

Question:

 

How do we effectively leverage AI for social good?

 

AI Governance

By Ishanu Chattopadhyay

Predictive modeling of crime and rare phenomena using fractal nets