Title Text

 

University of Delaware

Department of Physics and Astronomy

 

federica bianco

she/her

 

Data Science Institute

Biden School of Public Policy & Administration

UD Center for Science, Ethics, and Public Policy

Ethics and diversity in AI and ML

This deck

fbianco@udel.edu

Reactive Framework

Organizations inspect their ethical responsibility after incidents

Proactive Framework

NASA has the opportunity to set up its AI operations embedded in an ethical framework

  • Ethics of AI
  • Responsible AI
  • Human-centered AI
  • Any artificial system that performs tasks under varying and unpredictable circumstances without significant human oversight, or that can learn from experience and improve performance when exposed to data sets.
  • An artificial system developed in computer software, physical hardware, or other context that solves tasks requiring human-like perception, cognition, planning, learning, communication, or physical action.
  • An artificial system designed to think or act like a human, including cognitive architectures and neural networks.
  • A set of techniques, including machine learning, that is designed to approximate a cognitive task.
  • An artificial system designed to act rationally, including an intelligent software agent or embodied robot that achieves goals using perception, planning, reasoning, learning, communicating, decision-making, and acting.

from

Ethics of Science

to

Ethics of AI

The Guidelines on Ethics from the American Physical Society state: As citizens of the global community of science, physicists share responsibility for its welfare.

The Dual-Use Dilemma: Research delivers knowledge that can be used for unintended purposes, including illegal and societally or ethically disputable applications. The responsibility of the tools' creator has long been the subject of debate. The same is true for innovation in data science, machine learning, and artificial intelligence.

 

unexpected consequences of NLP models

 

Vinay Prabhu

unethical applications of Facial Recognition

Who is responsible?

Tool creator? 

AI developer

 

Practitioner that selects the tool?

Sheriff/DA office?

 

Decision maker that interprets the tool's results?

Police?

Ethics of AI

moral implications of the technology we label AI.

Information philosophy

Data Ethics 

Futurism

Sci-fi

machine ethics

is concerned with ensuring that the behavior of machines toward human users is ethically acceptable

Human-Robot Interaction

is concerned with the interactions between humans and robotic systems

Privacy and Surveillance

collection of and access to information

Algorithmic bias and trust in AI

embedded bias and the opacity of ML algorithms

Ethics of AI

 


machine ethics

is concerned with ensuring that the behavior of machines toward human users is ethically acceptable

  • First Law—A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • Second Law—A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • Third Law—A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Isaac Asimov, Laws of Robotics, 1942

Ethics of AI

 


Human-Robot Interaction

 

Blade Runner, Ridley Scott, 1982

Sophia Robot, Citizen of Saudi Arabia, 2016

David Hanson

Ethics of AI

Privacy and Surveillance


Who collects the data? Who accesses the data?

How do we balance open data, reproducibility, and privacy preservation?

 

 

NASA RELEVANCE: imaging satellites, GPS

Ethics of AI

 

Diebold, Inc., has crossed over the digital threshold with new observation technology that the company believes will transform the surveillance world ... Diebold's answer to analog stemmed from a Space Act Agreement with NASA's Glenn Research Center, in which the North Canton, Ohio-based company acquired the exclusive rights to video observation technology that was designed for high-speed applications and does not require human intervention.

Ethics of AI

Algorithmic Bias and Trust in AI

 

 


ML models "learn" from data examples by minimizing an objective function.

 

 

ML models "learn" from data examples by minimizing an objective function.

 

 


It can learn and amplify bias in data

ML models "learn" from data examples by minimizing an objective function.

 

 

(covariance is gonna get you)

If the data are biased, models learn and amplify the bias
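As a toy sketch of this point (the scenario, groups, and thresholds below are entirely invented for illustration): a model that minimizes its objective on biased historical decisions faithfully reproduces the bias.

```python
import random

random.seed(0)

# Toy "world": an applicant's qualification score is uniform in [0, 1].
# Historically biased labels: group "A" was approved above 0.5,
# group "B" only above 0.7 (the bias baked into the training data).
def biased_label(score, group):
    threshold = 0.5 if group == "A" else 0.7
    return int(score > threshold)

samples = [(random.random(), g) for g in ("A", "B") for _ in range(500)]
data = [(s, g, biased_label(s, g)) for s, g in samples]

# "Learning" here is literally minimizing an objective (0-1 error)
# over a one-parameter model per group: a decision threshold.
def fit_threshold(points):
    best_t, best_err = 0.0, float("inf")
    for t in (i / 100 for i in range(101)):
        err = sum(label != int(s > t) for s, label in points)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

t_A = fit_threshold([(s, label) for s, g, label in data if g == "A"])
t_B = fit_threshold([(s, label) for s, g, label in data if g == "B"])

# The fitted model reproduces the historical bias:
# group "B" needs a markedly higher score to be approved.
print(f"learned threshold A: {t_A:.2f}, B: {t_B:.2f}")
```

Nothing in the fitting step is "unfair" by itself: the objective is minimized exactly as designed, and the bias comes in entirely through the labels.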

1.

Ethics of AI

Algorithmic Bias

 

 

ML is a representation of the "world"

 

what world?

 

the world we live in?

 

or the world we would like to live in?

 

who decides what that looks like?

Ethics of AI

Trust in AI and AI opacity

 


DARPA’s Explainable Artificial Intelligence Program Gunning & Aha

[Figure: model accuracy vs. interpretability — univariate linear regression, generalized additive models, decision trees, SVM, Random Forest, Deep Neural Nets]

If models cannot be interpreted, can they be trusted?

can they fulfill the Right to Explanation? 

 


Ethics of AI

Trust in AI and AI opacity

 


DARPA’s Explainable Artificial Intelligence Program Gunning & Aha

[Figure: accuracy vs. interpretability/time — univariate linear regression and generalized additive models are "trivially intuitive"; decision trees, SVM, and Random Forest sit in between; for Deep Neural Nets, "we're still trying to figure it out"]

If models cannot be interpreted, can they be trusted?

can they fulfill the Right to Explanation? 

 


[Figure: accuracy vs. interpretability/time for the same models — univariate linear regression, generalized additive models, decision trees, SVM, Random Forest, Deep Neural Nets]

Ethics of AI

Trust in AI and AI opacity

 

 


If models cannot be interpreted, can they be trusted?

can they fulfill the Right to Explanation? 

 



Model selection needs to balance interpretability with accuracy
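A minimal sketch of that trade-off, on invented toy data: a one-coefficient linear model is easy to explain but misses the curvature that an opaque memorizing model captures perfectly.

```python
# Toy data with nonlinear ground truth: y = x^2 on a grid of x values.
xs = [i / 10 for i in range(21)]   # 0.0, 0.1, ..., 2.0
ys = [x * x for x in xs]

# Interpretable model: least-squares line through the origin.
# Its single coefficient has a plain reading: "y grows by `slope` per unit x".
slope = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def linear(x):
    return slope * x

# Flexible but opaque model: memorize the training set (1-nearest neighbour).
# It fits the training data exactly yet offers no compact explanation.
def nearest(x):
    return min(zip(xs, ys), key=lambda p: abs(p[0] - x))[1]

def mse(model):
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

print(f"slope={slope:.2f}, linear MSE={mse(linear):.3f}, 1-NN MSE={mse(nearest):.3f}")
```

The linear model loses accuracy but can answer "why did you predict this?" in one sentence; the memorizer cannot, and that gap only widens for Deep Neural Nets.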

2.

Ethics of AI

Trust in AI and AI opacity

 

 


Ethics of AI

Fairness in AI

 


What should be maximized?

ML models "learn" from data examples by minimizing an objective function.

 

 

the hypothetical trolley problem suddenly becomes real

Ethics of AI

Fairness in AI

 

 

 

 

 


Public safety vs unfair punishment

A fully harmless objective function may not exist. How do we balance potential harm?
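To make the tension concrete, a sketch with entirely made-up confusion counts for a hypothetical risk-score tool applied to two groups: equalizing one error rate can leave another unequal, so "public safety" (misses) and "unfair punishment" (false alarms) pull in different directions.

```python
# Entirely made-up confusion counts, per group:
# (true positives, false positives, true negatives, false negatives)
groups = {
    "A": (40, 10, 40, 10),
    "B": (45, 25, 20, 10),
}

def rates(tp, fp, tn, fn):
    fpr = fp / (fp + tn)              # "unfair punishment": flagged, but harmless
    fnr = fn / (fn + tp)              # "public safety" miss: not flagged, but harmful
    acc = (tp + tn) / (tp + fp + tn + fn)
    return fpr, fnr, acc

for g, counts in groups.items():
    fpr, fnr, acc = rates(*counts)
    print(f"group {g}: FPR={fpr:.2f}, FNR={fnr:.2f}, accuracy={acc:.2f}")
# In this toy example the two groups have similar miss rates (FNR) but very
# different false-alarm rates (FPR): no single objective fixes both at once.
```

This is the structure of the COMPAS debate mentioned in the references: which of these rates the objective function should equalize is an ethical choice, not a technical one.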

3.

the environmental footprint of AI

https://dl.acm.org/doi/abs/10.1145/3442188.3445922

On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜

Bender, Gebru et al 2021 “It is past time for researchers to prioritize energy efficiency and cost to reduce negative environmental impact and inequitable access to resources”

Ethics of AI

A framework for enabling AI within ethical boundaries

Responsible AI

The study of ethical implications of the creation, use, and coexistence with autonomous systems

Responsible AI

Responsible AI is the practice of designing, developing, and deploying AI with good intention to empower employees and businesses, and fairly impact customers and society

AI testing

Adversarial Robustness and Privacy

Fairness, Accountability, Transparency

 

Uncertainty quantification

Explainable AI

The Next Generation Internet initiative by the Digital Single Market of the European Commission.

https://ngi.eu/wp-content/uploads/sites/48/2018/07/Responsible-AI-Consultation-Public-Recommendations-V1.0.pdf

 

 

Reactive Framework

Organizations inspect their ethical responsibility after incidents

Proactive Framework

Set up AI operations embedded in an ethical framework: starting now, NASA has the opportunity to do this

ASK:

 

What is the decision process?

 

Who needs to be involved?

 

What are the values that we should be encoding in the processes that lead to the building, deployment, and use of AI?

"Human-Centered AI"

What is the decision process?

Ben A. Shneiderman - UMD

"Human-Centered AI"

What is the decision process?

I can build it, but should I build it?

Who will be impacted?

Who will be accountable?

Whose job will it replace?

Whose jobs will this enable?

Who is the new workforce and how are they trained?

 

yes, but WHO is testing?

Who needs to be involved?

Diversity is key to identify potential harm

"Human-Centered AI"

models are neutral, the bias is in the data (or is it?)

Why does this AI model whiten President Obama's face?

Simple answer: the data is biased. The algorithm is fed more images of white people

Complex answer: the testing and validation framework validated the bias

model: PULSE (GAN)

Who needs to be involved?

Good and bad implementation strategies

  • Define values and strategic actions ahead of time
  • Commit to maximizing representation in your workforce
  • Commit to algorithmic transparency
  • Hire ethicists to work alongside AI developers
  • Liaise with AI ethics centers and peer organizations
  • Proactively renew education and training for the new jobs enabled by AI

suggested strategies

(suggested by me.... please do consult many other experts and don't take it as an exhaustive list!!)

References:

papers and reports

The National Security Commission on Artificial Intelligence

https://www.nscai.gov/

Ben Shneiderman 2020 "Human-Centered Artificial Intelligence: Trusted, Reliable & Safe" http://www.cs.umd.edu/hcil/trs/2020-01/2020-01.pdf

Bender et al. 2021 On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜 https://dl.acm.org/doi/abs/10.1145/3442188.3445922

Stanford Encyclopedia of Philosophy https://plato.stanford.edu/entries/ethics-ai/#PrivSurv

https://ai.google/

References:

programs, plans, and others

Data Responsibly Comics https://dataresponsibly.github.io/comics/

Microsoft Responsible AI plan https://docs.microsoft.com/en-us/learn/modules/responsible-ai-principles/4-guiding-principles

Google Responsible AI plan https://ai.google/responsibilities/responsible-ai-practices/

DARPA’s Explainable Artificial Intelligence (XAI) Program https://ojs.aaai.org/index.php/aimagazine/article/view/2850

An interview with PwC's Maria Luciana Axente https://www.datacamp.com/community/blog/the-future-of-responsible-ai

PULSE: https://github.com/adamian98/pulse#what-does-it-do

GPT-3 https://arxiv.org/abs/2005.14165v4

GPT-3 text generator https://deepai.org/machine-learning-model/text-generator

COMPAS (wiki entry, no paper) https://en.wikipedia.org/wiki/COMPAS_(software)

References:

models mentioned in the talk

NASAethicsAI

By federica bianco

NASA ML/AI retreat, presentation on Ethics and diversity in ML and AI - Jan 2022