Ethics and AI

with Ben Byford

Why AI?

Why is it exciting?

What's this about Ethics?

What is AI?

Artificial Intelligence
Machine Learning

Data
Data science

What is AI for?

What is Ethics?

Wikipedia

 

Ethics or moral philosophy is a branch of philosophy that involves systematizing, defending, and recommending concepts of right and wrong conduct.

...systematizing conduct.

 

Ethics is an agreed
code of conduct for societies.

A framework / a protocol

(This is a working definition for this talk)

“Ethics is the glue of society”

 

Ben Byford, just now

What is AI Ethics?

"…ethics must become a central piece for any individual or organisations developing AI."

 

Digital Catapult

Machine Intelligence Garage Ethics Committee
chaired by Luciano Floridi

 

Politics & law

Business Ethics

Design Ethics

Data Ethics

Tech Ethics

Robo Ethics

Machine Ethics

Philosophy of mind

Existential risk


Where do these decisions get made?

 

Governments

Business

Designers / Researchers / Data scientists

Technology / Data

Terms to look out for:

 

Responsible AI
Trustworthy AI
Ethical AI
Human-centric AI
Value sensitive design

Getting into Data Science is also getting into the Ethics of AI
by necessity

Why AI Ethics?

(Why should we care?)

In what way do
machines participate in ethical questions?

Education / knowledge and awareness, Appropriate AI uses, Surveillance, AI alignment, Diversity, Direct harms, Human rights and measurements of human flourishing, Job automation, Democracy / Political exploitation, Accountability, Inclusion, Fairness, Impersonation, Environmental impact, Unintended consequences, Transparency, Interpretability, Data protection, Trustworthy, Consent, Unwanted Bias, Data manipulation, Obfuscation and duplication of personal data, Security, Safety, Monitoring, Explainability, Perversion, Third party services, Robustness, Personhood

(Intended) manipulation

 

Dark patterns, obfuscation, algorithmic power, promoting certain behaviours over others

“It is impossible to work in information technology without also engaging in social engineering.”

 

Jaron Lanier, You Are Not a Gadget

Data is biased

(and that's a good thing)

When is bias problematic?

So what's fair?
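"Fair" has several competing formal definitions in the data-science literature, and they often cannot all hold at once. As a minimal sketch (the function names and toy numbers here are invented for illustration), one common way to measure bias in model outputs is to compare decision rates and error rates between groups:

```python
# Illustrative sketch: two common (and often conflicting) fairness metrics,
# computed on toy model decisions. All names and data are hypothetical.

def demographic_parity_diff(preds_a, preds_b):
    """Difference in positive-prediction rates between groups A and B."""
    rate_a = sum(preds_a) / len(preds_a)
    rate_b = sum(preds_b) / len(preds_b)
    return rate_a - rate_b

def true_positive_rate(preds, labels):
    """Share of truly-positive cases the model approved (per group,
    comparing these gives 'equal opportunity')."""
    positives = [p for p, y in zip(preds, labels) if y == 1]
    return sum(positives) / len(positives) if positives else 0.0

# Toy decisions: 1 = approved, 0 = rejected
group_a_preds = [1, 1, 1, 0]   # 75% approved
group_b_preds = [1, 0, 0, 0]   # 25% approved
print(demographic_parity_diff(group_a_preds, group_b_preds))  # 0.5
```

A gap of 0.5 in approval rates may signal problematic bias, or may be justified by the underlying labels; which metric matters, and what gap is acceptable, is exactly the ethical question rather than a purely technical one.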

Robo ethics

(where and when should AI be used?)

Where to get started?

Who the hell is
this guy?

“Integrity without knowledge is weak and useless, and knowledge without integrity is dangerous and dreadful.”

 

Samuel Johnson

Help us build a
better future

"I'm just a developer,
a researcher,

a business"

Extra credits:

Recommendations
for your projects

1. Hire us: www.ethicalby.design - talks, research, consultancy, workshops.
 

2. Create an AI governance group: designers, users, directors, data scientists; allow these people to "own" the process of scrutiny for your projects... They're like AI fire wardens.


3. Risk assess your service / product against ethical issues and unintended consequences, use this to put processes and culture changes into place. Use Horizon Scanning, Red Teaming and other techniques to work out possible outcomes and impacts.

4. Revisit your AI processes, evaluate them and keep up to date with new AI issues in data science and ML algorithms.

 

5. Endeavour to incorporate practices of ethical reflection, like the deon.drivendata.org checklist for data science.


6. Publicly declare your intent, processes and values... you don't have to give up your secret sauce, just give us as much as possible so we can see you're doing well and engaged.

Ethical
frameworks, guidelines, manifestos, declarations, principles, practices, questions, recommendations, toolkits...

“...it's just not good enough to publish your ethics principles”

 

Projects by IF

OECD AI Principles

 

www.oecd.org/going-digital/ai/principles/

  • AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.

  • AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity.

  • They should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society.

  • AI systems must function in a robust, secure and safe way throughout their life cycles and potential risks should be continually assessed and managed.

  • Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.

  • There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.

  • respect humans

  • respect the planet

  • safeguards

  • transparent

  • robust

  • accountable

The good

 

OECD AI Principles

 

https://www.oecd.org/going-digital/ai/principles/


IBM - everyday ethics

https://www.ibm.com/watson/assets/duo/pdf/everydayethics.pdf


Digital catapult - machine intelligence garage,
Luciano Floridi

https://www.migarage.ai/ethics-framework/

Machine Ethics

“Perhaps interacting with an ethical robot might someday even inspire us to behave more ethically ourselves.”

 

Anderson, Anderson 2012

Automated Cars

Super-intelligence

"If trained with a narrow set of data, an artificial intelligence may learn behaviors that are completely unintended and undesirable."


Nick Bostrom

Job displacement

“Our device isn’t meant to make employees more efficient.
It’s meant to completely obviate them.”


Rise of the Robots

What do we need to do now?

A LOT

 

Governments -> Regulation(?), sign-post good practice,
help us decide on basic principles for tech ethics and more.

 

Businesses -> hold themselves accountable,
create services in society's interest, expect unintended consequences

 

Designers / developers / data scientists -> Collaborate on ethical issues, whistleblow where human rights obligations are not being met, make ethical thinking part of your process.

 

Research -> continued research on algorithmic ethical agents (machine ethics), which needs to be more transparent and less clickbaity (Terminator / "your car is going to kill you"), and transparent algorithmic decision making.

Capitalism 

We work (including our ethical thinking) within existing power structures. We need to be reminded of what these systems enable
and leave out.

Machine Ethics podcast episodes to start with

Recommended reading

Instagram: @machineethicspodcast #AiEthicsBookClub

 

  • AI Ethics

  • Who Owns the Future?

  • Future Ethics

AI Ethics AQA

By Ben Byford

Presented at Women's Tech Hub data science conference 2022