Ben Byford
AI ethics | freelance web and games designer
Autonomous warfare; AI cyberattacks; bad actors; tools, research, data, and models pooled into centralised organisations (monopolies); pervasive surveillance; job automation and displacement; upskilling; using third-party algorithms; company acquisition; team diversity for diverse outcomes; impacts on currently held freedoms, e.g. free speech and democracy; accountability; secrecy; when it is not appropriate to automate; unintended consequences; interpretability; explainability; transparency; impersonation; exploitation; false positives and false negatives; dogmatic systems; behaviour change and shifting social norms; discrimination; failure states; model drift; biased data and models; appropriate dataset size; data reliability; data consent; personal data use; re-identification of anonymised data; thresholds for acceptable model outcomes in any given domain; environmental impact of training models; safety of new services; AGI goal alignment; AI personhood
By Ben Byford
Where is tech taking us? How will our lives look, and how do we manage technology so that it doesn’t end up managing us?