Burton, Emanuelle; Goldsmith, Judy; Mattei, Nicholas; Siler, Cory; Swiatek, Sara-Jo. Computing and Technology Ethics: Engaging through Science Fiction (pp. 8-9). MIT Press.
https://www.britannica.com/technology/printing-press
https://news.colgate.edu/magazine/2020/05/01/how-to-survive-in-ancient-greece/
https://en.wikipedia.org/wiki/Woman_yelling_at_a_cat#/media/File:WomanYellingAtACat_meme.jpg
Often, there is no single best answer. In most cases, every possible choice has trade-offs
Three types of problems:
limited resources
competing types of goals
different ideas about what is good
limited resources:
What should be done when the demand for something far exceeds how much of it is available?
These issues seem easily solved by using technology to "balance the scales"
e.g. artificial hearts to replace failing ones, more fuel-efficient cars so that oil is less in demand, or high-yield crops that can feed more people
Often this does not solve the problem completely, because the solution introduces additional costs of its own
competing types of goals:
multiple types of things or goals come into conflict
there is more than one way to achieve "good"
e.g. you get two job offers, both of which are good, but you need to choose one
different ideas about what is good:
sometimes people share the same goals but disagree on how to get there
e.g. is nuclear power a good option for the environment or a terrible one? Two environmental advocates with the same overall goal (save the planet) can come to different conclusions, disagreeing on how to assess environmental well-being or on what policies or practices should be changed
these disagreements are often invisible even to the people having the argument - they interpret the goal of "save the planet" to mean different things
Deontological: This framework is based on a set of rules such as company rules, government rules, or religious rules such as the Ten Commandments. If the action in question adheres to the rules, it is considered to be the right thing to do in that context.
Utilitarian: There are many varieties of this approach, but the basic idea is that the right thing to do is the thing that brings the most utility, or happiness, to the most people. While the action may help some and hurt others, if the benefits to some outweigh the costs to others, the action is “the right thing.”
Moral intuition: an unreasoned (but not unreasonable) reaction
This week we read 3 things:
1. From your observations, how do tech designers and companies currently incorporate (or not incorporate) moral virtues, such as honesty, justice, or empathy, into their design processes? Can you provide examples of technologies where these virtues seem to be evident?
2. Both “Designing a Future Worth Wanting” and “The Gambler” suggest that the speed of technological change is a reason virtue ethics is a better framework for technology ethics.
a. How does the article argue this point?
b. How does The Gambler reveal this point?
3. How do users typically respond when they feel a technology detracts from their moral or ethical values? Have you noticed any patterns in how individuals or communities push back against technology that is seen as harmful or unethical?
In The Gambler, the “maelstrom” represents a powerful force shaping media content based on public interest and social pressures.
How does this reflect what you’ve observed about how social media platforms and algorithms shape user behavior and decision-making?
Are people aware of how much social pressure influences the content they consume and create?
If you think they’re unaware, how might we best educate the public about these effects?
Ong’s cultural background plays a key role in shaping his values and ethics in The Gambler, while his American colleagues seem to operate under different norms.
What does the Virtue Ethics framework say about virtues in terms of cultural and social context?
How do cultural differences influence the ethical decisions people make when engaging with technology or media?
Can you identify examples where different cultural values lead to different behaviors in the digital space?
https://gaming.stackexchange.com/questions/399569/what-game-is-this-character-creation-screen-from
https://integratedethicslabs.org/labs/virtue-ethics/
Confucius (5th century BCE)
Aristotle (340 BCE)
According to virtue ethics
Learning objectives:
Discussion Guidelines:
Reminder: please ensure your reflections are personal to you, rather than the default outputs from an AI.
This week we read 3 things:
Deontology emphasizes the rightness or wrongness of an action by reference to certain action-guiding principles:
laws, rules, maxims, imperatives, or commands
In deontology, following these principles makes an action "ethical"
even in situations in which the consequences of an action are understood to be desirable or good!
i.e. no "ends justify the means"
Instead, your "means" must be based on duty, and there must be a choice
Deontology can seem deceptively simple, and there are several common misconceptions about it:
Only intention matters, not circumstances or consequences
Sometimes multiple duties conflict, so just pick one and stick to it blindly (see the sketch below)
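A minimal sketch of how duties might be written down as explicit rules, and how two duties can conflict for a single action. The duty names and the scenario below are hypothetical, invented for illustration, not taken from the readings:

```python
# Hypothetical sketch: encoding duties as explicit rules and checking one action.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    involves_lying: bool
    prevents_harm: bool

# Each "duty" is a rule that says whether the action satisfies it.
def duty_do_not_lie(action: Action) -> bool:
    return not action.involves_lying      # violated if the action involves lying

def duty_prevent_harm(action: Action) -> bool:
    return action.prevents_harm           # satisfied if the action prevents harm

duties = {
    "Do not lie": duty_do_not_lie,
    "Prevent harm to others": duty_prevent_harm,
}

# A classic conflict: lying to protect someone from harm.
action = Action("Lie to an attacker about a victim's location",
                involves_lying=True, prevents_harm=True)

verdicts = {name: rule(action) for name, rule in duties.items()}
print(verdicts)  # {'Do not lie': False, 'Prevent harm to others': True}

# Nothing in the rule set itself says which duty takes priority, which is
# exactly where naive "just follow the rules" thinking stalls.
```

Resolving that conflict takes judgment beyond mechanical rule-following, which is the tension the “Beyond Asimov” questions below are probing.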
1. In the proposed third law, what do the authors mean by the term "situated autonomy"?
2. In the video “Artificial Intelligence Explosion,” the team wrestles with the idea of “solving ethics” by creating a set of “correct, unambiguous rules.” Do you think the authors of the article “Beyond Asimov: The Three Laws of Responsible Robotics” have done this? Why or why not?
3. The alternative laws emphasize human responsibility in the design and deployment of robots, rather than robot-centered accountability. What are the potential challenges and benefits of placing this responsibility on humans rather than on the robots themselves?
4. In what ways could the alternative laws of responsible robotics presented in the article “Beyond Asimov: The Three Laws of Responsible Robotics” be applied to current or emerging technologies (e.g., autonomous vehicles, medical robots, etc.)? Can you think of any scenarios where these laws might be difficult to implement?
5. (bonus question, if you’re done early) In the video, Lucy says “following rules cannot produce understanding.” Do you agree with this? Does it matter whether artificial intelligence has "understanding"? What does "having understanding" even mean?
Returning to the first misconception: circumstances do matter. The task of deontology is to understand how best to honor one's duties within those circumstances
moral relativist:
Someone who believes that all moral judgments are based on individual viewpoints and that no one viewpoint ought to be privileged above any other—save that person’s own, because most moral relativists are critical of anyone who disagrees with their position on the matter (Midgley 1991).
When we say "law," the kind we follow from the government is only one kind of law; rules or duties can also come from other sources.
Different communities may have different ideas about which parts of human life must be guided by these rules vs. which parts are free of moral obligation
Immanuel Kant (18th-century Enlightenment)
Kant sez: human reason is the most important guide to making moral choices (as opposed to some deity)
Most of the time, our laws approximate some absolute moral law, or at least serve as a good reminder
In order for an action to have moral worth, it must be an action that you, the agent, recognize as right (not just blindly following the rules)
But just as importantly, it must be true for EVERYONE
A common way to test this is to ask: "If everyone did it, would it still be ethical?"
e.g. lying: if everyone lied whenever it was convenient, no one could trust anything anyone said, so a "rule" permitting lying cannot be universalized
Never treat others merely as a means, but always also as an end in themselves
Image from DALLE: "Here are the full-body images of Immanuel Kant as a robot, complete with his Prussian-inspired outfit and futuristic details. These should give you a better sense of his overall appearance." lol thanks DALLE
some say deontology is more interested in "what is right" than "what is good"
Burton, Emanuelle; Goldsmith, Judy; Mattei, Nicholas; Siler, Cory; Swiatek, Sara-Jo. Computing and Technology Ethics: Engaging through Science Fiction (p. 42). MIT Press.
The virtue ethics framework asks the question “what would a virtuous person do” to determine what is right.
A virtuous person, according to Aristotle, would have good underlying character traits that would cause them to make good choices.
The main benefit of this framework is that it is very flexible. It can be applied to a wide variety of decisions.
According to Oxford Languages, Artificial Intelligence is “the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”
Any time you’ve written a computer program that uses “if statements” you’re writing a program that makes decisions – that employs AI.
Your program basically consisted of a set of rules (an algorithm) for making the decision
Perhaps the decisions your programs have made in the past have not seemed like ethical decisions.
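A minimal sketch of that idea (the scenario, field names, and thresholds below are invented for illustration, not drawn from any reading): a handful of if statements is already a rule-based decision procedure, and the rules quietly encode value judgments.

```python
# Minimal sketch: an "if statement" decision rule for screening loan applicants.
# The fields and thresholds are hypothetical, chosen only to illustrate that a
# plain rule set already makes value-laden decisions.

def approve_loan(credit_score: int, annual_income: float, has_prior_default: bool) -> bool:
    """Return True if the applicant is approved under this rule set."""
    if has_prior_default:
        return False                      # one past default is disqualifying here
    if credit_score >= 700:
        return True                       # strong credit is sufficient on its own
    if credit_score >= 620 and annual_income >= 50_000:
        return True                       # moderate credit plus an income threshold
    return False

# Each branch is a rule; together they form the algorithm.
print(approve_loan(credit_score=680, annual_income=45_000, has_prior_default=False))  # False
```

Whether 620 or $50,000 are fair cutoffs, and whether a single past default should be disqualifying, are ethical choices baked into even this trivial program.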
This will be posted on Blackboard and I'll send a Discord announcement with this info!
Upgrading skills should be an ongoing process …
Proactive CARE: ethically on guard
https://www.stefanpaulgeorgi.com/blog/how-to-write-sales-copy-pleasure-vs-pain/
For example: Imagine that you work for a college preparatory program for high school students. You are in charge of awarding full scholarships to a small number of admitted students, on utilitarian principles
What parameters should be used to determine who should get the scholarships?
Do you select the students whose academic work is strongest?
Pick the ones who seem most likely to benefit from the prep course (worst prepared)?
Select the ones whose financial need is greatest?
Some combination of these?
Do you just award an equal amount to everyone?
Answering these questions requires us to assign values in order to decide
...
Values are subjective - different for each individual
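One way to picture what "assigning a value" means here is a weighted scoring sketch. The student data and the weights below are invented for illustration; the point is that choosing the weights is the subjective value judgment.

```python
# Hypothetical sketch: turning the scholarship question into a utility score.
# The students and weights are made up; the weights ARE the value judgment --
# academic strength vs. financial need vs. expected benefit from the program.

students = [
    {"name": "A", "gpa": 3.9, "financial_need": 0.2, "expected_benefit": 0.3},
    {"name": "B", "gpa": 3.1, "financial_need": 0.9, "expected_benefit": 0.8},
    {"name": "C", "gpa": 3.5, "financial_need": 0.6, "expected_benefit": 0.5},
]

# Subjective choice: how much each criterion counts toward "utility".
weights = {"gpa": 0.2, "financial_need": 0.4, "expected_benefit": 0.4}

def utility(s):
    return (weights["gpa"] * (s["gpa"] / 4.0)            # normalize GPA to 0..1
            + weights["financial_need"] * s["financial_need"]
            + weights["expected_benefit"] * s["expected_benefit"])

# Award the scholarship to whoever maximizes the score.
winner = max(students, key=utility)
print(winner["name"], round(utility(winner), 3))          # B 0.835

# Shift the weights toward academic strength and a different student wins.
weights = {"gpa": 0.9, "financial_need": 0.05, "expected_benefit": 0.05}
print(max(students, key=utility)["name"])                 # A
```

Swap the weights and a different student "maximizes utility" - the algorithm is only as neutral as the values fed into it.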
Traffic and surveillance cameras:
Who should be watched with traffic cameras? Who is being protected? What is the cost of protecting them, and who bears it? Cameras are becoming more common in public, private, and semiprivate workplaces and spaces. Cameras can potentially reduce the number of traffic accidents by catching more speeders (Vincent 2018); however, these same tools make it easier for governments, as well as other organizations and entities, to surveil.
How to rank different kinds of happiness?
Even when you consider one person in isolation, it is difficult to pin down what constitutes maximizing their happiness when you consider all the different kinds of things that might count.
For instance, consider the satisfaction of finally getting a challenging program to compile, run, and produce useful output; the joy of reading a clear and well-presented explanation; or the pleasure of playing a well-crafted and engaging game. Most utilitarians would agree that these are all ethically positive experiences, but how do we value them in comparison to each other? For example, how many hours of gaming is it worth to finish your final project? Can these two things be meaningfully compared at all?
On what timescale?
In the case of climate change, for example, its future effects could be drastic and damaging, but this knowledge rarely impacts individual, corporate, or societal decision making to a degree that is proportionate to the damage that we know it will do.
this means a model's programmer must know something about ethical dilemmas and also about what kinds of actions count as ethical
the programmers are limited by human perception and finite life experience