Deontological: This framework is based on a set of rules such as company rules, government rules, or religious rules such as the Ten Commandments. If the action in question adheres to the rules, it is considered to be the right thing to do in that context.
Utilitarian: There are many varieties of this approach, but the basic idea is that the right thing to do is the thing that brings the most utility, or happiness, to the most people. While the action may help some and hurt others, if the benefits to some outweigh the costs to others, the action is “the right thing.”
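(To preview the connection to code we'll make later: here's a rough, made-up Python sketch of each framework as a decision procedure. The rule set, the utility numbers, and the function names are all invented for illustration, not a real ethics library.)

```python
# Illustrative toy versions of the two frameworks as decision procedures.

RULES = {"do not lie", "do not steal"}  # hypothetical rule set (deontological)

def deontological_ok(rules_broken: set) -> bool:
    """Right iff the action breaks none of the rules, regardless of outcomes."""
    return len(RULES & rules_broken) == 0

def utilitarian_ok(benefits: list, harms: list) -> bool:
    """Right iff total benefit to everyone affected outweighs total harm."""
    return sum(benefits) > sum(harms)

# Example: lying to close a sale helps the seller a little, hurts the buyer more.
print(deontological_ok({"do not lie"}))   # False: a rule is broken, outcomes don't matter
print(utilitarian_ok([5.0], [2.0, 2.0]))  # True: benefit 5 outweighs total cost 4
```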
This week we read 3 things:
Deontology emphasizes the rightness or wrongness of an action by reference to certain action-guiding principles:
laws, rules, maxims, imperatives, or commands
In deontology, following these principles makes an action "ethical"
even in situations where an action that breaks them would have consequences understood to be desirable or good!
i.e. no "ends justify the means"
Instead, your "means" must be based on duty, and there must a choice
Deontology can seem deceptively simple, and there are several common misconceptions about it:
Only intention matters, not circumstances or consequences
Sometimes multiple duties conflict, so just pick one and stick to it blindly
In reality, the task of deontology is to understand how best to honor one's duties within those circumstances
1. In the proposed third law, what do the authors mean by the term "situated autonomy"?
2. In the video “Artificial Intelligence Explosion,” the team wrestles with the idea of “solving ethics” by creating a set of “correct, unambiguous rules.” Do you think the authors of the article “Beyond Asimov: The Three Laws of Responsible Robotics” have done this? Why or why not?
3. The alternative laws emphasize human responsibility in the design and deployment of robots, rather than robot-centered accountability. What are the potential challenges and benefits of placing this responsibility on humans rather than on the robots themselves?
4. In what ways could the alternative laws of responsible robotics presented in the article “Beyond Asimov: The Three Laws of Responsible Robotics” be applied to current or emerging technologies (e.g., autonomous vehicles, medical robots, etc.)? Can you think of any scenarios where these laws might be difficult to implement?
5. (bonus question, if you’re done early) In the video, Lucy says “following rules cannot produce understanding.” Do you agree with this? Does it matter whether artificial intelligence has "understanding"? What does "having understanding" even mean?
moral relativist:
Someone who believes that all moral judgments are based on individual viewpoints and that no one viewpoint ought to be privileged above any other—save that person’s own, because most moral relativists are critical of anyone who disagrees with their position on the matter (Midgley 1991).
When we say "law," the kind we follow from the government is only one kind of law. Some rules or duties can also be:
Different communities may have different ideas about which parts of human life must be guided by these rules vs. which parts are free of moral obligation
Immanuel Kant (18th-century Enlightenment)
Kant sez: human reason is the most important guide to making moral choices (as opposed to some deity)
Most of the time, our laws approximate some absolute moral law, or at least serve as a good reminder
In order for an action to have moral worth, it must be an action that you, the agent, recognize as right (not just blindly following the rules)
But just as importantly, it must be true for EVERYONE
A common way to test this is to ask: "If everyone did it, would it still be ethical?"
For example, lying fails this test: if everyone lied whenever it suited them, no one would trust anyone's word, and lying itself would stop working.
Never treat others merely as a means, but always also as an end
Image from DALLE: "Here are the full-body images of Immanuel Kant as a robot, complete with his Prussian-inspired outfit and futuristic details. These should give you a better sense of his overall appearance." lol thanks DALLE
some say deontology is more interested in "what is right" than "what is good"
Burton, Emanuelle; Goldsmith, Judy; Mattei, Nicholas; Siler, Cory; Swiatek, Sara-Jo. Computing and Technology Ethics: Engaging through Science Fiction (p. 42). MIT Press.
The virtue ethics framework asks the question “what would a virtuous person do” to determine what is right.
A virtuous person, according to Aristotle, would have good underlying character traits that would cause them to make good choices.
The main benefit of this framework is that it is very flexible. It can be applied to a wide variety of decisions.
According to Oxford Languages, Artificial Intelligence is “the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”
Any time you’ve written a computer program that uses “if statements,” you’ve written a program that makes decisions – one that employs AI.
Your program basically consisted of a set of rules (an algorithm) for making the decision
Perhaps the decisions your programs have made in the past have not seemed like ethical decisions.
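Here's a tiny, hypothetical example of that idea in Python. The scenario, rules, and cutoffs are all made up; the point is just that a handful of if statements is already a set of rules (an algorithm) deciding something that affects a person.

```python
# A made-up rule-based decision built entirely from "if statements".

def approve_loan(income: float, debt: float, missed_payments: int) -> bool:
    """Toy decision rule; the cutoffs are invented for illustration."""
    if missed_payments > 2:
        return False          # rule 1: too many missed payments
    if debt > income * 0.5:
        return False          # rule 2: debt exceeds half of income
    return True               # no rule violated, so approve

# The program is deciding something that affects a person,
# using rules (an algorithm) that a human chose.
print(approve_loan(income=50_000, debt=10_000, missed_payments=0))  # True
print(approve_loan(income=50_000, debt=30_000, missed_payments=1))  # False
```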
This will be posted on Blackboard and I'll send a Discord announcement with this info!