CMSC 304

Social and Ethical Issues in Information Technology

Ethics Frameworks

Today's Agenda

  1. Some slides on "Intro to Ethics Theories"
  2. Discussion of the Readings

Why Ethics + Technology?

  • Ethical issues in computing fields can seem like new problems because they deal with technologies that have never existed before!

 

  • But if we extract and abstract the patterns and types of questions asked in tech ethics, we'll see that humans have been grappling with these questions for all of recorded history
    • Ancient Greeks argued over ideal values and the most effective methods of education
      • The same issues roboticists face today when contemplating the value and risks of artificial intelligence (AI)
    • Scholars in fifteenth-century Europe debated how the printing press might democratize (or destroy) established bodies of knowledge
      • The same issues that plague social media platforms like Facebook and Twitter today around connection, fake news, and advertising
  • Today's versions of these old questions: intelligent robots, big data, social media

Burton, Emanuelle; Goldsmith, Judy; Mattei, Nicholas; Siler, Cory; Swiatek, Sara-Jo. Computing and Technology Ethics: Engaging through Science Fiction (pp. 8-9). MIT Press.

https://www.britannica.com/technology/printing-press

https://news.colgate.edu/magazine/2020/05/01/how-to-survive-in-ancient-greece/

Intro to Ethics Theories

  • Conversations about ethics can be extremely difficult
    • we don't all agree about what is "good"
    • we don't all agree about how to achieve "good"
    • we struggle to pinpoint reasons why we don't agree on these things
  • Our ideas about "good vs bad" are often rooted in deeply held personal beliefs, rather than objective facts
    • It can be hard to imagine how a decent person could feel so differently about something so innate to us
    • This results in arguments that use the phrase "that's unethical" incorrectly, as a substitute for "I disagree"

https://en.wikipedia.org/wiki/Woman_yelling_at_a_cat#/media/File:WomanYellingAtACat_meme.jpg

Types of thinking in ethics

  • consider a question from multiple perspectives
  • see these perspectives as rooted in specific historical contexts
  • weigh evidence to determine which of several positions is more plausible
  • tolerate the ambiguity and uncertainty that generally surrounds the interpretation of events

Basic Questions in Ethics

  • Over the years, humanity has devised different systems of thought to help work through difficult ethical problems
  • Studying these ethical frameworks will give you:
    • Different ways of asking questions about tough ethical challenges, different ways of answering those questions
    • Understanding of things you already perceive, or could potentially perceive, from a fresh perspective
  • Often, there is no single best answer. In most cases, every possible choice has trade-offs

  • Three types of problems:

    1. limited resources

    2. competing types of goals

    3. different ideas about what is good

Burton, Emanuelle; Goldsmith, Judy; Mattei, Nicholas; Siler, Cory; Swiatek, Sara-Jo. Computing and Technology Ethics: Engaging through Science Fiction (pp. 8-9). MIT Press.

Basic Questions in Ethics

  • Three types of problems:

    1. limited resources

      • What should be done when the demand for something >> how much of it is available?

      • These issues seem easily solved by using technology to "balance the scales"

        • e.g. artificial hearts to replace the failing ones, more fuel-efficient cars so that oil is less in demand, or high-yield crops that can feed more people

        • Often this does not solve the problem completely, because additional costs are introduced with that solution

    2. competing types of goals

    3. different ideas about what is good

Basic Questions in Ethics

  • Three types of problems:

    1. limited resources

    2. competing types of goals

      • multiple types of things or goals come into conflict

      • more than one way to achieve "good" 

        • e.g. you get two job offers, both of which are good, but you need to choose one

    3. different ideas about what is good

Basic Questions in Ethics

  • Three types of problems:

    1. limited resources

    2. competing types of goals

    3. different ideas about what is good

      • sometimes people share the same goals but disagree on how to get there

        • e.g. is nuclear power a good option for the environment or a terrible one? Two environmental advocates with same overall goal (save the planet) can come to different conclusions, disagreeing on how to assess environmental well-being, or on what policies or practices should be changed

      • these disagreements are often invisible even to the people who are having the argument - they interpret that goal of "save the planet" to mean different things

Why Ethics?

  • Ethics is the theory and practice of ways to make good choices and lead a good life

 

  • It involves both knowledge and skills (i.e. practice)

 

  • The Computing field is particularly prone to ethical issues
    • technology increasingly creates the conditions in which human beings live their lives
    • the choices and actions of technology creators have a meaningful impact on the wider world
    • and those choices and actions are based on how you answer the basic philosophical question of “what is worth having or doing?”

Why Ethics?

  • The Computing field is particularly prone to ethical issues
    • technology now determines how we live, how we spend our time
    • the choices and actions of technology creators have a meaningful impact on the wider world
    • and those choices and actions are based on how you answer the basic philosophical question of “what is worth having or doing?”
      • you can do more good in the world by understanding what other people value and why, instead of trying to make them conform to your own assumptions​
      • although human values can be influenced, they are complex and deeply embedded
        • we are often unaware of how our values impact our decisions

3 Ethical Frameworks

  • Deontological: This framework is based on a set of rules such as company rules, government rules, or religious rules such as the Ten Commandments. If the action in question adheres to the rules, it is considered to be the right thing to do in that context.

  • Utilitarian: There are many varieties of this approach, but the basic idea is that the right thing to do is the thing that brings the most utility, or happiness, to the most people. While the action may help some and hurt others, if the benefits to some outweigh the costs to others, the action is “the right thing.”

  • Virtue Ethics takes a completely different approach—it examines the background of our moral decisions and actions. For example, it asks how a person who is generous will see a situation and its moral demands differently from someone who is greedy. The “right thing to do” is the thing that would be chosen by the virtuous person.

Outside Ethical Frameworks

Moral intuition: an unreasoned reaction (not unreasonable)

  • Our “intuitive awareness of value”
  • Intrinsically motivating
  • Retrainable; fallible; unavoidable
  • We're not the only species with a sense of justice

Discussion Guidelines Reminder

CMSC 304

Social and Ethical Issues in Information Technology

Utilitarianism

Virtue Ethics Discussion

2 Rounds of Discussion + Report Out

  • Groups must be 3 people minimum, 4 maximum
  • Each person should take some individual notes during the discussion
  • Choose a scribe to write down the group's summary for each question
    • For each question in a round, choose a different scribe
  • The goal is NOT to gain consensus, but rather to appreciate and consider different viewpoints + examples
  • Feel free to reference the readings while discussing
    • Reference, not "read for the first time"
  • When the round ends, I'll ask the groups to volunteer a summary of their discussion
    • Anyone from the group can answer, doesn't have to just be the scribe
  • I'll then ask if anyone disagrees, or has a counterpoint, or if you had a different interpretation
  • Each group that participates will get some points
  • Please raise your hand if you need clarification or have a question, or need some inspiration! This is not a quiz or test
    • No ChatGPT or other AI during discussions

Week 3 Discussion: Virtue Ethics

This week we read 3 things:

CMSC 304

Social and Ethical Issues in Information Technology

Round 1

1. From your observations, how do tech designers and companies currently incorporate (or not incorporate) moral virtues, such as honesty, justice, or empathy, into their design processes? Can you provide examples of technologies where these virtues seem to be evident?

2. Both “Designing a Future Worth Wanting” and “The Gambler” suggest that speed is a reason virtue ethics is a better framework for technology ethics.

    a. How does the article argue this point?

    b. How does The Gambler reveal this point?

3. How do users typically respond when they feel a technology detracts from their moral or ethical values? Have you noticed any patterns in how individuals or communities push back against technology that is seen as harmful or unethical?

CMSC 304

Social and Ethical Issues in Information Technology

Round 2

  1. In The Gambler, the “maelstrom” represents a powerful force shaping media content based on public interest and social pressures. 

    • How does this reflect what you’ve observed about how social media platforms and algorithms shape user behavior and decision-making?

    • Are people aware of how much social pressure influences the content they consume and create?

    • If you think they’re unaware, what is the best way to educate the public about these effects?

  2. Ong’s cultural background plays a key role in shaping his values and ethics in The Gambler, while his American colleagues seem to operate under different norms.

    • What does the Virtue Ethics framework say about virtues in terms of cultural and social context?

    • How do cultural differences influence the ethical decisions people make when engaging with technology or media?

    • Can you identify examples where different cultural values lead to different behaviors in the digital space?

CMSC 304

Social and Ethical Issues in Information Technology

Virtue Ethics

Ethics Frameworks

Virtue Ethics

  • How to live "the good life"?
    • Ethics is a subset of what it means to live a full and happy life
  • A good society is good because it makes it possible for people to flourish and develop human excellence
  • Virtues are the basic building blocks of human character: fundamental qualities like kindness, generosity, or self-respect.
    • according to virtue ethics, virtues aren’t innate; they’re capacities that we exercise and develop through practice

Virtue Ethics

  • Virtue ethics thinks about human character in terms of some common library of capacities
    • ​Think of it like character creation in video games

https://gaming.stackexchange.com/questions/399569/what-game-is-this-character-creation-screen-from

  • According to virtue ethics, a chronic liar would only be considered deficient in honesty, rather than lacking honesty entirely (i.e. you can still put skill points into that attribute)
  • Virtues are revealed not through single actions but through patterns of action (gaining EXP)
  • "habit of virtue"

Virtue Ethics

  • Virtues are NOT the same as personality
  • For example, being an optimistic person or an extrovert, or having leadership qualities or an aptitude for higher math, might often be beneficial traits to have
    • But these are not necessarily moral traits
    • they can be used to do bad actions, accomplish bad ends, or lead to personal corruption

https://integratedethicslabs.org/labs/virtue-ethics/

Main Guy #1 of Virtue Ethics

Confucius (5th century BCE)

  • The predominant form of virtue ethics in East Asia comes from the teachings of the ancient Chinese philosopher Confucius
  • In the Confucian tradition, our interpersonal bonds are a fundamental part of what makes us people
  • An important tool for cultivating virtues in Confucianism is the mindful exercise of social rituals
    • people might be coerced to behave well through the threat of punishment, but...
    • a system of meaningful rituals would more gently promote flourishing as well as order

Main Guy #2 of Virtue Ethics

Aristotle (340 BCE)

  • Aristotle also emphasizes what a human being needs to do in order to achieve deep happiness and satisfaction with life—to flourish
  • Aristotle sez we need particular qualities of character both to help us recognize:
    • what makes us happy
    • how to pursue happiness effectively
  • Some example Aristotelian virtues: courage, generosity, friendliness, temperance, concern for justice
  • In order to flourish, one must also have a well-developed sense of practical wisdom
    • practical wisdom = how best to act in a given situation?
    • practical wisdom can only be gained by experience

What about vice?

  • In virtue ethics, basic physical appetites like hunger and fear are a basic part of human nature
    • This means that they are fundamentally good for humans, and necessary for us to flourish
    • Suppressing the appetites entirely is just as damaging to a person as allowing them to expand out of control
  • Instead, learn to regulate appetites and desires and exercise them in moderation
    • With practice, this will come naturally, forming a habit
  • ​Because virtue ethics takes a character-based approach to the world, it cannot and does not aim to offer principles or formulae
    • Why might this be particularly suitable for technology?

Virtue Ethics

  • Most people never become perfect in any virtue, let alone in all of them. For this reason, it is better to think of virtue as a spectrum rather than as a binary.
  • Being good at designing webpages does not make you generous, but it does make your generosity more effective in this situation.
  • Because virtue ethics takes a character-based approach to the world, it cannot and does not aim to offer principles or formulae that can be equally well applied by any person.
  • Practical wisdom can be developed only through experience.
    • the definition of flourishing changes over time and with different cultures
  • virtue ethics is organized around the idea of moderation: finding the middle path, the golden mean. However, it’s not the mathematical center; it will vary between situations

Limitations of virtue ethics

  • Focused on local norms, which means it's not universal and can change
    • also means it lacks a universal principle of justice
  • Virtue ethics has also been criticized as inaccessible for people with some disabilities or neurodivergences
    • Requires strong capacities for emotionally engaged, social, and intuitive decision making
    • Those capacities don’t come as naturally to many people on the autism spectrum, a population particularly heavily represented in engineering

What are our moral obligations?

According to virtue ethics

  • Make a consistent and conscious effort to develop our moral character for the better
    • As Confucius says, the real ethical failing is not having faults, "but rather failing to amend them."
  • We should find standards of conduct to follow within our own societies
    • specifically, in those special persons who are exemplary human beings with qualities of character (virtues) to which we should aspire
  •  We should practice lifelong cultivation of practical wisdom (good moral judgment)
    • practice discernment in different situations
    • virtuous persons flourish by acting justly with others
    • virtuous persons contribute to the common good by providing a moral example for others to admire and follow
  • To answer the question of "what should I do in this scenario", ask "what would a virtuous person do?" and then do that thing

Virtue Ethics In-class Activity #2

Learning objectives:

  • Further understand how the virtue ethics framework works
  • Practice applying the virtue ethics framework to scenarios
  • Learn to articulate which virtues might shape decisions and actions using the virtue ethics framework
  • Consider how virtues are developed as positive character traits

Virtue Ethics In-class Activity #2

  • Ethics is the study of what it means to ‘do the right thing.’ (Sara Baase, A Gift of Fire)...But how do we know what is “the right thing?”
  • What general ethical obligations are software engineers under, to “do the right thing” beyond their distinctive professional obligations?
  • Software engineers, in addition to their special professional obligations to the public, also have the same ethical obligations to their fellow human beings that we all share. What might those obligations be, and how should they be evaluated alongside our professional obligations?

Virtue Ethics In-class Activity #2

Discussion Guidelines:

  • Understand that your words have effects on others. Speak with care. If you learn that something you have said was experienced as disrespectful or marginalizing, listen carefully and try to understand that perspective. Learn how you can do better in the future.
  • Understand that others will come to these discussions with different experiences from yours. Be careful about assumptions and generalizations you make based only on your experience. Be open to hearing and learning from other perspectives.

Report Out

Wrap Up

  • We are not born with virtues, we develop them over a lifetime through practice, imitating role models, and reflecting on how the world around us forms our character.
  • Virtue ethicists suggest a number of ways to cultivate virtue and good character. Before I share some of them, what ideas do you have?

Reflection Journal

Reminder: please ensure your reflections are personal to you, rather than the default outputs from an AI.

CMSC 304

Social and Ethical Issues in Information Technology

Deontology

Ethics Frameworks

Quiz Review

3 Ethical Frameworks

  • Deontological: This framework is based on a set of rules such as company rules, government rules, or religious rules such as the Ten Commandments. If the action in question adheres to the rules, it is considered to be the right thing to do in that context.

  • Utilitarian: There are many varieties of this approach, but the basic idea is that the right thing to do is the thing that brings the most utility, or happiness, to the most people. While the action may help some and hurt others, if the benefits to some outweigh the costs to others, the action is “the right thing.”

  • Virtue Ethics examines the background of our moral decisions and actions. For example, it asks how a person who is generous will see a situation and its moral demands differently from someone who is greedy. The “right thing to do” is the thing that would be chosen by the virtuous person.

2 Rounds of Discussion + Report Out

  • Groups must be 3 people minimum, 4 maximum
  • Each person should take some individual notes during the discussion
  • Choose a scribe to write down the group's summary for each question
    • For each question in a round, choose a different scribe
  • The goal is NOT to gain consensus, but rather to appreciate and consider different viewpoints + examples
  • Feel free to reference the readings while discussing
    • Reference, not "read for the first time"
  • When the round ends, I'll ask the groups to volunteer a summary of their discussion
    • Anyone from the group can answer, doesn't have to just be the scribe
  • I'll then ask if anyone disagrees, or has a counterpoint, or if you had a different interpretation
  • Each group that participates will get some points
  • Please raise your hand if you need clarification or have a question, or need some inspiration! This is not a quiz or test
    • No ChatGPT or other AI during discussions

Week 4 Discussion: Deontology

This week we read 3 things:

Deontology Highlights

  • Deontology emphasizes the rightness or wrongness of an action by reference to certain action-guiding principles:

    • laws, rules, maxims, imperatives, or commands

  • In deontology, following these principles is what makes an action "ethical"

    • even in situations in which the consequences of breaking them are understood to be desirable or good!

    • i.e. no "ends justify the means"

  • Instead, your "means" must be based on duty, and there must be a choice

  • Deontology can seem deceptively simple, and has several misconceptions:

    • It's black and white: either you follow the law (ethical) or you don't (unethical)
    • Only intention matters, not circumstances or consequences

    • Sometimes multiple duties conflict, so just pick one and stick to it blindly

CMSC 304

Social and Ethical Issues in Information Technology

Round 1

1. In the proposed third law, what do the authors mean by the term "situated autonomy"?

2. In the video “Artificial Intelligence Explosion,” the team wrestles with the idea of “solving ethics” by creating a set of “correct, unambiguous rules.” Do you think the authors of the article “Beyond Asimov: The Three Laws of Responsible Robotics” have done this? Why or why not?  

3. The alternative laws emphasize human responsibility in the design and deployment of robots, rather than robot-centered accountability. What are the potential challenges and benefits of placing this responsibility on humans rather than on the robots themselves?

4. In what ways could the alternative laws of responsible robotics presented in the article “Beyond Asimov: The Three Laws of Responsible Robotics” be applied to current or emerging technologies (e.g., autonomous vehicles, medical robots, etc.)? Can you think of any scenarios where these laws might be difficult to implement?

5. (bonus question, if you’re done early) In the video, Lucy says “following rules cannot produce understanding.” Do you agree with this? Does it matter whether artificial intelligence has "understanding"? What does "having understanding" even mean?

CMSC 304

Social and Ethical Issues in Information Technology



Deontology Highlights

  • Deontology can seem deceptively simple, and has several misconceptions:

    • It's black and white: either you follow the law (ethical) or you don't (unethical)
      • Not all laws are equally important: e.g. jaywalking vs. murder
    • Only intention matters, not circumstances or consequences

      • Task of deontology is to understand how best to honor one’s duties within those circumstances

    • Sometimes multiple duties conflict, so just pick one and stick to it blindly
      • This type of thinking is called moral relativism, and is generally frowned upon

moral relativist: 

Someone who believes that all moral judgments are based on individual viewpoints and that no one viewpoint ought to be privileged above any other—save that person’s own, because most moral relativists are critical of anyone who disagrees with their position on the matter (Midgley 1991).


When we say "law," the kind we follow from the government is only one kind of law. Some rules or duties can also be:

  • community based
  • religious
  • professional

Different communities may have different ideas about which parts of human life must be guided by these rules vs. which parts are free of moral obligation

Main Guy of Deontology

Immanuel Kant (18th-century Enlightenment)

  • Kant sez: human reason is the most important guide to making moral choices (as opposed to some deity)

  • Most of the time, our laws approximate some absolute moral law, or at least serve as a good reminder

  • In order for an action to have moral worth, it must be an action that you, the agent, recognize as right (not just blindly following the rules)

    • But just as importantly, it must be true for EVERYONE

  • A common way to test this is to ask: "If everyone did it, would it still be ethical?"

    • ​use lying as an example

  • Never treat others merely as a means, but always also as ends in themselves

Image from DALLE: "Here are the full-body images of Immanuel Kant as a robot, complete with his Prussian-inspired outfit and futuristic details. These should give you a better sense of his overall appearance." lol thanks DALLE

Deontology in Practice

  • What to do when several duties conflict?
    • For example, you have to choose between saving a stranger and saving your child
    • You have a duty to try and save both, so which to choose?
  • Answer by asking:
    • which is a more fundamental duty to your role in the world? 
      • some would argue that duty to one's child >> duty to a rando
      • what if you are a firefighter on duty, and your role is public servant?
    • which is the more relevant duty?
      • if instead you could either save your kid or stop a terrorist, which to choose?
      • is it your job as a civilian to stop terrorists? how likely is it that an amateur could stop them?

some say deontology is more interested in "what is right" than "what is good"

Deontology: Best and Worst

  • At its worst, deontological thinking can enable fanaticism, leading people to believe they are justified in punishing others for doing things they take to be wrong
  • At its best, deontology can afford people the moral courage to stand against the majority, even when there is no obvious reward for doing so
  • Common criticisms
    • not enough focus on consequences: I may have good intentions, but still f*ck it up
    • too heavy focus on authority
    • overly righteous and impractical: "an action has no moral worth if it is not done out of duty"
      • what about joy? isn't it good to seek happiness in life?
      • does any emotion matter at all? Kant sez no

Deontology and Artificial Intelligence

  • In order for responsibility to exist there must be a choice, and the agent must have the capacity to choose differently.
  • Defining deontology becomes complicated when we start thinking about automated agents, which are programmed to do certain tasks and which learn new things according to programs that have been written by humans.
  • Are automated entities exercising responsibility in the deontological sense?
  • Do we want them to?

Burton, Emanuelle; Goldsmith, Judy; Mattei, Nicholas; Siler, Cory; Swiatek, Sara-Jo. Computing and Technology Ethics: Engaging through Science Fiction (p. 42). MIT Press.

Activity: Deontology

  • The virtue ethics framework asks the question “what would a virtuous person do” to determine what is right.

  • A virtuous person, according to Aristotle, would have good underlying character traits that would cause them to make good choices.

  • The main benefit of this framework is that it is very flexible. It can be applied to a wide variety of decisions.

  • What are some challenges of using the virtue ethics framework?

Activity: Deontology

  • What are some challenges of using the virtue ethics framework?
  • Instead, might it be easier to develop a set of rules to follow that will always produce the ethical action?
  • Using sets of rules can be intuitive for computer science and Artificial Intelligence, as they often use algorithms
  • An algorithm is defined as:
    • A set of rules and procedures that leads to a decision
    • Could be machine learning, or simply control flow logic (rule-based if-then-else)

Activity: Deontology

  • According to Oxford Languages, Artificial Intelligence is “the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”

  • Any time you’ve written a computer program that uses “if statements” you’re writing a program that makes decisions – that employs AI.

    • Your program basically consisted of a set of rules (an algorithm) for making the decision

    • Perhaps the decisions your programs have made in the past have not seemed like ethical decisions.
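The rule-based decision-making described above can be sketched as a tiny deontological "ethics check." This is a hypothetical illustration: the rules and the fields of the `action` dictionary are invented for the example, not taken from any real system or reading.

```python
# A minimal sketch of rule-based (deontological) decision logic.
# The rules below are hypothetical, purely for illustration.

def is_permissible(action: dict) -> bool:
    """Return True only if the action violates none of our hard rules."""
    # Rule 1: never deceive the user.
    if action.get("deceives_user"):
        return False
    # Rule 2: never share personal data without consent.
    if action.get("shares_data") and not action.get("has_consent"):
        return False
    # No rule violated: the action is permitted, regardless of outcomes.
    return True

# Usage: evaluate two candidate actions.
print(is_permissible({"deceives_user": False, "shares_data": False}))  # True
print(is_permissible({"shares_data": True, "has_consent": False}))     # False
```

Note how the check ignores consequences entirely: an action that breaks a rule is rejected even if it might produce a good outcome, which is exactly the "no ends justify the means" stance of deontology.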

A primer on the readings for next week

  • As you read, keep this ancient wisdom in your mind: "Go touch grass"
  • Also as you read these stories, consider that they could make great talking points in an interview, advertising that you are a well-rounded and thoughtful individual, beyond leetcode and grinding

Upcoming Schedule

  • Next week: Utilitarianism with our usual format
  • Week 6: Begin the writing!
    • You'll have readings for that Monday on some writing topics
    • We'll have a guest lecture + activity from the library on how to do research for papers, citing, and other notes on academic research
  • Week 7: Writing week #1!
    • First paper will be due
    • No readings so you can focus on your paper
    • We'll do an in-class peer review activity after you submit it
  • Week 8: All about data + ethics
    • Very light reading because you'll be revising your paper, which means:
      • I'll do a slightly longer lecture than usual on Monday
      • Read the article by Wednesday
  • Week 9: Back to our usual format

This will be posted on Blackboard and I'll send a Discord announcement with this info!

CMSC 304

Social and Ethical Issues in Information Technology

ACM Code of Ethics

Codes of Ethics


Aspects of Computing Ethics

  • Ethics applies to any behavior with a positive or negative impact on society, its citizens, or the environment (cultural and natural)
  • Ethical decision-making requires the ability to imagine the effects of an action/feature/app
  • Professional computing ethics is:
    • Any behavior of computing professionals during the design, development, construction, and maintenance of computing artifacts that affects other people.
  • Professional competence also requires skill in:
    • communication
    • reflective analysis
    • recognizing and navigating ethical challenges

Upgrading skills should be an ongoing process …

  • First is an immediate, intuitive reaction
  • Second is a slower, more conscious reaction that uses more cognitive attention and energy
    • Consider stakeholders and ethical elements.
    • Analyze the impact
    • Review responsibilities and alternative approaches
    • Evaluate the trade-offs (pros and cons)

 

  • Our work affects others. The interaction should not be haphazard.

Anatomy of an Ethical Decision

Proactive Ethics

Proactive CARE: ethically on guard

 

  • Consider broadly who is affected.
    • Whose behavior and work process will be affected?
    • Whose circumstances or job will be affected?
    • Whose experiences will be affected?
    • Consider a range of plausible alternatives addressing different stakeholder needs and impacts.
    • Who is needed to pursue these alternatives?

Proactive Ethics

Proactive CARE: ethically on guard

 

  • Analyze obligations to and rights of stakeholders
    • How do alternative solutions meet functional requirements and ethical obligations?
    • Review the Code to help identify stakeholder rights
    • What technical facts are most relevant to your system?
    • What Principles of the Code are most relevant?
    • What personal, institutional, or legal values should be considered?

Proactive Ethics

Proactive CARE: ethically on guard

 

  • Review potential actions that might make a difference
    • What responsibilities, authority, practices, or policies seem to be most important in your analysis?
    • Are there creative alternatives to the options you’ve considered so far?
    • Apply the Code’s international professional values
    • Revisit the Consider and Analyze steps

Proactive Ethics

Proactive CARE: ethically on guard

 

  • Evaluate your work so far.
    • Which of the options considered seems to be the best?
    • What are the trade-offs?
    • Are there creative alternatives to the options you’ve considered so far?
    • Are there now other Principles in the Code that are more relevant to your deliberations about this action?
    • Monitor the decision.

In-Class Activity #2

CMSC 304

Social and Ethical Issues in Information Technology

Utilitarianism

Ethics Theories

Utilitarianism

  • Utilitarianism is organized around the idea of happiness
  • Similar to virtue ethics, utilitarianism assumes humans are motivated by the desire to be happy
  • To determine how we should act, we should first and foremost consider what kinds of actions bring about the most happiness for the greatest number of people
    • This is known as the principle of utility or the greatest happiness principle
  • Sounds great!
    • but what is happiness?
      • absence of pain? 
  • According to utilitarians, the outcome of maximum happiness is most important
    • Because of this, utilitarianism is a form of consequentialism
    • "the ends justify the means"

https://www.stefanpaulgeorgi.com/blog/how-to-write-sales-copy-pleasure-vs-pain/

Utilitarianism

  • By focusing on the greatest happiness for the greatest number of people, utilitarianism strives to be universal
    • it assumes that all individuals are of equal worth
    • no one person's happiness is greater than another's
  • Utilitarianism has broad appeal because of its apparent objectivity
    • however, there is no such thing as true objectivity
    • it's objective until you start trying to define key parameters
      • the idea is objective, the implementation is subjective

Utilitarianism

  • For example: Imagine that you work for a college preparatory program for high school students. You are in charge of awarding full scholarships to a small number of admitted students, on utilitarian principles

    • What parameters should be used to determine who should get the scholarships?

Do you select the students whose academic work is strongest?

Pick the ones who seem most likely to benefit from the prep course (worst prepared)?

Select the ones whose financial need is greatest? 

Some combination of these?

Do you just award an equal amount to everyone?
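The subjectivity hidden inside a "utilitarian" selection can be made concrete in code. In this hypothetical sketch, the same selection procedure produces different winners depending on which parameter the programmer chooses to maximize; all student data and field names are invented:

```python
# Hypothetical sketch: the same "utilitarian" scholarship selection gives
# a different winner depending on which parameter you choose to maximize.
# All student data below is invented.

students = [
    {"name": "Ana", "gpa": 3.9, "prep_gap": 0.2, "need": 0.5},
    {"name": "Raj", "gpa": 3.1, "prep_gap": 0.9, "need": 0.7},
    {"name": "Mei", "gpa": 3.5, "prep_gap": 0.4, "need": 0.95},
]

def pick(students, score, n=1):
    """Award scholarships to the top-n students under a chosen score."""
    return [s["name"] for s in sorted(students, key=score, reverse=True)[:n]]

print(pick(students, lambda s: s["gpa"]))       # ['Ana']  strongest academics
print(pick(students, lambda s: s["prep_gap"]))  # ['Raj']  most to gain
print(pick(students, lambda s: s["need"]))      # ['Mei']  greatest need
```

The procedure looks objective, but the choice of `score` function is a value judgment.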


  • benefit to each is smaller because the $ is split more ways
  • how to account for the fact that different high schools have different academic standards and opportunities?
  • is GPA really a measure of academic strength?
  • how to account for varying levels of financial hardship not reported in taxes?
  • what sort of measurement could even be used for this?
  • what about merit?

Answering these requires us to assign a value in order to decide

...

Values are subjective - different for each individual

3 Parameters in Utilitarianism

  1. Who comprises the group whose well-being is under consideration?
    • And who will pay the price to benefit this group?
  2. What is the value being used to define good/happiness/utility?
    • For example, in classical utilitarianism, pleasure is the most important factor in considering happiness
    • But what counts as pleasure? What if you take pleasure in others' pain?
    • What happens when different people experience different sorts of utility from the same kinds of actions?
  3. When is the measure of success being taken?
    • The calculation of consequences often looks different depending on whether you are looking at six weeks, six months, six years, or six decades
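These three parameters can be made explicit as inputs to a calculation. In this minimal sketch, the names `group`, `utility_of`, and `horizon` are hypothetical; the point is that each must be chosen, subjectively, before any "objective" utility sum can be computed:

```python
# Sketch: the three utilitarian parameters as explicit function arguments.
# 'group' = whose well-being counts, 'utility_of' = what counts as good,
# 'horizon' = the timescale. Each is a value judgment the caller must make.

def total_utility(group, utility_of, horizon):
    """Sum each member's utility at each time step up to the horizon."""
    return sum(utility_of(person, t) for person in group for t in range(horizon))

# Toy example: two people whose per-step utility is constant over time
people = ["ada", "ben"]
per_step = {"ada": 2, "ben": 3}
u = total_utility(people, lambda p, t: per_step[p], horizon=4)
print(u)  # (2 + 3) * 4 = 20
```

Change who is in `group`, how `utility_of` scores outcomes, or how far `horizon` reaches, and the "greatest happiness" answer changes with it.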

Traffic and surveillance cameras:

Who should be watched with traffic cameras? Who is being protected? What is the cost of protecting them, and who bears it? Cameras are becoming more common in public, private, and semiprivate workplaces and spaces. Cameras can potentially reduce the number of traffic accidents by catching more speeders (Vincent 2018); however, these same tools make it easier for governments, as well as other organizations and entities, to surveil.

How to rank different kinds of happiness?

Even when you consider one person in isolation, it is difficult to pin down what constitutes maximizing their happiness when you consider all the different kinds of things that might count.

 

For instance, consider the satisfaction of finally getting a challenging program to compile, run, and produce useful output; the joy of reading a clear and well-presented explanation; or the pleasure of playing a well-crafted and engaging game. Most utilitarians would agree that these are all ethically positive experiences, but how do we value them in comparison to each other? For example, how many hours of gaming is it worth to finish your final project? Can these two things be meaningfully compared at all?

On what timescale?

In the case of climate change, for example, its future effects could be drastic and damaging, but this knowledge rarely impacts individual, corporate, or societal decision making to a degree that is proportionate to the damage that we know it will do.

Utilitarianism and Artificial Intelligence

  • Can a machine be taught to make decisions as a utilitarian?
  • Decision-making AI relies on mathematical models
    • models are developed using example cases
    • this means a model's programmer must know something about ethical dilemmas and also about what kinds of actions count as ethical
    • the programmers are limited by human perception and finite life experience
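A machine acting "as a utilitarian" can be sketched very simply: score each candidate action by the sum of utilities to each stakeholder and pick the maximum. The actions, stakeholders, and numbers below are invented; assigning those numbers is precisely the subjective, human step the slide points to:

```python
# Minimal sketch of a "utilitarian" decision rule for a machine: score each
# candidate action by the sum of programmer-supplied stakeholder utilities,
# then pick the action with the highest total. The utilities are invented;
# in practice, choosing them is the subjective step.

def choose_action(actions, utilities):
    """utilities[action][stakeholder] -> numeric utility."""
    return max(actions, key=lambda a: sum(utilities[a].values()))

utilities = {
    "deploy_feature": {"users": 5, "company": 3, "bystanders": -1},
    "delay_release":  {"users": 1, "company": -2, "bystanders": 2},
}
print(choose_action(utilities.keys(), utilities))  # deploy_feature (7 vs 1)
```

The arithmetic is trivial; everything contentious lives in the utility table the programmer wrote.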

Putting it all together

Ethics Theories

By Rebecca Williams
