CMSC 304

Social and Ethical Issues in Information Technology

Course Intro

Regarding the readings

  • We'll be reading all kinds of stuff!
    • textbook excerpts
    • science fiction
    • news articles
    • scientific papers
  • We'll watch some stuff too
    • documentaries
    • short videos about concepts
    • Watch Party!

Regarding the readings

  • We'll be reading all kinds of stuff!
    • textbook excerpts
      • I recommend reading these first: they're laid out nicely to frame the topic with a good flow and coverage of the information
    • science fiction
    • news articles
    • scientific papers

Regarding the readings

  • We'll be reading all kinds of stuff!
    • textbook excerpts
    • science fiction
      • A story isn’t reducible to just the plot / ideas
      • Reading sci-fi in ethics classes can make a lot of the topics more approachable, and allow us to suspend our own pre-conceived notions and focus on the world/character
      • As you read, really make an effort to understand what tensions are at play
        • These could be opposing interests, different priorities, or challenges that make it difficult to resolve an issue smoothly
        • How does the story's world cause / influence these?
      • Remember: stories don't always have answers. Don't treat them as "the solution" to ethics
    • news articles
    • scientific papers

Regarding the readings

  • We'll be reading all kinds of stuff!
    • textbook excerpts
    • science fiction
    • news articles
      • I recommend starting by asking "what is the main argument?"
      • Then, "how does the author either support, or refute that argument?"
      • What evidence does the article provide, if any?
      • What recommendations do they offer, if any?
    • scientific papers

Regarding the readings

  • We'll be reading all kinds of stuff!
    • textbook excerpts
    • science fiction
    • news articles
    • scientific papers
      • depending on the friendliness of the article, we might read just the introduction
      • if it's accessible to general readers, we might read the whole thing

Regarding the reflection journals

  • Each week, a prompt and link to your journal will be posted
  • Please keep writing in the same doc all semester! This cuts down on the number of random docs you have to keep track of and find links for.

Course Rhythm

Here is a recommended weekly rhythm:

  • Come to class with your hand-written notes and take the quiz on Monday at 1 pm
  • On Wednesday, start your Reflection response for this week
    • Review the readings from this week
    • Complete Reflection Journal by Friday
  • On Thursday, start the readings for next week
    • if we're in week n, start the readings for week n+1
    • so today, start readings for week 3
  • Complete next week's readings + notes by Sunday night
  • Come to class with your hand-written notes and take the quiz on Monday at 1 pm

Why Ethics?

  • Ethics is the theory and practice of ways to make good choices and lead a good life

 

  • It involves both knowledge and skills (i.e. practice)

 

What Ethics Isn't

  • The Same as Law/Compliance
  • A Set of Fixed Rules to Follow
  • A Purely Negative Frame
  • A Subjective Sense of Right
  • Non-moral Customs of Etiquette
  • Obedience to Authority / Unquestioning Loyalty

Why Ethics + Technology?

New technology has a huge effect on people and the planet

 

Anything that changes the world will have both positives and negatives; this balance of gains and losses is called a trade-off

  • tech has the potential to: make human lives safer, freer, healthier, more interconnected, more full of beauty, and less plagued by poverty

  • it also has the power to

    • dramatically increase inequality

    • enable people with malicious intentions to harm others

    • create conditions for people to make poor decisions

Burton, Emanuelle; Goldsmith, Judy; Mattei, Nicholas; Siler, Cory; Swiatek, Sara-Jo. Computing and Technology Ethics: Engaging through Science Fiction (p. 4). MIT Press.

  • social media
  • banking
  • war
  • healthcare
  • art, music, entertainment
  • education

Morals, Ethics, Law

Morals

  • Personal beliefs about right and wrong
  • Moral acts conform to what an individual believes to be the right thing to do

Ethics

  • Standards or codes of behavior expected of an individual by a group to which the individual belongs

Law

  • System of rules, enforced by a set of institutions, that tells us what we can and cannot do
  • Legal acts are acts that conform to the law

From: Reynolds, G. (2019) Ethics in Information Technology. Cengage Learning.

  • being ethical does not always = following the law

3 Main Levels of Ethics

Personal Ethics

  • What does it mean to you to be a good person?
  • How can you make decisions you can live with?

Professional Ethics

  • What are the standards of behavior that govern my professional role?
  • What regulates responsible conduct as a member of a profession?

Social + Political Ethics

  • How do we create rules and laws that facilitate well-being, cooperation, and human rights?
  • Governance: law, regulation, and policy.

profession definition [1]:

  • An occupation whose core element is work based upon the mastery of a complex body of knowledge and skills.
  • It is a vocation in which knowledge of some department of science or learning or the practice of an art founded upon it is used in the service of others. 
  • Its members are governed by codes of ethics and profess a commitment to competence, integrity and morality, altruism, and the promotion of the public good within their domain.
  • These commitments form the basis of a social contract between a profession and society...
  • Professions and their members are accountable to those served and to society.

Questions of governance:

  • liberty vs. equality
  • convenience vs. privacy
  • privacy vs. security/safety
  • efficiency vs. human autonomy
  • free speech vs. dignity vs. democracy 

Cruess SR, Johnston S, Cruess RL. "Profession": a working definition for medical educators. Teach Learn Med. 2004 Winter;16(1):74-6. doi: 10.1207/s15328015tlm1601_15. PMID: 14987179.

Is Computer Science a Profession?!

Many professionals "care" about ethics because those professions require a license to practice

  • Lawyers are admitted to the bar association
  • Doctors have to pass board exams
  • Civil/Mechanical/Electrical Engineers must take the P.E. (Professional Engineer) licensing exam

professional license:

An official authorization issued by a governmental or regulatory body that allows an individual to legally practice a specific profession. This licensing process ensures that practitioners meet the necessary qualifications and adhere to established standards of competence and ethical conduct. Professional licenses are often required in fields where specialized knowledge, skills, and public trust are critical.​

Licensing bodies maintain ethical standards for the profession

  • Violating ethical standards results in losing the license.
  • You can't practice if your license is revoked

B. C. Stahl, J. Timmermans, and B. D. Mittelstadt, “The Ethics of Computing: A Survey of the Computing-Oriented Literature,” ACM Comput. Surv., vol. 48, no. 4, pp. 1–38, May 2016, doi: 10.1145/2871196.

Is Computer Science a Profession?!

Many professionals "care" about ethics because those professions require a license to practice

  • Lawyers are admitted to the bar association
  • Doctors have to pass board exams
  • Civil/Mechanical/Electrical Engineers must take the P.E. (Professional Engineer) licensing exam

Licensing bodies maintain ethical standards for the profession

  • Violating ethical standards results in losing the license.
  • You can't practice if your license is revoked

Software Engineering is not a licensed profession!! 

B. C. Stahl, J. Timmermans, and B. D. Mittelstadt, “The Ethics of Computing: A Survey of the Computing-Oriented Literature,” ACM Comput. Surv., vol. 48, no. 4, pp. 1–38, May 2016, doi: 10.1145/2871196.

Is Computer Science a Profession?!

Many professionals "care" about ethics because those professions require a license to practice

  • Lawyers are admitted to the bar association
  • Doctors have to pass board exams
  • Civil/Mechanical/Electrical Engineers must take the P.E. (Professional Engineer) licensing exam

Licensing bodies maintain ethical standards for the profession

  • Violating ethical standards results in losing the license.
  • You can't practice if your license is revoked

Software Engineering is not a licensed profession!! 

...

Should it be?

B. C. Stahl, J. Timmermans, and B. D. Mittelstadt, “The Ethics of Computing: A Survey of the Computing-Oriented Literature,” ACM Comput. Surv., vol. 48, no. 4, pp. 1–38, May 2016, doi: 10.1145/2871196.


Is Computer Science a Profession?!

Why is it difficult to conceive of a Software Engineer professional license?

  • "Professionals" are often legally liable for their work product, e.g. medical and legal malpractice is a thing
  • Should the person who wrote the code be legally liable for the results?
    • What about the broader context of the company?
    • What about misuse by the user?
    • Is it possible to identify all possible bugs/edge cases? Is it practical?
  • Generally, a user gives up their rights to hold software accountable in the End User License Agreement (EULA) 
  • What even is the scope of knowledge that a SWE should know?
    • thermodynamics? chemistry? numerical methods? databases? algorithms? economics? 

Excerpt from the Windows 10 EULA:

 

Except for any repair, replacement, or refund that Microsoft, or the device manufacturer or installer, may provide, you may not under this limited warranty, under any other part of this agreement, or under any theory, recover any damages or other remedy, including lost profits or direct, consequential, special, indirect, or incidental damages.

Why Ethics?

  • Ethics is the theory and practice of ways to make good choices and lead a good life

 

  • It involves both knowledge and skills (i.e. practice)

 

  • The Computing field is particularly prone to ethical issues

Holding Software Accountable?

"Harnessing powerful technologies and hidden algorithms, Match intentionally designs the platforms with addictive, game-like design features, which lock users into a perpetually pay-to-play loop that prioritizes corporate profits over its marketing promises and customers' relationship goals," lawyers for the plaintiffs wrote in the suit.

"The question the lawsuit poses is: Does Match Group have to disclose the potentially addictive quality of such commonplace design features? And have the company's lack of warnings constituted a violation of consumer protection laws?"

"The legal action against Match Group joins a new crop of lawsuits challenging tech companies, including Google, Instagram owner Meta and TikTok, in an attempt to hold platforms accountable for exacerbating the youth mental health crisis."

We are still in the "Wild West" Phase of Tech

  • Regulations can't keep up with innovation
  • There are some tl;dr historical reasons that the government has been "hands off" about the Internet
  • We struggle to measure the effects of disruptive tech due to its iterative nature

Ethics are uniquely important in software engineering:

  • Shortened software lifecycles accelerate development but reduce oversight - who can keep up with all the new tech?
    • The ability to deploy code quickly is valued in many tech cultures; "Push or Perish"
    • Management and legal review of software can be weakened or absent when the features are changing daily, sometimes hourly
  • Limited oversight means bug fixes and some deployments may only receive technical, not ethical, review
  • Lack of geographic constraints means that coders may be culturally unfamiliar with their users

For example, people in many countries are notoriously sensitive to the representation of disputed border territories on maps. In 2010, an error in Google maps led to Nicaragua dispatching forces to its border with Costa Rica. Google then worked with US State Department officials to correct the error. Read more here.

Therac-25

The Therac-25, a machine that provided radiation therapy to cancer patients, had malfunctions that resulted in 6 deaths. Who is accountable when technology causes harm?

Barriers to accountability:

  • The problem of many hands
    • many groups of people (programmers, engineers, etc.) at various levels of a company involved in creation of a computer program
    • when something goes wrong, who is responsible?
  • Bugs in the system, computer as scapegoat
    • "but computers are stupid"
    • overconfidence in a product, unclear/ambiguous error messages, or improper testing of individual components of the system
  • Ownership without liability
    • ownership of proprietary software and an unwillingness to share “trade secrets” with investigators whose job it is to protect the public (Nissenbaum 1994).

https://ethicsunwrapped.utexas.edu/case-study/therac-25

Iran Air Flight 655

Iran Air Flight 655 was a civilian passenger flight that was shot down by the United States Navy guided-missile cruiser USS Vincennes on July 3, 1988.

 

At 10:17 AM local time, the USS Vincennes fired two SM-2MR surface-to-air missiles at the aircraft. One missile hit the plane, causing it to disintegrate and crash into the sea. All 290 passengers and crew on board were killed, including 66 children and 16 crew members.

Iran Air Flight 655

Iran Air Flight 655 was a civilian passenger flight that was shot down by the United States Navy guided-missile cruiser USS Vincennes on July 3, 1988.

 

At 10:17 AM local time, the USS Vincennes fired two SM-2MR surface-to-air missiles at the aircraft. One missile hit the plane, causing it to disintegrate and crash into the sea. All 290 passengers and crew on board were killed, including 66 children and 16 crew members.

AEGIS Combat System

Barriers to accountability:

  • The problem of many hands
    • many groups of people (programmers, engineers, etc.) at various levels of a company involved in creation of a computer program
    • when something goes wrong, who is responsible?
  • Bugs in the system, computer as scapegoat
    • "but computers are stupid"
    • overconfidence in a product, unclear/ambiguous error messages, or improper testing of individual components of the system
  • Ownership without liability
    • ownership of proprietary software and an unwillingness to share “trade secrets” with investigators whose job it is to protect the public (Nissenbaum 1994).

Iran Air Flight 655

"The naval officer responsible for authorizing a missile launch, the watch's Anti-Air Warfare Coordinator (AAWC), pushed wrong buttons no fewer than five times in response to a system message to select a weapon."

Iran Air Flight 655 was a civilian passenger flight that was shot down by the United States Navy guided-missile cruiser USS Vincennes on July 3, 1988.

 

At 10:17 AM local time, the USS Vincennes fired two SM-2MR surface-to-air missiles at the aircraft. One missile hit the plane, causing it to disintegrate and crash into the sea. All 290 passengers and crew on board were killed, including 66 children and 16 crew members.

Findings from the Incident Report:

  • Radar and Identification Systems:

    • The Aegis Combat System was designed to track multiple targets -- it misidentified the commercial Airbus A300 as an Iranian F-14 fighter jet
    • Radar operators failed to distinguish between the radar signatures of a civilian airliner and a military jet,
      • partly due to limitations of the identification systems
      • also due to the complexity of monitoring multiple signals in a congested area
  • Misinterpretation of Transponder Signals:

    • Flight 655 was transmitting signals on the civilian aviation transponder frequency, indicating it was a commercial flight. The Vincennes reportedly received conflicting transponder codes
    • The Vincennes crew claimed they saw the aircraft's altitude decreasing on radar, suggesting an attack profile, when it was actually climbing.

who is responsible for bad design?

Why do these things happen?

  • These things don't always happen because "coder so stupid"
  • It's only sometimes because "company so greedy"

 

  • Most of these situations arise from only focusing on the technical capabilities of a system

 

  • The missing step here is imagining what unintended consequences exist for your choices
    • or, what unintended consequences exist for your non-choices
  • step 1: design code to do thing
  • step 2: check that code does thing
  • step 3: done!

A common flawed premise

I am a good person

Evil is done by evil people

I don't need to worry about ethics

  • most of the time, the harms or consequences are unintended
  • making a bad choice does not = bad person

Aspects of Computing Ethics

To prevent harm, we need to imagine the unintended consequences and outcomes of our work before they occur

  • What types of harms can the public suffer from, as result of code and software?
  • How can coders and software engineers contribute to a good life for others?
  • Who exactly is the "public" to whom the engineer is obligated?
  • Why are computer scientists obligated to protect the public?
  • What other ethical obligations are software engineers under?
  • How can coders actually live up to ethical standards?
  • What is the end goal of an ethical life in software engineering?
  • What are the professional codes of software engineering ethics?

Algorithmic Decision-making

  • Even in the 80's and 90's, people had questions about how to hold software and algorithms accountable
  • We are now in an age where this is even harder, because most algorithms use a kind of machine learning that is hard to interpret ("black box")
  • These systems are becoming widespread at an extremely rapid pace
    • credit scores
    • job candidate screening, interviews
    • terrorism and threat prediction
    • automated weapons targeting systems (ATS)
  • The combination of human error and algorithmic error is fantastically complex to deconstruct

https://www.cambridge.org/core/journals/business-ethics-quarterly/article/why-a-right-to-an-explanation-of-algorithmic-decisionmaking-should-exist-a-trustbased-approach

Algorithmic Decision-making

  • What is an algorithm?
    • A set of rules and procedures that leads to a decision
    • Could be machine learning, or simply control flow logic (rule-based if-then-else)
  • What can algorithms do for us?
    • They improve accuracy and efficiency over human decision-making
    • Algorithms can make decisions faster and more consistently than humans
    • We can just create the model to be optimal, since it's just numbers (right?)
  • How can algorithms harm us?
    • "An algorithm is only as good as ..."
      • its training data
      • its designer
    • Widespread rapid adoption can encode bias, threaten fairness, and erode privacy, transparency, and due process
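To make the "rule-based if-then-else" point concrete, here is a hypothetical screening function. Every name, threshold, and zip code in it is invented for illustration; it is a sketch, not any real lender's system:

```python
# A hypothetical rule-based screening "algorithm": just control flow,
# no machine learning involved. All thresholds and fields are invented.

def screen_applicant(income, zip_code, years_employed):
    """Return True if the applicant passes an automated pre-screen."""
    if income < 40_000:
        return False
    if years_employed < 2:
        return False
    # Seemingly neutral rules can encode bias: zip code often acts as
    # a proxy for race or class in historical lending data.
    if zip_code in {"21215", "21217"}:
        return False
    return True

print(screen_applicant(55_000, "21250", 3))   # True
print(screen_applicant(55_000, "21215", 10))  # False: rejected by the proxy rule
```

Note that the decision is fast and perfectly consistent, yet the third rule quietly imports whatever bias shaped the list of zip codes.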

What is fairness to an algorithm?

due process:

 a legal principle ensuring that everyone is entitled to a fair and impartial procedure before being deprived of life, liberty, or property.

...

For algorithms in society, this means that people should have the right to:

  • Understand how algorithms affect them
  • Challenge and correct unfair or biased outcomes
  • Access their data and control how it is used
  • Seek recourse if harmed by algorithmic decisions

Burton, Emanuelle; Goldsmith, Judy; Mattei, Nicholas; Siler, Cory; Swiatek, Sara-Jo. Computing and Technology Ethics: Engaging through Science Fiction (pp. 117-118). MIT Press.

hungry judges

a study of parole boards in 2011 found that parole was granted nearly 65% of the time at the start of a session, barely above 0% right before a meal break, and again nearly 65% after a break

...

hungry judges are harsher judges!

Fairness in Algorithmic Decision-making

  • Algorithms require formalization of what to compute
    • you need to actually write an equation of your goal i.e. an objective function
      • What is the cost of making each decision?

  • You also need to write an equation for evaluating the results for accuracy and fairness
  • If we want to use algorithms to make decisions, it forces us to be precise about what we think "fairness" is and how we define it
  • How do you define fairness?

What is fairness to an algorithm?

cost = f(benefit, risks)
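As a sketch of what "writing an equation of your goal" looks like in practice, here is a toy objective function and one candidate fairness metric (demographic parity). The weight and data are assumptions, and demographic parity is only one of several competing formal definitions of fairness:

```python
# Minimal sketch of formalizing "cost" and one candidate definition of
# fairness. Weights and data are hypothetical.

def cost(benefit, risk, risk_weight=2.0):
    """A toy objective: committing to an equation forces you to decide
    how heavily risk counts against benefit."""
    return risk_weight * risk - benefit

def demographic_parity_gap(decisions, groups):
    """Absolute difference in positive-decision rate between two groups.
    Demographic parity says this gap should be (near) zero."""
    rate = {}
    for g in set(groups):
        members = [d for d, grp in zip(decisions, groups) if grp == g]
        rate[g] = sum(members) / len(members)
    a, b = rate.values()
    return abs(a - b)

decisions = [1, 1, 0, 1, 0, 0, 0, 1]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(decisions, groups))  # 0.5: group A approved far more often
```

The point is not that this metric is the right one; it is that an algorithm cannot be "fair" until someone writes down, in exactly this way, what fairness is supposed to mean.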

Fairness in Algorithmic Decision-making

Human Involvement in Algorithmic Decision-making

The "Loop"

Think about the difference between using automated systems with different levels of:

  • severity of harm
  • probability of harm

Human Involvement in Algorithmic Decision-making

The "Loop"

Human-out-of-the-loop (Quadrant 3)
Where the probability of harm is low & severity of harm is low, the system no longer needs humans to help make a decision. In this scenario, the AI runs on its own without human supervision.

Human-over-the-loop (Quadrants 1 and 4)
Humans mediate when it's determined an automated system has failed. However, if humans are not paying attention, AI will continue without human intervention.

Human-in-the-loop (Quadrant 2)

When both the probability of harm and the severity of harm are high, a human must be part of the decision. The AI provides suggestions to humans; if there is no human to make a decision, no action is executed.
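The quadrant logic above can be sketched as a function from (probability of harm, severity of harm) to a level of human involvement. The 0.5 threshold is an arbitrary assumption for illustration; real systems would need a far more careful risk assessment:

```python
# Sketch of the severity/probability quadrants. The threshold and the
# example inputs are assumptions, not a real deployment policy.

def oversight_level(p_harm, severity, threshold=0.5):
    """Map probability and severity of harm to a human-oversight level."""
    high_p = p_harm >= threshold
    high_s = severity >= threshold
    if high_p and high_s:
        return "human-in-the-loop"      # AI only suggests; a human decides
    if not high_p and not high_s:
        return "human-out-of-the-loop"  # AI acts autonomously
    return "human-over-the-loop"        # human monitors and can intervene

print(oversight_level(0.9, 0.9))  # human-in-the-loop      (e.g., medical treatment)
print(oversight_level(0.1, 0.1))  # human-out-of-the-loop  (e.g., song recommendations)
print(oversight_level(0.1, 0.9))  # human-over-the-loop    (e.g., firewall breaches)
```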

HUMAN OVER THE LOOP

Example 1: Traffic prediction systems. Automated system will suggest shortest route to next destination. Humans can overturn that decision.


Example 2: Some cybersecurity solutions. Important company data is protected by a firewall and encrypted. Hackers are unlikely to penetrate your firewall and decrypt encrypted data. However, if they do, severity is high. For insidious new attacks, humans should pay close attention to what is happening.

HUMAN OUT OF THE LOOP

Example 1: Recommendation engines. Many e-commerce sites will help consumers find the products they are most likely to buy; Spotify recommends songs you want to listen to next. The probability of damage is low, and the severity of seeing disliked shoes or hearing disliked songs is also low. Humans are not necessary. What about recommending news articles?

 

Example 2: Translators. Except in highly delicate situations, AI-based translation systems are improving at such a rapid rate that we will not need humans soon. AI is learning how to conduct basic translation and how to translate the meaning of slang and localized meanings of words. 

HUMAN IN THE LOOP

Example 1: Medical diagnosis and treatment. AI can help a doctor to determine what is wrong with you. Furthermore, AI can suggest treatment for you by examining the outcomes from other patients with a similar diagnosis. The AI will be aware of patient outcomes that our doctor is not aware of. AI is making our doctors more educated, and we want our doctors to make the final decision.

 

Example 3: Lethal Automated Weapon Systems (LAWS). Severity of harm is so high (shooting the wrong target) that it overrides whatever the probability of harm is (for now). In "rights-conscious nations," no LAWS can fire without human confirmation.

...

 DODD 3000.09 requires that all systems, including LAWS, be designed to “allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” [...for now]

Can we minmax ethics?

  • I want to caution you against unfettered techno-solutionism

Minmaxing:

A strategy often used in games, optimization problems, and decision-making processes, where a player or decision-maker seeks to maximize their strengths or advantages while minimizing their weaknesses or disadvantages. The term originates from the concept of minimizing the possible maximum loss, or maximizing the possible minimum gain, depending on the context.

 

In video games and role-playing games (RPGs), players often use minmaxing to optimize their characters by allocating resources such as skill points, equipment, and abilities in a way that maximizes their effectiveness in desired areas (e.g., combat prowess, magic ability) while minimizing their effectiveness in less important areas.
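In its decision-theory sense, minmaxing means choosing the action whose worst-case loss is smallest. A minimal sketch, with an invented loss table:

```python
# Minimax over a loss table: pick the action minimizing the maximum loss.
# The actions and loss values below are invented for illustration.

def minimax_choice(losses):
    """losses[action] = list of possible losses for that action."""
    return min(losses, key=lambda action: max(losses[action]))

losses = {
    "ship fast, skip review": [0, 1, 100],  # usually cheap, occasionally disastrous
    "ship slow, full review": [5, 5, 10],   # always costs something, never disastrous
}
print(minimax_choice(losses))  # ship slow, full review
```

Even this tiny example shows the catch: minmaxing only works after every outcome has been assigned a number, and that assignment is exactly where the hard ethical questions hide.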

Criteria for Algorithmic Accountability

  1. Does it work? (Efficiency to achieve public safety)
  2. Is it fair? (Fairness)
  3. Can people understand how it works? (Transparency)
  4. Can people appeal its judgment? (Due Process)
  5. Does it use information that an individual might reasonably expect to remain private? (Privacy)

https://web.stanford.edu/class/cs182/

How do we balance the value of an algorithm in terms of achieving public safety against other values beyond fairness?

  • What else do we value? How do we encode that in law or policy? 

Human Decision-making

Example: Ford Pinto

https://www.classiccarstodayonline.com/classic-car-print-advertisements/ford-1974-ford-pinto-ad-a/

https://www.tortmuseum.org/ford-pinto/#images-1

  • Since $137 M > $49.5 M, they said "meh"

  • It did not go well

    • 500-900 people burned to death.

    • Ford found not guilty of homicide. Did the recall afterwards. Incurred legal fees.

Human Decision-making

Example: Ford Pinto

Expected payout for potential accidents:

  • Estimated incidents: 180 burn deaths, 180 serious burn injuries, and 2,100 vehicles lost in fires annually.
  • Value assigned per fatality: $200,000
  • Value per injury: $67,000
  • Value per vehicle lost: $700
  • Using these figures, Ford calculated the total cost of settlements for burn deaths, injuries, and vehicle losses to be approximately $49.5 million.

Cost to fix the fuel tank design:

  • $11 part cost per vehicle
  • With 11 million cars and 1.5 million light trucks (12.5 million vehicles in total) potentially affected, the total cost to fix all vehicles was calculated at approximately $137 million.
  • Since $137 M > $49.5 M, they said "meh"

  • It did not go well

    • 500-900 people burned to death.

    • Ford found not guilty of homicide. Did the recall afterwards. Incurred legal fees.

Human Decision-making

Example: Ford Pinto

Expected payout for potential accidents:

  • Estimated incidents: 180 burn deaths, 180 serious burn injuries, and 2,100 vehicles lost in fires annually.
  • Value assigned per fatality: $200,000
  • Value per injury: $67,000
  • Value per vehicle lost: $700
  • Using these figures, Ford calculated the total cost of settlements for burn deaths, injuries, and vehicle losses to be approximately $49.5 million.

Cost to fix the fuel tank design:

  • $11 part cost per vehicle
  • With 11 million cars and 1.5 million light trucks (12.5 million vehicles in total) potentially affected, the total cost to fix all vehicles was calculated at approximately $137 million.
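The memo arithmetic above can be checked directly, using the figures as cited on the slide:

```python
# Ford Pinto cost-benefit arithmetic, using the figures cited above.

deaths, injuries, vehicles_lost = 180, 180, 2_100
settlement = deaths * 200_000 + injuries * 67_000 + vehicles_lost * 700
print(f"expected payout: ${settlement:,}")  # $49,530,000 (~$49.5 M)

fleet = 11_000_000 + 1_500_000  # cars + light trucks = 12.5 M vehicles
fix_cost = fleet * 11           # $11 part per vehicle
print(f"cost to fix:     ${fix_cost:,}")    # $137,500,000 (~$137 M)

# $137 M > $49.5 M -- on paper. The calculation omits reputational damage,
# legal fees, and the question of whether a life can be priced at $200,000.
```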

Outside Ethical Frameworks

Moral intuition: an unreasoned reaction (not unreasonable)

  • Our “intuitive awareness of value”
  • Intrinsically motivating
  • Retrainable; fallible; unavoidable
  • We're not the only species with a sense of justice

Why Ethics?

  • Ethics is the theory and practice of ways to make good choices and lead a good life

 

  • It involves both knowledge and skills (i.e. practice)

 

Activity (+Example)

Learning objectives:

  • Gain a better understanding of how computer scientists and/or data scientists contribute to the world
  • Take the opportunity to think about the responsibility that rests on your shoulders should you pursue a career in computer and/or data science
  • Consider the need for ethics education in computer and/or data science

Activity (+Example)

Example

A computer hardware designer could create a new phone that was feature rich but also designed to be obsolete within a year so that the user would need to replace it quickly. This would allow for a significant revenue stream for the company. However, the plethora of features could be too complex for an elderly person to manage, and the planned obsolescence could result in excessive waste.

 

On the flip-side, the designer could implement a version of the phone where the initial setup included a choice of “essentials only” resulting in a screen that was easy to navigate and included a large, clean, display. There could be a prominent “help” button that connected the user with a predetermined relative or caregiver. The phone could come with an offer of a discount if the buyer’s previous phone was turned in for recycling (materials reuse) or donation to a non-profit.

 

Report Out

Wrap up

  • According to author Sara Baase, in her book A Gift of Fire (Baase 2012), “Ethics is the study of what it means to do the right thing.”
  • While this is a simple definition, it turns out that the study of ethics can be a complex topic. However, it is certainly one that computer and data scientists, because of the nature of their work, need to carefully consider as they are trained and then move forward into their careers.
  • As we complete more activities like this, you'll have opportunities to learn tools that help you, as computer/data scientists, make ethical decisions about implementations and anticipate detrimental outcomes

Reflection Journal

Course Intro

By Rebecca Williams


Setting the stage for the course topics
