1. Stanford Encyclopedia of Philosophy – “Virtue Ethics”
Virtue ethics frames moral life around the cultivation of good character rather than the application of universal rules or the calculation of consequences. Rooted in traditions from Aristotle and Confucius to contemporary moral philosophy, it defines virtues as deeply ingrained dispositions—like honesty, courage, or compassion—that enable moral agents to act wisely across contexts. The fully virtuous person acts with harmony between reason and emotion, guided by phronesis or practical wisdom, which allows moral judgment to adapt intelligently to the complexities of real life.
2. Shannon Vallor – Podcast Interview on Virtue Ethics and Technology
Vallor extends virtue ethics into the technological age, emphasizing that moral excellence must evolve alongside the tools and environments that shape human practice. Virtues are learned through habit, mentorship, and reflection, and must now be cultivated in digital and AI-mediated spaces where empathy, care, and imagination are easily eroded. She highlights practical wisdom as especially crucial today—our capacity to reinterpret inherited moral scripts to meet unprecedented ethical challenges. Technology, she argues, doesn’t just test our virtues but also reshapes them, altering the very practices through which we learn to be good.
Virtue Ethics
3. Wallach & Vallor – “Moral Machines: From Value Alignment to Embodied Virtue”
Wallach and Vallor argue that aligning AI with human preferences or rules is not enough to ensure moral reliability. True ethical trustworthiness would require machines capable of something analogous to virtue—stable, context-sensitive moral understanding guided by practical wisdom and oriented toward human flourishing. Yet they warn that such “embodied virtue” remains far beyond current technical reach, as machines lack the holistic grasp of moral salience that humans develop through embodied experience and emotional attunement. The aspiration toward virtuous machines thus serves less as a near-term engineering target than as a regulative ideal reminding us what moral excellence, human or artificial, ultimately entails.
PRE-CLASS
READING 1
What best captures the focus of virtue ethics and its relevance for technology and AI?
A. It judges the moral worth of technologies solely by their social consequences.
B. It applies universal moral rules to ensure machines behave rationally.
C. It centers on cultivating moral skills and character traits such as courage, compassion, and practical wisdom that humans must practice and adapt in technologically changing contexts.
D. It seeks to replace traditional moral reasoning with automated decision systems that prevent vice.
PRE-CLASS
READING 2
According to the passage, what chiefly distinguishes virtue ethics from deontology and consequentialism?
A. Virtue ethics ignores rules and consequences entirely.
B. Virtue ethics treats virtues and vices as the foundation on which other moral notions are based.
C. Virtue ethics defines virtues in terms of producing good consequences.
D. Virtue ethics requires acting only from a sense of duty or obligation.
In Aristotle’s framework, phronesis or practical wisdom serves primarily to:
A. Provide motivation through emotional inclination rather than rational choice.
B. Ensure that good intentions are matched by sound understanding of how to act rightly in concrete situations.
C. Eliminate the need for moral education or experience.
D. Distinguish deontological from consequentialist reasoning.
Wallach & Vallor (2020) – “Moral Machines: From Value Alignment to Embodied Virtue”
What key argument do Wallach and Vallor make in “Moral Machines: From Value Alignment to Embodied Virtue”?
A. Aligning AI systems with human preferences is sufficient to guarantee moral safety.
B. True moral reliability in machines requires cultivating something akin to virtue or moral character, not merely value alignment.
C. Virtue ethics is irrelevant to discussions of AI because machines lack emotions.
D. AI systems should follow fixed moral rules rather than adapting to new moral contexts.
Odd Birthday FOR
Even Birthday AGAINST
CLASS
Consider these "virtues" or "alignment traits"
Is anything missing?
TASK: Design a roommate (housemate)
TASK: Design a robot companion/assistant.
Birthday 1-15
Birthday 16-31
CONSTRAINT: You can only choose 6 traits. Which ones are they?
For (at least) one of the missing traits:
1) What are the anticipated failure modes (what can go wrong)?
2) For each failure mode, what other mechanisms might we engage to address it?
Missing Trait: Veracity
Failure Modes: Without it, misunderstandings multiply: the roommate denies saying what they said, or misreports events.
Alternative Mechanisms: written agreements, shared calendars, message logs, camera and recording devices.
Missing Trait: Epistemic humility
Failure Modes: The robot assumes it always knows best, acts with dangerous overconfidence, and ignores human information and advice.
Alternative Mechanisms: confidence thresholds, "ask-for-help" triggers.
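The confidence-threshold and "ask-for-help" mechanisms above can be sketched in a few lines. This is a hypothetical illustration, not anything from the readings: the threshold value, the `decide` function, and the action strings are all invented for the example.

```python
# Hypothetical sketch of an "epistemic humility" guard: the robot acts
# autonomously only when its self-estimated confidence exceeds a threshold;
# below it, the ask-for-help trigger fires and a human is consulted.

ASK_FOR_HELP_THRESHOLD = 0.8  # assumed cutoff; tuning is domain-specific


def decide(action: str, confidence: float) -> str:
    """Return the robot's next step given its confidence in an action."""
    if confidence >= ASK_FOR_HELP_THRESHOLD:
        return f"execute: {action}"
    # Below the threshold, defer to a human instead of acting.
    return f"ask human before: {action}"


print(decide("water the plants", 0.95))        # confident enough to act
print(decide("administer medication", 0.55))   # defers to a human
```

The point of the sketch is that humility here is engineered from outside the system: the machine is not virtuous, but a fixed mechanism approximates one behavioral effect of the virtue.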