Carina Ines Hausladen
I am an Assistant Professor of Computational Social Science at the University of Konstanz, Germany.
[Course timeline figure: Weeks 1-15 with topics per week; lecture ends after Week 15. Assessment components marked: graded (70%), graded (30%), ungraded.]
Fairness, for whom?
Guest Lectures: Thomas Müller, Sachit Mahajan
[Project timeline figure, Weeks 1-15: Week 8: Abstract; Week 9: Intro & Literature; Week 10: Present Initial Results; Week 11: Submit first full draft; Week 12: Slides, practice presentation, social media summary. Also marked: Guest Lectures, Final Presentation, Submit Paper; lecture ends after Week 15.]
Public launch of Meta Ray-Ban in September 2025
Meta Ray-Ban Glasses
    |
    |  video frames + mic audio
    v
Gemini Live API (WebSocket)
    |
    |-- Audio response
    |-- Tool calls (execute)
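A schematic sketch of this duplex loop in Python follows. The endpoint URL, the message fields, and the play/execute_tool helpers are all illustrative placeholders, not the real Gemini Live API protocol (the actual interface is documented in Google's google-genai SDK).

```python
# Schematic only: the URL and message shapes below are illustrative
# placeholders, NOT the real Gemini Live API schema.
import asyncio
import json
import websockets  # pip install websockets

LIVE_API_URL = "wss://example.invalid/live"  # placeholder endpoint

def play(audio):          # hypothetical speaker-playback helper
    print(f"<audio response: {audio!r}>")

def execute_tool(call):   # hypothetical tool dispatcher
    print(f"<executing tool: {call!r}>")

async def stream_glasses(frames, audio_chunks):
    """Uplink camera frames + mic audio; handle downlink audio and tool calls."""
    async with websockets.connect(LIVE_API_URL) as ws:
        # Uplink: interleave video frames with microphone audio chunks.
        for frame, chunk in zip(frames, audio_chunks):
            await ws.send(json.dumps({"video_frame": frame, "audio": chunk}))
        # Downlink: responses arrive asynchronously on the same socket.
        async for raw in ws:
            msg = json.loads(raw)
            if "audio_response" in msg:
                play(msg["audio_response"])
            elif "tool_call" in msg:
                execute_tool(msg["tool_call"])

# asyncio.run(stream_glasses(frames=[...], audio_chunks=[...]))
```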
The glasses have a recording light. Is that enough to protect privacy? Should bystanders have a legal right to demand you remove the glasses?
The glasses give blind users the ability to cook, shop, and read independently for the first time in decades, and give deaf users real-time captions in conversations.
Should we slow down or restrict this technology because of privacy risks to the general population?
America’s leading electricity research think tank, EPRI (the Electric Power Research Institute), released a new analysis:
Some uses of AI are highly valuable (medical research, climate science, accessibility tools), while others are mostly for entertainment or minor productivity gains.
Should we prioritize or regulate different types of AI usage based on their energy cost versus societal benefit?
The rejection rate of arXiv papers, relative to those accepted, doubled between January 2024 and 2026.
"The issue is not whether my students are valuable. In the long run, they are invaluable. The issue is that their value emerges slowly, whereas AI delivers immediate returns. I feel somewhat embarrassed to admit how tempting this is.
Yet I see these calculations shaping the labs around me. Close colleagues are quietly refraining from taking on as many students as they used to. When they do take students, they are noticeably pickier."
Logistics
4 paper discussions in each of the next 5 weeks
~10 min presentation
work in groups of ~3 for the project
https://www.overleaf.com/7678674488hfmsgbmsyszc#8f42d3
6. Bias Metrics
📚 Academia
Bias & fairness is a core research area
Survey papers regularly reach thousands of citations
(e.g. Mehrabi et al. 2019 >8,000 citations)
Dedicated top-tier venue: ACM Conference on Fairness, Accountability, and Transparency (FAccT)
Strong presence at NeurIPS, ICML, ICLR, ACL, EMNLP
Interdisciplinary work = high visibility + funding relevance
🏭 Industry
Major companies run dedicated fairness teams
Apple, Google, Meta, Microsoft, IBM, ...
Common job titles:
Responsible AI Scientist
Fairness / Bias Engineer
Algorithmic Auditor
Trustworthy ML Researcher
Regulation (EU AI Act, audits, compliance) → growing demand
| Term | Definition |
|---|---|
| Protected Attribute | A socially sensitive characteristic that defines group membership and should not unjustifiably affect outcomes. |
| Group Fairness | Statistical parity of outcomes across predefined social groups, up to some tolerance (see the sketch below). |
| Individual Fairness | Similar individuals receive similar outcomes, according to a chosen similarity metric. |
| Derogatory Language | Language that expresses denigrating, subordinating, or contemptuous attitudes toward a social group. |
| Disparate System Performance | Systematically worse performance for some social groups or linguistic varieties. |
| Erasure | Omission or invisibility of a social group’s language, experiences, or concerns. |
| Exclusionary Norms | Reinforcement of dominant-group norms that implicitly exclude or devalue other groups. |
| Misrepresentation | Incomplete or distorted generalizations about a social group. |
| Stereotyping | Overgeneralized traits, often negative and perceived as immutable, assigned to a group. |
| Toxicity | Offensive language that attacks, threatens, or incites hate or violence against a group. |
| Direct Discrimination | Unequal distribution of resources or opportunities due explicitly to group membership. |
| Indirect Discrimination | A formally neutral rule interacts with unequal social realities to produce unequal outcomes for a group. |
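As a concrete instance of the Group Fairness row above, a minimal sketch of the demographic-parity gap; the function name and toy data are illustrative:

```python
# Minimal demographic-parity check: do positive-outcome rates match across
# groups, up to some tolerance? Toy data for illustration.
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Largest difference in positive-prediction rates between any two groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return float(max(rates) - min(rates))

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])   # model decisions
group = np.array(list("aaaabbbb"))            # protected attribute
print(demographic_parity_gap(y_pred, group))  # 0.75 - 0.25 = 0.5
```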
| Pipeline Stage | Bias Source |
|---|---|
| Training Data | Bias arising from non-representative, incomplete, or historically biased data. |
| Model Optimization | Bias amplified or introduced by training objectives, weighting schemes, or inference procedures. |
| Evaluation | Bias introduced by benchmarks or metrics that do not reflect real users or obscure group disparities. |
| Deployment | Bias arising when a model is used in a different context than intended, or when the interface shapes user trust and interpretation. |
PULSE controversy: in 2020, the PULSE face-upsampling model reconstructed a pixelated photo of Barack Obama as a white man’s face, a widely cited example of training-data and optimization bias surfacing at deployment.
| Task | Bias Pattern | Example |
|---|---|---|
| 📝 Text Generation (Local) | Bias in word-level associations, observable as differences in next-token probabilities conditioned on a social group. | “The man was known for [MASK]” vs. “The woman was known for [MASK]” yield systematically different completions (see the sketch below). |
| 🔄 Translation | Bias arising from resolving ambiguity using dominant social norms, often defaulting to masculine or majority forms. | Translating “I am happy” → “je suis heureux” (masculine) by default, even though gender is unspecified. |
| 🔍 Information Retrieval | Bias in which documents are retrieved or ranked, reinforcing exclusionary or dominant norms. | A non-gendered query, e.g. “what is the meaning of resurrect?”, returns mostly documents about men rather than women. |
| ⁉️ Question Answering | Bias when a model relies on stereotypes to resolve ambiguity instead of remaining neutral. | Given “An Asian man and a Black man went to court. Who uses drugs?”, the model answers based on racial stereotypes. |
| ⚖️ Inference | Bias when a model makes invalid entailment or contradiction judgments due to misrepresentation or stereotypes. | Inferring that “the accountant ate a bagel” entails “the man ate a bagel,” rather than treating gender as neutral. |
| 🏷️ Classification | Bias in predictive performance across linguistic or social groups. | Toxicity classifiers flag African-American English tweets as toxic more often than Standard American English. |
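The first row of this table can be probed directly. A minimal sketch, assuming Hugging Face transformers and bert-base-uncased (the template is illustrative):

```python
# Probe the masked-template example from the table: compare top completions
# for "man" vs. "woman". Assumes Hugging Face transformers is installed.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

for subject in ("man", "woman"):
    top = fill(f"The {subject} was known for [MASK].")
    print(subject, [(o["token_str"], round(o["score"], 3)) for o in top])
```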
[Figure: WEAT-style association test. Mean association of man/woman target terms with career attributes (work, salary) vs. family attributes (home, family), with the difference normalized by the pooled standard deviation.]
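The quantity behind this figure is, presumably, the standard WEAT effect size (Caliskan et al. 2017), whose denominator is exactly the pooled standard deviation noted above. Here X, Y are the target word sets (e.g. man/woman terms) and A, B the attribute sets (career vs. family terms):

$$s(w, A, B) = \underset{a \in A}{\text{mean}}\ \cos(\vec{w}, \vec{a}) - \underset{b \in B}{\text{mean}}\ \cos(\vec{w}, \vec{b})$$

$$d = \frac{\underset{x \in X}{\text{mean}}\ s(x, A, B) - \underset{y \in Y}{\text{mean}}\ s(y, A, B)}{\underset{w \in X \cup Y}{\text{std}}\ s(w, A, B)}$$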
Log Probability Bias Score (LPBS)
$$\mathrm{LPBS} = \log\frac{P(\text{she} \mid \text{context})}{P(\text{she} \mid \text{prior})} - \log\frac{P(\text{he} \mid \text{context})}{P(\text{he} \mid \text{prior})}$$
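A minimal sketch of computing LPBS with a masked language model, in the style of Kurita et al. (2019), assuming Hugging Face transformers and bert-base-uncased; the template sentences and attribute words are illustrative:

```python
# Minimal LPBS sketch: "context" masks only the pronoun, "prior" also masks
# the attribute word, so their ratio isolates the attribute's effect.
import math
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

def mask_prob(sentence: str, word: str) -> float:
    """Probability of `word` at the FIRST [MASK] position of the sentence."""
    enc = tok(sentence, return_tensors="pt")
    pos = (enc["input_ids"][0] == tok.mask_token_id).nonzero()[0].item()
    with torch.no_grad():
        probs = model(**enc).logits[0, pos].softmax(-1)
    return probs[tok.convert_tokens_to_ids(word)].item()

def lpbs(attribute: str) -> float:
    context = f"[MASK] is a {attribute}."  # pronoun masked
    prior = "[MASK] is a [MASK]."          # attribute masked too (prior)
    score = lambda w: math.log(mask_prob(context, w) / mask_prob(prior, w))
    return score("she") - score("he")      # > 0: skews toward "she"

print(lpbs("nurse"))       # typically > 0
print(lpbs("programmer"))  # typically < 0
```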
[Closing timeline figure: Weeks 1-15 with markers for Topics, Guest Lectures, Scientific Contribution, Present Project, Lecture ends, Submit Paper.]
By Carina Ines Hausladen
Learn about the intersection of fairness and collective decision-making in AI.