What is fairness to an algorithm?
due process:
a legal principle ensuring that everyone is entitled to a fair and impartial procedure before being deprived of life, liberty, or property.
...
For algorithms in society, this means that people should have the right to:
Burton, Emanuelle; Goldsmith, Judy; Mattei, Nicholas; Siler, Cory; Swiatek, Sara-Jo. Computing and Technology Ethics: Engaging through Science Fiction (pp. 117-118). MIT Press.
hungry judges
a 2011 study of parole boards found that parole was granted nearly 65% of the time at the start of a session, barely above 0% right before a meal break, and nearly 65% again after the break
...
hungry judges are harsher judges!
https://web.stanford.edu/class/cs182/
https://www.fdic.gov/analysis/cfr/consumer/2018/documents/aaron-roth.pdf
Solon Barocas and Moritz Hardt, "Fairness in Machine Learning", NeurIPS 2017
How do we balance the value an algorithm delivers for public safety against other values beyond fairness?
Example: Ford Pinto
https://www.classiccarstodayonline.com/classic-car-print-advertisements/ford-1974-ford-pinto-ad-a/
https://www.tortmuseum.org/ford-pinto/#images-1
Expected payout for potential accidents: $49.5 M
Cost to fix the fuel tank design: $137 M
Since $137 M > $49.5 M, they said "meh" (the arithmetic is sketched in code below)
It did not go well:
500-900 people burned to death.
Ford was found not guilty of homicide, did the recall afterwards, and incurred legal fees.
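To make the expected-value reasoning concrete, here is a minimal sketch of the cost-benefit arithmetic. Only the $49.5 M and $137 M totals appear above; the per-unit breakdown used below (predicted burn deaths and injuries, the dollar value assigned per life, roughly 12.5 million affected vehicles at $11 per fix) is the commonly cited memo breakdown and should be treated as an illustrative assumption.

```python
# Sketch of the Pinto-style expected-value comparison.
# The per-unit figures are the commonly cited memo breakdown, used here only
# to illustrate how the $49.5M and $137M totals were reached.

expected_payout = (
    180 * 200_000     # predicted burn deaths x dollar value assigned per life
    + 180 * 67_000    # predicted serious burn injuries x value per injury
    + 2_100 * 700     # predicted burned vehicles x value per vehicle
)                     # ~ $49.5 million

cost_to_fix = 12_500_000 * 11   # ~12.5 million vehicles x $11 fuel-tank fix each
                                # ~ $137 million

print(f"expected payout: ${expected_payout / 1e6:.1f} M")
print(f"cost to fix:     ${cost_to_fix / 1e6:.1f} M")
print("worth fixing?", cost_to_fix < expected_payout)   # False -> "meh"
```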
Think about the difference between using automated systems with different levels of human oversight (a quadrant sketch in code follows this list):
Human-out-of-the-loop (Quadrant 3)
When both the probability of harm and the severity of harm are low, the system no longer needs humans to help make a decision. In this scenario, the AI runs on its own without human supervision.
Human-over-the-loop (Quadrants 1 and 4)
Humans step in when an automated system is determined to have failed. However, if humans are not paying attention, the AI continues without human intervention.
Human-in-the-loop (Quadrant 2)
When both the probability of harm and the severity of harm are high, a human must be part of the decision. The AI provides suggestions to the human; if no human is available to make a decision, no action is executed.
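As a rough sketch of the quadrant logic above, the mapping from estimated probability and severity of harm to an oversight level might look like the following. The 0.5 thresholds, the function name, and the example calls are illustrative assumptions, not part of any standard.

```python
# Illustrative mapping from (probability of harm, severity of harm) to an
# oversight level, following the quadrant description above.
# Thresholds and names are assumptions made for this sketch.

def oversight_level(prob_harm: float, severity: float) -> str:
    high_prob = prob_harm >= 0.5
    high_sev = severity >= 0.5
    if high_prob and high_sev:
        return "human-in-the-loop"       # Quadrant 2: a human makes the final call
    if not high_prob and not high_sev:
        return "human-out-of-the-loop"   # Quadrant 3: the AI runs unsupervised
    return "human-over-the-loop"         # Quadrants 1 and 4: a human monitors and can override

print(oversight_level(0.8, 0.9))  # human-in-the-loop    (e.g., medical diagnosis)
print(oversight_level(0.1, 0.1))  # human-out-of-the-loop (e.g., song recommendations)
print(oversight_level(0.1, 0.9))  # human-over-the-loop  (e.g., cybersecurity monitoring)
```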
HUMAN OVER THE LOOP
Example 1: Traffic prediction systems. The automated system suggests the shortest route to the next destination; a human can override that suggestion.
Example 2: Some cybersecurity solutions. Important company data is protected by a firewall and encrypted. Hackers are unlikely to penetrate the firewall and decrypt the data, but if they do, the severity is high. For insidious new attacks, humans should pay close attention to what is happening.
HUMAN OUT OF THE LOOP
Example 1: Recommendation engines. Many e-commerce sites help consumers find the products they are most likely to buy, and Spotify recommends songs you might want to listen to next. The probability of harm is low, and the severity of seeing disliked shoes or hearing disliked songs is also low, so humans are not necessary. What about recommending news articles?
Example 2: Translators. Except in highly delicate situations, AI-based translation systems are improving so rapidly that human translators may soon be unnecessary. AI is learning not only basic translation but also how to handle slang and localized meanings of words.
HUMAN IN THE LOOP
Example 1: Medical diagnosis and treatment. AI can help a doctor determine what is wrong with you, and it can suggest a treatment by examining the outcomes of other patients with a similar diagnosis. The AI is aware of patient outcomes that your doctor is not, making doctors better informed, but we still want the doctor to make the final decision.
Example 3: Lethal Autonomous Weapon Systems (LAWS). The severity of harm (shooting the wrong target) is so high that it overrides whatever the probability of harm is (for now). In "rights-conscious nations," no LAWS can fire without human confirmation.
...
DODD 3000.09 requires that all systems, including LAWS, be designed to “allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” [...for now]
Overpolicing refers to a situation where law enforcement agencies disproportionately focus their efforts and resources on certain communities, areas, or groups of people, often leading to excessive or unnecessary police presence and intervention. This can result in frequent stops, searches, arrests, and surveillance, particularly targeting marginalized communities, such as racial or ethnic minorities, low-income neighborhoods, or other vulnerable populations.
A proxy is an indirect measure or substitute used to represent a variable or concept that cannot be measured directly. Proxies are often used when the true variable of interest cannot be observed or measured due to practical, ethical, or technical constraints.
~~~
Some proxies can be used to indirectly infer "protected characteristics" like race, gender, religion:
someone is poor → they have a low credit score → a potential employer checks the credit score and rejects them because it is too low → the person stays poor
Bias Amplification
Social media: A user interacts with content supporting a particular political view → the system recommends similar content → user continues engaging with similar content, deepening their viewpoint → system reinforces this trend by recommending even more similar content, creating an echo chamber.
Health analytics: A person in a low-income area with limited healthcare access has a poor health outcome → the model predicts high risk for future health issues, prioritizing emergency treatment over preventive care → person receives only limited treatment → health does not improve → cycle repeats.
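A toy simulation can make this kind of feedback loop concrete. Everything here is invented for illustration (a single viewpoint score, a simple engagement model, a recommender that boosts whatever was just engaged with); it is a sketch of the dynamic, not a model of any real system.

```python
import random

# Toy echo-chamber loop: the recommender's lean toward viewpoint A is updated
# from engagement, and the user mostly engages with content matching their own
# current lean, so small initial leans get amplified over time.
# All parameters are invented for illustration.

random.seed(0)
recommender_lean = 0.55   # probability the recommender shows viewpoint-A content
user_lean = 0.55          # probability the user engages with viewpoint-A content

for step in range(10):
    shows_a = random.random() < recommender_lean
    engage_prob = user_lean if shows_a else 1 - user_lean
    if random.random() < engage_prob:
        # Engagement nudges both the recommender and the user toward
        # whatever was just shown.
        shift = 0.05 if shows_a else -0.05
        recommender_lean = min(max(recommender_lean + shift, 0.0), 1.0)
        user_lean = min(max(user_lean + shift, 0.0), 1.0)
    print(f"step {step}: recommender lean {recommender_lean:.2f}, user lean {user_lean:.2f}")
```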
https://chrispiech.github.io/probabilityForComputerScientists/en/examples/fairness/
Task: build a classifier that predicts whether a user is a Software Engineer, in order to serve them a relevant ad
our moral intuition tells us we should NOT include race and gender in our input dataset for this task!
precision = accuracy of the things you flag as positive
recall = coverage of the things you want to detect
[Figure: confusion-matrix diagrams (actual thing you want to detect vs. not) comparing precision and recall for the low income and high income groups]
Precision: "Given (out of) all the predicted SWE's, how many were actually SWE's?"
Recall: "Given (out of) all the actual SWE's, how many did the model correctly identify as SWE?"
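To connect the definitions above to code, here is a minimal sketch that computes precision and recall separately for each income group from predicted and actual labels. The tiny label arrays are made up purely for illustration.

```python
# Group-wise precision and recall for the SWE-ad classifier example.
# The example labels below are made up purely for illustration (1 = SWE, 0 = not SWE).

def precision_recall(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0  # of all predicted SWEs, how many really are?
    recall = tp / (tp + fn) if tp + fn else 0.0     # of all actual SWEs, how many did we find?
    return precision, recall

groups = {
    "low income":  ([1, 0, 1, 0, 0, 1], [0, 0, 1, 0, 1, 0]),  # (actual, predicted)
    "high income": ([1, 1, 0, 0, 1, 0], [1, 1, 0, 1, 1, 0]),
}

for name, (y_true, y_pred) in groups.items():
    p, r = precision_recall(y_true, y_pred)
    print(f"{name}: precision = {p:.2f}, recall = {r:.2f}")
```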
Note the nuanced difference between this and recall: for strict statistical parity, the goal is to equalize the rate of "positive" predictions across groups, regardless of who actually is a SWE.
Strict Statistical Parity / Group Fairness
Pros
Cons
Rule of Thumb (context sensitive!!): Use SSP when the cost of false alarms is low
Equal Odds
Pros
Cons
Rule of Thumb (context sensitive!!): Use EO when the cost of false alarms is high
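As a sketch of how these two criteria could be checked on a model's outputs, the code below computes the statistical-parity gap (difference in positive-prediction rates) and the equal-odds gaps (differences in true-positive and false-positive rates) between two groups. The group labels and example data are invented for illustration.

```python
# Checking Strict Statistical Parity (SSP) and Equal Odds (EO) between two groups.
# Example labels below are invented for illustration (1 = positive, 0 = negative).

def rates(y_true, y_pred):
    pos_rate = sum(y_pred) / len(y_pred)                             # P(pred = 1)
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    pos = sum(y_true)
    neg = len(y_true) - pos
    tpr = tp / pos if pos else 0.0                                   # P(pred = 1 | actual = 1)
    fpr = fp / neg if neg else 0.0                                   # P(pred = 1 | actual = 0)
    return pos_rate, tpr, fpr

group_a = ([1, 0, 1, 0, 0, 1], [1, 0, 1, 1, 0, 0])   # (actual, predicted)
group_b = ([1, 1, 0, 0, 1, 0], [1, 1, 0, 0, 0, 0])

pr_a, tpr_a, fpr_a = rates(*group_a)
pr_b, tpr_b, fpr_b = rates(*group_b)

print(f"SSP gap (positive-rate difference): {abs(pr_a - pr_b):.2f}")
print(f"EO gaps: TPR {abs(tpr_a - tpr_b):.2f}, FPR {abs(fpr_a - fpr_b):.2f}")
```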