Networks & Cognition Workshop, June 2023
Our digital world is increasingly algorithmically mediated, and these algorithms are powered by machine learning.
historical movie ratings \(\to\) new movie rating
How do we design reliable algorithms that account for user dynamics?
Interests may be impacted by recommended content: the preference state \(s_t\) updates to \(s_{t+1}\).
preferences \(s\in\mathcal S^{d-1}\); recommended content: items \(a_t\in\mathcal A\subseteq \mathcal S^{d-1}\), chosen by the recommender policy
expressed preferences: \(y_t = \langle s_t, a_t\rangle + w_t\) (this observation model underlies factorization-based methods)
preference dynamics: \(s_{t+1} = f_t(s_t, a_t)\)
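The observation model above can be sketched in a few lines; this is a minimal simulation of one interaction round, assuming Gaussian observation noise (the dimension and noise scale are arbitrary choices, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5  # latent dimension (arbitrary)

def unit(v):
    """Project a vector onto the unit sphere S^{d-1}."""
    return v / np.linalg.norm(v)

s = unit(rng.normal(size=d))   # preference state s_t on the sphere
a = unit(rng.normal(size=d))   # recommended item a_t on the sphere
w = rng.normal(scale=0.1)      # observation noise w_t (scale is an assumption)
y = s @ a + w                  # expressed preference y_t = <s_t, a_t> + w_t
```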
Assimilation: interests may become more similar to recommended content
\(s_{t+1} \propto s_t + \eta_t a_t\)
(figure: initial preference \(\to\) resulting preference)
Biased Assimilation: the interest update is proportional to affinity
\(s_{t+1} \propto s_t + \eta_t\langle s_t, a_t\rangle a_t\)
Proposed by Hązła et al. (2019) as a model of opinion dynamics
(figure: initial preference \(\to\) resulting preference)
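The two update rules can be placed side by side as a minimal sketch (the step size and random initializations are arbitrary choices):

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def assimilate(s, a, eta):
    """Assimilation: move toward the item, then renormalize to the sphere."""
    return unit(s + eta * a)

def assimilate_biased(s, a, eta):
    """Biased assimilation: the step scales with the current affinity <s, a>."""
    return unit(s + eta * (s @ a) * a)

rng = np.random.default_rng(1)
s0 = unit(rng.normal(size=3))
a = unit(rng.normal(size=3))
s_plain = assimilate(s0, a, eta=0.5)
s_biased = assimilate_biased(s0, a, eta=0.5)
# The plain update increases affinity with a; the biased update increases
# |<s, a>|, moving toward a or -a depending on the initial sign.
```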
When recommendations are made globally, the outcomes differ:
1. Assimilation, \(s_{t+1} \propto s_t + \eta_t a_t\): homogenized preferences
2. Biased assimilation, \(s_{t+1} \propto s_t + \eta_t\langle s_t, a_t\rangle a_t\): polarization (Hązła et al. 2019; Gaitonde et al. 2021)
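A toy population simulation illustrates the contrast (the population size, dimension, step size, and horizon are all arbitrary choices):

```python
import numpy as np

def unit(V):
    return V / np.linalg.norm(V, axis=-1, keepdims=True)

rng = np.random.default_rng(2)
n, d, T, eta = 20, 3, 200, 0.1
a = np.eye(d)[0]                          # one item recommended to everyone
S_plain = unit(rng.normal(size=(n, d)))   # population of unit preferences
S_biased = S_plain.copy()

for _ in range(T):
    S_plain = unit(S_plain + eta * a)                              # assimilation
    S_biased = unit(S_biased + eta * (S_biased @ a)[:, None] * a)  # biased

# Affinities <s, a>: near +1 for everyone under assimilation (homogenized);
# near +1 or -1 depending on initial sign under biased assimilation (polarized).
homogenized = S_plain @ a
polarized = S_biased @ a
```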
Regardless of whether assimilation is biased (\(s_{t+1} \propto s_t + \eta_t\langle s_t, a_t\rangle a_t\)) or not (\(s_{t+1} \propto s_t + \eta_t a_t\)), a personalized fixed recommendation \(a_t=a\) gives
$$ s_t = \alpha_t s_0 + \beta_t a$$
where \(\alpha_t\) is positive and decreasing, and \(\beta_t\) has increasing magnitude (with the same sign as \(\langle s_0, a\rangle\) under biased assimilation).
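This closed form can be checked numerically for biased assimilation with a fixed item (a sketch; the initialization and step size are arbitrary): the iterate stays in \(\mathrm{span}\{s_0, a\}\), and the coefficients are recovered by solving a 2×2 system in the (generally non-orthogonal) basis \(\{s_0, a\}\).

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

rng = np.random.default_rng(3)
s0 = unit(rng.normal(size=4))
a = unit(rng.normal(size=4))
eta = 0.2

s = s0.copy()
for _ in range(50):
    s = unit(s + eta * (s @ a) * a)   # biased assimilation, fixed a_t = a

# Solve [ <s0,s0>  <s0,a> ; <a,s0>  <a,a> ] [alpha; beta] = [<s,s0>; <s,a>]
G = np.array([[1.0, s0 @ a], [s0 @ a, 1.0]])   # Gram matrix of {s0, a}
alpha, beta = np.linalg.solve(G, np.array([s @ s0, s @ a]))
residual = np.linalg.norm(s - (alpha * s0 + beta * a))
# residual is ~0, alpha stays positive, and beta shares the sign of <s0, a>.
```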
Implications [DM22]
It is not necessary to identify preferences to make high-affinity recommendations
Preferences "collapse" towards whatever users are often recommended
Non-manipulation (and other goals) can be achieved through randomization
Simple choice model: given a recommendation, a user clicks with probability \(\mathbb P\{\mathrm{click}\}\)
Preference dynamics lead to a new perspective on harm
Simple definition: harm caused by consumption of harmful content
[Figure: a catalog of content items (music notes); the user clicks the recommended item with probability \(\mathbb P\{\mathrm{click}\}\)]
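One possible instantiation of such a choice model, purely for illustration (the logistic form and the temperature parameter are assumptions, not from the talk):

```python
import numpy as np

def p_click(s, a, temp=1.0):
    """Hypothetical choice model: click probability increases with <s, a>."""
    return 1.0 / (1.0 + np.exp(-(s @ a) / temp))

s = np.array([1.0, 0.0])
p_hi = p_click(s, np.array([1.0, 0.0]))    # high-affinity item
p_lo = p_click(s, np.array([-1.0, 0.0]))   # low-affinity item
```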
Due to preference dynamics, there may be downstream harm, even when no harmful content is recommended
[Figure: successive recommendations (first ♫, then 𝅘𝅥𝅯) shift the user's click probabilities \(\mathbb P\{\mathrm{click}\}\) over the catalog]
This motivates a new recommendation objective which takes into account the probability of future harm [CDEIKW23]
Even if individual preferences are immutable, population level effects may be observed due to retention dynamics
The dynamic of retention & specialization can lead to representation disparity (Hashimoto et al.) and segmentation [DCRMF23]
Example: linear regression with users in two groups (labeled 1 and 2 in the figure)
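A toy sketch of this dynamic in the spirit of Hashimoto et al.: two user groups with different true regression vectors, one model fit to the pooled population, and a hypothetical retention rule where each group returns in proportion to the model's accuracy for it. All numbers here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
# Two groups with different true regression vectors (invented setup).
theta = {1: np.array([1.0, 0.0]), 2: np.array([0.0, 1.0])}
n = {1: 900, 2: 100}   # group 2 starts as a minority

for _ in range(5):
    # Fit a single least-squares model to the pooled population.
    groups = np.repeat([1, 2], [n[1], n[2]])
    X = rng.normal(size=(len(groups), 2))
    y = np.einsum('ij,ij->i', X, np.stack([theta[g] for g in groups]))
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    # Retention: each group returns in proportion to its accuracy
    # (exp(-error) is a hypothetical retention rule).
    for g in (1, 2):
        n[g] = int(n[g] * np.exp(-np.linalg.norm(theta[g] - w)))

# The pooled model tracks the majority, so the minority's error stays large
# and the minority shrinks: representation disparity.
```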
Other References
Hashimoto, Srivastava, Namkoong, Liang, 2018. Fairness Without Demographics in Repeated Loss Minimization. ICML.