Sarah Dean
asst prof in CS at Cornell
4th Annual Learning for Dynamics & Control Conference, June 2022
historical movie ratings \(\to\) new movie rating
recommended content \(u_t\), chosen by the recommender policy
expressed preferences \(y_t = g_t(x_t, u_t) \)
Interests may be impacted by recommended content: \(x_{t+1} = f_t(x_t, u_t)\)
The linear observation model \(y_t = \langle x_t, u_t\rangle + w_t \) underlies factorization-based methods
A model inspired by biased assimilation updates proportional to affinity:
\(x_{t+1} \propto x_t + \eta_t\langle x_t, u_t\rangle u_t\)
preferences \(x_t\in\mathcal S^{d-1}\), items \(u_t\in\mathcal U\subseteq \mathcal S^{d-1}\), observations \(y_t = \langle x_t, u_t\rangle + w_t \)
Proposed by Hązła et al. (2019) as a model of opinion dynamics
Non-personalized exposure leads to polarization (Hązła et al. 2019; Gaitonde et al. 2021)
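As a concrete illustration (not part of the original slides), here is a minimal NumPy simulation of this update; the item set, recommendation policy, noise level, and step-size schedule are arbitrary choices for the sketch.

```python
# Sketch: simulate x_{t+1} ∝ x_t + eta_t * <x_t, u_t> * u_t with y_t = <x_t, u_t> + w_t.
# Item set, policy, noise level, and step sizes are illustrative, not from the talk.
import numpy as np

rng = np.random.default_rng(0)
d, T, sigma = 5, 200, 0.1
x = rng.normal(size=d); x /= np.linalg.norm(x)            # preferences on the sphere
U = rng.normal(size=(8, d))
U /= np.linalg.norm(U, axis=1, keepdims=True)             # item set on the sphere

affinities = []
for t in range(T):
    u = U[rng.integers(len(U))]                           # recommended item (arbitrary policy here)
    y = x @ u + sigma * rng.normal()                      # noisy expressed preference y_t
    affinities.append(y)
    eta = 1.0 / (1 + t)                                   # decaying step size (illustrative)
    x = x + eta * (x @ u) * u                             # update proportional to affinity ...
    x /= np.linalg.norm(x)                                # ... renormalized back to the sphere
print("mean observed affinity:", np.mean(affinities))
```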
Personalized fixed recommendation \(u_t=u\):
$$ x_t = \alpha_t x_0 + \beta_t u$$
where \(\alpha_t\) is positive and decreasing, and \(\beta_t\) has increasing magnitude with the same sign as \(\langle x_0, u\rangle\).
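A one-step calculation (my own, assuming unit-norm \(x_t\) and \(u\)) showing where this decomposition comes from: since \(\langle x_t, u\rangle = \alpha_t\langle x_0, u\rangle + \beta_t\), the update gives
$$x_t + \eta_t\langle x_t, u\rangle u = \alpha_t x_0 + \big(\beta_t + \eta_t(\alpha_t\langle x_0, u\rangle + \beta_t)\big)u,$$
and dividing by the normalizer \(c_t = \|x_t + \eta_t\langle x_t, u\rangle u\|_2 \geq 1\) (since \(c_t^2 = 1 + (2\eta_t+\eta_t^2)\langle x_t,u\rangle^2\)) yields
$$\alpha_{t+1} = \alpha_t/c_t, \qquad \beta_{t+1} = \big(\beta_t + \eta_t(\alpha_t\langle x_0, u\rangle + \beta_t)\big)/c_t,$$
so \(\alpha_t\) shrinks while \(\beta_t\) accumulates terms carrying the sign of \(\langle x_0, u\rangle\).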
Regret of the fixed strategy (measured against the maximum possible affinity):
Result: As long as \(|\langle x_0, u\rangle| > c\) and the noise is \(\sigma^2\) sub-Gaussian, $$R(T)= \sum_{t=0}^{T-1} 1 - \langle x_t, u_t \rangle \leq C_\eta(1/c^2 - 1) + \sigma^2\log T/c^2$$
Alg: Explore-then-Commit, with a corresponding bound on the regret of explore-then-commit
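A minimal sketch (my own construction; the talk does not spell out the algorithm's details here) of an explore-then-commit recommender in this model: estimate each item's affinity from noisy feedback during a short exploration phase, then commit to the item with the largest estimated affinity. Exploration length, step-size schedule, and item set are illustrative.

```python
# Sketch of explore-then-commit for the biased-assimilation model (illustrative choices).
import numpy as np

def explore_then_commit(x0, U, T, n_explore=5, sigma=0.1, seed=0):
    """Run T rounds; return cumulative regret sum_t (1 - <x_t, u_t>)."""
    rng = np.random.default_rng(seed)
    x = x0 / np.linalg.norm(x0)
    est, counts = np.zeros(len(U)), np.zeros(len(U))
    regret, t = 0.0, 0

    def step(x, u, t):
        eta = 1.0 / (1 + t)                      # decaying step size (illustrative)
        x = x + eta * (x @ u) * u                # biased-assimilation update
        return x / np.linalg.norm(x)

    # Explore: play each item n_explore times and average the noisy feedback.
    # (The preference state drifts during exploration; the sketch ignores this.)
    for k, u in enumerate(U):
        for _ in range(n_explore):
            if t >= T:
                break
            y = x @ u + sigma * rng.normal()     # noisy expressed preference
            est[k] += y
            counts[k] += 1
            regret += 1 - x @ u
            x, t = step(x, u, t), t + 1
    est /= np.maximum(counts, 1)

    # Commit: recommend the item with the largest estimated affinity.
    u = U[int(np.argmax(est))]
    while t < T:
        regret += 1 - x @ u
        x, t = step(x, u, t), t + 1
    return regret

# Example usage with random unit-norm items:
# rng = np.random.default_rng(1); d = 5
# U = rng.normal(size=(10, d)); U /= np.linalg.norm(U, axis=1, keepdims=True)
# print(explore_then_commit(rng.normal(size=d), U, T=1000))
```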
Achieving high affinity is straightforward when \(\mathcal U\) contains "opposites"
Non-manipulation (Krueger et al., 2020) is an alternative goal
$$R(T) = \sum_{t=0}^{T-1} 1 - \langle x_0, x_t \rangle $$
When \(x_0\notin \mathcal U\), use a randomized strategy that selects \(u_t\) i.i.d.
$$\mathbb E[x_{t+1}] \propto (I+\eta_t\mathbb E[uu^\top])x_t$$
Informal Result: Suppose \(x_0\) is the dominant eigenvector of \(\mathbb E[uu^\top]\) and step size \(\eta_t \) decays like \(\frac{1}{1+t}\). Then
$$R(T) \lesssim \log T $$
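A simulation sketch (my construction; the item set and tilt angle are illustrative) of this randomized strategy: the items are tilted copies of \(x_0\) chosen so that, by symmetry, \(x_0\) is the dominant eigenvector of \(\mathbb E[uu^\top]\); the step size decays like \(1/(1+t)\); and we track the manipulation regret \(\sum_t 1 - \langle x_0, x_t\rangle\). For a general item set, the randomization weights would instead come from the feasibility program described below.

```python
# Sketch: i.i.d. randomized recommendations from an item set for which x0 is the
# dominant eigenvector of E[u u^T], tracking the manipulation regret.
import numpy as np

rng = np.random.default_rng(0)
d, T, theta = 5, 2000, np.pi / 6

x0 = rng.normal(size=d); x0 /= np.linalg.norm(x0)
# Orthonormal basis whose remaining columns span the complement of x0.
Q, _ = np.linalg.qr(np.column_stack([x0, rng.normal(size=(d, d - 1))]))
V = Q[:, 1:]
# Items u = cos(theta) x0 +/- sin(theta) v_i: x0 itself is not an item, but by symmetry
# E[u u^T] = cos^2(theta) x0 x0^T + sin^2(theta)/(d-1) sum_i v_i v_i^T,
# so x0 is the dominant eigenvector whenever cos^2(theta) > sin^2(theta)/(d-1).
c, s = np.cos(theta), np.sin(theta)
items = np.array([c * x0 + sgn * s * V[:, i] for i in range(d - 1) for sgn in (+1, -1)])

x, regret = x0.copy(), 0.0
for t in range(T):
    u = items[rng.integers(len(items))]          # i.i.d. draw from the item set
    eta = 1.0 / (1 + t)                          # step size decaying like 1/(1+t)
    x = x + eta * (x @ u) * u
    x /= np.linalg.norm(x)
    regret += 1 - x0 @ x                         # manipulation regret term
# Per the informal result, this should grow only logarithmically in T.
print(f"manipulation regret after {T} rounds: {regret:.2f}")
```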
Proof sketch:
$$\langle x_0 ,x_t\rangle = \frac{x_0^\top (I+\eta_{t-1}u_{t-1}u_{t-1}^\top)\dots(I+\eta_0 u_{0}u_{0}^\top)x_0}{\|(I+\eta_{t-1}u_{t-1}u_{t-1}^\top)\dots(I+\eta_0 u_{0}u_{0}^\top) x_0\|_2}$$
Using concentration for matrix products (Huang et al., 2021),
$$1-\langle x_0 ,x_t\rangle^2 \lesssim \frac{1}{t}$$
Rather than polarization, preferences may "collapse", but this can be avoided using randomization.
Necessary to have \(x_0\in\text{span}(\mathcal U)\) for \(x_0\) to be the dominant eigenvector of \(\mathbb E[uu^\top]\)
Observation function \(F(x_0; u_{0:T}) = y_{0:T}\) where \(y_t = \langle x_t, u_t\rangle\).
Result: \(F:\mathcal S^{d-1}\to \mathbb R^T\) is locally invertible if and only if \(u_{0:T}\) span \(\mathbb R^d\).
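A numerical companion (my own check, not part of the talk) to this result: simulate the noiseless observation map, form a finite-difference Jacobian restricted to the tangent space of the sphere at \(x_0\), and report its rank; with items that span \(\mathbb R^d\), one expects the full rank \(d-1\) consistent with local invertibility.

```python
# Sketch: finite-difference check of the tangent-space rank of the observation map F.
import numpy as np

rng = np.random.default_rng(0)
d, T = 4, 10
x0 = rng.normal(size=d); x0 /= np.linalg.norm(x0)
U_seq = rng.normal(size=(T, d))
U_seq /= np.linalg.norm(U_seq, axis=1, keepdims=True)   # random items: span R^d w.h.p.
etas = 1.0 / (1 + np.arange(T))

def F(x_init):
    """Noiseless observation map x0 -> (y_0, ..., y_{T-1}) under the preference dynamics."""
    x = x_init / np.linalg.norm(x_init)
    ys = []
    for u, eta in zip(U_seq, etas):
        ys.append(x @ u)                                 # y_t = <x_t, u_t>
        x = x + eta * (x @ u) * u                        # biased-assimilation update
        x /= np.linalg.norm(x)
    return np.array(ys)

# Orthonormal basis of the tangent space of the sphere at x0.
Q, _ = np.linalg.qr(np.column_stack([x0, rng.normal(size=(d, d - 1))]))
tangent = Q[:, 1:]

# Finite-difference Jacobian of F along the d-1 tangent directions.
eps = 1e-6
J = np.column_stack([(F(x0 + eps * tangent[:, i]) - F(x0)) / eps for i in range(d - 1)])
print("tangent Jacobian rank:", np.linalg.matrix_rank(J, tol=1e-4), "; full rank would be", d - 1)
```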
Feasibility problem: find \(q\) such that \(q\geq 0\), \(U\mathrm{diag}(U^\top x_0) q = x_0\), \(I-U\mathrm{diag}(q)U^\top\succeq 0\)
Result: \(x_0\) is the dominant eigenvector of \(\mathbb E[uu^\top]\) if the randomization is proportional to \(q\).
Result: The problem is feasible if and only if \(x_0\) is in the span of \(\tilde{\mathcal U} = \{\mathrm{sign}(u^\top x_0)\cdot u\mid u\in \mathcal U\}\)
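A sketch of this feasibility program using cvxpy (my choice of solver; the talk does not prescribe one). The function name `stationary_weights` and the convention that items are the columns of `U` are mine.

```python
# Sketch: solve the feasibility program for the randomization weights q with cvxpy.
import numpy as np
import cvxpy as cp

def stationary_weights(U, x0):
    """Search for q >= 0 with U diag(U^T x0) q = x0 and I - U diag(q) U^T PSD.

    U  : (d, K) array whose columns are the unit-norm items (my convention).
    x0 : (d,) unit-norm initial preference vector.
    Returns the weight vector q if the program is feasible, else None; per the
    result above, randomizing proportionally to q makes x0 the dominant
    eigenvector of E[u u^T].
    """
    d, K = U.shape
    q = cp.Variable(K, nonneg=True)
    A = U @ np.diag(U.T @ x0)                      # columns are <u_k, x0> u_k
    M = np.eye(d) - U @ cp.diag(q) @ U.T
    constraints = [A @ q == x0, 0.5 * (M + M.T) >> 0]
    problem = cp.Problem(cp.Minimize(0), constraints)
    problem.solve()
    return q.value if problem.status in ("optimal", "optimal_inaccurate") else None
```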
Key points:
Open questions:
Does social media have the ability to manipulate us, or merely to segment and target?
Users choose to participate depending on accuracy (e.g. music recommendation)
Self-reinforcing feedback loop when learners retrain
For a single learner, this leads to representation disparity (Hashimoto et al., 2018; Zhang et al., 2019)
Sub-populations \(i\in[1,n]\) and learners \(j\in[1,m]\)
Participation evolves according to the risks \(\mathcal R_i (\theta_j)\): \(\alpha^{t+1} = \nu(\alpha^t, \Theta^t)\)
Models are retrained, "risk minimizing in the limit": \(\Theta^{t+1} = \mu(\alpha^{t+1}, \Theta^t)\)
Risks \(\mathcal R_i\) are strongly convex
Example: linear regression
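Alongside the linear-regression example, here is a minimal simulation (my own toy instantiation, using scalar mean estimation and illustrative choices of \(\nu\) and \(\mu\)) of the coupled updates: participation shifts toward lower-risk learners in a replicator-style fashion, and each learner retrains to exactly minimize its participation-weighted risk.

```python
# Sketch: coupled participation / retraining dynamics for scalar mean estimation.
#   R_i(theta) = (theta - m_i)^2  (strongly convex)
#   nu : replicator-style shift of participation toward lower-risk learners (illustrative)
#   mu : exact minimization of the participation-weighted risk ("risk minimizing in the limit")
import numpy as np

rng = np.random.default_rng(0)
n, m, T, lr = 4, 2, 200, 1.0
means = np.array([-2.0, -1.0, 1.0, 2.0])           # sub-population optima m_i
alpha = rng.dirichlet(np.ones(m), size=n)           # row i: participation of sub-pop i
theta = rng.normal(size=m)                          # learner models

def risks(theta):
    return (theta[None, :] - means[:, None]) ** 2   # R_i(theta_j), shape (n, m)

for t in range(T):
    R = risks(theta)
    # nu: participation moves toward lower-risk learners, renormalized per sub-population.
    alpha = alpha * np.exp(-lr * R)
    alpha /= alpha.sum(axis=1, keepdims=True)
    # mu: each learner minimizes its participation-weighted risk exactly.
    w = alpha / np.maximum(alpha.sum(axis=0, keepdims=True), 1e-12)
    theta = (w * means[:, None]).sum(axis=0)

print("participation:\n", np.round(alpha, 2))
print("models:", np.round(theta, 2))                # in this toy example, a split market typically emerges
```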
Definition: The total risk is \(\mathcal R(\alpha,\Theta) = \sum_{i=1}^n \sum_{j=1}^m \alpha_{ij} \mathcal R_i(\theta_j)\)
Result: An equilibrium \((\alpha^{eq}, \Theta^{eq})\) must have \(\Theta^{eq}=\arg\min_\Theta \mathcal R(\alpha^{eq},\Theta)\) and is asymptotically stable if and only if it is an isolated local minimum of \(\mathcal R(\alpha,\Theta)\)
Proof sketch: asymptotically stable equilibria correspond to the isolated local minima of \(\mathcal R(\alpha,\Theta)\)
Definition: In a split market, each sub-pop \(i\) allocates all participation to a single learner \(\gamma(i)\)
Utilitarian social welfare is inversely related to total risk
A notion of fairness is the worst-case risk over sub-pops
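A small worked example (toy numbers of my own) computing the total risk and the worst-case sub-population risk for a split-market allocation \(\gamma\).

```python
# Sketch: total risk and worst-case sub-population risk for a split-market allocation.
import numpy as np

R = np.array([[0.1, 0.9],      # R_i(theta_j): rows = sub-populations, cols = learners
              [0.2, 0.8],
              [0.7, 0.3]])
gamma = np.array([0, 0, 1])    # split market: sub-pop i sends all participation to gamma(i)
alpha = np.eye(2)[gamma]       # one-hot participation matrix

total_risk = (alpha * R).sum()            # utilitarian welfare is inversely related to this
per_group = (alpha * R).sum(axis=1)       # realized risk of each sub-population
print("total risk:", total_risk, "| worst-case sub-population risk:", per_group.max())
```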
Key points:
Open questions:
Study dynamics to what end?
How to bridge the social and the technical? (Gilbert et al., 2022)
References
Krueger, Maharaj, Leike, 2020. Hidden incentives for auto-induced distributional shift. arXiv:2009.09153.