Prof. Sarah Dean
MW 2:45-4pm
110 Hollister Hall
0. Announcements & Recap
1. Q Function Approximation
2. Optimization & Gradient Descent
3. Stochastic Gradient Descent
4. Derivative-Free Optimization
HW2 released next Monday
5789 Paper Review Assignment (weekly pace suggested)
OH cancelled today, held instead Thursday 10:30-11:30am
Learning Theory Mentorship Workshop
Application due March 10: https://let-all.com/alt22.html
Prelim Tuesday 3/22 at 7:30-9pm in Phillips 101
Closed-book, definition/equation sheet for reference will be provided
Focus: mainly Unit 1 (known models), but many lectures in Unit 2 revisit key concepts
Study Materials: Lecture Notes 1-15, HW0&1
Lecture on 3/21 will be a review
Meta-Algorithm for Policy Iteration in Unknown MDP
Supervision with Rollout (MC):
\(y_i = \sum_{t \ge 0} \gamma^t r_t\), the discounted return of a rollout from \((s_i, a_i)\), so \(\mathbb{E}[y_i] = Q^\pi(s_i, a_i)\)
\(\widehat Q\) via ERM on \(\{(s_i, a_i, y_i)\}_{i=1}^N\)
Rollout: \(s_t,\ a_t \sim \pi(s_t),\ r_t \sim r(s_t, a_t),\ s_{t+1} \sim P(s_t, a_t),\ a_{t+1} \sim \pi(s_{t+1}),\ \dots\)
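A minimal sketch of the Monte Carlo step, assuming a Gymnasium-style env with reset()/step(), plus a hypothetical feature map featurize(s, a) and policy pi; \(\widehat Q\) is fit here by least squares, one possible ERM choice:
```python
import numpy as np

def mc_q_dataset(env, pi, featurize, gamma=0.99, n_rollouts=100, horizon=200):
    """Label each visited (s, a) with its observed discounted return."""
    X, y = [], []
    for _ in range(n_rollouts):
        s, _ = env.reset()
        traj = []  # (features, reward) pairs along the rollout
        for _ in range(horizon):
            a = pi(s)
            s_next, r, terminated, truncated, _ = env.step(a)
            traj.append((featurize(s, a), r))
            s = s_next
            if terminated or truncated:
                break
        G = 0.0  # discounted return, accumulated backwards over the rollout
        for phi, r in reversed(traj):
            G = r + gamma * G
            X.append(phi)
            y.append(G)  # E[y] = Q^pi(s, a) for the visited pair
    return np.array(X), np.array(y)

def fit_q(X, y):
    """ERM with squared loss over linear functions: Q_hat(s, a) = w @ featurize(s, a)."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w
```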
Supervision with Bellman Exp (TD):
With \(y_t = r_t + \gamma \widehat Q(s_{t+1}, a_{t+1})\): if \(\widehat Q = Q^\pi\) then \(\mathbb{E}[y_t] = Q^\pi(s_t, a_t)\)
One step: \(s_t,\ a_t \sim \pi(s_t),\ r_t \sim r(s_t, a_t),\ s_{t+1} \sim P(s_t, a_t),\ a_{t+1} \sim \pi(s_{t+1})\)
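The corresponding one-step label, sketched with the same assumed featurize and a current weight vector w (all names illustrative):
```python
def td_target(w, featurize, transition, gamma=0.99):
    """Bellman-expectation label from one transition (s, a, r, s', a')."""
    s, a, r, s_next, a_next = transition
    # Bootstrap with the action the policy actually took at s'.
    return r + gamma * w @ featurize(s_next, a_next)
```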
Supervision with Bellman Opt (TD):
With \(y_t = r_t + \gamma \max_{a'} \widehat Q(s_{t+1}, a')\): if \(\widehat Q = Q^*\) then \(\mathbb{E}[y_t] = Q^*(s_t, a_t)\)
SARSA and Q-learning are simple tabular algorithms using the Bellman expectation and Bellman optimality targets, respectively (sketched below)
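A minimal tabular sketch of both updates; the step size alpha and the array layout Q[s, a] are standard choices, not from the slides:
```python
import numpy as np

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.99):
    """Bellman-expectation target: bootstrap with the action the policy took at s'."""
    y = r + gamma * Q[s_next, a_next]
    Q[s, a] += alpha * (y - Q[s, a])

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """Bellman-optimality target: bootstrap with the greedy action at s'."""
    y = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (y - Q[s, a])
```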