• CAAMS - What else are you doing?

  • 025 Bayesian Networks

  • Breaking the Curse of Dimensionality in POMDPs

  • RL-For-Forest-Management

  • 215 POMDPs in Aerospace and Autonomy

  • 180 Offline POMDP Algorithms

  • Copy of "Optimal Trajectories"

  • Copy of Thesis

  • ADCL Summary

  • 010-Introduction

  • Parting thoughts about Programming and Programming Languages

  • Freshman Seminar

  • Short Research Intro

  • Sunberg: Safe and efficient autonomy

  • CTU Julia POMDP Hackathon

  • Unexploitable Hypersonic Weapon Defense

  • DMU Course Presentation

  • 280 The Alignment Problem

  • 270 Transfer and Meta Learning

  • 260 Imitation and Inverse Reinforcement Learning

  • 250 Dynamic Games

  • 240 Simple Games

  • 230 Bayesian Network Learning

  • 220 Bayesian Networks

  • 212 POMDP Implementation in Julia

  • ADCL Research Overview

  • 211 Online POMDP Table

  • 210 Online POMDP Methods

  • 000 Template

  • 191-Formulation-Approximation-Table

  • 190 POMDP Formulation Approximations

  • 170 Exact POMDP Solutions: Alpha Vectors

  • 160 POMDPs

  • 150 Advanced Exploration and Entropy Regularization

  • 140 DQN and Advanced Policy Gradient

  • 130 Neural Network Function Approximation

  • PhD Applicant Visit Day

  • 120 Value Based Model Free RL

  • 110-Policy-Gradient

  • MART-POMDP-Intro

  • 100 Exploration vs Exploitation (Bandits)

  • 090 Reinforcement Learning

  • 080 Continuous MDPs

  • 000-Announcements

  • 070-Online-Methods

  • 060 Value Iteration Convergence

  • 050 Policy and Value Iteration

  • 040 Markov Decision Processes

  • 200 Particle Filters