This course presents optimal, adaptive, and learning control principles from the perspective of robotics applications. Starting from the Hamilton-Jacobi-Bellman (HJB) formulation, optimal control methods for aerial and ground robots are developed. Real-world challenges such as disturbances, state-estimation errors, and model errors are then considered, and adaptive and reinforcement learning approaches are derived to handle them. The course project involves simulated control of an aerial vehicle, including aerodynamic models and wind disturbances.
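For reference, the Hamilton-Jacobi-Bellman equation mentioned above can be written in a standard continuous-time form (the notation here is generic, not necessarily the course's own): for dynamics $\dot{x} = f(x, u)$, running cost $\ell(x, u)$, and terminal cost $\phi(x)$, the value function $V(x, t)$ satisfies

```latex
% Hamilton-Jacobi-Bellman equation (standard finite-horizon form)
-\frac{\partial V}{\partial t}(x, t)
  = \min_{u} \left[ \ell(x, u)
  + \nabla_x V(x, t)^{\top} f(x, u) \right],
\qquad V(x, T) = \phi(x).
```

The minimizing $u$ at each state yields the optimal feedback control law, which is the starting point for the optimal control methods developed in the course.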