The Control & Learning for Robotics and Autonomy (CLEAR) Lab develops new theoretical and algorithmic tools in control and learning theory to enable advanced applications in modern robotic and autonomous systems.
Our research cuts across several areas, including underwater robotics, legged robots, autonomous navigation and collision avoidance, dynamic manipulation, and UAV control. Although the practical contexts of these areas differ, the underlying research questions have much in common: (1) how to obtain a good dynamic model (rigid-body dynamics, system identification); (2) how to design a controller that achieves a desired dynamic performance (model predictive control, optimal control); (3) how to interact effectively with the environment and with other robots (motion planning, reinforcement learning, collision avoidance).
In addition to the above robotic applications, we are also genuinely interested in using dynamical systems and control theory to better understand optimization and learning algorithms. In fact, most optimization/learning algorithms can be equivalently viewed as dynamical systems whose state is updated iteratively according to some predefined rule. Control-theoretic concepts such as equilibrium, regulation, robustness, and stability, together with well-established tools such as Lyapunov functions, Linear Matrix Inequalities, dynamic programming, nonlinear adaptive control, and IQCs, to name a few, can provide profound insights as well as exciting new results in the analysis and design of optimization and learning algorithms.
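As a minimal illustration of this viewpoint (a hypothetical example, not code from the lab), consider gradient descent on a quadratic objective. The iteration is a discrete-time linear dynamical system whose equilibrium is the minimizer, and the classical control-theoretic stability test (spectral radius of the closed-loop map below one) is exactly the condition for the algorithm to converge:

```python
# Gradient descent on f(x) = 0.5 * x^T Q x viewed as the dynamical system
#   x_{k+1} = x_k - alpha * grad f(x_k) = (I - alpha * Q) x_k.
# The minimizer x* = 0 is an equilibrium of this system; the iterates converge
# whenever the spectral radius of A = I - alpha * Q is strictly below 1.

import numpy as np

def gradient_descent_step(x, Q, alpha):
    """One step of the system x_{k+1} = x_k - alpha * Q @ x_k."""
    return x - alpha * (Q @ x)

Q = np.array([[2.0, 0.0], [0.0, 10.0]])  # positive-definite Hessian
alpha = 0.1                              # step size (plays the role of a controller gain)

# Control-theoretic stability check: spectral radius of the closed-loop map
A = np.eye(2) - alpha * Q
rho = max(abs(np.linalg.eigvals(A)))
print(f"spectral radius = {rho:.2f}")    # 0.80 < 1, so the equilibrium is stable

x = np.array([1.0, -1.0])
for _ in range(100):
    x = gradient_descent_step(x, Q, alpha)
print(np.linalg.norm(x))                 # near zero: iterates have converged to x* = 0
```

The same lens extends beyond this toy case: step-size choices become gain tuning, convergence rates become decay rates of a Lyapunov function, and robustness of an algorithm to gradient noise becomes a disturbance-rejection question.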