Assured Autonomy in Multiagent Systems: Theory and Applications in Robotics
In this talk, I will present our recent work toward a generic, composable framework that enables a multiagent system (of robots or vehicles) to safely carry out time-critical missions in a distributed and fully autonomous fashion. The goal is to provide formal guarantees on both safety and finite-time mission completion, and thus to answer the question: “How trustworthy is the autonomy of a multi-robot system in a complex mission?” We refer to this notion of autonomy in multiagent systems as assured or trusted autonomy, a highly sought-after area of research thanks to its many applications, in autonomous driving for instance. An important aspect of our work is its special emphasis on fast, i.e., computationally tractable, solutions to this otherwise very challenging planning problem.
In the first part of the talk, using tools from control theory (optimal control), formal methods (temporal logic and hybrid automata), and optimization (mixed-integer programming), I will describe two variants of (almost) real-time planning algorithms that provide formal guarantees on safety and finite-time mission completion for a multiagent system in a complex mission. Our proposed framework is hybrid, distributed, and inherently composable, as it uses a divide-and-conquer approach that breaks a complex mission down into several sub-tasks. This approach enables us to implement the resulting algorithms on robots with limited computational power while still achieving close to real-time performance. We have validated the efficacy of our method on several use cases; I will discuss two examples during my talk: autonomous search and rescue with a team of UAVs, and planning UAV-based inspection tasks in an industrial environment.
In the second part, I will briefly describe how we can translate and adapt these algorithms to safely learn actions and policies for robots in dynamic environments, so that they can accomplish their mission even in the presence of uncertainty. I will introduce the ideas of self-monitoring and self-correction for agents using hybrid automata theory. Self-monitoring and self-correction refer to the problems in which autonomous agents monitor their own performance, detect deviations from normal or expected behavior, and learn to adjust both the description of their mission/task and their actions online, so as to maintain the expected behavior and performance. In this setting, we propose a formal and composable notion of safety in learning for autonomous multiagent systems, which we refer to as safe learning.
To round out my talk, I will briefly go over some of the other projects in the general areas of robotics and control that I have been involved in, during my time in industry, my master's at KAUST, and my undergraduate studies at PIEAS.
Usman A. Fiaz received his bachelor's (BS) and master's (MS) degrees in Electrical Engineering from the Pakistan Institute of Engineering and Applied Sciences (PIEAS) and King Abdullah University of Science and Technology (KAUST) in 2015 and 2017, respectively. He obtained his doctoral degree (PhD), also in Electrical Engineering, from the University of Maryland, College Park (UMD) in 2022, with a specialization in Robotics, Control, and Learning. Currently, he is a Postdoctoral Fellow in Autonomy and Cyber-Physical Systems (CPS) at the National Institute of Standards and Technology (NIST), USA. He also holds an affiliate appointment with the Department of Electrical and Computer Engineering (ECE) and the Institute for Systems Research (ISR) at UMD.
Dr. Fiaz’s research interests lie in modern areas of control theory, robotics, and machine learning, with special emphasis on autonomous multiagent systems. More specifically, he is interested in designing theory, algorithms, and physical implementations for achieving assured autonomy in multiagent systems, such as teams of autonomous robots and vehicles (across all terrains and space), that provide simultaneous assurances, for example on safety and finite-time completion, during various complex tasks. In addition to his contributions to academia and research, Dr. Fiaz has an extensive history of collaboration with world-renowned industrial research labs. He has held visiting research positions at Intel (2021), ABB (2020), Nokia Bell Labs (2019), Mitsubishi Electric Research Labs (2018), and CERN (2014).
Dr. Fiaz is a member of the IEEE, the IEEE Robotics and Automation Society, and the IEEE Control Systems Society. He is a recipient of the Ann G. Wylie Dissertation Fellowship (2022), the Michael J. Pelczar Award for Graduate Excellence (2021), the Future Faculty Fellowship (2021), and the Outstanding Graduate Assistant Award (2018) from the University of Maryland; the Outstanding Achievement in Robotic Orchestration Award from Nokia Bell Labs (2019); the IFAC Young Author Award (Finalist) at the IFAC Mechatronics Symposium (2019); a second runner-up finish and a Bronze Medal at the MBZIRC Robotics Challenge (2017); and the President's Gold Medal for achieving the highest distinction during his BS at the Pakistan Institute of Engineering and Applied Sciences (2015).