Plenary & Invited Speakers
Antonios Tsourdos, Plenary speaker
Cranfield University, UK
“Aerial Robotic Operations – Advances and Challenges”
Hadas Kress-Gazit, Plenary speaker
Cornell University, USA
“Synthesizing and Guaranteeing Robot Behaviors”
Abstract: In this talk I will describe how formal methods such as synthesis – automatically creating a system from a formal specification – can be leveraged to design robots, explain and provide guarantees for their behavior, and even identify skills they might be missing. I will discuss the benefits and challenges of synthesis techniques and will give examples of different robotic systems including modular robots, swarms and robots interacting with people.
Biography: Hadas Kress-Gazit is the Geoffrey S.M. Hedrick Sr. Professor at the Sibley School of Mechanical and Aerospace Engineering at Cornell University. She received her Ph.D. in Electrical and Systems Engineering from the University of Pennsylvania in 2008 and has been at Cornell since 2009. Her research focuses on formal methods for robotics and automation, and more specifically on synthesis for robotics – automatically creating verifiable robot controllers for complex high-level tasks. Her group explores different types of robotic systems, including modular robots, soft robots, and swarms, and synthesizes (pun intended) ideas from different communities such as robotics, formal methods, control, hybrid systems, and computational linguistics. She is an IEEE Fellow and has received multiple awards for her research, teaching, and advocacy for groups traditionally underrepresented in STEM. She lives in Ithaca with her partner and two kids.
Ig-Jae Kim, Plenary speaker
Korea Institute of Science and Technology (KIST), Korea
“How will AI-Robots help a super-aged society?”
Abstract: Although the overall population is declining, the proportion of elderly people is rapidly increasing. According to the United Nations, a society is classified as aging when the share of the population aged 65 and over reaches 7%, as aged when it reaches 14%, and as super-aged when it exceeds 20%. Korea moved from an aging society to an aged society in 2017 and is expected to become a super-aged society by 2025. Amid growing social interest in responding to this demographic shift, researchers in artificial intelligence and robotics are emphasizing the need for companion robots in the coming super-aged society. To support the independent living of elderly people in single-person households, which are expected to be the main source of demand for companion robots, technology is needed that can provide for everyone's basic needs, such as a safe and convenient environment and the continuation of social relationships. To this end, it is necessary to provide customized services that reflect the user's health, physical condition, living environment, tastes, and habits. In addition, providing human-friendly and proactive services requires physical support, accumulated information, and analysis of the user's condition and living environment. KIST is currently devoting considerable effort to solving such social problems, and I would like to introduce how we are developing AI robot technology to respond to the challenges of an aging society.
Biography: Dr. Ig-Jae Kim is Director-General of the Artificial Intelligence and Robotics Institute at the Korea Institute of Science and Technology (KIST). In addition to leading the institute, he conducts research on how machines detect and recognize people, understand human behavior for robot-human interaction, and analyze the surrounding environment. He received his Ph.D. in Electrical and Computer Engineering from Seoul National University and worked as a postdoctoral fellow at the MIT Media Lab. He then worked as a senior and later principal researcher at KIST and served as director of the Center for Imaging Media Research. He also currently serves as a scientific advisor to several major national institutions, including the National Police Agency and the Presidential Security Service in Korea.
Luca Carlone, Invited speaker
“From SLAM to Real-time 3D Scene Understanding for Robotics”
Abstract: Spatial perception algorithms and systems have witnessed unprecedented progress in the last decade. Robots are now able to detect objects and create large-scale maps of an unknown environment, which are crucial capabilities for navigation and manipulation. Despite these advances, both researchers and practitioners are well aware of the brittleness of current perception systems, and a large gap still separates robot and human perception. This talk discusses two efforts targeted at bridging this gap. The first effort targets high-level scene understanding. While humans are able to quickly grasp the geometric, semantic, and physical aspects of a scene, high-level scene understanding remains a challenge for robotics. I present our work on real-time metric-semantic understanding and 3D Dynamic Scene Graphs. I introduce the first generation of Spatial Perception Engines, which extend the traditional notions of mapping and SLAM and allow a robot to build a “mental model” of the environment, including spatial concepts (e.g., humans, objects, rooms, buildings) and their relations at multiple levels of abstraction. The second effort focuses on robustness. I present recent advances in the design of certifiable perception algorithms that are robust to extreme amounts of noise and outliers and afford performance guarantees. I present fast certifiable algorithms for object pose estimation: our algorithms are “hard to break” (e.g., they are robust to 99% outliers) and succeed in localizing objects where an average human would fail. Moreover, they come with a “contract” that guarantees their input-output performance.
Biography: Luca Carlone is the Leonardo Career Development Associate Professor in the Department of Aeronautics and Astronautics at the Massachusetts Institute of Technology, and a Principal Investigator in the Laboratory for Information & Decision Systems (LIDS). He received his PhD from the Polytechnic University of Turin in 2012. He joined LIDS as a postdoctoral associate (2015) and later as a Research Scientist (2016), after spending two years as a postdoctoral fellow at the Georgia Institute of Technology (2013-2015). His research interests include nonlinear estimation, numerical and distributed optimization, and probabilistic inference, applied to sensing, perception, and decision-making in single- and multi-robot systems. His work includes seminal results on certifiably correct algorithms for localization and mapping, as well as approaches for visual-inertial navigation and distributed mapping. He is a recipient of the Best Paper Award in Robot Vision at ICRA 2020, a 2020 Honorable Mention from the IEEE Robotics and Automation Letters, a Track Best Paper Award at the 2021 IEEE Aerospace Conference, the 2017 Transactions on Robotics King-Sun Fu Memorial Best Paper Award, the Best Paper Award at WAFR 2016, and the Best Student Paper Award at the 2018 Symposium on VLSI Circuits, and he was a Best Paper finalist at RSS 2015 and RSS 2021. He is also a recipient of the NSF CAREER Award (2021), the RSS Early Career Award (2020), the Google Daydream (2019) and Amazon (2020) Research Awards, and the MIT AeroAstro Vickie Kerrebrock Faculty Award (2020). At MIT, he teaches “Robotics: Science and Systems,” the introduction to robotics for MIT undergraduates, and he created the graduate-level course “Visual Navigation for Autonomous Vehicles,” which covers mathematical foundations and fast C++ implementations of spatial perception algorithms for drones and autonomous vehicles.
Dongheui Lee, Invited speaker
Technische Universität München, Germany
“How to Design a Robot Which can Learn Complex Tasks?”
Abstract: The robotics research community has shown increased interest in robot skill learning over the past decade. Robot learning from successful human demonstrations provides an efficient way to acquire new skills, reducing the time and cost of programming the robot. However, techniques for robot learning from demonstrations are often limited to learning simple movement primitives. In this talk, I will review some of the background, motivations, and state of the art in robot learning from demonstrations toward complex task learning. I will introduce recent progress from our lab on bridging low-level skill learning and task knowledge.
Biography: Dongheui Lee is an Associate Professor at the Department of Electrical and Computer Engineering, Technical University of Munich (TUM). She also leads the Human-centered Assistive Robotics group at the German Aerospace Center (DLR). Her research interests include human motion understanding, human-robot interaction, machine learning in robotics, and assistive robotics. She obtained her B.S. and M.S. degrees in mechanical engineering at Kyung Hee University, Korea, and a PhD degree from the Department of Mechano-Informatics, University of Tokyo, Japan, in 2007. She was a research scientist at the Korea Institute of Science and Technology (KIST) and a Project Assistant Professor at the University of Tokyo (2007-2009) before joining TUM as a professor. She was awarded a Carl von Linde Fellowship at the TUM Institute for Advanced Study (2011) and a Helmholtz Professorship Prize (2015).
Brendan Englot, Invited speaker
Stevens Institute of Technology, USA
“Improving the Situational Awareness of Underwater Robots in Cluttered Environments”
Abstract: This talk considers sonar-equipped underwater robots operating under significant localization uncertainty that are tasked with exploring and mapping cluttered environments. Several recent advances in perception and navigation in this setting will be discussed. First, a novel active perception framework will be described that leverages virtual maps to guide an underwater robot’s planning and decision-making as it explores unknown environments under localization uncertainty. Second, recent work will be discussed that produces accurate, high-definition 3D maps of cluttered underwater environments using wide-aperture multi-beam imaging sonar, an underwater sensing technology that permits long-range, wide-area coverage in turbid water. We deploy the sonars in an orthogonally oriented stereo pair to eliminate ambiguity and perform dense 3D reconstructions of underwater structures. The resulting maps of underwater structures are further enhanced with the aid of object classification and probabilistic inference. Third and finally, ongoing efforts will be discussed to learn the relationship between the above-surface appearance of maritime environments in satellite imagery and their below-surface appearance in sonar imagery.
Biography: Dr. Brendan Englot is an Associate Professor of Mechanical Engineering at Stevens Institute of Technology in Hoboken, New Jersey, USA, where he has been a member of the faculty since 2014. Brendan is the founder and director of the Robust Field Autonomy Lab, which develops robust autonomous navigation solutions for mobile robots operating in harsh and unstructured environments. He received S.B., S.M. and Ph.D. degrees in Mechanical Engineering from the Massachusetts Institute of Technology in 2007, 2009 and 2012, respectively. At MIT, he studied motion planning for surveillance and inspection applications, deploying his algorithms on an underwater robot to inspect Navy and Coast Guard ships. During 2012-2014, Brendan was with United Technologies Research Center in East Hartford, Connecticut, USA, where he was a Research Scientist and Principal Investigator in the Autonomous and Intelligent Robotics Laboratory and a technical contributor to the Sikorsky Autonomous Research Aircraft. Brendan received a National Science Foundation CAREER Award in 2017, an Office of Naval Research Young Investigator Award in 2020, and in 2018 he was appointed the Geoffrey S. Inman Endowed Junior Professor of Mechanical Engineering at Stevens Institute of Technology.