AAAI 2022 Fall Symposium: Lessons Learned for Autonomous Assessment of Machine Abilities (LLAAMA)

Proceedings can be found here: https://arxiv.org/abs/2301.05384

Important information

AAAI Symposium main page: https://www.aaai.org/Symposia/Fall/fss22.php

Dates: November 17–19, 2022

The Westin Arlington Gateway, Arlington, VA

Room: Hemingway 2

Description

Modern civilian and military systems have created a demand for sophisticated intelligent autonomous machines capable of operating in uncertain, dynamic environments. Such systems are realizable thanks in large part to major advances in perception and decision-making techniques, which in turn have been propelled forward by modern machine learning tools. However, these newer forms of intelligent autonomy raise questions about when and how an autonomous agent's communication of its operational intent, and its assessment of actual versus assumed capabilities, affect overall performance.

This symposium session will examine possibilities for enabling intelligent autonomous systems to self-assess and communicate their ability to effectively execute assigned tasks, as well as to reason about the overall limits of their competencies and maintain operability within those limits. The symposium will bring together researchers working in this burgeoning area to share lessons learned, identify the major theoretical and practical challenges encountered so far, and discuss potential avenues for future research and real-world applications.

Topics

We invite contributions from researchers in AI/expert systems, human factors, autonomous robotics and control/complex systems engineering, and other related disciplines that explore several key areas, including:

  • Applications and studies of competency self-assessments in field robotics and other real-world autonomous systems
  • Measures of competency for operational self-assessment by autonomous agents
  • AI/ML, uncertainty quantification, formal methods, and algorithmic meta-reasoning techniques to enable/support autonomous competency self-assessment     
  • Presentation and communication of machine-generated competency self-assessments to human users/stakeholders
  • Techniques for evaluating the quality of machine-generated competency self-assessments (e.g., correctness, completeness, fidelity, reliability)

This symposium will feature invited talks and contributed paper presentations by leading researchers and technical experts, as well as panel discussions and group breakout sessions focusing on the implementation of competency assessment in real autonomous systems.

Submissions

Deadline: September 9, 2022 (extended from August 26, 2022)

Please submit one of the following via the AAAI FSS-22 EasyChair site as a .pdf document.

  • Regular papers (6-8 pages + references)
  • Position papers (2-4 pages + references)
  • Summaries of previously published papers (1-2 pages)

Following review and feedback from the symposium’s technical committee, authors of selected papers will be invited to give a short presentation on their contributions at the symposium.

Please use the AAAI paper templates available here. Submissions do not need to be anonymized and will follow a single-blind review process.

Confirmed Speakers

George Hellstern works for Lockheed Martin Aeronautics. He has more than 25 years of experience in systems design, including AI solutions for air-to-air combat and sustainment. He serves as the program manager for autonomy and AI, unmanned air systems command and control, and human performance. His background spans operational, programmatic, and technical work with Air Mobility Command, the Office of the Secretary of Defense, and Lockheed Martin Skunk Works.

He has led and advocated for innovation within the DoD, pushing for new and disruptive concepts in Human Machine Collaboration, Artificial Intelligence (AI), and Autonomy. His research under AFRL’s FA-XX program resulted in concepts making their way into Air Force, Navy, and DARPA publications on human-machine teaming, autonomy, and AI strategies. His collaborative warfare approaches have enabled Skunk Works technology transitions and delivered key capabilities to the warfighter.  

As part of the Open Mission Systems team at Skunk Works, he’s effectively communicated core concepts to NAVAIR, ONR, AFRL, and DARPA, shaping programs like Distributed Battle Management (DBM), System of Systems Integration Technology and Evaluation (SoSITE), Communications in Contested Environments (C2E), Air Combat Evolution (ACE), and LongShot. Technologies he’s transitioned include Speech Recognition, AI-Based Prognostic Health Management, Sensor Resource Management, and Human Performance Sensing.

Jacob Crandall is Professor of Computer Science at Brigham Young University, where he directs the Laboratory for Interactive Machines. He received his B.S., M.S., and Ph.D. in Computer Science from Brigham Young University. His research interests lie at the intersection of human-machine cooperation, robotics, machine learning, and game theory.

Shlomo Zilberstein is Professor of Computer Science and Associate Dean for Research and Engagement in the Manning College of Information and Computer Sciences at the University of Massachusetts, Amherst. He also directs the Resource-Bounded Reasoning Lab. He received a B.A. in Computer Science summa cum laude from the Technion – Israel Institute of Technology, and a Ph.D. in Computer Science from the University of California, Berkeley.  

Zilberstein’s research focuses on the foundations and applications of resource-bounded reasoning techniques, which allow complex systems to make decisions while coping with uncertainty, missing information, and limited computational resources. His research interests include decision theory, reasoning under uncertainty, Markov decision processes, design of autonomous agents, heuristic search, real-time problem solving, principles of meta-reasoning, planning and scheduling, multi-agent systems, automated coordination and communication, information gathering, and reinforcement learning.

Amy Pritchett is Department Head of Aerospace Engineering at Penn State. She received her SB, SM, and Sc.D. from the Massachusetts Institute of Technology. Her research interests lie at the intersection of technology, humans, and safety in dynamic, time-critical, and safety-critical environments, including human-robot interaction in space exploration, human-autonomy teaming in aviation, novel flight deck designs, and manual control.

Dr. Jiangying Zhou joined Raytheon BBN as an associate director for the Analytics & Machine Intelligence (AMI) group in May 2022. Prior to joining BBN, Dr. Zhou was a DARPA program manager (2018-2022). Her areas of research include machine learning, artificial intelligence, unconventional computing, sensing, and dynamic systems modeling.

Prior to joining DARPA, Zhou was a senior engineering manager with Teledyne Scientific and Imaging, LLC. During her tenure at Teledyne, Zhou worked on contract R&D programs for U.S. government funding agencies as well as commercial customers in the areas of sensor exploitation, signal and image processing, and pattern recognition. Prior to Teledyne, Zhou served as director of R&D of Summus Inc., a small start-up company specializing in the areas of video and image compression, pattern recognition, and computer vision. Zhou began her career as a scientist at Panasonic Technologies, Inc., Princeton, New Jersey, where she conducted research in the areas of document analysis, handwriting recognition, image analysis, and information retrieval.

Dr. Zhou received a Bachelor of Science and a Master of Science, both in computer science, from Fudan University. She received a doctorate in electrical engineering from the State University of New York at Stony Brook.

Zhou is a member of the Institute of Electrical and Electronics Engineers (IEEE) and of Upsilon Pi Epsilon, the international honor society for the computing and information disciplines.

Program

The schedule below is tentative and subject to change. All times are in EST.

Day 1 (November 17)
09:00 – 09:20 Welcome, overview, opening remarks
Session 1: Look Inside Yourself: Competency Assessment for Real-World Autonomous Systems
09:20 – 10:30 Invited speaker: George Hellstern
10:30 – 11:00 Coffee break
11:00 – 12:30 Paper presentations (30 minutes each):

Reliable Neural Network Controllers for Autonomous Agents in Partially Observable Environments
Nils Jansen, Steven Carr and Ufuk Topcu

Symmetry Detection in Trajectory Data for More Meaningful Reinforcement Learning Representations
Marissa D’Alonzo and Rebecca Russell

Safe Online and Offline Reinforcement Learning
Thiago D. Simão
12:30 – 14:00 Lunch break
Session 2: Measure My Machine: Metrics and Measures for Operational Competency Self-Assessment
14:00 – 15:00 Invited speaker: Jacob Crandall
15:00 – 15:30 Paper presentation (30 minutes):

Learning Temporal Logic Properties: an Overview of Two Recent Methods
Jean-Raphaël Gaglione, Rajarshi Roy, Nasim Baharisangari, Daniel Neider, Zhe Xu and Ufuk Topcu
15:30 – 16:00 Coffee break
16:00 – 16:30 Paper presentation (30 minutes):

Autonomous Assessment of Demonstration Sufficiency via Bayesian Inverse Reinforcement Learning
Tu Trinh and Daniel Brown
16:30 – 17:30 Mini group breakout activity
18:00 – 19:00 Reception
Day 2 (November 18)
Session 3: Cooking with Code: Algorithms and Models for Competency Self-Assessment
09:00 – 09:20 Group discussion and day overview
09:20 – 10:20 Invited speaker: Shlomo Zilberstein
10:30 – 11:00 Coffee break
11:00 – 12:30 Paper presentations (30 minutes each):

Global and Local Analysis of Interestingness for Competency-Aware Deep Reinforcement Learning
Pedro Sequeira, Jesse Hostetler and Melinda Gervasio

Measuring Competency of Machine Learning Systems and Enforcing Reliability
Michael Planer and Jen Sierchio

Will My Robot Achieve My Goals? Predicting the Probability that an MDP Policy Reaches a User-Specified Behavior Target
Alexander Guyer and Thomas Dietterich
12:30 – 14:00 Lunch break
Session 4: Talk to Me: Communication of Competency Self-Assessments
14:00 – 15:00 Invited speaker: Amy Pritchett
15:00 – 15:30 Paper presentation (30 minutes):

Targets in Reinforcement Learning to solve Stackelberg Security Games
Saptarashmi Bandyopadhyay, Chenqi Zhu, Philip Daniel, Joshua Morrison, Ethan Shay and John Dickerson
15:30 – 16:00 Coffee break
16:00 – 16:30 Paper presentation (30 minutes):

Learning and Understanding a Disentangled Feature Representation for Hidden Parameters in Reinforcement Learning
Christopher Reale and Rebecca Russell
16:30 – 17:30 Panel discussion
18:00 – 19:30 Plenary session
Day 3 (November 19)
Session 5: To Thine Own Self Be True: Evaluating Competency Self-Assessment Quality and Future Directions
09:00 – 12:00 Combined session with Thinking Fast and Slow and Other Cognitive Theories in AI

Organizing Committee

Aastha Acharya, Nicholas Conlon, Nisar Ahmed (University of Colorado Boulder);

Rebecca Russell, Michael Crystal (Draper);

Brett Israelsen (Raytheon Technologies Research Center);

Ufuk Topcu (UT Austin);

Zhe Xu (Arizona State University);

Daniel Szafir (UNC).