The COHRINT Lab brings together expertise in machine learning, sensor fusion, control and planning algorithms for autonomous mobile robot systems, with a special emphasis on aerospace applications. Our research focuses on intelligent human-robot interaction and scalable distributed robot-robot reasoning strategies for solving dynamic decision-making problems under uncertainty.

Software developed by our lab is available on GitHub.

Active and Recent Projects

Harnessing Human Perception in UAS via Bayesian Active Sensing

Sponsored by: NSF IUCRC Center for Unmanned Aerial Systems (C-UAS)

UAS operators and users can play valuable roles as “human sensors” that contribute information beyond the reach of vehicle sensors. For instance, operators in search missions can provide “soft data” to narrow down possible survivor locations using semantic natural language observations (e.g. “Nothing is around the lake”; “Something is moving towards the fence”), or provide estimates of physical quantities (e.g. masses/sizes of obstacles, distances from landmarks) to help autonomous vehicles better understand search areas and improve decision making. This research focuses on the development of intelligent operator-UAS interfaces for “active human sensing”, so that autonomous UAS can decide how and when to query operators for soft data to expedite online decision making, based on dynamic models of the world and the operator.
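The idea of deciding when to query an operator can be illustrated with a minimal value-of-information sketch (not the project's actual algorithm): given a probabilistic belief over possible target locations and a likelihood model for each candidate yes/no question, the UAS asks the question with the highest expected entropy reduction, and only if that gain outweighs an assumed query cost. The cells, questions, and likelihood values below are all hypothetical.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (nats) of a discrete distribution."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def expected_info_gain(prior, likelihood_yes):
    """Expected entropy reduction from a binary (yes/no) semantic query.

    likelihood_yes[i] = P(answer "yes" | target in cell i).
    """
    p_yes = np.sum(prior * likelihood_yes)
    p_no = 1.0 - p_yes
    post_yes = prior * likelihood_yes / p_yes
    post_no = prior * (1.0 - likelihood_yes) / p_no
    return entropy(prior) - (p_yes * entropy(post_yes) + p_no * entropy(post_no))

# Uniform prior over a toy strip of 4 search cells.
prior = np.full(4, 0.25)

# Two candidate operator queries, modeled by their "yes" likelihood per cell.
queries = {
    "Is anything near the lake (cells 0-1)?": np.array([0.9, 0.9, 0.1, 0.1]),
    "Is anything near the fence (cell 3)?":   np.array([0.1, 0.1, 0.1, 0.9]),
}

query_cost = 0.05  # nats; only interrupt the operator if the gain beats this
gains = {q: expected_info_gain(prior, lik) for q, lik in queries.items()}
best_query = max(gains, key=gains.get)
if gains[best_query] > query_cost:
    print(f"ask: {best_query} (expected gain {gains[best_query]:.3f} nats)")
```

Here the lake question splits the belief more evenly than the fence question, so it carries more expected information; in a real system the likelihoods would come from learned operator models rather than hand-set constants.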


Robust GPS-Denied Cooperative Localization Using Distributed Data Fusion

Sponsored by: U.S. Army Space and Missile Defense Command

This research will develop new decentralized data fusion (DDF) algorithms for cooperative positioning. Accurate position and navigation information is crucial to mission success for mobile elements, especially in denied and contested environments. To ensure robustness to disrupted communications or GPS/satellite reception, novel sensor fusion algorithms are needed to assure cooperative positioning, which allows elements to treat each other as beacons on a “moving map”, whose uncertain locations are mutually estimated via opportunistic absolute/relative position measurements and then shared with one another. Key technical challenges for such algorithms are to ensure scalability, statistical correctness, and awareness of potential signal interference, while also enabling flexible information integration with minimal computing and communication overhead.
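One standard building block for statistically correct decentralized fusion is covariance intersection, which fuses two estimates whose cross-correlation is unknown without becoming overconfident. The sketch below (an illustration of the general technique, not this project's specific algorithms, with made-up numbers) fuses two agents' estimates of a teammate's 2-D position by grid-searching the mixing weight that minimizes the fused covariance trace.

```python
import numpy as np

def covariance_intersection(xa, Pa, xb, Pb, n_omega=50):
    """Fuse two Gaussian estimates with unknown cross-correlation.

    Searches the weight omega in (0, 1) that minimizes the trace of the
    fused covariance; the result is guaranteed consistent (never tighter
    than the truth warrants) for any actual correlation between inputs.
    """
    Pa_inv, Pb_inv = np.linalg.inv(Pa), np.linalg.inv(Pb)
    best = None
    for omega in np.linspace(0.01, 0.99, n_omega):
        P = np.linalg.inv(omega * Pa_inv + (1 - omega) * Pb_inv)
        x = P @ (omega * Pa_inv @ xa + (1 - omega) * Pb_inv @ xb)
        if best is None or np.trace(P) < best[2]:
            best = (x, P, np.trace(P))
    return best[0], best[1]

# Two agents' estimates of a shared "moving map" beacon, each confident
# along a different axis (hypothetical values).
xa, Pa = np.array([1.0, 2.0]), np.diag([0.5, 2.0])
xb, Pb = np.array([1.2, 1.8]), np.diag([2.0, 0.5])
x_fused, P_fused = covariance_intersection(xa, Pa, xb, Pb)
```

Because each agent is confident along a different axis, the fused covariance is tighter than either input while remaining safe to re-share over the network without double-counting information.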


“Machine Self-Confidence” for Calibrating Trust in Autonomy

Sponsored by: NSF IUCRC Center for Unmanned Aerial Systems (C-UAS), and Northrop Grumman Aerospace Systems

Given the growing complexity and sophistication of increasingly autonomous systems, it is important to allow non-expert users to understand the actual capabilities of autonomous systems so that they can be tasked appropriately. In turn, this understanding must engender the operator's trust and confidence in the autonomy. Competency information could be delivered as explanations of the internal decision processes made by the autonomy, but these are often difficult for non-experts to interpret. Instead, we advocate that these insights be conveyed by a shorthand metric of the autonomy’s “self-confidence” in executing the tasks it has been assigned. Formulated correctly, this information should enable a competent user to task the autonomy with enhanced confidence, resulting in both increased system performance and reduced operator workload. Incorrectly constituted or inflated self-confidence can instead lead to inappropriate use of the autonomy, or to mistrust that leads to disuse. This project will develop specific metrics for intelligent physical system self-confidence, guided by autonomous aerospace robotics applications involving complex decision-making under uncertainty.


Scalable Cooperative Tracking of RF Ground Targets

Sponsored by: NSF IUCRC Center for Unmanned Aerial Systems (C-UAS)

This work develops a new approach to decentralized sensor fusion and trajectory optimization to enable multiple networked UAS assets to cooperatively localize moving RF signal sources on the ground in the presence of uncertainties in ownship states and sensing models. Our approach ties together model predictive planning with the recently developed idea of factorized distributed data fusion (FDDF), which allows each tracker vehicle to ignore state uncertainties for other vehicles and absorb new target state and local model information without sacrificing overall estimation performance. This approach will significantly reduce communication and computational overhead, and allow vehicles to maintain statistical consistency as well as accurately predict expected local information gains to efficiently devise receding horizon tracking trajectories, even in large ad hoc networks.
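The receding-horizon idea can be sketched in miniature: at each step, a tracker evaluates a few candidate moves against a predicted information gain and commits only to the best next move before re-planning. The toy information model below (sensor noise growing with squared range, so closer waypoints are more informative) and all parameter values are illustrative assumptions, not the project's FDDF formulation.

```python
import numpy as np

def expected_info(vehicle_xy, target_xy, sigma0=1.0, alpha=0.1):
    """Toy scalar information gain for an RF sensor whose measurement
    noise grows with squared range: closer waypoints score higher."""
    r2 = np.sum((vehicle_xy - target_xy) ** 2)
    return 1.0 / (sigma0 + alpha * r2)

def plan_receding_horizon(start, target_est, horizon=3, step=1.0):
    """Greedy receding-horizon plan: from each pose, score a small set of
    candidate moves and take the one with the highest expected info."""
    moves = step * np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])
    pose, path = np.array(start, dtype=float), []
    for _ in range(horizon):
        scores = [expected_info(pose + m, target_est) for m in moves]
        pose = pose + moves[int(np.argmax(scores))]
        path.append(pose.copy())
    return path

# Tracker starts at the origin; current target estimate sits at (4, 0).
path = plan_receding_horizon(start=[0.0, 0.0], target_est=np.array([4.0, 0.0]))
```

Even this greedy variant drives the vehicle toward the target estimate; the research version instead optimizes whole candidate trajectories over the horizon and shares predicted gains across the network.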


Learning for Coordinated Autonomous Robot-Human Teaming in Space Exploration

Sponsored by: NASA Space Technology Research Fellowship Program

In human-robot teams, the effects of interactions at a variety of distances have not been well studied. We argue that teammate interactions over multiple distances form an important part of many human-robot teaming applications. New modeling and learning approaches are needed to build an accurate and reliable understanding of actual human-robot operations at multiple time, space, and information scales, thus realizing the full potential of teaming in complex future applications for space exploration.


TALAF (Tactical Autonomy Learning Agent Framework)

Sponsored by: Air Force Research Laboratory; in partnership with Orbit Logic, Inc.

This research developed a novel software and learning architecture for optimally adapting the behaviors of autonomous agents in simulated air combat engagements. The key innovation is a Gaussian Process Bayes Optimization (GP/BO) learning engine that evaluates metrics from simulation runs while intelligently modifying tunable agent parameters to seek optimal outcomes in complex multi-dimensional trade spaces. The research targeted more effective training of pilots at lower cost, but the onboard agent-based software utilized by the training function of the architecture also has application toward both unmanned combat aircraft and onboard advising capability, which can allow pilots to be more dominant in engagements.
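The GP/BO loop can be sketched compactly: fit a Gaussian process to the simulation metrics seen so far, then pick the next parameter setting with an acquisition rule that balances predicted performance against uncertainty. Everything below is a minimal from-scratch illustration, assuming a single tunable parameter, a stand-in `simulate_engagement` objective (peak at 0.6 by construction), and an upper-confidence-bound acquisition; the project's actual engine and metrics differ.

```python
import numpy as np

def rbf(a, b, ls=0.5):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

def gp_posterior(X, y, Xs, noise=1e-4):
    """GP posterior mean and std at test points Xs given data (X, y)."""
    K_inv = np.linalg.inv(rbf(X, X) + noise * np.eye(len(X)))
    Ks = rbf(X, Xs)
    mu = Ks.T @ K_inv @ y
    var = np.diag(rbf(Xs, Xs) - Ks.T @ K_inv @ Ks)
    return mu, np.sqrt(np.maximum(var, 0.0))

def simulate_engagement(theta):
    """Stand-in for an expensive combat-sim metric of one tunable agent
    parameter theta; best performance at theta = 0.6 (hypothetical)."""
    return -(theta - 0.6) ** 2

# Bayesian optimization: refit the GP each round, then query the parameter
# maximizing the upper-confidence-bound acquisition mu + 2*sigma.
X = np.array([0.1, 0.9])                     # initial design points
y = np.array([simulate_engagement(t) for t in X])
grid = np.linspace(0.0, 1.0, 101)            # candidate parameter values
for _ in range(10):
    mu, sd = gp_posterior(X, y, grid)
    theta_next = grid[np.argmax(mu + 2.0 * sd)]
    X = np.append(X, theta_next)
    y = np.append(y, simulate_engagement(theta_next))
best_theta = X[np.argmax(y)]
```

The acquisition rule is what lets the loop spend few expensive simulation runs: it probes uncertain regions early, then concentrates evaluations near the emerging optimum.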


Event-Triggered Cooperative Localization

Collaborators: University of California San Diego and SPAWAR

This research focuses on a novel cooperative localization algorithm for a team of robotic agents to estimate the state of the network via local communications. Exploiting an event-based paradigm, agents only send measurements to their neighbors when the expected benefit of employing that information is high. Because agents know the event-triggering condition for measurements to be sent, the lack of a measurement is also informative and is fused into state estimates. The benefit of this implicit messaging approach is that it can reproduce nearly optimal localization results while using significantly less power and bandwidth for communication.
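A minimal scalar sketch of the idea (illustrative only; the numbers and the uniform approximation below are assumptions, not the paper's derivation): the sender transmits only when its measurement's innovation exceeds a threshold, and the receiver treats silence as an implicit measurement confined to that threshold band, here moment-matched as a uniform pseudo-measurement.

```python
def should_send(z, z_pred, delta):
    """Event trigger: transmit only when the innovation is surprising."""
    return abs(z - z_pred) > delta

def fuse(x, P, z, R):
    """Scalar Kalman measurement update."""
    K = P / (P + R)
    return x + K * (z - x), (1 - K) * P

x, P = 0.0, 1.0     # receiver's estimate of a neighbor's position, variance
R = 0.1             # explicit measurement noise variance
delta = 0.5         # trigger threshold

for z in [0.1, 0.2, 1.4]:           # sender's measurements over time
    if should_send(z, z_pred=x, delta=delta):
        x, P = fuse(x, P, z, R)     # explicit update with the sent value
    else:
        # Silence => the sender's z lies within +/-delta of the prediction.
        # Approximate that band as a uniform pseudo-measurement centered at
        # the prediction with variance delta**2 / 3.
        x, P = fuse(x, P, x, delta**2 / 3)
```

Note that the first two measurements are never transmitted, yet the receiver's variance still shrinks from the implicit updates; only the surprising third measurement costs bandwidth.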


Audio Localization and Perception for Robotic Search

Sponsored by: CU Boulder CEAS Discovery Learning Apprenticeship Program

This undergraduate research project looks at techniques for incorporating sound-based perception and localization into autonomous mobile robotic search problems in human environments. Although nearly all robotic exteroception algorithms rely heavily on active/passive vision (e.g. lidar, cameras), most autonomous robots today are effectively “deaf”, i.e. they cannot incorporate ambient sound information from their environments into higher-level reasoning and decision making. Unlike active sonar, perception based on ambient sound is passive in nature and must be able to handle a wide spectrum of stimuli. The goal of this project is to develop software and hardware capabilities for enabling an autonomous mobile robot to augment other sensory sources (onboard visual target detection, human inputs, etc.) with onboard audio detection and localization for a multi-target search and tracking task in a cluttered indoor environment.
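A common starting point for passive audio localization is time-difference-of-arrival (TDOA) between two microphones, estimated by cross-correlation. The sketch below is a generic illustration under simplifying assumptions (a far-field broadband source, an integer-sample delay, and made-up sample rate and mic spacing), not the project's deliverable.

```python
import numpy as np

def estimate_delay(sig_a, sig_b):
    """Estimate the delay (in samples) of sig_b relative to sig_a by
    peak-picking the full cross-correlation."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    return int(np.argmax(corr)) - (len(sig_a) - 1)

fs = 16000.0        # sample rate [Hz] (assumed)
c = 343.0           # speed of sound [m/s]
d = 0.2             # microphone spacing [m] (assumed)

# Simulate a broadband ambient source arriving 5 samples later at mic B.
rng = np.random.default_rng(0)
src = rng.standard_normal(2048)
true_delay = 5
mic_a = src
mic_b = np.roll(src, true_delay)

# Convert the estimated delay to a far-field bearing: sin(theta) = tau*c/d.
tau = estimate_delay(mic_a, mic_b) / fs
bearing = np.degrees(np.arcsin(np.clip(tau * c / d, -1.0, 1.0)))
```

In practice this would be extended with sub-sample interpolation (e.g. GCC-PHAT weighting) and fused with the robot's other sensing modalities rather than used alone.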