Research Projects

Developing Principles for Effective Human Collaboration with Free-Flying Robots

Aerial robots hold great promise in supporting human activities during space missions and terrestrial operations. For example, free-flying robots may automate environmental data collection, serve as maneuverable remote monitoring platforms, and effectively explore and map new environments. Such activities will depend on seamless integration of human-robot systems. The objective of this research is to advance fundamental knowledge of human-robot interaction (HRI) principles for aerial robots by developing new methods for communicating robot status at a glance as well as robot interface technologies that support both proximal and distal operation. This work will design a signaling module that maps robot communicative goals to signaling mechanisms, which can be used in robot software architectures in a manner analogous to common robot motion planning modules. Additionally, this work will examine how robot interface design requirements differ for crew and ground control and develop techniques for scaling control systems across interaction distances.
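
As a rough illustration of the signaling module idea, the Python sketch below maps abstract communicative goals to concrete signaling mechanisms, much as a motion planning module maps navigation goals to trajectories. All class names, goals, and mechanism parameters here are hypothetical and only suggest one possible interface, not released project software.

    from enum import Enum, auto

    class CommunicativeGoal(Enum):
        """Abstract things a free-flying robot may need to convey to nearby people."""
        ACKNOWLEDGE_COMMAND = auto()
        INDICATE_LOW_BATTERY = auto()
        SIGNAL_INTENT_TO_MOVE = auto()

    class SignalingModule:
        """Maps communicative goals to signaling mechanisms (lights, sounds, motion
        cues), analogous to a motion planner mapping navigation goals to trajectories."""

        def __init__(self):
            # The mapping is a design choice; these entries are illustrative only.
            self._policy = {
                CommunicativeGoal.ACKNOWLEDGE_COMMAND: ("led", {"color": "green", "pattern": "double_blink"}),
                CommunicativeGoal.INDICATE_LOW_BATTERY: ("tone", {"frequency_hz": 880, "repeats": 3}),
                CommunicativeGoal.SIGNAL_INTENT_TO_MOVE: ("motion_cue", {"gesture": "nod_toward_goal"}),
            }

        def plan_signal(self, goal: CommunicativeGoal):
            """Return the (mechanism, parameters) pair selected for a communicative goal."""
            return self._policy[goal]

    # Example use inside a robot control loop:
    module = SignalingModule()
    mechanism, params = module.plan_signal(CommunicativeGoal.SIGNAL_INTENT_TO_MOVE)
    print(mechanism, params)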

Investigators: Daniel Szafir
Sponsor: NASA Early Career Faculty Award



Coordinated Persistent Airborne Information Gathering: Cloud Robotics in the Clouds

[Figure: DDDAS loop]

This project will investigate coordinated, energy-aware information gathering through dynamic, data-driven reasoning across four inter-related layers: the sensing layer, which focuses on information-theoretic formulations of mission tasks; the communication layer, which learns models of networked communication behavior for improved coordination in denied or stressed environments; the environment layer, in which the dynamics of the atmosphere drive aircraft performance; and the platform layer, where sensors embedded in the UAS platform provide novel flight performance data.
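
One way to picture this four-layer decomposition is as a set of cooperating components, each exposing the estimates the planner reasons over. The Python sketch below is purely illustrative; the class names and fields are assumptions, not project code.

    from dataclasses import dataclass

    @dataclass
    class SensingLayer:
        # Information-theoretic value of a candidate observation,
        # e.g. expected entropy reduction for the mission task.
        expected_information_gain: float

    @dataclass
    class CommunicationLayer:
        # Learned estimate of link quality in a denied or stressed environment.
        predicted_link_quality: float

    @dataclass
    class EnvironmentLayer:
        # Local wind estimate; atmospheric dynamics drive aircraft performance.
        wind_east_mps: float
        wind_north_mps: float

    @dataclass
    class PlatformLayer:
        # Flight-performance data from sensors embedded in the UAS platform.
        battery_fraction: float
        airspeed_mps: float

    # A planner might consume a joint snapshot of all four layers:
    snapshot = (SensingLayer(0.42), CommunicationLayer(0.7),
                EnvironmentLayer(3.0, -1.5), PlatformLayer(0.85, 18.0))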

Autonomous decision-making is enabled by reasoning with complex models that cannot be run onboard the sensing platforms. System performance relies on integrating high-fidelity offline models with reduced-order, low-fidelity models driven by dynamic data collected during mission execution. This project exploits the Dynamic Data-Driven Application Systems (DDDAS) paradigm by developing autonomous decision-making loops closed over multiple spatial and temporal scales. Global (re)planning algorithms will be integrated with receding horizon trajectory optimization and transfer learning to create a framework that can target observations while balancing solution quality against computational resources. Further, net-centric, cloud-computing software will be created that connects multiple physical systems with computational resources dispersed over wireless communication networks. While the proposed effort includes aspects of all four components of the DDDAS Basic Area Objectives, its primary advances are in Mathematical and Statistical Algorithms and in Application Measurement Systems and Methods.
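
A minimal sketch of a decision loop closed over two time scales is given below, with the slow loop standing in for cloud-hosted global (re)planning over high-fidelity models and the fast loop for onboard receding horizon optimization over a reduced-order model updated by in-flight data. All function names, rates, and the toy wind model are assumptions for illustration only.

    import random  # stands in for a real wind sensor

    def global_replan(reduced_model):
        # Slow loop (cloud): replan waypoints using high-fidelity models,
        # warm-started from the current reduced-order model.
        return ["wp1", "wp2", "wp3"]

    def receding_horizon_step(plan, reduced_model):
        # Fast loop (onboard): optimize a short-horizon trajectory toward
        # the next waypoint using the cheap reduced-order model.
        return plan[0]

    def assimilate(reduced_model, measurement):
        # Fold dynamic data collected in flight back into the reduced-order model.
        reduced_model["wind"] = 0.9 * reduced_model["wind"] + 0.1 * measurement
        return reduced_model

    reduced_model = {"wind": 0.0}
    plan = global_replan(reduced_model)
    for step in range(100):
        measurement = random.gauss(3.0, 0.5)            # simulated wind observation
        reduced_model = assimilate(reduced_model, measurement)
        command = receding_horizon_step(plan, reduced_model)
        if step % 20 == 0:                              # slow-rate global replanning
            plan = global_replan(reduced_model)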

Investigators: Eric Frew (PI), Brian Argrow
Sponsor: AFOSR Dynamic Data-Driven Application Systems Program


Leveraging Implicit Human Cues to Design Effective Behaviors for Collaborative Robots

Robots have the potential to significantly benefit society by actively collaborating with people in critical domains including manufacturing, healthcare, and space exploration. But to provide effective assistance, robots must be able to work with people in a natural, intuitive, and socially adept manner. Current human-robot collaborations require that people explicitly communicate their goals and desired responses to robotic partners. As a result, joint human-robot activities bear little resemblance to scenarios involving human-human teamwork, where people are able to understand their partner's implicit cues, such as eye gaze, facial expressions, and intonations, and intuit appropriate responses, such as moving to