When deployed, AI agents will encounter problems that are beyond their autonomous problem-solving capabilities. Leveraging human assistance can help agents overcome their inherent limitations and robustly cope with unfamiliar situations. We present a general interactive framework that enables an agent to determine and request contextually useful information from an assistant, and to incorporate rich forms of responses into its decision-making process. We demonstrate the practicality of our framework on a simulated human-assisted navigation problem. Aided with an assistance-requesting policy learned by our method, a navigation agent achieves up to a 7x improvement in success rate on tasks that take place in previously unseen environments, compared to fully autonomous behavior. We show that the agent can take advantage of different types of information depending on the context, and analyze the benefits and challenges of learning the assistance-requesting policy when the assistant can recursively decompose tasks into subtasks.
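To make the idea of an assistance-requesting policy concrete, here is a minimal, hypothetical sketch (not the paper's actual method or code) of an agent loop that, at each step, decides whether to act autonomously or ask an assistant and then fold the response into its next action. All names (`Agent`, `Assistant`, `ask_threshold`, etc.) are placeholders introduced only for illustration.

```python
# Hypothetical sketch of an assistance-requesting loop; not the paper's implementation.
import random


class Assistant:
    """Stand-in assistant that returns a hint for the current state."""

    def respond(self, state):
        # In the paper's setting the response can be rich (e.g., a subtask);
        # here it is just a placeholder string.
        return f"hint-for-{state}"


class Agent:
    """Agent with a (placeholder) assistance-requesting policy."""

    def __init__(self, ask_threshold=0.4):
        self.ask_threshold = ask_threshold  # hypothetical confidence cutoff

    def confidence(self, state):
        # Placeholder estimate of how well the agent can act alone here.
        return random.random()

    def act(self, state, hint=None):
        # Placeholder action selection; a real agent would condition on the hint.
        return "move-forward" if hint is None else f"follow({hint})"


def run_episode(agent, assistant, steps=5):
    state = "start"
    for t in range(steps):
        if agent.confidence(state) < agent.ask_threshold:
            hint = assistant.respond(state)       # request contextual help
            action = agent.act(state, hint=hint)  # incorporate the response
        else:
            action = agent.act(state)             # act autonomously
        print(f"step {t}: {action}")
        state = action  # toy state transition


if __name__ == "__main__":
    run_episode(Agent(), Assistant())
```

In the paper, the decision of when to ask is itself learned rather than fixed by a simple confidence threshold as in this toy loop; the sketch only illustrates where such a policy sits in the agent's decision-making.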