Research in the RAIL Group

How might we represent the world so that robots can behave more intelligently and reliably when they lack information about their surroundings?

As a human, you often plan with missing information without conscious thought. Perhaps you are in an unfamiliar building when a fire alarm goes off, or you are in a newly-opened supermarket equipped with only a grocery list. Despite missing key pieces of information about the world, you know what to do: you look for exit signs, or find the nearest staircase, or start walking up and down the aisles. You are often aware of what knowledge you lack or what part of the world you have yet to see, and you take action to reduce this uncertainty. It is in part our ability to reason about the known-unknowns—information we know exists but that we lack immediate access to—that allows us to plan effectively in partially-revealed environments.

In the pursuit of more capable robots, much of our research in the Robotic Anticipatory Intelligence & Learning Group investigates how we might imbue autonomous agents with this human-like ability to reason about uncertainty and plan effectively despite missing world knowledge. Doing so often requires developing new ways of representing the world, so that we can better keep track of what we know we don't know, define actions that expand our knowledge, and predict the outcomes of those actions. Critically, we aim to build models of the world that allow us to overcome the computational challenges typically associated with planning under uncertainty.

Effective decision-making often requires imagining what the future might look like and anticipating how the robot's actions may influence that future, a capability that we often aim to enable via learning.

I am seeking motivated graduate students with interests at the intersection of Robotics and Machine Learning to join my group. Learn more about our PhD program through the CS Department website or reach out to me on Twitter or by email.

Planning in Partially Revealed Environments

So far, my work has focused on the tasks of navigation and exploration: problem settings in which humans have incredibly powerful heuristics yet robots have historically struggled to perform as well. More generally, I am interested in ways we can imbue a robot with the ability to predict what lies beyond what its sensors can see, so that it can make more informed decisions when planning while parts of its environment remain unobserved.

Our algorithm uses learning to estimate the likelihood of possible outcomes upon trying an abstract action. The simulated robot can predict that most offices and classrooms are dead ends, and knows to avoid most rooms on its way to the goal.
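
The sketch below illustrates the kind of expected-cost reasoning such a planner performs over frontier "subgoals". The Subgoal fields, their values, and the simplified cost model (which charges travel from the robot's start rather than between consecutive frontiers) are illustrative assumptions for this sketch, not the exact formulation from our papers.

```python
from dataclasses import dataclass
from itertools import permutations


@dataclass
class Subgoal:
    """A frontier between known and unknown space, annotated with learned
    estimates. All fields and values here are illustrative placeholders."""
    name: str
    dist_from_robot: float   # travel cost from the robot to this frontier
    prob_feasible: float     # estimated P(goal is reachable beyond this frontier)
    success_cost: float      # expected cost beyond the frontier if it pans out
    exploration_cost: float  # expected cost wasted if it turns out to be a dead end


def expected_cost(ordering):
    """Expected cost of trying frontiers in the given order. Simplification:
    travel to each frontier is charged from the robot's starting location."""
    cost, prob_still_searching = 0.0, 1.0
    for s in ordering:
        cost += prob_still_searching * (
            s.dist_from_robot
            + s.prob_feasible * s.success_cost
            + (1 - s.prob_feasible) * s.exploration_cost
        )
        prob_still_searching *= 1 - s.prob_feasible
    return cost


# Toy scenario: the office is closer, but the learned estimator says it is
# almost certainly a dead end, so the planner heads for the hallway first.
office = Subgoal("office", dist_from_robot=2.0, prob_feasible=0.05,
                 success_cost=5.0, exploration_cost=8.0)
hallway = Subgoal("hallway", dist_from_robot=6.0, prob_feasible=0.90,
                  success_cost=10.0, exploration_cost=20.0)

best_order = min(permutations([office, hallway]), key=expected_cost)
print([s.name for s in best_order])  # ['hallway', 'office']
```

Even in this toy setting, the learned feasibility estimates are what let the planner avoid rooms that are probably dead ends rather than greedily visiting the nearest frontier.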

Relevant Publications

  • Christopher Bradley, Adam Pacheck, Gregory J. Stein, Sebastian Castro, Hadas Kress-Gazit, and Nicholas Roy. "Learning and Planning for Temporally Extended Tasks in Unknown Environments". In: International Conference on Robotics and Automation (ICRA). 2021. paper.
  • Gregory J. Stein, Christopher Bradley, Victoria Preston, and Nicholas Roy. "Enabling Topological Planning with Monocular Vision". In: International Conference on Robotics and Automation (ICRA). 2020. paper, talk (10 min).
  • Gregory J. Stein, Christopher Bradley, and Nicholas Roy. "Learning over Subgoals for Efficient Navigation of Structured, Unknown Environments". In: Conference on Robot Learning (CoRL). 2018. paper, talk (14 min).

    Best Paper Finalist at CoRL 2018; Best Oral Presentation at CoRL 2018.

Related Blog Posts

  • DeepMind's AlphaZero and The Real World
    Using DeepMind's AlphaZero AI to solve real problems will require a change in the way computers represent and think about the world. In this post, we discuss how abstract models of the world can be used for better AI decision making and describe recent work of ours that proposes such a model for the task of navigation.

Mapping for Planning (from Monocular Vision)

In our Planning under Uncertainty work, high-level (topological) strategies for navigation meaningfully reduce the space of possible actions available to a robot, allowing heuristic priors or learning to be used for computationally efficient, intelligent planning. Many existing techniques for building a map of the environment from monocular vision struggle to estimate structure in low-texture or highly cluttered environments, which has precluded their use for topological planning in the past. In our research, we propose a robust, sparse map representation built from monocular vision that overcomes these shortcomings. Using a learned sensor, we estimate the high-level structure of an environment from streaming images by detecting sparse vertices (e.g., the boundaries of walls) and reasoning about the structure between them. We also estimate the known free space in our map, a necessary feature for planning through previously unknown environments. Our mapping technique works with real data and is sufficient for planning and exploration in simulated multi-agent search and Learned Subgoal Planning applications.
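
As a rough illustration, the sketch below shows one way such a sparse map might be organized in code: vertices detected from images, wall segments between them, and a polygon of known free space. The class, its fields, and the naive data association are assumptions made for this illustration, not the exact representation described in the paper.

```python
from dataclasses import dataclass, field


@dataclass
class SparseMap:
    """Illustrative sparse map: detected vertices (e.g., wall corners), wall
    segments between them, and a polygon of known free space. The fields and
    the naive update below are assumptions for this sketch."""
    vertices: list = field(default_factory=list)    # [(x, y), ...]
    walls: set = field(default_factory=set)         # {(i, j), ...} vertex-index pairs
    free_space: list = field(default_factory=list)  # polygon of observed free space

    def add_observation(self, new_vertices, new_walls, observed_free_polygon):
        """Fuse one frame's detections into the map. Association is by simple
        proximity; a real system would track vertices across frames."""
        index_of = {}
        for v in new_vertices:
            match = next((i for i, u in enumerate(self.vertices)
                          if (u[0] - v[0]) ** 2 + (u[1] - v[1]) ** 2 < 0.25), None)
            if match is None:
                self.vertices.append(v)
                match = len(self.vertices) - 1
            index_of[v] = match
        for a, b in new_walls:
            self.walls.add(tuple(sorted((index_of[a], index_of[b]))))
        # Placeholder: a real map would merge free-space polygons over time.
        self.free_space = observed_free_polygon


# One observation: two wall corners, the wall between them, and the free
# space the robot has seen so far.
m = SparseMap()
m.add_observation(
    new_vertices=[(0.0, 0.0), (4.0, 0.0)],
    new_walls=[((0.0, 0.0), (4.0, 0.0))],
    observed_free_polygon=[(0.0, 0.0), (4.0, 0.0), (4.0, 3.0), (0.0, 3.0)],
)
print(len(m.vertices), m.walls)  # 2 {(0, 1)}
```

Because the map stores only a handful of vertices and wall segments rather than a dense grid, it stays compact enough to support the topological planning described above.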

Relevant Publications

  • Gregory J. Stein, Christopher Bradley, Victoria Preston, and Nicholas Roy. "Enabling Topological Planning with Monocular Vision". In: International Conference on Robotics and Automation (ICRA). 2020. paper, talk (10 min).