Research in the RAIL Group
How might we represent the world so that robots can behave more intelligently and reliably when they lack information about their surroundings?
As a human, you often plan with missing information without conscious thought. Perhaps you are in an unfamiliar building when a fire alarm goes off, or you are in a newly-opened supermarket equipped with only a grocery list. Despite missing key pieces of information about the world, you know what to do: you look for exit signs, or find the nearest staircase, or start walking up and down the aisles. You are often aware of what knowledge you lack or what parts of the world you have yet to see, and you take action to reduce this uncertainty. It is in part our ability to reason about the known-unknowns—information we know exists but that we lack immediate access to—that allows us to plan effectively in partially-revealed environments.
In the pursuit of more capable robots, much of our research in the Robotic Anticipatory Intelligence & Learning Group investigates how we might imbue autonomous agents with this human-like ability to reason about uncertainty and plan effectively despite missing world knowledge. Doing so often requires developing new ways of representing the world, so that we can better keep track of what we know we don't know, define actions that expand our knowledge, and predict the outcomes of those actions. Critically, we aim to build models of the world that allow us to overcome the computational challenges typically associated with planning under uncertainty.
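To make these ideas concrete, the following is a minimal toy sketch—not the group's actual method—of planning with known-unknowns. A robot keeps a partial occupancy grid, identifies frontier cells on the boundary between observed and unobserved space (the known-unknowns), and scores each candidate action by a crude proxy for how much unknown area it would reveal, discounted by travel cost. All names, the grid encoding, and the scoring rule here are illustrative assumptions.

```python
# Toy partial-map exploration: track known-unknowns as "frontiers" and pick
# the action (frontier to visit) with the best expected-gain-vs-cost score.
FREE, OCCUPIED, UNKNOWN = 0, 1, 2


def neighbors(cell, grid):
    """4-connected in-bounds neighbors of a grid cell."""
    r, c = cell
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]):
            yield (nr, nc)


def frontiers(grid):
    """Free cells adjacent to at least one unknown cell: the known-unknown boundary."""
    return [
        (r, c)
        for r, row in enumerate(grid)
        for c, v in enumerate(row)
        if v == FREE
        and any(grid[nr][nc] == UNKNOWN for nr, nc in neighbors((r, c), grid))
    ]


def expected_reveal(cell, grid):
    """Crude stand-in for predicted information gain: adjacent unknown cells."""
    return sum(grid[nr][nc] == UNKNOWN for nr, nc in neighbors(cell, grid))


def choose_frontier(grid, robot):
    """Pick the frontier with the best gain-minus-travel-cost trade-off."""
    def score(cell):
        travel = abs(cell[0] - robot[0]) + abs(cell[1] - robot[1])  # Manhattan cost
        return expected_reveal(cell, grid) - 0.5 * travel  # arbitrary trade-off weight
    cands = frontiers(grid)
    return max(cands, key=score) if cands else None
```

In a real system, the hand-coded `expected_reveal` heuristic is exactly the kind of quantity one might instead learn from data, which is where the representations and learned predictions described above come in.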
Effective decision-making often requires imagining what the future might look like and anticipating how the robot's actions may influence that future, a capability that we often aim to enable via learning.