•   When: Tuesday, May 16, 2017 from 10:00 AM to 12:00 PM
  •   Location: ENGR 1602

In this work, we address the problem of understanding what may have happened in a goal-based deliberative agent's environment after the occurrence of exogenous actions and events. Such an agent periodically observes information about the state of the world, but these observations are incomplete, and the reasons for state changes are not observed. We propose methods a goal-based agent can use to construct internal, causal explanations of its observations based on a model of its environment. These explanations comprise a series of inferred actions and events that have occurred and continue to occur in its world, as well as assumptions about the initial state of the world. We show that an agent can more accurately predict future events and states by reference to these explanations, and thereby more reliably achieve its goals. This dissertation presents the following novel contributions:
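For concreteness, here is a minimal, hypothetical sketch in Python of how such an explanation might be represented: a set of initial-state assumptions plus a chronology of inferred exogenous events, which the agent can replay through its environment model to project future states. The names here (Event, Explanation, project, transition) are illustrative assumptions, not the dissertation's actual data structures.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Set


@dataclass(frozen=True)
class Event:
    """An inferred exogenous action or event and the time step it occurred."""
    name: str
    time: int


@dataclass
class Explanation:
    """A causal explanation: assumptions about the initial state of the
    world plus a chronology of inferred events accounting for what the
    agent has observed so far."""
    initial_assumptions: Set[str] = field(default_factory=set)
    inferred_events: List[Event] = field(default_factory=list)

    def project(self,
                transition: Callable[[Set[str], Event], Set[str]],
                state: Set[str],
                horizon: int) -> Set[str]:
        """Predict a future state by replaying the inferred events (up to
        `horizon`) through the agent's environment model `transition`."""
        for event in sorted(self.inferred_events, key=lambda e: e.time):
            if event.time <= horizon:
                state = transition(state, event)
        return state
```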

(1) a formalization of the problems of achieving goals, understanding what has happened, and updating an agent's model in a partially observable, dynamic world with partially known dynamics;
(2) a complete agent (DHAgent) that achieves goals in such environments more reliably than existing agents;
(3) a novel algorithm (DiscoverHistory) and technique (DiscoverHistory search) for rapidly and accurately constructing, through iterative refinement, causal explanations of what may have happened in these environments (a sketch of this iterative pattern follows the list);
(4) an examination of the formal properties of these techniques;
(5) a novel method (EML), capable of inferring improved models of an environment from a small number of training scenarios;
(6) experiments supporting the performance claims made for these novel methods; and
(7) an analysis of the efficiency of two implementations of the DiscoverHistory algorithm.
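The abstract does not detail DiscoverHistory's internals, so the following is only an illustrative sketch, reusing the Explanation structure above, of the refine-until-consistent pattern that iterative explanation construction suggests: repeatedly find an inconsistency between the current explanation and the observations, then branch on candidate repairs such as added events or initial-state assumptions. The helpers `find_inconsistency` and `candidate_fixes` are hypothetical placeholders, not the dissertation's API.

```python
from typing import Callable, Iterable, Optional


def explain(observations: object,
            find_inconsistency: Callable[[Explanation, object], Optional[object]],
            candidate_fixes: Callable[[Explanation, object], Iterable[Explanation]],
            max_steps: int = 10_000) -> Optional[Explanation]:
    """Illustrative refine-until-consistent loop (not the dissertation's
    actual DiscoverHistory algorithm). Starting from an empty explanation,
    repeatedly pick one from the frontier, locate an inconsistency with the
    observations, and branch on candidate repairs until none remain."""
    frontier = [Explanation()]
    for _ in range(max_steps):
        if not frontier:
            return None  # search exhausted without a consistent explanation
        explanation = frontier.pop()
        flaw = find_inconsistency(explanation, observations)
        if flaw is None:
            return explanation  # consistent with all observations so far
        frontier.extend(candidate_fixes(explanation, flaw))
    return None  # step budget exhausted
```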
