Journal of Artificial Intelligence Research

JAIR (ISSN 1076-9757) covers all areas of artificial intelligence (AI), publishing refereed research articles, survey articles, and technical notes.

Relations Between Spatial Calculi About Directions and Orientations

Sun, 11/01/2015 - 03:27
Qualitative spatial descriptions characterize essential properties of spatial objects or configurations by relying on relative comparisons rather than on measurement. Typically, qualitative approaches make only relatively coarse distinctions between configurations. Qualitative spatial knowledge can be used to represent incomplete and underdetermined knowledge in a systematic way. This is especially useful if the task is to describe features of classes of configurations rather than individual configurations.

Although reasoning with them is generally NP-hard, relative directions are important because they play a key role in human spatial descriptions, and there are several approaches to representing them qualitatively. In these approaches, directions between spatial locations are expressed as constraints over infinite domains, e.g. the Euclidean plane. The theory of relation algebras has been successfully applied to this field. In this work, viewing relation algebras as universal algebras and applying and modifying standard tools from universal algebra, we (re)define the notions of a qualitative constraint calculus, of homomorphisms between calculi, and of quotients of calculi. Based on this method, we derive important properties of spatial calculi from corresponding properties of related calculi. From a conceptual point of view, these formal mappings between calculi are a means of translating between different granularities.
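To illustrate the kind of composition-based reasoning such calculi support, here is a minimal sketch of the algebraic-closure (path-consistency) operation, applied to the simple Point Algebra over {<, =, >}. This toy calculus is only a stand-in for the direction and orientation calculi treated in the article, and all names in the code are illustrative.

```python
# Composition-based reasoning in a tiny qualitative constraint calculus:
# the Point Algebra with base relations {<, =, >}.
from itertools import product

# comp[(a, b)] = set of relations c such that x a y and y b z permit x c z.
comp = {
    ('<', '<'): {'<'}, ('<', '='): {'<'}, ('<', '>'): {'<', '=', '>'},
    ('=', '<'): {'<'}, ('=', '='): {'='}, ('=', '>'): {'>'},
    ('>', '<'): {'<', '=', '>'}, ('>', '='): {'>'}, ('>', '>'): {'>'},
}

def compose(r1, r2):
    """Compose two sets of base relations via the composition table."""
    return set().union(*(comp[(a, b)] for a, b in product(r1, r2)))

def algebraic_closure(n, constraints):
    """Refine an n-variable network to path consistency.
    constraints: dict mapping a pair (i, j) to a set of base relations.
    Returns the refined network, or None if some constraint becomes empty."""
    converse = {'<': '>', '=': '=', '>': '<'}
    full = {'<', '=', '>'}

    def get(i, j):
        if i == j:
            return {'='}
        if (i, j) in constraints:
            return constraints[(i, j)]
        if (j, i) in constraints:
            return {converse[r] for r in constraints[(j, i)]}
        return full

    changed = True
    while changed:
        changed = False
        for i, j, k in product(range(n), repeat=3):
            if len({i, j, k}) < 3:
                continue
            refined = get(i, j) & compose(get(i, k), get(k, j))
            if refined != get(i, j):
                if not refined:
                    return None  # inconsistent network
                constraints[(i, j)] = refined
                constraints.pop((j, i), None)
                changed = True
    return constraints

# x < y and y < z force x < z:
print(algebraic_closure(3, {(0, 1): {'<'}, (1, 2): {'<'}}))
```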

Decision Making with Dynamic Uncertain Events

Sun, 11/01/2015 - 03:19
When to make a decision is a key question in decision-making problems characterized by uncertainty. In this paper we deal with decision making in environments where information arrives dynamically, and we address the tradeoff between waiting and stopping strategies. On the one hand, waiting to obtain more information reduces the uncertainty, but it comes with a cost. On the other hand, stopping and making a decision based on the current expected utility avoids the cost of waiting, but the decision is based on uncertain information. We propose an optimal algorithm and two approximation algorithms. We prove that one approximation is optimistic (it waits at least as long as the optimal algorithm), while the other is pessimistic (it stops no later than the optimal algorithm). We evaluate our algorithms theoretically and empirically and show that both approximations produce near-optimal decisions while running much faster than the optimal algorithm. The experiments also indicate that the cost function is a key factor in choosing the most effective algorithm.
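The wait-or-stop tradeoff can be made concrete by backward induction on a toy Bayesian example. This is only a sketch of the general idea, not the paper's algorithms; the observation accuracy ACC and the per-step cost COST are assumed values.

```python
# Wait vs. stop: each extra noisy observation sharpens the posterior
# over two hypotheses but costs COST; stopping yields max(p, 1 - p).
from functools import lru_cache

ACC = 0.8    # P(an observation agrees with the true hypothesis) -- assumed
COST = 0.05  # cost of each extra observation -- assumed

@lru_cache(maxsize=None)
def value(p, steps_left):
    """Expected utility of acting optimally with belief p = P(H1)."""
    stop = max(p, 1 - p)  # utility of committing to a hypothesis now
    if steps_left == 0:
        return stop
    # Predictive probability of observing evidence for H1:
    q = p * ACC + (1 - p) * (1 - ACC)
    post_yes = round(p * ACC / q, 6)             # posterior after "yes"
    post_no = round(p * (1 - ACC) / (1 - q), 6)  # posterior after "no"
    wait = (-COST + q * value(post_yes, steps_left - 1)
            + (1 - q) * value(post_no, steps_left - 1))
    return max(stop, wait)

# With a flat prior the extra evidence is worth its cost; with an
# already-confident belief it is not:
print(value(0.5, 5))   # > 0.5, so the optimal policy waits
print(value(0.95, 5))  # = 0.95, so the optimal policy stops at once
```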

Expressiveness of Two-Valued Semantics for Abstract Dialectical Frameworks

Sun, 11/01/2015 - 03:11
We analyse the expressiveness of Brewka and Woltran's abstract dialectical frameworks (ADFs) under two-valued semantics. By expressiveness we mean the ability to encode a desired set of two-valued interpretations over a given propositional vocabulary A using only atoms from A. We also compare the expressiveness of ADFs with that of (the two-valued semantics of) abstract argumentation frameworks, normal logic programs, and propositional logic. While the computational complexity of the two-valued model existence problem is (almost) the same for all these languages, we show that they form a neat hierarchy with respect to their expressiveness. We then demonstrate that this hierarchy collapses once we allow the introduction of a linear number of new vocabulary elements. Finally, we analyse and compare the representational succinctness of ADFs (under the two-valued model semantics), that is, their capability to represent sets of two-valued interpretations in a space-efficient manner.
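The two-valued model semantics can be sketched directly from its definition: an interpretation is a model iff every statement's truth value agrees with the value of its acceptance condition. The small ADF below and the brute-force enumeration are purely illustrative; note the exponential search, in line with the hardness of model existence.

```python
# Two-valued models of a toy abstract dialectical framework (ADF).
from itertools import product

# Each statement carries a Boolean acceptance condition over an
# interpretation v (a dict from statements to truth values).
statements = ['a', 'b', 'c']
acceptance = {
    'a': lambda v: not v['b'],        # a and b attack each other
    'b': lambda v: not v['a'],
    'c': lambda v: v['a'] or v['b'],  # c is supported by either
}

def two_valued_models(statements, acceptance):
    """Enumerate all two-valued models by brute force."""
    for bits in product([False, True], repeat=len(statements)):
        v = dict(zip(statements, bits))
        if all(acceptance[s](v) == v[s] for s in statements):
            yield v

for model in two_valued_models(statements, acceptance):
    print(model)
# -> {'a': False, 'b': True, 'c': True}
#    {'a': True, 'b': False, 'c': True}
```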

Leveraging Online User Feedback to Improve Statistical Machine Translation

Mon, 09/28/2015 - 00:18
In this article we present a three-step methodology for dynamically improving a statistical machine translation (SMT) system by incorporating human feedback in the form of free edits on the system translations. We target feedback provided by casual users, which is typically error-prone. Thus, we first propose a filtering step to automatically identify the better user-edited translations and discard the useless ones. A second step produces a pivot-based alignment between source and user-edited sentences, focusing on the errors made by the system. Finally, a third step produces a new translation model and combines it linearly with the one from the original system. We perform a thorough evaluation on a real-world dataset collected from the Reverso.net translation service and show that every step in our methodology contributes significantly to improving a general-purpose SMT system. Interestingly, the quality improvement is due not only to increased lexical coverage, but also to better lexical selection, reordering, and morphology. Finally, we show the robustness of the methodology by applying it to a different scenario, in which the new examples come from an automatically Web-crawled parallel corpus. Using exactly the same architecture and models again provides a significant improvement in the translation quality of a general-purpose baseline SMT system.
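The final, model-combination step can be illustrated as a linear interpolation of phrase-translation probabilities. The tables and the interpolation weight below are invented for illustration and are not taken from the article.

```python
# Linear interpolation of two phrase-translation probability tables:
# p(e|f) = LAMBDA * p_base(e|f) + (1 - LAMBDA) * p_feedback(e|f).

LAMBDA = 0.8  # weight of the original model -- would be tuned on dev data

base_model = {
    ('maison', 'house'): 0.7,
    ('maison', 'home'): 0.3,
}
feedback_model = {  # trained on the filtered user-edited translations
    ('maison', 'home'): 0.6,
    ('maison', 'household'): 0.4,
}

def interpolate(base, feedback, lam):
    """Combine two conditional probability tables entry by entry."""
    combined = {}
    for pair in set(base) | set(feedback):
        combined[pair] = (lam * base.get(pair, 0.0)
                          + (1 - lam) * feedback.get(pair, 0.0))
    return combined

for (src, tgt), p in sorted(interpolate(base_model, feedback_model, LAMBDA).items()):
    print(f"p({tgt} | {src}) = {p:.2f}")
```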

Achieving Goals Quickly Using Real-time Search: Experimental Results in Video Games

Sun, 09/27/2015 - 23:12
In real-time domains such as video games, planning happens concurrently with execution and the planning algorithm has a strictly bounded amount of time before it must return the next action for the agent to execute. We explore the use of real-time heuristic search in two benchmark domains inspired by video games. Unlike classic benchmarks such as grid pathfinding and the sliding tile puzzle, these new domains feature exogenous change and directed state space graphs. We consider the setting in which planning and acting are concurrent and we use the natural objective of minimizing goal achievement time. Using both the classic benchmarks and the new domains, we investigate several enhancements to a leading real-time search algorithm, LSS-LRTA*. We show experimentally that 1) it is better to plan after each action or to use a dynamically sized lookahead, 2) A*-based lookahead can cause undesirable actions to be selected, and 3) on-line de-biasing of the heuristic can lead to improved performance. We hope this work encourages future research on applying real-time search in dynamic domains.
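For readers unfamiliar with the real-time search setting, the sketch below implements LRTA*, the classic one-step ancestor of the LSS-LRTA* algorithm studied here: a bounded lookahead, a heuristic-learning update, and one move per planning episode. The grid, walls, and step budget are illustrative assumptions.

```python
# LRTA*-style agent on a tiny grid: look one step ahead, raise the
# current state's heuristic (the "learning" step), then move.

GRID_W, GRID_H = 5, 5
GOAL = (4, 4)
WALLS = {(2, 1), (2, 2), (2, 3)}

def neighbors(s):
    x, y = s
    for nx, ny in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
        if 0 <= nx < GRID_W and 0 <= ny < GRID_H and (nx, ny) not in WALLS:
            yield (nx, ny)

def h0(s):
    """Admissible initial heuristic: Manhattan distance to the goal."""
    return abs(s[0] - GOAL[0]) + abs(s[1] - GOAL[1])

def lrta_star(start, max_steps=200):
    h = {}  # learned heuristic values, lazily initialized from h0
    s = start
    path = [s]
    for _ in range(max_steps):
        if s == GOAL:
            return path
        # One-step lookahead: f(s') = cost(s, s') + h(s'), unit costs.
        best = min(neighbors(s), key=lambda n: 1 + h.get(n, h0(n)))
        # Learning: the current state is at least as far as the best f.
        h[s] = max(h.get(s, h0(s)), 1 + h.get(best, h0(best)))
        s = best
        path.append(s)
    return None  # step budget exhausted

print(lrta_star((0, 2)))
```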

Word vs. Class-Based Word Sense Disambiguation

Wed, 09/09/2015 - 19:14
As empirically demonstrated by the Word Sense Disambiguation (WSD) tasks of the last SensEval/SemEval exercises, assigning the appropriate meaning to words in context has resisted all attempts at a fully satisfactory solution. Many authors argue that one possible reason is the use of inappropriate sets of word meanings. In particular, WordNet has been used as a de facto standard repository of word meanings in most of these tasks. Thus, instead of using the word senses defined in WordNet, some approaches have derived semantic classes representing groups of word senses. However, the meanings represented by WordNet have been used for WSD only at a very fine-grained sense level or at a very coarse-grained semantic class level (also called SuperSenses). We suspect that an appropriate level of abstraction could lie between these two levels. The contributions of this paper are manifold. First, we propose a simple method to automatically derive semantic classes at intermediate levels of abstraction covering all nominal and verbal WordNet meanings. Second, we empirically demonstrate that our automatically derived semantic classes outperform classical approaches based on word senses and on more coarse-grained sense groupings. Third, we demonstrate that our supervised WSD system benefits from using these new semantic classes as additional semantic features while reducing the number of training examples needed. Finally, we demonstrate the robustness of our supervised semantic class-based WSD system when tested on an out-of-domain corpus.
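One simple way to obtain classes between fine-grained senses and SuperSenses is to cut the WordNet hypernym hierarchy at a fixed intermediate depth. The sketch below, using NLTK's WordNet interface, illustrates the idea only; the depth cut-off is an assumption, not the paper's actual class-derivation method.

```python
# Deriving intermediate-level semantic classes from WordNet hypernyms.
from nltk.corpus import wordnet as wn  # requires: nltk.download('wordnet')

DEPTH = 4  # intermediate abstraction level -- illustrative choice

def semantic_class(synset, depth=DEPTH):
    """Map a synset to its ancestor at the given depth (or to itself
    if it sits above that depth), taking the first hypernym path."""
    path = synset.hypernym_paths()[0]  # path from the root to the synset
    return path[min(depth, len(path) - 1)]

for word in ['dog', 'cat', 'car']:
    s = wn.synsets(word, pos=wn.NOUN)[0]
    print(f"{word}: sense={s.name()}, class={semantic_class(s).name()}")
# dog and cat typically collapse into the same intermediate class
# (living_thing.n.01), while car lands in an artifact-related class.
```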

Solving #SAT and MAXSAT by Dynamic Programming

Wed, 09/09/2015 - 19:06
We look at dynamic programming algorithms for propositional model counting (#SAT) and for MaxSAT. Tools from graph structure theory, in particular treewidth, have been used to successfully identify tractable cases in many subfields of AI, including SAT, Constraint Satisfaction Problems (CSP), Bayesian reasoning, and planning. In this paper we attack #SAT and MaxSAT using similar, but more modern, graph structure tools. The tractable cases include formulas whose class of incidence graphs has not only unbounded treewidth but also unbounded clique-width. We show that our algorithms extend all previous results for MaxSAT and #SAT achieved by dynamic programming along structural decompositions of the incidence graph of the input formula. We present some limited experimental results, comparing implementations of our algorithms to state-of-the-art #SAT and MaxSAT solvers, as a proof of concept that warrants further research.
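As a baseline illustration of the #SAT problem itself (not of the decomposition-based dynamic programming developed in the article), a naive recursive model counter can be written in a few lines:

```python
# Naive #SAT: branch on each variable, simplify, and count.
# Literals are encoded as +/- integers, clauses as lists of literals.

def simplify(clauses, lit):
    """Assign lit = True: drop satisfied clauses, shrink the rest.
    Returns None if an empty clause (a conflict) arises."""
    out = []
    for c in clauses:
        if lit in c:
            continue            # clause satisfied
        reduced = [l for l in c if l != -lit]
        if not reduced:
            return None         # conflict
        out.append(reduced)
    return out

def count_models(clauses, variables):
    """Number of assignments to `variables` satisfying all clauses."""
    if clauses is None:
        return 0
    if not clauses:
        return 2 ** len(variables)  # remaining variables are free
    v, rest = variables[0], variables[1:]
    return (count_models(simplify(clauses, v), rest)
            + count_models(simplify(clauses, -v), rest))

# (x1 or x2) and (not x1 or x3):
cnf = [[1, 2], [-1, 3]]
print(count_models(cnf, [1, 2, 3]))  # 4 of the 8 assignments satisfy it
```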

Knowledge-Based Textual Inference via Parse-Tree Transformations

Wed, 09/09/2015 - 18:53
Textual inference is an important component in many applications for understanding natural language. Classical approaches to textual inference rely on logical representations of meaning, which may be regarded as "external" to the natural language itself. However, practical applications usually adopt shallower lexical or lexical-syntactic representations, which correspond closely to language structure. In many cases, such approaches lack a principled meaning representation and inference framework. We describe an inference formalism that operates directly on language-based structures, particularly syntactic parse trees. New trees are generated by applying inference rules, which provide a unified representation for varying types of inferences. We use manual and automatic methods to generate these rules, which cover generic linguistic structures as well as specific lexical-based inferences. We also present a novel packed data structure and a corresponding inference algorithm that allow an efficient implementation of this formalism. We prove the correctness of the new algorithm and establish its efficiency analytically and empirically. The utility of our approach is illustrated on two tasks: unsupervised relation extraction from a large corpus, and the Recognizing Textual Entailment (RTE) benchmarks.
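The flavor of rule-based inference over parse trees can be sketched with a single hand-written rule that rewrites an active-voice tree into its passive counterpart. The nested-tuple representation and the rule itself are illustrative, not the article's formalism or rule set.

```python
# One tree-transformation inference rule over a toy parse tree.
# A tree is (label, child1, child2, ...) or a plain token string.
tree = ('S',
        ('NP', 'the', 'company'),
        ('VP', ('V', 'acquired'), ('NP', 'the', 'startup')))

def apply_active_to_passive(t):
    """If t matches (S subj (VP (V verb) obj)), emit the passive variant."""
    if (isinstance(t, tuple) and t[0] == 'S' and len(t) == 3
            and isinstance(t[2], tuple) and t[2][0] == 'VP' and len(t[2]) == 3
            and isinstance(t[2][1], tuple) and t[2][1][0] == 'V'
            and len(t[2][1]) == 2):
        subj, (_, (_, verb), obj) = t[1], t[2]
        return ('S', obj, ('VP', ('V', 'was', verb), ('PP', 'by', subj)))
    return None  # rule does not apply

def leaves(t):
    """Flatten a tree back into its token sequence."""
    if isinstance(t, str):
        return [t]
    return [tok for child in t[1:] for tok in leaves(child)]

entailed = apply_active_to_passive(tree)
print(' '.join(leaves(entailed)))
# -> the startup was acquired by the company
```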