Policy networks are widely used by political scientists and economists to explain various financial and social phenomena, such as the development of partnerships between political entities or institutions from different levels of governance. The analysis of policy networks demands a series of arduous and time-consuming manual steps, including interviews and questionnaires. In this paper, we estimate the strength of relations between actors in policy networks using features extracted from data harvested from the web. Features include webpage counts, outlinks, and lexical information extracted from web documents or web snippets. The proposed approach is automatic and does not require any external knowledge source, other than the specification of the word forms that correspond to the political actors. The features are evaluated both in isolation and jointly, for both positive and negative (antagonistic) actor relations. The proposed algorithms are evaluated on two EU policy networks from the political science literature. Performance is measured in terms of correlation and mean square error between the human-rated and the automatically extracted relations. Correlation of up to 0.74 is achieved for positive relations. The extracted networks are validated by political scientists, and useful conclusions about the evolution of the networks over time are drawn.
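To fix ideas, one simple way to turn webpage counts into a relation-strength score between two actors is a pointwise-mutual-information-style statistic over hit counts. This is a hypothetical illustration of the general idea; the abstract's actual feature set (outlinks, lexical features from snippets) is richer and is not reproduced here.

```python
import math

def pmi_strength(count_a, count_b, count_ab, total_pages):
    """PMI-style relation score between two actors from webpage counts.

    count_a / count_b: pages mentioning each actor alone;
    count_ab: pages mentioning both; total_pages: corpus size.
    Purely illustrative -- not the paper's exact feature definition.
    """
    if count_ab == 0:
        return float("-inf")  # never co-occur: no evidence of a relation
    p_a = count_a / total_pages
    p_b = count_b / total_pages
    p_ab = count_ab / total_pages
    # positive when the actors co-occur more often than independence predicts
    return math.log(p_ab / (p_a * p_b))
```

A score near zero indicates the actors co-occur no more often than chance; larger values suggest a stronger relation.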
Rule learning algorithms such as Ripper induce univariate rules, that is, each propositional condition in a rule uses only one feature. In this paper, we propose an omnivariate induction of rules where, at each condition, both a univariate and a multivariate condition are trained and the better one is chosen according to a novel statistical test. This paper has three main contributions: First, we propose a novel statistical test, the combined 5$\times$2 cv $t$ test, for comparing two classifiers; it is a variant of the 5$\times$2 cv $t$ test, and we give its connections to other tests such as the $5\times 2$ cv $F$ test and the $k$-fold paired $t$ test. Second, we propose a multivariate version of Ripper, where a Support Vector Machine (SVM) with a linear kernel is used to find multivariate linear conditions. Third, we propose an omnivariate version of Ripper, where model selection is done via the combined 5$\times$2 cv $t$ test. Our results indicate that (1) the combined 5$\times$2 cv $t$ test has higher power (lower type II error), lower type I error, and higher replicability compared to the 5$\times$2 cv $t$ test, and (2) omnivariate rules are better in that they choose whichever condition is more accurate, selecting the right model automatically and separately for each condition in a rule.
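For concreteness, the classical 5$\times$2 cv $t$ statistic that the combined variant builds on can be sketched as follows (the combined test itself is the paper's contribution and is not reproduced here):

```python
import math

def five_by_two_cv_t(diffs):
    """Dietterich's 5x2 cv paired t statistic.

    diffs: list of 5 pairs (p_i1, p_i2), the error differences between
    the two classifiers on the two folds of each of the 5 replications.
    The statistic divides the very first difference by the pooled
    per-replication variance estimate.
    """
    variances = []
    for p1, p2 in diffs:
        mean = (p1 + p2) / 2.0
        variances.append((p1 - mean) ** 2 + (p2 - mean) ** 2)
    denom = math.sqrt(sum(variances) / 5.0)
    return diffs[0][0] / denom
```

Under the null hypothesis of equal classifier error, this statistic approximately follows a $t$ distribution with 5 degrees of freedom; the combined variant modifies how the ten differences enter the numerator.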
Query processing over uncertain data streams, in particular top-$k$ query processing, has become increasingly important due to its wide application in many fields such as sensor network monitoring and Internet traffic control. In many real applications, multiple top-$k$ queries are registered in the system. Sharing the results of these queries is a key factor in saving computation cost and providing real-time response. However, due to the complex semantics of uncertain top-$k$ query processing, it is nontrivial to implement sharing among different top-$k$ queries, and few works have addressed the sharing issue. In this paper, we formulate various types of sharing among multiple top-$k$ queries over uncertain data streams based on the frequency upper bound of each top-$k$ query. We present an optimal dynamic programming solution, as well as a more efficient (in terms of time and space complexity) greedy algorithm, to compute an execution plan for the registered queries that reduces the overall computation cost. Experiments have demonstrated that the greedy algorithm can find the optimal solution in most cases, and that it achieves almost the same performance (in terms of latency and throughput) as the dynamic programming approach.
In this research, we introduce a novel, complex associative pattern that is found to be very useful because it identifies the core associative structure in the data. We refer to it as the nested high-order pattern (NHOP). The pattern is more specific than associative patterns represented over multiple variables. It also generalizes sequential patterns, as the outcomes need not be contiguous. This paper outlines two search algorithms, r-Tree and Best-k, for its detection. The approach was then applied to the analysis of a biomolecule using the aligned sequence family of the molecule. In the SH3 protein, a model mediator of protein-protein interactions, we identify functional groups (core and binding sites) in the three-dimensional structure, as well as amino acid patterns dominating certain species.
Real-life graphs not only contain nodes and edges, but also have events taking place, e.g., product sales in social networks. Some events exhibit strong correlations with the network structure, while others do not. Such structural correlations shed light on the viral influence present in the corresponding network. Unfortunately, the traditional association mining concept is not applicable in graphs, since it only works on homogeneous datasets. We propose a novel measure for assessing structural correlations in graphs with events. The measure applies hitting time to aggregate the proximity among nodes that have the same event. In order to calculate correlation scores for many events in large networks, we develop a scalable framework, called gScore, using sampling and approximation. By comparing to the situation where events are randomly distributed in the network, our method is able to discover events that are highly correlated with the graph structure. We test gScore's effectiveness with synthetic events on the DBLP co-author network and report interesting correlation results in a social network extracted from TaoBao.com, the largest online shopping network in China. Scalability of gScore is tested on the Twitter network. We also propose a dynamic measure which can be used for discovering detailed evolutionary patterns.
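The hitting-time proximity at the heart of the measure can be estimated by Monte-Carlo random walks, which is also the spirit of the sampling-based framework. The sketch below is a simplified stand-in (truncated walks, uniform transition probabilities) and does not reproduce gScore's actual estimator:

```python
import random

def sample_hitting_time(adj, start, targets, max_steps=100, trials=200, seed=0):
    """Monte-Carlo estimate of the truncated hitting time from `start`
    to any node in `targets`, via uniform random walks on an adjacency
    dict {node: [neighbors]}. Walks are capped at max_steps."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        node, steps = start, 0
        while node not in targets and steps < max_steps:
            node = rng.choice(adj[node])  # move to a uniformly random neighbor
            steps += 1
        total += steps
    return total / trials
```

A small hitting time from a node to the set of event nodes indicates high proximity; aggregating such values over event nodes, and comparing against randomly placed events, yields a structural correlation score.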
Many real-world data mining problems contain hidden information (e.g., unobservable latent dependencies). We propose a perceptron-style method, the latent structured perceptron, for fast discriminative learning of structured classification with hidden information. We also give a theoretical analysis and demonstrate good convergence properties of the proposed method. Our method extends the perceptron algorithm to learning tasks with hidden information, which can hardly be captured by traditional models. It relies on Viterbi decoding over latent variables, combined with simple additive updates. We perform experiments on one synthetic dataset and two real-world structured classification tasks. Compared to conventional non-latent models (e.g., conditional random fields, structured perceptrons), our method is significantly more accurate on real-world tasks. Compared to existing heavy probabilistic models of latent variables (e.g., latent conditional random fields), our method lowers the training cost significantly (almost one order of magnitude faster), yet with comparable or even superior classification accuracy. In addition, experiments demonstrate that the proposed method has good scalability on large-scale problems.
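The additive update with latent maximization can be illustrated on a toy, non-sequential version of the idea. This sketch maximizes over the hidden variable by brute force instead of Viterbi decoding, so it is a conceptual illustration rather than the paper's algorithm:

```python
def latent_perceptron_update(w, x, y_gold, feats, labels, latents):
    """One additive update of a toy latent perceptron.

    feats(x, h, y) -> dict of feature values; h is a hidden variable
    maximized over on both the predicted and the gold side.
    """
    def score(h, y):
        return sum(w.get(f, 0.0) * v for f, v in feats(x, h, y).items())

    # predict: jointly maximize over hidden variable and label
    h_pred, y_pred = max(((h, y) for h in latents for y in labels),
                         key=lambda hy: score(*hy))
    # best hidden assignment consistent with the gold label
    h_gold = max(latents, key=lambda h: score(h, y_gold))
    if y_pred != y_gold:
        # simple additive update: promote gold features, demote predicted ones
        for f, v in feats(x, h_gold, y_gold).items():
            w[f] = w.get(f, 0.0) + v
        for f, v in feats(x, h_pred, y_pred).items():
            w[f] = w.get(f, 0.0) - v
    return w
```

In the structured setting, both argmax steps become Viterbi decodings over latent label sequences, which is where the method's speed advantage over probabilistic latent models comes from.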
This paper studies the long-tail problem of recommender systems: many items in the long tail have only a few ratings, making them hard to use in recommendations. The approach presented in this paper clusters items according to their popularity, so that recommendations for tail items are based on the ratings of more intensively clustered groups, while recommendations for head items are based on the ratings of individual items or of groups clustered to a lesser extent. We apply this method to two real-life datasets and compare the results with those of the non-grouping and fully grouped methods in terms of recommendation accuracy and scalability. The results show that if such adaptive clustering is done properly, this method reduces the recommendation error rates for the tail items, while maintaining reasonable computational performance.
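The head/tail split and the group-level aggregation for tail items can be sketched as follows. The popularity threshold and the mean-over-group prediction are assumptions for illustration; the paper's adaptive clustering is more refined:

```python
def split_head_tail(item_ratings, head_threshold):
    """Partition items by popularity: head items (enough ratings) are
    modeled individually, tail items are left for group-level modeling.
    item_ratings: dict item -> list of ratings."""
    head = {i: r for i, r in item_ratings.items() if len(r) >= head_threshold}
    tail = {i: r for i, r in item_ratings.items() if len(r) < head_threshold}
    return head, tail

def tail_group_predict(tail, groups):
    """Predict a rating for each tail item as the mean rating of its
    cluster (groups: item -> group id), pooling sparse ratings."""
    sums, counts = {}, {}
    for item, ratings in tail.items():
        g = groups[item]
        sums[g] = sums.get(g, 0.0) + sum(ratings)
        counts[g] = counts.get(g, 0) + len(ratings)
    return {item: sums[groups[item]] / counts[groups[item]] for item in tail}
```

Pooling ratings within a cluster gives tail items enough data for a stable estimate, at the cost of losing per-item specificity, which is the trade-off the adaptive scheme balances.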
In our daily lives, organizing resources like books or web pages into a set of categories to ease future access is a common task. The typically large size of these collections makes manual organization laborious and expensive. As an approach to effectively produce an automated classification of resources, we consider the immense amounts of annotations provided by users on social tagging systems in the form of bookmarks. In this paper, we study the use of these user-provided tags to perform a social classification of resources. For this purpose, we have created three large-scale social tagging datasets including tagging data for different types of resources: web pages and books. These resources are accompanied by categorization data from sound expert-driven taxonomies. We analyze the characteristics of the three social tagging systems, and study the usefulness of social tags for producing a social classification of resources that resembles the classification by experts as much as possible. We analyze six different tag-based representations, and compare them to other data sources using three different settings of SVM classifiers. Finally, we explore the appropriateness of combining different data sources with tags using classifier committees to best classify the resources.
In many scientific applications, it is critical to determine whether there is a relationship among a combination of objects. The strength of such an association is typically computed using some statistical measure. In order not to miss any important associations, it is not uncommon to exhaustively enumerate all possible combinations of a certain size. However, discovering significant associations among hundreds of thousands or even millions of objects is a computationally intensive job that typically takes days, if not weeks, to complete. In this paper, we propose a framework, COSAC, for such combinatorial statistical analysis of large-scale datasets over a MapReduce-based cloud computing platform. COSAC operates in two key phases: (a) in the distribution phase, a novel load balancing scheme distributes the combination enumeration tasks across the processing units; (b) in the statistical analysis phase, each unit optimizes the processing of the allocated combinations by salvaging computations that can be reused. COSAC also supports a more practical scenario where only a selected subset of objects needs to be analyzed against all the objects. We have evaluated our framework on a cluster of more than 40 nodes. The experimental results show that our framework is computationally practical, efficient, scalable, and flexible.
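The distribution phase can be illustrated with a deliberately naive scheme: enumerate all size-$k$ combinations and deal them out round-robin to the workers. COSAC's actual load balancer is more sophisticated (it also enables reuse of partial computations), so this is only a baseline sketch:

```python
from itertools import combinations

def distribute_combinations(objects, k, n_workers):
    """Round-robin assignment of all size-k combinations to n_workers
    buckets; a naive baseline for the combination-distribution phase."""
    buckets = [[] for _ in range(n_workers)]
    for idx, combo in enumerate(combinations(objects, k)):
        buckets[idx % n_workers].append(combo)
    return buckets
```

For C(n, k) combinations and w workers, each bucket receives about C(n, k)/w tasks; the drawback, which motivates a smarter scheme, is that tasks sharing a common prefix (and hence reusable partial statistics) end up scattered across workers.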
In this paper, we provide the first comprehensive analysis of the randomization method in the presence of public information, and of the effects of high dimensionality on randomization. The goal is to examine the strengths and weaknesses of randomization and to explore both the potential and the pitfalls of randomization in the presence of public information. The inclusion of public information in the theoretical analysis of the randomization method yields a number of interesting and insightful conclusions. (1) The privacy guarantees of randomization weaken rapidly with increasing dimensionality; in high dimensions, practically no privacy is possible. (2) The properties of the underlying data set can affect the anonymity level of the randomization method. For example, natural properties of real data sets such as clustering improve the effectiveness of randomization. On the other hand, variations in the data density of non-empty data localities, as well as outliers, create privacy-preservation challenges for the randomization method. (3) The presence of public information makes the choice of perturbing distribution more critical than previously thought. In particular, Gaussian perturbations are significantly more effective than uniformly distributed perturbations in the high-dimensional case. These insights are very useful for future research and for the design of the randomization method.
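The perturbation mechanism under analysis is simply additive noise on each dimension of a record. The sketch below shows the Gaussian variant that the analysis favors in high dimensions; the noise scale `sigma` is a tuning parameter of the method:

```python
import random

def randomize(record, sigma, seed=None):
    """Additive Gaussian perturbation of a numeric record: each
    dimension gets independent N(0, sigma^2) noise. The uniform
    alternative would draw noise from a uniform interval instead."""
    rng = random.Random(seed)
    return [x + rng.gauss(0.0, sigma) for x in record]
```

The privacy analysis then asks how well an adversary armed with public information can re-identify the original record from the perturbed one, which is where dimensionality and the noise distribution come into play.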
For wireless sensor networks, data aggregation schemes that reduce the amount of transmitted data are among the most practical techniques. In previous studies, homomorphic encryption has been applied to conceal communication during aggregation, such that enciphered data can be aggregated algebraically without decryption. Since aggregators collect data without decrypting them, adversaries cannot forge aggregated results by compromising aggregators. However, these schemes have shortcomings. First, they do not support multi-application environments. Second, they become insecure when some sensor nodes are compromised. Third, they do not provide secure counting and thus may suffer unauthorized aggregation attacks. Therefore, we propose a new concealed data aggregation scheme extended from Boneh et al.'s homomorphic public-key encryption system. The proposed scheme makes three contributions. First, it is designed for a multi-application environment: the base station extracts application-specific data from aggregated ciphertexts. Second, it mitigates the impact of compromising attacks in single-application environments. Third, it limits the damage from unauthorized aggregations. To demonstrate the proposed scheme's robustness and efficiency, we conclude with comprehensive analyses and comparisons.
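The core property being exploited, that ciphertexts can be summed without decryption, can be shown with a toy additively homomorphic scheme (a one-time pad modulo $N$). This is purely illustrative and has none of the security properties of Boneh et al.'s public-key system that the scheme actually builds on:

```python
N = 2 ** 32  # modulus of the toy scheme

def enc(m, key):
    """Toy additively homomorphic encryption: c = (m + key) mod N."""
    return (m + key) % N

def aggregate(ciphertexts):
    """Aggregators sum ciphertexts directly -- no decryption needed."""
    return sum(ciphertexts) % N

def dec(c, key_sum):
    """The base station, knowing the sum of the keys, recovers the
    sum of the plaintexts from the aggregated ciphertext."""
    return (c - key_sum) % N
```

The point of the illustration is that intermediate aggregators never see plaintext readings, so compromising an aggregator does not directly expose or allow forging of sensor data.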
In this work, we introduce a distributed method for detecting distance-based outliers in very large data sets. Our approach is based on the concept of an outlier detection solving set, which is a small subset of the data set that can also be employed for predicting novel outliers. The method exploits parallel computation in order to obtain vast time savings. Indeed, beyond preserving the correctness of the result, the proposed schema exhibits excellent performance. From the theoretical point of view, for common settings, our algorithm is expected to be at least three orders of magnitude faster than the classical nested-loop-like approach to detecting outliers. Experimental results show that the algorithm is efficient and that its running time scales quite well with an increasing number of nodes. We also discuss a variant of the basic strategy that reduces the amount of data to be transferred, improving both the communication cost and the overall run time. Importantly, the solving set computed by our approach in the distributed environment has the same quality as that produced by the corresponding centralized method.
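The nested-loop baseline that the distributed method is compared against can be sketched directly, here under one common definition of distance-based outlier: a point with fewer than $k$ neighbors within radius $R$ (the paper's exact definition and the solving-set machinery are not reproduced):

```python
def distance_outliers(points, radius, k):
    """Classical quadratic nested-loop detection of distance-based
    outliers: index i is an outlier if fewer than k other points lie
    within `radius` of points[i] (Euclidean distance)."""
    outliers = []
    for i, p in enumerate(points):
        neighbors = 0
        for j, q in enumerate(points):
            if i != j and sum((a - b) ** 2 for a, b in zip(p, q)) <= radius ** 2:
                neighbors += 1
                if neighbors >= k:
                    break  # early exit: already enough neighbors
        if neighbors < k:
            outliers.append(i)
    return outliers
```

The quadratic scan over all pairs is exactly what the solving-set approach avoids: only distances to the (small) solving set need to be computed for most points.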
In this paper, a production plan problem, named k-Most Demanding Products (k-MDP) discovering, is formulated. Given a set of customers demanding a certain type of product with multiple attributes, a set of existing products of that type, a set of candidate products that a company can offer, and a positive integer k, we want to help the company select k products from the candidate products such that the expected number of total customers for the k products is maximized. We show that the problem is NP-hard when the number of attributes of a product is 3 or more. A greedy algorithm is proposed to find an approximate solution to the problem. We also attempt to find the optimal solution by estimating an upper bound on the expected number of total customers for a set of k candidate products, thereby reducing the search space. An exact algorithm is then provided to find the optimal solution using this pruning strategy. The experimental results demonstrate that both the efficiency and the memory requirement of the exact algorithm are comparable to those of the greedy algorithm, and that the greedy algorithm scales well with respect to k.
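The greedy strategy can be illustrated with a coverage-style simplification: represent each candidate product by the set of customers it would attract, and repeatedly pick the product adding the most uncovered customers. This is an assumed simplification for illustration; the paper's greedy maximizes an expected customer count under competition with existing products:

```python
def greedy_k_products(candidate_customers, k):
    """Greedy selection of k products. candidate_customers maps each
    candidate product to the set of customers it would attract; at each
    step, pick the product covering the most still-uncovered customers."""
    chosen, covered = [], set()
    remaining = dict(candidate_customers)
    for _ in range(min(k, len(remaining))):
        best = max(remaining, key=lambda p: len(remaining[p] - covered))
        chosen.append(best)
        covered |= remaining.pop(best)
    return chosen, covered
```

For coverage objectives of this submodular form, such a greedy choice is a standard way to obtain good approximate solutions when the exact problem is NP-hard.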
IEEE Transactions on Knowledge and Data Engineering
Technology Supported Learning Systems have proved to be helpful in many learning situations. These systems require an appropriate representation of the knowledge to be learnt, the Domain Module. The authoring of the Domain Module is cost- and labour-intensive, but its development cost can be reduced by exploiting semi-automatic Domain Module authoring techniques and promoting knowledge reuse. DOM-Sortze is a system that uses Natural Language Processing techniques, heuristic reasoning, and ontologies for the semi-automatic construction of the Domain Module from electronic textbooks. In order to determine how it might help in the Domain Module authoring process, it has been tested on an electronic textbook, and the gathered knowledge has been compared against the Domain Module that instructional designers developed manually. This paper presents DOM-Sortze and describes the experiment carried out.
In this paper, we present a novel method for multidimensional sequence classification. We propose a novel sequence representation, based on a sequence's fuzzy distances from optimal representative signal instances, called statemes. We also propose a novel modified Clustering Discriminant Analysis algorithm that minimizes the adopted criterion with respect to both the data projection matrix and the class representation, leading to the optimal discriminant sequence class representation in a low-dimensional space. Based on this representation, simple classification algorithms, such as the nearest subclass centroid, provide high classification accuracy. A three-step iterative optimization procedure for choosing the statemes, the optimal discriminant subspace, and the optimal sequence class representation in the final decision space is proposed. The classification procedure is fast and accurate. The proposed method has been tested on a wide variety of multidimensional sequence classification problems, including handwritten character recognition, time series classification, and human activity recognition, providing very satisfactory classification results.
A large number of organizations today generate and share textual descriptions of their products, services, and actions. Such collections of textual data contain a significant amount of structured information, which remains buried in the unstructured text. While information extraction algorithms facilitate the extraction of structured relations, they are often expensive and inaccurate. We present a novel alternative approach that facilitates the generation of structured metadata by identifying documents that are likely to contain information of interest, information that will subsequently be useful for querying the database. Our approach relies on the idea that humans are more likely to add the necessary metadata during creation time if prompted by the interface, and that it is much easier for humans to identify the metadata when such information actually exists in the document, instead of naively prompting users to fill in forms with information that is not available in the document. We present algorithms that identify structured attributes that are likely to appear within the document, by jointly exploiting the content of the text and the query workload. Our experimental evaluation shows that our approach generates superior results compared to approaches that rely only on the content or only on the query workload.
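A minimal sketch of the joint idea, ranking candidate attributes by combining a content-based likelihood with the attribute's share of the query workload, is shown below. The product combination and all names here are assumptions for illustration, not the paper's actual scoring model:

```python
def rank_attributes(content_scores, workload_counts):
    """Rank structured attributes for annotation prompting.

    content_scores: attribute -> estimated probability the attribute's
    value appears in the document (from a content model).
    workload_counts: attribute -> frequency in the query workload.
    Attributes both likely present and often queried rank highest.
    """
    total = sum(workload_counts.values()) or 1
    return sorted(
        content_scores,
        key=lambda a: content_scores[a] * workload_counts.get(a, 0) / total,
        reverse=True,
    )
```

An attribute that almost certainly appears in the document but is never queried contributes nothing to future lookups, which is why the two signals are combined rather than used in isolation.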
Classification algorithms that support the decisions of human analysts are often deployed in settings in which zero-one loss is not the appropriate measure of performance. The zero-one loss corresponds to the operating point with equal costs for false alarms and missed detections, and no option for the classifier to leave uncertain test samples unlabeled. A generalization bound for ensemble classification at the standard operating point has previously been developed, based on two interpretable properties of the ensemble, strength and correlation, using the Chebyshev inequality. Such generalization bounds for other operating points had not been developed previously, and are developed in this paper. Significantly, the bounds are empirically shown to have considerable practical utility in determining optimal parameters for classification with a reject option, classification for ultra-low probability of false alarm, and classification for ultra-low probability of missed detection. Counter to the usual guideline of large strength and small correlation in the ensemble, different guidelines are recommended by the derived bounds in the ultra-low false alarm and missed detection probability regimes.
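For reference, the Chebyshev-based bound at the standard operating point, the one the paper generalizes to other operating points, is the classical strength/correlation bound from the random forests literature, which can be written as:

```latex
% Strength/correlation generalization bound at the standard operating
% point. mr(x, y) is the ensemble margin function, s = E[mr] is the
% strength, and \bar{\rho} is the mean pairwise correlation of the
% base classifiers. The first inequality is Chebyshev's inequality;
% the second bounds var(mr) using strength and correlation.
PE^{*} \;=\; P_{x,y}\bigl(mr(x,y) < 0\bigr)
       \;\le\; \frac{\operatorname{var}(mr)}{s^{2}}
       \;\le\; \frac{\bar{\rho}\,\bigl(1 - s^{2}\bigr)}{s^{2}}
```

The paper's contribution is to derive analogous Chebyshev-style bounds for operating points other than equal-cost zero-one loss, such as reject-option and ultra-low error-rate regimes.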