Sencun Zhu, Shouhuai Xu, Sanjeev Setia, and Sushil Jajodia
ISE-TR-02-09
Most ad hoc networks do not implement any network access control, leaving them vulnerable to resource consumption attacks in which a malicious node injects packets into the network with the goal of depleting the resources of the nodes that relay them. To thwart such attacks, it is necessary to employ authentication mechanisms that ensure that only authorized nodes can inject traffic into the network. In this paper, we present LHAP, a scalable and lightweight authentication protocol for ad hoc networks. LHAP is based on two techniques: (i) hop-by-hop authentication for verifying the authenticity of every packet transmitted in the network, and (ii) one-way key chains and TESLA for packet authentication and for reducing the overhead of establishing trust among nodes. We analyze the security properties of LHAP, and our performance analysis shows that it is a very lightweight security protocol.
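As background for the key-chain technique named above, the following sketch illustrates how a one-way key chain is generated and how a disclosed key is verified against a committed anchor, in the style used by TESLA (a minimal illustration in Python; the function names are ours and this is not the LHAP protocol itself):

    import hashlib

    def h(key: bytes) -> bytes:
        """One-way function; SHA-256 here."""
        return hashlib.sha256(key).digest()

    def generate_chain(seed: bytes, n: int) -> list[bytes]:
        """Build a chain by hashing the seed n times; element 0 is the
        public anchor K_0 = h^n(seed), element n is the secret seed K_n."""
        chain = [seed]
        for _ in range(n):
            chain.append(h(chain[-1]))
        chain.reverse()
        return chain

    def verify(disclosed: bytes, anchor: bytes, max_steps: int) -> bool:
        """A disclosed key is authentic if some number of hash applications
        (at most max_steps) maps it back to the committed anchor."""
        k = disclosed
        for _ in range(max_steps):
            if k == anchor:
                return True
            k = h(k)
        return k == anchor

    chain = generate_chain(b"secret-seed", 100)
    assert verify(chain[37], chain[0], 100)   # keys are disclosed in order K_1, K_2, ...

Verification needs only hashing, which is why schemes built on such chains stay lightweight.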
Ye Wu and Jeff Offutt
ISE-TR-02-08
The Internet is quietly becoming the body of the business world, with web applications as the brains. This means that software faults in web applications have potentially disastrous consequences. Most work on web applications has focused on making them more powerful, but relatively little has been done to ensure their quality. Important quality attributes for web applications include reliability, availability, interoperability, and security. Web applications share some characteristics of client-server, distributed, and traditional programs; however, they also have a number of novel aspects: they are ``dynamic'', owing to frequent changes in application requirements as well as dramatic changes in web technologies; the roles of clients and servers change dynamically; their hardware and software components are heterogeneous; their integration is extremely loosely coupled and dynamic; and users can directly affect the control of execution. In this paper, we first analyze the distinctive features of web-based applications and then define a generic analysis model that characterizes their typical behaviors independently of particular technologies. Based on this analysis, we discuss a family of testing techniques that ensure different levels of quality control of web applications under various situations.
Ye Wu and Jeff Offutt
ISE-TR-02-07
Component-based software engineering has been increasingly adopted for software development. This approach relies on reusable components as the building blocks for constructing software. On the one hand, this helps improve software quality and productivity; on the other hand, it necessitates frequent maintenance activities, such as upgrading third-party components or adding new features. The cost of maintenance for conventional software can account for as much as two-thirds of the total cost, and it is likely to be higher for component-based software. Thus, an effective maintenance technique for component-based software is strongly desired. This paper presents a UML-based technique that helps resolve the difficulties introduced by the implementation-transparent nature of component-based software systems, and it can also be useful for other maintenance activities. For corrective maintenance activities, the technique starts with UML diagrams that represent changes to a component and uses them to support regression testing. Perfective maintenance activities pose additional challenges; for these, we provide a UML-based framework to evaluate the similarity of old and new components, along with corresponding retesting strategies.
Sencun Zhu, Sanjeev Setia, and Sushil Jajodia
ISE-TR-02-06
Scalable group rekeying is one of the biggest challenges that must be addressed to support secure communications for large and dynamic groups. In recent years, many group key management approaches based on logical key trees have been proposed to address this issue. Using logical key trees reduces the complexity of a group rekeying operation from O(N) to O(log N), where N is the group size. In this paper, we present two optimizations to logical key tree organization that use information about the characteristics of group members to further reduce the overhead of group rekeying. First, we propose a partitioned key tree organization that exploits the temporal patterns of group member joins and departures. Using an analytic model, we show that this optimization can reduce key server bandwidth overhead by up to 31.4% over the unoptimized scheme. Second, we propose an approach in which the key tree is organized based on the loss probabilities of group members. Our analysis shows that this optimization can reduce the rekeying overhead by up to 12.1%.
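To see why a logical key tree brings the rekeying cost down from O(N) to O(log N), the sketch below counts the encryptions a key server performs when one member leaves a complete binary key tree (a textbook LKH-style count, not the paper's optimized organizations):

    import math

    def lkh_leave_cost(n_members: int) -> int:
        """Encryptions to rekey after one departure in a complete binary
        key tree. Every key on the departing member's root path (depth of
        them) is replaced: the lowest new key is encrypted for the sibling
        leaf only, and each higher new key is encrypted under its two
        children's keys, giving 1 + 2*(depth - 1) encryptions."""
        depth = math.ceil(math.log2(n_members))
        return 2 * depth - 1

    def naive_leave_cost(n_members: int) -> int:
        """Without a tree, the new group key is sent to each remaining member."""
        return n_members - 1

    for n in (64, 1024, 65536):
        print(n, naive_leave_cost(n), lkh_leave_cost(n))   # e.g. 1024: 1023 vs 19

The optimizations in the paper then shrink this logarithmic cost further by arranging members within the tree according to their join/leave patterns and loss probabilities.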
Sanjeev Setia, Sencun Zhu, and Sushil Jajodia
ISE-TR-02-05
In this paper, we present a new scalable and reliable key distribution protocol for group key management schemes that use logical key hierarchies for scalable group rekeying. Our protocol, called WKA-BKR, is based upon two ideas -- weighted key assignment and batched key retransmission -- both of which exploit the special properties of logical key hierarchies and the group rekey transport payload to reduce the bandwidth overhead of the reliable key delivery protocol. Using both analytic modeling and simulation, we compare the performance of WKA-BKR with that of other rekey transport protocols, including a recently proposed protocol based on proactive FEC. Our results show that for most network loss scenarios, the bandwidth used by WKA-BKR is lower than that of the other protocols.
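As a rough illustration of the weighted key assignment idea: keys near the root of the hierarchy are needed by many members, so sending extra copies of them proactively is cheap relative to the retransmissions they would otherwise trigger. A minimal sketch, assuming an illustrative weighting in which replication grows with the number of dependent members (the formula is ours, not the paper's):

    import math

    def replication_weight(n_dependents: int, loss_rate: float,
                           target: float = 0.99) -> int:
        """Send a key enough times that at least one copy arrives with
        probability `target` under independent loss, then add extra copies
        for keys with many dependents so few members need to NACK."""
        base = math.ceil(math.log(1 - target) / math.log(loss_rate))
        # illustrative: one extra copy per factor-of-ten growth in dependents
        return base + int(math.log10(max(n_dependents, 1)))

    # a key covering 512 members is replicated more than a leaf-level key
    print(replication_weight(512, loss_rate=0.05),
          replication_weight(2, loss_rate=0.05))   # 4 2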
Lingyu Wang, Duminda Wijesekera, and Sushil Jajodia
ISE-TR-02-04
This paper deals with inference problems in data warehouses and decision support systems such as on-line analytical processing (OLAP) systems. Even though OLAP systems restrict user accesses to predefined aggregations, the possibility of inappropriate disclosure of sensitive attribute values still exists. Based on a definition of non-compromiseability, under which any member of a set of variables satisfying a given set of their aggregates can take more than one value, we derive sufficient conditions for non-compromiseability in sum-only data cubes. Specifically, (1) the non-compromiseability of multi-dimensional aggregates can be reduced to that of one-dimensional aggregates, (2) full or dense core cuboids are non-compromiseable, and (3) there is a tight lower bound on the cardinality of a core cuboid for it to remain non-compromiseable. Based on these conditions and a three-tiered model for controlling inferences, we provide a divide-and-conquer algorithm that uniformly divides data sets into chunks and builds a data cube on each chunk. The union of those data cubes is then used to provide users with inference-free OLAP queries.
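Under the definition above, compromise is a linear-algebraic condition: a cell of a sum-only data cube is compromised exactly when the disclosed aggregates pin it to a single value, i.e., when its unit vector lies in the row space of the aggregation matrix. A minimal sketch over the reals (ignoring constraints such as integrality or non-negativity):

    import numpy as np

    def compromised_cells(A: np.ndarray) -> list[int]:
        """Indices of variables uniquely determined by the aggregate
        equations A x = b: x_i is pinned down iff appending the unit
        vector e_i does not raise the rank of A."""
        r = np.linalg.matrix_rank(A)
        n = A.shape[1]
        return [i for i in range(n)
                if np.linalg.matrix_rank(np.vstack([A, np.eye(n)[i]])) == r]

    # 2x2 core cuboid (x0 x1 / x2 x3) with all row and column sums disclosed:
    A = np.array([[1, 1, 0, 0],    # row sums
                  [0, 0, 1, 1],
                  [1, 0, 1, 0],    # column sums
                  [0, 1, 0, 1]], dtype=float)
    print(compromised_cells(A))    # [] -- rank 3, no single cell is pinned down

    # but if cell x3 is known to be empty (a sparse core cuboid), the extra
    # equation x3 = 0 compromises every remaining cell:
    A_sparse = np.vstack([A, np.eye(4)[3]])
    print(compromised_cells(A_sparse))   # [0, 1, 2, 3]

The two cases mirror the abstract's conditions: the full cuboid is non-compromiseable, while sparsity opens the door to compromise.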
Jacob Berlin and Amihai Motro
ISE-TR-02-03
Recently, the problem of data integration has been addressed by methods based on machine learning and discovery. Such methods are intended to automate, at least in part, the laborious process of information integration, by which existing data sources are incorporated into a virtual database. Essentially, these methods scan new data sources, attempting to discover possible mappings to the virtual database. Like all discovery processes, this process is intrinsically probabilistic: each discovery is associated with a value that denotes the assurance of its appropriateness. Consequently, the rows in a discovered virtual table have mixed assurance levels, with some rows being more credible than others. In this paper we argue that rows in discovered virtual databases should be ranked, and we describe a ranking method, called TupleRank, for calculating such a ranking order. Roughly speaking, TupleRank calibrates the probabilities calculated during a discovery process with historical information about the performance of the system. The work is done in the framework of the Autoplex system for discovering content for virtual databases, and initial experimentation is reported and discussed.
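The abstract does not give TupleRank's computation; as a hedged sketch of the general idea of calibrating raw discovery scores against the historically observed precision at similar score levels, one might write:

    from bisect import bisect_right

    def calibrate(score: float, history: list[tuple[float, float]]) -> float:
        """Map a raw discovery score to the fraction of past discoveries
        at that score level that proved correct (hypothetical calibration;
        the paper's actual TupleRank computation may differ).
        `history` is a sorted list of (score_threshold, observed_precision)."""
        thresholds = [t for t, _ in history]
        idx = max(bisect_right(thresholds, score) - 1, 0)
        return history[idx][1]

    def rank_rows(rows: list[tuple[object, float]],
                  history: list[tuple[float, float]]) -> list[object]:
        """Order virtual-table rows by calibrated assurance, best first."""
        return [row for row, _ in
                sorted(rows, key=lambda r: calibrate(r[1], history), reverse=True)]

    history = [(0.0, 0.10), (0.5, 0.60), (0.8, 0.95)]
    rows = [("row-a", 0.55), ("row-b", 0.92), ("row-c", 0.30)]
    print(rank_rows(rows, history))   # ['row-b', 'row-a', 'row-c']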
Sanjeev Setia, Sencun Zhu, and Sushil Jajodia
ISE-TR-02-02
Scalable group rekeying is one of the important problems that needs to be addressed in order to support secure communications for large and dynamic groups. One of the challenging issues that arises in scalable group rekeying is the problem of delivering the updated keys to the members of the group in a reliable and timely manner. In this paper, we present a new scalable and reliable key distribution protocol for group key management schemes that use logical key hierarchies for scalable group rekeying. Our protocol, called WKA-BKR, is based upon two key ideas, weighted key assignment and batched key retransmission, both of which exploit the special properties of logical key hierarchies to reduce the overhead and increase the reliability of the key delivery protocol. We have evaluated the performance of our approach using detailed simulations. Our results show that for most network loss scenarios, the bandwidth used by our protocol is lower than that of previously proposed key delivery protocols.
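Our reading of the batched key retransmission idea, sketched below under the assumption that the server pools the keys NACKed within a round and resends each distinct key once, however many members reported losing it (the NACK mechanism and encoding here are illustrative):

    def batch_retransmission(nacks: dict[str, set[str]]) -> set[str]:
        """Pool per-member NACKs from one round into a single batch:
        each distinct key is retransmitted once, so overlapping losses
        among members do not inflate the retransmission traffic."""
        batch: set[str] = set()
        for member, lost_keys in nacks.items():
            batch |= lost_keys
        return batch

    # three members lost overlapping keys; the batch resends 3 keys, not 5
    nacks = {"m1": {"k_root", "k_left"},
             "m2": {"k_root", "k_right"},
             "m3": {"k_root"}}
    print(sorted(batch_retransmission(nacks)))   # ['k_left', 'k_right', 'k_root']

Keys high in the hierarchy are exactly the ones many members share, which is why deduplicating across receivers pays off in a logical key tree.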
Claudio Bettini, Sushil Jajodia, X. Sean Wang, and D. Wijesekera
ISE-TR-02-01
Policies are widely used in many different systems and applications. Recently, it has been recognized that a ``yes/no'' response to every scenario is simply not enough for many modern systems and applications. Many policies require certain conditions to be satisfied and actions to be performed before or after a decision is made in accordance with the policy. To address this need, this paper introduces the notions of provisions and obligations. Provisions are those conditions that need to be satisfied or actions that must be performed before a decision is rendered, while obligations are those conditions or actions that must be fulfilled by either the users or the system after the decision. This paper formalizes a rule-based policy framework that includes provisions and obligations, and investigates a reasoning mechanism within this framework. A policy decision may be supported by more than one derivation, each associated with a potentially different set of provisions and obligations (called a global PO set). The reasoning mechanism can derive all the global PO sets for a specific policy decision and facilitates the selection of the best one, based on numerical weights assigned to provisions and obligations as well as on semantic relationships among them. The paper also shows the use of the proposed policy framework in a security application and discusses, through an example, how the system may compensate for unfulfilled obligations.
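As a minimal model of selecting among global PO sets, the sketch below represents each derivation supporting a decision by its provisions and obligations and picks the one with the smallest total weight (the encoding and weights are illustrative; the paper's semantic relationships among provisions and obligations are not modeled here):

    from dataclasses import dataclass

    @dataclass
    class Derivation:
        """One way to support a policy decision: its global PO set."""
        provisions: frozenset[str]    # must be satisfied before the decision
        obligations: frozenset[str]   # must be fulfilled after the decision

    def best_po_set(derivations: list[Derivation],
                    weight: dict[str, float]) -> Derivation:
        """Pick the global PO set with the smallest total weight; weights
        encode how burdensome each provision or obligation is."""
        def cost(d: Derivation) -> float:
            return sum(weight[p] for p in d.provisions | d.obligations)
        return min(derivations, key=cost)

    weight = {"sign_nda": 1.0, "pay_fee": 3.0, "log_access": 0.2, "notify_owner": 0.5}
    derivations = [
        Derivation(frozenset({"sign_nda"}), frozenset({"log_access"})),
        Derivation(frozenset({"pay_fee"}), frozenset({"notify_owner"})),
    ]
    print(best_po_set(derivations, weight))   # the NDA+logging derivation (cost 1.2)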