Assured Cloud Computing Weekly Seminars Slides and Video Fall 2015
Characterizing and Adapting the Consistency-Latency Tradeoff in Distributed Key-value Stores slides | video
Muntasir Rahman, Computer Science Research Assistant, University of Illinois at Urbana-Champaign
September 16, 2015, 3:00 p.m., 2405 Siebel Center
Abstract: The CAP theorem is a fundamental result that applies to distributed storage systems. In this work, we first present and prove a probabilistic variation of the CAP theorem. We present probabilistic models to characterize the three important elements of the CAP theorem: consistency (C), availability or latency (A), and partition-tolerance (P). Then, we provide a quantitative characterization of the tradeoff among these three elements.
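To make the probabilistic framing concrete, one plausible way to state such guarantees is sketched below; the notation (bounds t_c, p_c, t_a, p_a and partition parameter α) is an illustrative assumption, not necessarily the exact definitions used in the talk.

```latex
% Illustrative probabilistic C/A/P guarantees (sketch; notation is assumed)
\begin{align*}
\text{Consistency (C):}\quad & \Pr[\text{a read returns data older than } t_c] \le p_c,\\
\text{Latency / availability (A):}\quad & \Pr[\text{an operation takes longer than } t_a] \le p_a,\\
\text{Partition model (P):}\quad & \text{replica propagation delays follow a distribution with parameter } \alpha.
\end{align*}
```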
Next, we leverage this result to present a new system, called PCAP, which allows applications running on a single data-center to specify either a latency SLA or a consistency SLA. The PCAP system automatically adapts, in real time and under changing network conditions, to meet the SLA while optimizing the other C/A metric. We incorporate PCAP into two popular key-value stores: Apache Cassandra and Riak. Our experiments with these two deployments, under realistic workloads, reveal that the PCAP system satisfactorily meets SLAs and performs close to the bounds dictated by our tradeoff analysis. We also extend PCAP from a single data-center to multiple geo-distributed data-centers.
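As a rough illustration of the kind of adaptation loop such a system runs, the sketch below (with a simulated store) increases an artificial read delay when the observed stale-read fraction exceeds a consistency SLA and backs off otherwise to recover latency. The knob, step size, and simulated store behavior are assumptions for illustration, not PCAP's actual controller.

```python
"""Sketch of an SLA-driven adaptation loop in the spirit of PCAP (illustrative only:
the knob, step size, and simulated store are assumptions, not PCAP's implementation)."""
import random
import time


class SimulatedStore:
    """Toy stand-in for a key-value store exposing a tunable read-delay knob."""

    def __init__(self):
        self.read_delay_ms = 0.0

    def stale_read_fraction(self) -> float:
        # More read delay -> replicas converge before reads -> fewer stale reads.
        base = 0.30
        return max(0.0, base - 0.02 * self.read_delay_ms) + random.uniform(0.0, 0.02)


def adapt(store: SimulatedStore, consistency_sla: float, rounds: int = 20,
          step_ms: float = 1.0, interval_s: float = 0.1) -> None:
    """Raise the read delay while the stale-read fraction exceeds the SLA,
    and lower it otherwise so latency is not sacrificed unnecessarily."""
    for _ in range(rounds):
        observed = store.stale_read_fraction()
        if observed > consistency_sla:
            store.read_delay_ms += step_ms
        else:
            store.read_delay_ms = max(0.0, store.read_delay_ms - step_ms)
        print(f"stale={observed:.3f} read_delay={store.read_delay_ms:.1f} ms")
        time.sleep(interval_s)


if __name__ == "__main__":
    adapt(SimulatedStore(), consistency_sla=0.05)
```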
Quantitative Analysis of Consistency in NoSQL Key-value Stores slides
Si Liu, Computer Science Research Assistant, University of Illinois at Urbana-Champaign
September 23, 2015, 3:00 p.m., 1131 Siebel Center
Abstract: The promise of high scalability and availability has prompted many companies to replace traditional relational database management systems (RDBMS) with NoSQL key-value stores. This comes at the cost of relaxed consistency guarantees: key-value stores guarantee only eventual consistency in principle. In practice, however, many key-value stores seem to offer stronger consistency. Quantifying how well consistency properties are met is a non-trivial problem. We address this problem by formally modeling key-value stores as probabilistic systems and quantitatively analyzing their consistency properties by statistical model checking. We present for the first time a formal probabilistic model of Apache Cassandra, a popular NoSQL key-value store, and quantify how well Cassandra achieves various consistency guarantees under different conditions. To validate our model, we evaluate multiple consistency properties using two methods and compare the results against each other. The two methods are: (1) an implementation-based evaluation of the source code; and (2) a statistical model checking analysis of our probabilistic model.
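To give a flavor of statistical analysis by sampling, the toy sketch below estimates the probability that a read observes the latest write under a deliberately simplified probabilistic model (exponential replica-propagation delays, a fixed write-to-read gap, ONE vs. QUORUM reads). The formal Cassandra model and the statistical model checking used in the talk are far more detailed; everything here is an illustrative assumption.

```python
"""Toy illustration of estimating a consistency probability by Monte Carlo sampling.
The model below is a simplified stand-in, not the formal Cassandra model."""
import random


def read_is_fresh(n_replicas: int = 3, read_quorum: int = 1,
                  mean_propagation_ms: float = 5.0,
                  write_to_read_gap_ms: float = 2.0) -> bool:
    """One sampled execution: did the read see the latest write?"""
    # Each replica receives the latest write after an exponential delay.
    arrival = [random.expovariate(1.0 / mean_propagation_ms) for _ in range(n_replicas)]
    # A read contacts `read_quorum` randomly chosen replicas after a fixed gap.
    contacted = random.sample(range(n_replicas), read_quorum)
    return any(arrival[r] <= write_to_read_gap_ms for r in contacted)


def estimate_consistency(samples: int = 100_000, **kwargs) -> float:
    """Estimate Pr[read returns the latest write] over many sampled executions."""
    fresh = sum(read_is_fresh(**kwargs) for _ in range(samples))
    return fresh / samples


if __name__ == "__main__":
    print("ONE:   ", estimate_consistency(read_quorum=1))
    print("QUORUM:", estimate_consistency(read_quorum=2))
```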
Monitoring Data Fusion for Intrusion Tolerance slides
Atul Bohara, Computer Science Research Assistant, University of Illinois at Urbana-Champaign
October 7, 2015, 3:00 p.m., 1131 Siebel Center
Abstract: The security and resiliency of computer systems rely heavily on monitoring. The increasing deployment of monitors, however, generates an unmanageable amount of logs, making intrusion detection inefficient, with high false positive and false negative rates. Moreover, even after deploying a variety of monitors, the system usually lacks a global security view, making it infeasible to utilize the valuable information produced by these monitors for system security.
In this talk, I will present our technique to address these challenges. We will discuss data-driven techniques to create, maintain, and present higher-level views of the system under consideration. This involves combining data from multiple monitors, which may operate at different levels of abstraction such as the host level and the network level, and learning how the profile of the system evolves over time. Specifically, we will discuss how these higher-level views of the system help in making decisions such as whether an intrusion is present or a security policy has been violated. I will also touch upon our plan to experimentally evaluate our approach.
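A minimal sketch of the data-fusion idea, assuming simple alert records tagged with an abstraction level: host-level and network-level evidence about the same machine are correlated within a short time window to form a higher-level, per-host view. The field names, window size, and correlation rule are illustrative assumptions, not the system described in the talk.

```python
"""Minimal sketch of fusing alerts from monitors at different abstraction levels."""
from collections import defaultdict

alerts = [
    {"host": "web01", "time": 100, "level": "host",    "event": "suspicious process spawn"},
    {"host": "web01", "time": 104, "level": "network", "event": "outbound connection to unknown IP"},
    {"host": "db01",  "time": 340, "level": "host",    "event": "failed privilege escalation"},
    {"host": "web02", "time": 500, "level": "network", "event": "port scan detected"},
]

WINDOW = 30  # seconds within which evidence from different levels is correlated


def fuse(alerts, window=WINDOW):
    """Build a per-host view and flag hosts where host-level and network-level
    evidence co-occur within the correlation window."""
    by_host = defaultdict(list)
    for a in alerts:
        by_host[a["host"]].append(a)

    suspected = {}
    for host, items in by_host.items():
        host_ev = [a for a in items if a["level"] == "host"]
        net_ev = [a for a in items if a["level"] == "network"]
        hits = [(h, n) for h in host_ev for n in net_ev
                if abs(h["time"] - n["time"]) <= window]
        if hits:
            suspected[host] = hits
    return by_host, suspected


if __name__ == "__main__":
    view, suspected = fuse(alerts)
    for host, pairs in suspected.items():
        print(f"possible intrusion on {host}: {pairs[0][0]['event']} + {pairs[0][1]['event']}")
```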
A Quantitative Methodology for Security Monitor Deployment slides | video
Uttam Thakore, Computer Science Research Assistant, University of Illinois at Urbana-Champaign
October 14, 2015, 3:00 p.m., 2405 Siebel Center
Abstract: Despite advances in intrusion detection and prevention systems, attacks on networked computer systems continue to succeed. Intrusion tolerance and forensic analysis, which are required to adequately detect and defend against attacks that succeed, depend on monitors to collect information about possible attacks. Since monitoring can be expensive, however, monitors must be selectively deployed to maximize their overall utility.
In this talk, we present a methodology both to quantitatively evaluate monitor deployments in terms of security goals and to deploy monitors optimally under cost constraints. We first define a model that describes the system to protect, the monitors that can be deployed, and the relationship between intrusions and the data generated by monitors. Then, we define a set of metrics that quantify the utility and richness of monitor data with respect to intrusion detection, as well as the cost associated with monitor deployment. We describe how a practitioner could characterize intrusion detection requirements using our model. Finally, we use our model and metrics to formulate a method to determine the cost-optimal, maximum-utility placement of monitors. We illustrate the practicality and expressiveness of our approach with an enterprise Web service case study and a scalability analysis of our algorithms.
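To make the shape of the placement problem concrete, here is a toy sketch of selecting monitors under a cost budget. A simple utility-per-cost greedy pass stands in for the methodology's actual optimization, and the monitor names, costs, and utilities are made-up illustrative values.

```python
"""Toy sketch of cost-constrained monitor selection (illustration only)."""
from dataclasses import dataclass


@dataclass
class Monitor:
    name: str
    cost: float      # deployment/operational cost
    utility: float   # value of its data for detecting the intrusions of interest


def greedy_placement(monitors, budget):
    """Pick monitors by utility-per-cost until the budget is exhausted."""
    chosen, spent = [], 0.0
    for m in sorted(monitors, key=lambda m: m.utility / m.cost, reverse=True):
        if spent + m.cost <= budget:
            chosen.append(m)
            spent += m.cost
    return chosen, spent


if __name__ == "__main__":
    candidates = [
        Monitor("host IDS on web tier", cost=3.0, utility=8.0),
        Monitor("netflow at border",    cost=2.0, utility=6.0),
        Monitor("database audit log",   cost=4.0, utility=5.0),
        Monitor("full packet capture",  cost=9.0, utility=9.0),
    ]
    picked, spent = greedy_placement(candidates, budget=8.0)
    print([m.name for m in picked], "total cost:", spent)
```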
Runtime Monitoring of Hypervisor Integrity slides | video
Cuong Pham, Electrical and Computer Engineering Research Assistant, University of Illinois at Urbana-Champaign
October 21, 2015, 3:00 p.m., 2405 Siebel Center
Abstract: Like other utilities, cloud computing enjoys an economy of sharing: sharing dramatically drives down the cost of computing resources, which in turn attracts more and more users and providers. This trend generally works well until computer security enters the cost equation. In this talk, I will describe the Virtual Machine (VM) Escape Attack, the primary security risk of sharing computing resources via VMs, the current mainstream mechanism enabling most cloud computing offerings. After that, I will describe our technique, called hShield, to cope with this class of attack. hShield is a proposal to integrate runtime integrity measurement of hypervisors into existing Hardware-Assisted Virtualization (HAV) technologies, such as Intel VT-x or AMD SVM.
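As a conceptual illustration of runtime integrity measurement (only the hashing idea; hShield's actual design ties measurement into hardware-assisted virtualization, which a short sketch cannot capture), consider baselining and periodically re-checking hashes of code regions that should never change. Region names and contents below are invented.

```python
"""Conceptual sketch of hash-based runtime integrity measurement (illustration only)."""
import hashlib


def measure(code_regions: dict[str, bytes]) -> dict[str, str]:
    """Record a baseline hash for each named code region."""
    return {name: hashlib.sha256(data).hexdigest() for name, data in code_regions.items()}


def check(code_regions: dict[str, bytes], baseline: dict[str, str]) -> list[str]:
    """Return the regions whose current contents no longer match the baseline."""
    current = measure(code_regions)
    return [name for name in baseline if current.get(name) != baseline[name]]


if __name__ == "__main__":
    regions = {".text": b"\x55\x48\x89\xe5\xc3", "vmexit_handler": b"\x0f\x01\xc1"}
    baseline = measure(regions)
    regions["vmexit_handler"] = b"\x90\x90\x90"   # simulate tampering
    print("tampered regions:", check(regions, baseline))
```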
Towards a Secure Container Framework slides
Mohammad Ahmad, Computer Science Research Assistant, University of Illinois at Urbana-Champaign
November 4, 2015, 3:00 p.m., 2405 Siebel Center
Abstract: Containers are a form of OS-level virtualization that leverages cgroups and namespaces for isolation. They present a lightweight alternative to hypervisor-based virtualization and have already been adopted by several platform-as-a-service (PaaS) cloud providers. While containers offer improved performance, cross-container side-channel attacks demonstrated on public PaaS clouds raise questions about their security.
In this talk, we present our work towards building a secure container framework with improved container isolation. Specifically, as a first step, we focus on defenses against cache-based side-channel attacks using a combination of software and hardware mechanisms.
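One well-known software mechanism against cache-based side channels is to partition the shared last-level cache among tenants, for example via page coloring, so co-resident containers do not contend on the same cache sets. The sketch below only illustrates that partitioning policy; the color count, the assignment scheme, and whether the framework in the talk uses page coloring at all are assumptions.

```python
"""Illustrative page-coloring-style partition of cache colors across containers."""

TOTAL_COLORS = 16  # e.g., number of page colors exposed by the cache indexing scheme


def assign_colors(containers, total_colors=TOTAL_COLORS):
    """Give each container a disjoint slice of page colors; its pages would then be
    allocated only from physical frames with those colors."""
    per_container = total_colors // len(containers)
    assignment, next_color = {}, 0
    for c in containers:
        assignment[c] = list(range(next_color, next_color + per_container))
        next_color += per_container
    return assignment


if __name__ == "__main__":
    print(assign_colors(["tenantA", "tenantB", "tenantC", "tenantD"]))
```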
Phurti: Application and Network-Aware Flow Scheduling for Multi-Tenant MapReduce Clusters slides | video
Chris Cai, Computer Science Research Assistant, University of Illinois at Urbana-Champaign
November 11, 2015, 3:00 p.m., 2405 Siebel Center
Abstract: Traffic for a typical MapReduce job in a datacenter consists of multiple network flows. Traditionally, network resources have been allocated to optimize network-level metrics such as flow completion time or throughput. Some recent schemes propose application-aware scheduling, which can reduce the average job completion time, but most of them treat the core network as a black box with sufficient capacity. Even a single bottlenecked link in the core network can hurt application performance.
We design and implement a centralized flow scheduling framework called Phurti with the goal of improving job completion time in a cluster shared among multiple Hadoop jobs (multi-tenant). Phurti communicates with both the Hadoop framework, to retrieve job-level network traffic information, and OpenFlow-based switches, to learn the network topology. Phurti implements a novel heuristic called Smallest Maximum Sequential-traffic First (SMSF) that uses the collected application and network information to schedule traffic for MapReduce jobs. Our evaluation with real Hadoop workloads shows that, compared to application- and network-agnostic scheduling strategies, Phurti improves job completion time for 95% of the jobs, decreases average job completion time by 20% and tail job completion time by 13%, and scales well with the cluster size and number of jobs.
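The sketch below illustrates an SMSF-style ordering step. Here "maximum sequential traffic" is approximated as the largest total number of bytes a job must push through any single port, assuming flows sharing a port are serialized; the exact metric definition and the enforcement of priorities at the switches in Phurti differ, so treat this purely as an illustration of the ordering idea.

```python
"""Illustrative Smallest Maximum Sequential-traffic First (SMSF) style ordering."""
from collections import defaultdict

# Each flow: (job_id, src_port, dst_port, bytes). Names and sizes are invented.
flows = [
    ("job1", "h1", "h2", 400), ("job1", "h1", "h3", 300),
    ("job2", "h2", "h4", 900),
    ("job3", "h3", "h4", 200), ("job3", "h5", "h4", 250),
]


def max_sequential_traffic(flows):
    """For each job, the heaviest per-port byte load among its own flows."""
    load = defaultdict(lambda: defaultdict(int))  # job -> port -> bytes
    for job, src, dst, size in flows:
        load[job][src] += size
        load[job][dst] += size
    return {job: max(ports.values()) for job, ports in load.items()}


def smsf_order(flows):
    """Give priority to the job with the smallest maximum sequential traffic."""
    return sorted(max_sequential_traffic(flows).items(), key=lambda kv: kv[1])


if __name__ == "__main__":
    print(smsf_order(flows))  # jobs in priority order with their metric
```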
Efficient Monitoring in Actor-based Mobile Hybrid Cloud Framework slides | video
Kirill Mechitov, Computer Science Postdoctoral Research Associate, University of Illinois at Urbana-Champaign
November 18, 2015, 3:00 p.m., 2405 Siebel Center
Abstract: Mobile cloud computing (MCC) makes it possible to overcome the energy and processing limitations of mobile devices by leveraging the virtually unlimited, elastic, on-demand resources of the cloud. The increased dynamicity and complexity of hybrid cloud applications that use both public and private cloud services (e.g., for reasons of privacy and information security) require open systems that interact with the environment while addressing application-specific constraints, user expectations, and the security/privacy policies of multiple systems and organizations. We have developed IMCM, a proof-of-concept implementation of an actor-based framework for mobile hybrid cloud applications. IMCM uses dynamic fine-grained code offloading to achieve significant performance and energy-consumption improvements in cloud-backed mobile applications. In this talk, we describe IMCM's lightweight monitoring framework, capable of capturing dynamic parameters of the execution environment and end-user context, in addition to coarse-grained actions and events of distributed actor-based applications. We demonstrate how the monitoring system can facilitate efficient detection of security policy violations, and we generalize these results to distributed actor-based applications supporting code mobility.
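A minimal sketch of the monitoring idea in an actor style: a monitor actor receives coarse-grained application events plus environment/context updates as messages and flags policy violations. IMCM is built on a real actor framework with code mobility; the message formats, the context fields, and the single offloading policy below are illustrative assumptions.

```python
"""Minimal actor-style monitor sketch (illustration only, not IMCM's framework)."""
import queue
import threading


class MonitorActor:
    def __init__(self):
        self.mailbox = queue.Queue()
        self.context = {"network": "trusted"}

    def send(self, msg):                     # other components deliver messages here
        self.mailbox.put(msg)

    def run(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:                  # shutdown sentinel
                break
            if msg["type"] == "context":     # e.g., device moved to public Wi-Fi
                self.context.update(msg["data"])
            elif msg["type"] == "event":
                self.check_policy(msg)

    def check_policy(self, event):
        # Example policy: on an untrusted network, sensitive computation may only
        # be offloaded to the private cloud.
        if (event["action"] == "offload" and event["sensitive"]
                and self.context["network"] != "trusted"
                and event["target"] != "private-cloud"):
            print("policy violation:", event)


if __name__ == "__main__":
    actor = MonitorActor()
    worker = threading.Thread(target=actor.run)
    worker.start()
    actor.send({"type": "context", "data": {"network": "public"}})
    actor.send({"type": "event", "action": "offload", "sensitive": True,
                "target": "public-cloud"})
    actor.send(None)
    worker.join()
```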
Reliability and Security as-a-Service
Zachary Estrada, Electrical and Computer Engineering Research Assistant, University of Illinois at Urbana-Champaign
December 2, 2015, 3:00 p.m., 2405 Siebel Center
Abstract: Infrastructure as-a-Service (IaaS) clouds significantly lower the barrier to obtaining scalable computing resources. Could a similar service be offered to provide on-demand reliability and security monitoring? Cloud computing systems are typically built using virtual machines (VMs), and much work has been done on using that virtualization layer for reliability and security monitoring. In this talk, I will demonstrate how we use whole-system dynamic analysis to inform dynamic hypervisor-based VM monitoring for providing reliability and security as-a-service.
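A conceptual sketch of the overall idea, under stated assumptions: offline dynamic analysis profiles which guest events normally follow one another, and a runtime monitor (in a real system, driven by hypervisor-observed events) flags transitions never seen during profiling. The event names, trace format, and anomaly rule are invented for illustration and are not the system presented in the talk.

```python
"""Sketch: dynamic analysis informs a runtime monitor via learned event transitions."""
from collections import defaultdict


def learn_transitions(training_traces):
    """Offline: record the event transitions observed during dynamic analysis."""
    allowed = defaultdict(set)
    for trace in training_traces:
        for a, b in zip(trace, trace[1:]):
            allowed[a].add(b)
    return allowed


def monitor(runtime_trace, allowed):
    """Online: flag transitions that never appeared in the training traces."""
    for a, b in zip(runtime_trace, runtime_trace[1:]):
        if b not in allowed.get(a, set()):
            print(f"anomalous transition: {a} -> {b}")


if __name__ == "__main__":
    training = [["open", "read", "close"], ["open", "write", "close"]]
    allowed = learn_transitions(training)
    monitor(["open", "read", "exec", "close"], allowed)
```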