Assured Cloud Computing Special Seminar: Nomad: Mitigating Arbitrary Cloud Side Channels via Provider-Assisted Migration
- Posted on January 24, 2017 at 9:09 am by whitesel@illinois.edu.
- Categorized ACC Speaker.
Winner 2016 NSA Best Scientific Cybersecurity Paper
Soo-Jin Moon, Carnegie Mellon University
February 22, 4:00 p.m., 2405 Siebel Center
Abstract: Recent studies have shown a range of co-residency side channels that can be used to extract private information from cloud clients. Unfortunately, addressing these side channels often requires detailed attack-specific fixes that entail significant modifications to hardware, client virtual machines (VMs), or hypervisors. Furthermore, these solutions cannot be generalized to future side channels. Barring extreme solutions such as single tenancy, which sacrifices the multiplexing benefits of cloud computing, such side channels will continue to affect critical services. In this work, we present Nomad, a system that offers vector-agnostic defense against known and future side channels. Nomad envisions a provider-assisted VM migration service, applying the moving target defense philosophy to bound the information leakage due to side channels. In designing Nomad, we make four key contributions: (1) a formal model to capture information leakage via side channels in shared cloud deployments; (2) identifying provider-assisted VM migration as a robust defense against arbitrary side channels; (3) a scalable online VM migration heuristic that can handle large datacenter workloads; and (4) a practical implementation in OpenStack. We show that Nomad is scalable to large cloud deployments, achieves near-optimal information leakage subject to constraints on migration overhead, and imposes minimal performance degradation for typical cloud applications such as web services and Hadoop MapReduce.
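The abstract does not spell out the migration heuristic itself; purely as an illustration of the moving-target idea, the sketch below greedily relocates a VM whenever some co-resident pair has accumulated too much shared time. All names (`plan_migrations`, `co_residency`, the budget values) are hypothetical and not taken from the Nomad paper.

```python
# Illustrative sketch only: a greedy, provider-side migration pass in the spirit
# of a moving-target defense. Not Nomad's actual algorithm; names are made up.
from collections import defaultdict

def plan_migrations(placement, co_residency, hosts, budget, max_moves):
    """placement: {vm: host}; co_residency: {(vm_a, vm_b): epochs co-resident};
    budget: max tolerated co-residency per VM pair; max_moves: migration cap."""
    moves = []
    for _ in range(max_moves):
        # Find the co-resident pair closest to exhausting its leakage budget.
        worst = max(
            ((pair, t) for pair, t in co_residency.items()
             if placement[pair[0]] == placement[pair[1]]),
            key=lambda item: item[1], default=None)
        if worst is None or worst[1] < budget:
            break                      # every co-resident pair is within budget
        (vm_a, vm_b), _ = worst
        # Move vm_a to the host where it accrues the least new co-residency.
        def exposure(host):
            return sum(co_residency.get(tuple(sorted((vm_a, v))), 0)
                       for v, h in placement.items() if h == host and v != vm_a)
        target = min((h for h in hosts if h != placement[vm_a]), key=exposure)
        moves.append((vm_a, placement[vm_a], target))
        placement[vm_a] = target
    return moves

# Tiny usage example with three hosts and four VMs.
placement = {"vm1": "h1", "vm2": "h1", "vm3": "h2", "vm4": "h3"}
co_res = defaultdict(int, {("vm1", "vm2"): 5, ("vm3", "vm4"): 1})
print(plan_migrations(placement, co_res, ["h1", "h2", "h3"], budget=4, max_moves=2))
```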
Bio: Soo-Jin Moon is a third-year Ph.D. student in Electrical and Computer Engineering at Carnegie Mellon University, where she is part of CyLab and advised by Vyas Sekar. Her research interests are broadly in the space of network and systems security. Her work has been recognized with the NSA Best Scientific Cybersecurity Paper Award (2016) and the CSAW Applied Security Research Award (2015). Before joining CMU, she received a bachelor’s degree (2014) in Electrical Engineering from the University of Waterloo, Canada.
Assured Cloud Computing Weekly Seminars Slides and Video Fall 2016
- Posted on August 22, 2016 at 3:33 pm by whitesel@illinois.edu.
- Categorized Events.
Digital Forensic Analysis: From Low-Level Events to High-Level Actions slides | video
Imani Palmer, Computer Science Research Assistant, University of Illinois at Urbana-Champaign
August 31, 2016, 4:00 p.m., 2405 Siebel Center
Abstract: As digital forensic science advances, it is important to be able to rigorously justify the conclusions drawn from electronic evidence. Currently, the process of analyzing digital evidence is based on the individual knowledge of an examiner. We present a framework that provides examiners with an analysis toolkit that maps low-level events to high-level user actions. The framework handles the analysis phase of the digital forensic investigative process and receives information from digital forensic tools. We have implemented various methods for developing these mappings. We evaluate our prototype and discuss the possibility of applying it in real-world scenarios.
An Indirect Attack on Computing Infrastructure through Targeted Alteration on Environmental Control slides | video
Keywhan Chung, Electrical and Computer Engineering Research Assistant, University of Illinois at Urbana-Champaign
September 28, 2016, 4:00 p.m., 2405 Siebel Center
Abstract: With increasing concern about securing computing infrastructure, a massive amount of effort has been put into hardening it. However, relatively little effort has been devoted to the surrounding cyber-physical systems on which that infrastructure heavily relies. In this talk, I present how a malicious user can attack a large computing infrastructure by compromising the environmental control systems in the facilities that host the compute nodes. The talk covers a study of infrastructure failures related to problems in the cooling system and demonstrates, using real data, that the control systems providing chilled water can be used as entry points by an attacker to indirectly compromise computing functionality through clever alterations of sensing and control devices. In this way, the attacker leaves no trace of malicious activity on the nodes of the cluster. Failures of the cooling systems can trigger severe failure modes that can be resolved only after service interruption and manual intervention.
Lateral Movement Detection Using Distributed Data Fusion slides | video
Atul Bohara, Electrical and Computer Engineering Research Assistant, University of Illinois at Urbana-Champaign
September 28, 2016, 4:00 p.m., 2405 Siebel Center
Abstract: Attackers often attempt to move laterally from host to host, infecting them until an overall goal is achieved. One possible defense against this strategy is to detect such coordinated and sequential actions by fusing data from multiple sources. In this paper, we propose a framework for distributed data fusion that specifies the communication architecture and data transformation functions. Then, we use this framework to specify an approach for lateral movement detection that uses host-level process communication graphs to infer network connection causations. The connection causations are then aggregated into system-wide host-communication graphs that expose possible lateral movement in the system. To balance resource usage against the robustness of the fusion architecture, we propose a multilevel fusion hierarchy that uses different clustering techniques. We evaluate the scalability of the hierarchical fusion scheme in terms of storage overhead, number of message updates sent, fairness of resource sharing among clusters, and quality of local graphs. Finally, we implement a host-level monitor prototype to collect connection causations and evaluate its overhead. The results show that our approach provides an effective method to detect lateral movement between hosts and can be implemented with acceptable overhead.
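As a rough, hypothetical illustration of the aggregation step described above, the sketch below fuses per-host connection-causation records into a system-wide host-communication graph and searches for time-ordered causal chains, the kind of pattern that may indicate lateral movement. The record format, threshold, and function names are invented for illustration.

```python
# Illustrative sketch: fuse per-host connection-causation records into a
# system-wide host-communication graph and flag long causal chains that may
# indicate lateral movement. Names and thresholds are hypothetical.
from collections import defaultdict

def build_host_graph(causations):
    """causations: iterable of (src_host, dst_host, timestamp) tuples, each
    meaning a connection from src_host causally preceded activity on dst_host."""
    graph = defaultdict(list)
    for src, dst, ts in causations:
        graph[src].append((dst, ts))
    return graph

def causal_chains(graph, start, min_length=3):
    """Depth-first search for time-ordered, cycle-free chains of length >= min_length."""
    chains, stack = [], [(start, [start], float("-inf"))]
    while stack:
        node, path, last_ts = stack.pop()
        extended = False
        for dst, ts in graph.get(node, []):
            if ts > last_ts and dst not in path:   # respect time order, avoid cycles
                stack.append((dst, path + [dst], ts))
                extended = True
        if not extended and len(path) >= min_length:
            chains.append(path)
    return chains

# Example: A -> B -> C -> D with increasing timestamps looks like movement.
records = [("A", "B", 1), ("B", "C", 5), ("C", "D", 9), ("A", "E", 2)]
g = build_host_graph(records)
print(causal_chains(g, "A"))   # [['A', 'B', 'C', 'D']]
```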
Cloud Security Certifications: Are They Adequate to Provide Baseline Protection? slides | video
Carlo Di-Giulio, Library and Information Science Research Assistant, University of Illinois at Urbana-Champaign
October 5, 2016, 4:00 p.m., 2405 Siebel Center
Abstract: Information security certifications, compliance with standards, and third-party assessment are among the most commonly used approaches to reassure potential and current users of cloud computing services. While at least two prominent examples of such certification- and audit-based security controls exist (i.e., ISO/IEC 27001 and SOC 2), the US government has created new requirements for Federal Agencies through regulations and initiatives aimed at improving the security of cloud services offered by industry. In this presentation, we review and evaluate the security controls and procedures required by the Federal Risk and Authorization Management Program (FedRAMP) and compare FedRAMP to existing certifications for completeness and adequacy. Our research contextualizes the adoption and development of FedRAMP and offers a big-picture view of how ISO/IEC 27001, SOC 2, and FedRAMP perform, questioning the level of protection they provide by comparing them to one another.
Energy-Aware, Security-Conscious Code Offloading for the Mobile Cloud slides | video
Kirill Mechitov, Computer Science Postdoc, University of Illinois at Urbana-Champaign
October 12, 2016, 4:00 p.m., 2405 Siebel Center
Abstract: Mobile cloud computing (MCC) enables overcoming the energy and processing limitations of mobile devices by leveraging the virtually unlimited, elastic, on-demand resources of the cloud. The increased dynamicity and complexity of hybrid cloud applications making use of both public and private cloud services (e.g., for reasons of privacy and information security) requires open systems that interact with the environment while addressing application-specific constraints, user expectations, and security/privacy policies of multiple systems and organizations. We have developed IMCM, a proof-of-concept implementation of an actor-based framework for mobile hybrid cloud applications. IMCM uses dynamic fine-grained code offloading to achieve significant performance and energy consumption improvements in cloud-backed mobile applications, while respecting specified privacy and security policies. In this talk, we discuss the energy monitoring and estimation aspects of the IMCM framework.
Using Reachability Logic to Verify Distributed Systems slides | video
Stephen Skeirik, Computer Science Research Assistant, University of Illinois at Urbana-Champaign
November 2, 2016, 4:00 p.m., 2405 Siebel Center
Abstract: Model checking is a method traditionally used to verify distributed systems, but it suffers from the limitation that it requires concrete initial states. This applies in particular to ACC distributed systems, where verification efforts so far have mostly used model checking. To gain higher levels of assurance, deductive verification, not for some concrete initial states, but for possibly infinite sets of initial states, is needed. In this presentation, we describe a recently developed logic, reachability logic, and show how it can be used to deductively verify distributed systems over a possibly infinite number of initial states. We conclude by examining what work has already been done and possible future directions, with special emphasis on deductive verification of ACC systems.
Trust & Security/Assured Cloud Computing Joint Seminar: Application of Game Theory to High Assurance Cloud Computing
- Posted on August 12, 2016 at 2:39 pm by whitesel@illinois.edu.
- Categorized ACC Speaker.
Charles A. Kamhoua, U.S. Air Force Research Laboratory
September 20, 4:00 p.m., Coordinated Science Laboratory Auditorium (B02 CSL)
Abstract: The growth of cloud computing has spurred many entities, both small and large, to use cloud services for cost savings. Public cloud computing has allowed for quick, dynamic scalability without much overhead or long-term commitment. However, concern over cyber security is the main reason many large organizations with sensitive information, such as the Department of Defense, have been reluctant to join a public cloud. This is due to three challenging problems. First, current cloud infrastructures lack provable trustworthiness. Integrating Trusted Computing (TC) technologies with cloud infrastructure is a promising method for verifying the cloud’s behavior, which may in turn facilitate provable trustworthiness. Second, public clouds carry an inherent and unknown danger stemming from a shared platform, namely the hypervisor. An attacker who subverts a virtual machine (VM) and then goes on to compromise the hypervisor can readily compromise all virtual machines on that hypervisor. We propose a security-aware virtual machine placement scheme in the cloud. Third, a sophisticated attack in a cloud has to be understood as a sequence of events, which calls for a detection/response model that encompasses observations from varying dimensions. We discuss a method to automatically determine the best response, given observations of the system states from a set of monitors.
Game theory provides a rich mathematical tool to analyze conflict within strategic interactions and thereby gain a deeper understanding of cloud security issues. Theoretical constructs and mathematical abstractions provide a rigorous scientific basis for cyber security because they allow for quantitative reasoning about cyber attacks. This talk will address the three challenging cloud security problems identified above and report our latest findings from this body of work.
Bio: Charles A. Kamhoua received the B.S. in electronics from the University of Douala (ENSET), Cameroon, in 1999, and the M.S. in telecommunication and networking and the Ph.D. in electrical engineering from Florida International University (FIU), in 2008 and 2011, respectively. In 2011, he joined the Cyber Assurance Branch of the U.S. Air Force Research Laboratory (AFRL), Rome, New York, as a National Academies Postdoctoral Fellow and became a Research Electronics Engineer in 2012. Prior to joining AFRL, he was an educator for more than 10 years. His current research interests include the application of game theory to cyber security, survivability, cloud computing, hardware Trojans, online social networks, wireless communication, and cyber threat information sharing. He has more than 60 technical publications in prestigious journals and international conferences, along with a Best Paper Award at the 2013 IEEE FOSINT-SI. He has mentored more than 40 young scholars at AFRL, including summer faculty fellows, postdocs, and students. He has given more than 30 invited keynote and distinguished speeches in the USA and abroad. He has been recognized for his scholarship and leadership with numerous prestigious awards, including 30 Air Force Notable Achievement Awards, the 2016 FIU Charles E. Perry Young Alumni Visionary Award, the 2015 AFOSR Windows on the World Visiting Research Fellowship at Oxford University, UK, an AFOSR Basic Research Award, the 2015 Black Engineer of the Year Award (BEYA), the 2015 NSBE Golden Torch Award as Pioneer of the Year, selection to the 2015 Heidelberg Laureate Forum, and the 2011 NSF PIRE Award at the Fluminense Federal University, Brazil. He is currently an advisor for the National Research Council, a member of ACM, the FIU alumni association, and NSBE, and a senior member of IEEE.
Assured Cloud Computing Special Seminar: Design and Validation of Distributed Data Stores using Formal Methods
- Posted on January 25, 2016 at 3:17 pm by amyclay@illinois.edu.
- Categorized Events.
Peter Ölveczky, University of Oslo
February 10, 4:00 p.m., 2405 Siebel Center
Abstract: To deal with large amounts of data and to ensure high availability, many cloud computing systems rely on distributed, partitioned, and/or replicated data stores. However, such data stores are complex artifacts that are very hard to design and analyze, as they satisfy different notions of “consistency”.
We therefore propose to use formal methods to model distributed data stores, and to analyze both their correctness properties and their performance. In particular, one goal is to identify key building blocks in the design of such data stores, and their properties, so that new data stores, or different versions of existing data stores, can be designed by reusing such “components”.
This talk, which is based on joint work with Jon Grov and a number of members of the Assured Cloud Computing Center, gives a high-level overview of this ongoing work, which has already been used to design and analyze new versions of Google’s Megastore, Facebook/Apache’s Cassandra, and UC Berkeley’s RAMP distributed data stores.
Bio: Peter Ölveczky received his PhD in computer science from the University of Bergen, Norway, in 2000, having performed his thesis research at SRI International. He was assistant and then associate professor at the University of Oslo 2001-2008, and has been a full professor there since 2008. He was also a post-doctoral researcher at the University of Illinois at Urbana-Champaign (UIUC) 2002-2004, and has been a visiting researcher at UIUC since 2008.
Ölveczky’s research focuses on formal methods, in particular for real-time systems. He is the developer of the Real-Time Maude tool, which has been used to formally model and analyze a large range of advanced systems, including scheduling protocols, distributed data stores, wireless sensor network algorithms, the human thermoregulatory system, mobile ad hoc networks, avionics systems, and so on. Ölveczky has organized 9 international scientific workshops/conferences, and has edited a number of scientific books and journal issues.
Assured Cloud Computing Weekly Seminars Slides and Video Spring 2016
- Posted on January 20, 2016 at 3:19 pm by amyclay@illinois.edu.
- Categorized Events.
Models for Reasoning about Digital Evidence slides | video
Imani Palmer, Computer Science Research Assistant, University of Illinois at Urbana-Champaign
February 17, 2016, 4:00 p.m., 2405 Siebel Center
Abstract: Due to the popularity of containers and virtual machines, it is necessary to integrate forensics and policy monitoring to enhance the security of systems. Integrating forensics with policy monitoring supports the dynamic nature of such systems, allowing for better response to potential attacks. Our latest approach leverages knowledge of the system drawn from digital evidence. This evidence, combined with monitoring, will enable us to determine whether the current state of the machine is vulnerable or has been attacked. Currently, the process of analyzing evidence is based on the individual knowledge of an examiner. Our framework will enable researchers and examiners to apply various reasoning models to their cases. The application of these reasoning methods will be automated in order to avoid discrepancies and provide reproducibility. The framework will handle the analysis phase of the digital forensic investigative process and receive information from digital forensic tools. This information will feed various visualizations to aid in the development of a hypothesis. The reasoning models define and assign likelihoods to the relationships between pieces of evidence and the hypothesis. Last, a framework for estimating likelihood error rates will be built, and provisional estimates determined and examined. As digital forensic science advances, it is important to be able to rigorously justify conclusions drawn from electronic evidence.
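The abstract does not fix a particular reasoning model; a minimal sketch of one candidate, a naive Bayesian update that combines per-evidence likelihoods into a posterior for a hypothesis, might look as follows. All priors, likelihoods, and evidence items are invented placeholders, not real error rates.

```python
# Minimal sketch of one possible reasoning model: naive Bayesian updating of a
# hypothesis ("the user exfiltrated the file") from independent evidence items.
# All probabilities below are illustrative placeholders, not real error rates.

def posterior(prior, evidence):
    """evidence: list of (p_given_h, p_given_not_h) likelihoods per item."""
    odds = prior / (1.0 - prior)
    for p_h, p_not_h in evidence:
        odds *= p_h / p_not_h          # multiply in each likelihood ratio
    return odds / (1.0 + odds)

# Example items: USB mount record, file-access timestamp, browser history entry.
items = [(0.9, 0.2), (0.8, 0.4), (0.6, 0.5)]
print(round(posterior(prior=0.1, evidence=items), 3))   # ~0.545
```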
Cyber Security as a Signaling Game slides | video
Keywhan Chung, Electrical and Computer Engineering Research Assistant, University of Illinois at Urbana-Champaign
March 16, 2016, 4:00 p.m., 2405 Siebel Center
Abstract: With increasing concern about cyber security, a variety of approaches to intrusion detection and response have been introduced. Starting from signature-based intrusion detection systems (IDS) and advancing to anomaly-based detection systems, detection methods have become more intelligent. However, as attacks grow more sophisticated, we face attack models that are hard to differentiate from benign activities. To effectively detect malicious intentions, we apply a game-theoretic approach, modeling the rationality and the probabilistic behavior of the attacker and the defender. We model a slow Denial of Service (DoS) attack, a DoS attack at the application level, from logs collected from a real system and formulate the interaction as a signaling game, in which two players optimize their actions under uncertainty about the opponent’s actions. Using a simulation-based approach, we evaluate the performance of the approach and discuss the possibility of applying it to a real system.
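To make the signaling-game framing concrete, here is a toy sketch in which the defender observes a signal (a suspicious request pattern), updates its belief that the sender is malicious, and picks the action with the higher expected payoff. The probabilities and payoff values are invented for illustration and are not taken from the talk.

```python
# Toy sketch of the signaling-game framing: the defender observes a signal,
# updates its belief that the sender is malicious, and best-responds.
# All numbers are made up for illustration.

def defender_belief(prior_malicious, p_signal_given_malicious, p_signal_given_benign):
    """Bayes update of P(malicious | observed signal)."""
    p_m = prior_malicious * p_signal_given_malicious
    p_b = (1.0 - prior_malicious) * p_signal_given_benign
    return p_m / (p_m + p_b)

def defender_best_response(belief, payoffs):
    """payoffs[action][sender_type] = defender utility; pick max expected utility."""
    expected = {
        action: belief * u["malicious"] + (1.0 - belief) * u["benign"]
        for action, u in payoffs.items()
    }
    return max(expected, key=expected.get), expected

# Signal: many slow, incomplete HTTP requests from one source (slow DoS pattern).
belief = defender_belief(0.05, p_signal_given_malicious=0.7, p_signal_given_benign=0.02)
payoffs = {"block": {"malicious": 10, "benign": -5},
           "allow": {"malicious": -20, "benign": 1}}
print(defender_best_response(belief, payoffs))   # "block" wins at this belief
```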
Intrusion Detection in Enterprise Systems by Combining and Clustering Diverse Monitor Data slides | video
Atul Bohara, Computer Science Research Assistant
Uttam Thakore, Computer Science Research Assistant
University of Illinois at Urbana-Champaign
April 5, 2016, 4:00 p.m., B02 Coordinated Science Laboratory
Abstract: Intrusion detection using multiple security devices has received much attention recently. The large volume of information generated by these tools, however, increases the burden on both computing resources and security administrators. Moreover, attack detection does not improve as expected if these tools work without any coordination.
In our work, we propose a simple method to join information generated by security monitors with diverse data formats. We present a novel intrusion detection technique that uses unsupervised clustering algorithms to identify malicious behavior within large volumes of diverse security monitor data. First, we extract a set of features from network-level and host-level security logs that aid in detecting malicious host behavior and flooding-based network attacks in an enterprise network system. We then apply clustering algorithms to the separate and joined logs and use statistical tools to identify anomalous usage behaviors captured by the logs. We evaluate our approach on an enterprise network data set, which contains network and host activity logs. Our approach correctly identifies and prioritizes anomalous behaviors in the logs by their likelihood of maliciousness. By combining network and host logs, we are able to detect malicious behavior that cannot be detected by either log alone.
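As a simplified, hypothetical illustration of the joining-and-clustering idea, the sketch below builds a small per-host feature vector from network and host logs, clusters the hosts, and scores anomalies by distance to the dominant ("normal") cluster. The feature names and data are invented, and a real pipeline would use a library such as scikit-learn rather than this tiny hand-rolled k-means.

```python
# Illustrative sketch: join per-host features derived from network and host
# logs, cluster hosts, and score anomalies by distance to the dominant cluster.
import math, random

def kmeans(points, k, iters=50, seed=0):
    random.seed(seed)
    centres = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centres[c]))
            clusters[i].append(p)
        centres = [
            tuple(sum(x) / len(cl) for x in zip(*cl)) if cl else centres[i]
            for i, cl in enumerate(clusters)
        ]
    return centres

def anomaly_scores(points, centres):
    # Treat the most populous cluster as "normal"; score each point by its
    # distance to that cluster's centre.
    assign = [min(range(len(centres)), key=lambda c: math.dist(p, centres[c]))
              for p in points]
    dominant = max(set(assign), key=assign.count)
    return [math.dist(p, centres[dominant]) for p in points]

# Joined features per host: (flows per minute, failed logins, distinct peers).
hosts = {"web1": (120, 1, 14), "web2": (118, 0, 15), "db1": (40, 0, 3),
         "victim": (900, 25, 80)}
centres = kmeans(list(hosts.values()), k=2)
scores = dict(zip(hosts, anomaly_scores(list(hosts.values()), centres)))
print(max(scores, key=scores.get))   # expected to flag "victim"
```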
Dynamic Fine-Grained Code Offloading in Mobile Cloud Applications slides | video
Kirill Mechitov, Computer Science Postdoctoral Researcher
University of Illinois at Urbana-Champaign
April 6, 2016, 4:00 p.m., 2405 Siebel Center
Abstract: Mobile cloud computing (MCC) enables overcoming the energy and processing limitations of mobile devices by leveraging the virtually unlimited, elastic, on-demand resources of the cloud. The increased dynamicity and complexity of hybrid cloud applications making use of both public and private cloud services (e.g., for reasons of privacy and information security) requires open systems that interact with the environment while addressing application-specific constraints, user expectations, and security/privacy policies of multiple systems and organizations. We have developed IMCM, a proof-of-concept implementation of an actor-based framework for mobile hybrid cloud applications. IMCM uses dynamic fine-grained code offloading to achieve significant performance and energy consumption improvements in cloud-backed mobile applications, while respecting specified privacy and security policies. In this talk, we refine the design and implementation of the Elasticity Manager, the core decision-making component of the IMCM framework. We also explore the application of related work on actor-based model checking for schedulability analysis of distributed real-time systems to design-time optimization in mobile cloud computing.
*Two 30-minute seminars on April 13.
Cauldron: A Framework to Defend Against Cache-based Side-channel Attacks in Clouds slides | video
Mohammad Ahmad, Computer Science Research Assistant
University of Illinois at Urbana-Champaign
*April 13, 2016, 4:00 p.m., 2405 Siebel Center
Abstract: Cache-based side-channel attacks have garnered much interest in the recent literature. Such attacks are particularly relevant for cloud computing platforms due to high levels of multitenancy. In fact, recent work demonstrates such attacks on real cloud platforms (e.g., Heroku). This paper presents Cauldron, a framework to defend against such cache-based side-channel attacks. Cauldron uses a combination of smart scheduling techniques and microarchitectural mechanisms to achieve this goal. We demonstrate improved defenses against cross-core side-channel attacks that target shared caches. Furthermore, Cauldron is transparent to the user, requiring no modification (or even recompilation) of users’ application binaries, by integrating directly with the popular container runtime framework Docker. Preliminary evaluation results show that the proposed approach is effective for cloud computing applications.
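The abstract does not detail the scheduling policy. As a rough sketch of the "smart scheduling" ingredient only, the snippet below assigns containers to cores so that no two tenants ever share a last-level-cache domain, one simple way to blunt cross-core cache probing. The topology, tenant names, and the policy itself are hypothetical; Cauldron additionally relies on microarchitectural mechanisms that this sketch does not model.

```python
# Illustrative sketch: place containers on CPU cores so that cores sharing a
# last-level cache (LLC) domain are only ever used by a single tenant.
# Topology, tenant names, and the policy itself are hypothetical.

def assign_cores(containers, llc_domains):
    """containers: {name: tenant}; llc_domains: list of lists of core ids.
    Returns {container: core} or raises if isolation cannot be satisfied."""
    domain_owner = {}                       # domain index -> tenant
    free_cores = {i: list(cores) for i, cores in enumerate(llc_domains)}
    placement = {}
    for name, tenant in containers.items():
        for i, cores in free_cores.items():
            if cores and domain_owner.get(i, tenant) == tenant:
                domain_owner[i] = tenant
                placement[name] = cores.pop(0)
                break
        else:
            raise RuntimeError(f"no isolated core available for {name}")
    return placement

# Two sockets, each with 4 cores behind one shared LLC.
topology = [[0, 1, 2, 3], [4, 5, 6, 7]]
jobs = {"web-a": "tenantA", "api-a": "tenantA", "db-b": "tenantB"}
print(assign_cores(jobs, topology))
# {'web-a': 0, 'api-a': 1, 'db-b': 4} -- tenantB never shares an LLC with tenantA
```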
Security and Privacy Mechanisms: an Analysis of Cloud Service Providers for Governments slides | video
Carlo Di-Giulio, Graduate School of Library and Information Science
University of Illinois at Urbana-Champaign
*April 13, 2016, 4:00 p.m., 2405 Siebel Center
Abstract: Since 2010, in order to reduce alarmingly increasing costs in IT management and promote efficiency at the federal level, the US Government has been promoting a “Cloud First” policy, made up of regulations and initiatives aimed at promoting the use of cloud services by Federal Agencies. The Agencies, slowly responding to the suggestions of the US Government, are working to migrate their systems to cloud environments, either built in-house (i.e., “private cloud”) or supplied by private-sector Cloud Service Providers (CSPs) with shared infrastructures.
Public cloud services, however, may not guarantee a level of data security and privacy protection comparable to that assured by a private cloud, especially for highly sensitive data such as those processed by military agencies, and more specifically the Air Force. Our study aims to analyze the risks arising from the migration to public cloud services and to identify the main differences, in terms of data privacy and security, among the services available to Federal Agencies.
Drawing on publicly available documentation, this ongoing research proceeds in three steps. The first is a comprehensive assessment of the regulatory context; the second is the identification of the services offered by five of the main CSPs targeting Federal Agencies; the third is the definition of the policies and best practices adopted by the CSPs.
The comparison of services and policies, guided by an understanding of the regulatory context, helps shed light on privacy and security risks in the migration of Government systems to publicly available cloud services.
This presentation will provide an overview of the research and its preliminary results.
Application-aware Network Resource Allocation slides
Chris Cai, Computer Science Research Assistant
University of Illinois at Urbana-Champaign
April 20, 2016, 4:00 p.m., B02 Coordinated Science Laboratory
Abstract: Application-aware networking has recently drawn a lot of attention in both the research community and industry. As the era of big data arrives, applications demand significantly more data, and their network usage patterns keep becoming more complicated. The network supporting these applications therefore needs to be more intelligent about the specific performance requirements and network usage characteristics of different types of applications. Application-aware networking holds that the network should maintain information about the applications that run on top of it and leverage that information to optimize application performance. Such information can include the applications’ performance requirements, the types of network resources they demand, and so on. We present two pieces of work, Phurti and CRONets (Cloud-Routed Overlay Networks), which apply application-aware networking in very different environments: the cloud network and the wide-area network, respectively. We discuss the implementation and evaluation of Phurti, and present the preliminary results and research plan for CRONets.
Getafix: Workload-aware Data Management in Lookback Processing Systems slides | video
Mainak Ghosh, Computer Science Research Assistant
University of Illinois at Urbana-Champaign
April 27, 2016, 4:00 p.m., 2405 Siebel Center
Abstract: In this paper, we target lookback processing systems (LPS), which allow queries to operate on segment-based historical data. We present new strategies that decide where segments should be placed and how they should be replicated. Our approach leverages segment popularity. The strategies are provably optimal in replication level, and thus in memory and network overheads, for the static case. For the dynamic case, we present two heuristics. Our experiments show that these approaches improve memory and network utilization compared to existing strategies.
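The exact strategies are not given in the abstract; as a toy illustration of popularity-driven replication, the sketch below assigns each segment a replica count roughly proportional to its recent query popularity, subject to a total replica budget. This is not Getafix's actual algorithm, and all segment names and numbers are invented.

```python
# Toy sketch of popularity-driven segment replication: give popular segments
# more replicas, subject to a total replica budget. Not Getafix's actual
# algorithm; names and numbers are illustrative.

def replication_plan(popularity, total_replicas, min_replicas=1):
    """popularity: {segment: recent query count}; returns {segment: replicas}."""
    total_queries = sum(popularity.values()) or 1
    plan = {seg: min_replicas for seg in popularity}
    spare = total_replicas - min_replicas * len(popularity)
    # Hand out the remaining replica slots proportionally to popularity.
    for seg, count in sorted(popularity.items(), key=lambda kv: -kv[1]):
        share = round(spare * count / total_queries)
        used = sum(v - min_replicas for v in plan.values())
        plan[seg] += min(share, spare - used)
    return plan

segments = {"seg-2016-04-01": 120, "seg-2016-03-31": 60, "seg-2016-03-30": 20}
print(replication_plan(segments, total_replicas=9))
# {'seg-2016-04-01': 5, 'seg-2016-03-31': 3, 'seg-2016-03-30': 1}
```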
Assured Cloud Computing Weekly Seminars Slides and Video Fall 2015
- Posted on September 23, 2015 at 11:30 am by whitesel@illinois.edu.
- Categorized Events.
Characterizing and Adapting the Consistency-Latency Tradeoff in Distributed Key-value Stores slides | video
Muntasir Rahman, Computer Science Research Assistant, University of Illinois at Urbana-Champaign
September 16, 2015, 3:00 p.m., 2405 Siebel Center
Abstract: The CAP theorem is a fundamental result that applies to distributed storage systems. In this paper, we first present and prove a probabilistic variation of the CAP theorem. We present probabilistic models to characterize the three important elements of the CAP theorem: consistency (C), availability or latency (A), and partition-tolerance (P). Then, we provide quantitative characterization of the tradeoff among these three elements.
Next, we leverage this result to present a new system, called PCAP, which allows applications running on a single data-center to specify either a latency SLA or a consistency SLA. The PCAP system automatically adapts, in real-time and under changing network conditions, to meet the SLA while optimizing the other C/A metric. We incorporate PCAP into two popular key-value stores — Apache Cassandra and Riak. Our experiments with these two deployments, under realistic workloads, reveal that the PCAP system satisfactorily meets SLAs, and performs close to the bounds dictated by our tradeoff analysis. We also extend PCAP from a single data-center to multiple geo-distributed data-centers.
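PCAP's adaptive mechanism is described in the paper itself; purely as a rough sketch of the control-loop idea, the snippet below adjusts an artificial read-delay knob each epoch, increasing it when the measured fraction of stale reads exceeds the consistency SLA and shaving it back otherwise. The knob choice, step size, and function names are hypothetical.

```python
# Rough sketch of an SLA-driven control loop in the spirit of PCAP: each epoch,
# nudge an artificial read-delay knob so that the measured probability of a
# stale read stays under the consistency SLA, while keeping the knob (and thus
# added latency) as small as possible. Step sizes and names are hypothetical.

def adapt_read_delay(delay_ms, stale_fraction, sla_stale, step_ms=1.0):
    if stale_fraction > sla_stale:
        return delay_ms + step_ms          # too many stale reads: wait longer
    return max(0.0, delay_ms - step_ms)    # SLA met: shave latency back down

# Simulated epochs of measured stale-read fractions under a 5% consistency SLA.
delay = 0.0
for observed in [0.12, 0.09, 0.06, 0.04, 0.03, 0.05, 0.02]:
    delay = adapt_read_delay(delay, observed, sla_stale=0.05)
    print(f"observed={observed:.2f}  new read delay={delay:.1f} ms")
```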
Quantitative Analysis of Consistency in NoSQL Key-value Stores slides
Si Liu, Computer Science Research Assistant, University of Illinois at Urbana-Champaign
September 23, 2015, 3:00 p.m., 1131 Siebel Center
Abstract: The promise of high scalability and availability has prompted many companies to replace traditional relational database management systems (RDBMS) with NoSQL key-value stores. This comes at the cost of relaxed consistency guarantees: key-value stores only guarantee eventual consistency in principle. In practice, however, many key-value stores seem to offer stronger consistency. Quantifying how well consistency properties are met is a non-trivial problem. We address this problem by formally modeling key-value stores as probabilistic systems and quantitatively analyzing their consistency properties by statistical model checking. We present, for the first time, a formal probabilistic model of Apache Cassandra, a popular NoSQL key-value store, and quantify the extent to which Cassandra achieves various consistency guarantees under various conditions. To validate our model, we evaluate multiple consistency properties using two methods and compare the results against each other. The two methods are: (1) an implementation-based evaluation of the source code; and (2) a statistical model checking analysis of our probabilistic model.
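The authors' analysis uses a formal probabilistic model and statistical model checking; purely as a back-of-the-envelope illustration of the kind of quantity being estimated, the Monte Carlo sketch below estimates the probability that a read issued t ms after a write returns the fresh value, under a made-up replica-propagation model (exponential delays, uniformly chosen serving replica). It is not the authors' model of Cassandra.

```python
# Back-of-the-envelope Monte Carlo sketch (not the authors' formal model):
# estimate P(read returns the latest write) when the read is issued t ms after
# the write, assuming each replica applies the update after an exponentially
# distributed delay and one replica, chosen at random, serves the read.
import random

def fresh_read_probability(t_ms, replicas=3, mean_propagation_ms=10.0, trials=20_000):
    fresh = 0
    for _ in range(trials):
        delays = [random.expovariate(1.0 / mean_propagation_ms) for _ in range(replicas)]
        chosen = random.randrange(replicas)      # replica that serves the read
        if delays[chosen] <= t_ms:
            fresh += 1
    return fresh / trials

for t in (1, 5, 10, 50):
    print(f"read {t:>2} ms after write: P(fresh) ~ {fresh_read_probability(t):.3f}")
```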
Monitoring Data Fusion for Intrusion Tolerance slides
Atul Bohara, Computer Science Research Assistant, University of Illinois at Urbana-Champaign
October 7, 2015, 3:00 p.m., 1131 Siebel Center
Abstract: The security and resiliency of computer systems rely heavily on monitoring. The increasing deployment of these monitors, however, generates an unmanageable amount of logs, making intrusion detection inefficient, with high false positive and false negative rates. Moreover, even after deploying a variety of monitors, the system usually lacks a global security view, making it infeasible to utilize the valuable information produced by these monitors for system security.
In this talk, I will present our technique to address these challenges. We will discuss some data-driven techniques to create, maintain, and present higher-level views of the system under consideration. This involves combining data from multiple monitors, which may be at different levels of abstraction such as host level and network level, and learning how the profile of the system evolves over time. Specifically, we will discuss how these higher-level views of the system will help in making decisions such as presence of an intrusion or violation of a security policy. I will also touch upon our plan to experimentally evaluate our approach.
A Quantitative Methodology for Security Monitor Deployment slides | video
Uttam Thakore, Computer Science Research Assistant, University of Illinois at Urbana-Champaign
October 14, 2015, 3:00 p.m., 2405 Siebel Center
Abstract: Despite advances in intrusion detection and prevention systems, attacks on networked computer systems continue to succeed. Intrusion tolerance and forensic analysis, which are required to adequately detect and defend against attacks that succeed, depend on monitors to collect information about possible attacks. Since monitoring can be expensive, however, monitors must be selectively deployed to maximize their overall utility.
In this talk, we present a methodology both to quantitatively evaluate monitor deployments in terms of security goals and to deploy monitors optimally based on cost constraints. We first define a model that describes the system to protect, the monitors that can be deployed, and the relationship between intrusions and data generated by monitors. Then, we define a set of quantitative metrics that quantify the utility and richness of monitor data with respect to intrusion detection and the cost associated with monitor deployment. We describe how a practitioner could characterize intrusion detection requirements using our model. Finally, we use our model and metrics to formulate a method to determine the cost-optimal, maximum-utility placement of monitors. We illustrate the practicality and expressiveness of our approach with an enterprise Web service case study and a scalability analysis of our algorithms.
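The talk formulates monitor placement as a cost-constrained optimization; as a simplified illustration of that idea only, the greedy sketch below picks monitors by marginal coverage of attack indicators per unit cost, stopping at a cost budget. The monitors, indicators, and costs are invented, and the actual methodology is more careful than this greedy heuristic.

```python
# Simplified sketch of cost-constrained monitor selection: greedily pick the
# monitor with the best marginal coverage of attack indicators per unit cost
# until the budget is exhausted. Monitors, indicators, and costs are invented.

def select_monitors(monitors, budget):
    """monitors: {name: (cost, set_of_indicators_covered)}."""
    chosen, covered, spent = [], set(), 0.0
    while True:
        best, best_ratio = None, 0.0
        for name, (cost, indicators) in monitors.items():
            if name in chosen or spent + cost > budget:
                continue
            gain = len(indicators - covered)
            if gain and gain / cost > best_ratio:
                best, best_ratio = name, gain / cost
        if best is None:
            return chosen, covered, spent
        cost, indicators = monitors[best]
        chosen.append(best)
        covered |= indicators
        spent += cost

catalogue = {
    "netflow":   (3.0, {"port-scan", "exfiltration"}),
    "auth-logs": (1.0, {"brute-force", "privilege-escalation"}),
    "syscalls":  (5.0, {"privilege-escalation", "malware-exec", "exfiltration"}),
}
print(select_monitors(catalogue, budget=6.0))
```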
Runtime Monitoring of Hypervisor Integrity slides | video
Cuong Pham, Electrical and Computer Engineering Research Assistant, University of Illinois at Urbana-Champaign
October 21, 2015, 3:00 p.m., 2405 Siebel Center
Abstract: Not unlike other utilities, cloud computing enjoys an economy of sharing: sharing tremendously drives down the cost of computing resources, which in turn attracts more users and providers to get on the bandwagon. This trend generally works well, until computer security enters the cost equation. In this talk, I will describe the Virtual Machine (VM) escape attack, the primary security risk of sharing computing resources via VMs, the current mainstream mechanism that enables most cloud computing offerings. I will then describe our technique, called hShield, to cope with this class of attack. hShield is a proposal to integrate runtime integrity measurement of hypervisors into existing Hardware-Assisted Virtualization (HAV) technologies, such as Intel VT-x or AMD SVM.
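hShield's actual mechanism is hardware-assisted and is not described in the abstract; as a generic illustration of runtime integrity measurement only, the sketch below compares hashes of hypervisor code pages against a known-good baseline and flags any page whose contents have changed. The page contents and addresses are fabricated for the example.

```python
# Generic illustration (not hShield's actual mechanism): compare hashes of the
# hypervisor's read-only code pages against a measurement taken at a known-good
# baseline, flagging any page whose contents have changed at runtime.
import hashlib

def measure(pages):
    """pages: {page_address: bytes}. Returns {page_address: sha256 digest}."""
    return {addr: hashlib.sha256(content).hexdigest() for addr, content in pages.items()}

def changed_pages(baseline, current):
    return [addr for addr, digest in current.items() if baseline.get(addr) != digest]

# Baseline at boot, then a later measurement in which one page was patched.
boot = {0x1000: b"\x90" * 4096, 0x2000: b"\x0f\x01\xc1" + b"\x90" * 4093}
later = dict(boot)
later[0x2000] = b"\xcc" + boot[0x2000][1:]            # simulated tampering
print(changed_pages(measure(boot), measure(later)))   # [8192], i.e. page 0x2000
```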
Towards a Secure Container Framework slides
Mohammad Ahmad, Computer Science Research Assistant, University of Illinois at Urbana-Champaign
November 4, 2015, 3:00 p.m., 2405 Siebel Center
Abstract: Containers are a form of OS-level virtualization that leverage cgroups and namespaces for isolation. They present a lightweight alternative to hypervisor-based virtualization and have already been adopted by several platform as a service (PaaS) cloud providers. While containers offer improved performance, cross-container side-channel attacks shown on Public PaaS clouds raise questions about their security.
In this talk, we present our work towards building a secure container framework with improved container isolation. Specifically, as a first step, we focus on defenses against cache-based side-channel attacks using a combination of software and hardware mechanisms.
Phurti: Application and Network-Aware Flow Scheduling for Multi-Tenant MapReduce Clusters slides | video
Chris Cai, Computer Science Research Assistant, University of Illinois at Urbana-Champaign
November 11, 2015, 3:00 p.m., 2405 Siebel Center
Abstract: Traffic for a typical MapReduce job in a datacenter consists of multiple network flows. Traditionally, network resources have been allocated to optimize network-level metrics such as flow completion time or throughput. Some recent schemes propose using application-aware scheduling which can reduce the average job completion time. However, most of them treat the core network as a black box with sufficient capacity. Even if only one network link in the core network becomes a bottleneck, it can hurt application performance.
We design and implement a centralized flow scheduling framework called Phurti with the goal of improving completion time for jobs in a cluster shared among multiple Hadoop jobs (multi-tenant). Phurti communicates both with the Hadoop framework, to retrieve job-level network traffic information, and with the OpenFlow-based switches, to learn the network topology. Phurti implements a novel heuristic called Smallest Maximum Sequential-traffic First (SMSF) that uses the collected application and network information to schedule traffic for MapReduce jobs. Our evaluation with real Hadoop workloads shows that, compared to application- and network-agnostic scheduling strategies, Phurti improves job completion time for 95% of the jobs, decreases average job completion time by 20% and tail job completion time by 13%, and scales well with cluster size and number of jobs.
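The abstract names the SMSF heuristic without detailing it; as a rough sketch of the ordering the name implies, the snippet below prioritizes jobs by their largest single sequential transfer and grants the link to the highest-priority job first. The job names, flow sizes, and single-link model are invented; the real Phurti also accounts for network topology learned via OpenFlow, which this sketch ignores.

```python
# Rough sketch of Smallest Maximum Sequential-traffic First (SMSF) ordering:
# jobs whose largest sequential transfer is smallest are scheduled first.
# Job names and flow sizes are invented; topology is ignored here.

def smsf_order(jobs):
    """jobs: {job_id: [flow sizes in MB, one per sequential transfer]}."""
    return sorted(jobs, key=lambda j: max(jobs[j]))

def schedule(jobs, link_capacity_mbps):
    """Grant the full link to one job at a time, in SMSF order."""
    timeline, t = [], 0.0
    for job in smsf_order(jobs):
        duration = sum(jobs[job]) * 8 / link_capacity_mbps   # MB -> Mb -> seconds
        timeline.append((job, t, t + duration))
        t += duration
    return timeline

hadoop_jobs = {"sort": [400, 380, 420], "wordcount": [60, 50], "join": [200, 900]}
for job, start, end in schedule(hadoop_jobs, link_capacity_mbps=1000):
    print(f"{job:<9} {start:7.1f}s -> {end:7.1f}s")
```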
Efficient Monitoring in Actor-based Mobile Hybrid Cloud Framework slides | video
Kirill Mechitov, Computer Science Postdoctoral Research Associate, University of Illinois at Urbana-Champaign
November 18, 2015, 3:00 p.m., 2405 Siebel Center
Abstract: Mobile cloud computing (MCC) enables overcoming the energy and processing limitations of mobile devices by leveraging the virtually unlimited, elastic, on-demand resources of the cloud. The increased dynamicity and complexity of hybrid cloud applications making use of both public and private cloud services (e.g., for reasons of privacy and information security) requires open systems that interact with the environment while addressing application-specific constraints, user expectations, and security/privacy policies of multiple systems and organizations. We have developed IMCM, a proof-of-concept implementation of an actor-based framework for mobile hybrid cloud applications. IMCM uses dynamic fine-grained code offloading to achieve significant performance and energy consumption improvements in cloud-backed mobile applications. In this talk, we describe IMCM’s lightweight monitoring framework, capable of capturing dynamic parameters of the execution environment and end-user context, in addition to coarse-grained actions and events of distributed actor-based applications. We demonstrate how the monitoring system can facilitate efficient detection of security policy violations, and generalize these results to distributed actor-based applications supporting code mobility.
Reliability and Security as-a-Service
Zachary Estrada, Electrical and Computer Engineering Research Assistant, University of Illinois at Urbana-Champaign
December 2, 2015, 3:00 p.m., 2405 Siebel Center
Abstract: Infrastructure as-a-Service (IaaS) clouds significantly lower the barrier to obtaining scalable computing resources. Could a similar service be offered to provide on-demand reliability and security monitoring? Cloud computing systems are typically built using virtual machines (VMs), and much work has been done on using that virtualization layer for reliability and security monitoring. In this talk, I will demonstrate how we use whole-system dynamic analysis to inform dynamic, hypervisor-based VM monitoring for providing reliability and security as-a-service.
Trust & Security/Assured Cloud Computing Joint Seminar: Security-Aware Virtual Machine Allocation in the Cloud: A Game Theoretic Approach
- Posted on September 3, 2015 at 11:40 am by whitesel@illinois.edu.
- Categorized Events.
Charles A. Kamhoua, U.S. Air Force Research Laboratory
September 2, 4:00 p.m., 2405 Siebel Center
Research paper presented: Luke Kwiat, Charles A. Kamhoua, Kevin Kwiat, Jian Tang, and Andrew Martin, “Security-aware Virtual Machine Allocation in the Cloud: A Game Theoretic Approach”, IEEE International Conference on Cloud Computing (IEEE Cloud 2015), New York, NY, June 27-July 2, 2015. [full text]
Abstract: With the growth of cloud computing, many businesses, both small and large, are opting to use cloud services, compelled by great cost-savings potential. This is especially true of public cloud computing, which allows for quick, dynamic scalability without much overhead or long-term commitment. However, one of the largest dissuasions from using cloud services comes from the inherent and unknown danger of a shared platform such as the hypervisor. An attacker can compromise a virtual machine (VM) and then go on to compromise the hypervisor; if successful, all virtual machines on that hypervisor can become compromised. This is the problem of negative externalities, where the security of one player affects the security of another. This work shows that there are multiple Nash equilibria for the public cloud security game. It also demonstrates that we can make the players’ Nash equilibrium profile independent of the probability that the hypervisor is compromised, reducing the role externalities play in calculating the equilibrium. Finally, by using our allocation method, the negative externality imposed on other players can be brought to a minimum compared to other common VM allocation methods.
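The paper's game-theoretic analysis is far richer than this, but as a toy numeric illustration of the externality the abstract describes, the sketch below compares a tenant's expected loss when its VM shares a hypervisor with an attractive target versus with low-value tenants. All probabilities and loss values are invented and are not drawn from the paper.

```python
# Toy illustration of the negative externality on shared hypervisors (not the
# paper's equilibrium analysis): a tenant's expected loss depends on who else
# sits on the same hypervisor, because a compromised hypervisor exposes every
# co-resident VM. All probabilities and loss values are invented.

def expected_loss(my_loss, co_resident_values, p_attack_per_value, p_escalate):
    """p_attack_per_value: chance a VM is attacked per unit of its value;
    p_escalate: chance a compromised VM leads to hypervisor compromise."""
    p_no_neighbour_attacked = 1.0
    for value in co_resident_values:
        p_no_neighbour_attacked *= (1.0 - min(1.0, p_attack_per_value * value))
    p_neighbour_attacked = 1.0 - p_no_neighbour_attacked
    return my_loss * p_neighbour_attacked * p_escalate

# My VM is worth 100. Option A: share with a high-value target (value 500);
# Option B: share with two low-value tenants (value 10 each).
print("with high-value neighbour:", expected_loss(100, [500], 0.001, 0.2))      # 10.0
print("with low-value neighbours:", expected_loss(100, [10, 10], 0.001, 0.2))   # ~0.4
```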
Bio: Charles A. Kamhoua received his B.S. in Electronics from the University of Douala (ENSET), Cameroon, in 1999, and the M.S. in Telecommunication and Networking and the Ph.D. in Electrical Engineering from Florida International University in 2008 and 2011, respectively. In 2011, he joined the Cyber Assurance Branch of the U.S. Air Force Research Laboratory (AFRL), Rome, New York, as a National Academies Postdoctoral Fellow and became a Research Electronics Engineer in 2012. Prior to joining AFRL, he was an educator for more than 10 years. His current research interests cover the application of game theory and mechanism design to cyber security and survivability, with over 50 technical publications in prestigious journals and international conferences, including a Best Paper Award at the 2013 IEEE FOSINT-SI. Dr. Kamhoua has been recognized for his scholarship and leadership with numerous prestigious awards, including ten Air Force Notable Achievement Awards, the 2015 AFOSR Windows on the World Visiting Research Fellowship at Oxford University, UK, an AFOSR basic research award of $645K, the 2015 Black Engineer of the Year Award (BEYA), the 2015 NSBE Golden Torch Award as Pioneer of the Year, selection to the 2015 Heidelberg Laureate Forum, a 2011 NSF PIRE award at Fluminense Federal University, Brazil, and the 2008 FAEDS teacher award. He is an advisor for the National Research Council, a Senior Member of IEEE, and a member of ACM, the FIU alumni association, and NSBE.