Assured Cloud Computing Special Seminar: Nomad: Mitigating Arbitrary Cloud Side Channels via Provider-Assisted Migration

  • Posted on January 24, 2017 at 9:09 am by whitesel@illinois.edu.
  • Categorized ACC Speaker.
  • Comments are off for this post.

Winner of the 2016 NSA Best Scientific Cybersecurity Paper Competition

Soo-Jin Moon, Carnegie Mellon University
February 22, 4:00 p.m., 2405 Siebel Center

slides | video

Abstract: Recent studies have shown a range of co-residency side channels that can be used to extract private information from cloud clients. Unfortunately, addressing these side channels often requires detailed attack-specific fixes that demand significant modifications to hardware, client virtual machines (VMs), or hypervisors. Furthermore, these solutions cannot be generalized to future side channels. Barring extreme solutions such as single tenancy, which sacrifices the multiplexing benefits of cloud computing, such side channels will continue to affect critical services. In this work, we present Nomad, a system that offers vector-agnostic defense against known and future side channels. Nomad envisions a provider-assisted VM migration service, applying the moving target defense philosophy to bound the information leakage due to side channels. In designing Nomad, we make four key contributions: (1) a formal model to capture information leakage via side channels in shared cloud deployments; (2) identifying provider-assisted VM migration as a robust defense against arbitrary side channels; (3) a scalable online VM migration heuristic that can handle large datacenter workloads; and (4) a practical implementation in OpenStack. We show that Nomad is scalable to large cloud deployments, achieves near-optimal information leakage subject to constraints on migration overhead, and imposes minimal performance degradation for typical cloud applications such as web services and Hadoop MapReduce.
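
To give a flavor of the moving-target idea, the following toy Python sketch bounds how long any two VMs stay co-resident and migrates one of them once a leakage "budget" is exhausted. It is only an illustration under invented names and thresholds, not Nomad's actual placement algorithm.

    # Toy moving-target placement loop in the spirit of provider-assisted migration
    # (illustrative only): bound how long any two VMs stay co-resident by migrating
    # one of them once a co-residency budget (in epochs) is exhausted.
    from collections import defaultdict
    from itertools import combinations

    def rebalance(placement, budget, co_res=None):
        """placement: {vm: host}; returns (new placement, co-residency counters)."""
        co_res = co_res or defaultdict(int)
        hosts = set(placement.values())
        # 1. Account one epoch of co-residency for every VM pair sharing a host.
        for a, b in combinations(sorted(placement), 2):
            if placement[a] == placement[b]:
                co_res[(a, b)] += 1
        # 2. Migrate one VM of any pair that exceeded its budget, choosing the
        #    destination host with the fewest of that VM's past co-residents.
        for (a, b), t in co_res.items():
            if t >= budget and placement[a] == placement[b]:
                def risk(h):
                    return sum(1 for v, hh in placement.items()
                               if hh == h and co_res.get(tuple(sorted((a, v))), 0) > 0)
                placement[a] = min(hosts - {placement[a]}, key=risk)
                co_res[(a, b)] = 0
        return placement, co_res

    placement = {"vm1": "h1", "vm2": "h1", "vm3": "h2"}
    counters = None
    for epoch in range(3):
        placement, counters = rebalance(placement, budget=2, co_res=counters)
    print(placement)  # vm1 has been migrated away from vm2 once the budget ran out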

Bio: Soo-Jin Moon is a third-year Ph.D. student in the Department of Electrical and Computer Engineering at Carnegie Mellon University, where she is part of CyLab and advised by Vyas Sekar. Her research interests are broadly in the space of Network and Systems Security. Her work has been recognized with the NSA Best Scientific Cybersecurity Paper award (2016) and the CSAW Applied Security Research award (2015). Before joining CMU, she received a bachelor’s degree (2014) in Electrical Engineering from the University of Waterloo, Canada.

Assured Cloud Computing Weekly Seminars Slides and Video Spring 2017

  • Posted on January 19, 2017 at 12:50 pm by whitesel@illinois.edu.
  • Categorized Uncategorized.
  • Comments are off for this post.

Trustworthy Services Built on Event-based Probing for Layered Defense  slides | video
Read Sprabery, Computer Science Research Assistant, University of Illinois at Urbana-Champaign
February 1, 2017, 4:00 p.m., 2405 Siebel Center

Abstract: Numerous event-based probing methods exist for cloud computing environments, allowing a hypervisor to gain insight into guest activities. Such event-based probing has been shown to be useful for detecting attacks, catching system hangs through watchdogs, and inserting exploit detectors before a system can be patched, among other uses. Here, we illustrate how to use such probing for trustworthy logging and highlight some of the challenges that existing event-based probing mechanisms do not address. Challenges include ensuring a probe inserted at a given address is trustworthy despite the lack of attestation available for probes that have been inserted dynamically. We show how probes can be inserted to ensure proper logging of every invocation of a probed instruction. When combined with attested boot of the hypervisor and guest machines, we can ensure the output stream of monitored events is trustworthy. Using these techniques we build a trustworthy log of certain guest-system-call events. The log powers a cloud-tuned Intrusion Detection System (IDS). New event types are identified that must be added to existing probing systems to ensure attempts to circumvent probes within the guest appear in the log. We highlight the overhead penalties paid by guests to increase guarantees of log completeness when faced with attacks on the guest kernel. Promising results (less than 10% overhead for guests) are shown when a guest relaxes the trade-off between log completeness and overhead. Our demonstrative IDS detects common attack scenarios with simple policies built using our guest-behavior recording system.
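
As a rough illustration of how a trusted stream of guest system-call events might feed simple IDS policies, here is a toy Python sketch; the event records and policies are invented and do not reflect the system described in the talk.

    # Minimal sketch of policy checks over a trusted stream of guest syscall events.
    # Event fields and policies are illustrative assumptions only.
    events = [
        {"pid": 101, "syscall": "open",   "arg": "/etc/shadow"},
        {"pid": 101, "syscall": "execve", "arg": "/bin/sh"},
        {"pid": 202, "syscall": "open",   "arg": "/var/log/app.log"},
    ]

    policies = [
        ("read of shadow file", lambda e: e["syscall"] == "open" and e["arg"] == "/etc/shadow"),
        ("shell spawned",       lambda e: e["syscall"] == "execve" and e["arg"].endswith("sh")),
    ]

    for event in events:
        for name, matches in policies:
            if matches(event):
                print(f"ALERT [{name}]: pid={event['pid']} {event['syscall']}({event['arg']})")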

Prioritization of Cloud System Monitoring  
Uttam Thakore, Computer Science Research Assistant, University of Illinois at Urbana-Champaign
February 15, 2017, 4:00 p.m., 2405 Siebel Center

Abstract: Rapid identification of and response to incidents is a costly but necessary part of ensuring the reliability and security of large-scale enterprise cloud systems. This functionality requires efficient analysis of heterogeneous monitor and log data, which becomes increasingly challenging as systems grow in size and complexity. In this talk, we describe a novel method for prioritizing the collection and analysis of monitor data in enterprise clouds for incident analysis. In particular, we use statistical correlation analysis to construct a graph of time-lagged correlation relationships between heterogeneous data sources in the system, and use the strength of correlation along paths in the graph to prioritize which data sources an administrator should analyze when performing incident analysis. We discuss our current results in evaluating our approach on incidents in an IBM enterprise cloud and how well our approach identifies the data sources that provide evidence of behavior that causes the incidents.
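
A minimal Python sketch of the prioritization idea, under simplifying assumptions (synthetic time series, Pearson correlation as the measure, and a fixed threshold): compute the best time-lagged correlation between monitor streams, keep strong edges, and rank data sources by how strongly they lead into the metric implicated in the incident.

    # Sketch of correlation-based prioritization with synthetic data (illustrative only).
    import numpy as np
    from itertools import permutations

    def lagged_corr(x, y, max_lag=5):
        """Best Pearson correlation between x at time t and y at time t+lag."""
        best = 0.0
        for lag in range(1, max_lag + 1):
            c = np.corrcoef(x[:-lag], y[lag:])[0, 1]
            if abs(c) > abs(best):
                best = c
        return best

    rng = np.random.default_rng(0)
    base = rng.normal(size=200)
    series = {
        "disk_io":  base + rng.normal(scale=0.1, size=200),
        "app_lat":  np.roll(base, 3) + rng.normal(scale=0.1, size=200),  # lags disk_io by 3
        "cpu_temp": rng.normal(size=200),                                # unrelated source
    }

    edges = {(a, b): lagged_corr(series[a], series[b]) for a, b in permutations(series, 2)}
    strong = {e: c for e, c in edges.items() if abs(c) > 0.5}

    # Prioritize sources by the strength of their lagged correlation into "app_lat".
    priority = sorted(series, key=lambda s: -max(
        (abs(c) for (a, b), c in strong.items() if a == s and b == "app_lat"), default=0.0))
    print(priority)  # disk_io first: its past values predict the incident metric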

Label-based Defenses Against Cache Side Channel Attacks in PaaS Cloud Infrastructure  slides | video
Konstantin Evchenko, Computer Science Research Assistant, University of Illinois at Urbana-Champaign
March 15, 2017, 4:00 p.m., 2405 Siebel Center

Abstract: Cache side channels pose a serious risk to cloud computing environments because of multi-tenancy. The move to containers exacerbates this risk, since containers permit denser multi-tenancy than VM-based deployments.

We introduce a label-based defense for protecting against cache-based side channels that target container-based PaaS infrastructures. Our approach is a novel combination of hardware-enforced spatial separation and software-enforced temporal separation of labeled containers on shared resources.

We present the implementation of this defense as a series of modifications to popular existing platforms that are used to deploy cloud services. Unlike many previous works, our approach does not require modifications to existing hardware or client software, allows hyperthreading to remain enabled, and can be quickly deployed in the cloud as part of a scheduled software-upgrade routine. We evaluate both the effectiveness and the overheads of our approach using representative cloud workloads.
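
The temporal-separation half of the approach can be pictured with the toy scheduler below, which only co-schedules containers that share a tenant label on sibling hyperthreads within a quantum; the hardware-enforced spatial separation (e.g., per-label cache partitioning) is noted only as a comment. The names and policy are illustrative, not the implementation described in the talk.

    # Toy scheduler enforcing label-based temporal separation on shared cores.
    from collections import deque

    containers = [("c1", "tenantA"), ("c2", "tenantB"), ("c3", "tenantA"), ("c4", "tenantB")]
    core_pairs = ["core0+1", "core2+3"]   # sibling hyperthreads share L1/L2 caches

    def schedule_quantum(ready):
        """Assign each core pair only containers that share one label this quantum."""
        assignment = {}
        for pair in core_pairs:
            if not ready:
                break
            name, label = ready.popleft()
            slot = [name]
            # Fill the sibling thread only with a container of the *same* label.
            for other in list(ready):
                if other[1] == label:
                    ready.remove(other)
                    slot.append(other[0])
                    break
            assignment[pair] = (label, slot)
        # Spatial separation (e.g., cache-way partitioning per label) would be
        # configured here for each pair; omitted in this sketch.
        return assignment

    queue = deque(containers)
    while queue:
        print(schedule_quantum(queue))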

Exploring Design Alternatives for the RAMP Transaction Systems Through Statistical Model Checking   slides | video
Si Liu, Computer Science Research Assistant, University of Illinois at Urbana-Champaign
March 29, 2017, 4:00 p.m., 2405 Siebel Center

Abstract: In this work we explore and extend the design space of the recent RAMP (Read Atomic Multi-Partition) transaction system for large-scale partitioned data stores. Arriving at a mature distributed system design through implementation and experimental validation is a labor-intensive task, which means that only a limited number of design alternatives can be explored in practice. The developers of RAMP did implement and validate three design alternatives: RAMP-Fast, RAMP-Small, and RAMP-Hybrid. They also sketched three additional designs and presented some conjectures about them. This work addresses two questions: (1) How can the design space of a distributed transaction system such as RAMP be systematically explored with modest effort, so that substantial knowledge about design alternatives can be gained before designs are implemented? and (2) How realistic and informative are the results of such design explorations? We answer the first question by: (i) formally modeling eight RAMP-like designs (five by the RAMP developers and three of our own) in Maude as probabilistic rewrite theories, and (ii) using statistical model checking of those models to analyze key performance metrics such as throughput, average latency, and actual degrees of strong consistency and read atomicity. We answer the second question by showing that the quantitative analyses thus obtained for these models: (i) are consistent with the experimental results obtained by the RAMP developers for their implemented designs; (ii) confirm the conjectures made by the RAMP developers for their other three unimplemented designs; and (iii) uncover a new design, our proposed RAMP-Faster design, that outperforms all other designs for several key properties, such as latency, throughput, and consistency, while providing read atomicity for 99% of the transactions.
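
The statistical model checking workflow can be pictured with a toy Monte Carlo estimate in Python: sample many runs of a simple probabilistic model and report the fraction satisfying a property, with a confidence interval. The real analyses use Maude probabilistic rewrite theories; the model and parameters below are invented for illustration.

    # Toy statistical model checking: estimate how often a "read atomic" property
    # holds across sampled runs, with a normal-approximation confidence interval.
    import math, random

    def run_once(p_stale=0.02, reads_per_txn=2):
        """One simulated transaction: atomic iff no read observes a stale fragment."""
        return all(random.random() > p_stale for _ in range(reads_per_txn))

    def estimate(runs=20_000):
        hits = sum(run_once() for _ in range(runs))
        p = hits / runs
        half = 1.96 * math.sqrt(p * (1 - p) / runs)   # 95% CI half-width
        return p, (p - half, p + half)

    random.seed(1)
    p, ci = estimate()
    print(f"P(read atomic) ~ {p:.4f}, 95% CI {ci[0]:.4f}-{ci[1]:.4f}")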

Trustworthy Services Built on Event-based Probing for Layered Defense   slides | video
Read Sprabery, Computer Science Research Assistant, University of Illinois at Urbana-Champaign
March 29, 2017, 4:00 p.m., 2405 Siebel Center

Abstract: Numerous event-based probing methods exist for cloud computing environments, allowing a hypervisor to gain insight into guest activities. Such event-based probing has been shown to be useful for detecting attacks, catching system hangs through watchdogs, and inserting exploit detectors before a system can be patched, among other uses. Here, we illustrate how to use such probing for trustworthy logging and highlight some of the challenges that existing event-based probing mechanisms do not address. Challenges include ensuring a probe inserted at a given address is trustworthy despite the lack of attestation available for probes that have been inserted dynamically. We show how probes can be inserted to ensure proper logging of every invocation of a probed instruction. When combined with attested boot of the hypervisor and guest machines, we can ensure the output stream of monitored events is trustworthy. Using these techniques we build a trustworthy log of certain guest-system-call events. The log powers a cloud-tuned Intrusion Detection System (IDS).

This talk will focus on the algorithm for proper insertion of dynamic probes and on the structure and effectiveness of layered policies.

Formalizing Hardware-Assisted Virtualization Behavior to Verify VM Monitoring Frameworks  slides | video (audio only)
Lavin Devnani, Electrical and Computer Engineering Research Assistant, University of Illinois at Urbana-Champaign
April 5, 2017, 4:00 p.m., 2405 Siebel Center

Abstract: This paper presents an approach to verify virtual machine monitoring frameworks by formalizing guest and hypervisor behavior. We model components of guest environments that are exposed to monitoring frameworks during VM transitions. In addition, we model execution flows at the guest user and guest kernel levels that lead to VM transitions. Explicit-state model checking and state space searches are used to verify monitor properties specified as LTL formulae. We apply this model to verify correctness and security properties of monitors specified under frameworks like hprobes and HyperTap.
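
A tiny explicit-state search illustrates the flavor of the verification: enumerate reachable abstract states of a guest/hypervisor transition system and check a safety property on each. The states, transitions, and property below are invented and far simpler than the models in the paper.

    # Minimal explicit-state reachability check over an abstract transition system,
    # verifying the invented safety property "kernel mode is never reached while
    # monitoring is off".
    from collections import deque

    INITIAL = ("guest_user", "monitor_on")
    TRANSITIONS = {
        ("guest_user", "monitor_on"):   [("guest_kernel", "monitor_on")],
        ("guest_kernel", "monitor_on"): [("vm_exit", "monitor_on"), ("guest_user", "monitor_on")],
        ("vm_exit", "monitor_on"):      [("guest_user", "monitor_on")],
    }

    def violates(state):
        mode, monitor = state
        return mode == "guest_kernel" and monitor == "monitor_off"

    def check():
        seen, frontier = {INITIAL}, deque([INITIAL])
        while frontier:
            state = frontier.popleft()
            if violates(state):
                return f"violation reachable: {state}"
            for nxt in TRANSITIONS.get(state, []):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return "property holds on all reachable states"

    print(check())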

Getafix: Workload-aware Distributed Interactive Analytics  slides | video
Mainak Ghosh, Computer Science Research Assistant, University of Illinois at Urbana-Champaign
April 12, 2017, 4:00 p.m., 2405 Siebel Center

Abstract: Distributed interactive analytics engines (Druid, Redshift, Pinot) need to achieve low query latency while using the least storage space. This paper presents a solution to the problem of replication of data blocks and routing of queries. Our techniques decide the replication level of individual data blocks (based on popularity and access counts), as well as output optimal placement patterns for such data blocks. For the static version of the problem (a given set of queries accessing some segments), our techniques are provably optimal in both storage and query latency. For the dynamic version of the problem, we build a system called Getafix that dynamically tracks data block popularity, adjusts replication levels, dynamically routes queries, and garbage-collects less useful data blocks. We implemented Getafix in Druid, the most popular open-source interactive analytics engine. Our experiments use both synthetic traces and production traces from Yahoo! Inc.’s production Druid cluster. Compared to existing techniques, Getafix either reduces the storage space used by up to 3.5x while achieving comparable query latency, or improves query latency by up to 60% while using comparable storage.
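
A simplified Python sketch of the replicate-by-popularity idea: derive a per-segment replication level from access counts and place replicas greedily on the least-loaded servers. The popularity rule and numbers are invented and are not Getafix's exact algorithm.

    # Sketch: replication levels from access counts, greedy least-loaded placement.
    import heapq, math

    access_counts = {"seg1": 900, "seg2": 90, "seg3": 9}   # popularity over a window
    servers = ["s1", "s2", "s3"]
    MAX_REPLICAS = len(servers)

    def replication_level(count, base=10):
        """More popular segments get (logarithmically) more replicas."""
        return min(MAX_REPLICAS, max(1, int(math.log10(count / base)) + 1))

    load = [(0, s) for s in servers]          # (replica count, server) min-heap
    heapq.heapify(load)
    placement = {}
    for seg, count in sorted(access_counts.items(), key=lambda kv: -kv[1]):
        picked = [heapq.heappop(load) for _ in range(replication_level(count))]
        placement[seg] = [s for _, s in picked]
        for n, s in picked:
            heapq.heappush(load, (n + 1, s))
    print(placement)   # seg1 gets 2 replicas; seg2 and seg3 get 1 each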

Deep Learning Inference as a Service  slides | video
Mohammad Babaeizadeh, Computer Science Research Assistant, University of Illinois at Urbana-Champaign
May 3, 2017, 4:00 p.m., 2405 Siebel Center

Abstract: Deep Learning technologies are showing up in a vast number of industrial areas, from real-time speech translation and smart cities to self-driving cars and drug discovery. The increasing number of these models being utilized for numerous applications demands a scalable, highly efficient inference mechanism capable of serving an ever-growing number of queries.

However, unlike deep model development and training, which are supported by sophisticated infrastructure and systems, model deployment and inference have received little attention. Currently, developers must combine the necessary pieces from various system components to support inference, and often opt out of shared resources, which makes the whole process highly error-prone and costly.

Compared to other computations in cloud computing, serving a model is unique in several ways. First, it is compute-intensive and often needs a coprocessor, which results in a more complex framework. Second, unlike training, it is a real-time service with a tight service-level objective. Lastly, inference on coprocessors such as GPUs has a non-linear performance model with respect to input size, which calls for a more sophisticated scheduler.

In this talk, I will discuss the intriguing computational aspects of deep neural networks at inference time and how they can be exploited to design and implement a scalable deep learning inference service. Such a cloud-based service enables customers to easily deploy pre-trained deep models at scale while maintaining high utilization of available resources to minimize the service cost.
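
To make the scheduling point concrete, here is a toy dynamic-batching loop: requests queue up and are dispatched as one batch either when the batch is full or when waiting longer would risk the oldest request's latency objective. The latency model and all numbers are invented for illustration.

    # Toy dynamic batching for accelerator inference: batch latency grows sub-linearly
    # with batch size, so batching raises utilization, but the oldest request must
    # still meet its service-level objective.
    import collections, itertools

    SLO_MS = 50.0
    MAX_BATCH = 8

    def batch_latency_ms(batch_size):
        # Invented non-linear cost model: fixed launch cost plus per-item cost.
        return 10.0 + 2.5 * batch_size

    def dispatch(queue, now_ms):
        """Return a batch to run now, or None to keep waiting for more requests."""
        if not queue:
            return None
        oldest_wait = now_ms - queue[0][1]
        full = len(queue) >= MAX_BATCH
        would_miss_slo = oldest_wait + batch_latency_ms(len(queue) + 1) > SLO_MS
        if full or would_miss_slo:
            return [queue.popleft() for _ in range(min(MAX_BATCH, len(queue)))]
        return None

    queue = collections.deque()
    clock = itertools.count(step=5)            # a request arrives every 5 ms
    for i, now in zip(range(12), clock):
        queue.append((f"req{i}", now))
        batch = dispatch(queue, now)
        if batch:
            print(f"t={now}ms run batch of {len(batch)}: {[r for r, _ in batch]}")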

Assured Cloud Computing Weekly Seminars Slides and Video Fall 2016

  • Posted on August 22, 2016 at 3:33 pm by whitesel@illinois.edu.
  • Categorized Events.
  • Comments are off for this post.

Digital Forensic Analysis: From Low-Level Events to High-Level Actions  slides | video
Imani Palmer, Computer Science Research Assistant, University of Illinois at Urbana-Champaign
August 31, 2016, 4:00 p.m., 2405 Siebel Center

Abstract: As digital forensic science advances, it is important to be able to rigorously determine the conclusions that can be drawn from electronic evidence. The process of analyzing digital evidence is based on the individual knowledge of an examiner. This framework will provide examiners with an analysis toolkit that maps low-level events to user actions. It will handle the analysis phase of the digital forensic investigative process and will receive information from digital forensic tools. We have implemented various methods for developing these mappings. We evaluate our prototype and discuss the possibility of applying it in real-world scenarios.
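
A minimal rule-based sketch of the mapping idea, with invented event types and rules: a higher-level user action is inferred when all of its supporting low-level artifacts are observed.

    # Toy mapping of low-level forensic events to a higher-level user action.
    events = [
        {"type": "browser_history", "detail": "download page visited"},
        {"type": "file_created",    "detail": "C:/Users/x/Downloads/tool.exe"},
        {"type": "prefetch_entry",  "detail": "TOOL.EXE"},
    ]

    RULES = {
        "user downloaded and ran a program": {"browser_history", "file_created", "prefetch_entry"},
        "user deleted evidence":             {"recycle_bin_entry", "file_deleted"},
    }

    observed = {e["type"] for e in events}
    for action, required in RULES.items():
        if required <= observed:        # all supporting artifacts are present
            print("inferred action:", action)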

An Indirect Attack on Computing Infrastructure through Targeted Alteration on Environmental Control  slides | video
Keywhan Chung, Electrical and Computer Engineering Research Assistant, University of Illinois at Urbana-Champaign
September 28, 2016, 4:00 p.m., 2405 Siebel Center

Abstract: With increasing concern about securing computing infrastructure, a massive amount of effort has been put into hardening it. However, relatively little effort has gone into considering the surrounding cyber-physical systems that the infrastructure heavily relies on. In this talk, I present how a malicious user can attack a large computing infrastructure by compromising the environmental control systems in the facilities that host the compute nodes. The talk will cover a study of failures of a computing infrastructure related to problems in its cooling system and demonstrate, using real data, that the control systems that provide chilled water can be used as entry points by an attacker to indirectly compromise computing functionality through the orchestration of clever alterations of sensing and control devices. In this way, the attacker does not leave any trace of his or her malicious activity on the nodes of the cluster. Failures of the cooling systems can trigger unrecoverable failure modes from which the system can be recovered only after service interruption and manual intervention.

Lateral Movement Detection Using Distributed Data Fusion  slides | video
Atul Bohara, Electrical and Computer Engineering Research Assistant, University of Illinois at Urbana-Champaign
September 28, 2016, 4:00 p.m., 2405 Siebel Center

Abstract: Attackers often attempt to move laterally from host to host, infecting them until an overall goal is achieved. One possible defense against this strategy is to detect such coordinated and sequential actions by fusing data from multiple sources. In this paper, we propose a framework for distributed data fusion that specifies the communication architecture and data transformation functions. Then, we use this framework to specify an approach for lateral movement detection that uses host-level process communication graphs to infer network connection causations. The connection causations are then aggregated into system-wide host-communication graphs that expose possible lateral movement in the system. In order to provide a balance between the resource usage and the robustness of the fusion architecture, we propose a multilevel fusion hierarchy that uses different clustering techniques. We evaluate the scalability of the hierarchical fusion scheme in terms of storage overhead, number of message updates sent, fairness of resource sharing among clusters, and quality of local graphs. Finally, we implement a host-level monitor prototype to collect connection causations, and evaluate its overhead. The results show that our approach provides an effective method to detect lateral movement between hosts, and can be implemented with acceptable overhead.
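
The aggregation step can be sketched as follows, with invented host-to-host causation edges: build a system-wide graph from causally linked connections and flag multi-hop chains as possible lateral movement.

    # Sketch: aggregate inferred connection causations into a host graph and
    # enumerate multi-hop causal chains (edges are invented; the real system
    # derives them from host-level process communication data).
    from collections import defaultdict

    causations = [("ws1", "ws2"), ("ws2", "fileserver"), ("fileserver", "dbserver"), ("ws3", "printer")]

    graph = defaultdict(list)
    for src, dst in causations:
        graph[src].append(dst)

    def chains(node, path=None):
        """Depth-first enumeration of simple causal chains starting at node."""
        path = (path or []) + [node]
        if not graph[node]:
            yield path
        for nxt in graph[node]:
            if nxt not in path:                   # avoid cycles
                yield from chains(nxt, path)

    for start in list(graph):
        for chain in chains(start):
            if len(chain) >= 3:                   # multi-hop: possible lateral movement
                print(" -> ".join(chain))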

Cloud Security Certifications: Are They Adequate to Provide Baseline Protection?  slides | video
Carlo Di-Giulio, Library and Information Science Research Assistant, University of Illinois at Urbana-Champaign
October 5, 2016, 4:00 p.m., 2405 Siebel Center

Abstract: Information security certifications, compliance with standards, and third-party assessment are among the most commonly used approaches to reassure potential and current users of cloud computing services. While at least two prominent examples of such certification/audit-based security controls exist (i.e., ISO/IEC 27001 and SOC 2), the US government has created new requirements for federal agencies through new regulations and initiatives aimed at improving the security of cloud services offered by industry. In this presentation we will review and evaluate the security controls and procedures required by the Federal Risk and Authorization Management Program (FedRAMP), and compare FedRAMP to existing certifications for completeness and adequacy. Our research contextualizes the adoption and development of FedRAMP, and offers a big-picture view of the performance of ISO/IEC 27001, SOC 2, and FedRAMP, questioning the level of protection they provide by comparing them to one another.

Energy-Aware, Security-Conscious Code Offloading for the Mobile Cloud  slides | video
Kirill Mechitov, Computer Science Postdoc, University of Illinois at Urbana-Champaign
October 12, 2016, 4:00 p.m., 2405 Siebel Center

Abstract: Mobile cloud computing (MCC) enables overcoming the energy and processing limitations of mobile devices by leveraging the virtually unlimited, elastic, on-demand resources of the cloud. The increased dynamicity and complexity of hybrid cloud applications making use of both public and private cloud services (e.g., for reasons of privacy and information security) requires open systems that interact with the environment while addressing application-specific constraints, user expectations, and security/privacy policies of multiple systems and organizations. We have developed IMCM, a proof-of-concept implementation of an actor-based framework for mobile hybrid cloud applications. IMCM uses dynamic fine-grained code offloading to achieve significant performance and energy consumption improvements in cloud-backed mobile applications, while respecting specified privacy and security policies. In this talk, we discuss the energy monitoring and estimation aspects of the IMCM framework.
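
A toy version of the offloading decision, with invented cost numbers and a simplified policy check: offload a task when the weighted time-plus-energy cost of remote execution (including transfer) beats local execution and no privacy policy forbids it.

    # Toy offloading decision in the spirit of fine-grained code offloading.
    # All cost numbers, weights, and the policy check are illustrative assumptions.
    def should_offload(local_ms, local_mj, cloud_ms, transfer_ms, transfer_mj,
                       data_is_private, weight_energy=0.5):
        """Offload when the weighted time+energy cost is lower and policy allows it."""
        if data_is_private:                      # security/privacy policy: keep local
            return False
        local_cost = local_ms + weight_energy * local_mj
        remote_cost = (cloud_ms + transfer_ms) + weight_energy * transfer_mj
        return remote_cost < local_cost

    # Example: a heavy image-processing task on a phone.
    print(should_offload(local_ms=900, local_mj=600, cloud_ms=120,
                         transfer_ms=200, transfer_mj=80, data_is_private=False))  # True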

Using Reachability Logic to Verify Distributed Systems  slides | video
Stephen Skeirik, Computer Science Research Assistant, University of Illinois at Urbana-Champaign
November 2, 2016, 4:00 p.m., 2405 Siebel Center

Abstract: Model checking is a method traditionally used to verify distributed systems, but it suffers from the limitation that it requires concrete initial states.  This applies in particular to ACC distributed systems, where verification efforts so far have mostly used model checking.  To gain higher levels of assurance, deductive verification, not for some concrete initial states, but for possibly infinite sets of initial states, is needed. In this presentation, we describe a recently developed logic, reachability logic, and show how it can be used to deductively verify distributed systems over a possibly infinite number of initial states. We conclude by examining what work has already been done and possible future directions, with special emphasis on deductive verification of ACC systems.

Trust & Security/Assured Cloud Computing Joint Seminar: Application of Game Theory to High Assurance Cloud Computing

  • Posted on August 12, 2016 at 2:39 pm by whitesel@illinois.edu.
  • Categorized ACC Speaker.
  • Comments are off for this post.

Charles Kamhoua

Charles A. Kamhoua, U.S. Air Force Research Laboratory
September 20, 4:00 p.m., Coordinated Science Laboratory Auditorium (B02 CSL)

slides | video

Abstract: The growth of cloud computing has spurred many entities, both small and large, to use cloud services for cost savings. Public cloud computing has allowed for quick, dynamic scalability without many overhead or long-term commitments. However, concern over cyber security is the main reason many large organizations with sensitive information such as the Department of Defense have been reluctant to join a public cloud. This is due to three challenging problems. First, the current cloud infrastructures lack provable trustworthiness. Integrating Trusted Computing (TC) technologies with cloud infrastructure is a promising method for verifying the cloud’s behaviors, which may in turn facilitate provable trustworthiness. Second, public clouds have the inherent and unknown danger stemming from a shared platform – namely, the hypervisor. An attacker that subverts a virtual machine (VM) and then goes on to compromise the hypervisor can readily compromise all virtual machines on that hypervisor. We propose a security-aware virtual machine placement scheme in the cloud. Third, a sophisticated attack in a cloud has to be understood as a sequence of events that calls for the detection/response model to encompass observations from varying dimensions. We discuss a method to automatically determine the best response, given the observations on the system states from a set of monitors.

Game theory provides a rich mathematical tool to analyze conflict within strategic interactions and thereby gain a deeper understanding of cloud security issues. Theoretical constructs or mathematical abstractions provide a rigorous scientific basis for cyber security because they allow for reasoning quantitatively about cyber-attacks. This talk will address the three cloud security challenging problems identified above and report on our latest findings from this body of work.

Bio: Charles A. Kamhoua received the BS in electronics from the University of Douala (ENSET), Cameroon, in 1999, and the MS in telecommunication and networking and the PhD in electrical engineering from Florida International University (FIU), in 2008 and 2011, respectively. In 2011, he joined the Cyber Assurance Branch of the U.S. Air Force Research Laboratory (AFRL), Rome, New York, as a National Academies Postdoctoral Fellow and became a Research Electronics Engineer in 2012. Prior to joining AFRL, he was an educator for more than 10 years. His current research interests include the application of game theory to cyber security, survivability, cloud computing, hardware Trojans, online social networks, wireless communication, and cyber threat information sharing. He has more than 60 technical publications in prestigious journals and international conferences, along with a Best Paper Award at the 2013 IEEE FOSINT-SI. He has mentored more than 40 young scholars at AFRL, including Summer Faculty Fellows, postdocs, and students. He has been invited to give more than 30 keynote and distinguished speeches in the USA and abroad. He has been recognized for his scholarship and leadership with numerous prestigious awards, including 30 Air Force Notable Achievement Awards, the 2016 FIU Charles E. Perry Young Alumni Visionary Award, the 2015 AFOSR Windows on the World Visiting Research Fellowship at Oxford University, UK, an AFOSR Basic Research Award, the 2015 Black Engineer of the Year Award (BEYA), the 2015 NSBE Golden Torch Award – Pioneer of the Year, selection to the 2015 Heidelberg Laureate Forum, and the 2011 NSF PIRE Award at the Fluminense Federal University, Brazil. He is currently an advisor for the National Research Council, a member of ACM, the FIU alumni association, and NSBE, and a senior member of IEEE.

Assured Cloud Computing Weekly Seminars Slides and Video Fall 2015

  • Posted on September 23, 2015 at 11:30 am by whitesel@illinois.edu.
  • Categorized Events.
  • Comments are off for this post.

Characterizing and Adapting the Consistency-Latency Tradeoff in Distributed Key-value Stores  slides | video
Muntasir Rahman, Computer Science Research Assistant, University of Illinois at Urbana-Champaign
September 16, 2015, 3:00 p.m., 2405 Siebel Center

Abstract: The CAP theorem is a fundamental result that applies to distributed storage systems. In this paper, we first present and prove a probabilistic variation of the CAP theorem. We present probabilistic models to characterize the three important elements of the CAP theorem: consistency (C), availability or latency (A), and partition-tolerance (P). Then, we provide quantitative characterization of the tradeoff among these three elements.

Next, we leverage this result to present a new system, called PCAP, which allows applications running on a single data-center to specify either a latency SLA or a consistency SLA. The PCAP system automatically adapts, in real-time and under changing network conditions, to meet the SLA while optimizing the other C/A metric. We incorporate PCAP into two popular key-value stores — Apache Cassandra and Riak. Our experiments with these two deployments, under realistic workloads, reveal that the PCAP system satisfactorily meets SLAs, and performs close to the bounds dictated by our tradeoff analysis. We also extend PCAP from a single data-center to multiple geo-distributed data-centers.
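
The adaptive control loop can be pictured with the toy sketch below, which tunes a single consistency knob (e.g., an artificial read delay) to keep observed latency under the SLA; the adaptation rule and numbers are invented, not PCAP's actual controller.

    # Sketch of an SLA-driven control loop: trade consistency for latency when the
    # latency SLA is at risk, and buy consistency back when there is headroom.
    def adapt(knob, observed_latency_ms, latency_sla_ms, step=1):
        """Multiplicative-decrease / additive-increase on the consistency knob."""
        if observed_latency_ms > latency_sla_ms:
            return max(0, knob // 2)      # latency SLA at risk: give up some consistency
        return knob + step                # headroom available: buy back consistency

    knob = 8                              # e.g., extra milliseconds spent waiting on replicas
    for latency in [40, 55, 70, 45, 42]:  # observed per-interval read latencies (ms)
        knob = adapt(knob, latency, latency_sla_ms=60)
        print(f"observed={latency}ms -> knob={knob}")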

Quantitative Analysis of Consistency in NoSQL Key-value Stores  slides
Si Liu, Computer Science Research Assistant, University of Illinois at Urbana-Champaign
September 23, 2015, 3:00 p.m., 1131 Siebel Center

Abstract: The promise of high scalability and availability has prompted many companies to replace traditional relational database management systems (RDBMS) with NoSQL key-value stores. This comes at the cost of relaxed consistency guarantees: key-value stores only guarantee eventual consistency in principle. In practice, however, many key-value stores seem to offer stronger consistency. Quantifying how well consistency properties are met is a non-trivial problem. We address this problem by formally modeling key-value stores as probabilistic systems and quantitatively analyzing their consistency properties by statistical model checking. We present for the first time a formal probabilistic model of Apache Cassandra, a popular NoSQL key-value store, and quantify how well Cassandra achieves various consistency guarantees under various conditions. To validate our model, we evaluate multiple consistency properties using two methods and compare them against each other. The two methods are: (1) an implementation-based evaluation of the source code; and (2) a statistical model checking analysis of our probabilistic model.

Monitoring Data Fusion for Intrusion Tolerance  slides
Atul Bohara, Computer Science Research Assistant, University of Illinois at Urbana-Champaign
October 7, 2015, 3:00 p.m., 1131 Siebel Center

Abstract: The security and resiliency of computer systems rely heavily on monitoring. The increasing deployment of monitors, however, generates an unmanageable amount of logs, making intrusion detection inefficient, with high false-positive and false-negative rates. Moreover, even after deploying a variety of monitors, the system usually lacks a global security view, making it infeasible to utilize the valuable information produced by these monitors for system security.

In this talk, I will present our technique to address these challenges. We will discuss data-driven techniques to create, maintain, and present higher-level views of the system under consideration. This involves combining data from multiple monitors, which may be at different levels of abstraction, such as the host level and the network level, and learning how the profile of the system evolves over time. Specifically, we will discuss how these higher-level views of the system help in making decisions such as determining the presence of an intrusion or a violation of a security policy. I will also touch upon our plan to experimentally evaluate our approach.

A Quantitative Methodology for Security Monitor Deployment  slides | video
Uttam Thakore, Computer Science Research Assistant, University of Illinois at Urbana-Champaign

October 14, 2015, 3:00 p.m., 2405 Siebel Center

Abstract: Despite advances in intrusion detection and prevention systems, attacks on networked computer systems continue to succeed. Intrusion tolerance and forensic analysis, which are required to adequately detect and defend against attacks that succeed, depend on monitors to collect information about possible attacks. Since monitoring can be expensive, however, monitors must be selectively deployed to maximize their overall utility.

In this talk, we present a methodology both to quantitatively evaluate monitor deployments in terms of security goals and to deploy monitors optimally based on cost constraints. We first define a model that describes the system to protect, the monitors that can be deployed, and the relationship between intrusions and data generated by monitors. Then, we define a set of quantitative metrics that quantify the utility and richness of monitor data with respect to intrusion detection and the cost associated with monitor deployment. We describe how a practitioner could characterize intrusion detection requirements using our model. Finally, we use our model and metrics to formulate a method to determine the cost-optimal, maximum-utility placement of monitors. We illustrate the practicality and expressiveness of our approach with an enterprise Web service case study and a scalability analysis of our algorithms.
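
A greedy utility-per-cost heuristic gives a rough feel for the cost-constrained selection step; the paper's formulation is richer, and the monitors, utilities, and costs below are invented.

    # Sketch of cost-constrained monitor selection via a greedy utility/cost ranking.
    monitors = {
        "netflow":    {"utility": 8, "cost": 3},
        "syslog":     {"utility": 6, "cost": 1},
        "ids_sensor": {"utility": 9, "cost": 5},
        "db_audit":   {"utility": 4, "cost": 2},
    }

    def choose(monitors, budget):
        chosen, spent = [], 0
        ranked = sorted(monitors, key=lambda m: -monitors[m]["utility"] / monitors[m]["cost"])
        for m in ranked:
            if spent + monitors[m]["cost"] <= budget:
                chosen.append(m)
                spent += monitors[m]["cost"]
        return chosen, spent

    print(choose(monitors, budget=6))   # (['syslog', 'netflow', 'db_audit'], 6)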

Runtime Monitoring of Hypervisor Integrity  slides | video
Cuong Pham, Electrical and Computer Engineering Research Assistant, University of Illinois at Urbana-Champaign
October 21, 2015, 3:00 p.m., 2405 Siebel Center

Abstract: Not unlike other utilities, cloud computing enjoys economies of sharing. Sharing tremendously drives down the cost of computing resources, which in turn attracts more and more users and providers to get on the bandwagon. This trend generally works well, until computer security enters the cost equation. In this talk, I will describe the Virtual Machine (VM) Escape Attack, the primary security risk of sharing computing resources via VMs, the current mainstream mechanism that enables most cloud computing offerings. After that, I will describe our technique, called hShield, for coping with this class of attack. hShield is a proposal to integrate runtime integrity measurement of hypervisors into existing Hardware-Assisted Virtualization (HAV) technologies, such as Intel VT-x or AMD SVM.
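
The measure-and-compare idea behind runtime integrity monitoring can be illustrated with a toy hash check; this is only a sketch of the general concept, not hShield's design.

    # Toy integrity measurement: hash (simulated) hypervisor code pages and compare
    # against a baseline taken at attested boot. Page contents are stand-ins.
    import hashlib

    def measure(code_pages):
        h = hashlib.sha256()
        for page in code_pages:
            h.update(page)
        return h.hexdigest()

    boot_pages = [b"\x90" * 4096, b"\xcc" * 4096]                # baseline code pages
    baseline = measure(boot_pages)

    running_pages = [b"\x90" * 4096, b"\xcc" * 4095 + b"\x00"]   # one byte patched
    if measure(running_pages) != baseline:
        print("integrity violation: hypervisor code changed since attested boot")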

Towards a Secure Container Framework  slides
Mohammad Ahmad, Computer Science Research Assistant, University of Illinois at Urbana-Champaign
November 4, 2015, 3:00 p.m., 2405 Siebel Center

Abstract: Containers are a form of OS-level virtualization that leverages cgroups and namespaces for isolation. They present a lightweight alternative to hypervisor-based virtualization and have already been adopted by several platform-as-a-service (PaaS) cloud providers. While containers offer improved performance, cross-container side-channel attacks demonstrated on public PaaS clouds raise questions about their security.

In this talk, we present our work towards building a secure container framework with improved container isolation. Specifically, as a first step, we focus on defenses against cache-based side-channel attacks using a combination of software and hardware mechanisms.

Phurti: Application and Network-Aware Flow Scheduling for Multi-Tenant MapReduce Clusters  slides | video
Chris Cai, Computer Science Research Assistant, University of Illinois at Urbana-Champaign
November 11, 2015, 3:00 p.m., 2405 Siebel Center

Abstract: Traffic for a typical MapReduce job in a datacenter consists of multiple network flows. Traditionally, network resources have been allocated to optimize network-level metrics such as flow completion time or throughput. Some recent schemes propose using application-aware scheduling which can reduce the average job completion time. However, most of them treat the core network as a black box with sufficient capacity. Even if only one network link in the core network becomes a bottleneck, it can hurt application performance.

We design and implement a centralized flow-scheduling framework called Phurti with the goal of improving completion time for jobs in a cluster shared among multiple Hadoop jobs (multi-tenant). Phurti communicates both with the Hadoop framework, to retrieve job-level network traffic information, and with the OpenFlow-based switches, to learn about the network topology. Phurti implements a novel heuristic called Smallest Maximum Sequential-traffic First (SMSF) that uses the collected application and network information to perform traffic scheduling for MapReduce jobs. Our evaluation with real Hadoop workloads shows that, compared to application- and network-agnostic scheduling strategies, Phurti improves job completion time for 95% of the jobs, decreases average job completion time by 20% and tail job completion time by 13%, and scales well with the cluster size and number of jobs.
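
A stripped-down view of the SMSF ordering, approximating "maximum sequential traffic" by a job's largest flow size (sizes invented for illustration): jobs whose largest flow is smallest get the highest priority, so small jobs are not stuck behind large ones.

    # Sketch: prioritize the job with the smallest maximum remaining flow.
    jobs = {
        "jobA": [120, 80, 300],    # flow sizes in MB
        "jobB": [20, 15],
        "jobC": [90, 60, 70],
    }

    def smsf_order(jobs):
        return sorted(jobs, key=lambda j: max(jobs[j]))

    priority = smsf_order(jobs)
    print(priority)                       # ['jobB', 'jobC', 'jobA']
    for rank, job in enumerate(priority, start=1):
        print(f"priority {rank}: {job} (max sequential traffic {max(jobs[job])} MB)")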

Efficient Monitoring in Actor-based Mobile Hybrid Cloud Framework  slides | video
Kirill Mechitov, Computer Science Postdoctoral Research Associate, University of Illinois at Urbana-Champaign
November 18, 2015, 3:00 p.m., 2405 Siebel Center

Abstract: Mobile cloud computing (MCC) enables overcoming the energy and processing limitations of mobile devices by leveraging the virtually unlimited, elastic, on-demand resources of the cloud. The increased dynamicity and complexity of hybrid cloud applications making use of both public and private cloud services (e.g., for reasons of privacy and information security) requires open systems that interact with the environment while addressing application-specific constraints, user expectations, and security/privacy policies of multiple systems and organizations. We have developed IMCM, a proof-of-concept implementation of an actor-based framework for mobile hybrid cloud applications. IMCM uses dynamic fine-grained code offloading to achieve significant performance and energy consumption improvements in cloud-backed mobile applications. In this talk, we describe IMCM’s lightweight monitoring framework, capable of capturing dynamic parameters of the execution environment and end-user context, in addition to coarse-grained actions and events of distributed actor-based applications. We demonstrate how the monitoring system can facilitate efficient detection of security policy violations, and generalize these results to distributed actor-based applications supporting code mobility.

Reliability and Security as-a-Service
Zachary Estrada, Electrical and Computer Engineering Research Assistant, University of Illinois at Urbana-Champaign
December 2, 2015, 3:00 p.m., 2405 Siebel Center

Abstract: Infrastructure as-a-Service (IaaS) clouds significantly lower the barrier to obtaining scalable computing resources. Could a similar service be offered to provide on-demand reliability and security monitoring? Cloud computing systems are typically built using virtual machines (VMs), and much work has been done on using that virtualization layer for reliability and security monitoring. In this talk, I will demonstrate how we use whole-system dynamic analysis to inform dynamic hypervisor-based VM monitoring for providing reliability and security as-a-service.

Trust & Security/Assured Cloud Computing Joint Seminar: Security-Aware Virtual Machine Allocation in the Cloud: A Game Theoretic Approach

  • Posted on September 3, 2015 at 11:40 am by whitesel@illinois.edu.
  • Categorized Events.
  • Comments are off for this post.

Charles Kamhoua

Charles A. Kamhoua, U.S. Air Force Research Laboratory
September 2, 4:00 p.m., 2405 Siebel Center

Slides | Video

Research paper presented: Luke Kwiat, Charles A. Kamhoua, Kevin Kwiat, Jian Tang, and Andrew Martin, “Security-aware Virtual Machine Allocation in the Cloud: A Game Theoretic Approach”, IEEE International Conference on Cloud Computing (IEEE Cloud 2015), New York, NY, June 27-July 2, 2015. [full text]

Abstract: With the growth of cloud computing, many businesses, both small and large, are opting to use cloud services compelled by a great cost savings potential. This is especially true of public cloud computing which allows for quick, dynamic scalability without many overhead or long-term commitments. However, one of the largest dissuasions from using cloud services comes from the inherent and unknown danger of a shared platform such as the hypervisor. An attacker can attack a virtual machine (VM) and then go on to compromise the hypervisor. If successful, then all virtual machines on that hypervisor can become compromised. This is the problem of negative externalities, where the security of one player affects the security of another. This work shows that there are multiple Nash equilibria for the public cloud security game. It also demonstrates that we can allow the players’ Nash equilibrium profile to not be dependent on the probability that the hypervisor is compromised, reducing the factor externality plays in calculating the equilibrium. Finally, by using our allocation method, the negative externality imposed onto other players can be brought to a minimum compared to other common VM allocation methods.
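
The externality in the abstract can be illustrated with a toy two-tenant game: each tenant chooses whether to invest in security, a breach of one co-resident tenant can spill over to the other, and the sketch enumerates pure-strategy Nash equilibria. Payoffs are invented; with the numbers below the game has two equilibria (both invest, neither invests), echoing the multiple-equilibria observation.

    # Toy 2x2 co-residency security game with an externality term (illustrative only).
    from itertools import product

    ACTIONS = ("invest", "not_invest")

    def payoff(me, other):
        security_cost = 3 if me == "invest" else 0
        p_direct = 0.1 if me == "invest" else 0.5    # chance I am breached directly
        p_spill = 0.1 if other == "invest" else 0.4  # chance a co-resident breach spills over
        expected_loss = 10 * (p_direct + (1 - p_direct) * p_spill)
        return -(security_cost + expected_loss)

    def is_nash(a1, a2):
        best1 = all(payoff(a1, a2) >= payoff(alt, a2) for alt in ACTIONS)
        best2 = all(payoff(a2, a1) >= payoff(alt, a1) for alt in ACTIONS)
        return best1 and best2

    for a1, a2 in product(ACTIONS, repeat=2):
        if is_nash(a1, a2):
            print("Nash equilibrium:", (a1, a2))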

Bio: Charles A. Kamhoua received his B.S. in Electronics from the University of Douala (ENSET), Cameroon, in 1999, and his M.S. in Telecommunication and Networking and Ph.D. in Electrical Engineering from Florida International University in 2008 and 2011, respectively. In 2011, he joined the Cyber Assurance Branch of the U.S. Air Force Research Laboratory (AFRL), Rome, New York, as a National Academies Postdoctoral Fellow and became a Research Electronics Engineer in 2012. Prior to joining AFRL, he was an educator for more than 10 years. His current research interests cover the application of game theory and mechanism design to cyber security and survivability, with over 50 technical publications in prestigious journals and international conferences, including a Best Paper Award at the 2013 IEEE FOSINT-SI. Dr. Kamhoua has been recognized for his scholarship and leadership with numerous prestigious awards, including ten Air Force Notable Achievement Awards, the 2015 AFOSR Windows on the World Visiting Research Fellowship at Oxford University, UK, an AFOSR basic research award of $645K, the 2015 Black Engineer of the Year Award (BEYA), the 2015 NSBE Golden Torch Award – Pioneer of the Year, selection to the 2015 Heidelberg Laureate Forum, a 2011 NSF PIRE award at Fluminense Federal University, Brazil, and the 2008 FAEDS teacher award. He is an advisor for the National Research Council, a Senior Member of IEEE, and a member of ACM, the FIU alumni association, and NSBE.