Assured Cloud Computing Special Seminar: Design and Validation of Distributed Data Stores using Formal Methods
- Posted on January 25, 2016 at 3:17 pm by amyclay@illinois.edu.
- Categorized Events.
Peter Ölveczky, University of Oslo
February 10, 4 p.m. 2405 Siebel Center
Abstract: To deal with large amounts of data and to ensure high availability, many cloud computing systems rely on distributed, partitioned, and/or replicated data stores. However, such data stores are complex artifacts that are very hard to design and analyze, as they satisfy different notions of “consistency”.
We therefore propose to use formal methods to model distributed data stores, and to analyze both their correctness properties and their performance. In particular, one goal is to identify key building blocks in the design of such data stores, and their properties, so that new data stores, or different versions of existing data stores, can be designed by reusing such “components”.
This talk, which is based on joint work with Jon Grov and a number of members of the Assured Cloud Computing Center, gives a high-level overview of this ongoing work, which has already been used to design and analyze new versions of Google’s Megastore, Facebook/Apache’s Cassandra, and UC Berkeley’s RAMP distributed data stores.
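The actual models in this line of work are written in rewriting logic (Maude and Real-Time Maude) and are not shown here. Purely to illustrate the style of analysis, exhaustively exploring the interleavings of a small replicated store and checking a consistency property, the following Python sketch uses an invented two-replica, single-key store; everything in it is a toy assumption, not part of the work described in the talk.

```python
# Toy illustration only: exhaustively explore all orderings of a few events in a
# two-replica store and count orderings that end with the replicas disagreeing.
from itertools import permutations

def run(schedule):
    # Each event simply overwrites the value at its target replica (last writer wins).
    state = {"A": None, "B": None}
    for op, replica, value in schedule:
        state[replica] = value
    return state

# Two client writes at replica A, each with a replication message applied at B.
events = [("write", "A", 1), ("replicate", "B", 1),
          ("write", "A", 2), ("replicate", "B", 2)]

def valid(order):
    # A replication message can only be applied after the write that produced it.
    return (order.index(events[0]) < order.index(events[1])
            and order.index(events[2]) < order.index(events[3]))

violations = [p for p in permutations(events)
              if valid(p) and run(p)["A"] != run(p)["B"]]
print("interleavings that end with divergent replicas:", len(violations))
```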
Bio: Peter Ölveczky received his PhD in computer science from the University of Bergen, Norway, in 2000, having performed his thesis research at SRI International. He was an assistant and then associate professor at the University of Oslo from 2001 to 2008, and has been a full professor there since 2008. He was also a post-doctoral researcher at the University of Illinois at Urbana-Champaign (UIUC) from 2002 to 2004, and has been a visiting researcher at UIUC since 2008.
Ölveczky’s research focuses on formal methods, in particular for real-time systems. He is the developer of the Real-Time Maude tool, which has been used to formally model and analyze a wide range of advanced systems, including scheduling protocols, distributed data stores, wireless sensor network algorithms, the human thermoregulatory system, mobile ad hoc networks, and avionics systems. Ölveczky has organized 9 international scientific workshops/conferences, and has edited a number of scientific books and journal issues.
Assured Cloud Computing Weekly Seminars Slides and Video Spring 2016
- Posted on January 20, 2016 at 3:19 pm by amyclay@illinois.edu.
- Categorized Events.
Models for Reasoning about Digital Evidence slides | video
Imani Palmer, Computer Science Research Assistant, University of Illinois at Urbana-Champaign
February 17, 2016, 4:00 p.m., 2405 Siebel Center
Abstract: Due to the popularity of containers and virtual machines, it is necessary to integrate forensics and policy monitoring in order to enhance the security of systems. Integrating forensics with policy monitoring supports its dynamic nature and allows a better response to potential attacks. Our latest approach leverages knowledge of the system drawn from digital evidence. This evidence, combined with monitoring, will enable us to determine whether the current state of the machine is vulnerable or has been attacked. Currently, the process of analyzing evidence is based on the individual knowledge of an examiner. This framework will enable researchers and examiners to apply various reasoning models to their cases. The application of these reasoning methods will be automated in order to avoid discrepancies and provide reproducibility. The framework handles the analysis phase of the digital forensic investigative process: it receives information from digital forensic tools, and that information feeds various visualizations that aid in the development of a hypothesis. The reasoning models define and assign likelihoods to the relationships between pieces of evidence and the hypothesis. Last, a framework for estimating likelihood error rates will be built, and provisional estimates will be determined and examined. As digital forensic science advances, it is important to be able to rigorously justify the conclusions drawn from electronic evidence.
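As an illustration of what one "reasoning model" over evidence could look like, the sketch below applies simple Bayesian updating to invented likelihood ratios. It is not the framework described in the talk; all evidence names and numbers are hypothetical.

```python
# Hypothetical sketch: naive Bayesian updating that turns per-evidence
# likelihood ratios into a posterior belief in an investigative hypothesis.

def posterior(prior, likelihood_ratios):
    """Combine a prior probability with likelihood ratios
    P(evidence | hypothesis) / P(evidence | not hypothesis)."""
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# Example hypothesis: "the account was compromised", with three pieces of
# evidence extracted by forensic tools (all values are made up).
evidence = {
    "login_from_new_country": 8.0,   # strongly favors the hypothesis
    "file_timestamps_altered": 3.0,  # moderately favors it
    "no_malware_found": 0.6,         # weakly against it
}
print(round(posterior(0.1, evidence.values()), 3))
```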
Cyber Security as a Signaling Game slides | video
Key-whan Chung, Electrical & Computer Engineering Research Assistant, University of Illinois at Urbana-Champaign
March 16, 2016, 4:00 p.m., 2405 Siebel Center
Abstract: With increasing concern about cyber security, a variety of approaches to intrusion detection (IDS) and response have been introduced. Starting from signature-based IDSs and advancing to anomaly-based detection systems, detection methods have become more intelligent. However, as attack models become more sophisticated, we face attacks that are hard to differentiate from benign activity. To effectively detect malicious intentions, we apply a game-theoretic approach, modeling the rationality and the probabilistic behavior of the attacker and the defender. We model a slow Denial of Service (DoS) attack, a DoS attack at the application level, from logs collected from a real system, and formulate the interaction as a signaling game in which two players optimize their actions under uncertainty about the opponent’s action. Using a simulation-based approach, we evaluate the performance of the approach and discuss the possibility of applying it to a real system.
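To make the signaling-game framing concrete, the toy sketch below shows a defender choosing between blocking and allowing a suspicious slow request by maximizing expected payoff under its belief about the sender's type. The payoff values are invented, and this is not the model evaluated in the talk.

```python
# Illustrative only: the defender observes a "slow request" signal that either a
# benign client or a slow-DoS attacker may send, and picks the action with the
# highest expected utility given its belief about the sender's type.

# Defender payoffs, indexed by (sender_type, defender_action); all values invented.
payoff = {
    ("attacker", "block"):  +5,   # correctly blocking an attack
    ("attacker", "allow"):  -10,  # letting a slow-DoS attack through
    ("benign",   "block"):  -3,   # blocking a legitimate user
    ("benign",   "allow"):  +1,   # serving a legitimate user
}

def best_response(belief_attacker):
    """Pick the defender action maximizing expected payoff, given the posterior
    belief that the sender of the signal is an attacker."""
    def expected(action):
        return (belief_attacker * payoff[("attacker", action)]
                + (1 - belief_attacker) * payoff[("benign", action)])
    return max(("block", "allow"), key=expected)

for belief in (0.1, 0.3, 0.5):
    print(belief, "->", best_response(belief))
```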
Intrusion Detection in Enterprise Systems by Combining and Clustering Diverse Monitor Data slides | video
Atul Bohara, Computer Science Research Assistant
Uttam Thakore, Computer Science Research Assistant
University of Illinois at Urbana-Champaign
April 5, 2016, 4:00 p.m., B02 Coordinated Science Laboratory
Abstract: Intrusion detection using multiple security devices has received much attention recently. The large volume of information generated by these tools, however, increases the burden on both computing resources and security administrators. Moreover, attack detection does not improve as expected if these tools work without any coordination.
In our work, we propose a simple method to join information generated by security monitors with diverse data formats. We present a novel intrusion detection technique that uses unsupervised clustering algorithms to identify malicious behavior within large volumes of diverse security monitor data. First, we extract a set of features from network-level and host-level security logs that aid in detecting malicious host behavior and flooding-based network attacks in an enterprise network system. We then apply clustering algorithms to the separate and joined logs and use statistical tools to identify anomalous usage behaviors captured by the logs. We evaluate our approach on an enterprise network data set, which contains network and host activity logs. Our approach correctly identifies and prioritizes anomalous behaviors in the logs by their likelihood of maliciousness. By combining network and host logs, we are able to detect malicious behavior that cannot be detected by either log alone.
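A minimal sketch of the general idea, not the authors' implementation: join per-host features drawn from network- and host-level logs, cluster them with an off-the-shelf algorithm, and rank hosts by their distance from the cluster centroid. The feature names and values are invented, and the sketch assumes NumPy and scikit-learn are available.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# rows = hosts, columns = features extracted from the joined logs
# (e.g., flows/minute, distinct destination ports, failed logins, new processes)
features = np.array([
    [120,  5, 0, 3],
    [110,  6, 1, 2],
    [115,  4, 0, 4],
    [980, 60, 0, 2],   # flooding-like network behavior
    [105,  5, 9, 25],  # suspicious host behavior
])

X = StandardScaler().fit_transform(features)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Rank hosts by distance to their cluster centroid: larger = more anomalous.
centroids = np.array([X[labels == c].mean(axis=0) for c in range(2)])
scores = np.linalg.norm(X - centroids[labels], axis=1)
for host, score in sorted(enumerate(scores), key=lambda p: -p[1]):
    print(f"host {host}: anomaly score {score:.2f}")
```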
Dynamic Fine-Grained Code Offloading in Mobile Cloud Applications slides | video
Kirill Mechitov, Computer Science Postdoctoral Researcher
University of Illinois at Urbana-Champaign
April 6, 2016, 4:00 p.m., 2405 Siebel Center
Abstract: Mobile cloud computing (MCC) enables overcoming the energy and processing limitations of mobile devices by leveraging the virtually unlimited, elastic, on-demand resources of the cloud. The increased dynamicity and complexity of hybrid cloud applications making use of both public and private cloud services (e.g., for reasons of privacy and information security) require open systems that interact with the environment while addressing application-specific constraints, user expectations, and the security/privacy policies of multiple systems and organizations. We have developed IMCM, a proof-of-concept implementation of an actor-based framework for mobile hybrid cloud applications. IMCM uses dynamic fine-grained code offloading to achieve significant performance and energy consumption improvements in cloud-backed mobile applications, while respecting specified privacy and security policies. In this talk, we refine the design and implementation of the Elasticity Manager, the core decision-making component of the IMCM framework. We also explore the application of related work on actor-based model checking for schedulability analysis of distributed real-time systems to design-time optimization in mobile cloud computing.
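As a rough illustration of the kind of decision an elasticity manager faces, the sketch below compares predicted local and remote execution times under a simple cost model and respects a privacy constraint. The function name, parameters, and default values are invented; this is not the IMCM implementation.

```python
# Hypothetical sketch: decide per call whether to run locally or offload,
# subject to a privacy policy and a simple latency model. All numbers invented.

def should_offload(instructions, input_bytes, output_bytes, handles_private_data,
                   local_speed=1e9, cloud_speed=10e9, bandwidth_bps=5e6, rtt_s=0.05):
    """Return True if offloading is predicted to be faster and policy allows it."""
    if handles_private_data:
        return False  # policy: private data never leaves the device
    local_time = instructions / local_speed
    offload_time = (rtt_s
                    + (input_bytes + output_bytes) * 8 / bandwidth_bps
                    + instructions / cloud_speed)
    return offload_time < local_time

# Example: a compute-heavy image filter with a small input/output payload.
print(should_offload(instructions=2e9, input_bytes=200_000, output_bytes=50_000,
                     handles_private_data=False))
```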
*Two 30-minute seminars on April 13.
Cauldron: A Framework to Defend Against Cache-based Side-channel Attacks in Clouds slides | video
Mohammad Ahmad, Computer Science Research Assistant
University of Illinois at Urbana-Champaign
*April 13, 2016, 4:00 p.m., 2405 Siebel Center
Abstract: Cache-based side-channel attacks have garnered much interest in the recent literature. Such attacks are particularly relevant to cloud computing platforms due to their high levels of multitenancy. In fact, recent work demonstrates such attacks on real cloud platforms (e.g., Heroku). This paper presents Cauldron, a framework to defend against such cache-based side-channel attacks. Cauldron uses a combination of smart scheduling techniques and microarchitectural mechanisms to achieve this goal. We demonstrate improved defenses against cross-core side-channel attacks that target shared caches. Furthermore, Cauldron is transparent to the user: by integrating directly with the popular container runtime framework Docker, it requires no modification (or even recompilation) of users’ application binaries. Preliminary evaluation results show that the proposed approach is effective for cloud computing applications.
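One ingredient of such a defense, cache-aware placement that keeps mutually untrusted tenants off cores sharing a last-level cache, can be sketched as follows. The topology, names, and policy below are invented for illustration; this is not Cauldron's actual scheduler.

```python
# Toy sketch: place containers so that cores sharing a last-level cache (LLC)
# only ever host a single tenant, limiting cross-tenant cache side channels.

# Each "LLC domain" is a set of cores sharing a last-level cache (made-up topology).
llc_domains = {"socket0": [0, 1, 2, 3], "socket1": [4, 5, 6, 7]}
placement = {}  # core -> (tenant, container)

def place(container, tenant):
    """Place a container on a core whose LLC domain hosts only this tenant."""
    for domain, cores in llc_domains.items():
        tenants_here = {placement[c][0] for c in cores if c in placement}
        if tenants_here <= {tenant}:          # empty, or same tenant only
            free = [c for c in cores if c not in placement]
            if free:
                placement[free[0]] = (tenant, container)
                return free[0]
    raise RuntimeError("no cache-isolated core available")

print(place("web-1", "tenantA"))   # -> core 0
print(place("db-1",  "tenantB"))   # -> core 4 (different LLC domain)
print(place("web-2", "tenantA"))   # -> core 1 (shares LLC only with tenantA)
```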
Security and Privacy Mechanisms: an Analysis of Cloud Service Providers for Governments slides | video
Carlo Di-Giulio, Graduate School of Library and Information Science
University of Illinois at Urbana-Champaign
*April 13, 2016, 4:00 p.m., 2405 Siebel Center
Abstract: Since 2010, in order to curb alarmingly increasing IT management costs and to promote efficiency at the federal level, the US Government has been promoting a “Cloud First” policy, a set of regulations and initiatives aimed at encouraging the use of cloud services by Federal Agencies. The Agencies, slowly responding to the Government’s guidance, are working to migrate their systems to cloud environments, either built in-house (i.e., “private clouds”) or supplied by commercial Cloud Service Providers (CSPs) with shared infrastructures.
Public cloud services, however, may not guarantee a level of data security and privacy protection comparable to that of a private cloud, especially for highly sensitive data such as those processed by military agencies, and more specifically by the Air Force. Our study aims to analyze the risks arising from the migration to public cloud services and to identify the main differences, in terms of data privacy and security, among the services available to Federal Agencies.
Drawing on publicly available documentation, the ongoing research proceeds in three steps. The first is a comprehensive assessment of the regulatory context; the second is the identification of the services offered by five of the main CSPs targeting Federal Agencies; the third is the characterization of the policies and best practices adopted by those CSPs.
The comparison of services and policies, guided by an understanding of the regulatory context, helps shed light on the privacy and security risks involved in migrating Government systems to publicly available cloud services.
This presentation will provide an overview of the research and its preliminary results.
Application-aware Network Resource Allocation slides
Chris Cai, Computer Science Research Assistant
University of Illinois at Urbana-Champaign
April 20, 2016, 4:00 p.m., B02 Coordinated Science Laboratory
Abstract: Application-aware Networking has recently drawn a lot of attention in both the research community and industry. As the era of big data arrives, applications demand significantly more data, and their network usage patterns keep becoming more complicated. The network supporting an application therefore needs to be more intelligent about the specific performance requirements and network usage characteristics of different types of applications. In Application-aware Networking, the network maintains information about the applications that run on top of it and leverages that information to optimize their performance; such information can include applications’ performance requirements and the types of network resources they demand. We plan to present two pieces of work, Phurti and CRONets (Cloud-Routed Overlay Networks), which apply Application-aware Networking in very different environments: the cloud network and the wide-area network, respectively. We will discuss the implementation and evaluation of Phurti, as well as present preliminary results and the research plan for CRONets.
Getafix: Workload-aware Data Management in Lookback Processing Systems slides | video
Mainak Ghosh, Computer Science Research Assistant
University of Illinois at Urbana-Champaign
April 27, 2016, 4:00 p.m., 2405 Siebel Center
Abstract: In this paper, we target lookback processing systems (LPS), which allow queries to operate on segment-based historical data. We present new strategies that decide where segments should be placed and how they should be replicated. Our strategies leverage segment popularity and, for the static case, are provably optimal in replication level, and thus in memory and network overheads. For the dynamic case, we present two heuristics. Our experiments show that these approaches improve memory and network utilization compared to existing strategies.
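As a rough illustration of popularity-driven replication (not the Getafix strategies themselves), the sketch below assigns each segment a replica count roughly proportional to its query popularity within a fixed budget. The segment names, budget, and rounding rule are invented for the example.

```python
# Toy sketch: hot segments get more replicas (spread across more nodes), within
# a fixed total-replica budget; every segment keeps at least one replica.

def replicas_by_popularity(popularity, total_replicas, num_nodes):
    """popularity: {segment: query count}. Returns {segment: replica count}."""
    total = sum(popularity.values())
    plan = {}
    for segment, hits in sorted(popularity.items(), key=lambda kv: -kv[1]):
        share = round(total_replicas * hits / total)
        plan[segment] = max(1, min(num_nodes, share))
    return plan

popularity = {"seg-2016-04-01": 900, "seg-2016-03-31": 250, "seg-2016-03-30": 50}
print(replicas_by_popularity(popularity, total_replicas=8, num_nodes=4))
# e.g. {'seg-2016-04-01': 4, 'seg-2016-03-31': 2, 'seg-2016-03-30': 1}
```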