Xiaozhou Li will present his Pre-FPO on 12/9/15 in CS 401 at 10am. The members of his committee are: Mike Freedman (adviser), Kyle Jamieson, Michael Kaminsky (Intel Labs), Jennifer Rexford, and Kai Li. Below are the title and abstract of his talk.

Title: Towards High-performance and Cost-effective Key-Value Storage

Abstract: Key-value storage is one of the fundamental building blocks for today's large-scale, high-performance, data-intensive applications. In this talk, I will present my research on improving the performance and scalability of key-value storage in a cost-effective manner, with a particular focus on combining new hardware and infrastructure capabilities with carefully crafted algorithmic techniques.

I will first present the design, implementation, and evaluation of a high-throughput and memory-efficient concurrent hash table. The design arises from careful attention to systems-level optimizations, such as minimizing critical-section length and reducing inter-processor coherence traffic through algorithm re-engineering. We exploited Intel's recent hardware transactional memory (HTM) for concurrency control, and found that HTM's benefits lie more in software engineering, by reducing the intellectual complexity of locking, than in raw performance. Algorithmic optimizations that benefit both HTM-based and fine-grained locking designs are needed to achieve high performance.

In the second part of this talk, I will present SwitchKV, a new scalable key-value store design that combines high-performance cache nodes with resource-constrained backend nodes to provide load balancing in the face of unpredictable workload skew. The cache nodes absorb the hottest queries so that no individual backend node is over-burdened. Compared with previous designs, SwitchKV exploits SDN techniques and deeply optimized switch hardware to enable efficient content-based routing. Programmable network switches keep track of cached keys and route requests to the appropriate nodes at line speed, based on keys encoded in packet headers. A new hybrid caching strategy keeps the cache and switch forwarding rules updated with low overhead and ensures that system load remains well balanced under rapidly changing workloads. We demonstrate that SwitchKV can meet the service-level objectives of many cloud services more efficiently than traditional systems.
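
For readers unfamiliar with hardware transactional memory, the sketch below illustrates the general lock-elision pattern the first part of the talk builds on: attempt the update inside a hardware transaction (Intel RTM, compiled with -mrtm), and fall back to an ordinary lock after repeated aborts. The bucket layout, retry count, and fallback spinlock here are illustrative assumptions, not the hash-table design presented in the talk.

    /* Minimal sketch of HTM-based concurrency control for a hash-table
     * update, assuming an x86 CPU with Intel TSX (RTM) and a compiler
     * flag of -mrtm.  Illustrative only; not the talk's design. */
    #include <immintrin.h>
    #include <stdatomic.h>
    #include <stdint.h>

    #define NSLOTS  1024
    #define RETRIES 3

    typedef struct { uint64_t key, value; } slot_t;

    static slot_t table[NSLOTS];
    static atomic_int fallback_lock = 0;   /* 0 = free, 1 = held */

    static void fallback_acquire(void) { while (atomic_exchange(&fallback_lock, 1)) ; }
    static void fallback_release(void) { atomic_store(&fallback_lock, 0); }

    void table_put(uint64_t key, uint64_t value)
    {
        slot_t *s = &table[key % NSLOTS];

        for (int i = 0; i < RETRIES; i++) {
            if (_xbegin() == _XBEGIN_STARTED) {
                /* Subscribe to the fallback lock: abort if another
                 * thread is updating the table non-transactionally. */
                if (atomic_load(&fallback_lock))
                    _xabort(0xff);
                /* Keep the critical section short to limit aborts. */
                s->key = key;
                s->value = value;
                _xend();
                return;
            }
            /* Transaction aborted (conflict, capacity, interrupt); retry. */
        }

        /* Fallback path: take a coarse lock after repeated aborts. */
        fallback_acquire();
        s->key = key;
        s->value = value;
        fallback_release();
    }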
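
The second part of the abstract mentions routing requests on keys encoded in packet headers. The toy sketch below shows the general idea of such content-based forwarding: the client places a fixed-width key digest in a header field, and a switch-style exact-match rule table sends hot keys to a cache port and everything else to a hash-partitioned backend. The header layout, hash function, and rule table are assumptions made for illustration, not SwitchKV's actual wire or rule formats.

    /* Toy sketch of content-based routing on a key digest carried in a
     * packet header.  All formats here are illustrative assumptions. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define N_BACKENDS 64
    #define N_RULES    4096            /* exact-match rules for cached keys */

    struct kv_header {                 /* prepended to each query packet */
        uint32_t key_digest;           /* hash of the key, set by the client */
        uint16_t op;                   /* GET / PUT / DEL */
    };

    struct rule { bool valid; uint32_t digest; uint16_t cache_port; };
    static struct rule rules[N_RULES]; /* installed by a controller as the
                                          cache reports its hot keys */

    static uint32_t digest(const char *key, size_t len)
    {
        uint32_t h = 2166136261u;      /* FNV-1a, an arbitrary choice here */
        for (size_t i = 0; i < len; i++) { h ^= (uint8_t)key[i]; h *= 16777619u; }
        return h;
    }

    /* Switch-side forwarding decision, made entirely from the header. */
    static uint16_t forward(const struct kv_header *hdr)
    {
        const struct rule *r = &rules[hdr->key_digest % N_RULES];
        if (r->valid && r->digest == hdr->key_digest)
            return r->cache_port;                   /* hot key: go to cache */
        return 1000 + hdr->key_digest % N_BACKENDS; /* default: hash-partitioned
                                                       backend port */
    }

Because the forwarding decision reads only the header, the switch never needs to parse the key-value payload, which is what allows cached keys to be redirected at line rate.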