Muhammad Shahbaz will present his FPO, "Enabling Programmable Infrastructure for Multi-tenant Data Centers," on Friday, 9/7/2018 at 11am in CS 302.

The members of his committee are as follows: Adviser: Nick Feamster; Readers: Jennifer Rexford and Ben Pfaff (VMware Inc.); Examiners: Wyatt Lloyd, Michael Freedman, and Nick Feamster.

Everyone is welcome to attend.  A copy of his thesis is available in CS 310.  Abstract follows below.


Today's data centers are large, shared infrastructures hosting hundreds of thousands of tenants over a vast network of servers. Operating these data centers frequently entails incorporating new infrastructure services, which requires customizing the behavior of the data center's switches and exploiting the unique characteristics of data centers to scale. Emerging programmable switch ASICs allow network operators to customize the behavior of physical switches. Yet virtual switches running on servers in multi-tenant data centers are still fixed-function and composed of large, complex software codebases. Modifying these switches requires both intimate knowledge of the switch codebase and extensive expertise in network protocol design, setting a prohibitively high bar for customizing them.

In this dissertation, we address these challenges. First, we present the design and implementation of PISCES: a programmable, protocol-independent software switch whose behavior is customized using P4, derived from Open vSwitch (OVS), a fixed-function hypervisor switch. PISCES is not tethered to specific protocols; this independence makes it easy to add new features. We also show how the compiler can analyze the high-level specification to optimize forwarding performance. Our evaluation shows that PISCES performs comparably to OVS and that P4 programs for PISCES are about 40 times shorter than equivalent changes to OVS source code.
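
To give a flavor of what "protocol independence" means in practice, here is a small Python sketch (not P4, and not code from PISCES or OVS) of a match-action pipeline in which the header format and the table contents are supplied as data rather than hard-coded into the switch; all names in it are invented for illustration.

# Conceptual sketch only: a toy "protocol-independent" match-action pipeline.
# Header layouts and table actions are supplied as data rather than hard-coded,
# which is the property PISCES gets from P4; none of these names come from the
# PISCES or OVS codebases.

from dataclasses import dataclass

@dataclass
class Field:
    name: str
    width_bits: int

# A header format is just an ordered list of fields (here: a simplified Ethernet).
ETHERNET = [Field("dst", 48), Field("src", 48), Field("ethertype", 16)]

def parse(packet: bytes, fmt: list[Field]) -> dict:
    """Slice raw bytes into named fields according to a user-supplied format."""
    bits = int.from_bytes(packet, "big")
    total = sum(f.width_bits for f in fmt)
    fields, offset = {}, 0
    for f in fmt:
        shift = total - offset - f.width_bits
        fields[f.name] = (bits >> shift) & ((1 << f.width_bits) - 1)
        offset += f.width_bits
    return fields

def apply_table(fields: dict, table: dict, key: str, default: str) -> str:
    """Exact-match table lookup on one field; returns an action name."""
    return table.get(fields[key], default)

# Example: forward based on destination MAC, flood on miss.
pkt = bytes.fromhex("ffffffffffff" "0000000000aa" "0800")
hdr = parse(pkt, ETHERNET)
l2_table = {0xFFFFFFFFFFFF: "flood"}
print(apply_table(hdr, l2_table, key="dst", default="drop"))  # -> "flood"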

Next, we demonstrate how such programmable switches help build scalable infrastructure services by exploiting the unique characteristics of data-center networks. We use these switches to address the multicast scalability problem in multi-tenant data centers, presenting the design of an infrastructure service, Elmo, that scales multicast by taking advantage of two such characteristics: the symmetric topology and short paths of a data-center network. In Elmo, a PISCES switch encodes multicast group information inside the packets themselves, reducing the need to store the same information in hardware switches; the hardware switches instead read the encoded information to route packets to recipients. In a three-tier data-center topology with 27,000 hosts, Elmo supports a million multicast groups using a 325-byte packet header, while requiring as few as 1,100 multicast group-table entries on average in hardware switches and imposing traffic overhead as low as 5% over ideal multicast.
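
To illustrate the general idea of carrying multicast forwarding state inside the packet, here is a small Python sketch in which the sender packs per-switch output-port bitmaps into a header blob and each switch extracts only its own entry; the field names, sizes, and layout are invented for illustration and do not reflect Elmo's actual header format or encoding.

# Toy illustration of in-packet multicast state: the sender attaches, for each
# switch on the path, a bitmap of the output ports the packet should take, so
# switches need little or no per-group table state. Field names, sizes, and
# layout here are invented and are not Elmo's actual encoding.

import struct

def encode_rules(rules: list[tuple[int, int]]) -> bytes:
    """Pack (switch_id, port_bitmap) pairs into a compact header blob."""
    blob = struct.pack("!B", len(rules))                     # 1-byte rule count
    for switch_id, port_bitmap in rules:
        blob += struct.pack("!HQ", switch_id, port_bitmap)   # 2-byte id, 64-port bitmap
    return blob

def lookup_ports(blob: bytes, my_switch_id: int) -> list[int]:
    """A switch scans the header for its own rule and expands the bitmap to ports."""
    (count,) = struct.unpack_from("!B", blob, 0)
    offset = 1
    for _ in range(count):
        switch_id, bitmap = struct.unpack_from("!HQ", blob, offset)
        offset += 10
        if switch_id == my_switch_id:
            return [p for p in range(64) if bitmap & (1 << p)]
    return []  # no rule for this switch: fall back to a group-table entry

# Sender encodes rules for two switches; switch 7 forwards out ports 2 and 5.
header = encode_rules([(7, 0b100100), (12, 0b1)])
print(lookup_ports(header, 7))   # -> [2, 5]
print(lookup_ports(header, 99))  # -> []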