[talks] Joshua Kroll's PreFPO will take place Feb 3, 2015 at 2pm in CITP conference room (Sherrerd 306)

Nicki Gotsis ngotsis at CS.Princeton.EDU
Wed Jan 28 10:52:58 EST 2015


Joshua Kroll will present his PreFPO on Feb 3, 2015 at 2pm in the CITP conference room (Sherrerd 306). 

The members of his committee are: Ed Felten (advisor); Matt Green (Johns Hopkins University) and Arvind Narayanan (readers); Andrew Appel and Nick Feamster (nonreaders).

Everyone is invited to attend his talk. 

The talk title and abstract follow below: 

Title: Accountable Algorithms 

Abstract:  Important decisions about people are increasingly made by algorithms: Votes are counted; voter rolls are purged; financial aid decisions are made; taxpayers are chosen for audits; air travelers are selected for search; credit eligibility decisions are made. Citizens, and society as a whole, have an interest in making these processes more transparent. Yet the full basis for these decisions is rarely available to affected people: the algorithm or some inputs may be secret; or the implementation may be secret; or the process may not be precisely described. A person who suspects the process went wrong has little recourse. And an oversight authority who wants to ensure that decisions are made according to an acceptable policy has little assurance that proffered decision rules match decisions for actual users. 

Traditionally, Computer Science addresses these problems by demanding a specification of the desired behavior, which can then be enforced or verified. But this model is poorly suited to real-world oversight tasks, where the specification might be complicated or might not be known in advance. For example, laws are often ambiguous precisely because it would be politically (and practically) infeasible to give a precise specification of their meaning. Instead, people do their best to approximate what they believe the law will allow, and disputes about what is actually acceptable happen after the fact via expensive investigation and adjudication (e.g., in a court or legislature). As a result, actual oversight, in which real decisions are reviewed for their correctness, fairness, or faithfulness to a rule, happens only rarely, if at all. 

Further, Computer Science often sees rules as self-enforcing: the mere fact that an automated check fails is taken as sufficient to demonstrate that some choice was invalid. However, like all rules, automated rules (even those implemented by cryptography) are just the intentions of a system designer; they are relevant only to the extent that people actually follow them, whether because of internalized incentives or the external threat of punishment. 

In this thesis, we present a novel approach to relating the tools of technology to the problem of overseeing decision-making processes. Our methods use the tools of computer science to cryptographically ensure those technical properties that can be proven, while providing the information necessary for a political, legal, or social oversight process to operate effectively. First, we present a system for the accountable execution of legal warrants, in which a judge's decision to allow an investigator access to private or sensitive records is realized cryptographically, so that the investigator's access to sensitive information is limited to exactly what the judge has explicitly allowed (and this can be confirmed by a disinterested third party). This system is an example of the current style of technical systems for accountability: a well-defined policy, specified in advance, is operationalized with technical tools. However, our construction does not just enforce the policy; it can also convince others that the policy is being enforced correctly. Second, we present accountable algorithms, which unify the tools of zero-knowledge computational integrity with cryptographic commitments to design processes that admit meaningful after-the-fact oversight, consistent with the norm in law and policy. Accountable algorithms can attest to the valid operation of a decision policy even when all or part of that policy is kept secret. 
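To make the commitment primitive concrete, here is a minimal hash-based sketch in Python. It is illustrative only, not the construction from the thesis: the function names and example policy string are hypothetical, and the actual accountable-algorithms protocol combines such commitments with zero-knowledge proofs of computational integrity so that a committed policy need not be revealed even at verification time.

    import hashlib
    import secrets

    def commit(policy: bytes) -> tuple[bytes, bytes]:
        # Commit to a (possibly secret) decision policy. The commitment can
        # be published before any decisions are made: it hides the policy
        # but binds the committer to it.
        nonce = secrets.token_bytes(32)  # random blinding value
        commitment = hashlib.sha256(nonce + policy).digest()
        return commitment, nonce

    def verify(commitment: bytes, policy: bytes, nonce: bytes) -> bool:
        # Check, after the fact, that a revealed policy matches the
        # commitment that was published in advance.
        return hashlib.sha256(nonce + policy).digest() == commitment

    # An authority commits to its decision rule up front...
    policy = b"audit returns whose reported income deviates sharply from withholding"
    c, opening = commit(policy)

    # ...and an overseer later checks the revealed rule against the commitment.
    assert verify(c, policy, opening)
    assert not verify(c, b"some other rule", opening)

In this toy version the overseer must see the policy in order to verify it; the zero-knowledge machinery described in the abstract removes that requirement.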

