[coniks] Questions for okTurtles CONIKS review
Marcela S. Melara
melara at CS.Princeton.EDU
Mon Jan 16 17:23:03 EST 2017
Sorry this response took a little longer than anticipated. Thank you for
your positive review of CONIKS, and all of your feedback on the CONIKS
2.0 paper, which wasn't published at a peer-reviewed venue. I look
forward to reading your comparison with CT and other approaches.
My answers are in-line.
> Question 1: Is there a formal RFC-like specification for CONIKS being
> developed, and if so, where is it?
We've started laying the groundwork for an RFC-like specification, but
haven't begun writing it yet. We've dedicated the coniks-spec repo to
maintaining our specification documents, and you'll find a few
preliminary write-ups there. But I'd like to emphasize that they don't
yet follow any RFC-style language guidelines.
> One significant concern that I had while reading the paper is the use of
> "should" in places where I would have expected the word "must" to be.
> This was especially notable on page 10 of the document.
The CONIKS 2.0 paper is the final report for Michael's semester of
independent research (a requirement of all CS majors at Princeton). The
report is meant as a final write-up of the semester-long project, and
not as a formal specification of the protocol. Hence, the language does
not follow RFC guidelines. We published the CONIKS 2.0 report as it is
the first document to address the key change protocol, which is
incomplete in the original paper, as well as to get community feedback
on our design. But a more formal specification of this protocol will be
included in our RFC-like documents.
> In IETF-style RFCs, the words "should" and "must" have very specific,
> concrete meanings when it comes to determining whether an implementation
> conforms to the specification or not. Many of the "should"s on page 10
> *should* (heh) have been "must"s if CONIKS is to provide real security.
> Were these just oversights, or are these "should"s to be interpreted per
> SHOULD in RFC 2119?
I completely agree with you here. Reading through page 10 of the report
again, in an RFC-like specification those sentences that say "should"
ought to say "must", with one exception:
"If the `allowsUnsignedKeychange` flag is set to TRUE, an unauthorized
key change should not be considered a violation."
> Question 2: Is all of the data, including the options, ALWAYS signed by
> the user's public key?
> There is a very curious `allowsUnsignedKeychange` flag that could
> undermine all of the MITM security offered by CONIKS, but only if its
> setting is not signed by the user's registered public key.
> Is this setting, along with all other
> settings/options/data/configurations, signed by the user's public key?
Yes. In addition, page 8 of the report states: "Once the
`allowsUnsignedKeychange` flag is set to
FALSE, the user can sign a request to change that flag."
> Or can it be signed by the server's public key, or not signed at all?
The report states that if `allowsUnsignedKeychange` is set to TRUE, all
provider-initiated key changes should be signed by the provider; this
also applies to other security settings for a particular user. In the
prototype implementation of key changes in coniks-java, which Michael
coded as part of his project, changes to an account with
`allowsUnsignedKeychange = TRUE` don't require any signature. A more
formal implementation of the protocol could require a provider signature
for any changes to an account with an `allowsUnsignedKeychange = TRUE`
configuration. But it's important to note that, even if the provider
signs any user data/configuration changes, the user will still have to
determine whether the change was legitimate or not. For this reason, we
haven't made provider signatures on provider-initiated changes a
requirement.
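To make the flag's effect on client-side validation concrete, here is a
minimal sketch of the acceptance check described above. The function name
`valid_key_change` and the dict layout are mine, and an HMAC under the
user's key stands in for the digital signature a real CONIKS client would
verify, just to keep the sketch dependency-free:

```python
import hashlib
import hmac

def valid_key_change(account: dict, request: dict) -> bool:
    """Decide whether a key-change request is acceptable (sketch only).

    Real clients verify a public-key signature from the user's
    registered key; HMAC under a shared key models that here.
    """
    if account["allowsUnsignedKeychange"]:
        # Unsigned (e.g. provider-initiated) changes are permitted
        # and are not flagged as protocol violations.
        return True
    # Flag is FALSE: the request must carry a valid signature from
    # the user's currently registered key, or it is a violation.
    expected = hmac.new(account["user_key"],
                        request["new_key"], hashlib.sha256).digest()
    return hmac.compare_digest(expected, request.get("signature", b""))
```

Note how a client applying this check treats an unsigned change as benign
only when the account itself opted into that policy.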
> Question 3: Will CONIKS force users to trust untrustworthy entities for
> private key resets, or will CONIKS allow users to specify the entities
> whom they trust to re-create their identities for them?
> In DPKI, we solved this problem by allowing the user to specify the
> entities that they trust to restore their identity for them. This can be
> accomplished simply by letting the user specify the public keys and the
> n-of-m parameters (of those keys) necessary to create and broadcast
> a message that signs a new public key on behalf of the user.
> Example: Alice loses her phone. Alice uses the app to generate a new
> keypair and sends a request to the friends she authorized to sign it.
We haven't considered such an approach yet, but at first glance, this
seems like a sound alternative to simply allowing unsigned changes in
the case of a private key loss. In general, we're interested in
exploring any features that may improve the security of our current
design; if you would like to suggest this as a possible feature for one
of our implementations, I would encourage you to open an issue in our
coniks-spec repo to get a discussion started.
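For readers unfamiliar with the n-of-m idea in the question above, the
core check is a simple threshold over user-designated trustees. This is a
hypothetical sketch of that idea, not part of any CONIKS or DPKI
implementation; `recovery_authorized` and its arguments are names I made
up, and signature verification is assumed to happen elsewhere:

```python
def recovery_authorized(approvals: dict, trusted_keys: set, n: int) -> bool:
    """n-of-m threshold check for a key-recovery request (sketch).

    `approvals` maps a trustee's key id to whether that trustee's
    signature over the new public key verified. The new key is
    accepted once at least n of the user's m chosen trustees signed.
    """
    valid = sum(1 for key, ok in approvals.items()
                if ok and key in trusted_keys)
    return valid >= n
```

The security benefit is that no single party, including the provider, can
unilaterally reset a user's key.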
> Question 4: What "password"?
> On page 9, there is a strange sentence that is not elaborated, "If for
> some reason Alice forgets her private key, she can ask the provider to
> reset her password."
> This sentence does not make sense for two reasons:
> 1. Nobody remembers their private key.
> 2. A password is not a private key.
> What password is this and how is it associated with the private key and
> how can the provider "reset" it without overriding the private key.
You're right, as written this sentence makes little sense. This sentence
should read: "If ... Alice *loses* her private key..." This entire
section (4.4) assumes that users (or rather, clients) derive their key
pairs from a password, and that users must "call", or otherwise
communicate out of band with, the provider to reset their password if
they lose their private key. Although the report unfortunately fails to
mention this very important detail, this is an approach that Michael
explored as part of the account recovery protocol. Hence, the
association of private keys and passwords. Since a private key loss
requires a password reset and such a reset does override the private
key, the report goes on to state that Alice has to let her contacts know
to expect an unsigned key change at the specified epoch. Now, because we
don't want to rely on users to choose strong passwords from which to
derive strong key pairs in practice, our implementations do not use this
particular approach. So, as I mentioned above, we certainly welcome
alternative approaches for account recovery.
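To illustrate why a password reset necessarily overrides the private key
under the assumption Section 4.4 makes, here is a minimal sketch of
password-based key derivation. The function name `derive_keypair_seed` is
mine, and this is not code from coniks-java or coniks-go:

```python
import hashlib

def derive_keypair_seed(password: str, salt: bytes) -> bytes:
    """Derive key material from a password via a KDF (sketch).

    The derivation is deterministic: the same password and salt
    always yield the same seed. A password reset therefore yields a
    different seed, i.e. a brand-new key pair, which is why Alice's
    contacts should expect an unsigned key change after a reset.
    """
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
```

This determinism is also the weakness noted above: the key pair is only
as strong as the password it is derived from.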
Finally, to address your question about CONIKS' scalability that you
asked me on Twitter: "What are the scaling properties of CONIKS (either
hard data, or theoretical predictions)"? The evaluation in the CONIKS
paper (Section 5.3, "Server Overheads") provides the results of an
experiment we ran on one of our research lab servers with an early
version of coniks-java to measure how our Merkle tree implementation
scales. Our paper also provides the results of an
experiment to measure the client overhead to verify an authentication
path (Section 5.3 "Lookup Cost"). The numbers we provide for the
download sizes on the client side, on the other hand, are theoretical
calculations. We've also run a few similar benchmarks on our coniks-go
Merkle tree implementation, and will post the results by the end of the
week. We don't currently have data for verification overheads and
download sizes on the client, but I've opened an issue to address this.
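For intuition about why the client-side lookup cost stays small, here is
a sketch of authentication-path verification over a plain binary hash
tree. This is a simplification of CONIKS's actual tree construction, and
`verify_auth_path` is a name I made up:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_auth_path(leaf: bytes, path: list, root: bytes, index: int) -> bool:
    """Recompute a Merkle root from a leaf and its sibling path (sketch).

    The client hashes once per tree level, so verifying a lookup
    costs O(log n) hashes and the downloaded proof is O(log n)
    hashes long, even as the directory grows.
    """
    node = h(leaf)
    for sibling in path:
        if index % 2 == 0:
            node = h(node + sibling)   # current node is a left child
        else:
            node = h(sibling + node)   # current node is a right child
        index //= 2
    return node == root
```

For a directory of a billion users this is only about 30 hashes per
lookup, which is the property the "Lookup Cost" evaluation measures.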
I hope this all makes sense!