Consider EFF / ACLU objections by distinguishing accountable vs. voluntary DIDs #370
Comments
Additional solution proposal: use multi-sig or similar to add friction in the right places, such as requiring an additional PIV smart card on an ad-hoc basis for presenting a certain strong credential, such as a passport--perhaps the card is required to decrypt the data. This way, if you left your passport PIV smart card at home, just as if you had left your passport at home, you can no longer present it. In many ways this is at parity with the existing security model. We lose the convenience of having everything physically in one place, but sometimes that convenience is not worth the centralization risks. Looking ahead, in a world of ZKPs, we will also be able to carry the exact minimal viable verifications per environment and enforce that norm, which is an improvement over today's driver's licenses, which disclose driver ID numbers to a bouncer who only needs to check an age threshold. It's not a silver bullet, but it is one of many lead ones we can use in the right circumstances. |
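A minimal sketch of the second-factor idea in the comment above, assuming hypothetical names (`HardwareKey`, `presentStrongCredential` are illustrative, not from any spec): the strong credential stays sealed unless a physically separate key, analogous to the "passport PIV smart card", is also presented.

```typescript
// Hypothetical sketch: a strong credential (e.g. a passport VC) is stored
// encrypted, and the wallet can only decrypt it when a second, physically
// separate key is presented. All names here are illustrative.

interface HardwareKey {
  label: string;
  decrypt(ciphertext: Uint8Array): Uint8Array; // backed by a secure element in practice
}

function presentStrongCredential(
  sealedCredential: Uint8Array,
  walletKey: HardwareKey,
  passportCardKey?: HardwareKey // the optional second factor, e.g. a PIV card
): Uint8Array | null {
  if (!passportCardKey) {
    // Same failure mode as leaving a physical passport at home:
    // the credential simply cannot be shown.
    return null;
  }
  // A real design might use threshold decryption; this sketch just chains
  // the two keys so that neither alone is sufficient.
  return passportCardKey.decrypt(walletKey.decrypt(sealedCredential));
}
```

The point of the design is the deliberate friction: losing the convenience of one all-powerful wallet in exchange for reducing the centralization risk the comment describes.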
Here’s an attempt at a unified theory for protocol design based on privacy engineering.

Let’s stipulate encrypted storage and call it EDV. EDVs will have two kinds of customers, depending on whether the “client” is controlled by the data subject or by the EDV customer, for example a college, laboratory, or imaging center. Let’s stipulate that the client for an EDV is a policy decision point (PDP). The PDP could be either wallet-like (it has local biometric authentication and could be “cold”) or agent-like (fiduciary, “custodial”). EDV protocols should be agnostic to the cold vs. custodial PDP if possible.

Issuers are often EDV customers. Data subjects should have a choice of treating Issuers as oracles (directly accessible by Verifiers) in order to avoid some privacy challenges by choosing key or peer DIDs when appropriate. Issuer protocols should be agnostic to the choice of Verifier or Holder as requesting party. Either way, it’s up to the PDP, not the Issuer, to decide. The data subject may be more concerned with Verifier policies than with correlation by the Issuer.

Self-sovereign principles give the data subject control of the PDP. The data subject’s universe of eligible Verifiers will be expanded if Verifier protocols are agnostic to whether the presentation comes from an Issuer or a Holder. Giving the data subject control over their presentation path is an ethical move.

Verifier use-cases are a major privacy problem from two very different perspectives. In the EFF case, #370, the data subject needs to constrain the Verifier’s ability to demand a credential and could choose Issuers that will respond differently based on the Verifier’s credentials. For example, law enforcement might have access, but transport operators would be blocked by policy to avoid discrimination. Conversely, a customer service representative acting as a Verifier might balk at disclosing their identity to the data holder, or to the self-sovereign PDP of the data holder, for fear of side-channel attacks.

I propose that we put all this together to drive consensus around best practice across public and private DID methods, DID service endpoints, and secure data stores. The result would be a DID service endpoint that points to a PDP, combined with EDV and Issuer protocols that are agnostic to whether the PDP is wallet-like or agent-like. |
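A sketch of what the proposal's end state might look like in a DID Document, under assumptions: the `PolicyDecisionPoint` service type and the endpoint URL are hypothetical, not defined in DID Core; only the `id`/`type`/`serviceEndpoint` service shape comes from the spec.

```typescript
// Illustrative DID Document fragment (not normative): a single service
// endpoint points to a policy decision point (PDP), as proposed above.
const didDocument = {
  "@context": "https://www.w3.org/ns/did/v1",
  id: "did:example:123456789abcdefghi",
  service: [
    {
      id: "did:example:123456789abcdefghi#pdp",
      type: "PolicyDecisionPoint",              // hypothetical service type
      serviceEndpoint: "https://pdp.example.com/authorize"
    }
  ]
};
```

Whether the PDP behind that endpoint is wallet-like (local biometric, possibly "cold") or agent-like (custodial, fiduciary) would be invisible to the Verifier or EDV client, which is exactly the agnosticism the comment argues for.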
Some more context on EFF concerns around DIDs: https://www.eff.org/deeplinks/2020/08/digital-identification-must-be-designed-privacy-and-equity-10 |
This issue needs to result in a PR in the next 30 days or the issue will be deferred to the next version of the specification. |
Here's the view from outside: https://redecentralize.org/redigest/2020/10
|
@selfissued if you have a chance to review, I recall you had opinions on GDPR / privacy guidance. |
@agropper has contacted EFF twice to clarify their objections; this may be resolved by one of the other PRs around Herd Privacy. |
Have not heard back from EFF. Pending contact, this will be closed after #324 is resolved. |
I've heard back from EFF. They continue to be troubled by our work and have shared concerns with Brent (whom they see as representing DID Core) as they relate to "service endpoints and DID communications, or 'DIDCOM'". EFF hopes that accountability and privacy will be "the lead of this and not just a recommendation" [in DID Core]. Importantly, EFF's concern about our work is that DIDs are unique IDs communicated over the web and as such are at risk of being associated with an individual. This echoes my concerns that DIDs for "documents, things, and schemas" will be associated with people even if their subject is not a person. It also echoes @jandrieu's and others' framing in terms of "herd privacy". Per EFF, their concern is not mitigated by "zero-knowledge proofs and PKI promises".

Coincident with the EFF response, I raised the accountability issue in the SDS context: https://lists.identity.foundation/g/sds-wg/message/52 Also, @jandrieu, @rhiaro, and I had a call to discuss #324 and, indirectly, this EFF issue.

My conclusion, so far, is that our normative treatment of service endpoints in DID Core is key to the success of our entire enterprise because it demarcates the boundary between self-sovereign control and ambient surveillance as seen from the perspective of EFF. To try to deal with the EFF perspective, I see no alternative other than for DID Core to recognize only one type of service endpoint: an authorization agent of the DID subject, with the option that the authorization agent also acts as a mediator in order to contribute to herd privacy. A messaging service endpoint would not be an alternative to an authorization service because it, too, risks limiting herd privacy; messaging would need to be subject to authorization first. In other words, our approach to a compromise with EFF would effectively force all methods and extensions that try to support public uses of DIDs to obviously flag themselves as less private than DID Core. This is my proposed resolution to this issue. |
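A rough sketch of the "messaging subject to authorization first" idea in the comment above, assuming hypothetical names (`AuthorizationAgent`, `handleInbound`, and the request fields are illustrative, not drawn from DID Core):

```typescript
// Any inbound request to the DID subject's single service endpoint is first
// evaluated by the authorization agent; only permitted traffic is then
// mediated onward, which also helps preserve herd privacy.

type Decision = "permit" | "deny";

interface InboundRequest {
  requesterDid: string;   // who is asking (possibly an accountable, public DID)
  purpose: string;        // e.g. "age-verification", "law-enforcement"
  payload: unknown;
}

interface AuthorizationAgent {
  decide(req: InboundRequest): Decision;
}

function handleInbound(
  agent: AuthorizationAgent,
  req: InboundRequest,
  forward: (r: InboundRequest) => void
): Decision {
  const decision = agent.decide(req);
  if (decision === "permit") {
    forward(req); // mediation happens only after authorization
  }
  return decision;
}
```

In this framing there is no stand-alone messaging endpoint to probe: an observer only ever sees the authorization agent, which can also mediate for many subjects at once.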
Does EFF agree with this position? Does writing up the item stated above in the DID Core specification address their concern? Ideally, EFF would weigh in on this issue directly. Are they able to do that? |
They don't want to engage directly, as I explicitly requested, so I seem to be the yenta in this situation. The ball is in our court. We can reach out to them with a potential consensus proposition or not.
|
This might be EFF's answer to our question: https://www.eff.org/wp/interoperability-and-privacy I urge everyone working on SSI to read it. |
Here's the TL;DR version of EFF's proposals to smash monopolies while keeping user privacy, though the whole thing is certainly worth reading:
|
I'll also note that DIDs (ownership over identifiers), VCs (customer consent), zcaps (delegation), and EDVs (always-on encryption, data portability) go a long way towards addressing EFF's architectural needs. I think we're aligned; it's just that no one knows that yet. |
We are aligned, but we also need to realize that EDVs are not enough. We need GNAP as an authorization protocol to provide the API neutrality across service providers that is at the core of their design. The IETF folks are willing to work with us to ensure GNAP doesn't preclude transports other than HTTP. The biggest issue I see right now is the connection between the cloud agent ("delegation" in the EFF piece) and the mobile wallet ("user agent" in the EFF piece). That part of the overall API protocol may be out of band for GNAP, but it seems essential for key management, whether the cloud agent is a client of an EDV or of Facebook as a service provider. The other issue we need to deal with, relative to authorization and audit, is the options for the authorization token: capabilities, JWTs, and encrypted tokens will likely all be needed, and the protocol will have to deal with that. |
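To make the GNAP role concrete, here is a very rough illustration of a GNAP-style grant request from a cloud agent (acting for the data subject) to an EDV's authorization server. Field names only approximate the GNAP drafts, and the `edv-read` access type, vault URL, and key details are assumptions for illustration.

```typescript
// Sketch of a GNAP-style grant request; treat field names as approximate.
const grantRequest = {
  access_token: {
    access: [
      {
        type: "edv-read",                                   // hypothetical access type
        locations: ["https://edv.example.com/vaults/abc"],  // illustrative vault URL
        actions: ["read"]
      }
    ]
  },
  client: {
    key: { proof: "httpsig", jwk: { /* client's public key would go here */ } }
  },
  interact: {
    start: ["redirect"]   // hand interaction off to the wallet / user agent
  }
};
```

The token that comes back could be a capability, a JWT, or an encrypted token, which is the flexibility the comment above says the protocol must accommodate.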
@agropper, we are getting to this issue now in the WG. Are you interested in writing a PR around this? If not, we will need to close it. |
This was discussed during the #did meeting on 06 December 2024. View the transcript: w3c/did-core#370
decentralgabe: This one was opened by Adrian, four years ago. Suggestion to classify DIDs into Accountable and Voluntary categories. There doesn't seem to be consensus or a concrete proposal; the suggestion is to ask Adrian to write language and mark it as pending close.
manu: Agreed. |
On August 4, EFF and the ACLU again objected to a CA law about digital verifiable credentials. Although it's about VCs, the objection raises significant issues around DIDs that we could try to mitigate in DID Core. There are four reasons for their objection [my summary]:
The first two of these issues are as much about DIDs as they are about VCs. The third could be mitigated if the presentation were just a QR code on a paper card. The fourth relates to DIDs in the sense that the context implies a chain of trust, and that trust is mediated through DIDs.
To mitigate these objections, DID Core might separate DIDs and DID methods into Accountable or Voluntary.
Accountable: Law enforcement officers, physicians, public notaries and others acting on the basis of their public credentials would be required to present their A-DID before requesting any DID or VC from a person. It would be illegal to ask an individual for an A-DID in the same way it's illegal to condition a service on providing a Social Security Number. Exceptions would be made for law enforcement and signing of contracts with the expectation of legal enforcement. Some DID methods would be compatible with A-DID based on strict criteria for non-repudiation.
Voluntary: A person could present any V-DID they chose without expectation of accountability, identity de-duplication, or correlation.
For any use-case, the Issuer or Verifier would be required to specify whether an A-DID or a V-DID is acceptable. Our protocols would need to consider the sequence where a requesting party needs to present their (public) credentials in an accountable manner (signed and timestamped) before an individual responds with an A-DID or a related VC.
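A minimal sketch of the accountable-before-voluntary sequence proposed above, under assumptions: the type names, the `signatureValid` flag, and the preference for the least-accountable acceptable DID are all illustrative choices, not part of any specification.

```typescript
// A verifier must first present a signed, timestamped credential bound to its
// Accountable DID (A-DID); only then does the holder's agent decide whether
// to answer with an A-DID, a V-DID, or nothing at all.

type DidClass = "accountable" | "voluntary";

interface VerifierPresentation {
  did: string;
  class: DidClass;
  signedAt: string;          // timestamp of the verifier's self-presentation
  signatureValid: boolean;   // result of verifying the verifier's public credential
}

interface CredentialRequest {
  acceptable: DidClass[];    // the verifier states which class it will accept
}

function respondToRequest(
  verifier: VerifierPresentation,
  request: CredentialRequest
): DidClass | "refuse" {
  // A party requesting on the basis of public authority must identify accountably first.
  if (verifier.class !== "accountable" || !verifier.signatureValid) {
    return "refuse";
  }
  // Prefer the least-accountable DID the verifier will accept.
  if (request.acceptable.includes("voluntary")) return "voluntary";
  if (request.acceptable.includes("accountable")) return "accountable";
  return "refuse";
}
```

This mirrors the comment's framing: asking for an A-DID is the exception that must itself be justified and logged, while V-DIDs remain the default for everyday presentations.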