Cosine similarity between two facial features and cross origin tracking #22
Good point. I agree that we need to pay attention to possible threats when using the WebNN API later on. In the above example, both image capture and individual tracking rely on visual information, so it may be relatively easy to make users aware of such a threat by prompting for permission to use the camera. However, there are cases where machine learning can unexpectedly infer a different type of private information from some input data: e.g. eavesdropping by recovering speech from gyroscope readings. The Security and Privacy Considerations in the Generic Sensor spec may provide relevant examples. I'm currently skeptical that we could protect against such a threat by API design alone, but I hope we can make our best effort to mitigate it.
I also see that some of the proposals discussed in other issues mention capabilities; this is a fingerprinting surface that should also be looked into by the group.
The fingerprinting concern due to capabilities is pointed out in: webmachinelearning/webnn#3 (comment)
Can we update this title to be more descriptive about the privacy concerns here? It seems the particular privacy concern is inference of personal information from pre-trained NN models. (Although I imagine there will be more as we discuss different features in more detail.)
@cynthia, I think this concern should be noted in the privacy considerations of the WebNN API spec, perhaps with a note that camera access permission is a partial mitigation. This issue was not specifically discussed in the context of the TAG review of the API, so let's consider this an amendment to that review. Feel free to polish the following description before we turn it into a PR:
@npdoty, I renamed the issue to be more descriptive. Feel free to chime in with any comments you or PING may have on the concern itself. We'll seek your review for the PR.
The WG discussed this issue on its 2 June 2022 call and thinks that this issue is best discussed in the context of the Ethical Principles for Web Machine Learning effort. The https://webmachinelearning.github.io/ethical-webmachinelearning/#privacy section would be a good place where this issue could be highlighted as an example. I'm transferring this issue to that repo accordingly.
(TAG hat on.)
Having fast access to, say, features extracted from a model trained on faces is a reasonably scary vector from a privacy perspective.
Cosine similarity between two facial feature vectors (e.g. pulled from the last FC layer, for a pretty blunt approach) effectively enables a cross-origin individual-tracking tool, given that you can get the user to opt in to getUserMedia. Probably something that will have to be considered later on.
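To make the concern concrete, here is a minimal sketch of the attack's core arithmetic. The embeddings below are random placeholders standing in for vectors read out of a face model's last fully connected layer; all names and dimensions are illustrative assumptions, not part of any proposal.

```python
import math
import random

def cosine_similarity(a, b):
    """Cosine similarity of two equal-length feature vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

random.seed(0)
# Hypothetical 128-d face embeddings (illustrative stand-ins only).
emb_origin_a = [random.gauss(0, 1) for _ in range(128)]
# Same user observed on a second origin: same embedding plus small noise.
emb_origin_b = [x + random.gauss(0, 0.05) for x in emb_origin_a]
# A different user: an independent embedding.
emb_other = [random.gauss(0, 1) for _ in range(128)]

same_user = cosine_similarity(emb_origin_a, emb_origin_b)  # close to 1.0
diff_user = cosine_similarity(emb_origin_a, emb_other)     # close to 0.0
```

Two colluding origins that each obtained an embedding via camera access would only need to compare similarities against a threshold to decide whether they saw the same person, which is what makes this a cross-origin tracking primitive rather than just a model-inference detail.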