I wonder if it is possible to use the universal confidence threshold b proposed in SphereFace2 as an actual threshold?
We set -b/s as the threshold and compared it against the similarity score, but this did not yield good performance.
The following pseudocode illustrates the decision rule (the variable names match those in https://github.com/wenet-e2e/wespeaker/blob/master/wespeaker/models/projections.py):

```
self.fun_g(cos, self.t) > -self.bias[0][0] / self.scale   # predicted positive (same speaker)
self.fun_g(cos, self.t) < -self.bias[0][0] / self.scale   # predicted negative (different speakers)
```
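The decision rule above can be sketched as a small standalone function. This is a minimal re-implementation, not the wespeaker code itself: the form of `fun_g` shown here, g(z) = 2*((z+1)/2)^t - 1, is the warping function from the SphereFace2 paper, and the exact expression in `projections.py` may differ; the parameter values in the usage example are made up.

```python
def fun_g(cos_sim: float, t: float) -> float:
    # Monotonic warping g(z) = 2*((z+1)/2)^t - 1 as described in the
    # SphereFace2 paper (assumed form; check wespeaker's projections.py
    # for the exact implementation used in training).
    return 2.0 * ((cos_sim + 1.0) / 2.0) ** t - 1.0

def predict_same_speaker(cos_sim: float, bias: float, scale: float,
                         t: float = 3.0) -> bool:
    # Accept the trial as "same speaker" when the warped cosine
    # similarity exceeds the universal confidence threshold -b/s.
    return fun_g(cos_sim, t) > -bias / scale

# Example with hypothetical values b = 10, s = 32 (threshold -b/s = -0.3125):
print(predict_same_speaker(0.5, 10.0, 32.0))    # high similarity -> accepted
print(predict_same_speaker(-0.5, 10.0, 32.0))   # low similarity  -> rejected
```

Because g is monotonically increasing, the same decision can equivalently be made directly in cosine space by inverting g at -b/s, so the choice of comparing warped or raw scores does not change which trials are accepted.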
If there is any related code or method, please let me know.
Thank you.
Thank you for your attention to this work.
In my opinion, the universal confidence threshold b can only serve as the boundary between speakers in the training set. During evaluation, the speakers do not overlap with those in the training set, so the threshold should be set according to the actual application's needs.
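One common way to set such an evaluation threshold, sketched here under the assumption that a labeled development trial list (similarity scores plus same/different-speaker labels) is available, is to pick the score at the equal error rate (EER) operating point. This is a generic technique, not something from wespeaker or the SphereFace2 paper; the function name and the toy data are hypothetical.

```python
def eer_threshold(scores, labels):
    # scores: similarity score per trial; labels: 1 = same speaker,
    # 0 = different speaker. Assumes both classes are present.
    # A trial is accepted when its score is strictly above the threshold.
    pairs = sorted(zip(scores, labels))
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    fn = 0       # positives at or below the current threshold (rejected)
    fp = n_neg   # negatives above the current threshold (accepted)
    best_thr, best_gap = pairs[0][0], float("inf")
    for score, label in pairs:
        if label == 1:
            fn += 1
        else:
            fp -= 1
        fnr, fpr = fn / n_pos, fp / n_neg
        # Keep the threshold where miss rate and false-accept rate
        # are closest, i.e. the EER point.
        if abs(fnr - fpr) < best_gap:
            best_gap, best_thr = abs(fnr - fpr), score
    return best_thr

# Toy development set: two target and two non-target trials.
thr = eer_threshold([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0])
print(thr)  # separates the two classes perfectly here
```

In practice the operating point need not be the EER: applications that must avoid false accepts would tune the threshold toward a lower false-accept rate on the same development list.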
Thank you for the nice work!