Weighted and un-weighted accuracy metrics #4

Open
vandana-rajan opened this issue Sep 6, 2021 · 0 comments

Comments

@vandana-rajan

Hi David,

Thanks a lot for providing the code for your wonderful work.

I have a question about the WA and UA metrics. As far as I understand, "Weighted accuracy is computed by taking the average, over all the classes, of the fraction of correct predictions in this class (i.e. the number of correctly predicted instances in that class, divided by the total number of instances in that class). Unweighted accuracy is the fraction of instances predicted correctly (i.e. total correct predictions, divided by total instances). Unweighted accuracy gives the same weight to each class, regardless of how many samples of that class the dataset contains. Weighted accuracy weighs each class according to the number of samples that belong to that class in the dataset."
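Whichever label one attaches to each quantity, the two metrics being contrasted are overall accuracy versus the mean of per-class recalls. A minimal scikit-learn sketch (an illustration on toy data, not the repository's actual code):

```python
# Illustrating the two accuracy variants on a small imbalanced toy example.
import numpy as np
from sklearn.metrics import accuracy_score, recall_score

y_true = np.array([0, 0, 0, 0, 1, 1])  # imbalanced: 4 of class 0, 2 of class 1
y_pred = np.array([0, 0, 0, 0, 1, 0])  # one class-1 sample misclassified

# Overall accuracy: total correct predictions / total instances.
# The majority class dominates this number.
overall_acc = accuracy_score(y_true, y_pred)  # 5/6 ~ 0.8333

# Mean per-class recall: compute recall within each class, then average.
# Each class counts equally regardless of its size.
mean_class_recall = recall_score(y_true, y_pred, average="macro")  # (4/4 + 1/2) / 2 = 0.75

print(f"overall accuracy   = {overall_acc:.4f}")
print(f"mean class recall  = {mean_class_recall:.4f}")
```

On imbalanced data such as IEMOCAP, the two numbers diverge, which is why the labeling convention matters when comparing reported results.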

However, in your code (https://github.com/david-yoon/attentive-modality-hopping-for-SER/blob/master/util/measure_WA_UA.py), the `sample_weight` parameter is passed in the `unweighted_accuracy` function. Also, the discussion in issue #1 seems to indicate the exact opposite of my understanding of the WA and UA metrics. Could you kindly clarify?

Thanks,
VR
