
Utilize synthetic bad images #27

Closed
dzenanz opened this issue Jun 9, 2021 · 4 comments · Fixed by #247


dzenanz commented Jun 9, 2021

https://torchio.readthedocs.io/transforms/augmentation.html
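The linked TorchIO page documents augmentation transforms such as `RandomGhosting` and `RandomMotion` that synthesize "bad" images from good ones. As a toy illustration of the ghosting idea only (not TorchIO's actual implementation, which works on 3D volumes and is more careful, e.g. about preserving the k-space center), periodically attenuating k-space lines produces ghost copies of the anatomy:

```python
import numpy as np

def add_ghosting(image, num_ghosts=4, intensity=0.7, axis=0):
    """Toy MRI ghosting: attenuate every num_ghosts-th k-space line.

    Periodic modulation in k-space shows up as shifted "ghost" copies
    of the object in image space. Illustrative sketch only.
    """
    kspace = np.fft.fftshift(np.fft.fft2(image))
    mask = np.ones(kspace.shape[axis])
    mask[::num_ghosts] = 1.0 - intensity      # periodic attenuation -> ghosts
    shape = [1, 1]
    shape[axis] = -1                           # broadcast mask along `axis`
    ghosted = np.fft.ifft2(np.fft.ifftshift(kspace * mask.reshape(shape)))
    return np.abs(ghosted)

img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0                        # bright square "phantom"
bad = add_ghosting(img)
print(bad.shape)                               # (64, 64)
```

Feeding such synthetically degraded images (with correspondingly lowered quality labels) into training is the idea behind this issue.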

@dzenanz dzenanz self-assigned this Jun 9, 2021
@aashish24 aashish24 added this to the Sprint August 1-15 milestone Jul 29, 2021
@aashish24 aashish24 removed this from the Sprint August 1-15 milestone Oct 28, 2021

dzenanz commented Nov 30, 2021

Baseline 5-fold cross validation results using dd222c0 before augmentation is added:
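(The five `val`/`train` report pairs below correspond to the five folds. For context, a minimal sketch of how a 5-fold split partitions the data, every sample serving as validation exactly once; this is illustrative, not the actual MIQA training code:)

```python
# Minimal k-fold split sketch (illustrative; not the actual MIQA code).
def k_fold_indices(n_samples, k=5):
    """Yield (train_idx, val_idx) pairs; each sample is validated once."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    start = 0
    for size in fold_sizes:
        val = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n_samples))
        yield train, val
        start += size

folds = list(k_fold_indices(10, 5))
print(len(folds))       # 5
print(folds[0][1])      # [0, 1]
```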

Loaded NN model from file "/home/dzenan/miqa/miqa/learning/models/miqaT1-val0.pth"
Evaluating NN model on validation data
val_confusion_matrix:
[[ 4  5  1 15  9  8  9  2  2  1  1]
 [ 0  0  1  0  1  1  1  0  0  0  0]
 [ 0  0  0  1  1  1  1  0  1  0  0]
 [ 0  0  0  0  2  0  0  0  0  0  0]
 [ 0  0  0  1  0  1  1  0  1  0  0]
 [ 0  0  1  0  0  0  1  1  0  0  0]
 [ 1  0  6 10 29 45 68 77 38 23  8]
 [ 0  0  0  0  1  5  8  6 11 10  6]
 [ 0  0  1  0  5 10 13 33 52 69 31]
 [ 0  0  0  1  4  5  7 23 54 64 65]
 [ 0  0  1  4  9 12 15 22 40 32 34]]
              precision    recall  f1-score   support
         0.0       0.80      0.07      0.13        57
         1.0       0.00      0.00      0.00         4
         2.0       0.00      0.00      0.00         5
         3.0       0.00      0.00      0.00         2
         4.0       0.00      0.00      0.00         4
         5.0       0.00      0.00      0.00         3
         6.0       0.55      0.22      0.32       305
         7.0       0.04      0.13      0.06        47
         8.0       0.26      0.24      0.25       214
         9.0       0.32      0.29      0.30       223
        10.0       0.23      0.20      0.22       169
    accuracy                           0.22      1033
   macro avg       0.20      0.10      0.12      1033
weighted avg       0.37      0.22      0.26      1033

train_confusion_matrix:
[[472  59   7   2   0   0   0   0   0   0   0]
 [343 149   0   0   0   0   0   0   0   0   0]
 [  0 178 148  23   0   0   0   0   0   0   0]
 [  0   0 192 286   0   0   0   0   0   0   0]
 [  0   0   0 137 336  36   0   0   0   0   0]
 [  0   0   0   0   0 245  56   0   0   0   0]
 [  0   1   1   2  18  50  92  94  48   8   2]
 [  0   0   0   0   0   4  13  90 117  15   0]
 [  0   1   0   1   2   7   8  37  65  84  34]
 [  0   0   0   1   1   1   5  22  64 163 105]
 [  0   0   0   0   0   2   8  11  49 121 168]]
              precision    recall  f1-score   support
         0.0       0.58      0.87      0.70       540
         1.0       0.38      0.30      0.34       492
         2.0       0.43      0.42      0.42       349
         3.0       0.63      0.60      0.62       478
         4.0       0.94      0.66      0.78       509
         5.0       0.71      0.81      0.76       301
         6.0       0.51      0.29      0.37       316
         7.0       0.35      0.38      0.37       239
         8.0       0.19      0.27      0.22       239
         9.0       0.42      0.45      0.43       362
        10.0       0.54      0.47      0.50       359
    accuracy                           0.53      4184
   macro avg       0.52      0.50      0.50      4184
weighted avg       0.55      0.53      0.53      4184


Loaded NN model from file "/home/dzenan/miqa/miqa/learning/models/miqaT1-val1.pth"
val_confusion_matrix:
[[ 2  7 10  3  7 14 10  6  1  1  0]
 [ 0  0  0  0  1  0  0  1  2  0  0]
 [ 0  1  1  1  0  0  0  0  0  0  0]
 [ 0  0  0  0  2  0  0  0  0  0  0]
 [ 0  0  0  2  2  0  1  0  0  0  0]
 [ 0  0  1  0  0  0  0  1  0  0  0]
 [ 0  0  5 10 25 59 83 97 50 20  2]
 [ 0  1  0  1  2  6  7  8 13 13 15]
 [ 0  1  0  2  1  9 18 33 58 70 43]
 [ 0  0  0  1  1  5  8 22 57 52 32]
 [ 0  0  3  2  5  8 16 33 27 22 17]]
              precision    recall  f1-score   support
         0.0       1.00      0.03      0.06        61
         1.0       0.00      0.00      0.00         4
         2.0       0.05      0.33      0.09         3
         3.0       0.00      0.00      0.00         2
         4.0       0.04      0.40      0.08         5
         5.0       0.00      0.00      0.00         2
         6.0       0.58      0.24      0.34       351
         7.0       0.04      0.12      0.06        66
         8.0       0.28      0.25      0.26       235
         9.0       0.29      0.29      0.29       178
        10.0       0.16      0.13      0.14       133
    accuracy                           0.21      1040
   macro avg       0.22      0.16      0.12      1040
weighted avg       0.39      0.21      0.25      1040

train_confusion_matrix:
[[ 28  92 139 103  65  28  10  13   0   1   0]
 [  0 189 298   0   0   0   0   0   0   0   0]
 [  0   0 287  90   0   0   0   0   0   0   0]
 [  0   0   0 512   0   0   0   0   0   0   0]
 [  0   0   0  61 147 278   0   0   0   0   0]
 [  0   0   0   0   0 171 199   0   0   0   0]
 [  1   3   0   6  22  41  76  72  40  14   4]
 [  0   0   0   4   5   2  21  48  80  54  12]
 [  0   0   0   1   2   2  21  30  77  76  41]
 [  0   0   0   3   3   6  13  31  80 117  73]
 [  0   0   0   2   8  17  42  65  71 107  73]]
              precision    recall  f1-score   support
         0.0       0.97      0.06      0.11       479
         1.0       0.67      0.39      0.49       487
         2.0       0.40      0.76      0.52       377
         3.0       0.65      1.00      0.79       512
         4.0       0.58      0.30      0.40       486
         5.0       0.31      0.46      0.37       370
         6.0       0.20      0.27      0.23       279
         7.0       0.19      0.21      0.20       226
         8.0       0.22      0.31      0.26       250
         9.0       0.32      0.36      0.34       326
        10.0       0.36      0.19      0.25       385
    accuracy                           0.41      4177
   macro avg       0.44      0.39      0.36      4177
weighted avg       0.49      0.41      0.38      4177


Loaded NN model from file "/home/dzenan/miqa/miqa/learning/models/miqaT1-val2.pth"
val_confusion_matrix:
[[ 1  7  8 14 16  8  6  5  2  1  0]
 [ 0  0  1  0  0  1  0  0  0  0  0]
 [ 0  0  0  0  0  0  0  0  0  0  0]
 [ 0  0  0  0  0  0  0  0  0  0  0]
 [ 0  0  1  0  0  1  0  0  0  0  1]
 [ 0  0  0  0  0  0  0  1  0  0  0]
 [ 0  1  5 10 31 47 75 73 48 32  8]
 [ 0  0  0  1  1  2  5  9  7 10  5]
 [ 0  0  1  2  4  4 12 34 60 58 23]
 [ 0  0  2  1  1  5 18 47 57 51 34]
 [ 0  0  0  2  8 13 19 24 45 35 24]]
Recall and F-score are ill-defined and being set to 0.0 in labels with no true samples. Use `zero_division` parameter to control this behavior.
              precision    recall  f1-score   support
         0.0       1.00      0.01      0.03        68
         1.0       0.00      0.00      0.00         2
         2.0       0.00      0.00      0.00         0
         3.0       0.00      0.00      0.00         0
         4.0       0.00      0.00      0.00         3
         5.0       0.00      0.00      0.00         1
         6.0       0.56      0.23      0.32       330
         7.0       0.05      0.23      0.08        40
         8.0       0.27      0.30      0.29       198
         9.0       0.27      0.24      0.25       216
        10.0       0.25      0.14      0.18       170
    accuracy                           0.21      1028
   macro avg       0.22      0.10      0.10      1028
weighted avg       0.40      0.21      0.25      1028

train_confusion_matrix:
[[230 108  64  30   5   4   0   0   0   0   0]
 [499   0   0   0   0   0   0   0   0   0   0]
 [  0 298 128   0   0   0   0   0   0   0   0]
 [  0   0 249 264  87   0   0   0   0   0   0]
 [  0   0   0 101 362  27   0   0   0   0   0]
 [  0   0   0   0   0 437   0   0   0   0   0]
 [  0   1   4   8  13  43  52  73  37  19   7]
 [  0   0   0   0   6  11  20  70  91  37   0]
 [  0   0   0   1   2  10  17  37  64  59  27]
 [  0   0   1   0   1   3  13  29 100  81  58]
 [  0   0   0   0   3   9  15  34  73  86  81]]
              precision    recall  f1-score   support
         0.0       0.32      0.52      0.39       441
         1.0       0.00      0.00      0.00       499
         2.0       0.29      0.30      0.29       426
         3.0       0.65      0.44      0.53       600
         4.0       0.76      0.74      0.75       490
         5.0       0.80      1.00      0.89       437
         6.0       0.44      0.20      0.28       257
         7.0       0.29      0.30      0.29       235
         8.0       0.18      0.29      0.22       217
         9.0       0.29      0.28      0.29       286
        10.0       0.47      0.27      0.34       301
    accuracy                           0.42      4189
   macro avg       0.41      0.40      0.39      4189
weighted avg       0.43      0.42      0.42      4189


Loaded NN model from file "/home/dzenan/miqa/miqa/learning/models/miqaT1-val3.pth"
val_confusion_matrix:
[[ 1  2 13  7  9 11  9  7  6  4  2]
 [ 0  0  0  1  2  0  0  0  1  0  0]
 [ 0  0  0  1  2  1  0  0  0  0  0]
 [ 0  1  1  0  0  0  0  0  0  0  0]
 [ 0  0  0  1  1  0  0  0  0  0  0]
 [ 0  0  0  0  0  1  0  0  0  0  0]
 [ 3  1  3  4 20 35 61 62 48 41 16]
 [ 0  0  0  2  2  3  2  5  9  6  2]
 [ 0  2  2  3  4  6 10 16 61 60 40]
 [ 0  0  0  1  5  4 14 27 67 84 52]
 [ 0  0  1  0  6 10 16 30 40 35 42]]
              precision    recall  f1-score   support
         0.0       0.25      0.01      0.03        71
         1.0       0.00      0.00      0.00         4
         2.0       0.00      0.00      0.00         4
         3.0       0.00      0.00      0.00         2
         4.0       0.02      0.50      0.04         2
         5.0       0.01      1.00      0.03         1
         6.0       0.54      0.21      0.30       294
         7.0       0.03      0.16      0.06        31
         8.0       0.26      0.30      0.28       204
         9.0       0.37      0.33      0.35       254
        10.0       0.27      0.23      0.25       180
    accuracy                           0.24      1047
   macro avg       0.16      0.25      0.12      1047
weighted avg       0.36      0.24      0.27      1047

train_confusion_matrix:
[[280 108  20   9   0   0   0   0   0   0   0]
 [384 106   0   0   0   0   0   0   0   0   0]
 [  0  97 206   0   0   0   0   0   0   0   0]
 [  0   0  86 398   0   0   0   0   0   0   0]
 [  0   0   0   0 542  40   0   0   0   0   0]
 [  0   0   0   0   0 274 116   0   0   0   0]
 [  0   0   0   1  11  32  56  85  57  27   5]
 [  0   0   0   0   0   5  14  59 152  38   0]
 [  0   0   0   0   2   5   6  27 106 107  44]
 [  0   0   0   0   1   7  11  15  76 130 102]
 [  0   0   0   0   2   1   2   4  40 104 170]]
              precision    recall  f1-score   support
         0.0       0.42      0.67      0.52       417
         1.0       0.34      0.22      0.26       490
         2.0       0.66      0.68      0.67       303
         3.0       0.98      0.82      0.89       484
         4.0       0.97      0.93      0.95       582
         5.0       0.75      0.70      0.73       390
         6.0       0.27      0.20      0.23       274
         7.0       0.31      0.22      0.26       268
         8.0       0.25      0.36      0.29       297
         9.0       0.32      0.38      0.35       342
        10.0       0.53      0.53      0.53       323
    accuracy                           0.56      4170
   macro avg       0.53      0.52      0.52      4170
weighted avg       0.57      0.56      0.56      4170


Loaded NN model from file "/home/dzenan/miqa/miqa/learning/models/miqaT1-val4.pth"
val_confusion_matrix:
[[ 1  2  5  5 10  9 13  9  9  6  0]
 [ 0  0  0  0  2  0  0  0  1  0  0]
 [ 0  0  0  1  0  0  1  0  2  0  0]
 [ 0  0  0  0  1  0  0  0  0  0  0]
 [ 0  0  0  0  1  2  1  0  0  0  0]
 [ 0  0  0  1  0  0  0  0  0  0  0]
 [ 1  2  9 11 16 25 59 74 53 30 18]
 [ 0  0  0  3  1  4  2  9 19  8  6]
 [ 0  0  0  1  4  5 18 36 69 69 30]
 [ 0  3  1  0  1  5  5 26 66 86 39]
 [ 0  0  0  3  8 12 24 22 44 36 24]]
              precision    recall  f1-score   support
         0.0       0.50      0.01      0.03        69
         1.0       0.00      0.00      0.00         3
         2.0       0.00      0.00      0.00         4
         3.0       0.00      0.00      0.00         1
         4.0       0.02      0.25      0.04         4
         5.0       0.00      0.00      0.00         1
         6.0       0.48      0.20      0.28       298
         7.0       0.05      0.17      0.08        52
         8.0       0.26      0.30      0.28       232
         9.0       0.37      0.37      0.37       232
        10.0       0.21      0.14      0.17       173
    accuracy                           0.23      1069
   macro avg       0.17      0.13      0.11      1069
weighted avg       0.34      0.23      0.25      1069

train_confusion_matrix:
[[376  61   3   4   0   0   0   0   0   0   0]
 [395 139   0   0   0   0   0   0   0   0   0]
 [  0 133 174   0   0   0   0   0   0   0   0]
 [  0   0  79 461   0   0   0   0   0   0   0]
 [  0   0   0  54 423   0   0   0   0   0   0]
 [  0   0   0   0   0 310 108   0   0   0   0]
 [  0   0   0   4   5  34  66 106  72  15   6]
 [  0   0   0   0   0   6   5 109 103  19   0]
 [  1   0   0   1   5   2  10  44 103  74  17]
 [  0   0   0   0   2   0   3  28  98 118  46]
 [  0   0   0   0   1   0   5  12  52 123 133]]
              precision    recall  f1-score   support
         0.0       0.49      0.85      0.62       444
         1.0       0.42      0.26      0.32       534
         2.0       0.68      0.57      0.62       307
         3.0       0.88      0.85      0.87       540
         4.0       0.97      0.89      0.93       477
         5.0       0.88      0.74      0.81       418
         6.0       0.34      0.21      0.26       308
         7.0       0.36      0.45      0.40       242
         8.0       0.24      0.40      0.30       257
         9.0       0.34      0.40      0.37       295
        10.0       0.66      0.41      0.50       326
    accuracy                           0.58      4148
   macro avg       0.57      0.55      0.54      4148
weighted avg       0.61      0.58      0.58      4148


dzenanz commented Dec 6, 2021

A simple addition of synthetic ghosting and motion artifacts increases the R² (R squared) metric by about 0.1 to 0.2. Note that R² ranges from -inf (predictions arbitrarily worse than just predicting the mean; 0.0 means no better than the mean) up to +1.0 (perfect prediction). Below is a before-and-after graph of R² on the validation set at each epoch during training of the 5-fold cross validation.

[Graph: validation R² per epoch for each fold, before vs. after ghosting (G) and motion (M) augmentation]

Before commit: dd222c0
After commit: a573c9a
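For reference, the R² used above can be computed as 1 − SS_res/SS_tot; a minimal sketch (hypothetical helper, equivalent in spirit to `sklearn.metrics.r2_score`) showing its range:

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot.

    1.0 for perfect predictions, 0.0 for always predicting the mean,
    and arbitrarily negative for predictions worse than the mean.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

y = [0, 2, 4, 6, 8]
print(r2_score(y, y))                  # 1.0  (perfect)
print(r2_score(y, [4, 4, 4, 4, 4]))    # 0.0  (predicting the mean)
print(r2_score(y, [8, 6, 4, 2, 0]))    # -3.0 (worse than the mean)
```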


dzenanz commented Dec 6, 2021

Created follow-up issue #248.


dzenanz commented Dec 6, 2021

5-fold cross validation results using a573c9a after augmentation is added:
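(A note on the "ill-defined ... set to 0.0" warnings interspersed in these logs: scikit-learn emits them when a class has no true samples (recall undefined) or no predicted samples (precision undefined), and substitutes the `zero_division` value. A numpy-only sketch of the recall case, assuming the usual rows-are-true, columns-are-predicted confusion matrix layout:)

```python
import numpy as np

def per_class_recall(cm, zero_division=0.0):
    """Recall per class from a confusion matrix (rows = true, cols = predicted).

    A row summing to zero means that class has no true samples, so recall
    is ill-defined; like scikit-learn, substitute `zero_division` there.
    """
    cm = np.asarray(cm)
    support = cm.sum(axis=1)
    diag = np.diag(cm).astype(float)
    return np.divide(diag, support,
                     out=np.full(len(cm), float(zero_division)),
                     where=support > 0)

cm = np.array([[5, 1],
               [0, 0]])              # class 1 never occurs in this val set
print(per_class_recall(cm))          # [0.8333... 0.]
```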

Loaded NN model from file "/home/dzenan/miqa/miqa/learning/models/miqaT1-val0.pth"
val_confusion_matrix:
[[  4   5   3   7  16  10   7   2   2   1   0]
 [  0   0   0   1   2   0   0   1   0   0   0]
 [  1   0   0   1   0   0   1   0   2   0   0]
 [  0   0   0   0   0   1   1   0   0   0   0]
 [  0   0   0   0   0   1   2   1   0   0   0]
 [  0   0   0   0   1   1   1   0   0   0   0]
 [  0   0   1   5  12  42  76  84  62  17   6]
 [  0   0   0   0   1   3   5   8  11  12   7]
 [  0   0   0   1   6  14  13  27  47  76  30]
 [  0   0   0   0   2   2   8  16  34 106  55]
 [  0   0   0   2   3   7  12  34  43  44  24]]
              precision    recall  f1-score   support
           0       0.80      0.07      0.13        57
           1       0.00      0.00      0.00         4
           2       0.00      0.00      0.00         5
           3       0.00      0.00      0.00         2
           4       0.00      0.00      0.00         4
           5       0.01      0.33      0.02         3
           6       0.60      0.25      0.35       305
           7       0.05      0.17      0.07        47
           8       0.23      0.22      0.23       214
           9       0.41      0.48      0.44       223
          10       0.20      0.14      0.16       169
    accuracy                           0.26      1033
   macro avg       0.21      0.15      0.13      1033
weighted avg       0.39      0.26      0.28      1033

train_confusion_matrix:
[[165 126 123 118  72  35  16   3   1   0   0]
 [ 30  48  41  43  33   9   2   2   0   0   0]
 [ 11  25  52  72  72  30   6   3   1   0   0]
 [  3  20  42  75  80  32  13   4   1   0   0]
 [  6  10  27  73  88  45  18   6   6   0   0]
 [  1   7  15  27  45  42  14   8   0   0   0]
 [  0   4   8  24  58 101 200 280 152  21   1]
 [  0   0   0   5  12  19  22  25  24  33   7]
 [  0   0   1   0   4  20  13  52 135 197  76]
 [  0   0   0   0   1   4  13  20  84 214 135]
 [  0   0   0   0   2   5  19  41  83 112 110]]
              precision    recall  f1-score   support
           0       0.76      0.25      0.38       659
           1       0.20      0.23      0.21       208
           2       0.17      0.19      0.18       272
           3       0.17      0.28      0.21       270
           4       0.19      0.32      0.24       279
           5       0.12      0.26      0.17       159
           6       0.60      0.24      0.34       849
           7       0.06      0.17      0.08       147
           8       0.28      0.27      0.27       498
           9       0.37      0.45      0.41       471
          10       0.33      0.30      0.31       372
    accuracy                           0.28      4184
   macro avg       0.30      0.27      0.25      4184
weighted avg       0.40      0.28      0.30      4184


Loaded NN model from file "/home/dzenan/miqa/miqa/learning/models/miqaT1-val1.pth"
val_confusion_matrix:
[[  0   5   9   5  16   6  14   3   3   0   0]
 [  0   0   0   0   1   0   1   1   1   0   0]
 [  0   0   0   1   1   1   0   0   0   0   0]
 [  0   0   0   1   1   0   0   0   0   0   0]
 [  0   0   0   0   3   1   1   0   0   0   0]
 [  0   0   1   0   0   0   0   0   1   0   0]
 [  0   0   2   7  28  69  99 101  40   5   0]
 [  0   0   0   0   4   6   8  14  23   8   3]
 [  0   0   1   1   1  11  16  55  94  51   5]
 [  0   0   0   1   1   4  10  33  71  56   2]
 [  0   0   0   1   7   8  23  41  32  19   2]]
Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior.
              precision    recall  f1-score   support
           0       0.00      0.00      0.00        61
           1       0.00      0.00      0.00         4
           2       0.00      0.00      0.00         3
           3       0.06      0.50      0.11         2
           4       0.05      0.60      0.09         5
           5       0.00      0.00      0.00         2
           6       0.58      0.28      0.38       351
           7       0.06      0.21      0.09        66
           8       0.35      0.40      0.38       235
           9       0.40      0.31      0.35       178
          10       0.17      0.02      0.03       133
    accuracy                           0.26      1040
   macro avg       0.15      0.21      0.13      1040
weighted avg       0.37      0.26      0.28      1040

train_confusion_matrix:
[[188 110 146 111  64  44  13   4   2   0   0]
 [ 28  35  44  49  28   9   3   2   0   0   0]
 [ 12  24  58  63  48  23   6   2   0   0   0]
 [  3  19  54  87  57  23  10   1   0   0   0]
 [  8  13  50  72  74  29   8   0   0   0   0]
 [  2   9  13  55  52  31  12   3   0   0   0]
 [  2   0  11  43 112 163 269 180  58   1   0]
 [  0   0   6   9  15  19  30  30  41   8   0]
 [  0   0   2   7  10  20  47  88 197  95   8]
 [  0   0   0   1   7   5  25  78 214 157  22]
 [  0   0   0   1   3  13  45  93 140  88  13]]
              precision    recall  f1-score   support
           0       0.77      0.28      0.41       682
           1       0.17      0.18      0.17       198
           2       0.15      0.25      0.19       236
           3       0.17      0.34      0.23       254
           4       0.16      0.29      0.20       254
           5       0.08      0.18      0.11       177
           6       0.57      0.32      0.41       839
           7       0.06      0.19      0.09       158
           8       0.30      0.42      0.35       474
           9       0.45      0.31      0.37       509
          10       0.30      0.03      0.06       396
    accuracy                           0.27      4177
   macro avg       0.29      0.25      0.24      4177
weighted avg       0.40      0.27      0.29      4177


Loaded NN model from file "/home/dzenan/miqa/miqa/learning/models/miqaT1-val2.pth"
val_confusion_matrix:
[[  3   1   2  11  13  18  16   2   2   0   0]
 [  0   0   0   1   0   1   0   0   0   0   0]
 [  0   0   0   0   0   0   0   0   0   0   0]
 [  0   0   0   0   0   0   0   0   0   0   0]
 [  0   0   1   0   0   0   1   0   0   0   1]
 [  0   0   0   0   0   0   1   0   0   0   0]
 [  0   0   0   3   9  42 107 110  48   9   2]
 [  0   0   0   1   1   3   6   3  10  14   2]
 [  0   0   0   3   0   5  17  24  57  73  19]
 [  0   1   1   1   1   4   9  18  62  91  28]
 [  0   0   1   4   4   6  19  35  33  57  11]]
Recall and F-score are ill-defined and being set to 0.0 in labels with no true samples. Use `zero_division` parameter to control this behavior.
              precision    recall  f1-score   support
           0       1.00      0.04      0.08        68
           1       0.00      0.00      0.00         2
           2       0.00      0.00      0.00         0
           3       0.00      0.00      0.00         0
           4       0.00      0.00      0.00         3
           5       0.00      0.00      0.00         1
           6       0.61      0.32      0.42       330
           7       0.02      0.07      0.03        40
           8       0.27      0.29      0.28       198
           9       0.37      0.42      0.40       216
          10       0.17      0.06      0.09       170
    accuracy                           0.26      1028
   macro avg       0.22      0.11      0.12      1028
weighted avg       0.42      0.26      0.29      1028

train_confusion_matrix:
[[213 125 115  91  80  29  10   3   4   0   0]
 [ 26  42  40  34  34  15   7   1   0   0   0]
 [ 21  19  62  75  58  22   7   4   0   1   0]
 [  6  17  55  90  59  28  10   0   0   0   0]
 [  6  15  42  74  76  36  13   5   0   0   0]
 [  3  10  18  43  56  35  20   7   1   0   0]
 [  0   1   9  21  76 137 269 239  71   9   0]
 [  0   0   0   2  17  30  22  26  35  23   3]
 [  0   0   0   4   4  12  35  69 172 185  29]
 [  0   0   0   0   1   6  14  27 121 245  63]
 [  0   0   0   0   0  10  19  63  95 113  49]]
              precision    recall  f1-score   support
           0       0.77      0.32      0.45       670
           1       0.18      0.21      0.20       199
           2       0.18      0.23      0.20       269
           3       0.21      0.34      0.26       265
           4       0.16      0.28      0.21       267
           5       0.10      0.18      0.13       193
           6       0.63      0.32      0.43       832
           7       0.06      0.16      0.09       158
           8       0.34      0.34      0.34       510
           9       0.43      0.51      0.47       477
          10       0.34      0.14      0.20       349
    accuracy                           0.31      4189
   macro avg       0.31      0.28      0.27      4189
weighted avg       0.42      0.31      0.33      4189


Loaded NN model from file "/home/dzenan/miqa/miqa/learning/models/miqaT1-val3.pth"
val_confusion_matrix:
[[  0   4   1   4  14  17  18   8   3   2   0]
 [  0   0   0   0   1   0   2   1   0   0   0]
 [  0   0   0   0   0   2   2   0   0   0   0]
 [  0   0   0   1   0   1   0   0   0   0   0]
 [  0   0   0   1   0   1   0   0   0   0   0]
 [  0   0   0   0   1   0   0   0   0   0   0]
 [  0   0   2   3  11  33  54  97  80  12   2]
 [  0   0   1   0   1   3   3   5   9   8   1]
 [  0   0   0   2   3   4  10  22  56  84  23]
 [  0   0   0   0   1   5  12  24  73 109  30]
 [  0   0   0   1   3  11  14  39  40  46  26]]
Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior.
              precision    recall  f1-score   support
           0       0.00      0.00      0.00        71
           1       0.00      0.00      0.00         4
           2       0.00      0.00      0.00         4
           3       0.08      0.50      0.14         2
           4       0.00      0.00      0.00         2
           5       0.00      0.00      0.00         1
           6       0.47      0.18      0.26       294
           7       0.03      0.16      0.04        31
           8       0.21      0.27      0.24       204
           9       0.42      0.43      0.42       254
          10       0.32      0.14      0.20       180
    accuracy                           0.24      1047
   macro avg       0.14      0.15      0.12      1047
weighted avg       0.33      0.24      0.26      1047

train_confusion_matrix:
[[107 114 120 118 111  59  30  16   1   0   0]
 [ 14  30  60  49  30  18   9   2   0   0   0]
 [  0  18  40  78  69  39  15   2   1   0   0]
 [  1   5  31  69  88  44  21   3   1   0   0]
 [  0   7  29  64  77  36  20   9   1   0   0]
 [  0   7  19  35  43  52  19   9   2   1   0]
 [  1   3   4   9  48  93 216 288 162  16   1]
 [  0   0   0   1  12  24  28  27  41  37   2]
 [  0   0   0   0   2  15  24  49 191 190  22]
 [  0   0   0   0   0   3   7  23 105 251  74]
 [  0   0   0   0   0   5  13  43 108 139  50]]
              precision    recall  f1-score   support
           0       0.87      0.16      0.27       676
           1       0.16      0.14      0.15       212
           2       0.13      0.15      0.14       262
           3       0.16      0.26      0.20       263
           4       0.16      0.32      0.21       243
           5       0.13      0.28      0.18       187
           6       0.54      0.26      0.35       841
           7       0.06      0.16      0.08       172
           8       0.31      0.39      0.35       493
           9       0.40      0.54      0.46       463
          10       0.34      0.14      0.20       358
    accuracy                           0.27      4170
   macro avg       0.30      0.25      0.24      4170
weighted avg       0.40      0.27      0.28      4170


Loaded NN model from file "/home/dzenan/miqa/miqa/learning/models/miqaT1-val4.pth"
val_confusion_matrix:
[[  0   1   6   2   7  15  21   6   3   7   1]
 [  0   0   0   0   1   1   1   0   0   0   0]
 [  0   0   0   0   2   0   0   1   0   1   0]
 [  0   0   0   0   0   1   0   0   0   0   0]
 [  0   0   0   1   0   2   1   0   0   0   0]
 [  0   0   0   0   0   1   0   0   0   0   0]
 [  0   0   0   7  11  43  84  91  50  12   0]
 [  0   1   0   0   1   5   4   6  18  15   2]
 [  0   0   0   0   1   7  28  22  71  93  10]
 [  0   0   0   2   2   4   6  16  71 104  27]
 [  0   0   0   0   2  15  39  31  35  38  13]]
Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior.
              precision    recall  f1-score   support
           0       0.00      0.00      0.00        69
           1       0.00      0.00      0.00         3
           2       0.00      0.00      0.00         4
           3       0.00      0.00      0.00         1
           4       0.00      0.00      0.00         4
           5       0.01      1.00      0.02         1
           6       0.46      0.28      0.35       298
           7       0.03      0.12      0.05        52
           8       0.29      0.31      0.30       232
           9       0.39      0.45      0.41       232
          10       0.25      0.08      0.12       173
    accuracy                           0.26      1069
   macro avg       0.13      0.20      0.11      1069
weighted avg       0.31      0.26      0.27      1069

train_confusion_matrix:
[[ 94 101 114 145 108  70  21   7   5   0   0]
 [  8  27  37  51  41  34  13   1   0   0   0]
 [  1  11  31  65  73  47  15   4   2   0   0]
 [  4   9  27  74  98  46  22   3   3   0   0]
 [  1   7  25  57  88  45  24   9   1   0   0]
 [  1   4  11  20  52  56  26   7   1   0   0]
 [  0   0   6  12  49 145 291 241  70   7   0]
 [  0   0   1   2  16  34  27  27  36  25   3]
 [  0   0   0   2   7  16  40  61 151 173  16]
 [  0   0   0   1   2  10  22  46 133 232  35]
 [  0   0   0   0   1  13  44  81  80 109  34]]
              precision    recall  f1-score   support
           0       0.86      0.14      0.24       665
           1       0.17      0.13      0.15       212
           2       0.12      0.12      0.12       249
           3       0.17      0.26      0.21       286
           4       0.16      0.34      0.22       257
           5       0.11      0.31      0.16       178
           6       0.53      0.35      0.43       821
           7       0.06      0.16      0.08       171
           8       0.31      0.32      0.32       466
           9       0.42      0.48      0.45       481
          10       0.39      0.09      0.15       362
    accuracy                           0.27      4148
   macro avg       0.30      0.25      0.23      4148
weighted avg       0.41      0.27      0.28      4148
