| Model | KLUE-ynat |
|---|---|
| RoBERTaGCN | 87.25 |

| Model | KLUE-ynat |
|---|---|
| BERT | 86.31 |
| RoBERTa | 85.90 |
| BertGCN | 86.39 |
| RoBERTaGCN | 86.25 |
| BertGCN_TBGC | 86.99 |
| RoBERTaGCN_TBGC | 86.72 |
- Run klue_data_convert.py
- Run build_graph.py (a graph-construction sketch follows this list)
- Run robertagcn_klue.py
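As a rough sketch of what the graph-construction step does (a TextGCN-style graph with doc-word TF-IDF edges and word-word PMI edges, as used by BertGCN), the code below is illustrative only: the function and variable names are not taken from build_graph.py.

```python
# Minimal sketch of a TextGCN-style graph: doc-word TF-IDF edges and word-word PPMI edges.
# Illustrative only; not the repository's actual build_graph.py.
from collections import Counter
from itertools import combinations
from math import log

import scipy.sparse as sp
from sklearn.feature_extraction.text import TfidfVectorizer


def build_text_graph(docs, window_size=20):
    # Doc-word edges weighted by TF-IDF.
    vectorizer = TfidfVectorizer(tokenizer=str.split, lowercase=False)
    tfidf = vectorizer.fit_transform(docs)            # shape: (n_docs, n_words)
    n_docs, n_words = tfidf.shape

    # Word-word edges weighted by positive PMI over sliding windows.
    windows = []
    for doc in docs:
        tokens = doc.split()
        if len(tokens) <= window_size:
            windows.append(tokens)
        else:
            windows += [tokens[i:i + window_size]
                        for i in range(len(tokens) - window_size + 1)]

    vocab = vectorizer.vocabulary_                    # word -> column index
    word_count = Counter()                            # number of windows containing word i
    pair_count = Counter()                            # number of windows containing i and j
    for win in windows:
        ids = {vocab[w] for w in win if w in vocab}
        word_count.update(ids)
        pair_count.update(combinations(sorted(ids), 2))

    n_win = len(windows)
    rows, cols, vals = [], [], []
    for (i, j), cij in pair_count.items():
        pmi = log(cij * n_win / (word_count[i] * word_count[j]))
        if pmi > 0:                                   # keep positive PMI only
            rows += [i, j]
            cols += [j, i]
            vals += [pmi, pmi]
    word_adj = sp.coo_matrix((vals, (rows, cols)), shape=(n_words, n_words))

    # Assemble the (n_docs + n_words) x (n_docs + n_words) adjacency matrix,
    # with self-loops on document and word nodes.
    adj = sp.bmat([[sp.identity(n_docs), tfidf],
                   [tfidf.T, word_adj + sp.identity(n_words)]], format="csr")
    return adj, vectorizer
```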
dgl-cu113 == 0.9.1.post1
ignite == 1.1.0
python == 3.6.9
torch == 1.10.0+cu113
scikit-learn <= 0.24.2
transformers <= 4.18.0
numpy <= 1.19.5
networkx == 2.5.1
When constructing the adjacency matrix, there is a data leakage problem: the corpus-level statistics used for the edge weights (TF-IDF/PMI) are computed over the entire corpus, including the test documents. (The Papers with Code result does not seem to take this issue into account.)
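One way to reduce the leakage is to fit the corpus statistics on the training split only and merely transform the other splits. A minimal sketch, assuming the leakage comes from fitting IDF on the full corpus; the function name is illustrative and not from this repo:

```python
# Sketch: estimate IDF from the training documents only, then reuse it for the
# validation/test documents so their contents never influence the edge weights.
from sklearn.feature_extraction.text import TfidfVectorizer


def split_aware_tfidf(train_docs, eval_docs):
    vectorizer = TfidfVectorizer(tokenizer=str.split, lowercase=False)
    train_tfidf = vectorizer.fit_transform(train_docs)  # IDF fitted on train only
    eval_tfidf = vectorizer.transform(eval_docs)         # reuses the train IDF
    return train_tfidf, eval_tfidf, vectorizer
```

The same idea applies to the PMI window counts: collect them from the training documents only.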
- BertGCN: Transductive Text Classification by Combining GCN and BERT
- KLUE: Korean Language Understanding Evaluation
- Change the graph-building method
- Preprocessing
- Fine-tune RoBERTaGCN (see the interpolation sketch below)
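For reference, the core of fine-tuning a RoBERTaGCN model is the prediction interpolation from the BertGCN paper, Z = λ·Z_GCN + (1 − λ)·Z_RoBERTa. The sketch below shows only that step; the function name and the default λ value are illustrative, not taken from robertagcn_klue.py.

```python
# Sketch of the BertGCN/RoBERTaGCN prediction interpolation (lambda in the paper).
import torch
import torch.nn.functional as F


def interpolate_predictions(gcn_logits: torch.Tensor,
                            roberta_logits: torch.Tensor,
                            lam: float = 0.7) -> torch.Tensor:
    """Blend GCN and RoBERTa class distributions for the same document nodes."""
    z_gcn = F.softmax(gcn_logits, dim=-1)
    z_roberta = F.softmax(roberta_logits, dim=-1)
    # lam = 1.0 is pure GCN, lam = 0.0 is pure RoBERTa; training typically
    # applies an NLL loss on the log of the blended distribution.
    return lam * z_gcn + (1.0 - lam) * z_roberta
```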