All the code for producing the final data is in this notebook.
The corpus comes from RuCor, along with a script to convert it to CoNLL format (modified from @lubakit's script). The following changes were needed:
- update to `MarkupSafe==1.1.1`
- comment out `scikit-learn==0.19.1` and `scipy==1.0.0`
- add `tensorflow==1.14.0`
- comment out all `assert` statements
- fix `get_document()` by adding the missing `stats` argument
- fix encoding for Russian by adding `encoding='utf-8'` and `ensure_ascii=False` (a short sketch of this fix follows the list)
- fix the build by removing the `-D_GLIBCXX_USE_CXX11_ABI=0` flag
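For reference, a minimal sketch of the encoding fix above; the file name and document contents are illustrative, not taken from the conversion script:

```python
import json

# Illustrative document; the real script emits converted RuCor data.
doc = {"doc_key": "nw", "text": "Мама мыла раму."}

# encoding='utf-8' on the file handle and ensure_ascii=False in json.dumps
# keep Cyrillic characters readable instead of \uXXXX escapes.
with open("output.jsonlines", "w", encoding="utf-8") as f:
    f.write(json.dumps(doc, ensure_ascii=False) + "\n")
```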
This repository contains code and models for the paper, BERT for Coreference Resolution: Baselines and Analysis. Additionally, we also include the coreference resolution model from the paper SpanBERT: Improving Pre-training by Representing and Predicting Spans, which is the current state of the art on OntoNotes (79.6 F1). Please refer to the SpanBERT repository for other tasks.
The model architecture itself is an extension of the e2e-coref model.
- Install python3 requirements: `pip install -r requirements.txt`
- `export data_dir=</path/to/data_dir>`
- `./setup_all.sh`: This builds the custom kernels.
Please download the following files to use the pretrained coreference models on your data. If you want to train your own coreference model, you can skip this step.
| Model | `<model_name>` for download | F1 (%) |
|---|---|---|
| BERT-base | bert_base | 73.9 |
| SpanBERT-base | spanbert_base | 77.7 |
| BERT-large | bert_large | 76.9 |
| SpanBERT-large | spanbert_large | 79.6 |
`./download_pretrained.sh <model_name>` (e.g., bert_base, bert_large, spanbert_base, spanbert_large; assumes that `$data_dir` is set) downloads BERT/SpanBERT models finetuned on OntoNotes. The original (non-finetuned) SpanBERT weights are available in the SpanBERT repository. You can use these models with `evaluate.py` and `predict.py` (see the section on Batched Prediction Instructions).
- Finetuning a BERT/SpanBERT large model on OntoNotes requires access to a 32GB GPU. You might be able to train the large model with a smaller `max_seq_length`, `max_training_sentences`, `ffnn_size`, and `model_heads = false` on a 16GB machine; this will almost certainly result in relatively poorer performance as measured on OntoNotes (see the config sketch after this list).
- Running/testing a large pretrained model is still possible on a 16GB GPU. You should be able to finetune the base models on smaller GPUs.
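For illustration only, such a reduced-memory experiment could be written as an `experiments.conf`-style override; the stanza name, the parent config, and the values below are hypothetical and untuned:

```
# Hypothetical stanza: inherit from your large training config and shrink it.
train_bert_large_lowmem = ${train_bert_large} {
  max_seq_length = 256
  max_training_sentences = 5
  ffnn_size = 1000
  model_heads = false
}
```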
This assumes access to OntoNotes 5.0.
`./setup_training.sh <ontonotes/path/ontonotes-release-5.0> $data_dir`. This preprocesses the OntoNotes corpus and downloads the original (not finetuned on OntoNotes) BERT models, which will be finetuned using `train.py`.
- Experiment configurations are found in `experiments.conf`. Choose an experiment that you would like to run, e.g. `bert_base`.
- Note that configs without the prefix `train_` load checkpoints already tuned on OntoNotes.
- Training: `GPU=0 python train.py <experiment>`
- Results are stored in the `log_root` directory (see `experiments.conf`) and can be viewed via TensorBoard.
- Evaluation: `GPU=0 python evaluate.py <experiment>`. This currently evaluates on the dev set.
- Create a file where each line is similar to `cased_config_vocab/trial.jsonlines` (make sure to strip the newlines so each line is well-formed JSON):
```
{
  "clusters": [], # leave this blank
  "doc_key": "nw", # key closest to your domain. "nw" is newswire. See the OntoNotes documentation.
  "sentences": [["[CLS]", "subword1", "##subword1", ".", "[SEP]"]], # list of BERT tokenized segments. Each segment should be less than the max_segment_len in your config
  "speakers": [["[SPL]", "-", "-", "-", "[SPL]"]], # speaker information for each subword in sentences
  "sentence_map": [0, 0, 0, 0, 0], # flat list where each element is the sentence index of the subwords
  "subtoken_map": [0, 0, 0, 1, 1] # flat list containing original word index for each subword. [CLS] and the first word share the same index
}
```
- `clusters` should be left empty and is only used for evaluation purposes.
- `doc_key` indicates the genre, which can be one of the following: `"bc", "bn", "mz", "nw", "pt", "tc", "wb"`.
- `speakers` indicates the speaker of each word. These can all be empty strings if there is only one known speaker.
- Run `GPU=0 python predict.py <experiment> <input_file> <output_file>`, which outputs the input jsonlines with an additional key `predicted_clusters` (a construction sketch for this input format follows this list).
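As a minimal sketch (not part of this repo), here is one way to build such a line from plain text. It assumes the `transformers` BertTokenizer produces the same subwords as the BERT vocabulary your config uses; verify against the repo's own tokenizer and `cased_config_vocab` before relying on it:

```python
import json
from transformers import BertTokenizer

# Hypothetical tokenizer choice; match it to the vocab in your config.
tokenizer = BertTokenizer.from_pretrained("bert-base-cased")

words = ["John", "called", "."]  # illustrative single-sentence document
subwords, sentence_map, subtoken_map = ["[CLS]"], [0], [0]
for idx, word in enumerate(words):
    for piece in tokenizer.tokenize(word):
        subwords.append(piece)
        sentence_map.append(0)    # all subwords here belong to sentence 0
        subtoken_map.append(idx)  # map each subword back to its source word
subwords.append("[SEP]")
sentence_map.append(0)
subtoken_map.append(len(words) - 1)  # [SEP] shares the last word's index

doc = {
    "clusters": [],
    "doc_key": "nw",
    "sentences": [subwords],  # one segment; keep each under max_segment_len
    "speakers": [["[SPL]"] + ["-"] * (len(subwords) - 2) + ["[SPL]"]],
    "sentence_map": sentence_map,
    "subtoken_map": subtoken_map,
}
with open("input.jsonlines", "w", encoding="utf-8") as f:
    f.write(json.dumps(doc) + "\n")
```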
- The current config runs the Independent model.
- When running on test, change the `eval_path` and `conll_eval_path` from dev to test.
- The `model_dir` inside the `log_root` contains `stdout.log`. Check the `max_f1` after 57000 steps, for example: `2019-06-12 12:43:11,926 - INFO - __main__ - [57000] evaL_f1=0.7694, max_f1=0.7697` (see the one-liner after this list for extracting it).
- You can also load PyTorch-based model files (ending in `.pt`) which share BERT's architecture. See `pytorch_to_tf.py` for details.
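One way to pull the latest `max_f1` out of the log, assuming the log format shown above:

```
grep max_f1 <log_root>/<model_dir>/stdout.log | tail -1
```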
- `log_root`: This is where all models and logs are stored. Check this before running anything.
- `bert_learning_rate`: The learning rate for the BERT parameters. Typically, `1e-5` and `2e-5` work well.
- `task_learning_rate`: The learning rate for the other parameters. Typically, LRs between `0.0001` and `0.0003` work well.
- `init_checkpoint`: The checkpoint file from which BERT parameters are initialized. Both TF and PyTorch checkpoints work as long as they use the same BERT architecture. Use `*ckpt` files for TF and `*pt` for PyTorch.
- `max_segment_len`: The maximum size of the BERT context window. Larger segments work better for SpanBERT, while BERT suffers a sharp drop at 512.
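As a hedged sketch, these settings can also be inspected programmatically, assuming `experiments.conf` is HOCON read with pyhocon (check the repo's loading code, e.g. `util.py`, for what it actually does) and that `data_dir` is exported so substitutions resolve:

```python
from pyhocon import ConfigFactory

# Parse the experiments file; ${data_dir}-style substitutions fall back
# to environment variables, so export data_dir first.
conf = ConfigFactory.parse_file("experiments.conf")

cfg = conf["bert_base"]  # assumes an experiment named bert_base exists
for key in ["log_root", "bert_learning_rate", "task_learning_rate",
            "init_checkpoint", "max_segment_len"]:
    print(key, "=", cfg.get(key))
```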
If you have access to a slurm GPU cluster, you could use the following set of commands for training.
- `python tune.py --generate_configs --data_dir <coref_data_dir>`: This generates multiple configs for tuning (BERT and task) learning rates, embedding models, and `max_segment_len`. This modifies `experiments.conf`. Use `--trial` to print to stdout instead. If you need to generate this from scratch, refer to `basic.conf`.
- `grep "\{best\}" experiments.conf | cut -d = -f 1 > torun.txt`: This creates a list of configs that can be used by the script to launch jobs. You can use a regexp to restrict the list of configs. For example, `grep "\{best\}" experiments.conf | grep "sl512*" | cut -d = -f 1 > torun.txt` will select configs with `max_segment_len = 512`.
- `python tune.py --data_dir <coref_data_dir> --run_jobs`: This launches jobs from `torun.txt` on the slurm cluster.
- If you like using Colab, check out Jonathan K. Kummerfeld's notebook.
- Some `g++` versions may not play nicely with this repo. If you get `tensorflow.python.framework.errors_impl.NotFoundError: ./coref_kernels.so: undefined symbol: _ZN10tensorflow12OpDefBuilder4AttrESs`, try removing the flag `-D_GLIBCXX_USE_CXX11_ABI=0` from `setup_all.sh`. Thanks to Naman Jain for the solution.
If you use the pretrained BERT-based coreference model (or this implementation), please cite the paper, BERT for Coreference Resolution: Baselines and Analysis.
```
@inproceedings{joshi2019coref,
    title={{BERT} for Coreference Resolution: Baselines and Analysis},
    author={Mandar Joshi and Omer Levy and Daniel S. Weld and Luke Zettlemoyer},
    year={2019},
    booktitle={Empirical Methods in Natural Language Processing (EMNLP)}
}
```
Additionally, if you use the pretrained SpanBERT coreference model, please cite the paper, SpanBERT: Improving Pre-training by Representing and Predicting Spans.
```
@article{joshi2019spanbert,
    title={{SpanBERT}: Improving Pre-training by Representing and Predicting Spans},
    author={Mandar Joshi and Danqi Chen and Yinhan Liu and Daniel S. Weld and Luke Zettlemoyer and Omer Levy},
    year={2019},
    journal={arXiv preprint arXiv:1907.10529}
}
```