# -*- coding: utf-8 -*-
r"""
Advanced: Making Dynamic Decisions and the Bi-LSTM CRF
======================================================

Dynamic versus Static Deep Learning Toolkits
--------------------------------------------

PyTorch is a *dynamic* neural network kit. Another example of a dynamic
kit is `Dynet <https://github.com/clab/dynet>`__ (I mention this because
working with PyTorch and Dynet is similar; if you see an example in
Dynet, it will probably help you implement it in PyTorch). The opposite
is the *static* toolkit, which includes Theano, Keras, TensorFlow, etc.
The core difference is the following:

* In a static toolkit, you define a computation graph once, compile it,
  and then stream instances to it.
* In a dynamic toolkit, you define a computation graph *for each
  instance*. It is never compiled and is executed on the fly.

Without a lot of experience, it is difficult to appreciate the
difference. One example is to suppose we want to build a deep
constituent parser. Suppose our model involves roughly the following
steps:

* We build the tree bottom up
* Tag the root nodes (the words of the sentence)
* From there, use a neural network and the embeddings of the words to
  find combinations that form constituents. Whenever you form a new
  constituent, use some sort of technique to get an embedding of the
  constituent. In this case, our network architecture will depend
  completely on the input sentence. In the sentence "The green cat
  scratched the wall", at some point in the model, we will want to
  combine the span :math:`(i,j,r) = (1, 3, \text{NP})` (that is, an NP
  constituent spans word 1 to word 3, in this case "The green cat").

However, another sentence might be "Somewhere, the big fat cat scratched
the wall". In this sentence, we will want to form the constituent
:math:`(2, 4, \text{NP})` at some point. The constituents we will want
to form will depend on the instance. If we just compile the computation
graph once, as in a static toolkit, it will be exceptionally difficult
or impossible to program this logic. In a dynamic toolkit, though, there
isn't just one pre-defined computation graph. There can be a new
computation graph for each instance, so this problem goes away.

Dynamic toolkits also have the advantage of being easier to debug, and
the code more closely resembles the host language (by that I mean that
PyTorch and Dynet look more like actual Python code than Keras or
Theano).

Bi-LSTM Conditional Random Field Discussion
-------------------------------------------

For this section, we will see a full, complicated example of a Bi-LSTM
Conditional Random Field for named-entity recognition. The LSTM tagger
above is typically sufficient for part-of-speech tagging, but a sequence
model like the CRF is really essential for strong performance on NER.
Familiarity with CRFs is assumed. Although the name sounds scary, the
model is simply a CRF in which an LSTM provides the features. This is an
advanced model, though, far more complicated than any earlier model in
this tutorial. If you want to skip it, that is fine. To see if you're
ready, see if you can:

- Write the recurrence for the Viterbi variable at step i for tag k.
- Modify the above recurrence to compute the forward variables instead.
- Modify again the above recurrence to compute the forward variables in
  log-space (hint: log-sum-exp).
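
For reference, here is one way these recurrences can be written (a
sketch only, using the notation introduced below: :math:`h_i[k]` is the
emission score for tag :math:`k` at step :math:`i`, and
:math:`\textbf{P}_{k,j}` is the score of transitioning to tag :math:`k`
from tag :math:`j`):

.. math:: \delta_i(k) = h_i[k] + \max_{j} \left( \delta_{i-1}(j) + \textbf{P}_{k,j} \right)

.. math:: \alpha_i(k) = h_i[k] + \log \sum_{j} \exp\left( \alpha_{i-1}(j) + \textbf{P}_{k,j} \right)

where :math:`\delta` is the Viterbi variable and :math:`\alpha` is the
log-space forward variable.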

If you can do those three things, you should be able to understand the
code below. Recall that the CRF computes a conditional probability. Let
:math:`y` be a tag sequence and :math:`x` an input sequence of words.
Then we compute

.. math:: P(y|x) = \frac{\exp(\text{Score}(x, y))}{\sum_{y'} \exp(\text{Score}(x, y'))}

Where the score is determined by defining some log potentials
:math:`\log \psi_i(x,y)` such that

.. math:: \text{Score}(x,y) = \sum_i \log \psi_i(x,y)

To make the partition function tractable, the potentials must look only
at local features.

In the Bi-LSTM CRF, we define two kinds of potentials: emission and
transition. The emission potential for the word at index :math:`i` comes
from the hidden state of the Bi-LSTM at timestep :math:`i`. The
transition scores are stored in a :math:`|T| \times |T|` matrix
:math:`\textbf{P}`, where :math:`T` is the tag set. In my
implementation, :math:`\textbf{P}_{j,k}` is the score of transitioning
to tag :math:`j` from tag :math:`k`. So:

.. math:: \text{Score}(x,y) = \sum_i \log \psi_\text{EMIT}(y_i \rightarrow x_i) + \log \psi_\text{TRANS}(y_{i-1} \rightarrow y_i)

.. math:: = \sum_i h_i[y_i] + \textbf{P}_{y_i, y_{i-1}}

where in this second expression, we think of the tags as being assigned
unique non-negative indices.
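
For concreteness, here is a small worked case (written to match the
START/STOP handling used in the implementation below): for a two-word
sentence with tags :math:`(y_1, y_2)`, the score unrolls to

.. math:: \text{Score}(x, y) = \textbf{P}_{y_1, \text{START}} + h_1[y_1] + \textbf{P}_{y_2, y_1} + h_2[y_2] + \textbf{P}_{\text{STOP}, y_2}

which is exactly the sum that ``_score_sentence`` below accumulates, one
emission plus one transition per step, plus the final STOP transition.
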
If the above discussion was too brief, you can check out
`this <http://www.cs.columbia.edu/%7Emcollins/crf.pdf>`__ write up from
Michael Collins on CRFs.

Implementation Notes
--------------------

The example below implements the forward algorithm in log space to
compute the partition function, and the Viterbi algorithm to decode.
Backpropagation will compute the gradients automatically for us. We
don't have to do anything by hand.

The implementation is not optimized. If you understand what is going on,
you'll probably quickly see that iterating over the next tag in the
forward algorithm could be done in one big operation. I wanted the code
to be more readable. If you make that change, you could probably use
this tagger for real tasks.
"""
# Author: Robert Guthrie

import sklearn
import torch
import torch.autograd as autograd
import torch.nn as nn
import torch.optim as optim
import pickle
from tqdm import tqdm
import numpy as np
import warnings

warnings.filterwarnings('ignore')

torch.manual_seed(1)
# Create all new tensors on the GPU by default (requires a CUDA device).
torch.set_default_tensor_type('torch.cuda.FloatTensor')
#####################################################################
# Helper functions to make the code more readable.


def argmax(vec):
    # return the argmax as a python int
    _, idx = torch.max(vec, 1)
    return idx.item()


# def prepare_sequence(seq, to_ix):
#     idxs = [to_ix[w] for w in seq]
#     return torch.tensor(idxs, dtype=torch.long)


def prepare_sequence(seq, to_ix):
    idxs = [to_ix[w] if w in to_ix else to_ix['<UNK>'] for w in seq]
    return torch.tensor(idxs, dtype=torch.long)


# Compute log sum exp in a numerically stable way for the forward algorithm
def log_sum_exp(vec):
    max_score = vec[0, argmax(vec)]
    max_score_broadcast = max_score.view(1, -1).expand(1, vec.size()[1])
    return max_score + \
        torch.log(torch.sum(torch.exp(vec - max_score_broadcast)))
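

# Note (a small aside, not used below): for a row vector ``vec`` of shape
# (1, N), log_sum_exp(vec) agrees with torch.logsumexp(vec, dim=1); for
# example, both give log(2) ~= 0.6931 for torch.tensor([[0., 0.]]), and both
# stay finite for large-magnitude inputs where a naive log(sum(exp(vec)))
# would overflow.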
#####################################################################
# Create model


class BiLSTM_CRF(nn.Module):

    def __init__(self, vocab_size, tag_to_ix, embedding_dim, hidden_dim):
        super(BiLSTM_CRF, self).__init__()
        self.embedding_dim = embedding_dim
        self.hidden_dim = hidden_dim
        self.vocab_size = vocab_size
        self.tag_to_ix = tag_to_ix
        self.tagset_size = len(tag_to_ix)

        self.word_embeds = nn.Embedding(vocab_size, embedding_dim)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim // 2,
                            num_layers=1, bidirectional=True)

        # Maps the output of the LSTM into tag space.
        self.hidden2tag = nn.Linear(hidden_dim, self.tagset_size)

        # Matrix of transition parameters. Entry i,j is the score of
        # transitioning *to* i *from* j.
        self.transitions = nn.Parameter(
            torch.randn(self.tagset_size, self.tagset_size))

        # These two statements enforce the constraint that we never transfer
        # to the start tag and we never transfer from the stop tag
        self.transitions.data[tag_to_ix[START_TAG], :] = -10000
        self.transitions.data[:, tag_to_ix[STOP_TAG]] = -10000

        self.hidden = self.init_hidden()

    def init_hidden(self):
        return (torch.randn(2, 1, self.hidden_dim // 2),
                torch.randn(2, 1, self.hidden_dim // 2))

    def _forward_alg(self, feats):
        # Do the forward algorithm to compute the partition function
        init_alphas = torch.full((1, self.tagset_size), -10000.)
        # START_TAG has all of the score.
        init_alphas[0][self.tag_to_ix[START_TAG]] = 0.

        # Wrap in a variable so that we will get automatic backprop
        forward_var = init_alphas

        # Iterate through the sentence
        for feat in feats:
            alphas_t = []  # The forward tensors at this timestep
            for next_tag in range(self.tagset_size):
                # broadcast the emission score: it is the same regardless of
                # the previous tag
                emit_score = feat[next_tag].view(
                    1, -1).expand(1, self.tagset_size)
                # the ith entry of trans_score is the score of transitioning to
                # next_tag from i
                trans_score = self.transitions[next_tag].view(1, -1)
                # The ith entry of next_tag_var is the value for the
                # edge (i -> next_tag) before we do log-sum-exp
                next_tag_var = forward_var + trans_score + emit_score
                # The forward variable for this tag is log-sum-exp of all the
                # scores.
                alphas_t.append(log_sum_exp(next_tag_var).view(1))
            forward_var = torch.cat(alphas_t).view(1, -1)
        terminal_var = forward_var + self.transitions[self.tag_to_ix[STOP_TAG]]
        alpha = log_sum_exp(terminal_var)
        return alpha
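
    # A minimal sketch of the "one big operation" variant mentioned in the
    # implementation notes above (an optional addition, not called anywhere
    # below): all next_tag values are handled at once with broadcasting and
    # torch.logsumexp instead of the inner Python loop.
    def _forward_alg_vectorized(self, feats):
        init_alphas = torch.full((1, self.tagset_size), -10000.)
        init_alphas[0][self.tag_to_ix[START_TAG]] = 0.
        forward_var = init_alphas  # shape (1, tagset_size)
        for feat in feats:
            # scores[next_tag, prev_tag] = forward_var[prev_tag]
            #     + transitions[next_tag, prev_tag] + emission[next_tag]
            scores = forward_var + self.transitions + feat.view(-1, 1)
            forward_var = torch.logsumexp(scores, dim=1).view(1, -1)
        terminal_var = forward_var + self.transitions[self.tag_to_ix[STOP_TAG]]
        return torch.logsumexp(terminal_var, dim=1).squeeze()
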
    def _get_lstm_features(self, sentence):
        self.hidden = self.init_hidden()
        embeds = self.word_embeds(sentence).view(len(sentence), 1, -1)
        lstm_out, self.hidden = self.lstm(embeds, self.hidden)
        lstm_out = lstm_out.view(len(sentence), self.hidden_dim)
        lstm_feats = self.hidden2tag(lstm_out)
        return lstm_feats

    def _score_sentence(self, feats, tags):
        # Gives the score of a provided tag sequence
        score = torch.zeros(1)
        tags = torch.cat([torch.tensor([self.tag_to_ix[START_TAG]], dtype=torch.long), tags])
        for i, feat in enumerate(feats):
            score = score + \
                self.transitions[tags[i + 1], tags[i]] + feat[tags[i + 1]]
        score = score + self.transitions[self.tag_to_ix[STOP_TAG], tags[-1]]
        return score
    def _viterbi_decode(self, feats):
        backpointers = []

        # Initialize the viterbi variables in log space
        init_vvars = torch.full((1, self.tagset_size), -10000.)
        init_vvars[0][self.tag_to_ix[START_TAG]] = 0

        # forward_var at step i holds the viterbi variables for step i-1
        forward_var = init_vvars
        for feat in feats:
            bptrs_t = []  # holds the backpointers for this step
            viterbivars_t = []  # holds the viterbi variables for this step

            for next_tag in range(self.tagset_size):
                # next_tag_var[i] holds the viterbi variable for tag i at the
                # previous step, plus the score of transitioning
                # from tag i to next_tag.
                # We don't include the emission scores here because the max
                # does not depend on them (we add them in below)
                next_tag_var = forward_var + self.transitions[next_tag]
                best_tag_id = argmax(next_tag_var)
                bptrs_t.append(best_tag_id)
                viterbivars_t.append(next_tag_var[0][best_tag_id].view(1))
            # Now add in the emission scores, and assign forward_var to the set
            # of viterbi variables we just computed
            forward_var = (torch.cat(viterbivars_t) + feat).view(1, -1)
            backpointers.append(bptrs_t)

        # Transition to STOP_TAG
        terminal_var = forward_var + self.transitions[self.tag_to_ix[STOP_TAG]]
        best_tag_id = argmax(terminal_var)
        path_score = terminal_var[0][best_tag_id]

        # Follow the back pointers to decode the best path.
        best_path = [best_tag_id]
        for bptrs_t in reversed(backpointers):
            best_tag_id = bptrs_t[best_tag_id]
            best_path.append(best_tag_id)
        # Pop off the start tag (we don't want to return that to the caller)
        start = best_path.pop()
        assert start == self.tag_to_ix[START_TAG]  # Sanity check
        best_path.reverse()
        return path_score, best_path
    def neg_log_likelihood(self, sentence, tags):
        feats = self._get_lstm_features(sentence)
        forward_score = self._forward_alg(feats)
        gold_score = self._score_sentence(feats, tags)
        return forward_score - gold_score

    def forward(self, sentence):  # don't confuse this with _forward_alg above.
        # Get the emission scores from the BiLSTM
        lstm_feats = self._get_lstm_features(sentence)

        # Find the best path, given the features.
        score, tag_seq = self._viterbi_decode(lstm_feats)
        return score, tag_seq
#####################################################################
# Run training

START_TAG = "<START>"
STOP_TAG = "<STOP>"
EMBEDDING_DIM = 256  # 32 #100
HIDDEN_DIM = 128  # 16 #300

# training_data = pickle.load(open("Important2.pk", "rb"))
training_data, A, Xtest, Ytest, Atest = pickle.load(open("FromSahir'sCode.pk", "rb"))

word_to_ix = {}
'''
# sanity code
for sentence, tags in training_data:
    print(sentence, tags)
'''
for sentence, tags in training_data:
    for word in sentence:
        if word not in word_to_ix:
            word_to_ix[word] = len(word_to_ix)
word_to_ix['<UNK>'] = len(word_to_ix)

# tag_to_ix = {"B": 0, "I": 1, "O": 2, START_TAG: 3, STOP_TAG: 4}
tag_to_ix = {"O": 0, "I": 1, START_TAG: 2, STOP_TAG: 3}
# tag_to_ix = {"O": 0, "I": 1}

# torch.cuda.set_device(0)
# device = torch.device("cuda:0")
device = torch.device('cuda')

model = BiLSTM_CRF(len(word_to_ix), tag_to_ix, EMBEDDING_DIM, HIDDEN_DIM)
model.to(device)
# model = nn.DataParallel(model)

# optimizer = optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4)
# optimizer = optim.Adam(model.parameters(), lr=0.01, weight_decay=1e-4)
optimizer = optim.Adam(model.parameters(), lr=0.0001)

# Check predictions before training
with torch.no_grad():
    precheck_sent = prepare_sequence(training_data[0][0], word_to_ix)
    precheck_tags = torch.tensor([tag_to_ix[t] for t in training_data[0][1]], dtype=torch.long)
    print('Before training', model(precheck_sent))
def unique(list1):
    # initialize a null list
    unique_list = []
    # traverse all elements
    for x in list1:
        # check if it exists in unique_list or not
        if x not in unique_list:
            unique_list.append(x)
    # return the de-duplicated list
    return unique_list


# Make sure prepare_sequence from earlier in the LSTM section is loaded
from sklearn.metrics import classification_report
import random

random.seed(0)

BATCH_SIZE = 16
num_steps = 0
model.zero_grad()
for epoch in range(50):  # the original tutorial ran 300 epochs on toy data
    random.shuffle(training_data)
    train_loss = 0
    model.train()
    for sentence, tags in tqdm(training_data):
        num_steps += 1
        # Step 1. Get our inputs ready for the network, that is,
        # turn them into Tensors of word indices.
        sentence_in = prepare_sequence(sentence, word_to_ix)
        targets = torch.tensor([tag_to_ix[t] for t in tags], dtype=torch.long)

        # Step 2. Run our forward pass and compute the loss.
        loss = model.neg_log_likelihood(sentence_in, targets)
        train_loss += loss.item()

        # Step 3. Compute the gradients, and update the parameters every
        # BATCH_SIZE sentences (gradient accumulation instead of mini-batching).
        loss.backward()
        if num_steps % BATCH_SIZE == 0:
            optimizer.step()
            model.zero_grad()

    print("For run:", epoch)
    print(f"Training loss is: {train_loss/len(training_data):.4f}")
    predictions, true_labels = [], []
    model.eval()
    with torch.no_grad():
        for i in range(0, len(Xtest)):
            precheck_sent = prepare_sequence(Xtest[i], word_to_ix)
            tags_predicted = model(precheck_sent)
            # print(tags_predicted[1])
            # print(type(tags_predicted[1]))
            targets = torch.tensor([tag_to_ix[t] for t in Ytest[i]], dtype=torch.long)
            targets = targets.tolist()
            # print(targets)
            # print(type(targets))
            predictions.append(tags_predicted[1])
            true_labels.append(targets)

    predictions_flat = []
    for sublist in predictions:
        for item in sublist:
            predictions_flat.append(item)
    true_labels_flat = []
    for sublist in true_labels:
        for item in sublist:
            true_labels_flat.append(item)

    print("Printing predictions and true labels")
    # With tag_to_ix = {"O": 0, "I": 1, ...}, index 0 is the O tag and index 1
    # is the I tag, so the class names must be listed in that order.
    print(classification_report(true_labels_flat, predictions_flat, target_names=['class O', 'class I']))
# print(predictions_flat)
# true_labels = np.array(true_labels).flatten()
# # print(true_labels_flat)
# Check predictions after training
# inputs = prepare_sequence(Xtest[i], word_to_ix)
from sklearn.metrics import f1_score
# # # #See what the scores are after training
# predictions, true_labels = [], []
# with torch.no_grad():
# # for i in range(len(Xtest)):
# for i in range(10):
# inputs = prepare_sequence(Xtest[i], word_to_ix)
# # targets = prepare_sequence(Ytest[i], tag_to_ix)
#
# # sentence_in = prepare_sequence(sentence, word_to_ix)
# targets = torch.tensor([tag_to_ix[t] for t in Ytest[i]], dtype=torch.long)
#
# tag_scores = model(inputs)
# np.array(tag_scores.detach.cpu())
#
#
# # print(type(tag_scores[1]))
# # print(type(targets.detach().cpu().numpy()))
# # predictions.append(tag_scores[1].detach().numpy())
# #
# # # predictions.append(list(np.argmax(tag_scores[1].numpy(), axis=1)))
# # true_labels.append(list(targets.detach().cpu().numpy()))
#
# for i in range(0,len(Xtest)):
# print(Xtest[i],Ytest[i],predictions[i])
# if(len(predictions)==len(true_labels)):
# print("yes")
# else:
# print("no")
# with open("prediction.txt", 'w') as f:
# for i in range(len(Xtest)):
# f.write(str(test_list[i][0]))
# f.write("\n")
# j = 0
# for word in test_list[i][2]:
# f.write("{} {} {}".format(word, 'O', str(predictions[i][j])))
# f.write("\n")
# j += 1
# f.write("\n")
# # print("F1-Score: {}".format(f1_score(valid_tags, pred_tags)))
######################################################################
# Exercise: A new loss function for discriminative tagging
# --------------------------------------------------------
#
# It wasn't really necessary for us to create a computation graph when
# doing decoding, since we do not backpropagate from the Viterbi path
# score. Since we have it anyway, try training the tagger where the loss
# function is the difference between the Viterbi path score and the score
# of the gold-standard path. It should be clear that this function is
# non-negative and 0 when the predicted tag sequence is the correct tag
# sequence. This is essentially *structured perceptron*.
#
# This modification should be short, since Viterbi and score\_sentence are
# already implemented. This is an example of the shape of the computation
# graph *depending on the training instance*. Although I haven't tried
# implementing this in a static toolkit, I imagine that it is possible but
# much less straightforward.
#
# Pick up some real data and do a comparison!
#
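

######################################################################
# The function below is one possible way to set up the exercise above (an
# illustration added here, not part of the original tutorial): a
# structured-perceptron-style loss that reuses the pieces already defined
# on the model. It is non-negative and zero when the decoded path is the
# gold path.


def structured_perceptron_loss(model, sentence_in, targets):
    feats = model._get_lstm_features(sentence_in)
    viterbi_score, _ = model._viterbi_decode(feats)
    gold_score = model._score_sentence(feats, targets)
    # max-over-paths score minus gold score; gradients flow through both terms
    return viterbi_score - gold_score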