RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.
#31
Answering your question: no. To begin with, we are not using one-hot encoding; based on what I've read, using embeddings is better, which is what is done in the video. It happens when building the vocabulary: the char variable, together with the stoi and itos variables (if I remember their names correctly), is basically doing the embedding part. After that, we run the iteration, first through the forward pass and then through the backward pass.
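For illustration, here is a rough sketch of that vocabulary / encoding step (variable names follow the video as far as I recall; the word list here is just a placeholder). It also shows why the one-hot encoding followed by the matrix multiply behaves like an embedding-style lookup: it is equivalent to indexing rows of W.

import torch
import torch.nn.functional as F

# Placeholder word list; in the video the vocabulary is built from the names dataset.
words = ["emma", "olivia", "ava"]

# Build the character vocabulary: stoi maps each character to an integer index,
# itos maps the index back to the character ('.' acts as the start/end token).
chars = sorted(set("".join(words)))
stoi = {s: i + 1 for i, s in enumerate(chars)}
stoi["."] = 0
itos = {i: s for s, i in stoi.items()}

# Example integer-encoded inputs and a weight matrix.
xs = torch.tensor([stoi[c] for c in "emma"])
W = torch.randn((len(stoi), len(stoi)), requires_grad=True)

# One-hot encoding followed by a matrix multiply ...
xenc = F.one_hot(xs, num_classes=len(stoi)).float()
logits_onehot = xenc @ W

# ... is equivalent to simply indexing rows of W, i.e. an embedding lookup.
logits_lookup = W[xs]
print(torch.allclose(logits_onehot, logits_lookup))  # True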
I am getting the error below while running the backward pass:
RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.
Code
# Forward pass
logits = xenc @ W
counts = logits.exp()
prob = counts / counts.sum(1, keepdim=True)
loss = -prob[torch.arange(5), ys].log().mean()
print(loss.item())

# Backward pass
W.grad = None
loss.backward()

# Update the weights
W.data += -0.1 * W.grad
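For what it's worth, this RuntimeError usually means loss.backward() was called on a graph whose saved tensors have already been freed, for example by re-running only the backward/update cells without re-running the forward pass. Below is a rough sketch of a training loop that recomputes the forward pass on every iteration; it assumes xenc, ys, and W are already defined as in the snippet above.

# Sketch: xenc (one-hot inputs), ys (target indices), and W (weights with
# requires_grad=True) are assumed to exist already, as in the snippet above.
for k in range(100):
    # Forward pass: recomputed each iteration, so a fresh graph is available
    # for the backward pass below.
    logits = xenc @ W
    counts = logits.exp()
    prob = counts / counts.sum(1, keepdim=True)
    loss = -prob[torch.arange(xenc.shape[0]), ys].log().mean()

    # Backward pass.
    W.grad = None
    loss.backward()

    # Gradient descent update.
    W.data += -0.1 * W.grad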
Query:
Why are we performing the one-hot encoding of the input every time while iterating the forward pass?