minor change to avoid confusion in ch2-5 #310
Conversation
The documentation is not available anymore as the PR was closed or merged.
Thanks for this tweak @daspartho and welcome to the 🤗 community!
I've left a small suggestion and then we can merge this!
chapters/en/chapter2/5.mdx
@@ -87,7 +87,7 @@ InvalidArgumentError: Input to reshape is a tensor with 14 values, but the reque

 Oh no! Why did this fail? "We followed the steps from the pipeline in section 2.

-The problem is that we sent a single sequence to the model, whereas 🤗 Transformers models expect multiple sentences by default. Here we tried to do everything the tokenizer did behind the scenes when we applied it to a `sequence`, but if you look closely, you'll see that it didn't just convert the list of input IDs into a tensor, it added a dimension on top of it:
+The problem is that we sent a single sequence to the model, whereas 🤗 Transformers models expect multiple sentences by default. Here we tried to do everything the tokenizer did behind the scenes when we applied it to a `sequence`, but if you look closely, you'll see that the tokenizer didn't just convert the list of input IDs into a tensor, it added a dimension on top of it:
I think we can use this as an opportunity to split the long sentence :)
Suggested change:

-The problem is that we sent a single sequence to the model, whereas 🤗 Transformers models expect multiple sentences by default. Here we tried to do everything the tokenizer did behind the scenes when we applied it to a `sequence`, but if you look closely, you'll see that the tokenizer didn't just convert the list of input IDs into a tensor, it added a dimension on top of it:
+The problem is that we sent a single sequence to the model, whereas 🤗 Transformers models expect multiple sentences by default. Here we tried to do everything the tokenizer did behind the scenes when we applied it to a `sequence`. But if you look closely, you'll see that the tokenizer didn't just convert the list of input IDs into a tensor, it added a dimension on top of it:
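For readers following along, here is a minimal sketch of the behavior the edited sentence describes. It assumes the SST-2 DistilBERT checkpoint used elsewhere in the chapter; the example sentence and the printed shapes are illustrative, not taken from this PR.

```python
import torch
from transformers import AutoTokenizer

# Assumed checkpoint; any sequence-classification checkpoint behaves the same way.
checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

sequence = "I've been waiting for a HuggingFace course my whole life."

# Redoing the tokenizer's steps by hand gives a 1-D tensor of input IDs...
tokens = tokenizer.tokenize(sequence)
ids = tokenizer.convert_tokens_to_ids(tokens)
print(torch.tensor(ids).shape)  # e.g. torch.Size([14])

# ...whereas calling the tokenizer directly adds a dimension on top (a batch dimension).
print(tokenizer(sequence, return_tensors="pt")["input_ids"].shape)  # e.g. torch.Size([1, 16])
```

The extra leading dimension is the batch dimension the sentence refers to; the direct tokenizer call also inserts special tokens, which is why the sequence lengths differ in this sketch.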
@lewtun I've made the suggested changes :)

Thanks for iterating!
very small PR
Change
Replaced "it" with "the tokenizer" on one line in ch2-5
Motivation
I was going through the course (which is fantastic, by the way) and was a little confused by this one line; perhaps it's just me, but I think it's better to replace "it" with the thing it refers to, to avoid confusion.