Add ThreadPoolMapper #1052
Conversation
Currently I am unhappy with a few things and am open to suggestions:
I also noticed two unrelated things that should be improved:
Good catch. It might be better to just name them
Yeah. IMHO,
I think one extra test to validate multiple
Yea, that would be an ideal case.
I think they are not exactly the same.
Thank you for catching that. Updated the issue. There has always been a project in my mind to do auto-detection on those uncovered test cases.
Alternatively we could modify lengthsetter to not only accept
Noob question:
_helper(lambda data: (data[0], data[1], data[0] + data[1]), fn_n1_def, [0, 1], 2)
The return value of the lambda function is always a tuple. but in
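To make the question above easier to follow, here is what that lambda does in isolation (a minimal sketch; the surrounding `_helper` machinery from the test suite is not reproduced here):

```python
# The lambda from the test call above: it reads the values at input
# columns 0 and 1 and appends their sum, always returning a tuple.
fn = lambda data: (data[0], data[1], data[0] + data[1])

print(fn([3, 4]))  # (3, 4, 7)
print(isinstance(fn([3, 4]), tuple))  # True
```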
76d2252 to c35ad5d
Actually, we can extend
I think it's executed. This test passes locally on my machine.
After pytorch/pytorch#95067 got merged, some tests are currently broken in torchvision (transforms.Identity() raises AttributeError: 'Identity' object has no attribute 'name').
I just realized this is almost the same DataPipe as prefetch, except we provide a function to be applied.
https://github.com/pytorch/data/blob/main/torchdata/datapipes/iter/util/prefetcher.py
This could be a base class for both prefetch and pin_memory, where prefetch takes a noop function while pin_memory takes a default pin_memory_fn and is not replicable. Even fullsync takes a similar approach. Would you please take a look at those to see if we can consolidate them?
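To make the consolidation idea concrete, here is a minimal, hypothetical sketch of the shared pattern in plain Python (the names `threadpool_map`, `max_workers`, and `scheduled_tasks` mirror the discussion, but this is not the PR's actual implementation). A prefetcher variant would pass a no-op `fn`, while a pin-memory variant would pass a `pin_memory_fn`:

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, Iterable, Iterator, TypeVar

T = TypeVar("T")
U = TypeVar("U")

def threadpool_map(
    source: Iterable[T],
    fn: Callable[[T], U],
    max_workers: int = 4,
    scheduled_tasks: int = 16,
) -> Iterator[U]:
    """Apply fn to items of source using a thread pool, preserving order.

    At most `scheduled_tasks` futures are kept in flight, so the source
    is consumed lazily instead of being scheduled all at once.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        it = iter(source)
        pending = []
        # Prime the buffer of in-flight tasks.
        for item in it:
            pending.append(pool.submit(fn, item))
            if len(pending) >= scheduled_tasks:
                break
        while pending:
            done = pending.pop(0)  # oldest future first, so order is kept
            # Schedule one more task before yielding to keep the pool busy.
            try:
                pending.append(pool.submit(fn, next(it)))
            except StopIteration:
                pass
            yield done.result()
```

Usage: `list(threadpool_map(range(5), lambda x: x * x))` returns the squares in input order, since results are yielded in submission order regardless of which worker finishes first.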
Are you sure?
Emm, you are right. Thanks for correcting me! Please let me know when the PR is ready to be reviewed. You can request a review via the re-request review button in the reviewer section.
I have made two more comments (see above), but in principle I think the PR is ready for review.
Thank you. I think the implementation and test look good now. My only remaining comment is to do more benchmarking so that we can give users suggestions on the settings for max_workers and scheduled_tasks.
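To support the benchmarking suggestion above, a simple harness like the following could be used to compare pool sizes (a hedged sketch using only the standard library; `io_task` is a hypothetical stand-in for a user's mapping function):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def benchmark(fn, items, max_workers):
    """Time mapping fn over items with a thread pool of the given size."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        results = list(pool.map(fn, items))
    return time.perf_counter() - start, results

# An I/O-bound stand-in task: sleeping releases the GIL, so more
# workers should reduce wall-clock time roughly proportionally.
def io_task(x):
    time.sleep(0.01)
    return x

for workers in (1, 2, 4):
    elapsed, _ = benchmark(io_task, range(20), max_workers=workers)
    print(f"max_workers={workers}: {elapsed:.3f}s")
```

For CPU-bound mapping functions the picture changes, since the GIL limits thread-level parallelism; that is exactly the kind of trade-off the benchmarking could quantify.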
Thank you so much! This is awesome!
@ejguan has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Fixes #1045
Changes