Tune upload perf, better cleanup logic #2532
Conversation
Codecov Report
@@            Coverage Diff             @@
##           master    #2532      +/-   ##
==========================================
- Coverage   43.75%   43.74%   -0.02%
==========================================
  Files         504      504
  Lines       24257    24266       +9
  Branches     3275     3276       +1
==========================================
  Hits        10614    10614
- Misses      12816    12825       +9
  Partials      827      827
I will test tomorrow and circle back.
I am currently testing this with a 6.9GB upload on Chrome / Mac OS / 2016 MacBook Pro.
- OK for separate PR: We should bump the warning size to, say, 10GB (I got a warning for being over 1GB).
- The UI thread stalled for me as I was typing in the name field (after dragging and dropping my files): about a 2.5 sec pause, but then it was OK.
- My uplink is 21.5 Mbps and I seem to be roughly saturating it, which is good. I'd say I'll finish 6.9GB in about 35 min total wall time.
- OK for separate PR: The progress appears to stall visually once the upload goes over 1GB, but only because the number changes more slowly :) We should either always show more decimal places of precision above 1GB and/or show indeterminate progress (see the sketch after this list).
- OK for separate PR: It would be really nice to have some concurrency knobs (if meaningful) in the catalog settings, so that different machines/users have some control and we can learn from the field how to optimize performance.
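A minimal sketch of the adaptive-precision idea from the progress bullet above, assuming a hypothetical `formatUploadProgress` helper; the names, units, and thresholds are illustrative, not from the actual catalog code:

```js
// Hypothetical helper: show more decimal places above 1 GB so the progress
// text keeps visibly changing on large uploads instead of looking stalled.
const GB = 1024 ** 3;
const MB = 1024 ** 2;

function formatUploadProgress(uploadedBytes, totalBytes) {
  const overGB = totalBytes >= GB;
  const unit = overGB ? GB : MB;
  const suffix = overGB ? 'GB' : 'MB';
  // Three decimals above 1 GB, one decimal below.
  const digits = overGB ? 3 : 1;
  const done = (uploadedBytes / unit).toFixed(digits);
  const total = (totalBytes / unit).toFixed(digits);
  return `${done} / ${total} ${suffix}`;
}

// e.g. formatUploadProgress(1.25 * GB, 6.9 * GB) -> "1.250 / 6.900 GB"
```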
@@ -148,6 +148,8 @@ function* handleInit({ resource, input, resolver }) {
 }

 function* cleanup() {
   // TODO: refactor cleanup logic, so that the cleanup action is only dispatched
   // when there's anything to cleanup (to avoid re-renders every 5 sec)
   while (true) {
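A minimal sketch of how the TODO above could be addressed with redux-saga, checking the store before dispatching; the selector and action creator below are hypothetical stand-ins, not the actual Quilt catalog code:

```js
// Sketch only: selectStaleEntries and cleanupAction are assumed names.
import { delay, put, select } from 'redux-saga/effects';

const CLEANUP_INTERVAL_MS = 5000;

// Hypothetical selector: pick entries older than some TTL out of the upload state.
const selectStaleEntries = (state) =>
  state.uploads.filter((u) => Date.now() - u.finishedAt > 60000);

// Hypothetical action creator for the cleanup action.
const cleanupAction = (entries) => ({ type: 'uploads/CLEANUP', payload: entries });

function* cleanup() {
  while (true) {
    yield delay(CLEANUP_INTERVAL_MS);
    // Only dispatch when there is actually something to clean up,
    // so connected components don't re-render every 5 seconds for no reason.
    const stale = yield select(selectStaleEntries);
    if (stale.length > 0) yield put(cleanupAction(stale));
  }
}
```

Gating the dispatch on a cheap `select` keeps the 5-second tick from emitting no-op actions, which is the re-render concern the TODO mentions.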
😬
It was processing the dropped files (synchronously), so the pause is kind of expected, though not very nice, I guess.
@akarve I'm not really sure how to approach this; ping me in Slack for a discussion if you feel this is important.
Description
TODO