
Offload gzip compression and bucket loading/decoding/serializing to web workers #3162

Merged
merged 13 commits into master from comlink on Sep 10, 2018

Conversation

philippotto
Member

@philippotto commented Sep 6, 2018

URL of deployed dev instance (used for testing):

Steps to test:

  • Test the components related to the web workers I created:
    • general bucket loading
    • 4-bit mode
    • saving (skeleton as well as volume tracings)

Issues:


Member

@daniel-wer left a comment


Awesome, I couldn't find any errors and the screenshot tests ran successfully as well! 🎉 It also "feels faster"™

What I would really like to do is run proper benchmarks, but I understand that this is hard. We do know that the gzipping and base64 operations are computationally expensive, though, so moving them off the main thread should definitely have a positive impact! I think this PR is good to merge (unless you can think of a nice benchmark that we could execute before) :)

@@ -0,0 +1,33 @@
# Web Worker Modules
Member


Wow, awesome that you took the time to write this explanation, extremely helpful and easy to understand! 🥇

@philippotto
Member Author

Awesome, I couldn't find any errors and the screenshot tests ran successfully as well! 🎉 It also "feels faster"™

Nice, I'm glad to hear that!

What I would really like to do is run proper benchmarks, but I understand that this is hard. We do know that the gzipping and base64 operations are computationally expensive, though, so moving them off the main thread should definitely have a positive impact! I think this PR is good to merge (unless you can think of a nice benchmark that we could execute before) :)

Yeah, proper benchmarks would be really cool, but I'm afraid they're very hard to achieve. In the case of bucket loading, this PR brings an actual improvement in wall-clock time, but usually web workers only account for better responsiveness, and time-measuring benchmarks can't say much about that.

Nevertheless, we could have some sort of integration benchmark which loads the tracing view and measures the time until the view is complete. However, I'm not sure whether puppeteer and co. are representative of real-world performance 🤔


I pushed another commit which uses Flow to enforce that web workers are always properly instantiated after import. That should give us some extra safety :)

@@ -1,9 +1,11 @@
// @flow
Member Author


Maybe we should add a linting rule for that, too ^^

@philippotto merged commit 42b61c3 into master Sep 10, 2018
@philippotto deleted the comlink branch January 24, 2019 08:43
Successfully merging this pull request may close these issues.

Investigate whether bucket loading should/could be done in a webworker