Dynamic bucket allocation & miscellaneous volume improvements #6055
Conversation
Great changes, code LGTM 👍
During testing I encountered some issues. I'm not sure whether they are related to the temporary changes to ease testing, but I would think not. Maybe buckets get collected even though they shouldn't be?
Something seems to be fishy with the data loading: when in mag 1, some buckets are not loaded/shown; see for example the right side of the XY viewport here (loading had finished):
[screenshot of the XY viewport]
The loading pattern seems spotty and slower than before. Before:
[screenshot of the previous loading pattern]
Also, I wasn't able to get a tooltip for the save button yet. I tried hovering both before clicking the save button and afterwards. Is this very timing-dependent?
```js
this.collectBucket(this.buckets[this.bucketIterator]);
this.bucketCount--;
}
if (this.buckets.length > (2 * constants.MAXIMUM_BUCKET_COUNT_PER_LAYER) / 10) {
```
Suggested change:

```diff
- if (this.buckets.length > (2 * constants.MAXIMUM_BUCKET_COUNT_PER_LAYER) / 10) {
+ if (this.buckets.length > 2 * this.BUCKET_COUNT_SOFT_LIMIT) {
```
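Since the suggested line is arithmetically equivalent to the original, i.e. (2 * MAXIMUM_BUCKET_COUNT_PER_LAYER) / 10 = 2 * (MAXIMUM_BUCKET_COUNT_PER_LAYER / 10), the named constant presumably boils down to the following (a sketch of the implied definition; its actual declaration is not shown in this thread):

```js
// Implied by the suggested change: the soft limit is one tenth of the
// per-layer maximum. The declaration site itself is an assumption here.
this.BUCKET_COUNT_SOFT_LIMIT = constants.MAXIMUM_BUCKET_COUNT_PER_LAYER / 10;
```

Naming the threshold makes the condition self-explanatory instead of hiding the soft limit inside a magic expression.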
Thanks for testing and raising these oddities 👍 I guess I had gotten a bit too close to the code and didn't notice this myself.
I finally found the issue for this and fixed it in 5f2f7bf. The code which marked a bucket as active happened too late, which allowed the bucket collector to collect a bucket while allocating another one in consumeBucketsFromArrayBuffer. Due to the artificially low bucket limit, this issue became apparent and frequent, but I assume this also fixes the rare disappearing buckets I saw from time to time in prod 🎉
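To illustrate the race, here is a minimal sketch; apart from consumeBucketsFromArrayBuffer, all names are hypothetical and not taken from the actual code:

```js
// Hypothetical sketch of the race; not the actual webKnossos implementation.
function consumeBucketsFromArrayBuffer(buffer) {
  for (const bucketData of splitIntoBucketChunks(buffer)) {
    // Allocating a bucket can trigger the bucket collector once the
    // bucket limit is reached ...
    const bucket = allocateBucket();
    bucket.setData(bucketData);
    // ... and because the bucket from the previous iteration was not yet
    // marked as active at that point, the collector could reclaim it.
    // The fix marks a bucket as active early enough, i.e., before the
    // next allocation can kick off a collection.
    markBucketAsActive(bucket);
  }
}
```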
The spottiness should be solved by the above bug fix, too. The slowness is probably a consequence of the low bucket limit, which makes garbage collection more frequent (and this slows wk down). Due to the fix from above, wk now recognizes when it's necessary to exceed the bucket limit (otherwise, not everything would be rendered). Therefore, garbage collections and repeated re-downloads are avoided.

Could you please test again to see whether both issues are resolved now? (For me it looks good, but I didn't really notice a big speed difference before, either.) Afterwards, I'll lift the artificial limits and we can do another testing round (I expect that to be much more robust, though). I just deployed the newest version: https://dynamicbucketallocation.webknossos.xyz/
Yes, kind of. One needs a really big brush size (e.g., 300) and to draw quickly (2 seconds should be enough). Then, hovering over the save button will briefly show the tooltip until all buckets are compressed. Depending on how many buckets were labeled, the tooltip might show progress for several seconds.
> I finally found the issue for this and fixed it in 5f2f7bf. The code which marked a bucket as active happened too late, which allowed the bucket collector to collect a bucket while allocating another one in consumeBucketsFromArrayBuffer. Due to the artificially low bucket limit, this issue became apparent and frequent, but I assume this also fixes the rare disappearing buckets I saw from time to time in prod 🎉
Ah, nice find! Good thing you introduced these strict limits for testing, which made the issue much more apparent 👍
> Could you please test again to see whether both issues are resolved now? (For me it looks good, but I didn't really notice a big speed difference before, either.)
Both issues are solved for me, and I was also able to catch a quick glance of the save button tooltip 👍
One thing that happened during testing while I was brushing with a large brush size: the sagas crashed with "Maximum call stack size exceeded". Not sure whether that's known. I don't think it has high priority, since I brushed unrealistically fast using the maximum brush size, but I still wanted to mention it; stack trace attached.
Thanks for testing! I now cleaned up the debugging code and think this PR is ready to merge if you agree :)
Yes, I also noticed this error from time to time (only in unrealistic annotation scenarios, and already before this PR) and spent some time debugging it a while ago (unfortunately, without much success). Interestingly, your stack trace is way bigger than mine: I "only" had 200 function calls on the stack, whereas you have more than 10,000. This gave me the idea that redux-saga might do some sort of tail recursion which leads to this problem. I think I've got a fix for this, but I'd like to do it in a separate PR to unblock the current one. /edit: I opened #6074 for the above issue.
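The suspected mechanism, as a minimal sketch (the saga names and the action type are hypothetical, not taken from the webKnossos code): a saga that re-enters itself via `yield*` nests one generator frame per handled action, so resuming it eventually overflows the call stack, while an equivalent `while` loop keeps the stack flat.

```js
import { take } from "redux-saga/effects";

// Hypothetical illustration; not the actual webKnossos sagas.
function* recursiveWatcher() {
  yield take("PUSH_SAVE_QUEUE"); // hypothetical action type
  // ... handle the action ...
  yield* recursiveWatcher(); // each delegation nests another generator frame
}

// Equivalent watcher without stack growth:
function* loopingWatcher() {
  while (true) {
    yield take("PUSH_SAVE_QUEUE");
    // ... handle the action ...
  }
}
```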
LGTM, see my one small suggestion.
Let's do a quick round of testing once the CI has run through, and then this should be ready to be merged 👍
Sounds great! I just tested a bit on https://dynamicbucketallocation.webknossos.xyz. Feel free to do so, too, and let me know whether I should merge :)
I also tested a bit, LGTM 👍
- … WorkerPool abstraction for that); a sketch of such an abstraction follows at the end of this description
- PushQueue.push could starve when the user never does a break longer than 1 second (unrealistic, but when testing, one can easily run into this)
- … queue) and add some comments

URL of deployed dev instance (used for testing): https://dynamicbucketallocation.webknossos.xyz/
Steps to test:
Issues:
(Please delete unneeded items, merge only when none are left open)
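As referenced in the change list above, here is a minimal sketch of what a WorkerPool abstraction might look like. Only the name WorkerPool appears in this PR; everything else, including the promise-based worker wrapper, is an assumption for illustration:

```js
// Hypothetical sketch; not the actual webKnossos implementation. A fixed-size
// pool that spreads (compression) jobs across web workers round-robin.
class WorkerPool {
  constructor(createWorker, poolSize) {
    // `createWorker` is assumed to return a wrapper around a web worker that
    // exposes a promise-based `submit(job)` method.
    this.workers = Array.from({ length: poolSize }, createWorker);
    this.nextWorkerIndex = 0;
  }

  submit(job) {
    // Cycle through the workers so jobs are spread evenly instead of
    // queueing up behind a single busy worker.
    const worker = this.workers[this.nextWorkerIndex];
    this.nextWorkerIndex = (this.nextWorkerIndex + 1) % this.workers.length;
    return worker.submit(job);
  }
}
```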