Reject dataset uploads if organization storage quota is exceeded #6893
Conversation
@philippotto It is not simple for the backend to make the top-level error message very explanatory. Instead, the user currently gets this. If I remember correctly, there are already some instances where the frontend does exists-checks on the error message to show an additional message to the user; do you think something like that could work here too? Additionally, I noticed that the state of the upload page gets stuck if the request fails like this. Maybe it would be possible to (somewhat?) reset it in this case, but that is not super important, I guess.
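For illustration, a minimal sketch of that exists-check idea; `handleUploadError` is a hypothetical helper, the matched message fragment is an assumption, and the `Toast` shape is a stand-in rather than the actual webknossos API:

```typescript
// Hypothetical sketch: map the generic backend error onto a friendlier
// message when it looks like a storage-quota rejection. The matched
// substring and the Toast shape are assumptions, not the actual webknossos code.
type Toast = { error: (message: string) => void };

function handleUploadError(toast: Toast, backendMessage: string): void {
  if (backendMessage.includes("storage quota")) {
    // Quota case: show an actionable message instead of the raw backend text.
    toast.error(
      "Your organization's storage quota is exceeded. " +
        "Please free up space or contact support to increase the quota.",
    );
  } else {
    // All other errors: fall back to the backend message.
    toast.error(backendMessage);
  }
}
```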
@philippotto ping :) Could you estimate if this would be a lot of effort? If so, we can also merge this as is.
Sorry, I forgot about that! Looking into it right now :) |
LGTM! I couldn't get the storage reporter to run even after adapting both application.conf keys mentioned in the PR description. The organization page always reported 0.0 GB 🤔 Still, the normal dataset upload succeeded. Also, I investigated and fixed another issue which I encountered during testing.
Huh, this is weird. Could it be that the displayed number is rounded down to 0.0? What does `api/organizations` contain?
Good point! Indeed, the response contains […]
I retested and everything works as expected 🎉 |
I suppose you mean 0.0 TB? Otherwise, the rounding error explained in the following comments doesn't make sense to me 😅
In fact, the organization page didn't show any unit, because the upper limit was infinity. So at that point I assumed the unit might be GB, but it turned out to be TB. |
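For context, a small sketch of why a one-decimal TB display would show 0.0 for a lightly used organization; the conversion and formatting calls are assumptions about the UI, not the actual webknossos code:

```typescript
// Hypothetical reproduction of the rounding-down behavior, assuming the UI
// converts bytes to TB and formats with one decimal place.
const usedBytes = 20 * 1024 ** 3; // 20 GB of used storage

const usedTB = usedBytes / 1024 ** 4; // ≈ 0.0195 TB
console.log(`${usedTB.toFixed(1)} TB`); // prints "0.0 TB"

// The same value rendered in GB would have been unambiguous:
console.log(`${(usedBytes / 1024 ** 3).toFixed(1)} GB`); // prints "20.0 GB"
```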
Might be worth restructuring that display to avoid this kind of confusion for future users. It might make sense to use separate units for used vs. allowed storage, but that does not play well with the current diagram style 🤷 (see the sketch below)
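One way to realize the separate-units idea would be a formatter that picks, per value, the largest unit that still yields a non-zero one-decimal number. A hedged sketch; `formatStorage` is a hypothetical helper, not part of the webknossos codebase:

```typescript
// Hypothetical helper that chooses a per-value unit so small values are not
// rounded down to "0.0 TB".
const UNITS = ["B", "KB", "MB", "GB", "TB"] as const;

function formatStorage(bytes: number): string {
  let value = bytes;
  let unitIndex = 0;
  // Step up through the units while the value stays at 1024 or above.
  while (value >= 1024 && unitIndex < UNITS.length - 1) {
    value /= 1024;
    unitIndex++;
  }
  return `${value.toFixed(1)} ${UNITS[unitIndex]}`;
}

// Used and allowed storage can now carry different units:
console.log(formatStorage(20 * 1024 ** 3)); // "20.0 GB"
console.log(formatStorage(2 * 1024 ** 4)); // "2.0 TB"
```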
…come-toast * 'master' of github.com:scalableminds/webknossos:
- Log all details on deleting annotation layer (#6950)
- fix typo
- Rename demo instance to wkorg instance (#6941)
- Add LOD mesh support for frontend (#6909)
- Fix layout of view mode switcher and move it (#6949)
- VaultPath no longer extends nio.Path (#6942)
- Release 23.04.0 (#6945)
- Use new zip.js version to allow zip64 uploads (#6939)
- Implement viewing sharded neuroglancer precomputed datasets (#6920)
- Reject dataset uploads if organization storage quota is exceeded (#6893)
- Refactor deprecated antd Dropdown menus (#6898)
…wings * 'master' of github.com:scalableminds/webknossos:
- updates docs for docker installation (#6963)
- Fix misc stuff when viewing tasks/annotations of another user (#6957)
- Remove segment from list and add undo/redo for segments (#6944)
- Log all details on deleting annotation layer (#6950)
- fix typo
- Rename demo instance to wkorg instance (#6941)
- Add LOD mesh support for frontend (#6909)
- Fix layout of view mode switcher and move it (#6949)
- VaultPath no longer extends nio.Path (#6942)
- Release 23.04.0 (#6945)
- Use new zip.js version to allow zip64 uploads (#6939)
- Implement viewing sharded neuroglancer precomputed datasets (#6920)
- Reject dataset uploads if organization storage quota is exceeded (#6893)
- Refactor deprecated antd Dropdown menus (#6898)
Steps to test:
- Set `reportUsedStorage.enabled = true` for your datastore in `application.conf`. (Note that if your organization is very new, you may want to lower `rescanInterval` as well, so that you don't have to wait 24 hours; `initialData` sets `lastScanTime` to `creationTime`.)

Issues: