Download as archive produces corrupt archive if over limit #6555

Closed
wkloucek opened this issue Jun 19, 2023 · 7 comments · Fixed by #6590

@wkloucek (Contributor)

Describe the bug

The default archive max size is set to 1 GiB. If one downloads a folder as an archive that is bigger than that, the archive is capped at that size and is corrupt / cannot be opened.

Steps to reproduce

Steps to reproduce the behavior:

  1. Upload files to a folder such that they produce an archive > 1 GiB
  2. Download the folder as an archive

Expected behavior

See an error message stating that the archiving process failed because of the size limit.
Have no archive downloaded.

Actual behavior

The archive downloads happily without any error message, but it cannot be opened.

Setup

oCIS 3.1.0-next.2 in Kubernetes with the oCIS Helm Chart

Additional context

@kulmann (Contributor) commented Jun 19, 2023

web master had a change so that the archive download button will be disabled (and the limitation explained via a tooltip). But I don't think that's sufficient. The archive should at least not be corrupted... or be rejected in the first place.

@wkloucek (Contributor, Author)

> web master had a change so that the archive download button will be disabled (and the limitation explained via a tooltip). But I don't think that's sufficient. The archive should at least not be corrupted... or be rejected in the first place.

How can Web do that? My understanding is that this is the maximum size of the archive, so AFTER compression.

@kulmann (Contributor) commented Jun 19, 2023

> > web master had a change so that the archive download button will be disabled (and the limitation explained via a tooltip). But I don't think that's sufficient. The archive should at least not be corrupted... or be rejected in the first place.
>
> How can Web do that? My understanding is that this is the maximum size of the archive, so AFTER compression.

It's announced to clients in a capability. If that's not the uncompressed size, I don't know why it would ever be announced as a capability.
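
For reference, the limit in question is the archiver's max_size announced through the OCS capabilities endpoint. A minimal sketch of a client reading it; the capability's exact shape (`files.archivers[].max_size`) and the example host are assumptions, not taken from this thread:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// capabilitiesResponse models only the fields we need. The nesting of the
// archiver capability is an assumption about the oCIS payload.
type capabilitiesResponse struct {
	Ocs struct {
		Data struct {
			Capabilities struct {
				Files struct {
					Archivers []struct {
						Enabled bool        `json:"enabled"`
						MaxSize json.Number `json:"max_size"` // limit in bytes
					} `json:"archivers"`
				} `json:"files"`
			} `json:"capabilities"`
		} `json:"data"`
	} `json:"ocs"`
}

func main() {
	// Hypothetical host; the path is the standard OCS capabilities endpoint.
	resp, err := http.Get("https://ocis.example.com/ocs/v1.php/cloud/capabilities?format=json")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var caps capabilitiesResponse
	if err := json.NewDecoder(resp.Body).Decode(&caps); err != nil {
		panic(err)
	}
	for _, a := range caps.Ocs.Data.Capabilities.Files.Archivers {
		fmt.Printf("archiver enabled=%v max_size=%s bytes\n", a.Enabled, a.MaxSize)
	}
}
```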

@kulmann (Contributor) commented Jun 19, 2023

... and how would the information about a max compressed archive size be useful to the archiver service? "Throw it away if the compressed archive exceeds 1 GiB"? That doesn't sound very useful. 😜

@wkloucek (Contributor, Author) commented Jun 19, 2023

> ... and how would the information about a max compressed archive size be useful to the archiver service? "Throw it away if the compressed archive exceeds 1 GiB"? That doesn't sound very useful. 😜

But that's exactly the behavior I observed. To be honest, your current logic in Web should be safe, but it could prevent the download of files that compress well. I'm fine with the current behavior you described. I just wanted to say that the backend might actually be doing something different.

@kobergj (Collaborator) commented Jun 21, 2023

Server currently only counts uncompressed bytes when creating the archive. So your 1.000001 GB folder will fail.

There is a bug in the server implementation (it doesn't close the writer) which probably causes the archive to be corrupted. It is intended to be valid but not contain all files.

Is this still the desired behaviour? Then I'll just fix the bug. Otherwise I can change the behaviour as needed.
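
For illustration, the failure mode described here maps directly onto Go's archive/zip: zip.Writer.Close writes the central directory at the end of the file, so aborting on the size limit without closing the writer leaves a zip most tools cannot open. A minimal sketch (hypothetical code, not the actual oCIS archiver) of counting uncompressed bytes while still always closing the writer:

```go
package archive

import (
	"archive/zip"
	"errors"
	"io"
	"os"
)

var errSizeLimitExceeded = errors.New("archive size limit exceeded")

// writeArchive adds the given files to a zip written to out, counting
// uncompressed bytes against maxSize. Close is deferred so the central
// directory is always written: even on error the archive stays structurally
// valid, just missing the files that did not fit.
func writeArchive(out io.Writer, paths []string, maxSize int64) (err error) {
	zw := zip.NewWriter(out)
	defer func() {
		// Skipping this Close is the kind of bug described above: without it
		// the central directory is never written and the zip cannot be opened.
		if cerr := zw.Close(); err == nil {
			err = cerr
		}
	}()

	var written int64
	for _, path := range paths {
		fi, statErr := os.Stat(path)
		if statErr != nil {
			return statErr
		}
		if written+fi.Size() > maxSize {
			return errSizeLimitExceeded // uncompressed size, checked before writing
		}
		f, openErr := os.Open(path)
		if openErr != nil {
			return openErr
		}
		w, createErr := zw.Create(path)
		if createErr != nil {
			f.Close()
			return createErr
		}
		n, copyErr := io.Copy(w, f)
		f.Close()
		written += n
		if copyErr != nil {
			return copyErr
		}
	}
	return nil
}
```

With the deferred Close, the limit error still reaches the caller, but the bytes already written form a readable, merely incomplete archive.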

@wkloucek (Contributor, Author)

> Server currently only counts uncompressed bytes when creating the archive. So your 1.000001 GB folder will fail.

That's actually nice and aligned with oC Web. I probably got it wrong that this limit is about the compressed size.

> There is a bug in the server implementation (it doesn't close the writer) which probably causes the archive to be corrupted. It is intended to be valid but not contain all files.
>
> Is this still the desired behaviour? Then I'll just fix the bug. Otherwise I can change the behaviour as needed.

I think that's already a huge improvement, because oC Web checks the size upfront.
For other clients that don't check the size upfront, this behavior is not too nice.
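
One constraint worth noting for such clients: once the server has started streaming the response body, the 200 status is already committed and no error can be signalled anymore. So for clients that skip the capability check, the server can only reject the request up front or deliver a valid-but-incomplete archive. A sketch of the up-front rejection (hypothetical handler, not the actual oCIS code; totalSize stands in for summing file sizes from metadata):

```go
package archive

import (
	"fmt"
	"net/http"
)

// archiveHandler rejects oversized archive requests before streaming starts.
// Once the first byte of the body is written, the status code can no longer
// change, so the size check must happen before any output.
func archiveHandler(maxSize int64, totalSize func(r *http.Request) (int64, error)) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		size, err := totalSize(r) // sum of uncompressed file sizes, from metadata
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		if size > maxSize {
			http.Error(w,
				fmt.Sprintf("requested resources (%d bytes) exceed archive size limit (%d bytes)", size, maxSize),
				http.StatusRequestEntityTooLarge)
			return
		}
		w.Header().Set("Content-Type", "application/zip")
		// ... stream the archive here ...
	}
}
```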
