5Gb limit #617

Problem:
AWS S3 has a 5 GB limit for a single PUT operation, so the single pre-signed link upload strategy fails for larger files. This affects:

Remedy:
Baklava
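For context, the single pre-signed link strategy typically looks like the following boto3 sketch (bucket and key names here are hypothetical). S3 rejects a single PUT above 5 GB regardless of how the URL was signed:

```python
# Minimal sketch of the single pre-signed link strategy (hypothetical names).
# The backend signs one URL; the client performs one HTTP PUT against it.
import boto3

s3 = boto3.client("s3")
url = s3.generate_presigned_url(
    ClientMethod="put_object",
    Params={"Bucket": "simcore-data", "Key": "studies/123/work.zip"},
    ExpiresIn=3600,  # link validity in seconds
)
# A single PUT to `url` uploads the object, but S3 caps single PUTs at 5 GB.
```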
Comments
Does "uploads from comp. backend" also cover the case of saving work.zip to S3 when a study is closed? Afaik, this is also affected. |
Update on sprint Macarons: Done / Ongoing / Open

Update on sprint Croissant. Ongoing: implement multipart upload links to break the 5 GB limit (see the sketch below).

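A minimal sketch of what such multipart upload links could look like, assuming boto3 and requests; the bucket, key, and part size are hypothetical, and in practice the presigned part URLs would be handed to the client rather than consumed in the same process:

```python
# Sketch of multipart upload via presigned part URLs (hypothetical names).
# The backend starts a multipart upload, signs one URL per part, and
# completes the upload once every part has been PUT and its ETag collected.
import boto3
import requests

s3 = boto3.client("s3")
bucket, key = "simcore-data", "studies/123/work.zip"
part_size = 100 * 1024 * 1024  # every part except the last must be >= 5 MiB

mpu = s3.create_multipart_upload(Bucket=bucket, Key=key)
upload_id = mpu["UploadId"]

parts = []
with open("work.zip", "rb") as f:
    part_number = 1
    while chunk := f.read(part_size):
        url = s3.generate_presigned_url(
            "upload_part",
            Params={"Bucket": bucket, "Key": key,
                    "UploadId": upload_id, "PartNumber": part_number},
            ExpiresIn=3600,
        )
        resp = requests.put(url, data=chunk)
        resp.raise_for_status()
        # S3 returns each part's ETag; it is required to complete the upload.
        parts.append({"ETag": resp.headers["ETag"], "PartNumber": part_number})
        part_number += 1

s3.complete_multipart_upload(
    Bucket=bucket, Key=key, UploadId=upload_id,
    MultipartUpload={"Parts": parts},
)
```

Because each part is its own PUT, no single request ever exceeds 5 GB, which is how this approach breaks the limit.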
Update on sprint Meteora: Done / Ongoing / Open

todo

Update on sprint Brutalism: no progress this sprint on this issue (Open).

Update on sprint Vaporwave: Done / Ongoing / Open

@sanderegg

@esraneufeld no, there isn't. We decided in PM1 to concentrate on other priorities, since this one has a workaround, and we are focusing on dynamic services at the moment.

@esraneufeld - just to add some detail: the workaround is to use the curl command's PUT feature. It's a substitution of one or two lines of code and seems to work just fine without any drawbacks. Still, at some point it would be nice to fix it properly, since the average user won't know the trick.
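The thread doesn't show the exact one-or-two-line substitution; conceptually, the workaround replaces the failing upload call with a plain streamed PUT to the presigned URL, which is what `curl -T work.zip "$PRESIGNED_URL"` does. A rough Python equivalent, with a hypothetical URL and filename:

```python
# Rough equivalent of the curl PUT workaround (hypothetical URL/filename):
# stream the file to the presigned URL with a single HTTP PUT.
import requests

PRESIGNED_URL = "https://simcore-data.s3.amazonaws.com/studies/123/work.zip?X-Amz-..."

with open("work.zip", "rb") as f:
    resp = requests.put(PRESIGNED_URL, data=f)  # requests streams file objects
resp.raise_for_status()
```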
@matusdrobuliak66 Investigate options: AWS temporary credentials for S3.

Current status: presigned links are created with a very long validity period, which is not optimal. It would be better to use the osparc API directly, so we would not rely on a hard-coded time value; it would also be more secure.
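One possible shape for that investigation, sketched with boto3's STS client: mint short-lived temporary credentials scoped to a single object instead of signing a long-lived link. The role ARN, bucket, and key below are hypothetical:

```python
# Sketch of the "temporary credentials" idea: short-lived STS credentials
# scoped to one object, instead of a presigned link with a long lifetime.
import json
import boto3

sts = boto3.client("sts")
resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/s3-upload",  # hypothetical role
    RoleSessionName="study-upload",
    DurationSeconds=900,  # 15 minutes, instead of a very long validity
    Policy=json.dumps({  # further restrict the session to a single object
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:PutObject"],
            "Resource": "arn:aws:s3:::simcore-data/studies/123/work.zip",
        }],
    }),
)
creds = resp["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken

# With real credentials the client can use boto3 directly; upload_file
# performs multipart uploads automatically, so files above 5 GB also work.
client_s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
client_s3.upload_file("work.zip", "simcore-data", "studies/123/work.zip")
```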