Backend: File upload, add generating S3 presigned URLs for PUT and GET #1170
Hey @pbn4, can you flesh out your thoughts on this so we can attempt to size it? Were you thinking of a direct-upload solution like the one outlined here: https://devcenter.heroku.com/articles/s3-upload-node#direct-uploading? What about bucket strategy (i.e. one bucket per client, or can we get away with one bucket for all?) and file naming conventions? From what I can tell, we currently aren't using the aws-sdk package to interact with S3 storage, so would this be the first implementation of it?
Yes, this is exactly what I wanted us to implement.
I'd go with file naming conventions and two buckets (one for public reads and one for private reads). Right now there are two use cases I know of:
And yes, this would be the first implementation of it. There are React libraries for the frontend that handle uploads like this, e.g. https://react-dropzone-uploader.js.org/ (see its S3 uploads manual). The backend could return a standardized response for a presigning request. It's up to the frontend folks to evaluate whether the tool suits our needs, though.
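For sizing purposes, here's a minimal sketch of what that presigning endpoint could hand back using the aws-sdk v2 client; the bucket names, key convention, and expiry below are placeholders, not proposals:

```ts
import { S3 } from 'aws-sdk';

const s3 = new S3({ signatureVersion: 'v4' });

// URL the browser can PUT the file to directly, so the upload never streams through our server.
export function getUploadUrl(bucket: string, key: string, contentType: string): string {
  return s3.getSignedUrl('putObject', {
    Bucket: bucket,
    Key: key,
    ContentType: contentType,
    Expires: 300, // seconds
  });
}

// URL for reading an object back out of the private bucket.
export function getDownloadUrl(bucket: string, key: string): string {
  return s3.getSignedUrl('getObject', {
    Bucket: bucket,
    Key: key,
    Expires: 300,
  });
}
```

The standardized presigning response could then be as simple as `{ url, key }` for the frontend uploader to consume.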
When we were discussing this last week, @jaredcwhite mentioned using Cloudinary. Giving it more thought, I think this is the way to go: it cuts down on the overhead of having to deal with image transformations in the future (and lets us do them right away). Plus, there's a free tier that gives us 25 credits; 1 credit is good for 1,000 transformations, 1 GB of storage, or 1 GB of bandwidth. Transformations only count the first time an asset is transformed, so requesting an asset with a transformation specified in its URL won't count against us each time. If storage or bandwidth becomes an issue, Cloudinary also lets us point it at an S3 bucket to cut down on costs, but given the low number of listings, I don't think it'll be a problem.

I created a Cloudinary account for Exygy to try out some things, and I think it'll work well. I can add the API secret to Heroku so you can grab it for your local environment there too. Cloudinary also supports signed URLs for documents we want to keep private, but I think we only need to handle the public use case of listings for MVP.

So here's what I think should happen (attempting to write this so that anyone can pick it up). This outline is in broad strokes, so if you need clarification on any part, please ask.

Backend

Future considerations:

Frontend
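As a side note on the transformation point above, a delivery URL with the transformation baked in might look like the sketch below; the cloud name, public ID, and transformation params are placeholders:

```ts
import { v2 as cloudinary } from 'cloudinary';

// Hypothetical listing image; width/height/crop are example transformations.
const thumbnailUrl = cloudinary.url('listings/sample-hero-image', {
  width: 400,
  height: 300,
  crop: 'fill',
});
// Produces something like:
// https://res.cloudinary.com/<cloud_name>/image/upload/c_fill,h_300,w_400/listings/sample-hero-image
```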
@seanmalbert OK for me. A side note: we do not have to push a stream to the server for private uploads either; Cloudinary supports the same flow I proposed for S3: create a presigned upload URL server-side and expose it to the frontend (manual), so that the upload can happen browser-side.
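For what it's worth, a rough sketch of that server-side presigning with the Cloudinary Node SDK; the function name, the params being signed, and the response shape are made up for illustration:

```ts
import { v2 as cloudinary } from 'cloudinary';

// Returns everything the browser needs to POST the file straight to Cloudinary,
// so the file never passes through our backend.
export function createSignedUploadParams(publicId: string) {
  const timestamp = Math.round(Date.now() / 1000);
  // Sign only the params that will accompany the upload request.
  const signature = cloudinary.utils.api_sign_request(
    { timestamp, public_id: publicId },
    cloudinary.config().api_secret as string,
  );
  return {
    signature,
    timestamp,
    publicId,
    apiKey: cloudinary.config().api_key,
    uploadUrl: `https://api.cloudinary.com/v1_1/${cloudinary.config().cloud_name}/image/upload`,
  };
}
```

The browser would then POST the file plus those fields to `uploadUrl`, mirroring the S3 presigned-PUT flow.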
I added the Cloudinary account keys to Heroku's config vars at https://dashboard.heroku.com/apps/bloom-reference-backend/settings. The only two that need to be kept secure are CLOUDINARY_SECRET and CLOUDINARY_ENV (if you need that one).
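A small sketch of wiring those config vars into the SDK, assuming env var names along these lines (CLOUDINARY_CLOUD_NAME and CLOUDINARY_KEY are guesses; only CLOUDINARY_SECRET is confirmed above):

```ts
import { v2 as cloudinary } from 'cloudinary';

cloudinary.config({
  cloud_name: process.env.CLOUDINARY_CLOUD_NAME,
  api_key: process.env.CLOUDINARY_KEY,
  api_secret: process.env.CLOUDINARY_SECRET,
  secure: true,
});
```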
Blocker for #1153