
s3util memory consumption #34

Open
nathany opened this issue Feb 12, 2015 · 1 comment
nathany commented Feb 12, 2015

Hi Keith,

Today I tried using s3util to stream 50 videos of 50 MB each to S3 within one second. My Heroku dyno ran out of memory (512 MB).

vegeta attack -targets=targets-video.txt -rate=50 -duration=1s > results.bin

So it was time to figure out the memory profiler. This is what it reports for a smaller 500 KB upload:

      flat  flat%   sum%        cum   cum%
   76800kB 96.17% 96.17%    76800kB 96.17%  github.com/kr/s3/s3util.(*uploader).Write
  529.26kB  0.66% 96.83%   529.26kB  0.66%  bufio.NewReaderSize
  399.30kB   0.5% 97.33%   814.94kB  1.02%  crypto/x509.parseCertificate
   32.01kB  0.04% 97.37% 76970.79kB 96.38%  io.Copy
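
(For anyone reproducing this: a heap profile like the one above can be captured with the standard net/http/pprof package. The snippet below is a generic sketch of that setup, not necessarily the exact code in my app.)

package main

import (
    "log"
    "net/http"
    _ "net/http/pprof" // registers /debug/pprof/* handlers on the default mux
)

func main() {
    // Expose the pprof endpoints alongside the app's own handlers.
    log.Fatal(http.ListenAndServe(":8080", nil))
}

Then point the profiler at the running process with something like go tool pprof <binary> http://localhost:8080/debug/pprof/heap.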

The io.Copy is allocating a 32K buffer for every upload. io.Copy only avoids that buffer when the source implements io.WriterTo or the destination implements io.ReaderFrom; my app is reading from a multipart.Part, which doesn't implement WriteTo, and s3util's uploader doesn't implement:

func (u *uploader) ReadFrom(r io.Reader) (n int64, err error)

I'm not sure if that is the main issue though. I'm still trying to understand uploader.go, particularly Write.
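
To make that concrete, here is a rough sketch of what a ReadFrom on the uploader might look like if it read straight into the part buffer. The names u.buf, u.off and u.flush() are my guesses at the internals, not the actual fields in uploader.go:

// Sketch only: assumes u.buf is a []byte sized to one part, u.off tracks how
// much of it is filled, and u.flush() uploads the part and resets u.off.
func (u *uploader) ReadFrom(r io.Reader) (n int64, err error) {
    for {
        // Read directly into the current part buffer, skipping io.Copy's 32K buffer.
        m, rerr := r.Read(u.buf[u.off:])
        u.off += m
        n += int64(m)
        if u.off == len(u.buf) {
            // Part is full; ship it and start on the next one.
            if err = u.flush(); err != nil {
                return n, err
            }
        }
        if rerr == io.EOF {
            return n, nil
        }
        if rerr != nil {
            return n, rerr
        }
    }
}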

nathany commented Feb 12, 2015

So I noticed that after each part is flushed (via flush()), u.buf is cleared rather than reused.

I wonder if it's still possible to do the intelligent growth of the part size while keeping allocations to a minimum.
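
One way it might work (names below are mine, not the uploader's): recycle part buffers through a small free list once their uploads finish, and only allocate a fresh buffer when the target part size actually grows.

// Sketch only: a free list of part buffers. bufsz is the current target part
// size; growing it just makes get() allocate bigger buffers from then on.
type bufFreeList struct {
    free  chan []byte
    bufsz int
}

func newBufFreeList(size, n int) *bufFreeList {
    return &bufFreeList{free: make(chan []byte, n), bufsz: size}
}

// get returns a recycled buffer when one is available and still big enough,
// otherwise allocates a new one at the current part size.
func (l *bufFreeList) get() []byte {
    select {
    case b := <-l.free:
        if cap(b) >= l.bufsz {
            return b[:0]
        }
        // Too small for the new part size; drop it and allocate below.
    default:
    }
    return make([]byte, 0, l.bufsz)
}

// put hands a buffer back once its part has finished uploading.
func (l *bufFreeList) put(b []byte) {
    select {
    case l.free <- b:
    default: // free list is full; let this buffer be garbage collected
    }
}

Assuming parts are uploaded asynchronously, whichever goroutine finishes a part would put() its buffer back, so steady-state uploads would stop allocating a new buffer per part.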
