Reduce multipart form upload memory usage! #309

Closed · n0v3xx opened this issue Feb 13, 2020 · 14 comments

n0v3xx commented Feb 13, 2020

Hi,
the problem with the current multipart form implementation is that it needs a huge amount of RAM. You should include an option to switch to a different method that uses io.Pipe to stream the file.
It would be great if you could implement this.
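
For reference, a minimal sketch of the streaming technique being proposed, using plain net/http rather than resty's API (the function name, field name, URL, and file path are illustrative, not part of any existing API):

```go
package main

import (
	"io"
	"log"
	"mime/multipart"
	"net/http"
	"os"
	"path/filepath"
)

// streamUpload sends the file at path as a multipart form field named
// fieldName without buffering the whole body in memory: the multipart
// writer feeds one end of an io.Pipe while net/http reads from the
// other end as it transmits the request.
func streamUpload(url, fieldName, path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	pr, pw := io.Pipe()
	mw := multipart.NewWriter(pw)

	go func() {
		part, err := mw.CreateFormFile(fieldName, filepath.Base(path))
		if err != nil {
			pw.CloseWithError(err)
			return
		}
		if _, err := io.Copy(part, f); err != nil {
			pw.CloseWithError(err)
			return
		}
		// Close the multipart writer to emit the trailing boundary,
		// then close the pipe so the reader sees EOF.
		pw.CloseWithError(mw.Close())
	}()

	resp, err := http.Post(url, mw.FormDataContentType(), pr)
	if err != nil {
		return err
	}
	return resp.Body.Close()
}

func main() {
	// Example invocation with placeholder values.
	if err := streamUpload("https://example.com/upload", "file", "big.bin"); err != nil {
		log.Fatal(err)
	}
}
```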

jeevatkm (Member) commented

@n0v3xx Thank you for sharing your experience. I will look into it when I get a chance.

If you're interested, you can take a stab at it too.


amarjeetanandsingh commented May 25, 2020

@jeevatkm
Could you please help me gather the requirements? I'm interested in taking up this task.

As far as I understand, we need to implement an alternative to the SetMultiValueFormData(params url.Values) method so that we can stream the values of the multipart form data.
Since SetMultiValueFormData() takes url.Values (which is a map[string][]string) as input, it stores all the data in memory, and that is what we need to avoid.
Am I correct?

jeevatkm (Member) commented Sep 4, 2020

@amarjeetanandsingh This issue was created to propose using io.Pipe when processing multipart files, to reduce memory usage during a request.

bartzz commented Sep 7, 2020

Can confirm this is an issue

(pprof) top
Showing nodes accounting for 781.27MB, 100% of 781.27MB total
Showing top 10 nodes out of 19
flat flat% sum% cum cum%
781.27MB 100% 100% 781.27MB 100% bytes.makeSlice
0 0% 100% 195.24MB 24.99% bytes.(*Buffer).Grow
0 0% 100% 586.02MB 75.01% bytes.(*Buffer).Write
0 0% 100% 781.27MB 100% bytes.(*Buffer).grow
0 0% 100% 586.02MB 75.01% bytes.(*Reader).WriteTo
0 0% 100% 586.02MB 75.01% gopkg.in/resty%2ev1.(*Client).execute
0 0% 100% 586.02MB 75.01% gopkg.in/resty%2ev1.(*Request).Execute
0 0% 100% 586.02MB 75.01% gopkg.in/resty%2ev1.(*Request).Post
0 0% 100% 586.02MB 75.01% gopkg.in/resty%2ev1.addFileReader
0 0% 100% 586.02MB 75.01% gopkg.in/resty%2ev1.handleMultipart
(pprof)
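
For anyone who wants to reproduce this kind of measurement: a heap profile like the one above can be captured by exposing the net/http/pprof endpoints in the uploading process (a generic sketch; the port is arbitrary):

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* on http.DefaultServeMux
)

func main() {
	// While the upload is running, inspect the heap with:
	//   go tool pprof http://localhost:6060/debug/pprof/heap
	// and type `top` at the (pprof) prompt for output like the above.
	log.Fatal(http.ListenAndServe("localhost:6060", nil))
}
```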

jeevatkm (Member) commented Sep 9, 2020

@bartzz Thanks for sharing the profiling info. Yeah, it's good to optimize memory usage for the multipart file upload flow.
BTW, I noticed you're using resty v1; I would recommend upgrading to v2 😄

segevda (Contributor) commented Sep 20, 2023

Hi @jeevatkm, this might be a good candidate for v3, wdyt?

jeevatkm (Member) commented

Agreed @segevda

jeevatkm added the v3-selected label and removed the enhancement and v2 (For resty v2) labels Sep 25, 2023
jeevatkm added this to the v3.0.0 milestone Sep 25, 2023

krystian-panek-vmltech commented Oct 6, 2023

I guess... that excessive upload memory usage just killed my process...

fatal error: runtime: out of memory

runtime stack:
runtime.throw({0xc40f0e?, 0x2030?})
	/opt/hostedtoolcache/go/1.19.13/x64/src/runtime/panic.go:1047 +0x5d fp=0xc00005fe00 sp=0xc00005fdd0 pc=0x435d3d
runtime.sysMapOS(0xc0a0800000, 0xa0000000?)
	/opt/hostedtoolcache/go/1.19.13/x64/src/runtime/mem_linux.go:187 +0x11b fp=0xc00005fe48 sp=0xc00005fe00 pc=0x41877b
runtime.sysMap(0x1345b20?, 0x7f5d53611000?, 0x42bc40?)
	/opt/hostedtoolcache/go/1.19.13/x64/src/runtime/mem.go:142 +0x35 fp=0xc00005fe78 sp=0xc00005fe48 pc=0x418155
runtime.(*mheap).grow(0x1345b20, 0x50000?)
	/opt/hostedtoolcache/go/1.19.13/x64/src/runtime/mheap.go:1468 +0x23d fp=0xc00005fee8 sp=0xc00005fe78 pc=0x428cfd
runtime.(*mheap).allocSpan(0x1345b20, 0x50000, 0x0, 0x1)
	/opt/hostedtoolcache/go/1.19.13/x64/src/runtime/mheap.go:1199 +0x1be fp=0xc00005ff80 sp=0xc00005fee8 pc=0x42843e
runtime.(*mheap).alloc.func1()
	/opt/hostedtoolcache/go/1.19.13/x64/src/runtime/mheap.go:918 +0x65 fp=0xc00005ffc8 sp=0xc00005ff80 pc=0x427ec5
runtime.systemstack()
	/opt/hostedtoolcache/go/1.19.13/x64/src/runtime/asm_amd64.s:492 +0x49 fp=0xc00005ffd0 sp=0xc00005ffc8 pc=0x4643e9

goroutine 7 [running]:
runtime.systemstack_switch()
	/opt/hostedtoolcache/go/1.19.13/x64/src/runtime/asm_amd64.s:459 fp=0xc0000acbc0 sp=0xc0000acbb8 pc=0x464380
runtime.(*mheap).alloc(0xa0000000?, 0x50000?, 0x0?)
	/opt/hostedtoolcache/go/1.19.13/x64/src/runtime/mheap.go:912 +0x65 fp=0xc0000acc08 sp=0xc0000acbc0 pc=0x427e05
runtime.(*mcache).allocLarge(0xc0000acc90?, 0xa0000000, 0x1)
	/opt/hostedtoolcache/go/1.19.13/x64/src/runtime/mcache.go:233 +0x85 fp=0xc0000acc58 sp=0xc0000acc08 pc=0x4170e5
runtime.mallocgc(0xa0000000, 0x0, 0x0)
	/opt/hostedtoolcache/go/1.19.13/x64/src/runtime/malloc.go:1029 +0x57e fp=0xc0000accd0 sp=0xc0000acc58 pc=0x40d51e
runtime.growslice(0x400000?, {0x0?, 0xc0000acd60?, 0x4d0b26?}, 0xc00006f3e0?)
	/opt/hostedtoolcache/go/1.19.13/x64/src/runtime/slice.go:284 +0x4ac fp=0xc0000acd38 sp=0xc0000accd0 pc=0x44d0ec
bytes.growSlice({0xc050290000, 0x4fff82cf, 0x0?}, 0x4d0b00?)
	/opt/hostedtoolcache/go/1.19.13/x64/src/bytes/buffer.go:240 +0x96 fp=0xc0000acdb0 sp=0xc0000acd38 pc=0x4fb9d6
bytes.(*Buffer).grow(0xc000237ce0, 0x8000)
	/opt/hostedtoolcache/go/1.19.13/x64/src/bytes/buffer.go:142 +0x14f fp=0xc0000acde8 sp=0xc0000acdb0 pc=0x4fb36f
bytes.(*Buffer).Write(0xc000237ce0, {0xc00026a000, 0x8000, 0x0?})
	/opt/hostedtoolcache/go/1.19.13/x64/src/bytes/buffer.go:170 +0x66 fp=0xc0000ace18 sp=0xc0000acde8 pc=0x4fb566
mime/multipart.(*part).Write(0xc000234460, {0xc00026a000?, 0x8000?, 0x8000?})
	/opt/hostedtoolcache/go/1.19.13/x64/src/mime/multipart/writer.go:196 +0x3b fp=0xc0000ace48 sp=0xc0000ace18 pc=0x70ebbb
io.copyBuffer({0xd92c80, 0xc000234460}, {0xd93000, 0xc000012960}, {0x0, 0x0, 0x0})
	/opt/hostedtoolcache/go/1.19.13/x64/src/io/io.go:429 +0x204 fp=0xc0000acec8 sp=0xc0000ace48 pc=0x4a7aa4
io.Copy(...)
	/opt/hostedtoolcache/go/1.19.13/x64/src/io/io.go:386
github.com/go-resty/resty/v2.writeMultipartFormFile(0xc000035e40?, {0xc000247141, 0x7}, {0xc000035e40, 0x31}, {0xd93000, 0xc000012960})
	/home/runner/go/pkg/mod/github.com/go-resty/resty/[email protected]/util.go:217 +0x1ad fp=0xc0000acf48 sp=0xc0000acec8 pc=0xa2a26d
github.com/go-resty/resty/v2.addFile(0x12f78e0?, {0xc000247141, 0x7}, {0xc000035e40, 0x31})
	/home/runner/go/pkg/mod/github.com/go-resty/resty/[email protected]/util.go:227 +0x11a fp=0xc0000acfd8 sp=0xc0000acf48 pc=0xa2

v2.writeMultipartFormFile() ... I need to fix this relatively quickly or drop Resty.

Any ideas for a quick fix? My case is that I need to upload a 17 GB file :/ and my tool, which uses Resty, needs to handle that...

@jeevatkm ?


bartzz commented Oct 6, 2023

@krystian-panek-wttech I don't think there is a quick fix at the moment. Either use chunked upload on the client or create an endpoint that doesn't use resty for big uploads.

krystian-panek-vmltech commented

Chunked upload? What API/methods are you referring to?


bartzz commented Oct 6, 2023

@krystian-panek-wttech

https://api.video/blog/tutorials/uploading-large-files-with-javascript/

Upload your 17 GB as ~50-100 MB chunks and create a single file from them on the BE.
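
A rough sketch of what that can look like on the client side with resty v2 (the chunk size, the X-Chunk-Index header, the URL, and the server-side reassembly contract are all assumptions; real APIs often use Content-Range or an upload-session ID instead):

```go
package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"github.com/go-resty/resty/v2"
)

const chunkSize = 64 << 20 // 64 MB per chunk

// uploadInChunks reads the file sequentially and POSTs one chunk at a
// time, so at most chunkSize bytes are held in memory. The endpoint
// and the chunk-index header are hypothetical; the server is assumed
// to reassemble the chunks into a single file.
func uploadInChunks(client *resty.Client, url, path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	buf := make([]byte, chunkSize)
	for i := 0; ; i++ {
		n, err := io.ReadFull(f, buf)
		if n > 0 {
			_, postErr := client.R().
				SetHeader("X-Chunk-Index", fmt.Sprint(i)).
				SetBody(buf[:n]).
				Post(url)
			if postErr != nil {
				return postErr
			}
		}
		if err == io.EOF || err == io.ErrUnexpectedEOF {
			return nil // last (possibly short) chunk sent
		}
		if err != nil {
			return err
		}
	}
}

func main() {
	client := resty.New()
	if err := uploadInChunks(client, "https://example.com/upload-chunk", "big.bin"); err != nil {
		log.Fatal(err)
	}
}
```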

krystian-panek-vmltech commented

I have no way to update the BE; probably the only way to go for me is: https://medium.com/@owlwalks/sending-big-file-with-minimal-memory-in-golang-8f3fc280d2c

If it works for me, I will consider contributing this approach to Resty somehow, if possible.

krystian-panek-vmltech commented

The approach from medium.com works well ;) I am convinced that it should be incorporated into Resty someday.

jeevatkm (Member) commented Oct 4, 2024

Done, refer to PR #879

jeevatkm closed this as completed Oct 4, 2024