IPFS Services Memory Increases with Data Addition #9856
Comments
You can do
ipfs-profile.zip — the IPFS binary in the dump is removed due to size constraints, but we have not made any customization to the Docker image ipfs/go-ipfs:v0.19.0.
@smvdn thx, I've looked at your profile and 40% of it is spent generating the archive; the rest is spent doing network IO. That is 20%, which is not great, but not that bad given this assumes one core (and I hope you have more than one). If this was just a lucky moment, you can run a way longer profile by passing a longer profile duration.
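For reference, one way to capture a longer profile is to pull it from the daemon's pprof endpoint yourself. This is only a minimal sketch, assuming the RPC API is listening on the default 127.0.0.1:5001 and the standard net/http/pprof handlers are mounted under /debug/pprof/; the 300-second window matches the profile attached below.

```go
// fetch_profile.go: grab a longer CPU profile from a running Kubo daemon.
// Assumes the RPC API listens on the default 127.0.0.1:5001.
package main

import (
	"io"
	"log"
	"net/http"
	"os"
)

func main() {
	// seconds=300 asks the runtime to sample CPU usage for 5 minutes
	// instead of the short default window.
	resp, err := http.Get("http://127.0.0.1:5001/debug/pprof/profile?seconds=300")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	out, err := os.Create("cpu-300s.pprof")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()

	if _, err := io.Copy(out, resp.Body); err != nil {
		log.Fatal(err)
	}
	log.Println("wrote cpu-300s.pprof; inspect with: go tool pprof cpu-300s.pprof")
}
```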
@Jorropo we are seeing variation in memory utilization during loads; the CPU variation seems to be normal. Attaching the profile logs with 300s as well. Is there any reference article you would suggest for understanding more about the different background activities (like archive generation, network IO, etc.)?
@smvdn in your last profile I see 23TB of memory allocated (but then freed); the heap is only 70MB, though, so I don't know about the 4GB.
As mentioned earlier, we are running the service as Pods in Kubernetes. The Kubernetes nodes have 16GB of memory and the IPFS containers have a maximum of 8GB. It is not clear how the tool is showing 23TB (which is way too high) of memory available on the VM. I believe 70 MB of heap is on the low side too. Should we look at optimizing the heap as well?
@smvdn the 23TB is a flow over however long you recorded the profile, not an instant measure. That means memory is allocated, then garbage collected, then it repeats, for a total of 23TB.
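To make the flow-vs-snapshot distinction concrete, here is a small illustrative Go sketch (not Kubo code): cumulative allocations (`TotalAlloc`) keep growing even though the live heap (`HeapAlloc`) stays roughly flat, which is how a profile can report terabytes "allocated" while the heap sits at 70MB.

```go
// alloc_vs_heap.go: illustrate why cumulative allocations can dwarf the live heap.
package main

import (
	"fmt"
	"runtime"
)

// sink keeps each allocation alive just long enough to defeat dead-code
// elimination; it is dropped again right away.
var sink []byte

func main() {
	var m runtime.MemStats

	for i := 0; i < 5; i++ {
		// Allocate ~100 MiB that immediately becomes garbage. The GC reclaims
		// it, but it still counts toward the cumulative allocation total.
		sink = make([]byte, 100<<20)
		sink = nil

		runtime.GC()
		runtime.ReadMemStats(&m)
		fmt.Printf("TotalAlloc (cumulative): %4d MiB   HeapAlloc (live): %3d MiB\n",
			m.TotalAlloc>>20, m.HeapAlloc>>20)
	}
}
```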
@Jorropo We are not able to limit the memory usage with GOMEMLIMIT. We are setting the env GOMEMLIMIT=1024MiB in the IPFS containers; however, the IPFS service memory is going beyond 2.5 GB after the env update. Attaching a snapshot of the GOMEMLIMIT env config for reference. I need some help to see if I am missing some config here.
@smvdn BTW
@Jorropo we have tested with IPFS v0.21.0, but GOMEMLIMIT was not effective in limiting memory in our case. The memory still seems to increase beyond the set limit with data operations. One more thing we noticed is that the memory seems to stay persistent once it reaches around 3 GB, even when there are no data operations. If GC is running, ideally this should reduce the memory after data operations, correct?
@smvdn you can find the docs for GOMEMLIMIT here:
This is a soft limit, not a hard one; it is a feature of the Go runtime, and we don't have a whole lot of control over how it works.
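For context, GOMEMLIMIT maps onto `runtime/debug.SetMemoryLimit`. The sketch below (illustrative only, not Kubo code) shows the same knob set from inside a program and why it is soft: the GC simply works harder as the heap approaches the limit, and pages already freed may be returned to the OS only gradually.

```go
// memlimit.go: programmatic equivalent of the GOMEMLIMIT environment variable.
package main

import (
	"fmt"
	"runtime/debug"
)

func main() {
	// A negative argument leaves the limit unchanged and returns its current
	// value, i.e. whatever GOMEMLIMIT was set to (math.MaxInt64 if unset).
	prev := debug.SetMemoryLimit(-1)
	fmt.Printf("current soft memory limit: %d bytes\n", prev)

	// Equivalent to running with GOMEMLIMIT=1024MiB. The GC tries to keep
	// Go-managed memory under this value, but it is a soft target: the process
	// can still exceed it if the live heap simply needs more.
	debug.SetMemoryLimit(1024 << 20)

	// Ask the runtime to return freed pages to the OS right now; normally this
	// happens gradually, which is one reason container RSS can stay high after GC.
	debug.FreeOSMemory()
}
```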
Triage notes:
|
Checklist
Installation method
built from source
Version
Config
Description
We are running IPFS as pods in Kubernetes. When the IPFS pod starts, the memory consumption of the IPFS container is under 1.5 GB, but after doing data addition (the data are being pinned as well) for 3 to 4 hours, the container memory increases drastically to nearly 4 GB. We would like to understand why the memory consumption shows this drastic increase.
Is there an option in IPFS to limit the maximum memory consumption of the IPFS containers? Otherwise this could crash the Pod and thus impact incoming new data.