We are using memcached for session storage, but on a few nodes that have not been restarted for more than a week, we noticed that instances of the net.spy.memcached.protocol.binary.BinaryMemcachedNodeImpl class occupy close to 50% or more of the total heap and are not reclaimed even after a major GC run.
Attaching a heap memory snapshot for reference. Note: nodes with this issue have a writeQ (java.util.concurrent.LinkedBlockingQueue) containing more than 400K entries (it looks like a cyclic dependency is keeping them reachable), whereas the healthy nodes have a writeQ size of 0.
Detailed snapshot of writeQ
Can someone help me figure out the cause of this issue?
We have seen this issue in our cluster as well; this queue (the operation queue) was occupying almost 90% of the heap. I think it happens under high concurrent load. A workaround is to limit the queue size by implementing a custom operation queue like here and plugging it into the ConnectionFactory. Note: I did not write that implementation, I am just using it.
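A minimal sketch of the bounding idea, assuming spymemcached's `ConnectionFactoryBuilder.setOpQueueFactory` / `net.spy.memcached.ops.OperationQueueFactory` as the wiring point (the wiring itself is shown only in comments; the capacity value is a made-up example). The runnable part below just demonstrates why a bounded `ArrayBlockingQueue` caps memory where an unbounded `LinkedBlockingQueue` would keep growing:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BoundedQueueDemo {
    // With spymemcached you would implement net.spy.memcached.ops.OperationQueueFactory
    // so that create() returns a bounded queue, then register it, e.g.:
    //
    //   ConnectionFactory cf = new ConnectionFactoryBuilder()
    //       .setOpQueueFactory(() -> new ArrayBlockingQueue<>(16384)) // 16384 is illustrative
    //       .setOpQueueMaxBlockTime(100)   // how long enqueue may block when full
    //       .build();
    //
    // When the queue is full, new operations fail fast instead of piling up on the heap.
    public static void main(String[] args) {
        // Capacity 2 for demonstration only.
        BlockingQueue<Integer> writeQ = new ArrayBlockingQueue<>(2);
        System.out.println(writeQ.offer(1)); // accepted
        System.out.println(writeQ.offer(2)); // accepted
        System.out.println(writeQ.offer(3)); // rejected: queue full, memory stays bounded
    }
}
```

The trade-off is that under sustained overload some operations are rejected (or time out) rather than queued, which is usually preferable to the unbounded writeQ growth described above.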