In production we observe continuous growth of direct memory, which very often leads to an OOM. While running tests in development we noticed that the problem occurs only when cached mode is used; in proxy mode it does not occur.
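For anyone trying to reproduce this, the growth can be watched through Netty's public direct-memory counters. A minimal sketch follows; the `DirectMemoryProbe` class and its once-a-minute scheduling are illustrative, not carapace code, but the Netty calls are the library's actual API:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import io.netty.buffer.PooledByteBufAllocator;
import io.netty.util.internal.PlatformDependent;

// Hypothetical probe: periodically samples Netty's direct-memory counters
// so the steady growth in cached mode can be observed over time.
public final class DirectMemoryProbe {

    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            long used = PlatformDependent.usedDirectMemory();   // bytes Netty has reserved
            long max = PlatformDependent.maxDirectMemory();     // effective direct-memory limit
            long pooled = PooledByteBufAllocator.DEFAULT.metric().usedDirectMemory();
            System.out.printf("direct: %,d / %,d bytes (pooled arenas: %,d bytes)%n",
                    used, max, pooled);
        }, 0, 1, TimeUnit.MINUTES);
    }
}
```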
reactor.netty.ReactorNetty$InternalNettyException: java.lang.OutOfMemoryError: Cannot reserve 4194304 bytes of direct buffer memory (allocated: 17176849418, limit: 17179869184)
Caused by: java.lang.OutOfMemoryError: Cannot reserve 4194304 bytes of direct buffer memory (allocated: 17176849418, limit: 17179869184)
at java.base/java.nio.Bits.reserveMemory(Bits.java:178)
at java.base/java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:121)
at java.base/java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:332)
at io.netty.buffer.PoolArena$DirectArena.allocateDirect(PoolArena.java:701)
at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:676)
at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:215)
at io.netty.buffer.PoolArena.tcacheAllocateNormal(PoolArena.java:197)
at io.netty.buffer.PoolArena.allocate(PoolArena.java:139)
at io.netty.buffer.PoolArena.allocate(PoolArena.java:129)
at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:396)
at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:188)
at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:179)
at io.netty.channel.unix.PreferredDirectByteBufAllocator.ioBuffer(PreferredDirectByteBufAllocator.java:53)
at io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:120)
at io.netty.channel.epoll.EpollRecvByteAllocatorHandle.allocate(EpollRecvByteAllocatorHandle.java:75)
at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:785)
at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:499)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:833)
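For reference, the limit in the trace (17179869184 bytes) is exactly 16 GiB, i.e. the JVM's direct-memory ceiling as set by -XX:MaxDirectMemorySize or its heap-derived default, and the failed 4194304-byte request is a single 4 MiB pooled-arena chunk, as the allocateNormal/newChunk frames show. With allocation already within about 3 MB of the ceiling, no further chunk can fit; raising the limit only postpones the failure.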
Using Netty's leak detection we didn't find any leaks, and the only workaround is to restart the carapace nodes.
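For completeness, this is the kind of setup we mean by leak detection. The sketch below is illustrative (the wrapper class is hypothetical), but `ResourceLeakDetector` is Netty's real API, and the same effect can be had with the -Dio.netty.leakDetection.level=PARANOID JVM flag:

```java
import io.netty.util.ResourceLeakDetector;

// Sketch: enable Netty's most aggressive leak detection programmatically.
public final class LeakDetectionSetup {

    public static void main(String[] args) {
        // PARANOID tracks every allocated ByteBuf and logs a LEAK report,
        // including access records, for any buffer GC'd without release().
        ResourceLeakDetector.setLevel(ResourceLeakDetector.Level.PARANOID);
        // ... start the proxy as usual; even at this level we saw no LEAK
        // reports while direct memory kept growing.
    }
}
```

Note that the leak detector only flags ByteBufs that are garbage-collected without a matching release(); memory retained in pooled arenas or per-thread caches never registers as a LEAK, which would be consistent with detection coming up empty while direct usage still grows.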