rage1337 opened a new issue, #6233:
URL: https://github.com/apache/camel-k/issues/6233

   ### What happened?
   
   We are trying to run a Netty-to-MQTT integration. Our platform is OpenShift 4.16 with Camel K v2.7.0 installed via the operator.
   
   The use case is very simple:
   
   ```
   kamel run \
       --trait container.request-cpu=2 \
       --trait container.limit-cpu=4 \
       --trait container.request-memory=1Gi \
       --trait container.limit-memory=4Gi \
       --trait jvm.options=-Xms512m \
       --trait jvm.options=-Xmx1g \
       --trait jvm.options=-XX:MaxDirectMemorySize=1g \
       --trait jvm.options=-Dio.netty.maxDirectMemory=536870912 \
       --trait jvm.options=-Dio.netty.allocator.numDirectArenas=4 \
       --trait jvm.options=-Dio.netty.allocator.maxOrder=7 \
       --trait jvm.options=-XX:MaxMetaspaceSize=256m \
       --trait jvm.options=-XshowSettings:vm \
       --trait jvm.options=-XX:+PrintFlagsFinal \
       --trait jvm.options=-Dio.netty.leakDetectionLevel=advanced \
       --trait jvm.options=-Dcom.sun.management.jmxremote \
       --trait jvm.options=-Dcom.sun.management.jmxremote.port=7091 \
       --trait jvm.options=-Dcom.sun.management.jmxremote.rmi.port=7091 \
       --trait jvm.options=-Dcom.sun.management.jmxremote.local.only=false \
       --trait jvm.options=-Dcom.sun.management.jmxremote.authenticate=false \
       --trait jvm.options=-Dcom.sun.management.jmxremote.ssl=false \
       --trait logging.level=DEBUG \
       --trait jvm.options=-Djava.rmi.server.hostname=127.0.0.1 \
       --config configmap:camel-k-transformation-file test08.xml
   
   ```
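    For completeness, the same setup can also be expressed declaratively as an Integration resource — a sketch only, with trait field names assumed per the Camel K v2 trait API and the resource name `test08` chosen for illustration:

    ```yaml
    apiVersion: camel.apache.org/v1
    kind: Integration
    metadata:
      name: test08
    spec:
      traits:
        container:
          requestCPU: "2"
          limitCPU: "4"
          requestMemory: 1Gi
          limitMemory: 4Gi
        jvm:
          options:
            - -Xms512m
            - -Xmx1g
            - -XX:MaxDirectMemorySize=1g
            - -Dio.netty.maxDirectMemory=536870912
            # ...remaining -D options as in the kamel run command above
        logging:
          level: DEBUG
    ```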
   test08.xml looks as follows: 
   
   ```
    <routes xmlns="http://camel.apache.org/schema/spring">
       <route id="bitstream-route">
            <from uri="netty:tcp://abc-service.kamel-example.svc.cluster.local:10002?clientMode=true&amp;textline=true&amp;sync=false&amp;decoderMaxLineLength=5000000&amp;delimiter=LINE&amp;allowDefaultCodec=false"/>
           <filter>
               <simple>${body} == null</simple>
               <stop/>
           </filter>
            <to uri="paho-mqtt5:theQueue?userName=abc&amp;password=abc&amp;brokerUrl=tcp://mosquitto-service.kamel-example.svc.cluster.local:11883"/>
       </route>
   </routes>
   ```
   
   This results in an OOM after a while with the following stack trace:
    ```
    2025-08-19 09:41:50,519 WARN  [org.apa.cam.com.net.NettyConsumerExceptionHandler] (Camel (camel-1) thread #6 - NettyConsumerExecutorGroup) Closing channel as an exception was thrown from Netty. Caused by: [java.lang.OutOfMemoryError - Cannot reserve 65536 bytes of direct buffer memory (allocated: 1073709164, limit: 1073741824)]: java.lang.OutOfMemoryError: Cannot reserve 65536 bytes of direct buffer memory (allocated: 1073709164, limit: 1073741824)
        at java.base/java.nio.Bits.reserveMemory(Bits.java:178)
        at java.base/java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:121)
        at java.base/java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:332)
        at io.netty.buffer.PoolArena$DirectArena.allocateDirect(PoolArena.java:718)
        at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:693)
        at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:213)
        at io.netty.buffer.PoolArena.tcacheAllocateNormal(PoolArena.java:195)
        at io.netty.buffer.PoolArena.allocate(PoolArena.java:137)
        at io.netty.buffer.PoolArena.allocate(PoolArena.java:127)
        at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:403)
        at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:188)
        at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:179)
        at io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:140)
        at io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:120)
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:150)
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:796)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:732)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:658)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:998)
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
        at java.base/java.lang.Thread.run(Thread.java:840)
   ```
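    The numbers in the error line up: with `-XX:MaxDirectMemorySize=1g` the JDK direct-buffer limit is 1073741824 bytes, of which 1073709164 were already reserved when the failure occurred, leaving less than the 64 KiB chunk Netty requested. A minimal check of that arithmetic (values copied from the error message):

    ```java
    // Arithmetic from the OOM message: the limit vs. bytes already reserved.
    public class DirectMemoryMath {
        static final long LIMIT = 1_073_741_824L;     // -XX:MaxDirectMemorySize=1g
        static final long ALLOCATED = 1_073_709_164L; // "allocated" in the error
        static final long REQUESTED = 65_536L;        // the 64 KiB chunk Netty asked for

        // Bytes still available below the JDK direct-memory limit.
        static long freeBytes() {
            return LIMIT - ALLOCATED; // 32660 bytes left, < 65536
        }

        public static void main(String[] args) {
            System.out.println("free bytes:   " + freeBytes());
            System.out.println("request fits: " + (freeBytes() >= REQUESTED));
        }
    }
    ```

    Note that the trace goes through `java.nio.Bits.reserveMemory`, i.e. these buffers are accounted against `-XX:MaxDirectMemorySize` by the JDK, which suggests the `-Dio.netty.maxDirectMemory=536870912` cap on Netty's own accounting was not what was exhausted here.
    
    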
   
    <img width="2296" height="1181" alt="Image" src="https://github.com/user-attachments/assets/0fb8c5a2-d0bb-4727-909a-8f1ba4ec3b8c" />
   
    Increasing the pod's memory only delays the OOM.
    We experimented with various flags but did not find a configuration that stays stable over time.

    We found [this Medium article](https://medium.com/@asafmesika/a-netty-bytebuf-memory-leak-story-and-lessons-learned-e715aeb0d275) but are unsure whether it describes the same problem.
   
   
   
   ### Steps to reproduce
   
   _No response_
   
   ### Relevant log output
   
   ```shell
   
   ```
   
   ### Camel K version
   
   v2.7.0


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
