This is a response to an existing thread (about Memory leak in recent versions of Tomcat):
https://www.mail-archive.com/users@tomcat.apache.org/msg141882.html

I haven't found a way to reply publicly as a continuation of that thread, so here is what I wanted to add.

I started experiencing exactly the same issue after updating from Spring 6.0.7 to 6.0.9, which in turn moved Tomcat from 10.1.5 to 10.1.8. The memory leak is clearly visible in my monitoring tools. A heap dump shows many times more entries in the waitingProcessors map than there are real active connections, and those entries end up retaining roughly 8 GB of heap.

I believe I have found a way to reproduce the issue locally: open a WebSocket session from a client in Chrome, go to dev-tools, switch the Network tab to "Offline", wait more than 50 seconds, then switch it back to "No throttling".

Sometimes the client gets an error back such as:

a["ERROR\nmessage:AMQ229014\\c Did not receive data from /192.168.0.1\\c12720 within the 50000ms connection TTL. The connection will now be closed.\ncontent-length:0\n\n\u0000"]

Other times I instead get something like c[1002, ""] from Artemis, followed by an "Invalid frame header" error from Chrome (WebSockets view in dev-tools). Only in the latter case do entries appear to be leaked into that map. It may just be a coincidence, but that is what I have observed at least twice. After the error appeared, I waited long enough for the front end to reconnect the session, and then I simply quit Chrome.

After forcefully downgrading Tomcat from 10.1.8 back to 10.1.5 while keeping the same Spring version, the issue is gone (confirmed in production). In fact, with Tomcat 10.1.5 I have never managed to trigger the "Invalid frame header" error in Chrome again (in about 10 attempts), whereas on 10.1.8 I got it in 2 out of 4 attempts.

Is this something that is already tracked?

Best regards,
Ruben
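
P.S. In case it helps anyone watch this counter at runtime instead of taking heap dumps, a rough reflection probe along the lines below should work. Treat it as a sketch only: the field name comes straight from the heap dump, and since waitingProcessors is a Tomcat-internal detail its exact type may differ between versions (hence the generic Collection/Map handling).

import java.lang.reflect.Field;
import java.util.Collection;
import java.util.Map;

import org.apache.coyote.ProtocolHandler;

public final class WaitingProcessorsProbe {

    private WaitingProcessorsProbe() {
    }

    /**
     * Best-effort count of the entries held in the protocol handler's internal
     * "waitingProcessors" field (the one showing up in the heap dump).
     * Returns -1 if the field cannot be found or read.
     */
    public static int waitingProcessorCount(ProtocolHandler handler) {
        for (Class<?> c = handler.getClass(); c != null; c = c.getSuperclass()) {
            try {
                Field f = c.getDeclaredField("waitingProcessors");
                f.setAccessible(true);
                Object value = f.get(handler);
                if (value instanceof Collection<?> col) {
                    return col.size();
                }
                if (value instanceof Map<?, ?> map) {
                    return map.size();
                }
                return -1;
            } catch (NoSuchFieldException e) {
                // not declared at this level, keep walking up the class hierarchy
            } catch (ReflectiveOperationException e) {
                return -1;
            }
        }
        return -1;
    }
}

It can be pointed at the embedded connector's handler (connector.getProtocolHandler()) and the result compared with the number of WebSocket sessions the application believes are open.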