[ https://issues.apache.org/jira/browse/CXF-9146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17979288#comment-17979288 ]
Eric edited comment on CXF-9146 at 6/16/25 5:27 PM:
----------------------------------------------------

[~reta] I do not think that I am using CXF in an improper or undocumented way. I am not closing the response, since there is no response to close. I just create a JAX-RS client by proxy with CXF. All HTTP requests are performed internally through a dynamic proxy created by the cxf-jaxrs-client API, which handles both the request and the response.

The example should be perfectly valid:
1. A thread-safe proxy client is created.
2. Many requests are performed with the same, thread-safe client in multiple threads.

This corresponds to our pattern of creating a CXF client, registering it as a Spring bean and using it for the entire lifetime of the application, which should be a common use case. This also explains the memory leak: since the client lives in the application scope, its ThreadLocalClientState is never garbage collected, so any leak within its WeakHashMap with Thread keys accumulates over the lifetime of the application.

Note that, as per the comment above, I created a newer, MUCH simpler PR that tries to clear the reference to the PipedOutputStream which prevents the garbage collection of dead threads: [https://github.com/apache/cxf/pull/2467/files]

Sorry for all of this info dump, I just want to help solve this problem, which made huge waves in our production environment. I have already fixed it the same way as in the PR, by inheriting from HttpClientHTTPConduit and simply nulling the wrapped stream in an outer close block, and it is working fine.
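To make the usage pattern concrete, here is a minimal sketch (the BookStore interface, the address and the numbers are made up for illustration; the threadSafe flag of JAXRSClientFactory.create is, to my understanding, what selects the ThreadLocalClientState internally, and the imports assume the javax.ws.rs flavour of CXF 3.x):

{code:java}
import java.util.Collections;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import javax.ws.rs.GET;
import javax.ws.rs.Path;

import org.apache.cxf.jaxrs.client.JAXRSClientFactory;

public class SharedProxyClientSketch {

    // Hypothetical JAX-RS interface, only for illustration.
    @Path("/books")
    public interface BookStore {
        @GET
        String listBooks();
    }

    public static void main(String[] args) {
        // 1. A thread-safe proxy client is created once,
        //    e.g. registered as an application-scoped Spring bean.
        BookStore client = JAXRSClientFactory.create(
                "http://localhost:8080/services", BookStore.class,
                Collections.emptyList(), true /* threadSafe */);

        // 2. Many requests are performed with the same client from many threads.
        ExecutorService pool = Executors.newFixedThreadPool(16);
        for (int i = 0; i < 1000; i++) {
            pool.submit(() -> {
                client.listBooks(); // each call runs on the shared proxy from a pool thread
            });
        }
        pool.shutdown();

        // Each worker thread becomes a key of the WeakHashMap inside
        // ThreadLocalClientState; because the client is application-scoped,
        // whatever a dead thread's LocalClientState still references is only
        // released once that map is accessed again.
    }
}
{code}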
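Roughly, the idea of that workaround is the following (only a sketch: the constructor and the exact override point of HttpClientHTTPConduit are approximations from memory, and the proper place for the fix is inside the conduit itself, as in the PR):

{code:java}
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

import org.apache.cxf.Bus;
import org.apache.cxf.message.Message;
import org.apache.cxf.service.model.EndpointInfo;
import org.apache.cxf.transport.http.HttpClientHTTPConduit;
import org.apache.cxf.ws.addressing.EndpointReferenceType;

public class ReferenceClearingConduit extends HttpClientHTTPConduit {

    public ReferenceClearingConduit(Bus b, EndpointInfo ei, EndpointReferenceType t) throws IOException {
        super(b, ei, t);
    }

    @Override
    protected OutputStream createOutputStream(Message message, boolean needToCacheRequest,
                                              boolean isChunking, int chunkThreshold) throws IOException {
        OutputStream delegate = super.createOutputStream(message, needToCacheRequest,
                                                         isChunking, chunkThreshold);
        // Outer close block: close the real stream, then drop the reference so the
        // wrapped (piped) stream can be garbage collected even if the per-thread
        // client state of a dead thread still points at this outer stream.
        return new FilterOutputStream(delegate) {
            @Override
            public void write(byte[] b, int off, int len) throws IOException {
                out.write(b, off, len); // delegate bulk writes directly
            }

            @Override
            public void close() throws IOException {
                try {
                    super.close();
                } finally {
                    out = null;
                }
            }
        };
    }
}
{code}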
> MemoryLeak caused by ThreadLocalClientState when used with the HttpClientConduit
> ---------------------------------------------------------------------------------
>
>                 Key: CXF-9146
>                 URL: https://issues.apache.org/jira/browse/CXF-9146
>             Project: CXF
>          Issue Type: Bug
>            Reporter: Eric
>            Priority: Major
>         Attachments: CxfClientMemoryLeakTest.java, image-2025-06-10-19-04-43-021.png, image-2025-06-10-19-05-03-593.png
>
>
> ThreadLocalClientState is defined as follows:
> {code:java}
> public class ThreadLocalClientState implements ClientState {
>     private Map<Thread, LocalClientState> state =
>         Collections.synchronizedMap(new WeakHashMap<Thread, LocalClientState>());
> {code}
> ...
> which does not make any sense to me, as this is just a ThreadLocal built at home, a class which has been in the JDK for a very long time. A WeakHashMap of Threads might emulate ThreadLocal, but it can cause unexpected memory leaks, because the entries (and therefore their values) are strongly held until the next time the WeakHashMap is accessed, even after the keys have become unreachable.
>
> Let's take this simple example:
> {code:java}
> var mb = 1024 * 1024;
> var rt = Runtime.getRuntime();
> var entries = new WeakHashMap<Object, byte[][]>();
> var keys = new ArrayList<>();
> System.gc();
> System.out.printf("Memory before Memory Load: %s mb%n", (rt.totalMemory() - rt.freeMemory()) / mb);
> IntStream.range(0, 100).mapToObj(i -> new Date(i + 1000)).forEach(key -> {
>     keys.add(key);
>     entries.put(key, new byte[1][10 * mb]);
> });
> System.gc();
> System.out.printf("Memory after Memory Load: %s mb%n", (rt.totalMemory() - rt.freeMemory()) / mb);
> keys.clear();
> System.gc();
> System.out.printf("Memory after Memory Keys clear: %s mb%n", (rt.totalMemory() - rt.freeMemory()) / mb);
> System.out.printf("Map Size: %s%n", entries.size());
> System.gc();
> System.out.printf("Memory after first Memory Map access: %s mb%n", (rt.totalMemory() - rt.freeMemory()) / mb);
> {code}
>
> This leads to the following output:
> {noformat}
> Memory before Memory Load: 13 mb
> Memory after Memory Load: 1213 mb
> Memory after Memory Keys clear: 1213 mb
> Map Size: 0
> Memory after first Memory Map access: 13 mb
> {noformat}
>
> Please ignore my inaccurate measurements with System.gc() instead of JMH, but looking at the source of WeakHashMap it becomes obvious that no cleanup is technically possible until the next time a method of the Map interface is invoked on it. This is a scenario which should rarely happen in a production environment, but we have seen it at least once as the cause of an OutOfMemoryError, when many threads executed requests in parallel that returned very large Response objects held in memory.
> Therefore, and especially with the future of virtual threads and scoped values in mind, I would advise refactoring the code so that it internally uses a simple ThreadLocal instead of the WeakHashMap.
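> To sketch the idea (this is not the actual ClientState API, only the per-thread storage replaced by the JDK's ThreadLocal):
> {code:java}
> import java.util.function.Supplier;
>
> public class PerThreadState<S> {
>
>     // The value lives inside the Thread object itself, so it becomes
>     // unreachable as soon as the thread dies: no deferred cleanup on the
>     // next map access, and a dead thread is never pinned by its own state.
>     private final ThreadLocal<S> state = new ThreadLocal<>();
>
>     public S get(Supplier<S> factory) {
>         S current = state.get();
>         if (current == null) {
>             current = factory.get();   // e.g. () -> new LocalClientState(...)
>             state.set(current);
>         }
>         return current;
>     }
>
>     public void reset() {
>         state.remove();                // clears only the calling thread's entry
>     }
> }
> {code}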