[ https://issues.apache.org/jira/browse/KUDU-3567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alexey Serbin resolved KUDU-3567.
---------------------------------
    Fix Version/s: 1.18.0
       Resolution: Fixed

> Resource leakage related to HashedWheelTimer in AsyncKuduScanner
> ----------------------------------------------------------------
>
>                 Key: KUDU-3567
>                 URL: https://issues.apache.org/jira/browse/KUDU-3567
>             Project: Kudu
>          Issue Type: Bug
>          Components: client, java
>    Affects Versions: 1.18.0
>            Reporter: Alexey Serbin
>            Assignee: YifanZhang
>            Priority: Major
>             Fix For: 1.18.0
>
>
> With KUDU-3498 implemented in 
> [8683b8bdb|https://github.com/apache/kudu/commit/8683b8bdb675db96aac52d75a31d00232f7b9fb8],
> resource leak reports now show up; see below.
> Overall, the way {{HashedWheelTimer}} is used to keep scanners alive 
> directly contradicts the recommendation on [this documentation 
> page|https://netty.io/4.1/api/io/netty/util/HashedWheelTimer.html]:
> {quote}*Do not create many instances.*
> HashedWheelTimer creates a new thread whenever it is instantiated and 
> started. Therefore, you should make sure to create only one instance and 
> share it across your application. One of the common mistakes, that makes your 
> application unresponsive, is to create a new instance for every connection.
> {quote}
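> In other words, the pattern the Netty documentation warns about, versus the 
> recommended one, boils down to something like the following minimal Java 
> sketch (the class names here are illustrative only, not Kudu code):
> {code:java}
> import io.netty.util.HashedWheelTimer;
> import io.netty.util.Timeout;
> import java.util.concurrent.TimeUnit;
> 
> class PerObjectTimer {
>   // Anti-pattern: every instance spawns its own timer and worker thread,
>   // which is exactly what the leak detector complains about when such
>   // objects are garbage-collected without HashedWheelTimer.stop().
>   private final HashedWheelTimer timer = new HashedWheelTimer();
> }
> 
> class SharedTimer {
>   // Recommended: a single timer instance shared across the application.
>   static final HashedWheelTimer TIMER = new HashedWheelTimer();
> 
>   static Timeout schedule(Runnable task, long delayMs) {
>     return TIMER.newTimeout(timeout -> task.run(), delayMs, TimeUnit.MILLISECONDS);
>   }
> }
> {code}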
> A better way of implementing the keep-alive feature for scanner objects in 
> the Kudu Java client would probably be to reuse the {{HashedWheelTimer}} 
> instance of the corresponding {{AsyncKuduClient}} instead of creating a new 
> timer (along with its worker thread) per {{AsyncKuduScanner}} object.  At 
> the very least, the {{HashedWheelTimer}} instance should be properly 
> released/shut down to avoid leaking resources (a running thread?) when 
> {{AsyncKuduScanner}} objects are garbage-collected.
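> As a sketch of what those two options could look like (the constructor 
> taking the client's timer and the {{sendKeepAliveRpc}} placeholder are 
> illustrative assumptions, not the actual Kudu client API):
> {code:java}
> import io.netty.util.HashedWheelTimer;
> import io.netty.util.Timeout;
> import io.netty.util.TimerTask;
> import java.util.concurrent.TimeUnit;
> 
> class ScannerKeepAliveSketch {
>   private final HashedWheelTimer timer;
>   private final boolean ownsTimer;          // true only if this object created the timer
>   private volatile Timeout keepAliveTimeout;
>   private volatile boolean closed;
> 
>   // Option 1: reuse the timer owned by the client instead of creating one per scanner.
>   ScannerKeepAliveSketch(HashedWheelTimer clientTimer) {
>     this.timer = clientTimer;
>     this.ownsTimer = false;
>   }
> 
>   // Option 2 variant: the scanner creates its own timer (current behavior),
>   // but must then release it in close().
>   ScannerKeepAliveSketch() {
>     this.timer = new HashedWheelTimer();
>     this.ownsTimer = true;
>   }
> 
>   void startKeepAlive(long periodMs) {
>     keepAliveTimeout = timer.newTimeout(new TimerTask() {
>       @Override
>       public void run(Timeout t) {
>         if (closed) {
>           return;
>         }
>         sendKeepAliveRpc();            // placeholder for the real keep-alive RPC
>         startKeepAlive(periodMs);      // re-arm the timeout
>       }
>     }, periodMs, TimeUnit.MILLISECONDS);
>   }
> 
>   // Release the timer when the scanner closes, so no worker thread is left
>   // behind for the leak detector to report.
>   void close() {
>     closed = true;
>     if (keepAliveTimeout != null) {
>       keepAliveTimeout.cancel();
>     }
>     if (ownsTimer) {
>       timer.stop();                    // stops the timer's worker thread
>     }
>   }
> 
>   private void sendKeepAliveRpc() { /* no-op in this sketch */ }
> }
> {code}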
> For example, here is how the leak is reported when running 
> {{TestKuduClient.testStrings}}:
> {noformat}
> 23:04:57.774 [ERROR - main] (ResourceLeakDetector.java:327) LEAK: HashedWheelTimer.release() was not called before it's garbage-collected. See https://netty.io/wiki/reference-counted-objects.html for more information.
> Recent access records:
> Created at:
>   io.netty.util.HashedWheelTimer.<init>(HashedWheelTimer.java:312)
>   io.netty.util.HashedWheelTimer.<init>(HashedWheelTimer.java:251)
>   io.netty.util.HashedWheelTimer.<init>(HashedWheelTimer.java:224)
>   io.netty.util.HashedWheelTimer.<init>(HashedWheelTimer.java:203)
>   io.netty.util.HashedWheelTimer.<init>(HashedWheelTimer.java:185)
>   org.apache.kudu.client.AsyncKuduScanner.<init>(AsyncKuduScanner.java:296)
>   org.apache.kudu.client.AsyncKuduScanner.<init>(AsyncKuduScanner.java:431)
>   org.apache.kudu.client.KuduScanner$KuduScannerBuilder.build(KuduScanner.java:260)
>   org.apache.kudu.client.TestKuduClient.testStrings(TestKuduClient.java:692)
>   sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   java.lang.reflect.Method.invoke(Method.java:498)
>   org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
>   org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
>   java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   java.lang.Thread.run(Thread.java:748)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
