ammu20-dev commented on code in PR #25656:
URL: https://github.com/apache/flink/pull/25656#discussion_r1895149338
##########
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/runtime/stream/sql/FunctionITCase.java:
##########
@@ -1542,6 +1542,21 @@ void testUsingAddJar() throws Exception {
                 "drop function lowerUdf");
     }

+    @Test
+    void testUsingAddJarWithCheckpointing() throws Exception {
+        env().enableCheckpointing(100);
+        tEnv().executeSql(String.format("ADD JAR '%s'", jarPath));
+        ClassLoader contextClassLoader = Thread.currentThread().getContextClassLoader();
+        testUserDefinedFunctionByUsingJar(
+                env ->
+                        env.executeSql(
+                                String.format(
+                                        "create function lowerUdf as '%s' LANGUAGE JAVA",
+                                        udfClassName)),
+                "drop function lowerUdf");
+        assertThat(contextClassLoader).isEqualTo(Thread.currentThread().getContextClassLoader());
+    }

Review Comment:
   > Btw are you sure that the issue was solved within [FLINK-36065](https://issues.apache.org/jira/browse/FLINK-36065)? Based on comments there it is not that clear

   Checking the git log of the files related to this issue, the only PR I could find is the one that updates the logic around compiling the job graph from the stream graph inside the JobManager.

   > Then the question would be why we are creating something new here instead of backporting the existing solution?

   I assume the fix for this issue in 2.0 is a side effect of that refactoring rather than a direct fix. As @JunRuiLee confirmed on https://issues.apache.org/jira/browse/FLINK-36065, there are no plans to backport the refactoring to lower versions, so I stuck with this approach.

-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
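The test above asserts that the thread's context classloader is unchanged after the job runs, i.e. that the runtime restores whatever loader it swapped in while compiling the job graph with user JARs on the classpath. A minimal sketch of that save-and-restore pattern (the `ClassLoaderScope` helper and its names are hypothetical, not Flink's actual internals):

```java
import java.util.concurrent.Callable;

// Sketch: run a body with a temporary context classloader and always
// restore the previous one, so the fix the test checks for holds even
// when the body throws.
public final class ClassLoaderScope {

    public static <T> T runWith(ClassLoader temporary, Callable<T> body) throws Exception {
        Thread current = Thread.currentThread();
        ClassLoader previous = current.getContextClassLoader();
        current.setContextClassLoader(temporary);
        try {
            return body.call();
        } finally {
            // Restore unconditionally; otherwise the user-code loader
            // leaks to later tasks scheduled on this thread.
            current.setContextClassLoader(previous);
        }
    }
}
```

A caller can verify the restoration the same way the test does, by capturing the loader before the call and comparing it afterwards.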