[ https://issues.apache.org/jira/browse/FLINK-21143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17274278#comment-17274278 ]
Yang Wang commented on FLINK-21143:
-----------------------------------

I have some thoughts about the three scenarios you have tested.

# If the exception is caused by the old jar, then it is the expected behavior.
# I think this is also the expected behavior, because the SQL client tries to load the connector classes on the Flink client side.
# This is the expected behavior as well. Let me restate the mechanism of the shared lib: the jars in the provided lib directory are only used by the JobManager and TaskManager. The feature exists to skip the unnecessary uploading of the local lib/ and plugins/ directories from the client and to register the remote directory as a YARN public distributed cache instead. This means you still need the connector jars on the client side whenever the user main method executes on the client. The SQL client is in a similar situation.

> [runtime] Flink job uses the lib jars instead of the `yarn.provided.lib.dirs` config jars
> -----------------------------------------------------------------------------------------
>
> Key: FLINK-21143
> URL: https://issues.apache.org/jira/browse/FLINK-21143
> Project: Flink
> Issue Type: Bug
> Components: Deployment / YARN, Runtime / Configuration
> Affects Versions: 1.12.0
> Reporter: zhisheng
> Priority: Major
> Attachments: flink-deploy-sql-client-.log,
> image-2021-01-27-16-53-11-255.png, image-2021-01-27-16-55-06-104.png,
> image-2021-01-27-16-56-47-400.png, image-2021-01-27-16-58-43-372.png,
> image-2021-01-27-17-00-01-553.png, image-2021-01-27-17-00-38-661.png
>
> Flink 1.12.0: I used the `yarn.provided.lib.dirs` config to speed up job
> startup, so I uploaded all the jars to HDFS. But after I updated the jars in
> HDFS (not in flink-1.12.0/lib/), newly submitted jobs still used the lib/
> jars instead of the new HDFS jars.

-- This message was sent by Atlassian Jira (v8.3.4#803005)
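To make the mechanism concrete, a minimal configuration sketch is shown below. The HDFS paths are hypothetical placeholders, not from this issue; the key point is that `yarn.provided.lib.dirs` only affects what the JobManager and TaskManager containers see, while the client-side classpath is still resolved from the local installation.

```yaml
# flink-conf.yaml (or passed via -D on the CLI) -- illustrative paths only.
# The directories listed here are registered as a YARN public distributed
# cache, so the JM/TM containers fetch the jars from HDFS and the client
# skips uploading its local lib/ and plugins/ on every submission.
yarn.provided.lib.dirs: hdfs:///flink/1.12.0/lib;hdfs:///flink/1.12.0/plugins

# Note: when the user main method (or the SQL client) runs on the client,
# connector jars must ALSO be present locally (e.g. under <FLINK_HOME>/lib),
# because class loading on the client side does not consult the remote dirs.
```

In other words, updating only the jars in the remote HDFS directory does not change what the client loads; the local lib/ jars remain authoritative for any client-side execution.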