[ https://issues.apache.org/jira/browse/FLINK-21143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17272600#comment-17272600 ]
zhisheng commented on FLINK-21143:
----------------------------------

[~fly_in_gis] Yes, `/data/HDATA/yarn/local/filecache/40/flink-sql-connector-hbase-2.2_2.11-1.12.0.jar` is a local jar in the YARN local cache directory, and it is a PUBLIC local resource. The file is identical to `hdfs:///flink/composite-lib/flink-1.12.0/flink-sql-connector-hbase-2.2_2.11-1.12.0.jar`.

You can reproduce it with a test like this:

1. Do not put any connector jar in the Flink `lib/` directory.
2. Configure `yarn.provided.lib.dirs` in `flink-conf.yaml` and upload the Flink SQL HBase connector jar to that HDFS directory.
3. Start the SQL client, create an HBase table, and run a SELECT on it. The query fails with an exception like:

[ERROR] Could not execute SQL statement. Reason:
org.apache.flink.table.api.ValidationException: Could not find any factory for identifier 'hbase-2.2' that implements 'org.apache.flink.table.factories.DynamicTableFactory' in the classpath.

If I add `flink-sql-connector-hbase-2.2_2.11-1.12.0.jar` to `lib/`, the query runs. So I suspect the Flink job uses the `lib/` jars instead of the jars configured via `yarn.provided.lib.dirs`.

> [runtime] Flink job uses the lib jars instead of the `yarn.provided.lib.dirs` config jars
> -----------------------------------------------------------------------------------------
>
>                 Key: FLINK-21143
>                 URL: https://issues.apache.org/jira/browse/FLINK-21143
>             Project: Flink
>          Issue Type: Bug
>          Components: Deployment / YARN, Runtime / Configuration
>    Affects Versions: 1.12.0
>            Reporter: zhisheng
>            Priority: Major
>         Attachments: flink-deploy-sql-client-.log
>
>
> On Flink 1.12.0 I use the `yarn.provided.lib.dirs` config to speed up job startup, so I uploaded all jars to HDFS. But when I update the jars in HDFS (not in `flink-1.12.0/lib/`), newly submitted jobs still use the `lib/` jars instead of the updated HDFS jars.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
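A concrete sketch of the reproduction described in the comment, for readers trying it themselves. This is a hypothetical example, not taken from the issue: the HDFS directory reuses the path mentioned above, while the table schema, HBase table name, and ZooKeeper quorum are made-up placeholders. Assuming `flink-conf.yaml` contains `yarn.provided.lib.dirs: hdfs:///flink/composite-lib/flink-1.12.0` and no connector jar is present in `lib/`, the failing step looks like:

```sql
-- Hypothetical HBase table for the reproduction; the schema and the
-- connection options are placeholders, not from the issue.
CREATE TABLE hbase_demo (
  rowkey STRING,
  cf ROW<v STRING>,
  PRIMARY KEY (rowkey) NOT ENFORCED
) WITH (
  'connector' = 'hbase-2.2',          -- the identifier the factory lookup fails on
  'table-name' = 'default:demo',
  'zookeeper.quorum' = 'zk-host:2181'
);

-- With no HBase connector jar on the effective classpath, this SELECT
-- raises the ValidationException quoted in the comment.
SELECT * FROM hbase_demo;
```

Whether the exception appears then depends on which jars actually reach the classpath, which is the point in dispute: the jar is present in the `yarn.provided.lib.dirs` HDFS directory, yet the job appears to resolve factories only from `lib/`.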