[ https://issues.apache.org/jira/browse/FLINK-31818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17712921#comment-17712921 ]
SeungMin commented on FLINK-31818:
----------------------------------

[~JunRuiLi] Oh, I will test it, and if this modification is correct then I will create a PR to correct the documentation :D

> parsing error of 'security.kerberos.access.hadoopFileSystems' in flink-conf.yaml
> --------------------------------------------------------------------------------
>
>                 Key: FLINK-31818
>                 URL: https://issues.apache.org/jira/browse/FLINK-31818
>             Project: Flink
>          Issue Type: Bug
>          Components: Runtime / Configuration
>    Affects Versions: 1.17.0
>            Reporter: SeungMin
>            Priority: Major
>              Labels: bug
>             Fix For: 1.17.0
>
>
> There is a parsing error when I give two or more HDFS NameNode URIs, separated by commas, as the value of the key 'security.kerberos.access.hadoopFileSystems'.
>
> For example, I set this key and value in flink-conf.yaml as below:
> {code:java}
> security.kerberos.access.hadoopFileSystems: hdfs://hadoop-nn1.testurl.com:8020,hdfs://hadoop-nn2.testurl.com:8020
> {code}
>
> Then the slash "/" is missing from the second URI in the parsed value:
> {code:java}
> hdfs://hadoop-nn1.testurl.com:8020,hdfs:/hadoop-nn2.testurl.com:8020
> {code}
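>
> A minimal sketch of what might be happening (my assumption, not a confirmed root cause): if the whole comma-separated value ends up wrapped in a single {{org.apache.hadoop.fs.Path}} instead of being split into separate URIs, Hadoop's path normalization collapses the "//" of the second URI, which matches the parsed value above. The class name below is made up for illustration; it only needs hadoop-common on the classpath.
> {code:java}
> import org.apache.hadoop.fs.Path;
>
> // Hypothetical repro class; not part of Flink or Hadoop.
> public class MissingSlashRepro {
>     public static void main(String[] args) {
>         String value =
>                 "hdfs://hadoop-nn1.testurl.com:8020,hdfs://hadoop-nn2.testurl.com:8020";
>
>         // Treating the whole value as one path: Path normalization collapses
>         // "//" in the path component to "/", so the second URI is mangled.
>         System.out.println(new Path(value));
>         // prints: hdfs://hadoop-nn1.testurl.com:8020,hdfs:/hadoop-nn2.testurl.com:8020
>
>         // Splitting on the comma first keeps each URI intact.
>         for (String fs : value.split(",")) {
>             System.out.println(new Path(fs.trim()));
>         }
>         // prints: hdfs://hadoop-nn1.testurl.com:8020
>         //         hdfs://hadoop-nn2.testurl.com:8020
>     }
> }
> {code}
> If that is what happens, splitting the value (or using whichever list separator this option actually expects) before building the {{Path}} would keep both URIs intact.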
>
> Received error message is here.
> {code:java}
> Caused by: org.apache.flink.util.FlinkRuntimeException: java.io.IOException: Incomplete HDFS URI, no host: hdfs://hadoop-nn1.testurl.com:8020,hdfs:/hadoop-nn2.testurl.com:8020
>     at org.apache.flink.runtime.security.token.hadoop.HadoopFSDelegationTokenProvider.lambda$getFileSystemsToAccess$2(HadoopFSDelegationTokenProvider.java:168) ~[flink-dist-1.17.0.jar:1.17.0]
>     at java.util.ArrayList.forEach(ArrayList.java:1259) ~[?:1.8.0_362]
>     at org.apache.flink.runtime.security.token.hadoop.HadoopFSDelegationTokenProvider.getFileSystemsToAccess(HadoopFSDelegationTokenProvider.java:157) ~[flink-dist-1.17.0.jar:1.17.0]
>     at org.apache.flink.runtime.security.token.hadoop.HadoopFSDelegationTokenProvider.lambda$obtainDelegationTokens$1(HadoopFSDelegationTokenProvider.java:113) ~[flink-dist-1.17.0.jar:1.17.0]
>     at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_362]
>     at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_362]
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1966) ~[hadoop-common-2.10.0-khp-20210414.jar:?]
>     at org.apache.flink.runtime.security.token.hadoop.HadoopFSDelegationTokenProvider.obtainDelegationTokens(HadoopFSDelegationTokenProvider.java:108) ~[flink-dist-1.17.0.jar:1.17.0]
>     at org.apache.flink.runtime.security.token.DefaultDelegationTokenManager.lambda$obtainDelegationTokensAndGetNextRenewal$1(DefaultDelegationTokenManager.java:228) ~[flink-dist-1.17.0.jar:1.17.0]
>     ... 13 more
> Caused by: java.io.IOException: Incomplete HDFS URI, no host: hdfs://hadoop-bi-nn1.dakao.io:8020,hdfs:/hadoop-bi-nn2.dakao.io:8020
>     at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:156) ~[hadoop-hdfs-client-2.10.0-khp-20210414.jar:?]
>     at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3241) ~[hadoop-common-2.10.0-khp-20210414.jar:?]
>     at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:122) ~[hadoop-common-2.10.0-khp-20210414.jar:?]
>     at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3290) ~[hadoop-common-2.10.0-khp-20210414.jar:?]
>     at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3258) ~[hadoop-common-2.10.0-khp-20210414.jar:?]
>     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:471) ~[hadoop-common-2.10.0-khp-20210414.jar:?]
>     at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356) ~[hadoop-common-2.10.0-khp-20210414.jar:?]
>     at org.apache.flink.runtime.security.token.hadoop.HadoopFSDelegationTokenProvider.lambda$getFileSystemsToAccess$2(HadoopFSDelegationTokenProvider.java:163) ~[flink-dist-1.17.0.jar:1.17.0]
>     at java.util.ArrayList.forEach(ArrayList.java:1259) ~[?:1.8.0_362]
>     at org.apache.flink.runtime.security.token.hadoop.HadoopFSDelegationTokenProvider.getFileSystemsToAccess(HadoopFSDelegationTokenProvider.java:157) ~[flink-dist-1.17.0.jar:1.17.0]
>     at org.apache.flink.runtime.security.token.hadoop.HadoopFSDelegationTokenProvider.lambda$obtainDelegationTokens$1(HadoopFSDelegationTokenProvider.java:113) ~[flink-dist-1.17.0.jar:1.17.0]
>     at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_362]
>     at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_362]
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1966) ~[hadoop-common-2.10.0-khp-20210414.jar:?]
>     at org.apache.flink.runtime.security.token.hadoop.HadoopFSDelegationTokenProvider.obtainDelegationTokens(HadoopFSDelegationTokenProvider.java:108) ~[flink-dist-1.17.0.jar:1.17.0]
>     at org.apache.flink.runtime.security.token.DefaultDelegationTokenManager.lambda$obtainDelegationTokensAndGetNextRenewal$1(DefaultDelegationTokenManager.java:228) ~[flink-dist-1.17.0.jar:1.17.0]
>     ... 13 more
> {code}


--
This message was sent by Atlassian Jira
(v8.20.10#820010)