Hi!

Thanks for diagnosing this - the fix you suggested is correct.

Can you still share some of the logs indicating where it fails? The reason
is that the fallback code path (using "hdfs://localhost:12345") should not
really try to connect to a local HDFS, but simply use this as a placeholder
URI to get the correct configuration for Hadoop's file system.
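
To illustrate (a minimal sketch, not the actual Flink internals): parsing
that URI and reading its scheme is pure string handling, so nothing should
ever connect to localhost:12345 at that point.

import java.net.URI;

// Hypothetical demo class, only to show that a placeholder URI involves
// no network I/O when it is merely parsed for its scheme/authority.
public class PlaceholderUriDemo {
    public static void main(String[] args) {
        URI placeholder = URI.create("hdfs://localhost:12345/");
        System.out.println(placeholder.getScheme());    // hdfs
        System.out.println(placeholder.getAuthority()); // localhost:12345
    }
}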

Best,
Stephan


On Wed, Jan 10, 2018 at 2:17 PM, jelmer <jkupe...@gmail.com> wrote:

> Hi, I am trying to convert some jobs from Flink 1.3.2 to Flink 1.4.0.
>
> But I am running into the issue that the bucketing sink will always try
> to connect to hdfs://localhost:12345/ instead of the HDFS URL I have
> specified in the constructor.
>
> If I look at the code at
>
> https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-filesystem/src/main/java/org/apache/flink/streaming/connectors/fs/bucketing/BucketingSink.java#L1125
>
>
> It tries to create the Hadoop file system like this:
>
> final org.apache.flink.core.fs.FileSystem flinkFs =
>     org.apache.flink.core.fs.FileSystem.get(path.toUri());
> final FileSystem hadoopFs = (flinkFs instanceof HadoopFileSystem) ?
>     ((HadoopFileSystem) flinkFs).getHadoopFileSystem() : null;
>
> But FileSystem.get will always return a SafetyNetWrapperFileSystem, so the
> instanceof check will never indicate that it is a Hadoop file system.
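>
> I.e. (a sketch of what I believe happens, not tested):
>
> final org.apache.flink.core.fs.FileSystem flinkFs =
>     org.apache.flink.core.fs.FileSystem.get(path.toUri());
> // flinkFs is a SafetyNetWrapperFileSystem wrapping the actual
> // HadoopFileSystem, so this check is false even for a real HDFS path:
> final boolean isHadoop = flinkFs instanceof HadoopFileSystem;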
>
>
> Am I missing something, or is this a bug? If so, what would be the correct
> fix? I guess replacing FileSystem.get with FileSystem.getUnguardedFileSystem
> would fix it, but I am afraid I lack the context to know whether that would
> be safe.
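>
> I.e. something like this (untested sketch):
>
> final org.apache.flink.core.fs.FileSystem flinkFs =
>     org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(path.toUri());
> final FileSystem hadoopFs = (flinkFs instanceof HadoopFileSystem) ?
>     ((HadoopFileSystem) flinkFs).getHadoopFileSystem() : null;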
>
