Re: hadoop-free hdfs config

2018-01-11 Thread Till Rohrmann
Thanks for trying it out and letting us know.

Cheers,
Till

Re: hadoop-free hdfs config

2018-01-11 Thread Oleksandr Baliev
Hi Till,

thanks for your reply and clarification! With RocksDBStateBackend, by the way, it is the same story; it looks like a wrapper over FsStateBackend:

01/11/2018 09:27:22 Job execution switched to status FAILING.
org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation
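
For context, a minimal sketch of how such a backend is usually wired up (the class name, namenode host and checkpoint path below are made up, and flink-statebackend-rocksdb is assumed to be on the job's classpath). RocksDB only keeps the working state on local disk; checkpoint files still go through Flink's FileSystem abstraction, which is where the hdfs scheme lookup above fails when the cluster itself has no Hadoop file system available:

import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RocksDbCheckpointSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // RocksDB stores the working state locally, but checkpoints are written
        // through Flink's FileSystem abstraction. For an hdfs:// URI that means
        // a Hadoop file system must be resolvable by the Flink runtime itself,
        // not just by classes bundled inside the user jar.
        RocksDBStateBackend backend =
                new RocksDBStateBackend("hdfs://namenode:8020/flink/checkpoints");
        env.setStateBackend(backend);

        env.fromElements(1, 2, 3).print();
        env.execute("rocksdb-hdfs-checkpoint-sketch");
    }
}

In that sense the "wrapper over FsStateBackend" observation matches what happens for checkpoint files: the hdfs:// path is resolved exactly the same way in both backends.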

Re: hadoop-free hdfs config

2018-01-10 Thread Till Rohrmann
Hi Sasha,

you're right that if you want to access HDFS from the user code only, it should be possible to use the Hadoop-free Flink version and bundle the Hadoop dependencies with your user code. However, if you want to use Flink's file system state backend as you did, then you have to start the Flink
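
A small sketch of the distinction described here, assuming a hypothetical namenode address and made-up paths: part (a) is resolved by the Flink runtime itself and therefore needs Hadoop on the cluster's classpath when it starts, while part (b) is plain user code and can rely on the Hadoop client bundled in the fat job jar:

import java.net.URI;

import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsAccessSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // (a) State backend: the hdfs:// URI is resolved by Flink's own
        //     FileSystem factories, so this only works if the cluster was
        //     started with Hadoop on its classpath.
        env.setStateBackend(new FsStateBackend("hdfs://namenode:8020/flink/checkpoints"));

        // (b) User code: talking to HDFS directly through the Hadoop client
        //     bundled in the user jar works even on a Hadoop-free cluster.
        FileSystem hdfs =
                FileSystem.get(URI.create("hdfs://namenode:8020"), new Configuration());
        boolean exists = hdfs.exists(new Path("/data/input"));
        System.out.println("input exists: " + exists);

        env.fromElements("hello", "flink").print();
        env.execute("hdfs-access-sketch");
    }
}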

hadoop-free hdfs config

2018-01-09 Thread Oleksandr Baliev
Hello guys,

I want to clarify something for myself: since Flink 1.4.0 allows using a Hadoop-free distribution and dynamic loading of Hadoop dependencies, I suppose that if I download the Hadoop-free distribution, start a cluster without any Hadoop, and then load any job jar which has some Hadoop dependencies (I used
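
Roughly, the setup being described could look like the sketch below (hypothetical class, host and paths; the Hadoop client is assumed to be shaded into the fat job jar). Hadoop is used only inside user functions, so the cluster itself can stay Hadoop-free:

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BundledHadoopJobSketch {

    /** Map function that checks a small flag file on HDFS using the Hadoop
     *  client bundled in the user jar (no Hadoop on the Flink cluster). */
    public static class TagWithFlag extends RichMapFunction<String, String> {
        private transient boolean flagPresent;

        @Override
        public void open(Configuration parameters) throws Exception {
            FileSystem hdfs = FileSystem.get(
                    java.net.URI.create("hdfs://namenode:8020"),
                    new org.apache.hadoop.conf.Configuration());
            flagPresent = hdfs.exists(new Path("/flags/enabled"));
        }

        @Override
        public String map(String value) {
            return value + (flagPresent ? " [flagged]" : "");
        }
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.fromElements("a", "b", "c").map(new TagWithFlag()).print();
        env.execute("bundled-hadoop-sketch");
    }
}

As the replies above explain, this user-code-only access is the case the Hadoop-free distribution covers; pointing a state backend at hdfs:// is a different matter, because that path is resolved by the Flink runtime itself.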