Re: hadoop-free hdfs config

2018-01-11 Thread Oleksandr Baliev
…think it should use the Hadoop dependencies when trying to load the filesystem. Cheers, Till (replying to Oleksandr Baliev's message of Tue, Jan 9, 2018, 10:46 PM: "Hello guys, want to clarify for myself: since …")

hadoop-free hdfs config

2018-01-09 Thread Oleksandr Baliev
Hello guys, I want to clarify something for myself: since Flink 1.4.0 allows using a Hadoop-free distribution with dynamic loading of Hadoop dependencies, I suppose that if I download the Hadoop-free distribution, start the cluster without any Hadoop, and then load a job jar which has some Hadoop dependencies (I used …
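
For illustration, a minimal sketch of the kind of job this scenario implies: one whose code touches an hdfs:// path, so Hadoop filesystem classes must be resolvable at runtime, either bundled in the job jar or provided on the cluster's classpath. The namenode address and input path below are hypothetical.

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class HdfsReadJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Resolving the hdfs:// scheme needs Hadoop classes at runtime. On a
        // Hadoop-free Flink 1.4 distribution they have to come from the job jar
        // (bundled dependencies) or from an externally provided Hadoop classpath.
        env.readTextFile("hdfs://namenode:8020/data/input") // hypothetical path
           .print();

        env.execute("hdfs-read-on-hadoop-free-cluster");
    }
}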

Re: Kafka consumers are too fast for some partitions in "flatMap"-like jobs

2017-08-30 Thread Oleksandr Baliev
…input messages are ordered per input partition, that would guarantee their order in the output partitions. (Replying to Oleksandr Baliev's message of Tue, Aug 29, 2017: "Hello, there is one Flink job which consumes fro…")

Re: Kafka Offset settings in Flink Kafka Consumer 10

2017-08-30 Thread Oleksandr Baliev
Hi, it's there: https://ci.apache.org/projects/flink/flink-docs-release-1.3/api/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumerBase.html#setStartFromSpecificOffsets-java.util.Map- (it is just defined in FlinkKafkaConsumerBase). In reply to sohimankotia, 2017-08-30 16:34 GMT+02:00: "Hi, I se…"
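
For reference, a minimal sketch of starting a consumer from explicit offsets via setStartFromSpecificOffsets (the method linked above); the broker address, group id, topic name and offsets are made up for the example, and the Kafka 0.10 connector is assumed.

import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010;
import org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

public class SpecificOffsetsExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // hypothetical broker
        props.setProperty("group.id", "offset-demo");             // hypothetical group id

        FlinkKafkaConsumer010<String> consumer =
                new FlinkKafkaConsumer010<>("my-topic", new SimpleStringSchema(), props);

        // Start each listed partition from an explicitly chosen offset; partitions
        // not listed here fall back to the consumer's default start-up behaviour.
        Map<KafkaTopicPartition, Long> specificOffsets = new HashMap<>();
        specificOffsets.put(new KafkaTopicPartition("my-topic", 0), 23L);
        specificOffsets.put(new KafkaTopicPartition("my-topic", 1), 31L);
        consumer.setStartFromSpecificOffsets(specificOffsets);

        env.addSource(consumer).print();
        env.execute("start-from-specific-offsets");
    }
}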

Kafka consumers are too fast for some partitions in "flatMap"-like jobs

2017-08-29 Thread Oleksandr Baliev
Hello, there is one Flink job which consumes from a Kafka topic (TOPIC_IN), simply flatMaps / maps the data, and pushes it to another Kafka topic (TOPIC_OUT). TOPIC_IN has around 30 partitions, the data is more or less sequential per partition, and the job has parallelism 30. So in theory there should be a 1:1 mapping …
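
A minimal sketch of the topology being described, assuming the Kafka 0.10 connector; the broker address, group id and the pass-through flatMap body are placeholders, only the topic names and the parallelism of 30 come from the thread.

import java.util.Properties;

import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;
import org.apache.flink.util.Collector;

public class TopicInToTopicOutJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Parallelism matches the roughly 30 input partitions, so ideally each
        // source subtask reads exactly one partition (the 1:1 mapping above).
        env.setParallelism(30);

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // hypothetical
        props.setProperty("group.id", "topic-in-to-topic-out");   // hypothetical

        env.addSource(new FlinkKafkaConsumer010<>("TOPIC_IN", new SimpleStringSchema(), props))
           .flatMap(new FlatMapFunction<String, String>() {
               @Override
               public void flatMap(String value, Collector<String> out) {
                   // stateless pass-through; a real job would transform the record here
                   out.collect(value);
               }
           })
           .addSink(new FlinkKafkaProducer010<>("TOPIC_OUT", new SimpleStringSchema(), props));

        env.execute("topic-in-to-topic-out");
    }
}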