> I think it should use
> the Hadoop dependencies when trying to load the filesystem.
>
> Cheers,
> Till
>
> On Tue, Jan 9, 2018 at 10:46 PM, Oleksandr Baliev <
> aleksanderba...@gmail.com> wrote:
>
>> Hello guys,
>>
>> want to clarify for myself: since [...]

Hello guys,
want to clarify for myself: since Flink 1.4.0 allows using the Hadoop-free
distribution and dynamic loading of Hadoop dependencies, I suppose that if I
download the Hadoop-free distribution, start a cluster without any Hadoop, and
then load any job's jar which has some Hadoop dependencies (I
used
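
To make the scenario concrete, here is a minimal sketch (mine, not from the
thread) of the kind of job jar in question: user code referencing an hdfs://
path, with the Hadoop bits pulled in as job dependencies. The path and the
transformation are placeholders:

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class HadoopFsJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Resolving the "hdfs://" scheme happens when the FileSystem for
        // this URI is first loaded; per the reply above, Flink should use
        // whatever Hadoop dependencies it can see at that point.
        env.readTextFile("hdfs:///tmp/input")
           .map(String::toUpperCase)
           .print();

        env.execute("hadoop-free filesystem sketch");
    }
}
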
> If the input messages are ordered per
> input partition, that would guarantee their order in the output partitions.
>
> On Tue, Aug 29, 2017 at 1:54 AM, Oleksandr Baliev <
> aleksanderba...@gmail.com> wrote:
>
>> Hello,
>>
>> There is one Flink job which consumes from [...]
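
For context on why order can survive end to end in the reply above: with the
plain constructor (no custom partitioner), Flink's Kafka producers pin each
parallel sink subtask to one output partition and write records in emit
order. A minimal sketch, assuming the Kafka 0.10 connector; the topic name
and properties are placeholders:

import java.util.Properties;

import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

public class FixedPartitioningSink {
    public static FlinkKafkaProducer010<String> build() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");

        // With no custom partitioner, each sink subtask writes all of its
        // records to a single output partition (a fixed subtask-to-partition
        // assignment), so emit order is preserved within each partition.
        return new FlinkKafkaProducer010<>(
                "TOPIC_OUT", new SimpleStringSchema(), props);
    }
}
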
Hi,
it's there: https://ci.apache.org/projects/flink/flink-docs-release-1.3/api/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumerBase.html#setStartFromSpecificOffsets-java.util.Map-
just defined in FlinkKafkaConsumerBase.
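
For reference, the method boils down to a map keyed by (topic, partition); a
minimal sketch against the 1.3.x Kafka 0.10 connector, with made-up topic
name, offsets, and properties:

import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010;
import org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

public class SpecificOffsetsExample {
    public static FlinkKafkaConsumer010<String> build() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "my-group");

        // Each entry is the offset of the next record to read for that
        // partition; partitions without an entry fall back to the default
        // group-offsets startup behavior.
        Map<KafkaTopicPartition, Long> specificOffsets = new HashMap<>();
        specificOffsets.put(new KafkaTopicPartition("myTopic", 0), 23L);
        specificOffsets.put(new KafkaTopicPartition("myTopic", 1), 31L);

        FlinkKafkaConsumer010<String> consumer = new FlinkKafkaConsumer010<>(
                "myTopic", new SimpleStringSchema(), props);
        consumer.setStartFromSpecificOffsets(specificOffsets);
        return consumer;
    }
}
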
2017-08-30 16:34 GMT+02:00 sohimankotia :
> Hi,
>
> I se

Hello,
There is one Flink job which consumes from a Kafka topic (TOPIC_IN), simply
flatMaps / maps the data, and pushes it to another Kafka topic (TOPIC_OUT).
TOPIC_IN has around 30 partitions, the data is more or less sequential per
partition, and the job has parallelism 30. So in theory there should be a 1:1
mapping between input and output partitions.
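
A rough reconstruction of the job described above (my sketch, not the
poster's code; TOPIC_IN, TOPIC_OUT, and parallelism 30 come from the post,
everything else is assumed). With the same parallelism for source, flatMap,
and sink, the operators are connected by forward channels, so records never
change subtask, which is what the 1:1 partition mapping relies on:

import java.util.Properties;

import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;
import org.apache.flink.util.Collector;

public class TopicInToTopicOut {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(30); // one subtask per TOPIC_IN partition

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "topic-in-to-topic-out");

        env.addSource(new FlinkKafkaConsumer010<>(
                        "TOPIC_IN", new SimpleStringSchema(), props))
           // Placeholder for the job's actual flatMap / map logic.
           .flatMap(new FlatMapFunction<String, String>() {
               @Override
               public void flatMap(String value, Collector<String> out) {
                   out.collect(value);
               }
           })
           .addSink(new FlinkKafkaProducer010<>(
                        "TOPIC_OUT", new SimpleStringSchema(), props));

        env.execute("TOPIC_IN -> TOPIC_OUT");
    }
}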