Great to hear and thanks for letting us know.
Cheers,
Till
On Wed, Dec 19, 2018 at 5:39 PM Gerard Garcia wrote:
> We finally figured it out. We had a large value in the Kafka consumer
> option 'max.partition.fetch.bytes', which made the KafkaConsumer not
> consume at a balanced rate from all partitions.
We finally figured it out. We had a large value in the Kafka consumer option
'max.partition.fetch.bytes', which made the KafkaConsumer not consume at a
balanced rate from all partitions.
Gerard
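
For context, a minimal sketch of where that option is set when building the
Flink Kafka source. The broker address, group id, topic, and connector class
name are assumptions (the exact consumer class varies by Flink version), and
1048576 bytes is Kafka's documented default for this option:

import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class BalancedFetchConfig {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.setProperty("group.id", "example-group");           // placeholder group

        // Kafka's default is 1048576 (1 MiB). A much larger value lets a
        // single partition fill whole fetch responses, so the other
        // partitions fall behind -- the imbalance described above.
        props.setProperty("max.partition.fetch.bytes", "1048576");

        FlinkKafkaConsumer<String> consumer = new FlinkKafkaConsumer<>(
                "example-topic", new SimpleStringSchema(), props); // placeholder topic
    }
}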
Hi,

If I understand your problem correctly, there is a similar JIRA
issue FLINK-10348, reported by me. Maybe you can take a look at it.

Best,
Jiayi Liao

Original Message
Sender: Gerard Garcia <ger...@talaia.io>
Recipient: fearsome.lucidity <fearsome.lucid...@gmail.com>
Cc: user <u...@flink.apache.org>
Date: Monday, Oct 29, 2018 17:50
Subject: Re: Unbalanced Kafka consumer consumption

The stream is partitioned by key after ingestion at the finest granularity
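
A minimal sketch of that key-partitioning step, assuming a hypothetical
Event type and key field (the thread does not show the real element type):

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.KeyedStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KeyedIngestion {
    // Hypothetical record type standing in for the real stream elements.
    public static class Event {
        public String key;
        public long value;
        public Event() {} // POJO needs a no-arg constructor for Flink
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<Event> events = env.fromElements(new Event()); // placeholder source

        // Hash-partition by key so all records with the same key are routed
        // to the same parallel subtask (the "partitioned by key" step above).
        KeyedStream<Event, String> byKey = events.keyBy(e -> e.key);

        byKey.print();
        env.execute("keyBy example");
    }
}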
You can always shuffle the stream generated by the Kafka source
(dataStream.shuffle()) to evenly distribute records downstream.
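
For illustration, a minimal runnable sketch of that suggestion, assuming a
Kafka source producing strings (broker, group id, and topic are placeholders):

import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class ShuffleExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        props.setProperty("group.id", "example-group");           // placeholder

        DataStream<String> source = env.addSource(new FlinkKafkaConsumer<>(
                "example-topic", new SimpleStringSchema(), props));

        // shuffle() repartitions records randomly, so downstream operators
        // receive an even share regardless of source-partition skew;
        // rebalance() would distribute round-robin instead.
        DataStream<String> evenlyDistributed = source.shuffle();

        evenlyDistributed.print();
        env.execute("shuffle example");
    }
}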
On Fri, Oct 26, 2018 at 2:08 AM gerardg wrote:
> Hi,
>
> We are experiencing issues scaling our Flink application and we have
> observed that it may be because Kafka messages are not consumed at a
> balanced rate from all partitions.