Hello
We are testing a simple use case where we read from Kafka -> process and
write to Kafka.
I have set the parallelism of the job to 3 and the parallelism of the process
function to 6. When I looked at the job graph in the Flink UI, I noticed that
between the source and the process function a rebalance step is inserted.
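For readers hitting the same question: when consecutive operators have different parallelism (3 source subtasks feeding 6 process subtasks here), Flink's default is to insert a round-robin rebalance edge so records are spread evenly. A toy stdlib-Python sketch of that redistribution (this is a simulation, not the Flink API; the `rebalance` helper is made up for illustration):

```python
from collections import Counter
from itertools import cycle

def rebalance(records, upstream_parallelism, downstream_parallelism):
    """Toy model of a round-robin rebalance edge between two operators."""
    # Split the records across the upstream (source) subtasks, roughly as a
    # Kafka source with 3 parallel readers would see them.
    upstream = [records[i::upstream_parallelism]
                for i in range(upstream_parallelism)]
    counts = Counter()
    for subtask_records in upstream:
        # Each upstream subtask forwards round-robin over all downstream subtasks.
        targets = cycle(range(downstream_parallelism))
        for _ in subtask_records:
            counts[next(targets)] += 1
    return counts

counts = rebalance(list(range(600)), 3, 6)
# All 6 downstream subtasks end up with a near-equal share of the 600 records.
print(dict(sorted(counts.items())))
```

The point of the sketch is only that the even spread requires a shuffle step, which is why the extra edge shows up in the job graph.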
Hello
I am trying to understand if Flink has a mechanism to automatically apply
backpressure by throttling operators ? For example, if I have a
process function that reads from a Kafka source and writes to a Kafka
sink, and the process function is slow, will the Kafka source be
automatically throttled ?
Hello
Wanted to understand the best practices around running Flink in Kubernetes,
especially from a continuous deployment perspective.
Is it possible to do a rolling update ?
Thanks
Suraj
ce API.
>
> [1]
> https://ci.apache.org/projects/flink/flink-docs-stable/dev/connectors/kafka.html#partition-discovery
> [2] https://issues.apache.org/jira/browse/FLINK-15703
>
> Suraj Puvvada wrote on Tue, Apr 21, 2020 at 6:01 AM:
>
>> Hello,
>>
>> I have a flink job that read
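For reference, the partition-discovery page linked as [1] above comes down to one consumer property: set a discovery interval and the FlinkKafkaConsumer picks up newly created partitions without a restart (the 30000 ms below is an arbitrary example value, not a recommended default):

```properties
# Enable periodic partition discovery on the FlinkKafkaConsumer properties.
flink.partition-discovery.interval-millis=30000
```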
Hello,
I have two JVMs that each run a LocalExecutionEnvironment using the same
consumer group.id.
I noticed that the consumers in each instance have all partitions assigned.
I was expecting that the partitions would be split across the consumers
in the two JVMs.
Any help on what might be happening ?
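The behaviour described above matches how the FlinkKafkaConsumer assigns partitions: it does not join Kafka's consumer-group rebalance protocol. Each job deterministically distributes all topic partitions across its own parallel subtasks, and the group.id is used mainly for committing offsets, so two independent jobs with the same group.id each read every partition. A toy sketch of such a static assignment (a made-up modulo rule for illustration, not Flink's actual code):

```python
def assign_partitions(num_partitions, num_subtasks):
    # Hypothetical simplification: partition p goes to subtask p % num_subtasks,
    # decided entirely inside the job, with no Kafka group coordination.
    assignment = {s: [] for s in range(num_subtasks)}
    for p in range(num_partitions):
        assignment[p % num_subtasks].append(p)
    return assignment

# Two independent JVMs, each running its own LocalExecutionEnvironment:
jvm_a = assign_partitions(8, 2)
jvm_b = assign_partitions(8, 2)
# Both jobs independently cover all 8 partitions rather than splitting them.
print(jvm_a)
print(jvm_b)
```

This is why the two instances do not behave like members of one Kafka consumer group.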
Hello,
We currently have a lot of jobs running in LocalExecutionEnvironment and
wanted to understand the limitations, and whether it is recommended to run in
this mode.
Appreciate your thoughts on this.
Thanks
Suraj
Hello,
I have a flink job that reads from a source topic that currently has 4
partitions and I need to increase the partition count to 8.
Do we need to restart the job for that to take effect ?
How does it work in case there is persistent state (like a window operator)
involved ?
Any design doc