Thanks Wei Chen and Giannis for your time,
> For starters, you need to properly size and estimate the number of
> partitions you will need on the Kafka side in order to process 1000+
> messages/second.
> The number of partitions also defines the maximum parallelism for
> the Flink job's Kafka read.
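A back-of-the-envelope version of that sizing advice can be sketched as below; note the per-partition throughput figure is an assumption for illustration and must be measured on your own brokers and consumers:

```java
// Rough Kafka partition-count sizing sketch. The per-partition consumer
// throughput (ASSUMPTION: ~500 msg/s in the example) is a placeholder;
// measure it on your own cluster before relying on the result.
public class PartitionSizing {
    // Minimum partition count needed to sustain the target message rate,
    // given the sustained throughput a single partition/consumer can handle.
    static int requiredPartitions(double targetMsgPerSec, double perPartitionMsgPerSec) {
        return (int) Math.ceil(targetMsgPerSec / perPartitionMsgPerSec);
    }

    public static void main(String[] args) {
        // 1000+ msg/s target from this thread; assumed 500 msg/s per partition.
        System.out.println(requiredPartitions(1000, 500)); // prints 2
    }
}
```

In practice you would size with headroom above this minimum, since the partition count also caps the maximum parallelism of the Flink Kafka source.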
Thank you,
so in other words, to have TM HA on k8s I have to configure [1], correct?
[1]
https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/ha/kubernetes_ha/#kubernetes-ha-services
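For reference, a minimal flink-conf.yaml sketch along the lines of the page in [1]; the cluster id and storage path below are placeholders, and exact key names should be checked against the docs for your Flink version:

```yaml
# Minimal Kubernetes HA sketch (see [1]); values are example placeholders.
kubernetes.cluster-id: my-cluster            # placeholder cluster id
high-availability: kubernetes
high-availability.storageDir: s3://flink/ha  # placeholder HA storage path
```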
On Sun, 17 Sep 2023 at 07:27, Chen Zhanghao
wrote:
> Hi Krzysztof,
>
> TM HA is taken charge by
The thing is that when I deployed an application cluster as in example
[1] without any extra configuration and then killed the TM, the submitted job
was moved to the "RESTARTING" state, and a new TM was created, after which the
job was running again. This is different behavior from what I see when I'm runn
Hi Kirti,
AFAIK, you should pay attention to how the filesystems are mounted on the
pod rather than what is configured as the tmp directory.
In common cases, a user may mount a filesystem with a small space (less than
30 GB) for the system and a filesystem with a large space (more than 200 GB) to
stor
Hi Chen,
I now see what you were trying to tell me.
The problem was on my end... sorry for that. The job I was using for the
session cluster had NoRestart() set as the restart strategy, whereas the
application cluster was executing a job with a "proper" restart strategy.
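For anyone hitting the same symptom: the behavioral difference above comes down to the restart strategy, which can also be set cluster-wide in flink-conf.yaml. A fixed-delay sketch (attempt count and delay are example values, not recommendations):

```yaml
# Restart the job up to 3 times, waiting 10 s between attempts (example values).
restart-strategy: fixed-delay
restart-strategy.fixed-delay.attempts: 3
restart-strategy.fixed-delay.delay: 10 s
```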
Thanks.
Krzysztof Chmielewski
Sun.
Hi, Emre
Thanks for driving this proposal. It looks cool. Intuitively, though, I
don't really see what type of compatibility issues it's trying to solve;
can you explain in a bit more detail? Is it solving compatibility issues
within the Flink project itself, or compatibility issues with the Flink
Hi, Karthick
It looks like a data-skew problem, and I think one of the easiest and
most effective ways to tackle this issue is to increase the number of
partitions and see how that works first, e.g. try expanding by 100 first.
Best,
Ron
On Sun, 17 Sep 2023 at 17:03, Karthick wrote:
> Thanks Wei Chen, Giannis for
Hello,
Checkpointing is enabled and works fine if the configured Parquet page size is
at least 64 bytes; otherwise an exception is thrown at the back end.
This looks to be an issue that is not handled by the file sink bulk writer?
Rgds,
Kamal
From: Feng Jin
Sent: 15 September 2023 04:14 PM
To: Kamal Mit
Thanks Liu Ron for the suggestion.
Can you please give any pointers/references for a custom partitioning
strategy? We are currently using murmur hashing with the device's unique ID.
It would be helpful if you could suggest any other strategies.
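One common alternative when a few device IDs are hot is key salting: append a small rotating suffix so one hot key fans out over several partitions. A minimal sketch; it uses String.hashCode() purely as a stand-in for Kafka's murmur2 hash, and the salt range of 4 is an assumed tuning knob, not a recommendation:

```java
import java.util.Set;
import java.util.TreeSet;

// Key-salting sketch to spread a hot device key across several partitions.
// String.hashCode() stands in for Kafka's murmur2 here; SALTS is an
// assumed example value to tune per workload.
public class SaltedKeys {
    static final int SALTS = 4; // sub-keys per hot device (assumed)

    // One device id now maps to SALTS distinct message keys.
    static String saltedKey(String deviceId, int recordSeq) {
        return deviceId + "#" + (recordSeq % SALTS);
    }

    // Illustrative hash-mod partitioner (not Kafka's actual partitioner).
    static int partitionFor(String key, int numPartitions) {
        return Math.floorMod(key.hashCode(), numPartitions);
    }

    public static void main(String[] args) {
        Set<Integer> partitions = new TreeSet<>();
        for (int seq = 0; seq < 100; seq++) {
            partitions.add(partitionFor(saltedKey("device-42", seq), 100));
        }
        // The single hot device now lands on up to SALTS partitions.
        System.out.println(partitions.size());
    }
}
```

The trade-off is that salting breaks per-device ordering and requires re-aggregating the sub-keys downstream (e.g. keying the Flink job by the original device ID), so it is worth trying only after simply adding partitions has not helped.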
Thanks and regards
Karthick.
On Mon, Sep 18, 2023 at 9: