Hi,
We're trying to implement a module to help autoscale our pipeline, which
is built with Flink on YARN. According to the documentation, the suggested
procedure seems to be:
1. cancel the job with a savepoint
2. start a new job with an increased number of YARN TaskManagers and higher parallelism.
However, step 2 always gave errors.
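For reference, this is roughly what those two steps look like on our side with the Flink CLI in per-job YARN mode (the savepoint directory, job id, jar name, container count and parallelism below are just placeholders, and the exact flag names depend on the Flink version):

    # step 1: cancel the running job and write a savepoint
    bin/flink cancel -s hdfs:///flink/savepoints <jobId>

    # step 2: resubmit from that savepoint with more TaskManagers and higher parallelism
    bin/flink run -m yarn-cluster -yn 8 -ys 4 -p 32 \
        -s hdfs:///flink/savepoints/savepoint-XXXX \
        our-pipeline.jar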
Hi guys
I'm trying to run the official "Kafka010Example.scala", but unfortunately it
doesn't read from the input topic and write to the output topic as expected.
What am I missing or doing wrong? Any help or hints would be much appreciated.
Here's exactly what I did:
1) Started Kafka in a Docker container (spotify/kafka:latest)
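For context, what I'm expecting is essentially a Kafka-in/Kafka-out pass-through job along the lines of the minimal Java sketch below (this is not the example itself; the topic names, bootstrap server and group id are placeholders for the values I pass on the command line):

    import java.util.Properties;

    // note: in older Flink versions SimpleStringSchema lives in
    // org.apache.flink.streaming.util.serialization instead
    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010;

    public class KafkaPassThrough {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            Properties props = new Properties();
            props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
            props.setProperty("group.id", "test-group");              // placeholder

            // read raw strings from the input topic ...
            DataStream<String> lines = env.addSource(
                    new FlinkKafkaConsumer010<>("test-input", new SimpleStringSchema(), props));

            // ... and write them unchanged to the output topic
            lines.addSink(new FlinkKafkaProducer010<>("test-output", new SimpleStringSchema(), props));

            env.execute("Kafka 0.10 pass-through");
        }
    }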
KeySelector was exactly what I needed. Thanks a lot.
I modified my code as follows, and now it works:
DataStream LCxAccStream = env
        .addSource(new FlinkKafkaConsumer010<>("LCacc", new CustomDeserializer(), properties))
        .setParallelism(4);
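And for completeness, the keying that KeySelector enabled looks roughly like this; LcAccEvent and getDeviceId() are just stand-ins for whatever CustomDeserializer actually produces:

    import org.apache.flink.api.java.functions.KeySelector;
    import org.apache.flink.streaming.api.datastream.KeyedStream;

    // assumes LCxAccStream is a DataStream<LcAccEvent>, where LcAccEvent and
    // getDeviceId() are hypothetical stand-ins for the real event type and key field
    KeyedStream<LcAccEvent, String> keyed = LCxAccStream.keyBy(
            new KeySelector<LcAccEvent, String>() {
                @Override
                public String getKey(LcAccEvent event) {
                    return event.getDeviceId(); // key the stream by a field of the event
                }
            });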