Hi Team,
When a Flink job fails, we use an implementation of the JobListener
interface to get a handle on onJobSubmitted (job submitted) and
onJobExecuted (failure/success). But if my Flink application has a restart
strategy enabled, Flink automatically restarts only the specific tasks that
fail. How do we get a handle on those automatic task restarts?
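For reference, this is roughly how we register the listener today (a
minimal sketch using Flink's public JobListener API; the tiny pipeline is
just a placeholder). As far as I can tell, onJobExecuted fires only once
with the job's final result, so restarts performed by the restart strategy
never reach it:

import org.apache.flink.api.common.JobExecutionResult;
import org.apache.flink.core.execution.JobClient;
import org.apache.flink.core.execution.JobListener;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class JobListenerDemo {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.registerJobListener(new JobListener() {
            @Override
            public void onJobSubmitted(JobClient client, Throwable t) {
                if (t == null) {
                    System.out.println("Submitted job " + client.getJobID());
                }
            }

            @Override
            public void onJobExecuted(JobExecutionResult result, Throwable t) {
                // Fires once with the final outcome; per-task restarts
                // triggered by the restart strategy do not call back here.
                System.out.println(t == null ? "Job finished" : "Job failed: " + t);
            }
        });

        env.fromElements(1, 2, 3).print();
        env.execute("job-listener-demo");
    }
}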
>> Does it only happen for the operator with the parallelism set to 16? Or do you observe the
>> described behavior also on a job level?
>> I'm adding Chesnay to the thread as he might have more insights on this
>> topic.
>>
>> Best,
>> Matthias
>>
>> On Mon, Mar 22, 2021 at 6:31 PM Vigne
Hello Everyone,
Can someone help me with a solution?
I have a Flink job (2 task managers) with a job parallelism of 64 and 64
task slots.
I have the parallelism for one of the operators set to 16. This operator's
16 subtasks are not getting evenly distributed across the two task
managers; they end up concentrated on one of them.
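In case it is relevant: I came across the cluster.evenly-spread-out-slots
option, which (if I understand correctly) asks the scheduler to spread
slots across TaskManagers instead of filling one TaskManager first. A
minimal local sketch, assuming a Flink version where
getExecutionEnvironment(Configuration) exists; on a real cluster the option
would go into flink-conf.yaml:

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class SpreadSlotsDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Ask the scheduler to spread slots evenly over all registered
        // TaskManagers instead of packing one TaskManager first.
        conf.setString("cluster.evenly-spread-out-slots", "true");

        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment(conf);

        env.fromElements("a", "b", "c")
                .map(String::toUpperCase).setParallelism(16)
                .print();

        env.execute("spread-slots-demo");
    }
}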
I use the Flink Elasticsearch sink to bulk-insert records into ES.
I want to perform an operation after a record is successfully synced to
Elasticsearch. There is a failureHandler with which we can retry failures.
Is there a successHandler in the Flink Elasticsearch sink?
*Note*: I couldn't do the operation
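For context, the failure-side hook I am using looks roughly like this (a
minimal sketch; the 429 retry rule is only an illustration), which is why I
was hoping a success-side equivalent exists:

import org.apache.flink.streaming.connectors.elasticsearch.ActionRequestFailureHandler;
import org.apache.flink.streaming.connectors.elasticsearch.RequestIndexer;
import org.elasticsearch.action.ActionRequest;

public class RequeueFailureHandler implements ActionRequestFailureHandler {
    @Override
    public void onFailure(ActionRequest action, Throwable failure,
                          int restStatusCode, RequestIndexer indexer) throws Throwable {
        if (restStatusCode == 429) {
            // ES is throttling: re-queue the request for another bulk attempt.
            indexer.add(action);
        } else {
            // Anything else fails the sink (subject to the job's restart strategy).
            throw failure;
        }
    }
}

It is wired in via ElasticsearchSink.Builder#setFailureHandler.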
Hi,
I have a Flink pipeline which reads from a Kafka topic, does a map
operation (builds an Elasticsearch model), and sinks the result to Elasticsearch.
*Pipeline-1:*
Flink-Kafka-Connector-Consumer(topic1) (parallelism 8) -> Map (parallelism
8) -> Flink-Es-connector-Sink(es1) (parallelism 8)
Now I want som
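For reference, Pipeline-1 as described wires up roughly like this (a
minimal sketch; the broker address, group id, model building, and the ES
sink construction are placeholders, the last one replaced by a print sink
to keep the example self-contained):

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.PrintSinkFunction;
import org.apache.flink.streaming.api.functions.sink.SinkFunction;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class KafkaToEsPipeline {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        props.setProperty("group.id", "pipeline-1");              // placeholder

        env.addSource(new FlinkKafkaConsumer<>("topic1", new SimpleStringSchema(), props))
                .setParallelism(8)
                .map(KafkaToEsPipeline::toEsModel) // build the Elasticsearch model
                .setParallelism(8)
                .addSink(buildEsSink())            // stand-in for the es1 sink
                .setParallelism(8);

        env.execute("pipeline-1");
    }

    // Placeholders standing in for the real model/sink construction.
    private static String toEsModel(String raw) {
        return raw.toUpperCase();
    }

    private static SinkFunction<String> buildEsSink() {
        return new PrintSinkFunction<>();
    }
}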
My requirement is to send the data to a different ES sink based on the
data. Ex: if the data contains particular info, send it to sink1; otherwise
send it to sink2, etc. (basically, send it dynamically to any one sink
based on the data). I also want to set the parallelism separately for ES
sink1, ES sink2, etc.
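One standard pattern for this kind of routing (though not the only one) is
a ProcessFunction with side outputs, so every branch becomes its own stream
that can get its own sink and its own parallelism. A minimal sketch; the
routing rule and the print sinks are placeholders for the real ES sinks:

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.streaming.api.functions.sink.PrintSinkFunction;
import org.apache.flink.util.Collector;
import org.apache.flink.util.OutputTag;

public class DynamicEsRouting {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<String> input = env.fromElements("a:1", "b:2");

        // Records not matching the first branch go to a named side output.
        final OutputTag<String> toSink2 = new OutputTag<String>("to-sink2") {};

        SingleOutputStreamOperator<String> routed =
                input.process(new ProcessFunction<String, String>() {
                    @Override
                    public void processElement(String r, Context ctx, Collector<String> out) {
                        if (r.startsWith("a")) {        // placeholder routing rule
                            out.collect(r);             // main output -> sink1
                        } else {
                            ctx.output(toSink2, r);     // side output -> sink2
                        }
                    }
                });

        // Each branch gets its own sink and its own parallelism.
        routed.addSink(new PrintSinkFunction<>()).setParallelism(4);                        // ES sink1 stand-in
        routed.getSideOutput(toSink2).addSink(new PrintSinkFunction<>()).setParallelism(8); // ES sink2 stand-in

        env.execute("dynamic-es-routing");
    }
}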