~[testFlink-1.0.jar:?]
> at org.apache.flink.streaming.api.operators.StreamFlatMap.processElement(StreamFlatMap.java:47)
> ~[flink-dist_2.11-1.12.4.jar:1.12.4]
> at org.apache.flink.streaming.runtime.tasks.ChainingOutput.pushToOperator(ChainingOutput.java:112)
> ~[flink-dist_2.11-1.12.4.jar:1.12.4]
> ... 32 more
>
> My questions:
>
> 1. What can I do to deal with this error?
> 2. If I cancel the job with a savepoint, will this error affect the savepoint?
>
> Best!
Hi White,
Can you describe your problem in more detail?
* What is your Flink version?
* How do you deploy the job (application / session cluster), and on what
(Kubernetes, Docker, YARN, ...)?
* What kind of job are you running (DataStream, Table/SQL, DataSet)?
Best, Fabian
On Mon., 20 Jul 2020 at 08:42
Hi,
When I use the REST API to cancel my job, nine of the TaskManagers are
cancelled quickly, but the remaining one stays in CANCELLING status. Can
someone show me how to solve this?
Thanks,
White
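For reference, the cancel call via the monitoring REST API is presumably the job-termination endpoint, roughly like this (host, port and job ID are placeholders for your setup):

curl -X PATCH "http://<jobmanager-host>:8081/jobs/<jobID>?mode=cancel"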
>> ... and producing to another Kafka topic.
>>
>> ./bin/flink cancel b89f45024cf2e45914eaa920df95907f
>> Cancelling job b89f45024cf2e45914eaa920df95907f.
>>
>> The program finished with the following exception:
org.apache.flink.util.FlinkException: Could not cancel job
b89f45024cf2e45914eaa920df95907f.
	at org.apache.flink.client.cli.CliFrontend.lambda$cancel$5(CliFrontend.java:603)
	at org.apache.flink.client.cli.CliFrontend.runClusterAction(CliFrontend.java:955)
	at ...
I see, thanks. Looks like it's better for us to switch to triggering
savepoint & cancel separately.
On Wed, Aug 22, 2018 at 1:26 PM Till Rohrmann wrote:
> Calling cancel-with-savepoint multiple times will trigger multiple
> savepoints. The first issued savepoint will complete first and then cancel ...
Calling cancel-with-savepoint multiple times will trigger multiple
savepoints. The first issued savepoint will complete first and then cancel
the job. Thus, the later savepoints might complete or not depending on the
exact timing. Since savepoints can flush results to external systems, I
would rec...
What I meant to ask was, does it do any harm to keep calling
cancel-with-savepoint until the job exits? If the job is already cancelling
with savepoint, I would assume that another cancel-with-savepoint call is
just ignored.
On Tue, Aug 21, 2018 at 1:18 PM Till Rohrmann wrote:
> Just a small addition ...
Just a small addition: a concurrent cancel call will interfere with the
cancel-with-savepoint command and directly cancel the job. So it is better
to use the cancel-with-savepoint call in order to take a savepoint and then
cancel the job automatically.
Cheers,
Till
On Thu, Aug 9, 2018 at 9:53 AM vino
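For context, the cancel-with-savepoint call discussed above corresponds to the CLI command that also appears later in this thread, roughly (job ID and target directory are placeholders; the directory can be omitted if a default savepoint directory is configured):

./bin/flink cancel -s [targetDirectory] <jobID>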
Hi Juho,
We use the REST client API: triggerSavepoint(). This API returns a
CompletableFuture, and then we call its get() method.
You can understand this as waiting for it to complete synchronously,
because cancelWithSavepoint also waits for the savepoint to complete
synchronously and then executes the cancel.
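As an illustration only, here is a rough sketch of that pattern against the ClusterClient interface of a reasonably recent Flink (around 1.9-1.14; method signatures differ across versions, and the class and variable names below are just placeholders):

import java.util.concurrent.CompletableFuture;

import org.apache.flink.api.common.JobID;
import org.apache.flink.client.program.ClusterClient;

public class SavepointThenCancel {
    // Trigger a savepoint, block until it completes, then cancel the job.
    static void savepointThenCancel(ClusterClient<?> client, JobID jobId,
                                    String savepointDir) throws Exception {
        // triggerSavepoint returns a CompletableFuture<String> carrying the savepoint path;
        // get() waits synchronously until the savepoint has finished.
        CompletableFuture<String> savepoint = client.triggerSavepoint(jobId, savepointDir);
        String savepointPath = savepoint.get();
        System.out.println("Savepoint written to " + savepointPath);
        // Cancel only after the savepoint is known to be complete.
        client.cancel(jobId).get();
    }
}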
Thanks for the suggestion. Is the separate savepoint triggering async?
Would you then separately poll for the savepoint's completion before
executing cancel? If additional polling is needed, then I would say that
for my purpose it's still easier to call cancel-with-savepoint and simply
ignore the r...
Hi Juho,
This problem does exist. As a temporary workaround, I suggest you separate
these two steps (see the sketch below):
1) trigger the savepoint separately;
2) execute the cancel command.
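For illustration, the two separate CLI steps would look roughly like this (job ID and target directory are placeholders; exact flags depend on your Flink version):

./bin/flink savepoint <jobID> [targetDirectory]
./bin/flink cancel <jobID>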
Hi Till, Chesnay:
Our internal environment and multiple users on the mailing list have
encountered similar problems.
In
I was trying to cancel a job with savepoint, but the CLI command failed
with "akka.pattern.AskTimeoutException: Ask timed out".
The stack trace reveals that ask timeout is 10 seconds:
Caused by: akka.pattern.AskTimeoutException: Ask timed out on
[Actor[akka://flink/user/jobmanager_0#106635280]] a
Hi Bruno,
the missing documentation for akka.client.timeout is an oversight on our
part [1]. I'll update it asap.
Unfortunately, at the moment there is no other way than to specify the
akka.client.timeout in the flink-conf.yaml file.
[1] https://issues.apache.org/jira/browse/FLINK-5700
Cheers,
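For illustration, the corresponding entry in flink-conf.yaml would be a line along these lines (the 600 s value is only an example):

akka.client.timeout: 600 s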
Maybe, though it would be good to be able to override it on the command line
somehow; I guess I could just change the Flink config.
Many thanks Yuri,
Bruno
On Wed, 1 Feb 2017 at 07:40 Yury Ruchin wrote:
> Hi Bruno,
>
> From the code I conclude that "akka.client.timeout" setting is what
> affects this. ...
Hi Bruno,
From the code I conclude that the "akka.client.timeout" setting is what affects
this. It defaults to 60 seconds.
I'm not sure why this setting is not documented, as well as many other
"akka.*" settings - maybe there are good reasons behind that.
Regards,
Yury
2017-01-31 17:47 GMT+0
Hi there,
I am trying to cancel a job and create a savepoint (i.e. flink cancel -s), but
it takes more than a minute to do that and then it fails due to the timeout.
However, it seems that the job is cancelled successfully and the savepoint is
made, but I can only see that through the dashboard.
Ca...
On Thu, May 5, 2016 at 1:59 AM, Bajaj, Abhinav
wrote:
> Or can we resume a stopped streaming job?
You can use savepoints [1] to take a snapshot of a streaming program from
which you can restart the job at a later point in time. This is independent
of whether you cancel or stop the program afterwards.
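To make that concrete, a typical sequence with the CLI looks roughly like this (job ID, savepoint path and jar name are placeholders; flags depend on your Flink version):

./bin/flink savepoint <jobID>
./bin/flink cancel <jobID>
./bin/flink run -s <savepointPath> your-job.jar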
Hi Abhi,
The difference between cancelling and stopping a (streaming) job is the
following:
On a cancel call, the operators in a job immediately receive a `cancel()`
method call to cancel them as
soon as possible.
If operators are not stopping after the cancel call, Flink will start
interrupting the task thread periodically until it stops.
Hi,
Can someone please clarify the difference between stopping and cancelling a job?
The stop documentation mentions it is for "streaming jobs only", but cancel also
works for streaming jobs.
Or can we resume a stopped streaming job ?
Thanks,
Abhi
Abhinav Bajaj
Thanks Matthias !
On Mon., 18 Jan 2016 at 20:51, Matthias J. Sax ()
wrote:
> Hi,
>
> currently, messages in flight will be dropped if a streaming job gets
> canceled.
>
> There is already WIP to add a STOP signal which allows for a clean
> shutdown of a streaming job. This should get merged soon and will be
> available in Flink 1.0.
Hi,
currently, messages in flight will be dropped if a streaming job gets
canceled.
There is already WIP to add a STOP signal which allows for a clean
shutdown of a streaming job. This should get merged soon and will be
available in Flink 1.0.
You can follow the JIRA and PR here:
https://issues.a
Hi,
When a streaming job is manually cancelled, what happens to the messages that
are in process? Does Flink's engine wait for tasks to finish processing the
messages they hold (somewhat like Apache Storm)? If not, is there a safe way
to stop streaming jobs?
Thanks in advance!
Best regards