inclined to develop a quick solution. I would like to hear some advice from the community.
https://issues.apache.org/jira/browse/FLINK-19335?focusedCommentId=17199927&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17199927
Best Regards,
Husky Zeng
When we submit a job that uses Hive UDFs, the job depends on the UDFs' jars and configuration files.
We have already stored those jars and configuration files in the Hive metastore, so we expect that Flink could obtain the files' HDFS paths through the hive-connector and then fetch the files from HDFS by those paths.
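For reference, here is a hypothetical sketch of how such a UDF is typically registered on the Hive side so that its jar path ends up in the metastore (the connection URL, function name, class name, and jar path below are all placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Hypothetical illustration: register a permanent Hive UDF whose jar
// location gets recorded in the Hive metastore.
public class RegisterHiveUdf {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:hive2://localhost:10000/default");
             Statement stmt = conn.createStatement()) {
            // Hive stores the class name and the jar's HDFS path in its
            // metastore; the request here is for Flink's hive-connector to
            // read that path back and fetch the jar automatically.
            stmt.execute(
                "CREATE FUNCTION my_upper AS 'com.example.udf.MyUpper' "
              + "USING JAR 'hdfs:///udf/my-udf.jar'");
        }
    }
}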
Hi Cristian,
I don't know whether it was deliberately designed to behave like this, so I have submitted an issue and am waiting for someone to respond.
https://issues.apache.org/jira/browse/FLINK-19154
I mean that checkpoints are usually dropped after the job is terminated by the user (unless they are explicitly configured as retained checkpoints). You could use "ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION" to keep your checkpoints when a failure occurs.
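For example, a minimal sketch of enabling retained checkpoints (the checkpoint interval is an arbitrary example value):

import org.apache.flink.streaming.api.environment.CheckpointConfig.ExternalizedCheckpointCleanup;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RetainedCheckpointExample {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(60_000L); // checkpoint every 60 seconds (example value)
        // Keep completed checkpoints after the job is cancelled, so they
        // can be used to restore the job later.
        env.getCheckpointConfig().enableExternalizedCheckpoints(
                ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);
        // ... build and execute the job as usual ...
    }
}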
When your ZooKeeper connection is lost, you could change the code in org.apache.flink.runtime.dispatcher.DispatcherGateway#shutDownCluster so that when the shutdown fails, the data is saved.
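As a hypothetical sketch of the pattern I have in mind (this is not Flink's actual code; the HaServices interface below is just a stand-in for Flink's HighAvailabilityServices):

import java.util.concurrent.CompletableFuture;

public class ShutdownSketch {
    // Stand-in for Flink's HighAvailabilityServices (hypothetical).
    interface HaServices {
        void closeAndCleanupAllData(); // delete HA data (clean termination)
        void close();                  // close connections but keep HA data
    }

    static void onClusterShutDown(CompletableFuture<Void> shutDown, HaServices ha) {
        shutDown.whenComplete((ignored, failure) -> {
            if (failure == null) {
                ha.closeAndCleanupAllData(); // terminated cleanly: safe to clean up
            } else {
                ha.close(); // e.g. lost ZooKeeper connection: retain the data
            }
        });
    }
}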
In fact, I'm wondering why it ignores the Throwable and deletes the HA data by default in every case. Could anyone help me with this question?
Best,
Husky Zeng