Hi, Shuyi
Thank you for your reply.
I read the blog post you suggested.
I understand that YARN doesn't count a container failure when the container
exits with one of the statuses PREEMPTED, ABORTED, DISK_FAILED, or
KILLED_BY_RESOURCEMANAGER, and that stopping the JobManager with the kill
command produces one of those statuses.
AFAIR, your manual kill won't count towards the max-attempt counter in
Hadoop's logic. Please see this post for more details:
http://johnjianfang.blogspot.com/2015/04/the-number-of-maximum-attempts-of-yarn.html
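For reference, the attempt cap that this counting logic feeds into is the YARN-side property below; the value shown is only illustrative, not from this thread:

```xml
<!-- yarn-site.xml (illustrative value): cluster-wide cap on
     ApplicationMaster attempts. Failures with the exit statuses
     listed above are excluded from this count. -->
<property>
  <name>yarn.resourcemanager.am.max-attempts</name>
  <value>4</value>
</property>
```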
On Sun, Jun 2, 2019 at 9:48 AM 新平和礼 wrote:
> Hi all,
>
>
> I'm a Flink newbie, and t
Hi,
I am working with nested JSON e.g.
{
  "document": {
    "_id": "qwery",
    "meetingstatus": 3,
    "city": 100,
    "users": {
      "created": "5c9243033eee61a14e5b",
      "assigned": "5c9496ad1e91f10f44f"
    }
  },
  "operation": "update"
}
Code usage:
tableEnv.connect(new Kafka()
.version("0.11")
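The snippet above is cut off in the archive. For context, a table-source configuration for the nested JSON shown earlier would typically look roughly like the sketch below in the Flink 1.7 descriptor API. The topic name, broker address, and registered table name are illustrative placeholders, not taken from the original message; `Types` here refers to `org.apache.flink.api.common.typeinfo.Types`.

```java
// Sketch only (requires the Flink Kafka/JSON connector dependencies):
// a nested JSON payload maps onto nested ROW types in the table schema.
tableEnv.connect(new Kafka()
        .version("0.11")
        .topic("meetings")                            // illustrative topic
        .property("bootstrap.servers", "host:9092"))  // illustrative broker
    .withFormat(new Json().deriveSchema())
    .withSchema(new Schema()
        .field("document", Types.ROW_NAMED(
            new String[] {"_id", "meetingstatus", "city", "users"},
            Types.STRING, Types.INT, Types.INT,
            Types.ROW_NAMED(new String[] {"created", "assigned"},
                Types.STRING, Types.STRING)))
        .field("operation", Types.STRING))
    .inAppendMode()
    .registerTableSource("Meetings");                 // illustrative name
```

Nested fields can then be addressed with dotted paths (e.g. `document.users.created`) in Table API or SQL queries.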
Hi all,
I'm a Flink newbie, trying to understand the Flink cluster's recovery
feature using Flink 1.7.2 and YARN 2.8.
To confirm the HA cluster's behavior, I created a Flink YARN session cluster
and, after deploying a job, repeatedly stopped the JobManager using the kill
command.
In that test, I set “yarn.applica
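For readers following the thread: the settings this kind of test exercises are typically configured in flink-conf.yaml roughly as below. The values are illustrative only, not the poster's actual configuration:

```yaml
# flink-conf.yaml (illustrative values)
high-availability: zookeeper
high-availability.zookeeper.quorum: zk-host:2181
high-availability.storageDir: hdfs:///flink/ha
# How many times YARN may restart the ApplicationMaster (JobManager):
yarn.application-attempts: 10
```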
Anyone using Kinesis Analytics?
It is managed Flink on AWS.
I am studying it now with the help of AWS, and it looks very promising. It
automatically creates and manages savepoints when deploying new versions and
when changing parallelism. I will post more later.
--
*Leo Woessner*
*Domain Engineering*
*P
I have created a process *outside* of Flink for this. It would be nice to
use Flink for it, though.
It is important to us that we only checkpoint after the records are
successfully saved in S3.
This is to ensure all records are saved during a node failure.
The process I wrote adds records to a file on dis
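For what it's worth, Flink's built-in StreamingFileSink (available since 1.6) provides similar semantics out of the box: part files are finalized and become visible only when the checkpoint that covers them completes. A minimal sketch, assuming a `DataStream<String>` named `stream` and an illustrative bucket path (requires the Flink S3 filesystem dependency):

```java
// Sketch only: records are staged as in-progress part files and
// committed on checkpoint completion, so a node failure cannot
// acknowledge records that never reached S3.
StreamingFileSink<String> sink = StreamingFileSink
    .forRowFormat(new Path("s3://my-bucket/records"),      // illustrative path
                  new SimpleStringEncoder<String>("UTF-8"))
    .build();
stream.addSink(sink);
```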
Hi Folks,
When I read the Flink client API code, the concept of a session is a little
vague and unclear to me. It looks like the session concept applies only in
batch mode (I see it in ExecutionEnvironment but not in
StreamExecutionEnvironment). But for local mode
(LocalExecutionEnvironment