to send result to statsd
* psrc: source parallelism
* pJ2R: parallelism of the map operator (JsonRecTranslator)
* pAggr: parallelism of the process+timer operator (AggregationDuration)
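For context, a minimal sketch of where these three knobs attach in a Flink streaming job; the `env`, `psrc`, `pJ2R` and `pAggr` variables and the MySource/StatsdSink classes are hypothetical placeholders, while the operator names come from this thread:

```java
// Sketch only: psrc, pJ2R and pAggr are the parallelism knobs listed above;
// MySource and StatsdSink are hypothetical placeholders, not from the thread.
env.addSource(new MySource()).setParallelism(psrc)            // source parallelism
   .map(new JsonRecTranslator()).setParallelism(pJ2R)         // map operator
   .keyBy(r -> r.getKey())
   .process(new AggregationDuration()).setParallelism(pAggr)  // process+timer operator
   .addSink(new StatsdSink());                                // send result to statsd
```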
Thank you!
Yow
From: Jörn Franke
Sent: Saturday, June 16, 2018 4:46 PM
To:
less I get?
From: Siew Wai Yow
Sent: Saturday, June 16, 2018 5:09 PM
To: Jörn Franke
Cc: user@flink.apache.org
Subject: Re: Flink application does not scale as expected, please help!
Hi Jorn, the input data is 1 KB per record; in production it will have 10
billion records.
Hi Jorn, please find the source at https://github.com/swyow/flink_sample_git
Thank you!
From: Jörn Franke
Sent: Saturday, June 16, 2018 6:03 PM
To: Siew Wai Yow
Cc: user@flink.apache.org
Subject: Re: Flink application does not scale as expected, please help!
Can
because all happens in the same TM. When scaled to 32, the
performance drops, not even on par with the parallelism-16 case. Is this
something expected? Thank you.
Regards,
Yow
From: Fabian Hueske
Sent: Monday, June 18, 2018 3:47 PM
To: Siew Wai Yow
Cc: Jörn Fr
*additional info in bold.
From: Siew Wai Yow
Sent: Monday, June 18, 2018 3:57 PM
To: Fabian Hueske
Cc: Jörn Franke; user@flink.apache.org
Subject: Re: Flink application does not scale as expected, please help!
Hi Fabian,
We are using Flink 1.5.0. Any
Do you mind sharing your
thoughts?
Thank you guys!
From: Ovidiu-Cristian MARCU
Sent: Monday, June 18, 2018 6:28 PM
To: Fabian Hueske
Cc: Siew Wai Yow; Jörn Franke; user@flink.apache.org
Subject: Re: Flink application does not scale as expected, please help!
Hi a
Hi,
I get the following error when upgrading Flink from 1.3.2 to 1.5.0, using the
REST API to upload and run a jar.
{"errors":["Expected only one value [--KAFKA_IN PREBLN_O@192.168.56.120:9092,
192.168.56.121:9092, 192.168.56.122:9092/BHARTI_FL_PREBLN_O_124 --KAFKA_OUT
FX_AGGR_ASCII@192.168.56
have to replace all commas with some other delimiter.
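The workaround can be sketched in plain Java; the choice of `;` as the replacement delimiter and the broker list below are illustrative, not from the thread:

```java
public class DelimiterWorkaround {
    // Flink 1.5's REST API splits program arguments on commas, so a
    // comma-separated broker list is packed with ';' before submission
    // and restored inside the job's main().
    static String restoreCommas(String packedArg) {
        return packedArg.replace(';', ',');
    }

    public static void main(String[] args) {
        // Packed form survives the REST API's comma splitting.
        String packed = "192.168.56.120:9092;192.168.56.121:9092;192.168.56.122:9092";
        System.out.println(restoreCommas(packed));
        // prints 192.168.56.120:9092,192.168.56.121:9092,192.168.56.122:9092
    }
}
```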
On 19.06.2018 04:15, Siew Wai Yow wrote:
Hi,
I get the following error when upgrading Flink from 1.3.2 to 1.5.0, using the
REST API to upload and run a jar.
{"errors":["Expected only one value [--KAFKA_IN
PREBLN_O@192.1
and 2 and the second source to only talk
to mapper 3 and 4.
Cheers,
Till
From: Fabian Hueske
Sent: Tuesday, June 19, 2018 3:55 PM
To: Siew Wai Yow
Cc: Ovidiu-Cristian MARCU; Jörn Franke; user@flink.apache.org;
trohrm...@apache.org
Subject: Re: Flink applicati
Hi,
Regarding to Flink 1.5.0 REST API breaking change,
* The REST API to cancel a job was changed.
* The REST API to cancel a job with savepoint was changed.
I have a few dumb questions:
1. Any replacement for cancellation ONLY, without a savepoint? Only found
"/jobs/:jobid/savepoints
"cancel-job" : {
"type" : "boolean"
}
}
3. GET to /jobs/:jobid/savepoints/:triggerid
On 19.06.2018 17:40, Esteban Serrano wrote:
For #1, you need to use a PATCH request to "/jobs/:jobid"
On Tue, Jun 19, 2018 at 11:35 AM Siew Wai Yow
mailto:wai_...@hotmail.c
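Esteban's suggestion can be illustrated with the JDK 11 HttpClient; the host, port and job id below are placeholders:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class CancelJob {
    // Builds the PATCH request that cancels a job via the Flink 1.5+ REST API;
    // "localhost:8081" and the job id are placeholders.
    static HttpRequest cancelRequest(String hostPort, String jobId) {
        return HttpRequest.newBuilder()
                .uri(URI.create("http://" + hostPort + "/jobs/" + jobId))
                .method("PATCH", HttpRequest.BodyPublishers.noBody())
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = cancelRequest("localhost:8081", "7684be6004e4e955c2a558a9bc463f65");
        System.out.println(req.method() + " " + req.uri());
        // prints PATCH http://localhost:8081/jobs/7684be6004e4e955c2a558a9bc463f65
    }
}
```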
t requires this; other requests allow optional fields to simply
be omitted...
On 20.06.2018 06:12, Siew Wai Yow wrote:
Hi all,
It seems passing in target-directory is now a must for the checkpoints REST
API, and the status no longer responds with the savepoint directory. I can
pass it in, but the
a bug. Flink uses the checkpoint path's FileSystem to create the output stream
for the savepoint, but in your case the checkpoint & savepoint are not using
the same file system. A workaround is to use the same file system for both
checkpoint & savepoint.
Best, Sihua
On 06/21/2018
Hi all,
Is there any use case for running Flink in a virtualized environment? Is it a
good choice? Please share your thoughts. Thank you.
-Yow
Hi,
We configured RocksDB as the state backend, and the checkpoint dir persists to
HDFS. When the job runs, RocksDB automatically mounts to tmpfs /tmp, which
consumes memory.
RocksDBStateBackend rocksdb = new RocksDBStateBackend(new FsStateBackend("hdfs://..."), true);
env.setStateBackend(rocksdb);
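If the concern is RocksDB keeping its working files on tmpfs under /tmp, one option (a sketch against the RocksDBStateBackend API, not the thread's confirmed answer; the local path is a placeholder) is to point its local storage at a disk-backed directory:

```java
// Sketch: redirect RocksDB's local working directory away from /tmp (tmpfs).
// The HDFS URI stays elided as in the thread; the local path is a placeholder.
RocksDBStateBackend rocksdb = new RocksDBStateBackend(new FsStateBackend("hdfs://..."), true);
rocksdb.setDbStoragePath("/data/flink/rocksdb"); // a path on real disk
env.setStateBackend(rocksdb);
```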
Hi Yun Tang,
Thanks for your reply, this is exactly the answer we need.
Also thanks for @Congxian's reply.
-Siew Wai
From: Tang Cloud
Sent: Monday, July 9, 2018 11:05 AM
To: Siew Wai Yow; user@flink.apache.org
Subject: Re: flink rocksdb where to configure
Hi,
When one of the task managers is killed, the whole cluster dies. Is this
something expected? We are using Flink 1.4. Thank you.
Regards,
Yow
Hello,
May I know what happens to state stored in a Flink Task Manager when that Task
Manager crashes? Say the state storage is RocksDB; would that data transfer to
other running Task Managers so that the complete state data is ready for data
processing?
Regards,
Yow
ord only recover when TM2 is recovered?
Thanks.
From: Jamie Grier
Sent: Saturday, January 12, 2019 2:26 AM
To: Siew Wai Yow
Cc: user@flink.apache.org
Subject: Re: What happen to state in Flink Task Manager when crash?
Flink is designed such that local state is b
, and state updates since the last checkpoint
will be lost. Of course that lost state should be recreated as the job rewinds
and resumes."
Hi, Siew Wai Yow
When the job is running, the states are stored in the local RocksDB, Flink will
copy all the needed states to checkpointPath when doing a
same in Flink 1.7? The restart strategy is for the job, though, not for TM
failure.
Thanks!
Hi, Siew Wai Yow
Yes, David is correct: the TM must be recovered, and the number of TMs before
and after the crash must be the same.
In my last reply, I wanted to say that the states may not be on the same TM
after the
n Qiu
Sent: Sunday, January 13, 2019 9:39 AM
To: Siew Wai Yow
Cc: Jamie Grier; user
Subject: Re: Reply: Re: Reply: Re: What happen to state in Flink Task Manager
when crash?
Hi, Yow
I think there is another restart strategy in Flink: region failover [1], but I
could not find the documentation,
Thanks Dawid and Qiu!
Both of you clear all my doubts, perfect!
From: Dawid Wysakowicz
Sent: Monday, January 14, 2019 9:26 PM
To: Congxian Qiu; Siew Wai Yow
Cc: Jamie Grier; user@flink.apache.org
Subject: Re: What happen to state in Flink Task Manager when crash
Hi guys,
Can anyone share experience with an SFTP source? Should I use Hadoop's
SFTPFileSystem, or can I simply use any SFTP Java library in a user-defined
source?
Thanks.
Regards,
Yow
Hi guys,
I have a question regarding the title that needs your expertise:
1. I need to build an SFTP SourceFunction; may I know if Hadoop's
SFTPFileSystem is suitable?
2. I need to build an SFTP SinkFunction as well; may I know if the pre-defined
HDFS rolling file sink accepts an SFTP connection since
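For the SourceFunction half, a minimal shape could look like the sketch below. It assumes the JSch library for the SFTP connection (one possible choice, not a recommendation from this thread); the host, credentials and file path are placeholders, and no restart or checkpoint handling is included:

```java
// Sketch: a user-defined Flink source reading one file over SFTP via JSch.
// Host, credentials and path are placeholders; error handling is minimal.
public class SftpLineSource extends RichSourceFunction<String> {
    private volatile boolean running = true;

    @Override
    public void run(SourceContext<String> ctx) throws Exception {
        Session session = new JSch().getSession("user", "sftp.example.com", 22);
        session.setPassword("secret");                      // placeholder credential
        session.setConfig("StrictHostKeyChecking", "no");   // demo only
        session.connect();
        ChannelSftp sftp = (ChannelSftp) session.openChannel("sftp");
        sftp.connect();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(sftp.get("/in/data.txt")))) {
            String line;
            while (running && (line = reader.readLine()) != null) {
                ctx.collect(line);                          // emit one record per line
            }
        } finally {
            sftp.disconnect();
            session.disconnect();
        }
    }

    @Override
    public void cancel() { running = false; }
}
```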
Hi guys,
May I know if Flink supports IPv6?
Thanks
Yow
at 16:25, Siew Wai Yow
mailto:wai_...@hotmail.com>> wrote:
Hi guys,
May I know if Flink supports IPv6?
Thanks
Yow