Ah, I think I can just use ./bin/jobmanager.sh
https://ci.apache.org/projects/flink/flink-docs-release-1.4/ops/deployment/cluster_setup.html#adding-a-jobmanager
Thanks!
On Thu, Feb 1, 2018 at 4:00 PM, Mu Kong wrote:
> Hi Tony,
>
> Thanks for your response!
> I would definitely check supervisord
Hi Tony,
Thanks for your response!
I would definitely check supervisord.
I wonder if there is a way that I can recover the killed JM and add it back
to the cluster by using one of the scripts in the *flink/bin/*
Thanks!
Best regards,
Mu
On Thu, Feb 1, 2018 at 3:50 PM, Tony Wei wrote:
> Hi
Hi Mu,
AFAIK, that is the expected behavior when you launch your cluster in
standalone mode. Flink HA guarantees that the standby JM will take over the
whole cluster. The illustration just says that the recovered JM will become another
standby machine, but recovering a single instance is not the Flink HA's
Hi all,
I have a Flink HA cluster with 2 job managers and a zookeeper quorum of 3
nodes.
My failed job manager didn't get recovered after I killed it.
Here is how I did it and what I've observed:
1. I started the HA cluster with start-cluster.sh
2. Job manager A got elected.
3. I killed job m
As of now Flink doesn't support this feature. A few days ago I came across the same
requirement.
On Thu, Feb 1, 2018 at 9:55 AM, Austin York wrote:
> I'm relatively new to Flink, so apologies in advance if this is a simple
> question.
>
> I have seen several mentions of an "upsert mode" for dynamic tab
I'm relatively new to Flink, so apologies in advance if this is a simple
question.
I have seen several mentions of an "upsert mode" for dynamic tables based
on a unique key in the Flink documentation and on the official Flink blog.
However, I do not see any examples / documentation regarding how t
Thanks a lot for the response, Jins.
Still, I couldn't figure out what is wrong. I am able to access the Flink Job Manager
UI from the running application on YARN, but I want to use monitoring through the REST
API, which I could not figure out ☹
Some more details FYR:
Below is what I receive when I submit jobs
8081 is the default port for a standalone cluster.
For a YARN Flink cluster, go to the list of running applications;
you can get to the Flink UI by clicking the ApplicationMaster link for the YARN
session.
Regards,
Jins
On Feb 1, 2018, at 8:06 AM, Raja.Aravapalli
Hi,
I have deployed a Flink cluster on Hadoop YARN and I am able to trigger and run
jobs.
But I am not able to get the running Flink cluster's Monitoring REST API to work!
As listed here @
https://ci.apache.org/projects/flink/flink-docs-release-1.3/monitoring/rest_api.html
I am trying to connect us
Hi all,
In unit tests that use the LocalFlinkMiniCluster, with Flink 1.4, I now get
this warning in my logs:
> 18/01/31 13:28:19 WARN query.QueryableStateUtils:76 - Could not load
> Queryable State Client Proxy. Probable reason: flink-queryable-state-runtime
> is not in the classpath. Please
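If the proxy is actually needed in these tests, I assume I just have to put the
module named in the warning on the test classpath. A sketch of the dependency I would
try if the build is sbt-based (the artifact name is taken from the warning; the
version and test scope are my assumptions):

  // build.sbt sketch: add the queryable-state runtime for tests only
  libraryDependencies += "org.apache.flink" %% "flink-queryable-state-runtime" % "1.4.0" % Test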
Hi Aljoscha,
Thinking a little bit more about this, although IBM Object storage is
compatible with Amazon's S3, it's not an eventually consistent file system,
but rather immediately consistent.
So we won't need the support for eventually consistent FS for our use case
to work, but we would only
Hi Tim,
"job_env" is a variable I passed to launch YARN application. I just want
to access it in my Flink application's main method. There is no
documentation on how to access customized job environment variables or
settings.
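For illustration, this is roughly what I am after in the main method (just a sketch;
it assumes the variable actually ends up in the JVM environment of the process that
runs main, which I am not sure about):

  object MyJob {
    def main(args: Array[String]): Unit = {
      // Sketch: read the launch-time variable from the process environment,
      // falling back to a default if it is not set.
      val jobEnv = Option(System.getenv("job_env")).getOrElse("default")
      println(s"job_env = $jobEnv")
      // ... build and execute the Flink job based on jobEnv ...
    }
  }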
Thanks,
Tao
Hi,
there is currently no workaround for this limitation if your operator uses
timers, but it is pretty high on our TODO list for release 1.6.
Best,
Stefan
> On 31.01.2018 at 09:29, Sofer, Tovi wrote:
>
> Hi Stefan,
>
> Thank you for the answer.
> So you mean that any window use in the str
Hi,
Unfortunately not yet, though it's high on my personal list of stuff that I
want to get resolved. It won't make it into 1.5.0 but I think 1.6.0.
Best,
Aljoscha
> On 31. Jan 2018, at 16:31, Edward Rojas wrote:
>
> Thanks Aljoscha. That makes sense.
> Do you have a more specific date for
Thanks Aljoscha. That makes sense.
Do you have a more specific date for the changes on BucketingSink and/or the
PR to be released?
I don't have this property in my locally running Flink cluster.
Which Flink version and deployment are you using? Are you sure this
property is not set in your flink-conf.yaml?
Regards,
Timo
On 1/31/18 at 7:51 AM, xiatao123 wrote:
In the web UI, I can see this information under JobManager.
Hi Juho,
I think that neither a) nor b) hold.
The reported times are wall-clock times (or processing time in Flink
terminology) when the checkpoint
started and when it finished.
What you want, if I understand correctly, is for these times to reflect the event
time of your pipeline. In other
wo
Hi Teena,
a potential fix for the issue has been merged:
https://issues.apache.org/jira/browse/FLINK-8489
It would be great if you could check if that fixes the problem and report
back.
Thank you,
Fabian
2018-01-23 20:04 GMT+01:00 Stephan Ewen :
> As mentioned in the issue, please check if usin
Hi Wouter,
you could use the Java classes as a workaround. If you take a look at
the implementation [1], you will see that Scala only wraps the Java
classes. I think you can implement the same. You can convert your result
stream back into a Scala stream by calling `new
o.a.f.streaming.api.sca
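A minimal sketch of that wrapping pattern (the Java-side call below is only a
hypothetical placeholder; the point is the `new ...scala.DataStream(...)` conversion):

  import org.apache.flink.streaming.api.datastream.{DataStream => JavaDataStream}
  import org.apache.flink.streaming.api.scala.{DataStream => ScalaDataStream}

  // Hypothetical placeholder for whatever you build through the Java classes,
  // e.g. an operator that is only available in the Java API.
  def resultFromJavaApi(): JavaDataStream[String] = ???

  // Wrap the Java stream back into the Scala API so the rest of the
  // pipeline can stay in Scala.
  val scalaResult: ScalaDataStream[String] = new ScalaDataStream(resultFromJavaApi())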
Hi Edward,
The problem here is that readTextFile() and writeAsText() use the Flink
FileSystem abstraction underneath, which will pick up the s3 filesystem from
opt. The BucketingSink, on the other hand, uses the Hadoop FileSystem
abstraction directly, meaning that there has to be some HadoopFil
Hello everyone and Happy New Year!
Regarding the Heavy Hitter tracking... I want to do it in a distributed manner
(a rough sketch of steps 1-2 follows below). Thus,
1 -- Round Robin the input stream to a number of parallel map instances (say
p = env.parallelism)
2 -- Each one of the p mappers maintains approximately the HH of its
correspondi
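In code, the rough sketch of steps 1-2 that I have in mind (LocalHeavyHitterMapper is
just a placeholder for the per-instance approximate counting logic):

  // Step 1: round-robin the input stream to the p parallel mapper instances.
  // Step 2: each instance keeps its own approximate heavy-hitter summary.
  val p = env.getParallelism
  val localHeavyHitters = inputStream
    .rebalance                            // round-robin redistribution
    .flatMap(new LocalHeavyHitterMapper)  // placeholder per-instance HH logic
    .setParallelism(p)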
Hi,
We are having a similar problem when trying to use Flink 1.4.0 with IBM
Object Storage for reading and writing data.
We followed
https://ci.apache.org/projects/flink/flink-docs-release-1.4/ops/deployment/aws.html
and the suggestion on https://issues.apache.org/jira/browse/FLINK-851.
We put
Hi,
Currently there is no way of using the RichAsyncFunction in Scala, which
means I can't get access to the RuntimeContext. I know someone is working
on this: https://issues.apache.org/jira/browse/FLINK-6756 , however in the
meantime is there a workaround for this? I'm particularly interested in
g
"Flink will only commit the kafka offsets when the data has been saved to S3"
-> No, you can check the BucketingSink code; that would also mean BucketingSink
depends on Kafka, which is not reasonable.
Flink stores checkpoints on the disk of each worker, not in Kafka.
(KafkaStream, the other streaming API prov
Hi:
I think so too! But I have a question: when should I add this logic to the
BucketingSink? And who executes this logic and ensures that it runs only once,
rather than in every parallel instance of the sink?
Best,
Ben
> On 31 Jan 2018, at 5:58 PM, Hung wrote:
>
> i
So there are three ways.
1. make your model a stream source
2. let the master read the model once, distribute it via the constructor, and update
it periodically (see the sketch below)
3. let each worker read the model and update it periodically (the one you mentioned)
Option 3 would be problematic if you scale a lot and use many parallelis
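For example, a rough sketch of option 2 (all type and class names here are
placeholders, and the periodic update part is left out):

  import org.apache.flink.api.common.functions.RichMapFunction

  // The client/"master" reads the model once; it is shipped to the workers
  // through the function's constructor, so it must be Serializable.
  val model: Model = ModelLoader.readFromStorage()  // placeholder loader

  class ScoringMapper(model: Model) extends RichMapFunction[Event, Scored] {
    override def map(event: Event): Scored = model.score(event)  // placeholder types
  }

  stream.map(new ScoringMapper(model))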
After you keyBy(), each of your windows has its own group of events.
Or do you want a global window?
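A short illustration of the difference (just a sketch; it assumes an event type with
a `key` field, a placeholder reduce, and processing-time windows):

  import org.apache.flink.streaming.api.scala._
  import org.apache.flink.streaming.api.windowing.time.Time

  // Keyed windows: after keyBy(), every key gets its own window of events.
  stream
    .keyBy(_.key)
    .timeWindow(Time.minutes(1))
    .reduce((a, b) => a)        // placeholder reduce per key and window

  // Non-keyed variant ("global" over all events): one window over the whole stream.
  stream
    .timeWindowAll(Time.minutes(1))
    .reduce((a, b) => a)        // placeholder reduce over all events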
Best,
Sendoh
It depends on how you partition your files. In my case I write one file per hour,
so I'm sure that file is ready after that hour period, in processing time.
Here, "ready to be read" means the file contains all the data for that hour
period.
If the downstream runs in a batch way, you may want to ensure th
Hi,
if the workarounds that Xingcan and I mentioned are not options for your
use case, then I think this might currently be the better option. But I would
expect some better support for stream joins in the near future.
Best,
Stefan
> Am 31.01.2018 um 07:04 schrieb Marchant, Hayden :
>
> Stefa
Hi:
How does BucketingSink generate a SUCCESS file when a directory is
finished, so that the downstream can judge when the directory can be read?
Best
Hi Stefan,
Thank you for the answer.
So you mean that any window use in the stream will result in synchronous
snapshotting?
When are you planning to fix this?
And is there a workaround?
Thanks again,
Tovi
From: Stefan Richter [mailto:s.rich...@data-artisans.com]
Sent: Tuesday, 30 January 2018 21:10
T