Hi Emmanuel,
yes, Master HA is currently under development and only available in the 0.10
snapshot. AFAIK, it is almost but not completely done yet.
Best, Fabian
On Sep 10, 2015 01:29, "Emmanuel" wrote:
> is this a 0.10 snapshot feature only? I'm using 0.9.1 right now
>
>
> --
is this a 0.10 snapshot feature only? I'm using 0.9.1 right now
From: ele...@msn.com
To: user@flink.apache.org
Subject: RE: Flink HA mode
Date: Wed, 9 Sep 2015 16:11:38 -0700
Been playing with the HA...I find the UIs confusing here: in the dashboard on
one side I see 0 slots 0 taskmanagers, but a job running, while on the other
side I see my taskmanagers and slots but no jobs... Given that the UI is behind
a proxy and load-balanced to the JMs, I can't tell which is which
Yes, it says 'fork in run := true’, I had forgotten to change it to ‘fork :=
true’ to make it work with the test tasks too.
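For anyone hitting the same issue, the relevant build.sbt setting is roughly this (sbt 0.13-era syntax): setting `fork` at the project level applies to run and test tasks alike, whereas `fork in run := true` only forks the run task.

```scala
// build.sbt: fork a separate JVM for run *and* test tasks.
// `fork in run := true` would only affect `sbt run`.
fork := true
```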
On 9 Sep 2015, at 13:48, Robert Metzger <rmetz...@apache.org> wrote:
Damn. I saw this discussion too late. I think the "fork = true" is
documented here:
https://ci.apache.org/projects/flink/flink-docs-release-0.9/quickstart/scala_api_quickstart.html#alternative-build-tools-sbt
On Wed, Sep 9, 2015 at 2:46 PM, Giancarlo Pagano
wrote:
I’ve actually found the problem in the meanwhile, build.sbt was missing 'fork
:= true'.
Sorry about that.
Thanks,
Giancarlo
On 9 Sep 2015, at 11:43, Anwar Rizal <anriza...@gmail.com> wrote:
Can you share the build.sbt or the Scala file, and maybe a small portion of the
code?
Hi,
Currently, Flink does not support automatic scaling of the YARN containers.
There are certainly plans to add this feature:
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/add-some-new-api-to-the-scheduler-in-the-job-manager-tp7153p7448.html
Adding an API for manually starting a
I created a Jira for this: https://issues.apache.org/jira/browse/FLINK-2643
On Fri, 4 Sep 2015 at 13:01 Matthias J. Sax wrote:
> +1 for dropping
>
> On 09/04/2015 11:04 AM, Maximilian Michels wrote:
> > +1 for dropping Hadoop 2.2.0 binary and source-compatibility. The
> > release is hardly used
Can you share the build.sbt or the Scala file, and maybe a small portion of the
code?
On Wed, Sep 9, 2015 at 12:27 PM, Giancarlo Pagano
wrote:
Hi,
I’m trying to write a simple test project using Flink streaming and sbt. The
project is in Scala and it’s basically the sbt version of the Maven archetype.
I’m using the latest Flink 0.10-SNAPSHOT, building it for scala 2.11. I’ve
written a simple test that starts a local environment, create
Hi,
I have started a Flink YARN session using yarn-session.sh and the configuration
of number of YARN container seems to be pretty static.
Is it possible to have Flink adjust the number of containers depending on the
actual workload. E.g. stop containers that are idle for too long and start the
Hi Michele,
If I see that correctly, you are using the groupBy and groupReduce to
partition and group the data.
This does work, but you can do it even easier like this:
ds.partitionByHash(0).sortPartition(0, Order.ASCENDING).output(yourOF);
This will partition and sort the data on field 0 without
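To make the semantics of that one-liner concrete, here is a toy illustration in plain Scala (not the Flink API) of what `ds.partitionByHash(0).sortPartition(0, Order.ASCENDING)` does conceptually: records are routed to a partition by the hash of field 0, then each partition is sorted locally on field 0. The sample records and the partition count of 4 are made up for illustration.

```scala
// Toy simulation of hash-partition + local per-partition sort.
val records = Seq((3, "c"), (1, "a"), (7, "g"), (1, "b"), (4, "d"))
val numPartitions = 4

val partitions: Map[Int, Seq[(Int, String)]] =
  records
    .groupBy { case (key, _) => math.abs(key.hashCode) % numPartitions }
    .map { case (p, recs) => p -> recs.sortBy(_._1) } // sort is local only

// All records with the same key land in the same partition, and each
// partition is sorted on the key, but there is no *global* order.
assert(partitions.values.forall(p => p == p.sortBy(_._1)))
```

Note that this gives locally sorted partitions, not a totally ordered dataset, which is exactly the distinction discussed further down in this thread.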
Hi!
The order of tuples in stream may vary, depending on certain operations.
When windows are computed on "processing time" (sometimes called "stream
time"), then the result of the windowing depends on the speed of the tuple
streams. There are multiple possible outcomes of the computation.
Upon r
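A toy sketch (plain Scala, not Flink) of the non-determinism described above: the same logical elements, arriving at different wall-clock times, fall into different processing-time windows. The window length of 5 "ticks" and the arrival schedules are invented for illustration.

```scala
// Each element carries the wall-clock instant at which it arrived.
case class Arrival(value: String, processingTime: Long)

// Assign elements to fixed-size processing-time windows.
def windowContents(arrivals: Seq[Arrival], windowSize: Long): Map[Long, Seq[String]] =
  arrivals.groupBy(a => a.processingTime / windowSize)
          .map { case (w, as) => w -> as.map(_.value) }

// Same elements, two different arrival speeds:
val fast = Seq(Arrival("a", 1), Arrival("b", 2), Arrival("c", 3))
val slow = Seq(Arrival("a", 1), Arrival("b", 2), Arrival("c", 8)) // "c" delayed

// The window results differ, even though the logical stream is identical.
assert(windowContents(fast, 5) != windowContents(slow, 5))
```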
Hi, thanks Fabian,
this night I got rid of that problem in an other way (I convinced my professor
to add an “order" attribute, so the actual position in the output is useless)
now what I am going to do is this (if I understood it correctly from you
yesterday):
at first
ds.groupBy(0).reduceGroup
The only necessary information for the JobManager and TaskManager is to
know where to find the ZooKeeper quorum to do leader election and retrieve
the leader address from. This will be configured via the config parameter
`ha.zookeeper.quorum`.
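A minimal sketch of what the corresponding flink-conf.yaml entry might look like in the 0.10 snapshot. The quorum key is taken from the mail above; the host names and ports are placeholders, and any other HA-related keys are omitted here.

```yaml
# Sketch only: ZooKeeper quorum used for leader election and
# leader-address retrieval. Hosts/ports below are placeholders.
ha.zookeeper.quorum: zk-host1:2181,zk-host2:2181,zk-host3:2181
```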
On Wed, Sep 9, 2015 at 10:15 AM, Stephan Ewen wrote:
TL;DR is that you are right, it is only the initial list. If a JobManager
comes back with a new IP address, it will be available.
On Wed, Sep 9, 2015 at 8:35 AM, Ufuk Celebi wrote:
>
> > On 09 Sep 2015, at 04:48, Emmanuel wrote:
> >
> > my questions is: how critical is the bootstrap ip list in
Btw, there was a discussion about this problem a while back:
https://mail-archives.apache.org/mod_mbox/flink-dev/201506.mbox/%3ccadxjeyci9_opro4oqtzhvi-gifek6_66ybtjz_pb0aqop_n...@mail.gmail.com%3E
And here is the jira:
https://issues.apache.org/jira/browse/FLINK-2181
Best,
Gabor
2015-09-09 10:0
For your use case it would make more sense to partition and sort the data
on the same key on which you want to partition the output files, i.e.,
partitioning on key1 and sorting on key3 might not help a lot.
Any order is destroyed if you have to partition the data.
What you can try to do is to enf
Aljoscha and I are currently working on an alternative Windowing
implementation. That new implementation will support out-of-order event
time and release keys properly. We will hopefully have a first version to
try out in a week or so...
Greetings,
Stephan
On Wed, Sep 9, 2015 at 9:08 AM, Aljosc
Ok, that's a special case but the system still shouldn't behave that way.
The problem is that the grouped discretizer that is responsible for
grouping the elements into grouped windows is keeping state for every key
that it encounters. And that state is never released, ever. That's the
reason for t
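A toy illustration (plain Scala, not the actual Flink discretizer) of the per-key state leak described above: an operator that keeps an entry for every key it has ever seen and never releases it. With an unbounded key space, the state grows without bound. The key names and counts are invented for illustration.

```scala
import scala.collection.mutable

// Per-key state that is created on first sight of a key and never dropped.
val stateByKey = mutable.Map.empty[String, Long]

def process(key: String): Unit =
  stateByKey(key) = stateByKey.getOrElse(key, 0L) + 1

// A stream where most keys occur only once (e.g. session ids):
(1 to 1000).foreach(i => process(s"session-$i"))

// All 1000 keys are still resident, even though none will be seen again.
assert(stateByKey.size == 1000)
```

This is why releasing keys properly, as mentioned above for the new windowing implementation, matters for long-running streams.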