Hi,
I think timestamping results of a time window operator is essential.
Without timestamps in the results, it is not possible to execute two
time window operators one after the other.
Cheers,
Bruno
On 12.05.2015 18:30, Aljoscha Krettek wrote:
> Hi,
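To make Bruno's point concrete, here is a minimal sketch of two time window operators chained one after the other. It uses a much later version of the Flink DataStream API (not the API under discussion in this thread), and the toy input and names are purely illustrative; the second window can only be evaluated because each result of the first window carries a timestamp taken from the window that produced it.

import java.time.Duration;

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class ChainedWindowsSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Toy input: (key, event-time in milliseconds).
        DataStream<Tuple2<String, Long>> events = env
            .fromElements(Tuple2.of("a", 1_000L), Tuple2.of("a", 61_000L), Tuple2.of("b", 2_000L))
            .assignTimestampsAndWatermarks(
                WatermarkStrategy.<Tuple2<String, Long>>forBoundedOutOfOrderness(Duration.ofSeconds(5))
                    .withTimestampAssigner((event, ts) -> event.f1));

        // First window operator: count per key and minute.
        DataStream<Tuple2<String, Long>> perMinute = events
            .map(e -> Tuple2.of(e.f0, 1L))
            .returns(Types.TUPLE(Types.STRING, Types.LONG))
            .keyBy(e -> e.f0)
            .window(TumblingEventTimeWindows.of(Time.minutes(1)))
            .sum(1);

        // Second window operator over the results of the first one. This is only
        // possible because every element of perMinute is stamped with the
        // timestamp of the window that produced it.
        perMinute
            .keyBy(e -> e.f0)
            .window(TumblingEventTimeWindows.of(Time.hours(1)))
            .sum(1)
            .print();

        env.execute("chained windows sketch");
    }
}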
I think I saw it once, yes. But dismissed it as a fluke.
On Wed, May 13, 2015 at 1:13 AM, Stephan Ewen wrote:
> I have observed that a Flink-on-Tez test job stalls in two cases on the
> Travis CI server.
>
> https://travis-ci.org/StephanEwen/incubator-flink/jobs/62302207
>
> It looks like a shuff
@Robert: I have a little Storm experience. I will try to run some examples
on the cluster.
Peter
2015-05-12 11:40 GMT+02:00 Matthias J. Sax :
> Hi,
>
> some UnsupportedOperationExceptions are required, because the Storm
> interfaces are implemented but Flink cannot support that functionality.
> Some
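For illustration only, here is a rough sketch of the pattern Matthias describes; the class and method names are hypothetical and not the actual flink-storm-compatibility code. The Storm-facing interface has to be implemented so that topologies compile, but the methods Flink cannot back should fail loudly rather than silently do nothing.

import java.util.Map;

// Hypothetical adapter, not the real compatibility-layer class.
public class FlinkTopologyContextSketch {

    // Supported: this can be backed by information Flink actually has.
    public String getThisComponentId() {
        return "my-bolt";
    }

    // Not supported: Storm lets bolts register custom metrics at runtime,
    // and there is nothing in Flink this adapter could delegate that to.
    public void registerMetric(String name, Object metric, int bucketSizeInSecs) {
        throw new UnsupportedOperationException(
            "Metrics are not supported by the Storm compatibility layer");
    }

    // Not supported either: per-task state hooks have no Flink counterpart here.
    public Map<String, Object> getTaskData() {
        throw new UnsupportedOperationException(
            "Task data is not supported by the Storm compatibility layer");
    }
}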
That would be great, thank you!
Please write down everything you see and post it on the mailing list.
On Wed, May 13, 2015 at 9:42 AM, Szabó Péter
wrote:
> @Robert: I have a little Storm experience. I will try to run some examples
> on the cluster.
>
> Peter
>
> 2015-05-12 11:40 GMT+02:00 Matth
Robert Metzger created FLINK-2006:
Summary: TaskManagerTest.testRunJobWithForwardChannel:432
expected: but was:
Key: FLINK-2006
URL: https://issues.apache.org/jira/browse/FLINK-2006
Project: Flink
I think we can agree that real multi-user support in Flink (standalone) is
neither desirable, because there are already sophisticated solutions out
there (YARN or Mesos), nor feasible because it is a lot of work to get it
right.
At the current state of affairs, resource sharing between two users
s
Márton Balassi created FLINK-2007:
Summary: Initial data point in Delta function needs to be
serializable
Key: FLINK-2007
URL: https://issues.apache.org/jira/browse/FLINK-2007
Project: Flink
Robert Metzger created FLINK-2008:
Summary: PersistentKafkaSource is sometimes emitting tuples
multiple times
Key: FLINK-2008
URL: https://issues.apache.org/jira/browse/FLINK-2008
Project: Flink
This is a pretty central question, actually (timestamping the results of
windows). Let us kick off a separate thread for this...
On Wed, May 13, 2015 at 9:20 AM, Bruno Cadonna <
cado...@informatik.hu-berlin.de> wrote:
>
> Hi,
>
> I think timestamp
I think this is that thread :)
But as I said it is just a matter of what we want to add, and we can
already do it.
On Wed, May 13, 2015 at 11:37 AM, Stephan Ewen wrote:
> This is a pretty central question, actually (timestamping the results of
> windows). Let us kick off a separate thread for t
On first thought, the sessions and the multi-job vs. job queue question are
almost two separate issues.
Can you add the sessions without removing the concurrent jobs we currently
have?
On Wed, May 13, 2015 at 10:34 AM, Maximilian Michels wrote:
> I think we can agree that real multi-user suppor
Okay, I thought that this thread is about how to make timestamps and window
information of a record accessible to the user code.
This involves how to represent the information about which window a record is in,
whether to attach it to that record, etc.
The semantics of what happens if you have multiple win
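As one possible shape for making the window information accessible, here is a hedged sketch based on the much later ProcessWindowFunction API (nothing like this existed at the time of this thread): the window is handed to the user function as explicit metadata through a context object, and the function decides whether to attach it to its output records.

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.api.java.tuple.Tuple3;
import org.apache.flink.streaming.api.functions.windowing.ProcessWindowFunction;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.util.Collector;

// Counts (key, value) records per window and emits (key, windowStart, count),
// so that downstream operators can see which window a result belongs to.
public class CountPerWindow
        extends ProcessWindowFunction<Tuple2<String, Long>, Tuple3<String, Long, Long>, String, TimeWindow> {

    @Override
    public void process(String key,
                        Context ctx,
                        Iterable<Tuple2<String, Long>> elements,
                        Collector<Tuple3<String, Long, Long>> out) {
        long count = 0;
        for (Tuple2<String, Long> ignored : elements) {
            count++;
        }
        // The window a record belongs to is exposed to user code via the context.
        out.collect(Tuple3.of(key, ctx.window().getStart(), count));
    }
}

It would be plugged in as keyedStream.window(...).process(new CountPerWindow()); whether to additionally attach the window to every record, as discussed above, is then only a question of the output type.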
Yes, it should be possible to implement both independently.
On Wed, May 13, 2015 at 11:41 AM, Stephan Ewen wrote:
> On first thought, the sessions and the multi-job vs. job queue question are
> almost two separate issues.
>
> Can you add the sessions without removing the concurrent jobs we currentl
Hi,
Aljoscha originally wanted to discuss whether it is necessary
to make information about the event time of an element and information
about the windows in which it resides accessible to the user.
I guess we all agree that this information is neces
Hi Squirrels!
I think it is time we started finalizing the 0.9 release. The latest
milestone is a few weeks old and, given the sheer number of new features and
the big interest in Flink these days, we should publish the next release
rather soon in my opinion.
There are a few issues that we nee
+1 for cutting a release soon.
The planning document looks reasonable ..
On Wed, May 13, 2015 at 1:37 PM, Stephan Ewen wrote:
> Hi Squirrels!
>
> I think it is time we started finalizing the 0.9 release. The latest
> milestone is a few weeks old and, given the sheer number of new features a
Aljoscha Krettek created FLINK-2009:
Summary: Time-Based Windows fail with Chaining Disabled
Key: FLINK-2009
URL: https://issues.apache.org/jira/browse/FLINK-2009
Project: Flink
Issue Type
Stephan Ewen created FLINK-2010:
Summary: Add test that verifies that chained tasks are properly
checkpointed
Key: FLINK-2010
URL: https://issues.apache.org/jira/browse/FLINK-2010
Project: Flink
Stephan Ewen created FLINK-2011:
Summary: Improve error message when user-defined serialization
logic is wrong
Key: FLINK-2011
URL: https://issues.apache.org/jira/browse/FLINK-2011
Project: Flink
Hi Robert,
thank you for your reply. Yes, I read about it on the mailing list; very
nice that you maintain it as part of the Flink project now. I might switch to
using those Dockerfiles.
Thank you for that tip, I will look more into the TPC-* direction.
You are right, I expect some impact in reading a
For me personally, I would like to still keep the architecture image [1]
to show how Flink interacts with other systems.
This usually helps people see how Flink could fit into their
existing infrastructure.
- Henry
[1] http://flink.apache.org/img/WhatIsFlink.png
On Tue, May 12, 2015 at 1:5
That is a good thing to have, agreed.
On Wed, May 13, 2015 at 5:13 PM, Henry Saputra
wrote:
> For me personally, I would like to still keep the architecture image [1]
> to show how Flink interacts with other systems.
>
> This usually helps people see how Flink could fit into their
> existing infras
Hello,
Thanks @Stephan for the explanations. Though, with this information, I
still have no clue how to trace the error.
Now, the exception stack in *cluster mode* always looks like this
(even if I set env.setParallelism(1)):
org.apache.flink.runtime.client.JobExecutionException: Job execu
You are probably starting the system with very little memory, or you have
an immensely large job.
Have a look here, I think this discussion on the user mailing list a few
days ago is about the same issue:
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Memory-exception-td1206.h
Thanks Aljoscha. I was able to make the change as recommended and run the
entire test suite locally.
However, the Travis build is failing for the pull request:
https://github.com/apache/flink/pull/673.
It's a compilation failure:
[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-
@Robert, this seems like a problem with the Shading?
On Thu, May 14, 2015 at 5:41 AM, Lokesh Rajaram
wrote:
> Thanks Aljoscha. I was able to make the change as recommended and run the
> entire test suite locally.
> However, the Travis build is failing for the pull request:
> https://github.c
Hi Yi,
The problem here, as Stephan already suggested, is that you have a very
large job. Each complex operation (join, coGroup, etc.) needs a
share of memory.
In Flink, for the test cases at least, the TaskManagers' memory is restricted
to just 80 MB in order to run multiple tests in parallel on Tr
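For the memory problem itself, a rough sketch of how a local environment can be given more memory than the roughly 80 MB used on CI is shown below. The configuration keys ("taskmanager.memory.size", "taskmanager.heap.mb") are assumptions for the Flink versions of that era and only partially apply to an embedded local cluster, so check them against the configuration documentation of your version.

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class LocalMemorySketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Assumed pre-1.5 keys: managed memory (in MB) for the TaskManager, and its JVM heap.
        conf.setInteger("taskmanager.memory.size", 512);
        conf.setInteger("taskmanager.heap.mb", 1024);

        // Start a local mini cluster with this configuration instead of the defaults.
        StreamExecutionEnvironment env =
            StreamExecutionEnvironment.createLocalEnvironment(1, conf);

        env.fromElements("just", "a", "smoke", "test").print();
        env.execute("local memory sketch");
    }
}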