Márton Balassi created FLINK-1849:
-
Summary: Refactor StreamingRuntimeContext
Key: FLINK-1849
URL: https://issues.apache.org/jira/browse/FLINK-1849
Project: Flink
Issue Type: Improvement
Please vote on releasing the following candidate as Apache Flink version
0.9.0-milestone-1.
We've decided to create a release outside the regular 3 monthly release
schedule for the ApacheCon announcement and for giving our users a convenient
way of trying out our great new features.
--
I started to work on an in-memory merge on a record-timestamp attribute
for totally ordered streams. But I got distracted by the Storm
compatibility layer... I will continue to work on it when I find some
extra time ;)
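The in-memory merge idea can be sketched in a few lines (a hypothetical standalone example in Python, not Flink code; the record shape is invented): per-partition buffers that are already in timestamp order, because individual streams are FIFO, are merged into one totally ordered stream by comparing record timestamps.

```python
# Hypothetical standalone sketch (not Flink code): merge per-partition
# buffers, each already in timestamp order (individual streams are FIFO),
# into one totally ordered stream by comparing record timestamps.
import heapq

def merge_by_timestamp(*partitions):
    """Merge timestamp-sorted partitions into one totally ordered list."""
    # heapq.merge does a streaming k-way merge without materializing inputs,
    # which is what lets the buffers be consumed continuously.
    return list(heapq.merge(*partitions, key=lambda rec: rec[0]))

p1 = [(1, "a"), (4, "c")]   # (timestamp, payload), sorted within the partition
p2 = [(2, "b"), (5, "d")]
merged = merge_by_timestamp(p1, p2)
# merged is ordered by timestamp: [(1, 'a'), (2, 'b'), (4, 'c'), (5, 'd')]
```

Because the merge is streaming, each input buffer is drained continuously rather than held back, which is also the property that avoids the deadlock concern raised later in the thread.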
On 04/08/2015 03:18 PM, Márton Balassi wrote:
> +1 for Stephan's suggestion.
>
Fabian Hueske created FLINK-1848:
Summary: Paths containing a Windows drive letter cannot be used in
FileOutputFormats
Key: FLINK-1848
URL: https://issues.apache.org/jira/browse/FLINK-1848
Project: Flink
+1 for Stephan's suggestion.
If we would like to support event time and also sorting inside a window we
should carefully consider where to actually put the timestamp of the
records. If the timestamp is part of the record then it is more
straightforward, but in case we assign the timestamps in
Stephan Ewen created FLINK-1847:
---
Summary: Change Scala collect() method to return a Seq
Key: FLINK-1847
URL: https://issues.apache.org/jira/browse/FLINK-1847
Project: Flink
Issue Type: Bug
Hi Stephan,
that sounds reasonable to me.
Cheers,
Bruno
On 08.04.2015 15:06, Stephan Ewen wrote:
> With the current network layer and the agenda we have for
> windowing, we should be able to support windows on event time in
> the near future. In
With the current network layer and the agenda we have for windowing, we
should be able to support windows on event time in the near future.
Inside the window, you can sort all records by time and have a full
ordering. That is independent of the order of the stream.
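As a standalone illustration of sorting inside a window (plain Python, not the Flink windowing API; the window size and record shape are made up), a tumbling event-time window restores a full order on its records regardless of the order in which they arrived:

```python
# Standalone illustration (plain Python, not the Flink windowing API):
# records are assigned to tumbling event-time windows by timestamp, then
# fully ordered *within* each window, independent of arrival order.
from collections import defaultdict

WINDOW_SIZE = 10  # invented for the example

def window_and_sort(events, size=WINDOW_SIZE):
    """Group (ts, value) events into tumbling windows and sort each by ts."""
    windows = defaultdict(list)
    for ts, value in events:
        windows[ts // size].append((ts, value))  # assignment by event time only
    return {w: sorted(evs) for w, evs in windows.items()}

# Arrival order is arbitrary; the order inside each window is total.
arrived = [(12, "c"), (3, "a"), (7, "b"), (15, "d")]
windows = window_and_sort(arrived)
# windows[0] -> [(3, 'a'), (7, 'b')], windows[1] -> [(12, 'c'), (15, 'd')]
```

Note that only the intra-window order is total; nothing here imposes a global order across windows or partitions, matching the point made in the message above.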
How about this as a first go
Overall I think this is a nice approach, but let us then also discuss where
we would like to put these jars. Currently these jars are not in the lib
folder of the Flink distribution, which means that whenever users would
like to use them they have to package them with their usercode, which is a bit
in
Hi Stephan,
how much of CEP depends on fully ordered streams depends on the
operators that you use in the pattern query. But in general, they need
fully ordered events within a window or at least some strategies to
deal with out-of-order events.
If I
Markus Holzemer created FLINK-1846:
--
Summary: Sinks inside of iterations
Key: FLINK-1846
URL: https://issues.apache.org/jira/browse/FLINK-1846
Project: Flink
Issue Type: Improvement
Stephan Ewen created FLINK-1845:
---
Summary: NonReusingSortMergeCoGroupIterator uses
ReusingKeyGroupedIterator
Key: FLINK-1845
URL: https://issues.apache.org/jira/browse/FLINK-1845
Project: Flink
Exactly, each streaming connector would be a separate jar:
- stream-connector-kafka
- stream-connector-rabbitmq
- stream-connector-flume
- ...
On Tue, Apr 7, 2015 at 10:59 PM, Henry Saputra
wrote:
> Would this proposal also include packaging streaming connectors into
> separate source an
I agree, any ordering guarantees would need to be actively enabled.
How much of CEP depends on fully ordered streams? There is a lot you can do
with windows on event time, which are triggered by punctuations.
This is like a "soft" variant of the ordered streams, where order relation
occurs only w
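The punctuation idea can be sketched like this (plain Python, not any Flink API; the element encoding and window size are invented for the example): a punctuation with timestamp p asserts that no record with a timestamp at or below p will arrive later, so every window ending at or before p may fire, with its records sorted internally.

```python
# Minimal sketch of punctuation-triggered event-time windows (not Flink):
# a punctuation with timestamp p asserts that no record with ts <= p will
# arrive later, so every window ending at or before p can fire.
WINDOW = 10  # invented window size

def run(elements):
    """elements: ('rec', ts) or ('punct', ts); returns fired windows."""
    pending, fired = [], []
    for kind, ts in elements:
        if kind == "rec":
            pending.append(ts)          # arrival order is arbitrary
        else:  # punctuation: fire all windows whose end is <= ts
            ready = sorted(t for t in pending
                           if (t // WINDOW + 1) * WINDOW <= ts)
            pending = [t for t in pending
                       if (t // WINDOW + 1) * WINDOW > ts]
            fired.append(ready)         # order restored inside the window
    return fired

fired = run([("rec", 7), ("rec", 3), ("rec", 12),
             ("punct", 10), ("rec", 15), ("punct", 20)])
# fired == [[3, 7], [12, 15]]
```

This is the "soft" variant described above: the order relation is only established at punctuation boundaries, not continuously across the whole stream.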
This reasoning makes absolute sense. That's why I suggested that the
user should actively choose ordered data processing...
About deadlocks: Those can be avoided, if the buffers are consumed
continuously in an in-memory merge buffer (maybe with spilling to disk
if necessary). Of course, latency
Faye Beligianni created FLINK-1844:
--
Summary: Add Normaliser to ML library
Key: FLINK-1844
URL: https://issues.apache.org/jira/browse/FLINK-1844
Project: Flink
Issue Type: Improvement
This also happens for cluster setups.
On Wed, Apr 8, 2015 at 11:29 AM, Maximilian Michels (JIRA)
wrote:
> Maximilian Michels created FLINK-1843:
> -
>
> Summary: Job History gets cleared too fast
> Key: FLINK-1843
>
Here is the state in Flink and why we have chosen not to do global ordering
at the moment:
- Individual streams are FIFO; that means if the sender emits in order,
the receiver receives in order.
- When streams are merged (think shuffle / partition-by), then the streams
are not merged, but buffe
Saved the date! This sounds very exciting. Looking forward to hearing a lot
of nice talks and meeting a lot of great people!
On Tue, Apr 7, 2015 at 2:24 PM, Kostas Tzoumas wrote:
> Hi everyone,
>
> The folks at data Artisans and the Berlin Big Data Center are organizing
> the first physical conf
Hi Paris,
what's the reason for not guaranteeing global ordering across partitions
in the stream model? Is it the smaller overhead or are there any
operations not computable in a distributed environment with global
ordering?
In any case, I agree with
Maximilian Michels created FLINK-1843:
-
Summary: Job History gets cleared too fast
Key: FLINK-1843
URL: https://issues.apache.org/jira/browse/FLINK-1843
Project: Flink
Issue Type: Bug
Till Rohrmann created FLINK-1842:
Summary: ApplicationMaster for YARN FIFO tests does not shut down
properly
Key: FLINK-1842
URL: https://issues.apache.org/jira/browse/FLINK-1842
Project: Flink
I just wrote an example on my branch! You can find it at
https://github.com/fpompermaier/flink
On Apr 8, 2015 10:00 AM, "santosh_rajaguru" wrote:
> Is there any link or code samples or any sort of pointers for
> TableOutputFormat for HBase like HBaseReadExample?
>
Is there any link or code samples or any sort of pointers for
TableOutputFormat for HBase like HBaseReadExample?
--
View this message in context:
http://apache-flink-incubator-mailing-list-archive.1008284.n3.nabble.com/Hbase-OutputFormat-examples-tp5004.html
Sent from the Apache Flink (Incubator) mailing list archive at Nabble.com.