Thanks for the great suggestion.
+1 for this proposal.
Regards,
Chiwan Park
> On May 13, 2016, at 1:44 AM, Nick Dimiduk wrote:
>
> For what it's worth, this is very close to how HBase attempts to manage the
> community load. We break out components (in Jira), with a list of named
> component maintainers.
Please create a JIRA issue for this and send the PR with the JIRA issue number.
Regards,
Chiwan Park
> On May 12, 2016, at 7:15 PM, Flavio Pompermaier wrote:
>
> Do I need to open also a Jira or just the PR?
>
> On Thu, May 12, 2016 at 12:03 PM, Stephan Ewen wrote:
>
>> Yes, please open a pull request for that.
Hi, I am trying to use the flink-kafka-connector and I notice that every time I
restart my application it re-reads the last message on the Kafka topic. So if
the latest offset on the topic is 10, then when the application is restarted,
Kafka will re-read message 10. Why is this the behavior? I w
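One common cause of this pattern (an assumption about the setup here, not something confirmed from the Flink sources) is an off-by-one in offset bookkeeping: Kafka's convention is that the committed offset is the offset of the *next* record to read, so a consumer that commits the offset of the *last* record it read will see that record again after a restart. A minimal plain-Java sketch of the difference (class and method names are illustrative, not Flink or Kafka API):

```java
import java.util.ArrayList;
import java.util.List;

public class OffsetDemo {
    // Simulate resuming a consumer from a committed offset over a topic
    // that holds messages at offsets 0..total-1.
    static List<Integer> readFrom(int committedOffset, int total) {
        List<Integer> read = new ArrayList<>();
        for (int offset = committedOffset; offset < total; offset++) {
            read.add(offset);
        }
        return read;
    }

    public static void main(String[] args) {
        int total = 11;           // messages at offsets 0..10
        int lastRead = total - 1; // the app read up to offset 10 before restart
        // Committing the *last-read* offset replays message 10 on restart:
        System.out.println(readFrom(lastRead, total));     // [10]
        // Committing the *next* offset (last-read + 1) replays nothing:
        System.out.println(readFrom(lastRead + 1, total)); // []
    }
}
```

Which of the two the consumer commits determines whether the restart behavior above is expected or a bug.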
FYI the brew formula has been updated to 1.0.3.
$ brew info apache-flink
apache-flink: stable 1.0.3, HEAD
Scalable batch and stream data processing
https://flink.apache.org/
Not installed
From:
https://github.com/Homebrew/homebrew-core/blob/master/Formula/apache-flink.rb
> On May 12, 2016, at 1
Eron Wright created FLINK-3903:
---
Summary: Homebrew Installation
Key: FLINK-3903
URL: https://issues.apache.org/jira/browse/FLINK-3903
Project: Flink
Issue Type: Task
Components: Documentation
Hi,
If it just requires implementing a custom operator (I mean, it does not
require changes to the network stack or other engine-level changes), I can try
to implement it, since I have been working on the optimizer and plan
generation for a month. Also, we are going to implement our ETL framework on
Flink, and this kind
For what it's worth, this is very close to how HBase attempts to manage the
community load. We break out components (in Jira), with a list of named
component maintainers. Actually, having components alone has given a big
bang for the buck because, when properly labeled, it makes it really easy
for p
Ufuk Celebi created FLINK-3902:
--
Summary: Discarded FileSystem checkpoints are lingering around
Key: FLINK-3902
URL: https://issues.apache.org/jira/browse/FLINK-3902
Project: Flink
Issue Type: Bug
Flavio Pompermaier created FLINK-3901:
-
Summary: Create a RowCsvInputFormat to use as default CSV IF in
Table API
Key: FLINK-3901
URL: https://issues.apache.org/jira/browse/FLINK-3901
Project: Flink
+1
The ideas seem good and the proposed number of components seems reasonable.
With this, we should then also clean up the JIRA to make it actually usable.
On Thu, 12 May 2016 at 18:09 Stephan Ewen wrote:
> All maintainer candidates are only proposals so far. No indication of lead
> or anything
Flavio Pompermaier created FLINK-3900:
-
Summary: Set nullCheck=true as default in TableConfig
Key: FLINK-3900
URL: https://issues.apache.org/jira/browse/FLINK-3900
Project: Flink
Issue Ty
All maintainer candidates are only proposals so far. No indication of lead
or anything so far.
Let's first see if we agree on the structure proposed here, and if we take
the components as suggested here or if we refine the list.
Am 12.05.2016 17:45 schrieb "Robert Metzger" :
> tl;dr: +1
>
> I also like the proposal a lot.
Hi,
I agree that this would be very nice. Unfortunately, Flink only allows
one output from an operation right now. Maybe we can extend this somehow
in the future.
Cheers,
Aljoscha
On Thu, 12 May 2016 at 17:27 CPC wrote:
> Hi Gabor,
>
> Yes, functionally this helps. But in this case I am processing an element
tl;dr: +1
I also like the proposal a lot. Our community is growing at quite a fast
pace and we need to have some structure in place to still keep track of
everything going on.
I'm happy to see that the proposal mentions cleaning up our JIRA. This is
something that has been annoying me for quite a
Hi Gabor,
Yes, functionally this helps. But in this case I am processing each element
twice and sending the whole data set to two different operators. What I am
trying to achieve is DataStream split-like functionality, or a little bit
more:
In a filter-like scenario I want to do the below pseudo operation:
Hello,
You can split a DataSet into two DataSets with two filters:
val xs: DataSet[A] = ...
val split1: DataSet[A] = xs.filter(f1)
val split2: DataSet[A] = xs.filter(f2)
where f1 and f2 are true for those elements that should go into the
first and second DataSets respectively. So far, the splits
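The same two-filter idea can be sketched in plain Java on an in-memory list (a runnable analogy, not the Flink DataSet API; names are illustrative). It also makes the cost visible: every element is evaluated by both predicates, so the input is effectively processed twice:

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class TwoFilterSplit {
    // Split one input into two outputs with two independent filters.
    // Note that each element is tested by BOTH predicates.
    static List<Integer> filter(List<Integer> xs, Predicate<Integer> p) {
        return xs.stream().filter(p).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Integer> xs = List.of(1, 2, 3, 4, 5);
        List<Integer> split1 = filter(xs, x -> x % 2 == 0); // f1: evens
        List<Integer> split2 = filter(xs, x -> x % 2 != 0); // f2: odds
        System.out.println(split1); // [2, 4]
        System.out.println(split2); // [1, 3, 5]
    }
}
```

If f1 and f2 are disjoint and cover the input, this reproduces a clean split; the double evaluation is exactly what the follow-up message in this thread objects to.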
Fabian Hueske created FLINK-3899:
Summary: Document window processing with Reduce/FoldFunction +
WindowFunction
Key: FLINK-3899
URL: https://issues.apache.org/jira/browse/FLINK-3899
Project: Flink
Hi folks,
Is there any way in the DataSet API to split a DataSet[A] into a DataSet[A]
and a DataSet[B]? The use case is a custom filter component that we want to
implement. We want to direct input elements whose result is false
after we apply the predicate. Actually we want to direct input elements
+1 for the initiative. With a better process we will improve the quality of
Flink development and have more time to focus.
Could we have another category "Infrastructure"? This would concern
things like CI, nightly deployment of snapshots/documentation, ASF
Infra communication. Robert and m
Hey Stephan!
Thanks to you and the others who started this. I really like the
proposal and I'm happy to see my name on some components.
So, +1.
I'd say let's wait until the end of the week/beginning of next week to
see if there is any disagreement with the proposal in the community
(doesn't look
Yes, Matthias, that was supposed to be you.
Sorry from another guy who frequently has his name misspelled ;-)
On Thu, May 12, 2016 at 1:27 PM, Matthias J. Sax wrote:
> +1 from my side.
>
> Happy to be the maintainer for Storm-Compatibility (at least I guess
> it's me, even the correct spelling would be with two 't' :P)
Big +1 from my side, I think this will help the community grow and prosper
big time!
On Thu, May 12, 2016 at 1:27 PM, Matthias J. Sax wrote:
> +1 from my side.
>
> Happy to be the maintainer for Storm-Compatibility (at least I guess
> it's me, even the correct spelling would be with two 't' :P)
+1 from my side.
Happy to be the maintainer for Storm-Compatibility (at least I guess
it's me, even the correct spelling would be with two 't' :P)
-Matthias
On 05/12/2016 12:56 PM, Till Rohrmann wrote:
> +1 for the proposal
> On May 12, 2016 12:13 PM, "Stephan Ewen" wrote:
>
>> Yes, Gabor Gevay, that did refer to you!
+1 for the proposal
On May 12, 2016 12:13 PM, "Stephan Ewen" wrote:
> Yes, Gabor Gevay, that did refer to you!
>
> Sorry for the ambiguity...
>
> On Thu, May 12, 2016 at 10:46 AM, Márton Balassi >
> wrote:
>
> > +1 for the proposal
> > @ggevay: I do think that it refers to you. :)
> >
> > On Thu
Do I need to open also a Jira or just the PR?
On Thu, May 12, 2016 at 12:03 PM, Stephan Ewen wrote:
> Yes, please open a pull request for that.
>
> On Thu, May 12, 2016 at 11:40 AM, Flavio Pompermaier >
> wrote:
>
> > If you're interested, I created an Eclipse version that should follow
> >
Yes, Gabor Gevay, that did refer to you!
Sorry for the ambiguity...
On Thu, May 12, 2016 at 10:46 AM, Márton Balassi
wrote:
> +1 for the proposal
> @ggevay: I do think that it refers to you. :)
>
> On Thu, May 12, 2016 at 10:40 AM, Gábor Gévay wrote:
>
> > Hello,
> >
> > There are at least three Gábors in the Flink community, :) so
Yes, please open a pull request for that.
On Thu, May 12, 2016 at 11:40 AM, Flavio Pompermaier
wrote:
> If you're interested, I created an Eclipse version that should follow the
> Flink coding rules. Should I create a new JIRA for it?
>
> On Thu, May 5, 2016 at 6:02 PM, Dawid Wysakowicz <
> wysak
If you're interested, I created an Eclipse version that should follow the
Flink coding rules. Should I create a new JIRA for it?
On Thu, May 5, 2016 at 6:02 PM, Dawid Wysakowicz wrote:
> I opened a JIRA: https://issues.apache.org/jira/browse/FLINK-3870 and
> created PRs to both flink and flink-web.
+1 for the proposal
@ggevay: I do think that it refers to you. :)
On Thu, May 12, 2016 at 10:40 AM, Gábor Gévay wrote:
> Hello,
>
> There are at least three Gábors in the Flink community, :) so
> assuming that the Gábor in the list of maintainers of the DataSet API
> is referring to me, I'll be
Hello,
There are at least three Gábors in the Flink community, :) so
assuming that the Gábor in the list of maintainers of the DataSet API
is referring to me, I'll be happy to do it. :)
Best,
Gábor G.
2016-05-10 11:24 GMT+02:00 Stephan Ewen :
> Hi everyone!
>
> We propose to establish some li
Since FLINK-1827 was merged, you can also skip test compilation with
`-Dmaven.test.skip=true` if you don't want to waste time and resources :)
On 12 May 2016 10:06, "Jark" wrote:
> Sorry, I mistyped the command. You can enter
> flink/flink-streaming-java and run `mvn clean package install
>
The Flink PMC is pleased to announce the availability of Flink 1.0.3.
The official release announcement:
http://flink.apache.org/news/2016/05/11/release-1.0.3.html
Release binaries:
http://apache.openmirror.de/flink/flink-1.0.3/
Please update your Maven dependencies to the new 1.0.3 version and
Hi Saiph,
You can enter the flink directory and run `mvn clean install -DskipTest=true`
to install all the modules (including flink-streaming-java) into your local
.m2 repository. After that, change your app's dependency version to the
version of your Flink build, such as “1.1-SNAPSHOT”. Finally,
Sorry, I mistyped the command. You can enter flink/flink-streaming-java
and run `mvn clean package install -DskipTests=true`. It will install only
the flink-streaming-java module.
> On May 12, 2016, at 10:02 AM, Jark wrote:
>
> Hi Saiph,
> You can enter the flink directory and run `mvn clean install -
Thanks Ufuk :-)
On Wed, May 11, 2016 at 5:16 PM, Stephan Ewen wrote:
> Thanks for pushing this release Ufuk!
>
> On Wed, May 11, 2016 at 5:12 PM, Fabian Hueske wrote:
>
> > Thanks Ufuk!
> >
> > 2016-05-11 16:39 GMT+02:00 Ufuk Celebi :
> >
> > > This vote has passed with 3 binding +1 votes. Than
Funny you should say that, because in a recent discussion with Stephan and
Jamie, we talked about reworking the web UI to talk to numerous job managers.
I’ve been looking into it as part of the Mesos work (FLINK-1984). I’ll start a
new thread about it soon.
> On May 11, 2016, at 10:38 PM, Al
That would be definitely awesome (and useful also for us)! +1
On Thu, May 12, 2016 at 7:38 AM, Aljoscha Krettek
wrote:
> I favor the one-cluster-per job approach. If this becomes the dominant
> approach to doing things we could also think about introducing a separate
> component that would allo