+1
On Fri, Aug 23, 2019 at 11:40 AM Kostas Tzoumas wrote:
> +1
>
> On Thu, Aug 22, 2019 at 5:29 PM jincheng sun
> wrote:
>
>> +1
>>
>> Becket Qin 于2019年8月22日 周四16:22写道:
>>
>> > Hi All, so far the votes count as following:
>> >
>> > +1 (Binding): 13 (Aljoscha, Fabian, Kurt, Till, Timo, Max, Step
+1, thanks.
On Mon, May 1, 2023 at 4:23 PM Őrhidi Mátyás
wrote:
> +1 SGTM.
>
> Cheers,
> Matyas
>
> On Wed, Apr 26, 2023 at 11:43 AM Hao t Chang wrote:
>
> > Agree. I will help.
> >
> >
>
Hi Jim and Ted,
Thanks for the quick response. For the OpenShift issue I would assume that
adding the RBAC suggested here [1] would solve the problem; it seems fine
to me.
For the missing taskmanager, could you please share the relevant logs from
your jobmanager pod that is already shown as running? Thank
Thank you, team.
+1 (binding)
- Verified Helm repo works as expected, points to the correct image tag,
build, and version
- Verified basic examples + checked operator logs; everything looks as
expected
- Verified hashes and signatures, and that the source release contains no binaries
- Ran built-in tests, built jars +
Thanks, awesome! :-)
On Wed, May 17, 2023 at 2:24 PM Gyula Fóra wrote:
> The Apache Flink community is very happy to announce the release of Apache
> Flink Kubernetes Operator 1.5.0.
>
> The Flink Kubernetes Operator allows users to manage their Apache Flink
> applications and their lifecycle th
Thanks, Gyula.
+1 (binding)
On Thu, Jul 20, 2023 at 5:01 AM Samrat Deb wrote:
> thank you gyula ,
> for driving it.
> +1(non binding)
>
>
> Bests,
> Samrat
>
> On Thu, 20 Jul 2023 at 8:02 AM, Rui Fan <1996fan...@gmail.com> wrote:
>
> > Thanks Gyula for driving this release.
> >
> > +1 for the t
Hi team,
+1 for supporting the last 1.x for a longer-than-usual period of time and
limiting it to bugfixes. I would suggest supporting it for double the usual
amount of time (4 minor releases).
On Tue, Jul 25, 2023 at 9:25 AM Konstantin Knauf wrote:
> Hi Alex,
>
> yes, I think, it makes sense t
Thank you, team.
+1 (binding)
- Verified Helm repo works as expected, points to the correct image tag,
build, and version
- Verified basic examples + checked operator logs; everything looks as
expected
- Verified hashes and signatures, and that the source release contains no binaries
- Ran built-in tests, built jars +
Hi Gabor,
Thanks for bringing this up. Similarly to when we dropped Python 3.6 due to
its end of life (and added 3.10) in Flink 1.17 [1,2], it makes sense to
proceed to remove 3.7 and add 3.11 instead.
+1.
[1] https://issues.apache.org/jira/browse/FLINK-27929
[2] https://github.com/apache/flink/
Thanks, Peter. I agree that this is needed for Iceberg and beneficial for
other connectors too.
+1
On Wed, Oct 4, 2023 at 3:56 PM Péter Váry
wrote:
> Hi Team,
>
> In my previous email[1] I have described our challenges migrating the
> existing Iceberg SinkFunction based implementation, to the n
+1 (binding)
Marton
On Wed, Oct 11, 2023 at 8:20 PM Gyula Fóra wrote:
> Thanks , Peter.
>
> +1
>
> Gyula
>
> On Wed, 11 Oct 2023 at 14:47, Péter Váry
> wrote:
>
> > Hi all,
> >
> > Thank you to everyone for the feedback on FLIP-371[1].
> > Based on the discussion thread [2], I think we are rea
Hi Flink & Paimon devs,
The documentation navigation section of the Flink webpage still lists the
outdated TableStore 0.3 and master docs as subproject docs (see attachment).
I am all for advertising Paimon as a sister project of Flink, but the
current state is misleading in multiple ways.
I would like
> > > >
> > > > +1
> > > >
> > > > On Tue, Oct 17, 2023 at 10:34 AM Jingsong Li >
> > > wrote:
> > > >>
> > > >> Hi marton,
> > > >>
> > > >> Thanks for driving. +1
> > > &
Thank you, team. @David Radley: Not having Rui's key signed is not ideal,
but is acceptable for the release.
+1 (binding)
- Verified Helm repo works as expected, points to the correct image tag,
build, and version
- Verified basic examples + checked operator logs; everything looks as
expected
- Verified h
Here you go, this is valid for 30 days:
https://join.slack.com/t/apache-flink/shared_invite/zt-276wzpx1c-DpF_IYPeZomOS3ChYkc4SA
On Sun, Nov 12, 2023 at 8:17 AM Neelabh Shukla
wrote:
> Hey Team,
> Can someone send me the slack invite link?
>
> Thanks,
> Neelabh
>
+1 (binding)
- Verified Helm repo works as expected, points to the correct image tag,
build, and version
- Verified basic examples + checked operator logs; everything looks as
expected
- Verified hashes and signatures, and that the source release contains no binaries
- Ran built-in tests, built jars + docker image from
Thanks, Matthias. Big +1 from me.
On Tue, Nov 28, 2023 at 5:30 PM Matthias Pohl
wrote:
> Thanks for the pointer. I'm planning to join that meeting.
>
> On Tue, Nov 28, 2023 at 4:16 PM Etienne Chauchot
> wrote:
>
> > Hi all,
> >
> > FYI there is the ASF infra roundtable soon. One of the subjects
Thanks, Martijn and Peter.
In terms of the concrete issue:
- I am following up with the author of FLIP-321 [1] (Becket) to update
the docs [2] to reflect the right state.
- I see two reasonable approaches in terms of proceeding with the
specific changeset:
1. We allow the excepti
> > > > > without much material harm.
> > > > > >
> > > > > > Option 2:
> > > > > > Theoretically speaking, if we really want to reach the perfect
> > state
> > > > > while
> > > > > >
Hi Leonard,
Thank you for the excellent work that you and the team on the CDC
connectors project have been doing so far. I am +1 on having them under
Flink's umbrella.
On Thu, Dec 7, 2023 at 10:26 AM Etienne Chauchot
wrote:
> Big +1, thanks this will be a very useful addition to Flink.
>
> B
Thanks for raising this, Peter. +1 for reverting the change.
Given the response from Timo and Aitozi, I believe it would be best if we
could ship reverting the change in 1.18.1.
On Thu, Dec 7, 2023 at 2:47 PM Aitozi wrote:
> Hi Peter, Timo
> Sorry for this breaking change, I didn't notice t
+1 This greatly improves interfacing with multiple Flink versions, e.g.
upgrades from the Kubernetes Operator.
On Mon, Dec 11, 2023 at 12:36 PM Gyula Fóra wrote:
> Thanks Gabor!
>
> +1 from my side, this sounds like a reasonable change that will
> improve integration and backward compatibility.
+1 (binding)
On Tue, Dec 12, 2023 at 4:16 PM Rodrigo Meneses wrote:
> +1
>
> On Tue, Dec 12, 2023 at 6:58 AM Maximilian Michels wrote:
>
> > +1 (binding)
> >
> > On Tue, Dec 12, 2023 at 2:23 PM Peter Huang
> > wrote:
> > >
> > > +1 Non-binding
> > >
> > >
> > > Peter Huang
> > >
> > > Őrhidi M
+1
Thanks, Peter. Based on the consensus in the recent thread on FLIP-371 [1]
I agree that this is the right approach. I made some minor edits to the
FLIP, which looks good to me now.
[1] https://lists.apache.org/thread/h6nkgth838dlh5s90sd95zd6hlsxwx57
On Wed, Dec 13, 2023 at 5:30 PM Gyula Fóra
+1 (binding)
On Mon 18. Dec 2023 at 09:34, Péter Váry
wrote:
> Hi everyone,
>
> Since there were no further comments on the discussion thread [1], I would
> like to start the vote for FLIP-372 [2].
>
> The FLIP started as a small new feature, but in the discussion thread and
> in a similar paral
Thanks, Paul.
Ferenc and I have been looking into unblocking the Kubernetes path via an
updated implementation for FLINK-28915 to ship the jars conveniently there.
You can expect an updated PR there next week. Looking forward to your
findings in the YARN POC.
On Mon, Dec 11, 2023 at 4:01 AM Paul
+1
Thanks, Danny - I really appreciate you taking the time for the in-depth
investigation. Please proceed, looking forward to your experience.
On Mon, Jan 8, 2024 at 6:04 PM Martijn Visser
wrote:
> Thanks for investigating Danny. It looks like the best direction to go to
> :)
>
> On Mon, Jan 8,
+1 (binding)
On Tue, Jan 9, 2024 at 10:15 AM Leonard Xu wrote:
> +1(binding)
>
> Best,
> Leonard
>
> > 2024年1月9日 下午5:08,Yangze Guo 写道:
> >
> > +1 (non-binding)
> >
> > Best,
> > Yangze Guo
> >
> > On Tue, Jan 9, 2024 at 5:06 PM Robert Metzger
> wrote:
> >>
> >> +1 (binding)
> >>
> >>
> >> On T
Hi all,
We have added the interface for registering the connectors in custom
user-defined functions, like representing enrichment from an HBase table in
the middle of a Flink application. We are reaching out to the Atlas
community to review the implementation in the near future too, based on
Hi Jack,
Yes, we know how to do it and even have the implementation ready and being
reviewed by the Atlas community at the moment. :-)
Would you be interested in having a look?
On Thu, Mar 19, 2020 at 12:56 PM jackylau wrote:
> Hi:
> i think flink integrate atlas also need add catalog informa
+1 (binding)
Thank you for proposing this contribution!
On Fri, Nov 1, 2019 at 2:46 PM Konstantin Knauf
wrote:
> +1 (non-binding)
>
> Stateful Functions, already in its current initial release, simplifies the
> development of event-driven application on Flink quite significantly.
>
> On Thu, Oc
Wearing my Cloudera hat I can tell you that we have done this exercise for
our distros of the 3.0 and 3.1 Hadoop versions. We have not contributed
these back just yet, but we are open to doing so. If the community is
interested we can contribute those changes back to flink-shaded and suggest
the nece
Additionally, as having multiple files under /output1.txt is standard in the
Hadoop ecosystem, you can transparently read all the files with
env.readTextFile("/output1.txt").
You can also set parallelism on individual operators (e.g. the file writer)
if you really need a single output.
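For readers less familiar with this convention, the effect can be sketched with plain Java NIO (the file names and the helper below are made up for illustration; in Flink itself, pointing env.readTextFile at the directory path does this for you):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class PartFiles {
    // Hadoop-style sinks write one part file per parallel writer instance
    // under the output "file", which is really a directory. Reading the
    // output therefore means concatenating every file inside it.
    static List<String> readAllParts(Path dir) throws IOException {
        try (Stream<Path> parts = Files.list(dir)) {
            return parts.sorted()
                    .flatMap(p -> {
                        try {
                            return Files.lines(p);
                        } catch (IOException e) {
                            throw new UncheckedIOException(e);
                        }
                    })
                    .collect(Collectors.toList());
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("output1.txt");
        Files.write(dir.resolve("1"), List.of("a", "b")); // writer instance 1
        Files.write(dir.resolve("2"), List.of("c"));      // writer instance 2
        System.out.println(readAllParts(dir));            // [a, b, c]
    }
}
```

If a single file is truly required, setting the sink's parallelism to 1, as noted above, produces exactly one part file.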
On Fri, Nov 2
Thanks, Robert!
On Fri, Nov 27, 2015 at 5:02 PM, Vasiliki Kalavri wrote:
> Thank you Robert ^^
>
> On 27 November 2015 at 16:23, Till Rohrmann wrote:
>
> > Thanks Robert for being the release manager for 0.10.1
> >
> > On Fri, Nov 27, 2015 at 4:21 PM, Maximilian Michels
> > wrote:
> >
> > > Gr
Thanks for writing this up, Gábor. As Aljoscha suggested, chaining changes
all of these and makes them very tricky to work with, which should be
clearly documented. That was the reason why, some time ago, the streaming
API always copied the output of a UDF by default to avoid these ambiguous
cases
+1
On Wed, Jan 13, 2016 at 12:37 PM, Matthias J. Sax wrote:
> +1
>
> On 01/13/2016 11:51 AM, Fabian Hueske wrote:
> > @Stephan: You mean all tags should be protected, not only those under
> rel?
> >
> > 2016-01-13 11:43 GMT+01:00 Till Rohrmann :
> >
> >> +1 for protecting the master branch.
> >>
Hi guys,
They are at least already registered for serialization [1], so there should
be no intentional conflict as Theo has suggested.
[1]
https://github.com/apache/flink/blob/master/flink-libraries/flink-ml/src/main/scala/org/apache/flink/ml/common/FlinkMLTools.scala#L67-L73
Best,
Marton
On T
Adding to Ufuk's answer: yes, cancelling the job frees up the resources. :)
Best,
Marton
On Fri, Feb 19, 2016 at 12:10 PM, Ufuk Celebi wrote:
> Yes, you can cancel it via the web frontend or the CLI interface [1].
>
> If you can send messages to the JobManager, you can also send a
> CancelJob
Thanks for creating the candidate, Robert, and for the heads-up, Slim.
I would like to get a PR [1] in before 1.0.0, as it breaks the hashing behavior
of DataStream.keyBy. The PR has the feature implemented and the Java tests
adapted; there is still a bit of an outstanding fix for the Scala tests. Gábor
Hor
Recent changes to the build [1] moved many libraries' core
dependencies (the ones included in the flink-dist fat jar) to the
provided scope.
The reasoning was that when submitting to the Flink cluster the application
already has these dependencies, while when a user writes a program
Issued JIRA ticket 3511 to make it referable in other discussions. [1]
[1] https://issues.apache.org/jira/browse/FLINK-3511
On Thu, Feb 25, 2016 at 3:36 PM, Márton Balassi
wrote:
> Recent changes to the build [1] moved many libraries' core
> dependencies (the ones included
.
> > >> >> The "trigger" for creating the release was that the number of
> > blocking
> > >> >> issues is 0 now.
> > >> >>
> > >> >> I did a quick check of the open pull requests yesterday evening and
> > >>
Great to see that. :)
On Fri, Feb 26, 2016 at 1:56 PM, Theodore Vasiloudis <
theodoros.vasilou...@gmail.com> wrote:
> I'm sure others noticed this as well yesterday, but the project has passed
> 1000 stars on Github,
> just in time for the 1.0 release ;)
>
> Here's to the next 1000!
>
> --Theo
>
> and the core flink
> dependencies with scope "compile"
>
> That way the example should run in the IDE out of the box, and users that
> reference the libraries will still get the correct packaging (include the
> library in the user jar, but not additionally the core flink j
Thanks for initiating this, Ufuk. Updated the streaming hashing mention -
whether it is API-breaking is questionable, so I would place it last in the
list. But it is definitely good to mention it there.
On Thu, Mar 3, 2016 at 10:48 AM, Ufuk Celebi wrote:
> Hey all,
>
> let's make sure that we have a go
@Fabian: That is my bad, but I think we should still be on time. Pinged Uli
just to make sure. Proposal from Gabor and Jira from me are coming soon.
On Tue, Mar 8, 2016 at 11:43 AM, Fabian Hueske wrote:
> Hi Gabor,
>
> I did not find any Flink proposals for this year's GSoC in JIRA (should be
>
Hey,
I was wondering whether there is a way to access the Configuration from a
(Stream)ExecutionEnvironment or a RichFunction. Practically, I would like to
set a temporary persist path in the Configuration and access the location
somewhere in the topology.
I have followed the way the streaming
Mar 9, 2016 at 12:38 PM, Márton Balassi
> wrote:
>
> > Hey,
> >
> > I was wondering whether there is a way to access the Configuration from
> an
> > (Stream)ExecutionEnvironment or a RichFunction. Practically I would like
> to
> > set a temporary
bit more, like
> >> > - when is it decided whether this project takes place?
> >> > - when would results be there?
> >> > - can we expect the results to be usable, i.e., how good is the
> >> student?
> >> > (no offence, but so far the results in GSoC wer
Hey,
I have just come across a shortcoming of the streaming Scala API: it
completely lacks the Scala implementation of the DataStreamSink and instead
the Java version is used. [1]
I would regard this as a bug that needs a fix for 1.0.1. Unfortunately, this
is also API-breaking.
Will post it to JI
The JIRA issue is FLINK-3610.
On Sat, Mar 12, 2016 at 8:39 PM, Márton Balassi
wrote:
>
> I have just come across a shortcoming of the streaming Scala API: it
> completely lacks the Scala implementation of the DataStreamSink and
> instead the Java version is used. [1]
>
> I wo
nge because its API breaking.
> One of the promises of the 1.0 release is that we are not breaking any APIs
> in the 1.x.y series of Flink. We can fix those issues with a 2.x release.
>
> On Sun, Mar 13, 2016 at 5:27 AM, Márton Balassi
> wrote:
>
> > The JIRA issue is FLINK
w methods. Maybe we can
> find a good way to resolve the issue without changing the signature of
> existing methods.
> And for tracking API breaking changes, maybe it makes sense to create a
> 2.0.0 version in JIRA and set the "fix-for" for the issue to 2.0.
>
> On Sun, M
Hey,
I think we came to the agreement that this PR is not mergeable right now,
so I am closing it. I personally find it inconsistent not to have the full
API mirrored in Scala though, but this is something that we can revisit
when preparing 2.0.
Best,
Marton
On Mon, Mar 14, 2016 at 8:14 PM, S
le.com/Tuple-performance-and-the-curious-JIT-compiler-td10666.html
> ),
> >> and I wanted to make this information available to be able to
> incorporate
> >> this into that discussion. I have written this draft with the help of
> Gábor
> >> Gévay and Márton Balassi and I am op
Just a quick note: "[FLINK-3636] Add ThrottledIterator to WindowJoin jar"
is not needed on the release-1.0 branch, as the example rewrite introducing
the ThrottledIterator is not present there. I see that Ufuk has already
pushed the commit there; it does no harm after all.
On Wed, Mar 30, 2016 at 1
com/apache/flink/blob/master/flink-libraries/flink-table/src/main/scala/org/apache/flink/api/table/codegen/CodeGenerator.scala
>
> On 18 March 2016 at 19:37, Gábor Horváth wrote:
>
> > Thank you! I finalized the project.
> >
> >
> > On 18 March 2016 at 10:29, Már
uld be added.
> I think that users could be confused.
>
> Regards,
> Chiwan Park
>
> > On Apr 17, 2016, at 3:49 PM, Márton Balassi
> wrote:
> >
> > Hi Gábor,
> >
> > I think that adding the Janino dep to flink-core should be fine, as it
> has
> &g
Hi Gabor,
I have checked out your branch and tried debugging WordCountPojo to
reproduce the behaviour. I am on a Mac with jdk1.8.0_91. I have received
the following error when trying to access the constructors of the class in
question:
Exception in thread "main" java.lang.VerifyError: (class:
org
+1 for the proposal
@ggevay: I do think that it refers to you. :)
On Thu, May 12, 2016 at 10:40 AM, Gábor Gévay wrote:
> Hello,
>
> There are at least three Gábors in the Flink community, :) so
> assuming that the Gábor in the list of maintainers of the DataSet API
> is referring to me, I'll be
Hey Vijay,
Depending on the local dependencies is one way to do this. IMHO the more
straightforward way is to simply place your tests within your version of
Flink in the same project. That way the IDE will use the right version of
the artifact when executing the test.
Best,
Marton
On Sat, May
Hey Eron,
Yes, DataSet#collect and count methods implicitly trigger a JobGraph
execution, thus they also trigger writing to any previously defined sinks.
The idea behind this behavior is to enable interactive querying (the one
that you are used to getting from a shell environment), and it is also a gre
I also think that the current mechanism is weird. IMHO it makes sense to
add the flag to both the start and stop scripts.
On Wed, Jun 1, 2016 at 2:09 PM, Ufuk Celebi wrote:
> Yes, it's expected, but you are certainly not the first one to be
> confused by this behaviour.
>
> The reasoning behind
I do like the idea, that seems to be the trend now - the Bigtop community
had a similar initiative recently. [1]
Helps dealing with the "Is it mature enough?" question. :)
[1] http://kaiyzen.github.io/bigtop/
On Mon, Jul 4, 2016 at 5:00 PM, Ufuk Celebi wrote:
> I would like that! +1
>
> On Mon,
Hi Alan,
Your contribution is more than welcome. It would be a great addition to
flink-streaming-connectors. At some point we might move some of these to a
"Flink Packages" repository, similarly to the Spark approach, but currently
the best place to have them is the internal connectors.
Robert in
Hi Kevin,
Thanks for being willing to contribute such an effort. I think it is a
completely valid discussion to have in your organization, and please feel
free to ask us questions during your evaluation. Putting statements on the
Flink website highlighting the differences would be very tricky though
ur e-mail below:
>
> On 08.07.2016 15:13, Márton Balassi wrote:
>
>> Hi Kevin,
>>
>> Thanks for being willing to contribute such an effort. I think it is a
>> completely valid discussion to ask in your organization and please feel
>> free to ask us questions d
Welcome Neelesh, great to have you here. :)
On Sun, Jul 31, 2016, 11:08 Neelesh Salian wrote:
> Hello folks,
>
> I am Neelesh Salian; I recently joined the Flink community and I wanted to
> take this opportunity to formally introduce myself.
>
> I have been working with the Hadoop and Spark ecos
When it comes to the current use cases I'm for this separation.
@Ufuk: As Gyula has already pointed out with the current design of
integration it should not be a problem. Even if we submitted programs to
the wrong clusters it would only cause performance issues.
Eventually it would be nice to have
+1
Checked signatures, checksums, pom. Built from src, run local examples.
On Tue, Feb 17, 2015 at 11:59 PM, Robert Metzger
wrote:
> +1
>
> I've checked the RC on a HDP 2.2 sandbox (using Flink on YARN). Also ran
> wordcount on it.
> The hadoop1 quickstarts have the correct version set (that wa
+1
We used to have this a couple of releases ago.
On Wed, Feb 18, 2015 at 4:30 PM, Henry Saputra
wrote:
> Hi All,
>
> I am thinking of pushing the latest doc in master (i.e. the snapshot
> build) to Flink website to help people follow the latest change and
> development without manually build the d
Dear Matthias,
Thanks for reporting the issue. I have successfully built
flink-streaming-examples with Maven; you can depend on test classes. The
following in the pom does the trick:
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-streaming-core</artifactId>
  <version>${project.version}</version>
  <scope>test</scope>
  <classifier>tests</classifier>
</dependency>
This tells maven that the test cla
> > repository it works!
> > >
> > > Why does maven no automatically update the local repository?
> > >
> > >
> > > -Matthias
> > >
> > >
> > >
> > > On 02/26/2015 09:20 AM, Márton Balassi wrote:
> > > >
+1
On Fri, Feb 27, 2015 at 11:32 AM, Szabó Péter
wrote:
> Yeah, I agree, it is at best a cosmetic issue. I just wanted to let you
> know about it.
>
> Peter
>
>
> 2015-02-27 11:10 GMT+01:00 Till Rohrmann :
>
> > Catching the NullPointerException and throwing an
> IllegalArgumentException
> > wit
oumas <mailto:ktzou...@apache.org>> wrote:
>
> +1
>
> On Tue, Feb 17, 2015 at 12:14 PM, Márton Balassi <mailto:mbala...@apache.org>>
> wrote:
>
> When it comes to the current use cases I'm for this separation.
> @Ufuk: As Gyula has already pointed o
Hey,
We have a nice list of new features - it definitely makes sense to have
that as a release. On my side I really want to have a first limited version
of streaming fault tolerance in it.
+1 for Robert's proposal for the deadlines.
I'm also volunteering for release manager.
Best,
Marton
On Mo
Hi Henry,
Batch mode is a new execution mode for batch Flink jobs where, instead of
pipelining the whole execution, the job is scheduled in stages, thus
materializing the intermediate results before continuing to the next
operators. For implications see [1].
[1] http://www.slideshare.net/KostasTzoum
I'm strongly for consistency and personally would prefer Scala as a default
- thus making the shorter page the default.
On Sat, Mar 7, 2015 at 1:47 PM, Stephan Ewen wrote:
> I think either way is fine as long as we are consistent.
>
> I have a slight bias for making Scala the default.
>
> On Sat
Then if no objections in 24 hours I'd open a JIRA issue for this.
On Mon, Mar 9, 2015 at 3:23 PM, Till Rohrmann wrote:
> +1 for Scala :-)
>
> On Sat, Mar 7, 2015 at 1:56 PM, Márton Balassi
> wrote:
>
> > I'm strongly for consistency and personally would prefer
+1 for the proposed solution from Max
+1 for decreasing the size, but let's have a preview; I also think that the
current one is a bit too large.
On Mon, Mar 9, 2015 at 2:16 PM, Maximilian Michels wrote:
> We can fix this for the headings by adding the following CSS rule:
>
> h1, h2, h3, h4 {
>
Hey,
Seems like a weird interaction between output splitting and windowing.
Could you please open a JIRA ticket for it?
Thanks,
Marton
On Mon, Mar 9, 2015 at 10:22 AM, Szabó Péter
wrote:
> I'm running the following code and getting the RuntimeException "Emit
> failed due to: org.apache.flink
>>
> > >>
> > >>
> > >>
> > >> If there are no objections, I will merge this by the end of the day.
> > >>
> > >> Best regards,
> > >> Max
> > >>
> > >> On Mon, Mar 9, 2015 at 4:22 PM,
> > wrote:
> >
> >> Ah, thanks Márton.
> >>
> >> So we are chartering to the similar concept of Spark RRD staging
> >> execution =P
> >> I suppose there will be a runtime configuration or hint to tell the
> >> Flink Job manager to
+1 for Max's suggestion.
On Mon, Mar 16, 2015 at 10:32 AM, Ufuk Celebi wrote:
> On Fri, Mar 13, 2015 at 6:08 PM, Maximilian Michels
> wrote:
>
> >
> > Thanks for starting the discussion. We should definitely not keep
> > flink-expressions.
> >
> > I'm in favor of DataTable for the DataSet abstr
Dear Akshay,
Thanks again for your interest and for the recent contribution to streaming.
Both of the projects mentioned would be greatly appreciated by the
community, and you can also propose other project suggestions here for
discussion.
Regarding FLINK-1534, the thesis I mentioned serves as a
)
>- expression API
>
> - contrib
>
> - yarn
>
> - dist
>
> - yarn tests
>
> - java 8
>
> On Mon, Jan 5, 2015 at 7:45 PM, Henry Saputra
> wrote:
>
> > Thanks Marton, having 2 threads discussing same thing can be confusing.
> >
>
Thanks for looking into this, Stephan. +1 for the JIRAs.
On Mon, Mar 23, 2015 at 10:55 AM, Ufuk Celebi wrote:
> On 23 Mar 2015, at 10:44, Stephan Ewen wrote:
>
> > Hi everyone!
> >
> > With the streaming stuff getting heavier exposure, I think it needs a few
> > more tests. With so many changes
I also like the Travis infrastructure. Thanks for bringing this up and
reaching out to the Travis guys.
On Tue, Mar 24, 2015 at 3:38 PM, Robert Metzger wrote:
> Hi guys,
>
> the build queue on travis is getting very very long. It seems that it takes
> 4 days now until commits to master are built.
+DataTable
On Thu, Mar 26, 2015 at 9:29 AM, Markl, Volker, Prof. Dr. <
volker.ma...@tu-berlin.de> wrote:
> +Table
>
> I also agree with that line of argument (think SQL ;-) )
>
> -Ursprüngliche Nachricht-
> Von: Timo Walther [mailto:twal...@apache.org]
> Gesendet: Donnerstag, 26. März 201
Dear Janani,
Apache Kafka as a source is supported by our system; check out the
documentation for details. [1]
You can use UDP as a source if you wish, just bear in mind its standard
disadvantages: the possibility of losing messages and that you will
have to manually deal with the serializa
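A minimal sketch of the receiving side with plain java.net (this is illustrative, not Flink API; in a Flink job a loop like this would sit inside your source implementation, and the byte-to-record decoding shown here stands in for whatever serialization you choose):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class UdpReceiveSketch {
    // Blocks until one datagram arrives and decodes it as UTF-8 text.
    // UDP gives no delivery or ordering guarantee, so records can be lost,
    // and the byte-to-record mapping is entirely up to the application.
    static String receiveOne(DatagramSocket socket) throws Exception {
        byte[] buf = new byte[1024];
        DatagramPacket packet = new DatagramPacket(buf, buf.length);
        socket.receive(packet);
        return new String(packet.getData(), 0, packet.getLength(), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        // Loopback round trip just to exercise the receive path.
        try (DatagramSocket rx = new DatagramSocket(0); // ephemeral local port
             DatagramSocket tx = new DatagramSocket()) {
            byte[] msg = "hello".getBytes(StandardCharsets.UTF_8);
            tx.send(new DatagramPacket(msg, msg.length,
                    InetAddress.getLoopbackAddress(), rx.getLocalPort()));
            System.out.println(receiveOne(rx));
        }
    }
}
```

Note that nothing here retransmits or acknowledges: a dropped datagram is simply gone, which is exactly the trade-off mentioned above.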
+1 for the early release.
I'd call it 0.9-milestone1.
On Thu, Mar 26, 2015 at 1:37 PM, Maximilian Michels wrote:
> +1 for a beta release: 0.9-beta.
>
> On Thu, Mar 26, 2015 at 12:09 PM, Paris Carbone wrote:
>
> > +1 for an early release. It will help unblock the samoa PR that has 0.9
> > depen
@Timo: No feature freeze for this, yes.
On Thu, Mar 26, 2015 at 3:36 PM, Timo Walther wrote:
> +1 for a beta release. So there is no feature-freeze until the RC right?
>
>
>
> On 26.03.2015 15:32, Márton Balassi wrote:
>
>> +1 for the early release.
>>
>>
+1 for 0.9.0-milestone-1.
On Fri, Mar 27, 2015 at 12:47 PM, Stephan Ewen wrote:
> Okay, so how about we make this
>
> <dependency>
>   <groupId>org.apache.flink</groupId>
>   <artifactId>flink-core</artifactId>
>   <version>0.9.0-milestone-1</version>
> </dependency>
>
> I think it is common that milestones have numbers. There is no such thing
> as "the" milestone.
>
>
>
> On Thu, Mar 26
Woot!
On Wed, Apr 1, 2015 at 9:01 AM, Aljoscha Krettek
wrote:
> Right now, runtime is roughly thrice that of equivalent java programs.
> But I plan on bringing that to the same ballpark using code
> generation.
>
> On Wed, Apr 1, 2015 at 8:54 AM, Fabian Hueske wrote:
> > :-D
> > This is awesome
Hey Matthias,
Thanks for reporting the Exception thrown; we were not prepared for this
use case yet. We fixed it with Gyula, and he is pushing the fix right now:
when the job is cancelled (for example, due to shutting down the executor
underneath) you should not see that InterruptedException as s
Big +1 for the proposal from Peter and Gyula. I'm really for bringing the
windowing and window join API in sync.
On Thu, Apr 2, 2015 at 6:32 PM, Gyula Fóra wrote:
> Hey guys,
>
> As Aljoscha has highlighted earlier the current window join semantics in
> the streaming api doesn't follow the change
Hey Matthias,
Thanks, this is a really nice contribution. I just scrolled through the
code, but I really like it, and big thanks for the tests for the
examples.
The rebase Fabian suggested would help a lot when merging.
On Thu, Apr 2, 2015 at 9:19 PM, Fabian Hueske wrote:
> Hi Matthias,
>
it like this:
>
> stream_A = a.window(...)
> stream_B = b.window(...)
>
> stream_A.join(stream_B).where().equals().with()
>
> So a join would just be a join of two WindowedDataStreamS. This would
> neatly move the windowing stuff into one place.
>
> On Thu, Apr 2, 2015 at
Dear Flavio,
'mvn clean install -DskipTests' should do the trick.
On Fri, Apr 3, 2015 at 12:11 AM, Flavio Pompermaier
wrote:
> Hi to all,
>
> I was trying to compile Flink 0.9 skipping test compilation
> (-Dmaven.test.skip=true) but this is not possible because there are
> projects like flink-
>>> topology). Thanks again for initiating this!
> >>>
> >>> Paris
> >>>
> >>>> On 02 Apr 2015, at 23:14, Gyula Fóra wrote:
> >>>>
> >>>> This sounds amazing :) thanks Matthias!
> >>>>
> >&