Re: [DISCUSS] Releasing Flink 1.8 / Feature Freeze

2019-02-13 Thread Ufuk Celebi
+1 for Feb 22. Thanks for being the release manager.

– Ufuk

On Tue, Feb 12, 2019 at 7:00 PM Stephan Ewen  wrote:
>
> +1 for doing a 1.8 release soon.
>
> Some of the Table API refactoring work is blocked on a release (assuming we
> want one release to deprecate some functions before dropping them.)
>
> On Tue, Feb 12, 2019 at 11:03 AM Aljoscha Krettek 
> wrote:
>
> > Hi All,
> >
> > In reference to a recent mail by Ufuk [1] and because it has been a while
> > since the last Flink release we should start thinking about a Flink 1.8
> > release. We’re actually a bit behind the cadence but I think we still
> > shouldn’t rush things. I’m hereby proposing myself as release manager for
> > Flink 1.8 and I also want to suggest February 22 as the date for feature
> > freeze and cutting of the 1.8 release branch. This is quite soon but still
> > gives us two weeks to work on things.
> >
> > What do you think?
> >
> > Best,
> > Aljoscha
> >
> > [1]
> > https://lists.apache.org/thread.html/423164e045c3c206f2b8d5c061be7055ef4bc3fd880c28a862ef5d8c@%3Cdev.flink.apache.org%3E


request for access

2019-02-13 Thread 刘建刚
Hi Guys,

I want to contribute to Apache Flink.
Would you please give me the permission as a contributor?
My JIRA Username is Jiangang. My JIRA full name is Liu.


Request for contribution access

2019-02-13 Thread Xin Ma
Hi Guys,

I want to contribute to Apache Flink.
Would you please give me the permission as a contributor?
My JIRA ID is iluvex.


Best regards,

Xin


Request for contribution access

2019-02-13 Thread ??
Hi Guys,


I want to contribute to Apache Flink.
Would you please give me the permission as a contributor?
My JIRA Username is ysqwhiletrue. My JIRA full name is sishu.yss.

Re: [DISCUSS] Releasing Flink 1.6.4

2019-02-13 Thread Till Rohrmann
+1 for the 1.6.4 release.

On Tue, Feb 12, 2019 at 6:44 PM Stephan Ewen  wrote:

> +1 for 1.6.4
>
>
> On Tue, Feb 12, 2019 at 6:18 PM Thomas Weise  wrote:
>
> > +1 for making it the final 1.6.x release
> >
> >
> > On Tue, Feb 12, 2019 at 7:10 AM Chesnay Schepler 
> > wrote:
> >
> > > I assume we will treat this as the last bugfix release in the 1.6
> series?
> > >
> > > On 12.02.2019 10:00, jincheng sun wrote:
> > > > Hi Flink devs,
> > > >
> > > > It has been a long time since the release of 1.6.3 (December 23,
> 2018).
> > > > There have been a lot of valuable bug fixes during this period.
> > > > What do you think about releasing Flink 1.6.4 soon?
> > > >
> > > > We already have some critical fixes in the release-1.6 branch(such
> as):
> > > >
> > > > - FLINK-11235: Solve Elasticsearch connector thread leaks
> > > > - FLINK-11207: security vulnerability with currently used
> > > > Apachecommons-compress version
> > > > - FLINK-10761: do not acquire lock for getAllVariables
> > > > - FLINK-10761: potential deadlock with metrics system
> > > > - FLINK-11140: fix empty child path check in Buckets
> > > > - FLINK-10774: connection leak in FlinkKafkaConsumer
> > > > - FLINK-10848: problem with resource allocation in YARN mode
> > > > - FLINK-11419: restore issue with StreamingFileSink
> > > > - FLINK-10774: connection leak in FlinkKafkaConsumer
> > > >
> > > > Please let me know what you think. Ideally, we can kick off the
> release
> > > > vote for the first RC early next week.
> > > >
> > > > I have a preliminary analysis of the JIRAs on 1.6.4. There are
> > > > currently 4 in progress, and 23 still to do.
> > > > I have written a Google doc
> > > > <https://docs.google.com/document/d/1ESMrCkLT_Lf4L1Mw1nEu2c0yqlM_MzMRpjAHC0G3Ln0/edit?usp=sharing>
> > > > on how to process these JIRAs. Welcome to comment in the email or in
> > > > the Google doc
> > > > <https://docs.google.com/document/d/1ESMrCkLT_Lf4L1Mw1nEu2c0yqlM_MzMRpjAHC0G3Ln0/edit?usp=sharing>.
> > > >
> > > > If there are some other critical fixes for 1.6.4 that are almost
> > > completed
> > > > (already have a PR opened and review is in progress),
> > > > please let me know here by the end of this week.
> > > >
> > > > Cheers,
> > > > Jincheng
> > > >
> > >
> > >
> >
>


Re: [DISCUSS] Releasing Flink 1.8 / Feature Freeze

2019-02-13 Thread Till Rohrmann
+1 for the 1.8 release. Thanks for volunteering as our release manager
Aljoscha.

Cheers,
Till

On Wed, Feb 13, 2019 at 9:01 AM Ufuk Celebi  wrote:

> +1 for Feb 22. Thanks for being the release manager.
>
> – Ufuk
>
> On Tue, Feb 12, 2019 at 7:00 PM Stephan Ewen  wrote:
> >
> > +1 for doing a 1.8 release soon.
> >
> > Some of the Table API refactoring work is blocked on a release (assuming
> > we want one release to deprecate some functions before dropping them.)
> >
> > On Tue, Feb 12, 2019 at 11:03 AM Aljoscha Krettek 
> > wrote:
> >
> > > Hi All,
> > >
> > > In reference to a recent mail by Ufuk [1] and because it has been a
> while
> > > since the last Flink release we should start thinking about a Flink 1.8
> > > release. We’re actually a bit behind the cadence but I think we still
> > > shouldn’t rush things. I’m hereby proposing myself as release manager
> for
> > > Flink 1.8 and I also want to suggest February 22 as the date for
> feature
> > > freeze and cutting of the 1.8 release branch. This is quite soon but
> still
> > > gives us two weeks to work on things.
> > >
> > > What do you think?
> > >
> > > Best,
> > > Aljoscha
> > >
> > > [1]
> > >
> https://lists.apache.org/thread.html/423164e045c3c206f2b8d5c061be7055ef4bc3fd880c28a862ef5d8c@%3Cdev.flink.apache.org%3E
>


[jira] [Created] (FLINK-11591) Restoring LockableTypeSerializer's snapshot from 1.6 and below requires pre-processing before snapshot is valid to use

2019-02-13 Thread Tzu-Li (Gordon) Tai (JIRA)
Tzu-Li (Gordon) Tai created FLINK-11591:
---

 Summary: Restoring LockableTypeSerializer's snapshot from 1.6 and 
below requires pre-processing before snapshot is valid to use
 Key: FLINK-11591
 URL: https://issues.apache.org/jira/browse/FLINK-11591
 Project: Flink
  Issue Type: Bug
  Components: CEP, Type Serialization System
Reporter: Tzu-Li (Gordon) Tai
Assignee: Igal Shilman


In 1.6 and below, the {{LockableTypeSerializer}} incorrectly returns the 
element serializer's snapshot directly instead of wrapping it within an 
independent snapshot class:
https://github.com/apache/flink/blob/release-1.6/flink-libraries/flink-cep/src/main/java/org/apache/flink/cep/nfa/sharedbuffer/Lockable.java#L188

As a result, the written state information would be 
{{(LockableTypeSerializer, SomeArbitrarySnapshot)}}.

The problem occurs when restoring this in Flink 1.7+, since compatibility 
checks are now performed by providing the new serializer to the snapshot. 
What would happen is 
{{SomeArbitrarySnapshot.resolveSchemaCompatibility(newLockableTypeSerializer)}}, 
which does not work because the arbitrary snapshot does not recognize the 
{{LockableTypeSerializer}}.

To fix this, we essentially need to preprocess that arbitrary snapshot when 
restoring from <= 1.6 savepoints.

A proposed fix would be to have the following interface:
{code}
public interface RequiresLegacySerializerSnapshotPreprocessing {
    TypeSerializerSnapshot preprocessLegacySerializerSnapshot(TypeSerializerSnapshot legacySnapshot);
}
{code}

The {{LockableTypeSerializer}} would then implement this interface, and in the 
{{preprocessLegacySerializerSnapshot}} method, properly wrap that arbitrary 
element serializer snapshot into a {{LockableTypeSerializerSnapshot}}.

In general, this interface is useful for preprocessing any problematic 
snapshot that was returned pre-1.7.

The point in time to check whether a written serializer in <= 1.6 savepoints 
implements this interface, and to preprocess the read snapshot, would be:
https://github.com/apache/flink/blob/a567a1ef628eadad21e11864ec328481cd6d7898/flink-core/src/main/java/org/apache/flink/api/common/typeutils/TypeSerializerSerializationUtil.java#L218
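To make the proposed fix concrete, here is a minimal, self-contained sketch of the idea. The types below are simplified stand-ins for Flink's actual classes (the real {{TypeSerializerSnapshot}} and {{LockableTypeSerializerSnapshot}} carry much more machinery), so this only illustrates the wrap-the-legacy-snapshot technique, not the real implementation:

```java
// Simplified stand-ins -- NOT the real Flink API.
interface TypeSerializerSnapshot {}

// The hook proposed in this ticket.
interface RequiresLegacySerializerSnapshotPreprocessing {
    TypeSerializerSnapshot preprocessLegacySerializerSnapshot(TypeSerializerSnapshot legacySnapshot);
}

// Stand-in for the raw element serializer snapshot read from a <= 1.6 savepoint.
class ArbitraryElementSnapshot implements TypeSerializerSnapshot {}

// Wrapper snapshot that the 1.7+ restore path expects (hypothetical shape).
class LockableTypeSerializerSnapshot implements TypeSerializerSnapshot {
    final TypeSerializerSnapshot elementSnapshot;

    LockableTypeSerializerSnapshot(TypeSerializerSnapshot elementSnapshot) {
        this.elementSnapshot = elementSnapshot;
    }
}

// The serializer implements the hook: wrap the raw element snapshot so
// later compatibility checks see the expected wrapper type.
class LockableTypeSerializer implements RequiresLegacySerializerSnapshotPreprocessing {
    public TypeSerializerSnapshot preprocessLegacySerializerSnapshot(TypeSerializerSnapshot legacySnapshot) {
        if (legacySnapshot instanceof LockableTypeSerializerSnapshot) {
            return legacySnapshot; // already in the expected shape
        }
        return new LockableTypeSerializerSnapshot(legacySnapshot);
    }
}

public class Main {
    public static void main(String[] args) {
        LockableTypeSerializer serializer = new LockableTypeSerializer();
        TypeSerializerSnapshot fixed =
            serializer.preprocessLegacySerializerSnapshot(new ArbitraryElementSnapshot());
        // The legacy snapshot is now wrapped, so resolveSchemaCompatibility
        // would be dispatched on the wrapper instead of the arbitrary snapshot.
        System.out.println(fixed instanceof LockableTypeSerializerSnapshot);
    }
}
```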



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] Releasing Flink 1.6.4

2019-02-13 Thread jing
+1



--
Sent from: http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/


Re: [DISCUSS] Releasing Flink 1.8 / Feature Freeze

2019-02-13 Thread zhijiang
+1 for feature freeze on Feb 22.

I am focusing on two features that were expected to be covered in the 1.8 
release. The broadcast serialization improvement [1] has only one PR left, 
which should be finished in time.
The pluggable shuffle manager [2] might not make it this time, but I think 
it is not a blocker for the release.

[1] https://issues.apache.org/jira/browse/FLINK-10745
[2] https://issues.apache.org/jira/browse/FLINK-10653

Best,
Zhijiang


--
From:Ufuk Celebi 
Send Time:2019年2月13日(星期三) 16:01
To:dev 
Subject:Re: [DISCUSS] Releasing Flink 1.8 / Feature Freeze

+1 for Feb 22. Thanks for being the release manager.

– Ufuk

On Tue, Feb 12, 2019 at 7:00 PM Stephan Ewen  wrote:
>
> +1 for doing a 1.8 release soon.
>
> Some of the Table API refactoring work is blocked on a release (assuming we
> want one release to deprecate some functions before dropping them.)
>
> On Tue, Feb 12, 2019 at 11:03 AM Aljoscha Krettek 
> wrote:
>
> > Hi All,
> >
> > In reference to a recent mail by Ufuk [1] and because it has been a while
> > since the last Flink release we should start thinking about a Flink 1.8
> > release. We’re actually a bit behind the cadence but I think we still
> > shouldn’t rush things. I’m hereby proposing myself as release manager for
> > Flink 1.8 and I also want to suggest February 22 as the date for feature
> > freeze and cutting of the 1.8 release branch. This is quite soon but still
> > gives us two weeks to work on things.
> >
> > What do you think?
> >
> > Best,
> > Aljoscha
> >
> > [1]
> > https://lists.apache.org/thread.html/423164e045c3c206f2b8d5c061be7055ef4bc3fd880c28a862ef5d8c@%3Cdev.flink.apache.org%3E



flink

2019-02-13 Thread Asura
Hi Guys,

I want to contribute to Apache Flink.
Would you please give me the permission as a contributor?
My JIRA ID is Gongwenzhou.

Re: [DISCUSS] Start a user...@flink.apache.org mailing list for the Chinese-speaking community?

2019-02-13 Thread Robert Metzger
Hey all,

I'm now getting more and more MODERATE emails from the mailing list service
for the user-zh@ list from people trying to subscribe.
I would like to ask if any committer (who ideally speaks Chinese) is
willing to moderate the user-zh@ list.

This works as follows:
When somebody who is not subscribed to the mailing list is trying to post a
message there, the moderators receive an email like this:


> To approve:
>user-zh-accept-1550048049.51177@flink.apache.org
> To reject:
>user-zh-reject-1550048049.51177.x...@flink.apache.org
> To give a reason
> to reject:
> %%% Start comment
> %%% End comment
>
>
>
> -- Forwarded message --
> From: "" 
> To: user...@flink.apache.org
> Cc:
> Bcc:
> Date: Wed, 11 Feb 2019 16:37:19 +0800
> Subject: Sub
> Sub


This means xxx...@163.com has sent an email to user...@flink.apache.org
without being subscribed.
Option 1 is to send an email to
user-zh-accept-1550048049.51177@flink.apache.org to accept this
message to the list. But in this case, the message does not have any
meaningful content.
So instead, what I do is, I directly send an email to xxx...@163.com,
explaining how to subscribe to the mailing list.
It is important that people on the user list are subscribed before posting,
so that they receive the answers to their questions.

In rare cases people send emails that moderators can just accept (for
example when a well-known subscriber to the list is accidentally posting
from a different address).

Which Flink committer is willing to help out here?

Best,
Robert



On Tue, Jan 29, 2019 at 10:29 AM Jark Wu  wrote:

> Cheers!
>
> Subscribed. Looking forward to the first Chinese question ;)
>
> On Tue, 29 Jan 2019 at 17:16, Robert Metzger  wrote:
>
> > Success!
> > The mailing list has been created.
> >
> > Send an email to "user-zh-subscr...@flink.apache.org" to subscribe!
> > I've also updated the website with the list:
> > https://flink.apache.org/community.html
> >
> > I will now also tweet about it, even though I believe it'll be more
> > important to advertise the list on Chinese social media platforms.
> >
> >
> > On Tue, Jan 29, 2019 at 1:52 AM ZILI CHEN  wrote:
> >
> > > +1,sounds good
> > >
> > > Ufuk Celebi  于2019年1月29日周二 上午1:46写道:
> > >
> > > > I'm late to this party but big +1. Great idea! I think this will help
> > > > to better represent the actual Flink community size and increase
> > > > interaction between the English and non-English speaking community.
> > > > :-)
> > > >
> > > > On Mon, Jan 28, 2019 at 6:02 PM jincheng sun <
> sunjincheng...@gmail.com
> > >
> > > > wrote:
> > > > >
> > > > > +1,I like the idea very much!
> > > > >
> > > > > Robert Metzger 于2019年1月24日 周四19:15写道:
> > > > >
> > > > > > Hey all,
> > > > > >
> > > > > > I would like to create a new user support mailing list called "
> > > > > > user...@flink.apache.org" to cater to the Chinese-speaking Flink
> > > > community.
> > > > > >
> > > > > > Why?
> > > > > > In the last year 24% of the traffic on flink.apache.org came
> from
> > > the
> > > > US,
> > > > > > 22% from China. In the last three months, China is at 30%, the US
> > at
> > > > 20%.
> > > > > > An additional data point is that there's a Flink DingTalk group
> > with
> > > > more
> > > > > > than 5000 members, asking Flink questions.
> > > > > > I believe that knowledge about Flink should be available in
> public
> > > > forums
> > > > > > (our mailing list), indexable by search engines. If there's a
> huge
> > > > demand
> > > > > > in a Chinese language support, we as a community should provide
> > these
> > > > users
> > > > > > the tools they need, to grow our community and to allow them to
> > > follow
> > > > the
> > > > > > Apache way.
> > > > > >
> > > > > > Is it possible?
> > > > > > I believe it is, because a number of other Apache projects are
> > > running
> > > > > > non-English user@ mailing lists.
> > > > > > Apache OpenOffice, Cocoon, OpenMeetings, CloudStack all have
> > > > non-English
> > > > > > lists: http://mail-archives.apache.org/mod_mbox/
> > > > > > One thing I want to make very clear in this discussion is that
> all
> > > > project
> > > > > > decisions, developer discussions, JIRA tickets etc. need to
> happen
> > in
> > > > > > English, as this is the primary language of the Apache Foundation
> > and
> > > > our
> > > > > > community.
> > > > > > We should also clarify this on the page listing the mailing
> lists.
> > > > > >
> > > > > > How?
> > > > > > If there is consensus in this discussion thread, I would request
> > the
> > > > new
> > > > > > mailing list next Monday.
> > > > > > In case of discussions, I will start a vote on Monday or when the
> > > > > > discussions have stopped.
> > > > > > Then, we should put the new list on our website and start
> promoting
> > > it
> > > > (in
> > > > > > said DingTalk group and on social media).
> > > > > >
> > > > > > Let me know what you think about this idea :)
> > > > > >
> > > > > > Best,
> > > > > > Ro

Re: [DISCUSS] Releasing Flink 1.8 / Feature Freeze

2019-02-13 Thread jincheng sun
Thanks for bringing up the discussion of the 1.8 release, Aljoscha!

+1 for feature freeze for 1.8 release soon.

As Stephan mentioned above, Table API refactoring work is blocked on a
release.
The expected changes for APIs deprecated in 1.8 (such as
ExternalCatalogTable#builder() and new Table(..)) in the Table API
refactoring work have been merged into master. So from my point of view, I
hope we can cut the 1.8 release branch ASAP. :-)

Of course, I agree that we shouldn’t rush things.

Best,
Jincheng






Stephan Ewen  于2019年2月13日周三 上午2:00写道:

> +1 for doing a 1.8 release soon.
>
> Some of the Table API refactoring work is blocked on a release (assuming we
> want one release to deprecate some functions before dropping them.)
>
> On Tue, Feb 12, 2019 at 11:03 AM Aljoscha Krettek 
> wrote:
>
> > Hi All,
> >
> > In reference to a recent mail by Ufuk [1] and because it has been a while
> > since the last Flink release we should start thinking about a Flink 1.8
> > release. We’re actually a bit behind the cadence but I think we still
> > shouldn’t rush things. I’m hereby proposing myself as release manager for
> > Flink 1.8 and I also want to suggest February 22 as the date for feature
> > freeze and cutting of the 1.8 release branch. This is quite soon but
> still
> > gives us two weeks to work on things.
> >
> > What do you think?
> >
> > Best,
> > Aljoscha
> >
> > [1]
> >
> https://lists.apache.org/thread.html/423164e045c3c206f2b8d5c061be7055ef4bc3fd880c28a862ef5d8c@%3Cdev.flink.apache.org%3E
>


[ANNOUNCE] Apache Flink-shaded 6.0 released

2019-02-13 Thread Chesnay Schepler
The Apache Flink community is very happy to announce the release of 
Apache Flink-shaded 6.0.


The flink-shaded project contains a number of shaded dependencies for 
Apache Flink.


Apache Flink® is an open-source stream processing framework for 
distributed, high-performing, always-available, and accurate data 
streaming applications.


The release is available for download at:
https://flink.apache.org/downloads.html

The full release notes are available in Jira:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12344544

We would like to thank all contributors of the Apache Flink community 
who made this release possible!


Regards,
Chesnay



[DISCUSS] Improve the flinkbot

2019-02-13 Thread Robert Metzger
Hey all,

the flinkbot has been active for a week now, and I hope the initial hiccups
have been resolved :)

I wanted to start this as a permanent thread to discuss problems and
improvements with the bot.

*So please post here if you have questions, problems or ideas how to
improve it!*


Re: [DISCUSS] Improve the flinkbot

2019-02-13 Thread Robert Metzger
The first improvement to Flink Bot I would like to introduce is the use of
labels.

I’m proposing to apply one of the following labels depending on the review
progress:


review=needsDescriptionApproval ❌

review=needsConsensusApproval ❌

review=needsArchitectureApproval ❌

review=needsQualityApproval ❌

review=approved ✅


This is how it looks in my test repository:

[image: Screenshot 2019-02-13 10.24.16.png]
(screenshot url:
https://user-images.githubusercontent.com/89049/52701055-9e022600-2f79-11e9-919e-df4338bc0fa3.png
 )


What are the benefits of this?

Labels allow to filter pull requests, so we can get a view of all approved
pull requests, to merge them (after a final review :) )

More senior members of the community can focus on approving consensus and
architecture of pull requests, while newer members of the community can
focus on “just” reviewing the code quality.


If nobody objects here, I will activate this new feature in the coming days.



On Wed, Feb 13, 2019 at 10:29 AM Robert Metzger  wrote:

> Hey all,
>
> the flinkbot has been active for a week now, and I hope the initial
> hiccups have been resolved :)
>
> I wanted to start this as a permanent thread to discuss problems and
> improvements with the bot.
>
> *So please post here if you have questions, problems or ideas how to
> improve it!*
>


Re: [DISCUSS] Improve the flinkbot

2019-02-13 Thread Chesnay Schepler

As of right now I'd like 2 things:

1) By default the bot shows a big red X next to every item; I'd prefer a 
question mark here as this allows us to differentiate between rejected 
and unaddressed points. It's also a bit nicer for contributors imo as it 
does not have such a negative connotation.


2) Be able to approve multiple items without requiring multiple 
mentions, i.e. @flinkbot approve X Y Z should approve X,Y,Z at once.
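As a rough illustration of item 2 (this is not the actual flinkbot code; the class and method names are made up), a single mention could be split into multiple approved aspects along these lines:

```java
import java.util.Arrays;
import java.util.List;

public class BotCommand {
    // Hypothetical parser: extract every aspect named after "approve"
    // in a single "@flinkbot approve X Y Z" mention.
    static List<String> parseApprovals(String comment) {
        String marker = "@flinkbot approve ";
        int idx = comment.indexOf(marker);
        if (idx < 0) {
            return List.of(); // no approve command in this comment
        }
        String rest = comment.substring(idx + marker.length()).trim();
        return Arrays.asList(rest.split("\\s+"));
    }

    public static void main(String[] args) {
        // One mention approves three review aspects at once.
        System.out.println(parseApprovals("@flinkbot approve description consensus quality"));
    }
}
```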


On 13.02.2019 10:29, Robert Metzger wrote:

Hey all,

the flinkbot has been active for a week now, and I hope the initial hiccups
have been resolved :)

I wanted to start this as a permanent thread to discuss problems and
improvements with the bot.

*So please post here if you have questions, problems or ideas how to
improve it!*





Re: [DISCUSS] Improve the flinkbot

2019-02-13 Thread Chesnay Schepler
"More senior members of the community can focus on approving consensus 
and architecture of pull requests, while newer members of the community 
can focus on “just” reviewing the code quality."


TBH I really don't see this happening, so I'm not too hot for this change.

How have you solved the permission issue for the bot?

If the permissions are set what I'd like to see is /attention/ 
automatically adding the respective person to the list of reviewers.


On 13.02.2019 10:30, Robert Metzger wrote:


The first improvement to Flink Bot I would like to introduce is the 
use of labels.


I’m proposing to apply one of the following labels depending on the 
review progress:



review=needsDescriptionApproval ❌

review=needsConsensusApproval ❌

review=needsArchitectureApproval ❌

review=needsQualityApproval ❌

review=approved ✅


This is how it looks in my test repository:

Screenshot 2019-02-13 10.24.16.png
(screenshot url: 
https://user-images.githubusercontent.com/89049/52701055-9e022600-2f79-11e9-919e-df4338bc0fa3.png ) 




What are the benefits of this?

Labels allow to filter pull requests, so we can get a view of all 
approved pull requests, to merge them (after a final review :) )


More senior members of the community can focus on approving consensus 
and architecture of pull requests, while newer members of the 
community can focus on “just” reviewing the code quality.



If nobody objects here, I will activate this new feature in the coming 
days.




On Wed, Feb 13, 2019 at 10:29 AM Robert Metzger > wrote:


Hey all,

the flinkbot has been active for a week now, and I hope the
initial hiccups have been resolved :)

I wanted to start this as a permanent thread to discuss problems
and improvements with the bot.

*So please post here if you have questions, problems or ideas how
to improve it!*





Re: [DISCUSS] Clean up and reorganize the JIRA components

2019-02-13 Thread Chesnay Schepler
The only parent I can think of is "Infrastructure", but I don't quite 
like it :/


+1 for "Runtime / Configuration"; this is too general to be placed in 
coordination imo.


On 12.02.2019 18:25, Robert Metzger wrote:

Thanks a lot for your feedback Chesnay!

re build/travis/release: Do you have a good idea for a common parent for
"Build System", "Travis" and "Release System"?

re legacy: Okay, I see your point. I will keep the Legacy Components prefix.

re library: I think I don't have an argument here. My proposal is based on
what I felt as being right :) I added the "Library / " prefix to the
proposal.

re core/config: From the proposed components, I see the best match with
"Runtime / Coordination", but I agree that this example is difficult to
place into my proposed scheme. Do you think we should introduce "Runtime /
Configuration" as a component?


I updated the proposal accordingly!





On Tue, Feb 12, 2019 at 12:19 PM Chesnay Schepler 
wrote:


re build/travis/release: No, I'm against merging build system, travis
and release system.

re legacy: So going forward you're proposing to move dropped features
into the legacy bucket and make it impossible to search for specific
issues for that component? There's 0 overhead to having these
components, so I really don't get the benefit here, but see the overhead.
I don't buy the argument of "people will not open issues if the
component doesn't exist", they will just leave the component field blank
or add a random one (that would be wrong). In fact, if you had a
storm/tez component (that users would adhere to) then it would be
_easier_ to figure out whether an issue can be rejected right away.

re library: If you are against a library category, what's your argument
for a connector category?

re tests: I don't mind "tests" being removed from tickets about test
instabilities, but you specified the migration as "rename E2E tests"
which is not equivalent.
Under what category would you file modifications to flink-test-utils-junit?
I would propose to not differentiate between e2e and other tests; I
would go along with "Test infrastructure", and remove the major "Tests"
category.

re core/config: As an example, where (under Runtime) would you place the
introduction of the ConfigOption class?

On 11.02.2019 11:31, Robert Metzger wrote:

Thanks a lot for your feedback!

@Timo:
I've followed your suggestions and updated the proposed names in the wiki.

Regarding a new "SQL/Connectors" component: I (with admittedly not much
knowledge) would not add this component at the moment, and put the SQL
stuff into the respective connector component.
It is probably pretty difficult for a user to decide whether a bug belongs
to "SQL/Connector" or to "Connectors/Kafka" when Kafka in SQL does not work.

@Chesnay:
- You are suggesting to rename "Build System" to "Maven" and still merge it
with "Travis", "Release System" etc. as in the proposal?

- "Runtime / Control Plan" vs "Runtime / Coordination" -- I changed the
proposal

- Re. "Documentation": Yes, I think that would be better in the long run.
We are already in a situation where there are groups within the community
focusing on certain areas of the code (such as SQL, the runtime,
connectors). Those groups will monitor their components, but it will be a
lot of overhead for them to monitor the "Documentation" component.
We can also try to assign documentation components to both "Documentation"
and the affected component, such as "Runtime / Metrics".

- Removed "Misc / " prefix.

- "Legacy Components": Usually legacy components usually have very few
tickets. "Flink on Tez" has 13, "Storm Compat" ~30, and JIRA has a bulk
edit feature :)
The benefit of having it generalized is that people will probably not add
tickets to it.

- "Libraries /" prefix: I don't think that it is necessary. Some libraries
might grow in the future (like the Table API), then we need to rename.
The "flink-libraries" module does contain stuff like the SQL client or the
Python API, which are already covered by other components in my proposal --
so going with the maven module structure is not an argument here.

- "End to end infrastructure" and "Tests": The same argument as with
"Documentation" applies here. The maintainers of Kafka, Metrics, etc.
should get visibility into "their" test instabilities through "their"
components.
Not many people will feel responsible for the "Tests" component.

For "Core" and "Configuration", I will move the tickets to the appropriate
components in "Runtime /".

For "API / Scala": Good point. I will add that component.

How to do it? I will just go through the pain and do it.


Best,
Robert




On Fri, Feb 8, 2019 at 2:40 PM Chesnay Schepler  wrote:

Some concerns:

Travis and build system / release system are entirely different. I would
even keep the release system away from the build-system, as it is more
about the release scripts and documentation, while the latter is about
maven. Actually I'd just rename build-

Re: [ANNOUNCE] Apache Flink-shaded 6.0 released

2019-02-13 Thread Jeff Zhang
Thanks Chesnay, but looks like the document has not been updated yet,
there's no download link for 6.0 https://flink.apache.org/downloads.html


Chesnay Schepler  于2019年2月13日周三 下午5:26写道:

> The Apache Flink community is very happy to announce the release of
> Apache Flink-shaded 6.0.
>
> The flink-shaded project contains a number of shaded dependencies for
> Apache Flink.
>
> Apache Flink® is an open-source stream processing framework for
> distributed, high-performing, always-available, and accurate data
> streaming applications.
>
> The release is available for download at:
> https://flink.apache.org/downloads.html
>
> The full release notes are available in Jira:
>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12344544
>
> We would like to thank all contributors of the Apache Flink community
> who made this release possible!
>
> Regards,
> Chesnay
>
>

-- 
Best Regards

Jeff Zhang


Re: [DISCUSS] Improve the flinkbot

2019-02-13 Thread Stephan Ewen
+1 for question mark instead of X - definitely comes across nicer

Maybe we can make these labels shorter / more compact?
Do we need to go through all steps individually, or can one immediately
jump to "approved" with one command? Or jump to "code quality review"?

On Wed, Feb 13, 2019 at 10:41 AM Chesnay Schepler 
wrote:

> "More senior members of the community can focus on approving consensus
> and architecture of pull requests, while newer members of the community
> can focus on “just” reviewing the code quality."
>
> TBH I reallydon't see this happening, so I'm not too hot for this change.
>
> How have you solved the permission issue for the bot?
>
> If the permissions are set what I'd like to see is /attention/
> automatically adding the respective person to the list of reviewers.
>
> On 13.02.2019 10:30, Robert Metzger wrote:
> >
> > The first improvement to Flink Bot I would like to introduce is the
> > use of labels.
> >
> > I’m proposing to apply one of the following labels depending on the
> > review progress:
> >
> >
> > review=needsDescriptionApproval ❌
> >
> > review=needsConsensusApproval ❌
> >
> > review=needsArchitectureApproval ❌
> >
> > review=needsQualityApproval ❌
> >
> > review=approved ✅
> >
> >
> > This is how it looks in my test repository:
> >
> > Screenshot 2019-02-13 10.24.16.png
> > (screenshot url:
> >
> https://user-images.githubusercontent.com/89049/52701055-9e022600-2f79-11e9-919e-df4338bc0fa3.png
> )
> >
> >
> >
> > What are the benefits of this?
> >
> > Labels allow to filter pull requests, so we can get a view of all
> > approved pull requests, to merge them (after a final review :) )
> >
> > More senior members of the community can focus on approving consensus
> > and architecture of pull requests, while newer members of the
> > community can focus on “just” reviewing the code quality.
> >
> >
> > If nobody objects here, I will activate this new feature in the coming
> > days.
> >
> >
> >
> > On Wed, Feb 13, 2019 at 10:29 AM Robert Metzger  > > wrote:
> >
> > Hey all,
> >
> > the flinkbot has been active for a week now, and I hope the
> > initial hiccups have been resolved :)
> >
> > I wanted to start this as a permanent thread to discuss problems
> > and improvements with the bot.
> >
> > *So please post here if you have questions, problems or ideas how
> > to improve it!*
> >
>
>


Re: [ANNOUNCE] Apache Flink-shaded 6.0 released

2019-02-13 Thread Chesnay Schepler

It's available now. Should've merged the PR before sending out the mail

On 13.02.2019 10:34, Jeff Zhang wrote:

Thanks Chesnay, but looks like the document has not been updated yet,
there's no download link for 6.0 https://flink.apache.org/downloads.html


Chesnay Schepler  于2019年2月13日周三 下午5:26写道:


The Apache Flink community is very happy to announce the release of
Apache Flink-shaded 6.0.

The flink-shaded project contains a number of shaded dependencies for
Apache Flink.

Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data
streaming applications.

The release is available for download at:
https://flink.apache.org/downloads.html

The full release notes are available in Jira:

https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12344544

We would like to thank all contributors of the Apache Flink community
who made this release possible!

Regards,
Chesnay






Re: [ANNOUNCE] Apache Flink-shaded 6.0 released

2019-02-13 Thread jincheng sun
Thank you very much for managing Flink-shaded 6.0 release, Chesnay!

Cheers,
Jincheng

Chesnay Schepler  于2019年2月13日周三 下午5:26写道:

> The Apache Flink community is very happy to announce the release of
> Apache Flink-shaded 6.0.
>
> The flink-shaded project contains a number of shaded dependencies for
> Apache Flink.
>
> Apache Flink® is an open-source stream processing framework for
> distributed, high-performing, always-available, and accurate data
> streaming applications.
>
> The release is available for download at:
> https://flink.apache.org/downloads.html
>
> The full release notes are available in Jira:
>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12344544
>
> We would like to thank all contributors of the Apache Flink community
> who made this release possible!
>
> Regards,
> Chesnay
>
>


Re: [ANNOUNCE] Apache Flink-shaded 6.0 released

2019-02-13 Thread jincheng sun
Hi Jeff, I see the download link in the `Flink-shaded` section:

   - Flink-shaded 6.0 - 2019-02-12 (Source)

Best,
Jincheng

Jeff Zhang wrote on Wed, Feb 13, 2019 at 5:53 PM:

> Thanks Chesnay, but looks like the document has not been updated yet,
> there's no download link for 6.0 https://flink.apache.org/downloads.html
>
>
> Chesnay Schepler  于2019年2月13日周三 下午5:26写道:
>
> > The Apache Flink community is very happy to announce the release of
> > Apache Flink-shaded 6.0.
> >
> > The flink-shaded project contains a number of shaded dependencies for
> > Apache Flink.
> >
> > Apache Flink® is an open-source stream processing framework for
> > distributed, high-performing, always-available, and accurate data
> > streaming applications.
> >
> > The release is available for download at:
> > https://flink.apache.org/downloads.html
> >
> > The full release notes are available in Jira:
> >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12344544
> >
> > We would like to thank all contributors of the Apache Flink community
> > who made this release possible!
> >
> > Regards,
> > Chesnay
> >
> >
>
> --
> Best Regards
>
> Jeff Zhang
>


Re: [DISCUSS] Start a user...@flink.apache.org mailing list for the Chinese-speaking community?

2019-02-13 Thread Tzu-Li (Gordon) Tai
Hi Robert,

I'll be willing to help with moderating the user-zh@ list.

Cheers,
Gordon

On Wed, Feb 13, 2019 at 5:05 PM Robert Metzger  wrote:

> Hey all,
>
> I'm now getting more and more MODERATE emails from the mailing list service
> for the user-zh@ list from people trying to subscribe.
> I would like to ask if any committer (who ideally speaks Chinese) is
> willing to moderate the user-zh@ list.
>
> This works as follows:
> When somebody who is not subscribed to the mailing list is trying to post a
> message there, the moderators receive an email like this:
>
>
> > To approve:
> >user-zh-accept-1550048049.51177@flink.apache.org
> > To reject:
> >user-zh-reject-1550048049.51177.x...@flink.apache.org
> > To give a reason
> > to reject:
> > %%% Start comment
> > %%% End comment
> >
> >
> >
> > -- Forwarded message --
> > From: "" 
> > To: user...@flink.apache.org
> > Cc:
> > Bcc:
> > Date: Wed, 11 Feb 2019 16:37:19 +0800
> > Subject: Sub
> > Sub
>
>
> This means xxx...@163.com has send an email to user...@flink.apache.org
> without being subscribed.
> Option 1 is to send an email to
> user-zh-accept-1550048049.51177@flink.apache.org
>  to accept this
> message to the list. But in this case, the message does not have any
> meaningful content.
> So instead, what I do is, I directly send an email to xxx...@163.com,
> explaining how to subscribe to the mailing list.
> It is important that people on the user list are subscribed before posting,
> so that they receive the answers to their questions.
>
> In rare cases people send emails that moderators can just accept (for
> example when a well-known subscriber to the list is accidentally posting
> from a different address).
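The accept/reject addresses above follow the usual ezmlm moderation pattern of `<list>-accept-<token>@<domain>` and `<list>-reject-<token>@<domain>`. A minimal sketch of how a moderator's helper script might build them — the helper function and the token below are made up for illustration:

```python
# Hypothetical sketch of how ezmlm-style moderation addresses are formed.
# The token here is an illustrative placeholder, not a real one.

def moderation_addresses(list_name, token, domain="flink.apache.org"):
    """Build the addresses a moderator replies to for a held message."""
    return {
        "accept": f"{list_name}-accept-{token}@{domain}",
        "reject": f"{list_name}-reject-{token}@{domain}",
    }

addrs = moderation_addresses("user-zh", "1234567890.12345")
print(addrs["accept"])  # user-zh-accept-1234567890.12345@flink.apache.org
```

Replying to the accept address releases the held message to the list; replying to the reject address drops it.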
>
> Which Flink committer is willing to help out here?
>
> Best,
> Robert
>
>
>
> On Tue, Jan 29, 2019 at 10:29 AM Jark Wu  wrote:
>
> > Cheers!
> >
> > Subscribed. Looking forward to the first Chinese question ;)
> >
> > On Tue, 29 Jan 2019 at 17:16, Robert Metzger 
> wrote:
> >
> > > Success!
> > > The mailing list has been created.
> > >
> > > Send an email to "user-zh-subscr...@flink.apache.org" to subscribe!
> > > I've also updated the website with the list:
> > > https://flink.apache.org/community.html
> > >
> > > I will now also tweet about it, even though I believe it'll be more
> > > important to advertise the list on Chinese social media platforms.
> > >
> > >
> > > On Tue, Jan 29, 2019 at 1:52 AM ZILI CHEN 
> wrote:
> > >
> > > > +1,sounds good
> > > >
> > > > Ufuk Celebi  于2019年1月29日周二 上午1:46写道:
> > > >
> > > > > I'm late to this party but big +1. Great idea! I think this will
> help
> > > > > to better represent the actual Flink community size and increase
> > > > > interaction between the English and non-English speaking community.
> > > > > :-)
> > > > >
> > > > > On Mon, Jan 28, 2019 at 6:02 PM jincheng sun <
> > sunjincheng...@gmail.com
> > > >
> > > > > wrote:
> > > > > >
> > > > > > +1,I like the idea very much!
> > > > > >
> > > > > > Robert Metzger 于2019年1月24日 周四19:15写道:
> > > > > >
> > > > > > > Hey all,
> > > > > > >
> > > > > > > I would like to create a new user support mailing list called "
> > > > > > > user...@flink.apache.org" to cater the Chinese-speaking Flink
> > > > > community.
> > > > > > >
> > > > > > > Why?
> > > > > > > In the last year 24% of the traffic on flink.apache.org came
> > from
> > > > the
> > > > > US,
> > > > > > > 22% from China. In the last three months, China is at 30%, the
> US
> > > at
> > > > > 20%.
> > > > > > > An additional data point is that there's a Flink DingTalk group
> > > with
> > > > > more
> > > > > > > than 5000 members, asking Flink questions.
> > > > > > > I believe that knowledge about Flink should be available in
> > public
> > > > > forums
> > > > > > > (our mailing list), indexable by search engines. If there's a
> > huge
> > > > > demand
> > > > > > > in a Chinese language support, we as a community should provide
> > > these
> > > > > users
> > > > > > > the tools they need, to grow our community and to allow them to
> > > > follow
> > > > > the
> > > > > > > Apache way.
> > > > > > >
> > > > > > > Is it possible?
> > > > > > > I believe it is, because a number of other Apache projects are
> > > > running
> > > > > > > non-English user@ mailing lists.
> > > > > > > Apache OpenOffice, Cocoon, OpenMeetings, CloudStack all have
> > > > > non-English
> > > > > > > lists: http://mail-archives.apache.org/mod_mbox/
> > > > > > > One thing I want to make very clear in this discussion is that
> > all
> > > > > project
> > > > > > > decisions, developer discussions, JIRA tickets etc. need to
> > happen
> > > in
> > > > > > > English, as this is the primary language of the Apache
> Foundation
> > > and
> > > > > our
> > > > > > > community.
> > > > > > > We should also clarify this on the page listing the mailing
> > lists.
> > > > > > >
> > > > > > > How?
> > > > > > > If there is consensus in this discussion thread, I would
> request
> > 

Re: [DISCUSS] Improve the flinkbot

2019-02-13 Thread jincheng sun
Hi Robert, thanks for bringing up the discussion! I think adding the labels
is a good idea!

About the state of the labels, I suggest that each label initialize as a red
X, turn into a yellow question mark, and turn into a blue checkmark when
approved.

This way contributors can know whether these tags have been processed. What
do you think?

Best,
Jincheng

Robert Metzger wrote on Wed, Feb 13, 2019 at 5:30 PM:

> The first improvement to Flink Bot I would like to introduce is the use of
> labels.
>
> I’m proposing to apply one of the following labels depending on the review
> progress:
>
>
> review=needsDescriptionApproval ❌
>
> review=needsConsensusApproval ❌
>
> review=needsArchitectureApproval ❌
>
> review=needsQualityApproval ❌
>
> review=approved ✅
>
>
> This is how it looks in my test repository:
>
> [image: Screenshot 2019-02-13 10.24.16.png]
> (screenshot url:
> https://user-images.githubusercontent.com/89049/52701055-9e022600-2f79-11e9-919e-df4338bc0fa3.png
>  )
>
>
> What are the benefits of this?
>
> Labels allow to filter pull requests, so we can get a view of all approved
> pull requests, to merge them (after a final review :) )
>
> More senior members of the community can focus on approving consensus and
> architecture of pull requests, while newer members of the community can
> focus on “just” reviewing the code quality.
>
>
> If nobody objects here, I will activate this new feature in the coming
> days.
>
>
>
> On Wed, Feb 13, 2019 at 10:29 AM Robert Metzger 
> wrote:
>
>> Hey all,
>>
>> the flinkbot has been active for a week now, and I hope the initial
>> hiccups have been resolved :)
>>
>> I wanted to start this as a permanent thread to discuss problems and
>> improvements with the bot.
>>
>> *So please post here if you have questions, problems or ideas how to
>> improve it!*
>>
>


Re: [DISCUSS] Releasing Flink 1.6.4

2019-02-13 Thread Robert Metzger
Can we start creating the release candidate for the 1.6.4 version, or are
there still commits in the "release-1.6" branch missing?

On Wed, Feb 13, 2019 at 9:41 AM jing  wrote:

> +1
>
>
>
> --
> Sent from: http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/
>


Re: [DISCUSS] Improve the flinkbot

2019-02-13 Thread Till Rohrmann
I'd like to be able to specify that a PR does not need special attention. At
the moment you need to specify a person for point 3.

Big +1 for having a command to approve everything until (and also
including) a specified state.
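A rough sketch of how the bot could combine the proposed labels with such an "approve until" command — the label names mirror Robert's list, but the step order and these helper functions are assumptions, not the bot's actual implementation:

```python
# Hypothetical sketch: derive a single review label from the set of approved
# review steps, plus Till's "approve everything until a given step" command.

REVIEW_STEPS = ["Description", "Consensus", "Architecture", "Quality"]

def review_label(approved):
    """Label the PR by the first review step that is not yet approved."""
    for step in REVIEW_STEPS:
        if step not in approved:
            return f"review=needs{step}Approval"
    return "review=approved"

def approve_until(step):
    """Approve everything up to and including the given step."""
    return set(REVIEW_STEPS[: REVIEW_STEPS.index(step) + 1])

print(review_label(approve_until("Consensus")))  # review=needsArchitectureApproval
print(review_label(approve_until("Quality")))    # review=approved
```

Once the labels are applied, GitHub's standard search syntax can filter on them, e.g. `is:open is:pr label:"review=approved"`.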

Cheers,
Till

On Wed, Feb 13, 2019 at 11:17 AM jincheng sun 
wrote:

> Hi Robert, Thanks for bring up the discussion! I think add the labels is
> good idea!
>
> About the state of labels, I suggest that the state initializes the red X
> turns yellow question mark , and turns blue checkmark when approved.
>
> This way the contributors can know if these tags have been processed. What
> to you think?
>
> Best,
> Jincheng
>
> Robert Metzger  于2019年2月13日周三 下午5:30写道:
>
> > The first improvement to Flink Bot I would like to introduce is the use
> of
> > labels.
> >
> > I’m proposing to apply one of the following labels depending on the
> review
> > progress:
> >
> >
> > review=needsDescriptionApproval ❌
> >
> > review=needsConsensusApproval ❌
> >
> > review=needsArchitectureApproval ❌
> >
> > review=needsQualityApproval ❌
> >
> > review=approved ✅
> >
> >
> > This is how it looks in my test repository:
> >
> > [image: Screenshot 2019-02-13 10.24.16.png]
> > (screenshot url:
> >
> https://user-images.githubusercontent.com/89049/52701055-9e022600-2f79-11e9-919e-df4338bc0fa3.png
> >  )
> >
> >
> > What are the benefits of this?
> >
> > Labels allow to filter pull requests, so we can get a view of all
> approved
> > pull requests, to merge them (after a final review :) )
> >
> > More senior members of the community can focus on approving consensus and
> > architecture of pull requests, while newer members of the community can
> > focus on “just” reviewing the code quality.
> >
> >
> > If nobody objects here, I will activate this new feature in the coming
> > days.
> >
> >
> >
> > On Wed, Feb 13, 2019 at 10:29 AM Robert Metzger 
> > wrote:
> >
> >> Hey all,
> >>
> >> the flinkbot has been active for a week now, and I hope the initial
> >> hiccups have been resolved :)
> >>
> >> I wanted to start this as a permanent thread to discuss problems and
> >> improvements with the bot.
> >>
> >> *So please post here if you have questions, problems or ideas how to
> >> improve it!*
> >>
> >
>


Re: [DISCUSS] Releasing Flink 1.6.4

2019-02-13 Thread jincheng sun
Hi Robert, thanks for your comments in the Google doc about whether the
relevant JIRAs are included in release 1.6.4!

I agree that all JIRAs mentioned in the Google doc will not go into the
1.6.4 release.

If there is no other feedback about commits that need to be merged into the
1.6 branch, I think we can start creating the release candidate for the
1.6.4 version tomorrow.

Best,
Jincheng

Robert Metzger wrote on Wed, Feb 13, 2019 at 6:33 PM:

> Can we start creating the release candidate for the 1.6.4 version, or are
> there still commits in the "release-1.6" branch missing?
>
> On Wed, Feb 13, 2019 at 9:41 AM jing  wrote:
>
> > +1
> >
> >
> >
> > --
> > Sent from:
> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/
> >
>


[DISCUSS] Adding a mid-term roadmap to the Flink website

2019-02-13 Thread Stephan Ewen
Hi all!

Recently several contributors, committers, and users asked about making it
more visible which way the project is currently going.

Users and developers can track the direction by following the discussion
threads and JIRA, but due to the mass of discussions and open issues, it is
very hard to get a good overall picture.
Especially for new users and contributors, it is very hard to get a quick
overview of the project direction.

To fix this, I suggest adding a brief roadmap summary to the homepage. It
is a bit of a commitment to keep that roadmap up to date, but I think the
benefit for users justifies that.
The Apache Beam project has added such a roadmap [1], which was received
very well by the community; I would suggest following a similar structure
here.

If the community is in favor of this, I would volunteer to write a first
version of such a roadmap. The points I would include are below.

Best,
Stephan

[1] https://beam.apache.org/roadmap/



Disclaimer: Apache Flink is not governed or steered by any one single
entity, but by its community and Project Management Committee (PMC). This
is not an authoritative roadmap in the sense of a plan with a specific
timeline. Instead, we share our vision for the future and major initiatives
that are receiving attention and give users and contributors an
understanding what they can look forward to.

*Future Role of Table API and DataStream API*
  - Table API becomes first class citizen
  - Table API becomes primary API for analytics use cases
  * Declarative, automatic optimizations
  * No manual control over state and timers
  - DataStream API becomes primary API for applications and data pipeline
use cases
  * Physical, user controls data types, no magic or optimizer
  * Explicit control over state and time

*Batch Streaming Unification*
  - Table API unification (environments) (FLIP-32)
  - New unified source interface (FLIP-27)
  - Runtime operator unification & code reuse between DataStream / Table
  - Extending Table API to make it convenient API for all analytical use
cases (easier mix in of UDFs)
  - Same join operators on bounded/unbounded Table API and DataStream API

*Faster Batch (Bounded Streams)*
  - Much of this comes via Blink contribution/merging
  - Fine-grained Fault Tolerance on bounded data (Table API)
  - Batch Scheduling on bounded data (Table API)
  - External Shuffle Services Support on bounded streams
  - Caching of intermediate results on bounded data (Table API)
  - Extending DataStream API to explicitly model bounded streams (API
breaking)
  - Add fine fault tolerance, scheduling, caching also to DataStream API

*Streaming State Evolution*
  - Let all built-in serializers support stable evolution
  - First class support for other evolvable formats (Protobuf, Thrift)
  - Savepoint input/output format to modify / adjust savepoints

*Simpler Event Time Handling*
  - Event Time Alignment in Sources
  - Simpler out-of-the box support in sources

*Checkpointing*
  - Consistency of Side Effects: suspend / end with savepoint (FLIP-34)
  - Failed checkpoints explicitly aborted on TaskManagers (not only on
coordinator)

*Automatic scaling (adjusting parallelism)*
  - Reactive scaling
  - Active scaling policies

*Kubernetes Integration*
  - Active Kubernetes Integration (Flink actively manages containers)

*SQL Ecosystem*
  - Extended Metadata Stores / Catalog / Schema Registries support
  - DDL support
  - Integration with Hive Ecosystem

*Simpler Handling of Dependencies*
  - Scala in the APIs, but not in the core (hide in separate class loader)
  - Hadoop-free by default


Re: [DISCUSS] Adding a mid-term roadmap to the Flink website

2019-02-13 Thread jincheng sun
Very excited, and thank you for launching such a great discussion, Stephan!

Just one small suggestion: in the Batch Streaming Unification section,
should we add an item:

- Same window operators on bounded/unbounded Table API and DataStream API
(currently the OVER window only exists in SQL/Table API; the DataStream API
does not yet support it)

Best,
Jincheng

Stephan Ewen wrote on Wed, Feb 13, 2019 at 7:21 PM:

> Hi all!
>
> Recently several contributors, committers, and users asked about making it
> more visible in which way the project is currently going.
>
> Users and developers can track the direction by following the discussion
> threads and JIRA, but due to the mass of discussions and open issues, it is
> very hard to get a good overall picture.
> Especially for new users and contributors, is is very hard to get a quick
> overview of the project direction.
>
> To fix this, I suggest to add a brief roadmap summary to the homepage. It
> is a bit of a commitment to keep that roadmap up to date, but I think the
> benefit for users justifies that.
> The Apache Beam project has added such a roadmap [1]
> , which was received very well by the
> community, I would suggest to follow a similar structure here.
>
> If the community is in favor of this, I would volunteer to write a first
> version of such a roadmap. The points I would include are below.
>
> Best,
> Stephan
>
> [1] https://beam.apache.org/roadmap/
>
> 
>
> Disclaimer: Apache Flink is not governed or steered by any one single
> entity, but by its community and Project Management Committee (PMC). This
> is not a authoritative roadmap in the sense of a plan with a specific
> timeline. Instead, we share our vision for the future and major initiatives
> that are receiving attention and give users and contributors an
> understanding what they can look forward to.
>
> *Future Role of Table API and DataStream API*
>   - Table API becomes first class citizen
>   - Table API becomes primary API for analytics use cases
>   * Declarative, automatic optimizations
>   * No manual control over state and timers
>   - DataStream API becomes primary API for applications and data pipeline
> use cases
>   * Physical, user controls data types, no magic or optimizer
>   * Explicit control over state and time
>
> *Batch Streaming Unification*
>   - Table API unification (environments) (FLIP-32)
>   - New unified source interface (FLIP-27)
>   - Runtime operator unification & code reuse between DataStream / Table
>   - Extending Table API to make it convenient API for all analytical use
> cases (easier mix in of UDFs)
>   - Same join operators on bounded/unbounded Table API and DataStream API
>
> *Faster Batch (Bounded Streams)*
>   - Much of this comes via Blink contribution/merging
>   - Fine-grained Fault Tolerance on bounded data (Table API)
>   - Batch Scheduling on bounded data (Table API)
>   - External Shuffle Services Support on bounded streams
>   - Caching of intermediate results on bounded data (Table API)
>   - Extending DataStream API to explicitly model bounded streams (API
> breaking)
>   - Add fine fault tolerance, scheduling, caching also to DataStream API
>
> *Streaming State Evolution*
>   - Let all built-in serializers support stable evolution
>   - First class support for other evolvable formats (Protobuf, Thrift)
>   - Savepoint input/output format to modify / adjust savepoints
>
> *Simpler Event Time Handling*
>   - Event Time Alignment in Sources
>   - Simpler out-of-the box support in sources
>
> *Checkpointing*
>   - Consistency of Side Effects: suspend / end with savepoint (FLIP-34)
>   - Failed checkpoints explicitly aborted on TaskManagers (not only on
> coordinator)
>
> *Automatic scaling (adjusting parallelism)*
>   - Reactive scaling
>   - Active scaling policies
>
> *Kubernetes Integration*
>   - Active Kubernetes Integration (Flink actively manages containers)
>
> *SQL Ecosystem*
>   - Extended Metadata Stores / Catalog / Schema Registries support
>   - DDL support
>   - Integration with Hive Ecosystem
>
> *Simpler Handling of Dependencies*
>   - Scala in the APIs, but not in the core (hide in separate class loader)
>   - Hadoop-free by default
>
>


Apply JIRA contributor

2019-02-13 Thread Yaoting Gong
Hi Guys,

I want to contribute to Apache Flink.
Would you please give me the permission as a contributor?
My JIRA ID is Tom Goong

Thanks.
Tom Goong


Re: Apply JIRA contributor

2019-02-13 Thread Robert Metzger
Hi,
I added you!

On Wed, Feb 13, 2019 at 2:19 PM Yaoting Gong 
wrote:

> Hi Guys,
>
> I want to contribute to Apache Flink.
> Would you please give me the permission as a contributor?
> My JIRA ID is Tom Goong
>
> Thanks.
> Tom Goong
>


Re: flink

2019-02-13 Thread Robert Metzger
Hi,
I added you!

On Wed, Feb 13, 2019 at 10:01 AM Asura <1402357...@qq.com> wrote:

> Hi Guys,
>
> I want to contribute to Apache Flink.
> Would you please give me the permission as a contributor?
> My JIRA ID is Gongwenzhou.


Re: Request for contribution access

2019-02-13 Thread Robert Metzger
Hey,
I added you to our JIRA

On Wed, Feb 13, 2019 at 9:17 AM 区 <670694...@qq.com> wrote:

> Hi Guys,
>
>
> I want to contribute to Apache Flink.
> Would you please give me the permission as a contributor?
> My JIRA Username is ysqwhiletrue. My JIRA full name sishu.yss.


Re: Request for contribution access

2019-02-13 Thread Robert Metzger
Hi,

I added you to our JIRA.

On Wed, Feb 13, 2019 at 9:17 AM Xin Ma  wrote:

> Hi Guys,
>
> I want to contribute to Apache Flink.
> Would you please give me the permission as a contributor?
> My JIRA ID is iluvex.
>
>
> Best regards,
>
> Xin
>


Re: request for access

2019-02-13 Thread Robert Metzger
Hey,
I added you to our JIRA.

On Wed, Feb 13, 2019 at 9:07 AM 刘建刚  wrote:

> Hi Guys,
>
> I want to contribute to Apache Flink.
> Would you please give me the permission as a contributor?
> My JIRA Username is Jiangang. My JIRA full name is Liu.
>


[jira] [Created] (FLINK-11592) Port TaskManagerFailsWithSlotSharingITCase to new code base

2019-02-13 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-11592:
-

 Summary: Port TaskManagerFailsWithSlotSharingITCase to new code 
base
 Key: FLINK-11592
 URL: https://issues.apache.org/jira/browse/FLINK-11592
 Project: Flink
  Issue Type: Sub-task
Affects Versions: 1.8.0
Reporter: Till Rohrmann
Assignee: Till Rohrmann
 Fix For: 1.8.0


Port {{TaskManagerFailsWithSlotSharingITCase}} to new code base.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


List of consumed kafka topics should not be restored from state

2019-02-13 Thread Gyula Fóra
Hi!

I have run into a weird issue which I could have sworn wouldn't happen :D
I feel there was a discussion about this in the past, but maybe I'm wrong,
so I hope someone can point me to a ticket.

Let's say you create a Kafka consumer that consumes (t1,t2,t3), you take a
savepoint and deploy a new version that only consumes (t1).

The restore logic now still starts to consume (t1,t2,t3) which feels very
unintuitive as those were explicitly removed from the list. It is also hard
to debug as the topics causing the problem are not defined anywhere in your
job, configs etc.

Has anyone run into this issue? Should we change this default behaviour or
at least have an option to not do this?

Cheers,
Gyula


[jira] [Created] (FLINK-11593) Check & port TaskManagerTest to new code base

2019-02-13 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-11593:
-

 Summary: Check & port TaskManagerTest to new code base
 Key: FLINK-11593
 URL: https://issues.apache.org/jira/browse/FLINK-11593
 Project: Flink
  Issue Type: Sub-task
Affects Versions: 1.8.0
Reporter: Till Rohrmann
 Fix For: 1.8.0


Check and port {{TaskManagerTest}} to new code base.





[jira] [Created] (FLINK-11594) Check & port TaskManagerRegistrationTest to new code base

2019-02-13 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-11594:
-

 Summary: Check & port TaskManagerRegistrationTest to new code base
 Key: FLINK-11594
 URL: https://issues.apache.org/jira/browse/FLINK-11594
 Project: Flink
  Issue Type: Sub-task
Affects Versions: 1.8.0
Reporter: Till Rohrmann
 Fix For: 1.8.0


Check and port {{TaskManagerRegistrationTest}} to new code base.





Re: List of consumed kafka topics should not be restored from state

2019-02-13 Thread Tzu-Li (Gordon) Tai
Hi,

Partition offsets stored in state will always be respected when the
consumer is restored from checkpoints / savepoints.
AFAIK, this seems to have been the behaviour for quite some time now (since
FlinkKafkaConsumer08).

I think in the past there was some discussion about at least allowing some
way to ignore restored partition offsets.
One way to enable this is to filter the restored partition offsets based on
the configured list of specified topics / topic regex pattern in the
current execution. This should work, since this can only be modified when
restoring from savepoints (i.e. manual restores).
To avoid breaking the current behaviour, we can maybe add a
`filterRestoredPartitionOffsetState()` configuration on the consumer, which
by default is disabled to match the current behaviour.
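The filtering described above can be sketched in a few lines: keep a restored (topic, partition) offset entry only if its topic still matches the currently configured topic list or regex pattern. The function name and data shapes here are illustrative, not the actual FlinkKafkaConsumer API:

```python
# Sketch of filtering restored Kafka offset state against the currently
# configured subscription. Illustrative only; not Flink's actual code.
import re

def filter_restored_offsets(restored, topics=None, pattern=None):
    """Drop restored (topic, partition) -> offset entries whose topic is
    no longer covered by the configured topic list or regex pattern."""
    def keep(topic):
        if topics is not None:
            return topic in topics
        return pattern is not None and re.fullmatch(pattern, topic) is not None
    return {(t, p): off for (t, p), off in restored.items() if keep(t)}

restored = {("t1", 0): 42, ("t2", 0): 7, ("t3", 1): 99}
print(filter_restored_offsets(restored, topics={"t1"}))  # {('t1', 0): 42}
```

In Gyula's scenario this would drop the t2 and t3 offsets on restore, so only the explicitly subscribed t1 is consumed.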

What do you think?

Cheers,
Gordon

On Wed, Feb 13, 2019 at 11:59 PM Gyula Fóra  wrote:

> Hi!
>
> I have run into a weird issue which I could have sworn that it wouldnt
> happen :D
> I feel there was a discussion about this in the past but maybe im wrong,
> but I hope someone can point me to a ticket.
>
> Lets say you create a kafka consumer that consumes (t1,t2,t3), you take a
> savepoint and deploy a new version that only consumes (t1).
>
> The restore logic now still starts to consume (t1,t2,t3) which feels very
> unintuitive as those were explicitly removed from the list. It is also hard
> to debug as the topics causing the problem are not defined anywhere in your
> job, configs etc.
>
> Has anyone run into this issue? Should we change this default behaviour or
> at least have an option to not do this?
>
> Cheers,
> Gyula
>


Problems with local build

2019-02-13 Thread Александр
Hello everyone! I've just joined the Flink contributing community and have
some problems:

1. I can't do 'mvn clean package' in project on local machine because of
error:

[ERROR] Failed to execute goal on project flink-dist_2.11: Could not
resolve dependencies for project
org.apache.flink:flink-dist_2.11:jar:1.8-SNAPSHOT: The following artifacts
could not be resolved:
org.apache.flink:flink-examples-streaming-state-machine_2.11:jar:1.8-SNAPSHOT

2. I have problem with building master branch on my travis-CI because of:

[ERROR] No plugin found for prefix 'dependency' in the current project and
in the plugin groups [org.apache.maven.plugins, org.codehaus.mojo]
available from the repositories [local (/home/travis/.m2/repository),
central (https://repo.maven.apache.org/maven2)]

Can someone clarify these for me? Sorry for this kind of question, but I
can't resolve it by myself. Thanks in advance.

Best regards, Aleksandr Salatich
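Regarding the first error: `flink-dist` depends on sibling SNAPSHOT modules, which usually have to be installed into the local repository before `flink-dist` can resolve them. A commonly suggested fix — the command and flags are the usual ones, not verified against this particular setup:

```shell
# Run from the Flink repository root (not from flink-dist/): installing all
# modules puts the SNAPSHOT artifacts, e.g.
# flink-examples-streaming-state-machine, into the local ~/.m2 repository
# so that flink-dist can resolve them. Adjust flags as needed.
mvn clean install -DskipTests
```

After that, `mvn clean package` inside a single module should be able to resolve its 1.8-SNAPSHOT siblings from the local repository.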


Re: List of consumed kafka topics should not be restored from state

2019-02-13 Thread Feng LI
Hello there,

I’m just wondering if there are real-world use cases for maintaining this
default behavior. It’s a bit counter-intuitive and sometimes results in
serious production issues. (We had a similar issue when changing a topic
name, which resulted in reading every message twice: both from the old
topic and from the new one.)

Cheers,
Feng
On Wed, Feb 13, 2019 at 17:56, Tzu-Li (Gordon) Tai wrote:

> Hi,
>
> Partition offsets stored in state will always be respected when the
> consumer is restored from checkpoints / savepoints.
> AFAIK, this seems to have been the behaviour for quite some time now (since
> FlinkKafkaConsumer08).
>
> I think in the past there were some discussion to at least allow some way
> to ignore restored partition offsets.
> One way to enable this is to filter the restored partition offsets based on
> the configured list of specified topics / topic regex pattern in the
> current execution. This should work, since this can only be modified when
> restoring from savepoints (i.e. manual restores).
> To avoid breaking the current behaviour, we can maybe add a
> `filterRestoredPartitionOffsetState()` configuration on the consumer, which
> by default is disabled to match the current behaviour.
>
> What do you think?
>
> Cheers,
> Gordon
>
> On Wed, Feb 13, 2019 at 11:59 PM Gyula Fóra  wrote:
>
> > Hi!
> >
> > I have run into a weird issue which I could have sworn that it wouldnt
> > happen :D
> > I feel there was a discussion about this in the past but maybe im wrong,
> > but I hope someone can point me to a ticket.
> >
> > Lets say you create a kafka consumer that consumes (t1,t2,t3), you take a
> > savepoint and deploy a new version that only consumes (t1).
> >
> > The restore logic now still starts to consume (t1,t2,t3) which feels very
> > unintuitive as those were explicitly removed from the list. It is also
> hard
> > to debug as the topics causing the problem are not defined anywhere in
> your
> > job, configs etc.
> >
> > Has anyone run into this issue? Should we change this default behaviour
> or
> > at least have an option to not do this?
> >
> > Cheers,
> > Gyula
> >
>


Re: [DISCUSS] Adding a mid-term roadmap to the Flink website

2019-02-13 Thread Rong Rong
Thanks Stephan for the great proposal.

This would not only be beneficial for new users but would also help
contributors keep track of all upcoming features.

I think that better window operator support can also be grouped separately
into its own category, as it affects both the future DataStream API and
batch-stream unification.
Can we also include:
- OVER aggregate for the DataStream API separately, as @jincheng suggested.
- Improving the sliding window operator [1]

One additional suggestion: can we also include the more extendable security
module [2,3] that @shuyi and I are currently working on?
This will significantly improve the usability of Flink in corporate
environments where proprietary or 3rd-party security integration is needed.

Thanks,
Rong


[1]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Improvement-to-Flink-Window-Operator-with-Slicing-td25750.html
[2]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Flink-security-improvements-td21068.html
[3]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Flink-Kerberos-Improvement-td25983.html




On Wed, Feb 13, 2019 at 3:39 AM jincheng sun 
wrote:

> Very excited and thank you for launching such a great discussion, Stephan !
>
> Here only a little suggestion that in the Batch Streaming Unification
> section, do we need to add an item:
>
> - Same window operators on bounded/unbounded Table API and DataStream API
> (currently OVER window only exists in SQL/TableAPI, DataStream API does
> not yet support)
>
> Best,
> Jincheng
>
> Stephan Ewen  于2019年2月13日周三 下午7:21写道:
>
>> Hi all!
>>
>> Recently several contributors, committers, and users asked about making
>> it more visible in which way the project is currently going.
>>
>> Users and developers can track the direction by following the discussion
>> threads and JIRA, but due to the mass of discussions and open issues, it is
>> very hard to get a good overall picture.
>> Especially for new users and contributors, is is very hard to get a quick
>> overview of the project direction.
>>
>> To fix this, I suggest to add a brief roadmap summary to the homepage. It
>> is a bit of a commitment to keep that roadmap up to date, but I think the
>> benefit for users justifies that.
>> The Apache Beam project has added such a roadmap [1]
>> , which was received very well by the
>> community, I would suggest to follow a similar structure here.
>>
>> If the community is in favor of this, I would volunteer to write a first
>> version of such a roadmap. The points I would include are below.
>>
>> Best,
>> Stephan
>>
>> [1] https://beam.apache.org/roadmap/
>>
>> 
>>
>> Disclaimer: Apache Flink is not governed or steered by any one single
>> entity, but by its community and Project Management Committee (PMC). This
>> is not an authoritative roadmap in the sense of a plan with a specific
>> timeline. Instead, we share our vision for the future and major initiatives
>> that are receiving attention and give users and contributors an
>> understanding of what they can look forward to.
>>
>> *Future Role of Table API and DataStream API*
>>   - Table API becomes first class citizen
>>   - Table API becomes primary API for analytics use cases
>>   * Declarative, automatic optimizations
>>   * No manual control over state and timers
>>   - DataStream API becomes primary API for applications and data pipeline
>> use cases
>>   * Physical, user controls data types, no magic or optimizer
>>   * Explicit control over state and time
>>
>> *Batch Streaming Unification*
>>   - Table API unification (environments) (FLIP-32)
>>   - New unified source interface (FLIP-27)
>>   - Runtime operator unification & code reuse between DataStream / Table
>>   - Extending Table API to make it convenient API for all analytical use
>> cases (easier mix in of UDFs)
>>   - Same join operators on bounded/unbounded Table API and DataStream API
>>
>> *Faster Batch (Bounded Streams)*
>>   - Much of this comes via Blink contribution/merging
>>   - Fine-grained Fault Tolerance on bounded data (Table API)
>>   - Batch Scheduling on bounded data (Table API)
>>   - External Shuffle Services Support on bounded streams
>>   - Caching of intermediate results on bounded data (Table API)
>>   - Extending DataStream API to explicitly model bounded streams (API
>> breaking)
>>   - Add fine fault tolerance, scheduling, caching also to DataStream API
>>
>> *Streaming State Evolution*
>>   - Let all built-in serializers support stable evolution
>>   - First class support for other evolvable formats (Protobuf, Thrift)
>>   - Savepoint input/output format to modify / adjust savepoints
>>
>> *Simpler Event Time Handling*
>>   - Event Time Alignment in Sources
>>   - Simpler out-of-the box support in sources
>>
>> *Checkpointing*
>>   - Consistency of Side Effects: suspend / end with savepoint (FLIP-34)
>>   - Failed checkp

Re: List of consumed kafka topics should not be restored from state

2019-02-13 Thread Gyula Fóra
Hi!

I agree that it’s very confusing if you explicitly specify the topics that
are to be consumed and what actually happens is different.

I would almost consider this to be a bug; I can’t see any reasonable use
case, just hard-to-debug problems.

Having an option would be a good start but I would rather treat this as a
bug.

Gyula

On Wed, 13 Feb 2019 at 18:27, Feng LI  wrote:

> Hello there,
>
> I’m just wondering if there are real-world use cases for maintaining this
> default behavior. It’s a bit counter-intuitive and sometimes results in
> serious production issues. (We had a similar issue when changing the topic
> name, resulting in every message being read twice, both from the old topic
> and from the new one.)
>
> Cheers,
> Feng
> Le mer. 13 févr. 2019 à 17:56, Tzu-Li (Gordon) Tai  a
> écrit :
>
> > Hi,
> >
> > Partition offsets stored in state will always be respected when the
> > consumer is restored from checkpoints / savepoints.
> > AFAIK, this seems to have been the behaviour for quite some time now (since
> > FlinkKafkaConsumer08).
> >
> > I think in the past there was some discussion about at least allowing some
> > way to ignore restored partition offsets.
> > One way to enable this is to filter the restored partition offsets based on
> > the configured list of specified topics / topic regex pattern in the
> > current execution. This should work, since this can only be modified when
> > restoring from savepoints (i.e. manual restores).
> > To avoid breaking the current behaviour, we can maybe add a
> > `filterRestoredPartitionOffsetState()` configuration on the consumer, which
> > by default is disabled to match the current behaviour.
> >
> > What do you think?
> >
> > Cheers,
> > Gordon
> >
> > On Wed, Feb 13, 2019 at 11:59 PM Gyula Fóra 
> wrote:
> >
> > > Hi!
> > >
> > > I have run into a weird issue which I could have sworn wouldn't happen :D
> > > I feel there was a discussion about this in the past, but maybe I'm wrong;
> > > I hope someone can point me to a ticket.
> > >
> > > Let's say you create a Kafka consumer that consumes (t1,t2,t3), you take a
> > > savepoint and deploy a new version that only consumes (t1).
> > >
> > > The restore logic now still starts to consume (t1,t2,t3), which feels very
> > > unintuitive as those were explicitly removed from the list. It is also hard
> > > to debug as the topics causing the problem are not defined anywhere in your
> > > job, configs, etc.
> > >
> > > Has anyone run into this issue? Should we change this default behaviour or
> > > at least have an option to not do this?
> > >
> > > Cheers,
> > > Gyula
> > >
> >
>
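A minimal sketch of the filtering Gordon proposes, written against plain Java collections rather than the real consumer internals. The state layout (topic -> partition -> offset), the method names, and the opt-in switch are all assumptions for illustration, not actual Flink API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.regex.Pattern;

public class RestoredOffsetFilter {

    /**
     * Keeps only restored topics that are still explicitly configured,
     * dropping offsets for topics removed from the consumer's topic list.
     */
    static Map<String, Map<Integer, Long>> filterByTopics(
            Map<String, Map<Integer, Long>> restoredState,
            Set<String> configuredTopics) {
        Map<String, Map<Integer, Long>> kept = new HashMap<>();
        for (Map.Entry<String, Map<Integer, Long>> e : restoredState.entrySet()) {
            if (configuredTopics.contains(e.getKey())) {
                kept.put(e.getKey(), e.getValue());
            }
        }
        return kept;
    }

    /** The same idea for the consumer's topic-regex mode. */
    static Map<String, Map<Integer, Long>> filterByPattern(
            Map<String, Map<Integer, Long>> restoredState,
            Pattern topicPattern) {
        Map<String, Map<Integer, Long>> kept = new HashMap<>();
        for (Map.Entry<String, Map<Integer, Long>> e : restoredState.entrySet()) {
            if (topicPattern.matcher(e.getKey()).matches()) {
                kept.put(e.getKey(), e.getValue());
            }
        }
        return kept;
    }
}
```

Running such a filter only on restore, and only when the proposed flag is enabled, would preserve today's behaviour by default.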


[jira] [Created] (FLINK-11595) Gelly addEdge in certain circumstances still include duplicate vertices.

2019-02-13 Thread Calvin Han (JIRA)
Calvin Han created FLINK-11595:
--

 Summary: Gelly addEdge in certain circumstances still include 
duplicate vertices.
 Key: FLINK-11595
 URL: https://issues.apache.org/jira/browse/FLINK-11595
 Project: Flink
  Issue Type: Bug
  Components: Gelly
Affects Versions: 1.7.1
 Environment: macOS, IntelliJ
Reporter: Calvin Han


Assuming a base graph constructed by:

```

public class GraphCorn {

    public static Graph<String, VertexLabel, EdgeLabel> gc;

    public GraphCorn(String filename) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        DataSet<Tuple6<String, String, String, String, String, String>> csvInput =
            env.readCsvFile(filename)
               .types(String.class, String.class, String.class, String.class,
                      String.class, String.class);

        DataSet<Vertex<String, VertexLabel>> srcTuples = csvInput.project(0, 2)
            .map(new MapFunction<Tuple, Vertex<String, VertexLabel>>() {
                @Override
                public Vertex<String, VertexLabel> map(Tuple tuple) throws Exception {
                    VertexLabel lb = new VertexLabel(Util.hash(tuple.getField(1)));
                    return new Vertex<>(tuple.getField(0), lb);
                }
            }).returns(new TypeHint<Vertex<String, VertexLabel>>(){});

        DataSet<Vertex<String, VertexLabel>> dstTuples = csvInput.project(1, 3)
            .map(new MapFunction<Tuple, Vertex<String, VertexLabel>>() {
                @Override
                public Vertex<String, VertexLabel> map(Tuple tuple) throws Exception {
                    VertexLabel lb = new VertexLabel(Util.hash(tuple.getField(1)));
                    return new Vertex<>(tuple.getField(0), lb);
                }
            }).returns(new TypeHint<Vertex<String, VertexLabel>>(){});

        DataSet<Vertex<String, VertexLabel>> vertexTuples =
            srcTuples.union(dstTuples).distinct(0);

        DataSet<Edge<String, EdgeLabel>> edgeTuples = csvInput.project(0, 1, 4, 5)
            .map(new MapFunction<Tuple, Edge<String, EdgeLabel>>() {
                @Override
                public Edge<String, EdgeLabel> map(Tuple tuple) throws Exception {
                    EdgeLabel lb = new EdgeLabel(Util.hash(tuple.getField(2)),
                            Long.parseLong(tuple.getField(3)));
                    return new Edge<>(tuple.getField(0), tuple.getField(1), lb);
                }
            }).returns(new TypeHint<Edge<String, EdgeLabel>>(){});

        this.gc = Graph.fromDataSet(vertexTuples, edgeTuples, env);
    }
}

```

Base graph CSV:

```

0,1,a,b,c,0
0,2,a,d,e,1
1,2,b,d,f,2

```

Attempt to add edges using the following function:

```

try (BufferedReader br = new BufferedReader(new FileReader(this.fileName))) {
    for (String line; (line = br.readLine()) != null; ) {
        String[] attributes = line.split(",");
        assert (attributes.length == 6);
        String srcID = attributes[0];
        String dstID = attributes[1];
        String srcLb = attributes[2];
        String dstLb = attributes[3];
        String edgeLb = attributes[4];
        String ts = attributes[5];

        Vertex<String, VertexLabel> src =
            new Vertex<>(srcID, new VertexLabel(Util.hash(srcLb)));
        Vertex<String, VertexLabel> dst =
            new Vertex<>(dstID, new VertexLabel(Util.hash(dstLb)));
        EdgeLabel edge = new EdgeLabel(Util.hash(edgeLb), Long.parseLong(ts));

        GraphCorn.gc = GraphCorn.gc.addEdge(src, dst, edge);
    }
} catch (Exception e) {
    System.err.println(e.getMessage());
}

```

The graph components to add are:

```

0,4,a,d,k,3
1,3,b,a,g,3
2,3,d,a,h,4

```

GraphCorn.gc will contain duplicate nodes 0, 1, and 2 (those that already exist 
in the base graph), which should not be the case according to the documentation.


--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
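For context, the deduplication the constructor above relies on (`union(...).distinct(0)`, i.e. distinct on the vertex id) can be sketched independently of Gelly. `Vertex` below is a minimal stand-in for Gelly's class, not the real API:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class DistinctById {

    /** Minimal stand-in for Gelly's Vertex: an id (field 0) plus a label. */
    static final class Vertex {
        final String id;
        final String label;

        Vertex(String id, String label) {
            this.id = id;
            this.label = label;
        }
    }

    /** Keeps the first vertex seen per id, like DataSet#distinct(0) on field 0. */
    static List<Vertex> distinctById(List<Vertex> vertices) {
        Map<String, Vertex> byId = new LinkedHashMap<>();
        for (Vertex v : vertices) {
            byId.putIfAbsent(v.id, v); // later duplicates of the same id are dropped
        }
        return new ArrayList<>(byId.values());
    }
}
```

If addEdge re-inserts already-known vertex ids instead of deduplicating like this, that would match the behaviour the report describes.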


[jira] [Created] (FLINK-11596) Check & port ResourceManagerTest to new code base

2019-02-13 Thread TisonKun (JIRA)
TisonKun created FLINK-11596:


 Summary: Check & port ResourceManagerTest to new code base
 Key: FLINK-11596
 URL: https://issues.apache.org/jira/browse/FLINK-11596
 Project: Flink
  Issue Type: Sub-task
  Components: Tests
Affects Versions: 1.8.0
Reporter: TisonKun
 Fix For: 1.8.0


Check & port {{ResourceManagerTest}} to new code base





[jira] [Created] (FLINK-11597) Remove legacy JobManagerActorTestUtils

2019-02-13 Thread TisonKun (JIRA)
TisonKun created FLINK-11597:


 Summary: Remove legacy JobManagerActorTestUtils
 Key: FLINK-11597
 URL: https://issues.apache.org/jira/browse/FLINK-11597
 Project: Flink
  Issue Type: Sub-task
  Components: Tests
Affects Versions: 1.8.0
Reporter: TisonKun
Assignee: TisonKun
 Fix For: 1.8.0








[jira] [Created] (FLINK-11598) Remove legacy JobSubmissionClientActor

2019-02-13 Thread TisonKun (JIRA)
TisonKun created FLINK-11598:


 Summary: Remove legacy JobSubmissionClientActor
 Key: FLINK-11598
 URL: https://issues.apache.org/jira/browse/FLINK-11598
 Project: Flink
  Issue Type: Sub-task
  Components: Client
Affects Versions: 1.8.0
Reporter: TisonKun
Assignee: TisonKun
 Fix For: 1.8.0








[jira] [Created] (FLINK-11599) Remove legacy JobClientActor

2019-02-13 Thread TisonKun (JIRA)
TisonKun created FLINK-11599:


 Summary: Remove legacy JobClientActor
 Key: FLINK-11599
 URL: https://issues.apache.org/jira/browse/FLINK-11599
 Project: Flink
  Issue Type: Sub-task
  Components: Client
Affects Versions: 1.8.0
Reporter: TisonKun
Assignee: TisonKun
 Fix For: 1.8.0








[jira] [Created] (FLINK-11600) Remove legacy JobListeningContext

2019-02-13 Thread TisonKun (JIRA)
TisonKun created FLINK-11600:


 Summary: Remove legacy JobListeningContext
 Key: FLINK-11600
 URL: https://issues.apache.org/jira/browse/FLINK-11600
 Project: Flink
  Issue Type: Sub-task
  Components: Client
Affects Versions: 1.8.0
Reporter: TisonKun
Assignee: TisonKun
 Fix For: 1.8.0








[jira] [Created] (FLINK-11601) Remove legacy AkkaJobManagerGateway

2019-02-13 Thread TisonKun (JIRA)
TisonKun created FLINK-11601:


 Summary: Remove legacy AkkaJobManagerGateway
 Key: FLINK-11601
 URL: https://issues.apache.org/jira/browse/FLINK-11601
 Project: Flink
  Issue Type: Sub-task
  Components: JobManager
Affects Versions: 1.8.0
Reporter: TisonKun
Assignee: TisonKun
 Fix For: 1.8.0








[jira] [Created] (FLINK-11602) Remove legacy AkkaJobManagerRetriever

2019-02-13 Thread TisonKun (JIRA)
TisonKun created FLINK-11602:


 Summary: Remove legacy AkkaJobManagerRetriever
 Key: FLINK-11602
 URL: https://issues.apache.org/jira/browse/FLINK-11602
 Project: Flink
  Issue Type: Sub-task
Affects Versions: 1.8.0
Reporter: TisonKun
Assignee: TisonKun
 Fix For: 1.8.0








[jira] [Created] (FLINK-11603) Ported the MetricQueryService to the new RpcEndpoint

2019-02-13 Thread TisonKun (JIRA)
TisonKun created FLINK-11603:


 Summary: Ported the MetricQueryService to the new RpcEndpoint
 Key: FLINK-11603
 URL: https://issues.apache.org/jira/browse/FLINK-11603
 Project: Flink
  Issue Type: Improvement
  Components: Metrics
Reporter: TisonKun
Assignee: TisonKun


Given that a series of TODOs mention {{This is a temporary hack until we have 
ported the MetricQueryService to the new RpcEndpoint}}, I'd like to give it a 
try and implement the RpcEndpoint version of MetricQueryService.

Basically, port {{onReceive}} to 
1. {{addMetric(metricName, metric, group)}}
2. {{removeMetric(metric)}}
3. {{createDump()}}

And then adjust tests and replace {{metricServiceQueryPath}} with a 
corresponding {{RpcGateway}}.

I'd like to confirm whether the following statement is true: when we call a 
Runnable/Callable with runAsync/callAsync, the Runnable/Callable runs in the 
main thread of the underlying RPC service, specifically, in the actor thread?



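The main-thread question above can be illustrated without Akka or Flink: a hypothetical endpoint that funnels every runAsync/callAsync task through one single-threaded executor, so all tasks observe state from the same thread. The names mirror RpcEndpoint, but this is only a sketch, not the actual Flink implementation:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class MainThreadEndpoint implements AutoCloseable {

    // A single thread plays the role of the endpoint's main (actor) thread.
    private final ExecutorService mainThread = Executors.newSingleThreadExecutor();

    /** Fire-and-forget execution on the main thread, like runAsync. */
    public void runAsync(Runnable task) {
        mainThread.execute(task);
    }

    /** Execution with a result future, like callAsync. */
    public <T> CompletableFuture<T> callAsync(Callable<T> task) {
        CompletableFuture<T> result = new CompletableFuture<>();
        mainThread.execute(() -> {
            try {
                result.complete(task.call());
            } catch (Exception e) {
                result.completeExceptionally(e);
            }
        });
        return result;
    }

    @Override
    public void close() {
        mainThread.shutdown();
    }
}
```

Because all submitted tasks run on the one executor thread, two callAsync calls always report the same thread name, which is the serialized-access property the question is asking about.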


Apply JIRA Contributor

2019-02-13 Thread Ma, Yan

Hi guys,

I would like to be a contributor to Apache Flink. Could someone give me JIRA 
access to this project? My JIRA id is yma.

Thanks very much in advance.

Yan


Approve Contributor Permission

2019-02-13 Thread dou mao
Hi all :

I want to contribute to Apache Flink.
Would you please give me the permission as a contributor ?
My JIRA account is Maodou, and my email is the one I am writing from.


Hoping for your reply :)

Maodou


Hi!

2019-02-13 Thread ??????
Hi, I want to contribute to Apache Flink.
Would you please give me the permission as a contributor?
My JIRA ID is JJ GUO.

Hi

2019-02-13 Thread 黄子健
Hi Guys,


I want to contribute to Apache Flink.

Would you please give me the permission as a contributor?

My JIRA ID is huangzijian888.


[jira] [Created] (FLINK-11604) Extend the necessary methods in ResultPartitionWriter interface

2019-02-13 Thread zhijiang (JIRA)
zhijiang created FLINK-11604:


 Summary: Extend the necessary methods in ResultPartitionWriter 
interface
 Key: FLINK-11604
 URL: https://issues.apache.org/jira/browse/FLINK-11604
 Project: Flink
  Issue Type: Sub-task
  Components: Network
Reporter: zhijiang
Assignee: zhijiang
 Fix For: 1.8.0


This is preparation work for creating {{ResultPartitionWriter}} instances via 
the proposed {{ShuffleService}} in the future.

Currently there exists only one {{ResultPartition}} implementation of the 
{{ResultPartitionWriter}} interface, so the specific {{ResultPartition}} 
instance is easily referenced in many other classes such as {{Task}}, 
{{NetworkEnvironment}}, etc. Even some private methods in {{ResultPartition}} 
are called directly from these referencing classes.

Considering that {{ShuffleService}} might create multiple different 
{{ResultPartitionWriter}} implementations in the future, all the other classes 
should only reference the interface and call its common methods. Therefore we 
extend the related methods in the {{ResultPartitionWriter}} interface in order 
to cover the existing logic in {{ResultPartition}}.



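The refactoring direction can be sketched abstractly: callers hold only the writer interface, so a ShuffleService could later hand out a different implementation without touching them. All names below are illustrative, not the actual Flink API:

```java
import java.util.ArrayList;
import java.util.List;

public class WriterInterfaceSketch {

    /** The common surface that Task/NetworkEnvironment-style callers depend on. */
    interface ResultPartitionWriter {
        void emit(String record);
        int emittedCount();
    }

    /** One concrete implementation; a ShuffleService could return others. */
    static final class InMemoryWriter implements ResultPartitionWriter {
        private final List<String> records = new ArrayList<>();

        @Override
        public void emit(String record) {
            records.add(record);
        }

        @Override
        public int emittedCount() {
            return records.size();
        }
    }

    /** A caller that references only the interface, never the concrete class. */
    static int writeAll(ResultPartitionWriter writer, String... records) {
        for (String r : records) {
            writer.emit(r);
        }
        return writer.emittedCount();
    }
}
```

Swapping InMemoryWriter for another implementation leaves writeAll untouched, which is the point of moving the shared methods onto the interface.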


Would you please give me the permission as a contributor?

2019-02-13 Thread linjie
Hi Guys,

I want to contribute to Apache Flink.
Would you please give me the permission as a contributor?
My JIRA ID is linjie.

Would you please give me the permission as a contributor?

2019-02-13 Thread linjie
Hi Guys,

I want to contribute to Apache Flink.
Would you please give me the permission as a contributor?
My JIRA ID is xulinjie.



Apply for permisson as a contributor

2019-02-13 Thread yuzhao0225
Hi Guys,

I want to contribute to Apache Flink.
Would you please give me the permission as a contributor?
My JIRA ID is danny0405.

[jira] [Created] (FLINK-11605) Translate the "Dataflow Programming Model" page into Chinese

2019-02-13 Thread Jark Wu (JIRA)
Jark Wu created FLINK-11605:
---

 Summary: Translate the "Dataflow Programming Model" page into 
Chinese
 Key: FLINK-11605
 URL: https://issues.apache.org/jira/browse/FLINK-11605
 Project: Flink
  Issue Type: Sub-task
  Components: chinese-translation, Documentation
Reporter: Jark Wu


The page url is 
https://ci.apache.org/projects/flink/flink-docs-master/concepts/programming-model.html
The markdown file is located in flink/docs/concepts/programming-model.zh.md
The markdown file will be created once FLINK-11529 is merged.





[jira] [Created] (FLINK-11606) Translate the "Distributed Runtime Environment" page into Chinese

2019-02-13 Thread Jark Wu (JIRA)
Jark Wu created FLINK-11606:
---

 Summary: Translate the "Distributed Runtime Environment" page into 
Chinese
 Key: FLINK-11606
 URL: https://issues.apache.org/jira/browse/FLINK-11606
 Project: Flink
  Issue Type: Sub-task
  Components: chinese-translation, Documentation
Reporter: Jark Wu


The page url is 
https://ci.apache.org/projects/flink/flink-docs-master/concepts/runtime.html
The markdown file is located in flink/docs/concepts/runtime.zh.md
The markdown file will be created once FLINK-11529 is merged.






[jira] [Created] (FLINK-11607) Translate the "DataStream API Tutorial" page into Chinese

2019-02-13 Thread Jark Wu (JIRA)
Jark Wu created FLINK-11607:
---

 Summary: Translate the "DataStream API Tutorial" page into Chinese
 Key: FLINK-11607
 URL: https://issues.apache.org/jira/browse/FLINK-11607
 Project: Flink
  Issue Type: Sub-task
  Components: chinese-translation, Documentation
Reporter: Jark Wu


The page url is 
https://ci.apache.org/projects/flink/flink-docs-master/tutorials/datastream_api.html
The markdown file is located in flink/docs/tutorials/datastream_api.zh.md
The markdown file will be created once FLINK-11529 is merged.





[jira] [Created] (FLINK-11608) Translate the "Local Setup Tutorial" page into Chinese

2019-02-13 Thread Jark Wu (JIRA)
Jark Wu created FLINK-11608:
---

 Summary: Translate the "Local Setup Tutorial" page into Chinese
 Key: FLINK-11608
 URL: https://issues.apache.org/jira/browse/FLINK-11608
 Project: Flink
  Issue Type: Sub-task
  Components: chinese-translation, Documentation
Reporter: Jark Wu


The page url is 
https://ci.apache.org/projects/flink/flink-docs-master/tutorials/local_setup.html
The markdown file is located in flink/docs/tutorials/local_setup.zh.md
The markdown file will be created once FLINK-11529 is merged.








[jira] [Created] (FLINK-11609) Translate the "Running Flink on Windows" page into Chinese

2019-02-13 Thread Jark Wu (JIRA)
Jark Wu created FLINK-11609:
---

 Summary: Translate the "Running Flink on Windows" page into Chinese
 Key: FLINK-11609
 URL: https://issues.apache.org/jira/browse/FLINK-11609
 Project: Flink
  Issue Type: Sub-task
  Components: chinese-translation, Documentation
Reporter: Jark Wu


The page url is 
https://ci.apache.org/projects/flink/flink-docs-master/tutorials/flink_on_windows.html
The markdown file is located in flink/docs/tutorials/flink_on_windows.zh.md
The markdown file will be created once FLINK-11529 is merged.





[jira] [Created] (FLINK-11610) Translate the "Examples" page into Chinese

2019-02-13 Thread Jark Wu (JIRA)
Jark Wu created FLINK-11610:
---

 Summary: Translate the "Examples" page into Chinese
 Key: FLINK-11610
 URL: https://issues.apache.org/jira/browse/FLINK-11610
 Project: Flink
  Issue Type: Sub-task
  Components: chinese-translation, Documentation
Reporter: Jark Wu


The page url is https://ci.apache.org/projects/flink/flink-docs-master/examples/
The markdown file is located in flink/docs/examples/index.zh.md
The markdown file will be created once FLINK-11529 is merged.





[jira] [Created] (FLINK-11611) Translate the "Batch Examples" page into Chinese

2019-02-13 Thread Jark Wu (JIRA)
Jark Wu created FLINK-11611:
---

 Summary: Translate the "Batch Examples" page into Chinese
 Key: FLINK-11611
 URL: https://issues.apache.org/jira/browse/FLINK-11611
 Project: Flink
  Issue Type: Sub-task
  Components: chinese-translation, Documentation
Reporter: Jark Wu


The page url is 
https://ci.apache.org/projects/flink/flink-docs-master/dev/batch/examples.html
The markdown file is located in flink/docs/dev/batch/examples.zh.md
The markdown file will be created once FLINK-11529 is merged.







[jira] [Created] (FLINK-11612) Translate the "Project Template for Java" page into Chinese

2019-02-13 Thread Jark Wu (JIRA)
Jark Wu created FLINK-11612:
---

 Summary: Translate the "Project Template for Java" page into 
Chinese
 Key: FLINK-11612
 URL: https://issues.apache.org/jira/browse/FLINK-11612
 Project: Flink
  Issue Type: Sub-task
  Components: chinese-translation, Documentation
Reporter: Jark Wu


The page url is 
https://ci.apache.org/projects/flink/flink-docs-master/dev/projectsetup/java_api_quickstart.html
The markdown file is located in 
flink/docs/dev/projectsetup/java_api_quickstart.zh.md
The markdown file will be created once FLINK-11529 is merged.





[jira] [Created] (FLINK-11613) Translate the "Project Template for Scala" page into Chinese

2019-02-13 Thread Jark Wu (JIRA)
Jark Wu created FLINK-11613:
---

 Summary: Translate the "Project Template for Scala" page into 
Chinese
 Key: FLINK-11613
 URL: https://issues.apache.org/jira/browse/FLINK-11613
 Project: Flink
  Issue Type: Sub-task
  Components: chinese-translation, Documentation
Reporter: Jark Wu


The page url is 
https://ci.apache.org/projects/flink/flink-docs-master/dev/projectsetup/scala_api_quickstart.html
The markdown file is located in 
flink/docs/dev/projectsetup/scala_api_quickstart.zh.md
The markdown file will be created once FLINK-11529 is merged.





[jira] [Created] (FLINK-11614) Translate the "Configuring Dependencies" page into Chinese

2019-02-13 Thread Jark Wu (JIRA)
Jark Wu created FLINK-11614:
---

 Summary: Translate the "Configuring Dependencies" page into Chinese
 Key: FLINK-11614
 URL: https://issues.apache.org/jira/browse/FLINK-11614
 Project: Flink
  Issue Type: Sub-task
  Components: chinese-translation, Documentation
Reporter: Jark Wu


The page url is 
https://ci.apache.org/projects/flink/flink-docs-master/dev/projectsetup/dependencies.html
The markdown file is located in flink/docs/dev/projectsetup/dependencies.zh.md
The markdown file will be created once FLINK-11529 is merged.





Would you please give me the permission as a contributor?

2019-02-13 Thread linjie
Hi Guys,

I want to contribute to Apache Flink.
Would you please give me the permission as a contributor?
My JIRA username is linjie, full name is xulinjie.