Re: [DISCUSS][CODE STYLE] Breaking long function argument lists and chained method calls

2019-08-22 Thread Zili Chen
One more question: what difference do you see between

public void func(
    int arg1,
    int arg2,
    ...) throws E1, E2, E3 {
    ...
}

and

public void func(
    int arg1,
    int arg2,
    ...
) throws E1, E2, E3 {
    ...
}

I prefer the latter because the parentheses are aligned in a similar way,
and the border between the declaration and the function body is clear.


On Thu, Aug 22, 2019 at 9:53 AM Zili Chen  wrote:

> Thanks Andrey for driving the discussion. Just for clarification,
> what we conclude here are several guidelines without an automatic
> checker/tool guarding them, right?
>
> Best,
> tison.
>
>
> On Wed, Aug 21, 2019 at 8:18 PM Andrey Zagrebin  wrote:
>
>> Hi All,
>>
>> I suggest we also conclude this discussion now.
>>
>> Breaking the line of too long statements (the line length limit is yet to be
>> fully defined) to improve code readability in case of
>>
>>    - Long function argument lists (declaration or call): void func(type1
>>    arg1, type2 arg2, ...)
>>    - Long sequence of chained calls:
>>    list.stream().map(...).reduce(...).collect(...)...
>>
>> Rules:
>>
>>    - Break the list of arguments/calls if the line exceeds the limit, or
>>    earlier if you believe that the breaking would improve the code readability
>>    - If you break the line then each argument/call should have a separate
>>    line, including the first one
>>    - Each new line argument/call should have one extra indentation relative
>>    to the line of the parent function name or called entity
>>    - The opening brace always stays on the line of the parent function name
>>    - The closing brace of the function argument list and the possible
>>    thrown exception list always stay on the line of the last argument
>>    - The dot of a chained call is always on the line of that chained call,
>>    preceding the call at the beginning of the line
>>
>> Examples of breaking:
>>
>>- Function arguments
>>
>> public void func(
>>     int arg1,
>>     int arg2,
>>     ...) throws E1, E2, E3 {
>>     ...
>> }
>>
>>
>>- Chained method calls:
>>
>> values
>>     .stream()
>>     .map(...)
>>     .collect(...);
>>
>>
>> I suggest we spawn separate discussion threads (can do as a follow-up)
>> about:
>>
>>- the hard line length limit in Java, possibly to confirm it also for
>>Scala (cc @Tison)
>>- indentation rules for the broken list of a declared function
>> arguments
>>
>> If there are no more comments/objections/concerns, I will open a PR to
>> capture the discussion outcome.
>>
>> Best,
>> Andrey
>>
>>
>>
>> On Wed, Aug 21, 2019 at 8:57 AM Zili Chen  wrote:
>>
>> > Implementation question: how do we apply the line length rules?
>> >
>> > If we just turn on the checkstyle rule "LineLength" then a huge
>> > effort is required to break the lines that violate the rule. If
>> > we use an auto-formatter here then it possibly breaks lines
>> > "just at the position" awfully.
>> >
>> > Is it possible to require conformance to the rule only for the files
>> > touched by a pull request?
>> >
>> > Best,
>> > tison.
>> >
>> >
>> > On Tue, Aug 20, 2019 at 5:22 PM Yu Li  wrote:
>> >
>> > > I second Stephan's summary, and to be more explicit, +1 on:
>> > > - Set a hard line length limit
>> > > - Allow arguments on the same line if below length limit
>> > > - With consistent argument breaking when that length is exceeded
>> > > - Developers can break before that if they feel it helps with
>> readability
>> > >
>> > > FWIW, hbase project also sets the line length limit to 100 [1]
>> > (personally
>> > > I don't have any tendency on this, so JFYI).
>> > >
>> > > [1]
>> > >
>> > >
>> >
>> https://github.com/apache/hbase/blob/a59f7d4ffc27ea23b9822c3c26d6aeb76ccdf9aa/hbase-checkstyle/src/main/resources/hbase/checkstyle.xml#L128
>> > >
>> > > Best Regards,
>> > > Yu
>> > >
>> > >
>> > > On Mon, 19 Aug 2019 at 18:22, Stephan Ewen  wrote:
>> > >
>> > > > I personally prefer not to break lines with few parameters.
>> > > > It just feels unnecessarily clumsy to parse the breaks if there are
>> > only
>> > > > two or three arguments with short names.
>> > > >
>> > > > So +1
>> > > >   - for a hard line length limit
>> > > >   - allowing arguments on the same line if below that limit
>> > > >   - with consistent argument breaking when that length is exceeded
>> > > >   - developers can break before that if they feel it helps with
>> > > > readability.
>> > > >
>> > > > This should be similar to what we have, except for enforcing the
>> line
>> > > > length limit.
>> > > >
>> > > > I think our Java guide originally suggested 120 characters line length,
>> > > > but we can reduce that to 100 if a majority argues for that, but that is
>> > > > a separate discussion.
>> > > > We use shorter lines in Scala (100 chars) because Scala code becomes
>> > > > harder to read faster with long lines.
>> > > >
>> > > >
>> > > > On Mon, Aug 19, 2019 at 10:45 AM Andrey Zagrebin <
>> and...@ververica.com
>> > >
>> > > > wrote:
>> > > >

Re: CiBot Update

2019-08-22 Thread Stephan Ewen
Nice, thanks!

On Thu, Aug 22, 2019 at 3:59 AM Zili Chen  wrote:

> Thanks for your announcement. Nice work!
>
> Best,
> tison.
>
>
> On Thu, Aug 22, 2019 at 8:14 AM vino yang  wrote:
>
> > +1 for "@flinkbot run travis", it is very convenient.
> >
> > On Wed, Aug 21, 2019 at 9:12 PM Chesnay Schepler  wrote:
> >
> > > Hi everyone,
> > >
> > > this is an update on recent changes to the CI bot.
> > >
> > >
> > > The bot now cancels builds if a new commit was added to a PR, and
> > > cancels all builds if the PR was closed.
> > > (This was implemented a while ago; I'm just mentioning it again for
> > > discoverability)
> > >
> > >
> > > Additionally, starting today you can now re-trigger a Travis run by
> > > writing a comment "@flinkbot run travis"; this means you no longer have
> > > to commit an empty commit or do other shenanigans to get another build
> > > running.
> > > Note that this will /not/ work if the PR was re-opened, until at least
> 1
> > > new build was triggered by a push.
> > >
> >
>


Re: [DISCUSS] Release flink-shaded 8.0

2019-08-22 Thread Stephan Ewen
+1 to go ahead

at some point we may want to bump the Hadoop versions for which we build
the shaded jars, but that would be another dedicated effort

On Wed, Aug 21, 2019 at 1:41 PM Chesnay Schepler  wrote:

> Nico has opened a PR for bumping netty; we plan to have this merged by
> tomorrow.
>
> Unless anyone has concerns I will kick off the release on Friday.
>
> On 19/08/2019 12:11, Nico Kruber wrote:
> > I quickly went through all the changelogs for Netty 4.1.32 (which we
> > currently use) to the latest Netty 4.1.39.Final. Below, you will find a
> > list of bug fixes and performance improvements that may affect us. Nice
> > changes we could benefit from, also for the Java > 8 efforts. The most
> > important ones fixing leaks etc are #8921, #9167, #9274, #9394, and the
> > various CompositeByteBuf fixes. The rest are mostly performance
> > improvements.
> >
> > Since we are still early in the dev cycle for Flink 1.10, it would maybe
> > be nice to update and verify that the new version works correctly. I'll
> > create a ticket and PR.
> >
> >
> > FYI (1): My own patches to bring dynamically-linked openSSL to more
> > distributions, namely SUSE and Arch, have not made it into a release yet.
> >
> > FYI (2): We are currently using the latest version of netty-tcnative,
> > i.e. 2.0.25.
> >
> >
> > Nico
> >
> > --
> > Netty 4.1.33.Final
> > - Fix ClassCastException and native crash when using kqueue transport
> > (#8665)
> > - Provide a way to cache the internal nioBuffer of the PooledByteBuffer
> > to reduce GC (#8603)
> >
> > Netty 4.1.34.Final
> > - Do not use GetPrimitiveArrayCritical(...) due to multiple not-fixed bugs
> > related to GCLocker (#8921)
> > - Correctly monkey-patch id also when os / arch is used within library
> > name (#8913)
> > - Further reduce ensureAccessible() overhead (#8895)
> > - Support using an Executor to offload blocking / long-running tasks
> > when processing TLS / SSL via the SslHandler (#8847)
> > - Minimize memory footprint for AbstractChannelHandlerContext for
> > handlers that execute in the EventExecutor (#8786)
> > - Fix three bugs in CompositeByteBuf (#8773)
> >
> > Netty 4.1.35.Final
> > - Fix possible ByteBuf leak when CompositeByteBuf is resized (#8946)
> > - Correctly produce ssl alert when certificate validation fails on the
> > client-side when using native SSL implementation (#8949)
> >
> > Netty 4.1.37.Final
> > - Don't filter out TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (#9274)
> > - Try to mark child channel writable again once the parent channel
> > becomes writable (#9254)
> > - Properly debounce wakeups (#9191)
> > - Don't read from timerfd and eventfd on each EventLoop tick (#9192)
> > - Correctly detect that KeyManagerFactory is not supported when using
> > OpenSSL 1.1.0+ (#9170)
> > - Fix possible unsafe sharing of internal NIO buffer in CompositeByteBuf
> > (#9169)
> > - KQueueEventLoop won't unregister active channels reusing a file
> > descriptor (#9149)
> > - Prefer direct io buffers if direct buffers pooled (#9167)
> >
> > Netty 4.1.38.Final
> > - Prevent ByteToMessageDecoder from overreading when !isAutoRead (#9252)
> > - Correctly take length of ByteBufInputStream into account for
> > readLine() / readByte() (#9310)
> > - availableSharedCapacity will be slowly exhausted (#9394)
> > --
> >
> > On 18/08/2019 16:47, Stephan Ewen wrote:
> >> Are we fine with the current Netty version, or would we want to bump it?
> >>
> >> On Fri, Aug 16, 2019 at 10:30 AM Chesnay Schepler  wrote:
> >>
> >>  Hello,
> >>
> >>  I would like to kick off the next flink-shaded release next week.
> There
> >>  are 2 ongoing efforts that are blocked on this release:
> >>
> >>* [FLINK-13467] Java 11 support requires a bump to ASM to
> correctly
> >>  handle Java 11 bytecode
> >>* [FLINK-11767] Reworking the
> typeSerializerSnapshotMigrationTestBase
> >>  requires asm-commons to be added to flink-shaded-asm
> >>
> >>  Are there any other changes on anyone's radar that we will have to
> make
> >>  for 1.10? (will bumping calcite require anything, for example)
> >>
> >>
>
>


Re: CiBot Update

2019-08-22 Thread Xintong Song
The re-triggering travis feature is so convenient. Thanks Chesnay~!

Thank you~

Xintong Song



On Thu, Aug 22, 2019 at 9:26 AM Stephan Ewen  wrote:

> Nice, thanks!
>
> On Thu, Aug 22, 2019 at 3:59 AM Zili Chen  wrote:
>
> > Thanks for your announcement. Nice work!
> >
> > Best,
> > tison.
> >
> >
> > On Thu, Aug 22, 2019 at 8:14 AM vino yang  wrote:
> >
> > > +1 for "@flinkbot run travis", it is very convenient.
> > >
> > > On Wed, Aug 21, 2019 at 9:12 PM Chesnay Schepler  wrote:
> > >
> > > > Hi everyone,
> > > >
> > > > this is an update on recent changes to the CI bot.
> > > >
> > > >
> > > > The bot now cancels builds if a new commit was added to a PR, and
> > > > cancels all builds if the PR was closed.
> > > > (This was implemented a while ago; I'm just mentioning it again for
> > > > discoverability)
> > > >
> > > >
> > > > Additionally, starting today you can now re-trigger a Travis run by
> > > > writing a comment "@flinkbot run travis"; this means you no longer
> have
> > > > to commit an empty commit or do other shenanigans to get another
> build
> > > > running.
> > > > Note that this will /not/ work if the PR was re-opened, until at
> least
> > 1
> > > > new build was triggered by a push.
> > > >
> > >
> >
>


Re: CiBot Update

2019-08-22 Thread Jark Wu
Great work! Thanks Chesnay!



On Thu, 22 Aug 2019 at 15:42, Xintong Song  wrote:

> The re-triggering travis feature is so convenient. Thanks Chesnay~!
>
> Thank you~
>
> Xintong Song
>
>
>
> On Thu, Aug 22, 2019 at 9:26 AM Stephan Ewen  wrote:
>
> > Nice, thanks!
> >
> > On Thu, Aug 22, 2019 at 3:59 AM Zili Chen  wrote:
> >
> > > Thanks for your announcement. Nice work!
> > >
> > > Best,
> > > tison.
> > >
> > >
> > > On Thu, Aug 22, 2019 at 8:14 AM vino yang  wrote:
> > >
> > > > +1 for "@flinkbot run travis", it is very convenient.
> > > >
> > > > On Wed, Aug 21, 2019 at 9:12 PM Chesnay Schepler  wrote:
> > > >
> > > > > Hi everyone,
> > > > >
> > > > > this is an update on recent changes to the CI bot.
> > > > >
> > > > >
> > > > > The bot now cancels builds if a new commit was added to a PR, and
> > > > > cancels all builds if the PR was closed.
> > > > > (This was implemented a while ago; I'm just mentioning it again for
> > > > > discoverability)
> > > > >
> > > > >
> > > > > Additionally, starting today you can now re-trigger a Travis run by
> > > > > writing a comment "@flinkbot run travis"; this means you no longer
> > have
> > > > > to commit an empty commit or do other shenanigans to get another
> > build
> > > > > running.
> > > > > Note that this will /not/ work if the PR was re-opened, until at
> > least
> > > 1
> > > > > new build was triggered by a push.
> > > > >
> > > >
> > >
> >
>


Re: [RESULT] [VOTE] Apache Flink 1.9.0, release candidate #3

2019-08-22 Thread Chesnay Schepler

Are we also releasing python artifacts for 1.9?

On 21/08/2019 19:23, Tzu-Li (Gordon) Tai wrote:

I'm happy to announce that we have unanimously approved this candidate as
the 1.9.0 release.

There are 12 approving votes, 5 of which are binding:
- Yu Li
- Zili Chen
- Gordon Tai
- Stephan Ewen
- Jark Wu
- Vino Yang
- Gary Yao
- Bowen Li
- Chesnay Schepler
- Till Rohrmann
- Aljoscha Krettek
- David Anderson

There are no disapproving votes.

Thanks everyone who has contributed to this release!

I will wait until tomorrow morning for the artifacts to be available in
Maven central before announcing the release in a separate thread.

The release blog post will also be merged tomorrow along with the official
announcement.

Cheers,
Gordon

On Wed, Aug 21, 2019, 5:37 PM David Anderson  wrote:


+1 (non-binding)

I upgraded the flink-training-exercises project.

I encountered a few rough edges, including problems in the docs, but
nothing serious.

I had to make some modifications to deal with changes in the Table API:

ExternalCatalogTable.builder became new ExternalCatalogTableBuilder
TableEnvironment.getTableEnvironment became StreamTableEnvironment.create
StreamTableDescriptorValidator.UPDATE_MODE() became
StreamTableDescriptorValidator.UPDATE_MODE
org.apache.flink.table.api.java.Slide moved to
org.apache.flink.table.api.Slide

I also found myself forced to change a CoProcessFunction to a
KeyedCoProcessFunction (which it should have been).

I also tried a few complex queries in the SQL console, and wrote a
simple job using the State Processor API. Everything worked.

David


David Anderson | Training Coordinator

Follow us @VervericaData

--
Join Flink Forward - The Apache Flink Conference
Stream Processing | Event Driven | Real Time


On Wed, Aug 21, 2019 at 1:45 PM Aljoscha Krettek 
wrote:

+1

I checked the last RC on a GCE cluster and was satisfied with the

testing. The cherry-picked commits didn’t change anything related, so I’m
forwarding my vote from there.

Aljoscha


On 21. Aug 2019, at 13:34, Chesnay Schepler 

wrote:

+1 (binding)

On 21/08/2019 08:09, Bowen Li wrote:

+1 non-binding

- built from source with default profile
- manually ran SQL and Table API tests for Flink's metadata

integration

with Hive Metastore in local cluster
- manually ran SQL tests for batch capability with Blink planner and

Hive

integration (source/sink/udf) in local cluster
 - file formats include: csv, orc, parquet


On Tue, Aug 20, 2019 at 10:23 PM Gary Yao  wrote:


+1 (non-binding)

Reran Jepsen tests 10 times.

On Wed, Aug 21, 2019 at 5:35 AM vino yang 

wrote:

+1 (non-binding)

- checkout source code and build successfully
- started a local cluster and ran some example jobs successfully
- verified signatures and hashes
- checked release notes and post

Best,
Vino

On Wed, Aug 21, 2019 at 4:20 AM Stephan Ewen  wrote:


+1 (binding)

  - Downloaded the binary release tarball
  - started a standalone cluster with four nodes
  - ran some examples through the Web UI
  - checked the logs
  - created a project from the Java quickstarts maven archetype
  - ran a multi-stage DataSet job in batch mode
  - killed a TaskManager and verified correct restart behavior,

including

failover region backtracking


I found a few issues, and a common theme here is confusing error reporting
and logging.

(1) When testing batch failover and killing a TaskManager, the job reports
as the failure cause "org.apache.flink.util.FlinkException: The assigned
slot 6d0e469d55a2630871f43ad0f89c786c_0 was removed."
 I think that is a pretty bad error message, as a user I don't know what
that means. Some internal book keeping thing?
 You need to know a lot about Flink to understand that this means
"TaskManager failure".
 https://issues.apache.org/jira/browse/FLINK-13805
 I would not block the release on this, but think this should get pretty
urgent attention.

(2) The Metric Fetcher floods the log with error messages when a
TaskManager is lost.
  There are many exceptions being logged by the Metrics Fetcher due to
not reaching the TM any more.
  This pollutes the log and drowns out the original exception and the
meaningful logs from the scheduler/execution graph.
  https://issues.apache.org/jira/browse/FLINK-13806
  Again, I would not block the release on this, but think this should get
pretty urgent attention.

(3) If you put "web.submit.enable: false" into the configuration, the web
UI will still display the "SubmitJob" page, but errors will
 continuously pop up, stating "Unable to load requested file /jars."
 https://issues.apache.org/jira/browse/FLINK-13799

(4) REST endpoint logs ERROR level messages when selecting the
"Checkpoints" tab for batch jobs. That does not seem correct.
  https://issues.apache.org/jira/browse/FLINK-13795

Best,
Stephan




On Tue, Aug 20, 2019 at 11:32 AM Tzu-Li (Gordon) Tai <

tzuli...@apache.org>

wrote:


+1

Legal checks:
- verified signatures and hashes

Re: [RESULT] [VOTE] Apache Flink 1.9.0, release candidate #3

2019-08-22 Thread Tzu-Li (Gordon) Tai
@Chesnay

No. Users will have to manually build and install PyFlink themselves in
1.9.0:
https://ci.apache.org/projects/flink/flink-docs-release-1.9/flinkDev/building.html#build-pyflink

This is also mentioned in the announcement blog post (to-be-merged):
https://github.com/apache/flink-web/pull/244/files#diff-0cc840a590f5cab2485934278134c9baR291

On Thu, Aug 22, 2019 at 10:03 AM Chesnay Schepler 
wrote:

> Are we also releasing python artifacts for 1.9?
>
> On 21/08/2019 19:23, Tzu-Li (Gordon) Tai wrote:
> > I'm happy to announce that we have unanimously approved this candidate as
> > the 1.9.0 release.
> >
> > There are 12 approving votes, 5 of which are binding:
> > - Yu Li
> > - Zili Chen
> > - Gordon Tai
> > - Stephan Ewen
> > - Jark Wu
> > - Vino Yang
> > - Gary Yao
> > - Bowen Li
> > - Chesnay Schepler
> > - Till Rohrmann
> > - Aljoscha Krettek
> > - David Anderson
> >
> > There are no disapproving votes.
> >
> > Thanks everyone who has contributed to this release!
> >
> > I will wait until tomorrow morning for the artifacts to be available in
> > Maven central before announcing the release in a separate thread.
> >
> > The release blog post will also be merged tomorrow along with the
> official
> > announcement.
> >
> > Cheers,
> > Gordon
> >
> > On Wed, Aug 21, 2019, 5:37 PM David Anderson 
> wrote:
> >
> >> +1 (non-binding)
> >>
> >> I upgraded the flink-training-exercises project.
> >>
> >> I encountered a few rough edges, including problems in the docs, but
> >> nothing serious.
> >>
> >> I had to make some modifications to deal with changes in the Table API:
> >>
> >> ExternalCatalogTable.builder became new ExternalCatalogTableBuilder
> >> TableEnvironment.getTableEnvironment became
> StreamTableEnvironment.create
> >> StreamTableDescriptorValidator.UPDATE_MODE() became
> >> StreamTableDescriptorValidator.UPDATE_MODE
> >> org.apache.flink.table.api.java.Slide moved to
> >> org.apache.flink.table.api.Slide
> >>
> >> I also found myself forced to change a CoProcessFunction to a
> >> KeyedCoProcessFunction (which it should have been).
> >>
> >> I also tried a few complex queries in the SQL console, and wrote a
> >> simple job using the State Processor API. Everything worked.
> >>
> >> David
> >>
> >>
> >> David Anderson | Training Coordinator
> >>
> >> Follow us @VervericaData
> >>
> >> --
> >> Join Flink Forward - The Apache Flink Conference
> >> Stream Processing | Event Driven | Real Time
> >>
> >>
> >> On Wed, Aug 21, 2019 at 1:45 PM Aljoscha Krettek 
> >> wrote:
> >>> +1
> >>>
> >>> I checked the last RC on a GCE cluster and was satisfied with the
> >> testing. The cherry-picked commits didn’t change anything related, so
> I’m
> >> forwarding my vote from there.
> >>> Aljoscha
> >>>
>  On 21. Aug 2019, at 13:34, Chesnay Schepler 
> >> wrote:
>  +1 (binding)
> 
>  On 21/08/2019 08:09, Bowen Li wrote:
> > +1 non-binding
> >
> > - built from source with default profile
> > - manually ran SQL and Table API tests for Flink's metadata
> >> integration
> > with Hive Metastore in local cluster
> > - manually ran SQL tests for batch capability with Blink planner and
> >> Hive
> > integration (source/sink/udf) in local cluster
> >  - file formats include: csv, orc, parquet
> >
> >
> > On Tue, Aug 20, 2019 at 10:23 PM Gary Yao 
> wrote:
> >
> >> +1 (non-binding)
> >>
> >> Reran Jepsen tests 10 times.
> >>
> >> On Wed, Aug 21, 2019 at 5:35 AM vino yang 
> >> wrote:
> >>> +1 (non-binding)
> >>>
> >>> - checkout source code and build successfully
> >>> - started a local cluster and ran some example jobs successfully
> >>> - verified signatures and hashes
> >>> - checked release notes and post
> >>>
> >>> Best,
> >>> Vino
> >>>
> >>> On Wed, Aug 21, 2019 at 4:20 AM Stephan Ewen  wrote:
> >>>
>  +1 (binding)
> 
>    - Downloaded the binary release tarball
>    - started a standalone cluster with four nodes
>    - ran some examples through the Web UI
>    - checked the logs
>    - created a project from the Java quickstarts maven archetype
>    - ran a multi-stage DataSet job in batch mode
>    - killed a TaskManager and verified correct restart behavior,
> >> including
>  failover region backtracking
> 
> 
>  I found a few issues, and a common theme here is confusing error
> >>> reporting
>  and logging.
> 
>  (1) When testing batch failover and killing a TaskManager, the job
> >>> reports
>  as the failure cause "org.apache.flink.util.FlinkException: The
> >> assigned
>  slot 6d0e469d55a2630871f43ad0f89c786c_0 was removed."
>   I think that is a pretty bad error message, as a user I don't
> >> know
> >>> what
>  that means. Some internal book keeping thing?
>   You need to know a lot about Flink

Re: [DISCUSS][CODE STYLE] Breaking long function argument lists and chained method calls

2019-08-22 Thread Andrey Zagrebin
Hi Tison,

Regarding the automatic checks.
Yes, I suggest we conclude the discussion without the automatic checks.
As soon as we have more ideas/investigation about putting this into automation,
we can activate it and/or reconsider.
Nonetheless, I do not see any problem if we agree on this atm and make it
part of our code style recommendations.

Regarding putting the right parenthesis on the new line.
At the moment we do not use this approach in our code base. My personal
feeling is that it is not so often used in Java.
Anyways, I think this goes more in the direction of the second follow-up, to be
discussed separately:

   - indentation rules for the broken list of arguments of a declared function
   (atm we usually do: one indent and a newline before the body, or two indents)

or maybe we should rather name it: how to separate function declaration and
body (as you already mentioned it like this).

We can change this point:

   - The closing brace of the function argument list and the possible
   thrown exception list always stay on the line of the last argument

to

   - The possible thrown exception list is never broken and stays on the
   same last line

Then we can also adjust it if needed after the discussion about how to
separate function declaration and body.
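
For illustration, a sketch of how the adjusted rule would read in code (the
method, argument and exception names below are placeholders, not taken from
the Flink code base):

// Arguments are broken one per line with one extra indentation; the closing
// parenthesis, the unbroken throws list and the opening brace all stay on
// the line of the last argument.
public void copyData(
    java.io.InputStream source,
    java.io.OutputStream target,
    int bufferSize) throws java.io.IOException {
    byte[] buffer = new byte[bufferSize];
    int read;
    while ((read = source.read(buffer)) != -1) {
        target.write(buffer, 0, read);
    }
}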

Best,
Andrey






On Thu, Aug 22, 2019 at 9:05 AM Zili Chen  wrote:

> One more question: what difference do you see between
>
> public void func(
>     int arg1,
>     int arg2,
>     ...) throws E1, E2, E3 {
>     ...
> }
>
> and
>
> public void func(
>     int arg1,
>     int arg2,
>     ...
> ) throws E1, E2, E3 {
>     ...
> }
>
> I prefer the latter because the parentheses are aligned in a similar way,
> and the border between the declaration and the function body is clear.
>
>
> On Thu, Aug 22, 2019 at 9:53 AM Zili Chen  wrote:
>
> > Thanks Andrey for driving the discussion. Just for clarification,
> > what we conclude here are several guidelines without an automatic
> > checker/tool guarding them, right?
> >
> > Best,
> > tison.
> >
> >
> > On Wed, Aug 21, 2019 at 8:18 PM Andrey Zagrebin  wrote:
> >
> >> Hi All,
> >>
> >> I suggest we also conclude this discussion now.
> >>
> >> Breaking the line of too long statements (the line length limit is yet to be
> >> fully defined) to improve code readability in case of
> >>
> >>- Long function argument lists (declaration or call): void func(type1
> >>arg1, type2 arg2, ...)
> >>- Long sequence of chained calls:
> >>list.stream().map(...).reduce(...).collect(...)...
> >>
> >> Rules:
> >>
> >>- Break the list of arguments/calls if the line exceeds limit or
> >> earlier
> >>if you believe that the breaking would improve the code readability
> >>- If you break the line then each argument/call should have a
> separate
> >>line, including the first one
> >>- Each new line argument/call should have one extra indentation
> >> relative
> >>to the line of the parent function name or called entity
> >>- The opening brace always stays on the line of the parent function
> >> name
> >>- The closing brace of the function argument list and the possible
> >>thrown exception list always stay on the line of the last argument
> >>- The dot of a chained call is always on the line of that chained
> call
> >>proceeding the call at the beginning
> >>
> >> Examples of breaking:
> >>
> >>- Function arguments
> >>
> >> public void func(
> >>     int arg1,
> >>     int arg2,
> >>     ...) throws E1, E2, E3 {
> >>     ...
> >> }
> >>
> >>
> >>- Chained method calls:
> >>
> >> values
> >>     .stream()
> >>     .map(...)
> >>     .collect(...);
> >>
> >>
> >> I suggest we spawn separate discussion threads (can do as a follow-up)
> >> about:
> >>
> >>- the hard line length limit in Java, possibly to confirm it also for
> >>Scala (cc @Tison)
> >>- indentation rules for the broken list of a declared function
> >> arguments
> >>
> >> If there are no more comments/objections/concerns, I will open a PR to
> >> capture the discussion outcome.
> >>
> >> Best,
> >> Andrey
> >>
> >>
> >>
> >> On Wed, Aug 21, 2019 at 8:57 AM Zili Chen  wrote:
> >>
> >> > Implementation question: how do we apply the line length rules?
> >> >
> >> > If we just turn on the checkstyle rule "LineLength" then a huge
> >> > effort is required to break the lines that violate the rule. If
> >> > we use an auto-formatter here then it possibly breaks lines
> >> > "just at the position" awfully.
> >> >
> >> > Is it possible to require conformance to the rule only for the files
> >> > touched by a pull request?
> >> >
> >> > Best,
> >> > tison.
> >> >
> >> >
> >> > On Tue, Aug 20, 2019 at 5:22 PM Yu Li  wrote:
> >> >
> >> > > I second Stephan's summary, and to be more explicit, +1 on:
> >> > > - Set a hard line length limit
> >> > > - Allow arguments on the same line if below length limit
> >> > > - With consistent argument breaking when that length is exceeded
> >> > > - Developers can break before that if they feel it helps with readability

Re: CiBot Update

2019-08-22 Thread Till Rohrmann
Thanks for the continuous work on the CiBot Chesnay!

Cheers,
Till

On Thu, Aug 22, 2019 at 9:47 AM Jark Wu  wrote:

> Great work! Thanks Chesnay!
>
>
>
> On Thu, 22 Aug 2019 at 15:42, Xintong Song  wrote:
>
> > The re-triggering travis feature is so convenient. Thanks Chesnay~!
> >
> > Thank you~
> >
> > Xintong Song
> >
> >
> >
> > On Thu, Aug 22, 2019 at 9:26 AM Stephan Ewen  wrote:
> >
> > > Nice, thanks!
> > >
> > > On Thu, Aug 22, 2019 at 3:59 AM Zili Chen 
> wrote:
> > >
> > > > Thanks for your announcement. Nice work!
> > > >
> > > > Best,
> > > > tison.
> > > >
> > > >
> > > > On Thu, Aug 22, 2019 at 8:14 AM vino yang  wrote:
> > > >
> > > > > +1 for "@flinkbot run travis", it is very convenient.
> > > > >
> > > > > On Wed, Aug 21, 2019 at 9:12 PM Chesnay Schepler  wrote:
> > > > >
> > > > > > Hi everyone,
> > > > > >
> > > > > > this is an update on recent changes to the CI bot.
> > > > > >
> > > > > >
> > > > > > The bot now cancels builds if a new commit was added to a PR, and
> > > > > > cancels all builds if the PR was closed.
> > > > > > (This was implemented a while ago; I'm just mentioning it again
> for
> > > > > > discoverability)
> > > > > >
> > > > > >
> > > > > > Additionally, starting today you can now re-trigger a Travis run
> by
> > > > > > writing a comment "@flinkbot run travis"; this means you no
> longer
> > > have
> > > > > > to commit an empty commit or do other shenanigans to get another
> > > build
> > > > > > running.
> > > > > > Note that this will /not/ work if the PR was re-opened, until at
> > > least
> > > > 1
> > > > > > new build was triggered by a push.
> > > > > >
> > > > >
> > > >
> > >
> >
>


Re: CiBot Update

2019-08-22 Thread zhijiang
It is really very convenient now. Valuable work, Chesnay!

Best,
Zhijiang
--
From:Till Rohrmann 
Send Time:2019-08-22 (Thu) 10:13
To:dev 
Subject:Re: CiBot Update

Thanks for the continuous work on the CiBot Chesnay!

Cheers,
Till

On Thu, Aug 22, 2019 at 9:47 AM Jark Wu  wrote:

> Great work! Thanks Chesnay!
>
>
>
> On Thu, 22 Aug 2019 at 15:42, Xintong Song  wrote:
>
> > The re-triggering travis feature is so convenient. Thanks Chesnay~!
> >
> > Thank you~
> >
> > Xintong Song
> >
> >
> >
> > On Thu, Aug 22, 2019 at 9:26 AM Stephan Ewen  wrote:
> >
> > > Nice, thanks!
> > >
> > > On Thu, Aug 22, 2019 at 3:59 AM Zili Chen 
> wrote:
> > >
> > > > Thanks for your announcement. Nice work!
> > > >
> > > > Best,
> > > > tison.
> > > >
> > > >
> > > > On Thu, Aug 22, 2019 at 8:14 AM vino yang  wrote:
> > > >
> > > > > +1 for "@flinkbot run travis", it is very convenient.
> > > > >
> > > > > On Wed, Aug 21, 2019 at 9:12 PM Chesnay Schepler  wrote:
> > > > >
> > > > > > Hi everyone,
> > > > > >
> > > > > > this is an update on recent changes to the CI bot.
> > > > > >
> > > > > >
> > > > > > The bot now cancels builds if a new commit was added to a PR, and
> > > > > > cancels all builds if the PR was closed.
> > > > > > (This was implemented a while ago; I'm just mentioning it again
> for
> > > > > > discoverability)
> > > > > >
> > > > > >
> > > > > > Additionally, starting today you can now re-trigger a Travis run
> by
> > > > > > writing a comment "@flinkbot run travis"; this means you no
> longer
> > > have
> > > > > > to commit an empty commit or do other shenanigans to get another
> > > build
> > > > > > running.
> > > > > > Note that this will /not/ work if the PR was re-opened, until at
> > > least
> > > > 1
> > > > > > new build was triggered by a push.
> > > > > >
> > > > >
> > > >
> > >
> >
>



[jira] [Created] (FLINK-13815) Implement the SpaceAllocator

2019-08-22 Thread Yu Li (Jira)
Yu Li created FLINK-13815:
-

 Summary: Implement the SpaceAllocator
 Key: FLINK-13815
 URL: https://issues.apache.org/jira/browse/FLINK-13815
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / State Backends
Reporter: Yu Li


As described in the design doc, we need a {{SpaceAllocator}} to allocate space 
on off-heap/disk to store the spilled key-group data.
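
As a rough illustration of the intended role only (the interface and method
names below are hypothetical, not taken from the design doc):

/** Hypothetical sketch: hands out chunks of off-heap or on-disk space for spilled key-group data. */
public interface SpaceAllocator {

	/** Allocates a chunk of the given size and returns an offset/handle the caller can write the spilled data to. */
	long allocate(int size);

	/** Releases a previously allocated chunk identified by its offset/handle. */
	void free(long offset);
}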



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


Re: [VOTE] Flink Project Bylaws

2019-08-22 Thread Becket Qin
Hi All, so far the votes count as follows:

+1 (Binding): 13 (Aljoscha, Fabian, Kurt, Till, Timo, Max, Stephan, Gordon,
Robert, Ufuk, Chesnay, Shaoxuan, Henry)
+0 (Binding): 1 (Thomas)

+1 (Non-Binding): 10 (Hequn, Vino, Piotr, Dawid, Xintong, Yu, Jingsong,
Yun, Jark, Biao)

Given that more than 6 days have passed and there are not sufficient +1s to
pass the vote, I am reaching out to the binding voters that have not voted
yet here.

@Greg, @Gyula, @Kostas, @Alan, @jincheng, @Marton, @Sebastian,
@Vasiliki, @Daniel

Would you have time to check the Flink bylaws proposal and vote on it? The
bylaws wiki is following:
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=120731026

We are following the 2/3 majority voting process. This is the first attempt
of reaching out to the PMCs that have not voted yet. We will make another
attempt after 7 days if the result of the vote is still not determined by
then.

Also CCing private@ in case one did not setup the Apache email forwarding.

Thanks,

Jiangjie (Becket) Qin



On Thu, Aug 22, 2019 at 8:03 AM Henry Saputra 
wrote:

> Oh yeah,  +1 LGTM
>
> Thanks for working on this.
>
> - Henry
>
> On Tue, Aug 20, 2019 at 2:17 AM Becket Qin  wrote:
>
> > Thanks for sharing your thoughts, Thomas, Henry and Stephan. I also think
> > the committers are supposed to be mature enough to know when a review on
> > their own patch is needed.
> >
> > @Henry, just want to confirm, are you +1 on the proposed bylaws?
> >
> > Thanks,
> >
> > Jiangjie (Becket) Qin
> >
> > On Tue, Aug 20, 2019 at 10:54 AM Stephan Ewen  wrote:
> >
> > > I see it somewhat similar to Henry.
> > >
> > > Generally, all committers should go for a review by another committer,
> > > unless it is a trivial comment or style fix. I personally do that, even
> > > though being one of the committers that have been with the project
> > longest.
> > >
> > > For now, I was hoping though that we have a mature enough community
> that
> > > this "soft rule" is enough. Whenever possible, working based on trust
> > with
> > > soft processes beats working with hard processes. We can still revisit
> > this
> > > in case we see that it does not work out.
> > >
> > >
> > > On Mon, Aug 19, 2019 at 10:21 PM Henry Saputra <
> henry.sapu...@gmail.com>
> > > wrote:
> > >
> > > > One of the perks of being a committer is being able to commit code
> > > > without asking another committer. Having said that, I think we rely on
> > > > maturity of the committers to know when to ask for reviews and when
> to
> > > > commit directly.
> > > >
> > > > For example, if someone just change typos on comments or simple
> rename
> > of
> > > > internal variables, I think we could trust the committer to safely
> > commit
> > > > the changes. When the changes will have effect of changing or
> introduce
> > > new
> > > > flows of the code, that's when reviews are needed and strongly
> > > encouraged.
> > > > I think the balance is needed for this.
> > > >
> > > > PMCs have the ability and right to revert changes in source repo as
> > > > necessary.
> > > >
> > > > - Henry
> > > >
> > > > On Sun, Aug 18, 2019 at 9:23 PM Thomas Weise  wrote:
> > > >
> > > > > +0 (binding)
> > > > >
> > > > > I don't think committers should be allowed to approve their own
> > > changes.
> > > > I
> > > > > would prefer if non-committer contributors can approve committer
> PRs
> > as
> > > > > that would encourage more participation in code review and ability
> to
> > > > > contribute.
> > > > >
> > > > >
> > > > > On Fri, Aug 16, 2019 at 9:02 PM Shaoxuan Wang  >
> > > > wrote:
> > > > >
> > > > > > +1 (binding)
> > > > > >
> > > > > > On Fri, Aug 16, 2019 at 7:48 PM Chesnay Schepler <
> > ches...@apache.org
> > > >
> > > > > > wrote:
> > > > > >
> > > > > > > +1 (binding)
> > > > > > >
> > > > > > > Although I think it would be a good idea to always cc
> > > > > > > priv...@flink.apache.org when modifying bylaws, if anything to
> > > speed
> > > > > up
> > > > > > > the voting process.
> > > > > > >
> > > > > > > On 16/08/2019 11:26, Ufuk Celebi wrote:
> > > > > > > > +1 (binding)
> > > > > > > >
> > > > > > > > – Ufuk
> > > > > > > >
> > > > > > > >
> > > > > > > > On Wed, Aug 14, 2019 at 4:50 AM Biao Liu  >
> > > > wrote:
> > > > > > > >
> > > > > > > >> +1 (non-binding)
> > > > > > > >>
> > > > > > > >> Thanks for pushing this!
> > > > > > > >>
> > > > > > > >> Thanks,
> > > > > > > >> Biao /'bɪ.aʊ/
> > > > > > > >>
> > > > > > > >>
> > > > > > > >>
> > > > > > > >> On Wed, 14 Aug 2019 at 09:37, Jark Wu 
> > wrote:
> > > > > > > >>
> > > > > > > >>> +1 (non-binding)
> > > > > > > >>>
> > > > > > > >>> Best,
> > > > > > > >>> Jark
> > > > > > > >>>
> > > > > > > >>> On Wed, 14 Aug 2019 at 09:22, Kurt Young  >
> > > > wrote:
> > > > > > > >>>
> > > > > > >  +1 (binding)
> > > > > > > 
> > > > > > >  Best,
> > > > > > >  Kurt
> > > > > > > 
> > > > > > > 
> > > > > > >  On Wed, Aug 14, 2019 at 1:34 AM Yun Tang <
> myas...@live.c

[jira] [Created] (FLINK-13816) Long job names result in a very ugly table listing the completed jobs in the web UI

2019-08-22 Thread David Anderson (Jira)
David Anderson created FLINK-13816:
--

 Summary: Long job names result in a very ugly table listing the 
completed jobs in the web UI
 Key: FLINK-13816
 URL: https://issues.apache.org/jira/browse/FLINK-13816
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Web Frontend
Affects Versions: 1.9.0
Reporter: David Anderson
 Attachments: Screen Shot 2019-08-21 at 1.20.45 PM.png

Although this is a UI flaw, it's bad enough that I've classified it as a bug.

The horizontal space used for the list of jobs in the new, angular-based web 
frontend needs to be distributed more fairly (see the attached image). Some 
min-width for each of the columns would be one solution.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


Re: CiBot Update

2019-08-22 Thread Zhu Zhu
Thanks Chesnay for the CI improvement!
It is very helpful.

Thanks,
Zhu Zhu

On Thu, Aug 22, 2019 at 4:18 PM zhijiang  wrote:

> It is really very convenient now. Valuable work, Chesnay!
>
> Best,
> Zhijiang
> --
> From:Till Rohrmann 
> Send Time:2019-08-22 (Thu) 10:13
> To:dev 
> Subject:Re: CiBot Update
>
> Thanks for the continuous work on the CiBot Chesnay!
>
> Cheers,
> Till
>
> On Thu, Aug 22, 2019 at 9:47 AM Jark Wu  wrote:
>
> > Great work! Thanks Chesnay!
> >
> >
> >
> > On Thu, 22 Aug 2019 at 15:42, Xintong Song 
> wrote:
> >
> > > The re-triggering travis feature is so convenient. Thanks Chesnay~!
> > >
> > > Thank you~
> > >
> > > Xintong Song
> > >
> > >
> > >
> > > On Thu, Aug 22, 2019 at 9:26 AM Stephan Ewen  wrote:
> > >
> > > > Nice, thanks!
> > > >
> > > > On Thu, Aug 22, 2019 at 3:59 AM Zili Chen 
> > wrote:
> > > >
> > > > > Thanks for your announcement. Nice work!
> > > > >
> > > > > Best,
> > > > > tison.
> > > > >
> > > > >
> > > > > On Thu, Aug 22, 2019 at 8:14 AM vino yang  wrote:
> > > > >
> > > > > > +1 for "@flinkbot run travis", it is very convenient.
> > > > > >
> > > > > > On Wed, Aug 21, 2019 at 9:12 PM Chesnay Schepler  wrote:
> > > > > >
> > > > > > > Hi everyone,
> > > > > > >
> > > > > > > this is an update on recent changes to the CI bot.
> > > > > > >
> > > > > > >
> > > > > > > The bot now cancels builds if a new commit was added to a PR,
> and
> > > > > > > cancels all builds if the PR was closed.
> > > > > > > (This was implemented a while ago; I'm just mentioning it again
> > for
> > > > > > > discoverability)
> > > > > > >
> > > > > > >
> > > > > > > Additionally, starting today you can now re-trigger a Travis
> run
> > by
> > > > > > > writing a comment "@flinkbot run travis"; this means you no
> > longer
> > > > have
> > > > > > > to commit an empty commit or do other shenanigans to get
> another
> > > > build
> > > > > > > running.
> > > > > > > Note that this will /not/ work if the PR was re-opened, until
> at
> > > > least
> > > > > 1
> > > > > > > new build was triggered by a push.
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
>


[jira] [Created] (FLINK-13817) Expose whether web submissions are enabled

2019-08-22 Thread Chesnay Schepler (Jira)
Chesnay Schepler created FLINK-13817:


 Summary: Expose whether web submissions are enabled
 Key: FLINK-13817
 URL: https://issues.apache.org/jira/browse/FLINK-13817
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / REST
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.10.0






--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (FLINK-13818) Check whether web submissions are enabled

2019-08-22 Thread Chesnay Schepler (Jira)
Chesnay Schepler created FLINK-13818:


 Summary: Check whether web submissions are enabled
 Key: FLINK-13818
 URL: https://issues.apache.org/jira/browse/FLINK-13818
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Web Frontend
Reporter: Chesnay Schepler
 Fix For: 1.10.0






--
This message was sent by Atlassian Jira
(v8.3.2#803003)


Re: [DISCUSS] Flink project bylaws

2019-08-22 Thread Robert Metzger
I have started a Wiki page (editable by all) for collecting ideas for
Bylaws changes, so that we can batch changes together and then vote on
them:
https://cwiki.apache.org/confluence/display/FLINK/Ideas+for+Bylaw+changes

On Tue, Aug 13, 2019 at 1:41 PM Becket Qin  wrote:

> Hi Robert,
>
> Thanks for helping apply the changes. I agree we should freeze the change to
> the bylaws page starting from now. For this particular addition of
> clarification, I'll send a notice in the voting thread to let those who have
> already voted know.
>
> Thanks,
>
> Jiangjie (Becket) Qin
>
> On Tue, Aug 13, 2019 at 1:29 PM Robert Metzger 
> wrote:
>
> > Hi Becket,
> > I've applied the proposed change to the document:
> >
> >
> https://cwiki.apache.org/confluence/pages/diffpagesbyversion.action?pageId=120731026&selectedPageVersions=20&selectedPageVersions=19
> >
> > I would propose to stop accepting changes to the document now, as it
> might
> > start a discussion about the validity of the vote and the bylaws itself.
> > Changes to the document require a 2/3 majority.
> >
> >
> > On Tue, Aug 13, 2019 at 12:20 PM Maximilian Michels 
> > wrote:
> >
> > > Hi Becket,
> > >
> > > Thanks for clarifying and updating the draft. The changes look good to
> > me.
> > >
> > > I don't feel strong about a 2/3 majority in case of committer/PMC
> > > removal. Like you pointed out, both provide a significant hurdle due to
> > > possible vetos or a 2/3 majority.
> > >
> > > Thanks,
> > > Max
> > >
> > > On 13.08.19 10:36, Becket Qin wrote:
> > > > Piotr just reminded me that there was a previous suggestion to
> clarify
> > a
> > > > committer's responsibility when commit his/her own patch. So I'd like
> > to
> > > > incorporate that in the bylaws. The additional clarification is
> > following
> > > > in bold and italic font.
> > > >
> > > > one +1 from a committer followed by a Lazy approval (not counting the vote
> > > > of the contributor), moving to lazy majority if a -1 is received.
> > > >>
> > > >
> > > >
> > > > Note that this implies that committers can +1 their own commits and merge
> > > > right away. However, the committers should use their best judgement to
> > > > respect the components expertise and ongoing development plan.
> > > >
> > > >
> > > > This does not really change any of the existing bylaws, just about
> > > > clarification.
> > > >
> > > > If there is no objection to this additional clarification, after the
> > > bylaws
> > > > wiki is updated, I'll send an update notice to the voting thread to
> > > inform
> > > > those who already voted about this addition.
> > > >
> > > > Thanks,
> > > >
> > > > Jiangjie (Becket) Qin
> > > >
> > > > On Mon, Aug 12, 2019 at 11:19 AM Becket Qin 
> > > wrote:
> > > >
> > > >> Hi Maximillian,
> > > >>
> > > >> Thanks for the feedback. Please see the reply below:
> > > >>
> > > >> Step 2 should include a personal email to the PMC members in
> question.
> > > >>
> > > >> I'm afraid reminders inside the vote thread could be overlooked
> > easily.
> > > >>
> > > >>
> > > >> This is exactly what I meant to say by "reach out" :) I just made it
> > > more
> > > >> explicit.
> > > >>
> > > >> The way the terms are described in the draft, the consensus is
> "lazy",
> > > >>> i.e. requires only 3 binding votes. I'd suggest renaming it to
> "Lazy
> > > >>> Consensus". This is in line with the other definitions such as
> "Lazy
> > > >>> Majority".
> > > >>
> > > >> It was initially called "lazy consensus", but Robert pointed out
> that
> > > >> "lazy consensus" actually means something different in ASF term [1].
> > > >> Here "lazy" pretty much means "assume everyone is +1 unless someone
> > says
> > > >> otherwise". This means any vote that requires a minimum number of +1
> > is
> > > not
> > > >> really a "lazy" vote.
> > > >>
> > > >> Removing a committer / PMC member only requires 3 binding votes. I'd
> > > >>> expect an important action like this to require a 2/3 majority.
> > > >>
> > > >> Personally I think consensus is good enough here. PMC members can
> > cast a
> > > >> veto if they disagree about the removal. In some sense, it is more
> > > >> difficult than with 2/3 majority to remove a committer / PMC member.
> > > Also,
> > > >> it might be a hard decision for some PMC members if they have never
> > > worked
> > > >> with the person in question. That said, I am OK to change it to 2/3
> > > >> majority as this will happen very rarely.
> > > >>
> > > >> Thanks,
> > > >>
> > > >> Jiangjie (Becket) Qin
> > > >>
> > > >> [1] https://www.apache.org/foundation/voting.html#LazyConsensus
> > > >>
> > > >> On Sun, Aug 11, 2019 at 5:00 PM Maximilian Michels 
> > > wrote:
> > > >>
> > > >>> I'm a bit late to the discussion here. Three suggestions:
> > > >>>
> > > >>> 1) Procedure for "insufficient active binding voters to reach 2/3
> > > majority
> > > >>>
> > >  1. Wait until the minimum length of the voting passes.
> > >  2. Publicly reach out to the remaini

[jira] [Created] (FLINK-13819) Introduce RpcEndpoint State

2019-08-22 Thread Andrey Zagrebin (Jira)
Andrey Zagrebin created FLINK-13819:
---

 Summary: Introduce RpcEndpoint State
 Key: FLINK-13819
 URL: https://issues.apache.org/jira/browse/FLINK-13819
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Coordination
Reporter: Andrey Zagrebin
Assignee: Andrey Zagrebin
 Fix For: 1.10.0, 1.9.1


To better reflect the lifecycle of RpcEndpoint, we suggest to introduce its 
state:
 * created
 * started
 * stopping

We can use the state e.g. to make decisions about how to react to API calls if 
it is already known that the RpcEndpoint is terminating, as required e.g. for 
FLINK-13769.
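
A minimal sketch of what such a state could look like (the enum and the
comments below are only illustrative, not the actual implementation):

public enum RpcEndpointState {
	CREATED,  // endpoint constructed, not yet started
	STARTED,  // endpoint is running and serving RPC calls
	STOPPING  // termination in progress, incoming calls may be rejected
}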



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[ANNOUNCE] Apache Flink 1.9.0 released

2019-08-22 Thread Tzu-Li (Gordon) Tai
The Apache Flink community is very happy to announce the release of Apache
Flink 1.9.0, which is the latest major release.

Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applications.

The release is available for download at:
https://flink.apache.org/downloads.html

Please check out the release blog post for an overview of the improvements
for this new major release:
https://flink.apache.org/news/2019/08/22/release-1.9.0.html

The full release notes are available in Jira:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12344601

We would like to thank all contributors of the Apache Flink community who
made this release possible!

Cheers,
Gordon


Re: [ANNOUNCE] Apache Flink 1.9.0 released

2019-08-22 Thread Jark Wu
Congratulations!

Thanks Gordon and Kurt for being the release manager and thanks a lot to
all the contributors.


Cheers,
Jark

On Thu, 22 Aug 2019 at 20:06, Oytun Tez  wrote:

> Congratulations team; thanks for the update, Gordon.
>
> ---
> Oytun Tez
>
> *M O T A W O R D*
> The World's Fastest Human Translation Platform.
> oy...@motaword.com — www.motaword.com
>
>
> On Thu, Aug 22, 2019 at 8:03 AM Tzu-Li (Gordon) Tai 
> wrote:
>
>> The Apache Flink community is very happy to announce the release of
>> Apache Flink 1.9.0, which is the latest major release.
>>
>> Apache Flink® is an open-source stream processing framework for
>> distributed, high-performing, always-available, and accurate data streaming
>> applications.
>>
>> The release is available for download at:
>> https://flink.apache.org/downloads.html
>>
>> Please check out the release blog post for an overview of the
>> improvements for this new major release:
>> https://flink.apache.org/news/2019/08/22/release-1.9.0.html
>>
>> The full release notes are available in Jira:
>>
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12344601
>>
>> We would like to thank all contributors of the Apache Flink community who
>> made this release possible!
>>
>> Cheers,
>> Gordon
>>
>


[NOTICE] GitHub service interruption

2019-08-22 Thread Chesnay Schepler

Hello,

GitHub is currently experiencing problems; so far the one issue we saw ourselves 
is that Travis builds aren't triggered if a commit is pushed. This 
affects builds both for branches and pull requests; cron jobs may be fine.


@Committers: Please keep this in mind when merging things, as any issues 
on master will likely be detected later than usual.




Re: [ANNOUNCE] Apache Flink 1.9.0 released

2019-08-22 Thread JingsongLee
Congratulations~~~ Thanks gordon and everyone~

Best,
Jingsong Lee


--
From:Oytun Tez 
Send Time:2019-08-22 (Thu) 14:06
To:Tzu-Li (Gordon) Tai 
Cc:dev ; user ; announce 

Subject:Re: [ANNOUNCE] Apache Flink 1.9.0 released

Congratulations team; thanks for the update, Gordon.

---
Oytun Tez

M O T A W O R D
The World's Fastest Human Translation Platform.
oy...@motaword.com — www.motaword.com

On Thu, Aug 22, 2019 at 8:03 AM Tzu-Li (Gordon) Tai  wrote:

The Apache Flink community is very happy to announce the release of Apache 
Flink 1.9.0, which is the latest major release.

Apache Flink® is an open-source stream processing framework for distributed, 
high-performing, always-available, and accurate data streaming applications.

The release is available for download at:
https://flink.apache.org/downloads.html

Please check out the release blog post for an overview of the improvements for 
this new major release:
https://flink.apache.org/news/2019/08/22/release-1.9.0.html

The full release notes are available in Jira:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12344601

We would like to thank all contributors of the Apache Flink community who made 
this release possible!

Cheers,
Gordon


Re: [ANNOUNCE] Apache Flink 1.9.0 released

2019-08-22 Thread Zili Chen
Congratulations!

Thanks Gordon and Kurt for being the release manager.

Thanks all the contributors who have made this release possible.

Best,
tison.


On Thu, Aug 22, 2019 at 8:11 PM Jark Wu  wrote:

> Congratulations!
>
> Thanks Gordon and Kurt for being the release manager and thanks a lot to
> all the contributors.
>
>
> Cheers,
> Jark
>
> On Thu, 22 Aug 2019 at 20:06, Oytun Tez  wrote:
>
>> Congratulations team; thanks for the update, Gordon.
>>
>> ---
>> Oytun Tez
>>
>> *M O T A W O R D*
>> The World's Fastest Human Translation Platform.
>> oy...@motaword.com — www.motaword.com
>>
>>
>> On Thu, Aug 22, 2019 at 8:03 AM Tzu-Li (Gordon) Tai 
>> wrote:
>>
>>> The Apache Flink community is very happy to announce the release of
>>> Apache Flink 1.9.0, which is the latest major release.
>>>
>>> Apache Flink® is an open-source stream processing framework for
>>> distributed, high-performing, always-available, and accurate data streaming
>>> applications.
>>>
>>> The release is available for download at:
>>> https://flink.apache.org/downloads.html
>>>
>>> Please check out the release blog post for an overview of the
>>> improvements for this new major release:
>>> https://flink.apache.org/news/2019/08/22/release-1.9.0.html
>>>
>>> The full release notes are available in Jira:
>>>
>>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12344601
>>>
>>> We would like to thank all contributors of the Apache Flink community
>>> who made this release possible!
>>>
>>> Cheers,
>>> Gordon
>>>
>>


Re: [DISCUSS] FLIP-54: Evolve ConfigOption and Configuration

2019-08-22 Thread Timo Walther

Hi everyone,

thanks for all the feedback we have received online and offline. It 
showed that many people support the idea of evolving the Flink 
configuration functionality. I'm almost sure that this FLIP will not 
solve all issues but at least will improve the current status.


We've updated the document and replaced the Correlation part with the 
concept of a ConfigOptionGroup that can provide all available options of 
a group plus custom group validators for eager validation. For now, this 
eager group validation will only be used at certain locations in the 
Flink code but it prepares for maybe validating the entire global 
configuration before submitting a job in the future.


Please take another look if you find time. I hope we can proceed with 
the voting process if there are no objections.


Regards,
Timo

Am 19.08.19 um 12:54 schrieb Timo Walther:

Hi Stephan,

thanks for your suggestions. Let me give you some background about the 
decisions made in this FLIP:


1. Goal: The FLIP is labelled "evolve" not "rework" because we did not 
want to change the entire configuration infrastructure. Both for 
backwards-compatibility reasons and the amount of work that would be 
required to update all options. If our goal is to rework the 
configuration option entirely, I might suggest to switch to JSON 
format with JSON schema and JSON validator. However, setting 
properties in a CLI or web interface becomes more tricky the more 
nested structures are allowed.


2. Class-based Options: The current ConfigOption class is centered 
around Java classes where T is determined by the default value. The 
FLIP just makes this more explicit by offering an explicit `intType()` 
method etc. The current design of validators centered around Java 
classes makes it possible to have typical domain validators baked by 
generics as you suggested. If we introduce types such as "quantity 
with measure and unit" we still need to get a class out of this option 
at the end, so why changing a proven concept?
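
For illustration, a sketch of how such an explicitly typed option could be
declared (the builder method names follow the FLIP proposal; the option key
and description are made up):

import org.apache.flink.configuration.ConfigOption;
import org.apache.flink.configuration.ConfigOptions;

public class ExampleOptions {
	// Made-up option, only to show the explicit intType() builder instead of
	// deriving the type from the default value.
	public static final ConfigOption<Integer> RETRY_COUNT = ConfigOptions
		.key("example.retry-count")
		.intType()
		.defaultValue(3)
		.withDescription("How often the example operation is retried.");
}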


3. List Options: The `isList` prevents having arbitrary nesting. As 
Dawid mentioned, we kept human readability in mind. Every atomic 
option like "key=12" can be represented by a list "keys=12;13". But we 
don't want to go further; esp. no nesting. A dedicated list option 
would start making this more complicated such as 
"ListOption(ObjectOption(ListOption(IntOption, ...), 
StringOption(...)))", do we want that?


4. Correlation: The correlation part is one of the suggestions that I 
like least in the document. We can also discuss removing it entirely, 
but I think it solves the use case of relating options with each other 
in a flexible way right next to the actual option. Instead of being 
hidden in some component initialization, we should put it close to the 
option to also perform validation eagerly instead of failing at 
runtime when the option is accessed the first time.


Regards,
Timo


Am 18.08.19 um 23:32 schrieb Stephan Ewen:

A "List Type" sounds like a good direction to me.

The comment on the type system was a bit brief, I agree. The idea is to see
if something like that can ease validation. Especially the correlation
system seems quite complex (proxies to work around order of initialization).


For example, let's assume we don't think primarily about "java types" but
would define types as one of the following (just examples, haven't thought
all the details through):

   (a) category type: implies string, and a fixed set of possible values.
Those would be passed and naturally make it into the docs and validation.
Maps to a String or Enum in Java.

   (b) numeric integer type: implies long (or optionally integer, if we want
to automatically check overflow / underflow). Would take typical domain
validators, like non-negative, etc.

   (c) numeric real type: same as above (double or float)

   (d) numeric interval type: either defined as an interval, or references
another parameter by key. Validation by valid interval.

   (e) quantity: a measure and a unit. Separately parsable. The measure's
type could be any of the numeric types above, with the same validation rules.


With a system like the above, would we still need correlation validators? Are
there still cases that we need to catch early (config loading) or are the
remaining cases sufficiently rare and runtime or setup specific, that it is
fine to handle them in component initialization?


On Sun, Aug 18, 2019 at 6:36 PM Dawid Wysakowicz  wrote:


Hi Stephan,

Thank you for your opinion.

Actually list/composite types are the topics we spent the most time on.
I understand that from the perspective of a full blown type system,
a field like isList may look weird. Please let me elaborate a bit more
on the reason behind it though. Maybe we weren't clear enough about it
in the FLIP. The key feature of all the conifg options is that they 
must

have a string representation as they might come from a configuration
file. Moreover it must be a human reada

Re: [ANNOUNCE] Apache Flink 1.9.0 released

2019-08-22 Thread Timo Walther

Thanks to everyone who contributed to this release. Great team work!

Regards,
Timo

Am 22.08.19 um 14:16 schrieb JingsongLee:

Congratulations~~~ Thanks gordon and everyone~

Best,
Jingsong Lee


--
From:Oytun Tez 
Send Time:2019年8月22日(星期四) 14:06
To:Tzu-Li (Gordon) Tai 
Cc:dev ; user ; announce 

Subject:Re: [ANNOUNCE] Apache Flink 1.9.0 released

Congratulations team; thanks for the update, Gordon.

---
Oytun Tez

M O T A W O R D
The World's Fastest Human Translation Platform.
oy...@motaword.com — www.motaword.com

On Thu, Aug 22, 2019 at 8:03 AM Tzu-Li (Gordon) Tai  wrote:

The Apache Flink community is very happy to announce the release of Apache 
Flink 1.9.0, which is the latest major release.

Apache Flink® is an open-source stream processing framework for distributed, 
high-performing, always-available, and accurate data streaming applications.

The release is available for download at:
https://flink.apache.org/downloads.html

Please check out the release blog post for an overview of the improvements for 
this new major release:
https://flink.apache.org/news/2019/08/22/release-1.9.0.html

The full release notes are available in Jira:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12344601

We would like to thank all contributors of the Apache Flink community who made 
this release possible!

Cheers,
Gordon





Re: [ANNOUNCE] Apache Flink 1.9.0 released

2019-08-22 Thread Till Rohrmann
Great news! Thanks a lot to everyone who helped making this release
possible and in particular to our release managers Gordon and Kurt for the
hard work.

Cheers,
Till

On Thu, Aug 22, 2019 at 2:22 PM Timo Walther  wrote:

> Thanks to everyone who contributed to this release. Great team work!
>
> Regards,
> Timo
>
> Am 22.08.19 um 14:16 schrieb JingsongLee:
> > Congratulations~~~ Thanks gordon and everyone~
> >
> > Best,
> > Jingsong Lee
> >
> >
> > --
> > From:Oytun Tez 
> > Send Time:2019年8月22日(星期四) 14:06
> > To:Tzu-Li (Gordon) Tai 
> > Cc:dev ; user ; announce <
> annou...@apache.org>
> > Subject:Re: [ANNOUNCE] Apache Flink 1.9.0 released
> >
> > Congratulations team; thanks for the update, Gordon.
> >
> > ---
> > Oytun Tez
> >
> > M O T A W O R D
> > The World's Fastest Human Translation Platform.
> > oy...@motaword.com — www.motaword.com
> >
> > On Thu, Aug 22, 2019 at 8:03 AM Tzu-Li (Gordon) Tai 
> wrote:
> >
> > The Apache Flink community is very happy to announce the release of
> Apache Flink 1.9.0, which is the latest major release.
> >
> > Apache Flink® is an open-source stream processing framework for
> distributed, high-performing, always-available, and accurate data streaming
> applications.
> >
> > The release is available for download at:
> > https://flink.apache.org/downloads.html
> >
> > Please check out the release blog post for an overview of the
> improvements for this new major release:
> > https://flink.apache.org/news/2019/08/22/release-1.9.0.html
> >
> > The full release notes are available in Jira:
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12344601
> >
> > We would like to thank all contributors of the Apache Flink community
> who made this release possible!
> >
> > Cheers,
> > Gordon
>
>
>


[DISCUSS] Enhance Support for Multicast Communication Pattern

2019-08-22 Thread Yun Gao
Hi everyone,
  In some scenarios we have met a requirement that some operators want to send 
records to their downstream operators with a multicast communication pattern. 
In detail, for some records, the operators want to send them according to the 
partitioner (for example, Rebalance), and for some other records, the operators 
want to send them to all the connected operators and tasks. Such a 
communication pattern could be viewed as a kind of multicast: it does not 
broadcast every record, but some records will indeed be sent to multiple 
downstream operators.

However, we found that this kind of communication pattern seemingly cannot be 
implemented correctly if the operators have multiple consumers with different 
parallelism when using a customized partitioner. To solve the above problem, we 
propose to enhance the support for this kind of irregular communication 
pattern. We think there may be two options:

 1. Support a kind of customized operator event, which shares much 
similarity with Watermark, and these events can be broadcast to the 
downstream operators separately.
 2. Let the channel selector support multicast, and also add a separate 
RecordWriter implementation to avoid impacting the performance of channel 
selectors that do not need multicast (a rough sketch of this follows below).
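
As a purely hypothetical sketch of option 2 (the interface below does not 
exist in Flink; the current ChannelSelector returns a single channel index 
per record, which is exactly why multicast is hard to express today), a 
multicast-capable selector might look roughly like this:

interface MulticastChannelSelector<T> {

    /** Called once with the number of output channels of the record writer. */
    void setup(int numberOfChannels);

    /** Returns all channel indexes the record should be written to. */
    int[] selectChannels(T record);

    /** True if the record should go to every channel (pure broadcast). */
    default boolean isBroadcast(T record) {
        return false;
    }
}

A separate RecordWriter implementation would then iterate over the returned 
channels, so that the common single-channel path keeps its current performance.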

The problem and options are detailed in 
https://docs.google.com/document/d/1npi5c_SeP68KuT2lNdKd8G7toGR_lxQCGOnZm_hVMks/edit?usp=sharing

We are also wondering if there are other methods to implement this requirement 
with or without changing the runtime. Many thanks for any feedback!


Best,
Yun



Re: [DISCUSS] FLIP-49: Unified Memory Configuration for TaskExecutors

2019-08-22 Thread Xintong Song
Hi everyone,

I just updated the FLIP document on wiki [1], with the following changes.

   - Removed open question regarding MemorySegment allocation. As
   discussed, we exclude this topic from the scope of this FLIP.
   - Updated content about JVM direct memory parameter according to recent
   discussions, and moved the other options to "Rejected Alternatives" for the
   moment.
   - Added implementation steps.


Thank you~

Xintong Song


[1]
https://cwiki.apache.org/confluence/display/FLINK/FLIP-49%3A+Unified+Memory+Configuration+for+TaskExecutors

On Mon, Aug 19, 2019 at 7:16 PM Stephan Ewen  wrote:

> @Xintong: Concerning "wait for memory users before task dispose and memory
> release": I agree, that's how it should be. Let's try it out.
>
> @Xintong @Jingsong: Concerning " JVM does not wait for GC when allocating
> direct memory buffer": There seems to be pretty elaborate logic to free
> buffers when allocating new ones. See
>
> http://hg.openjdk.java.net/jdk8u/jdk8u-dev/jdk/file/tip/src/share/classes/java/nio/Bits.java#l643
>
> @Till: Maybe. If we assume that the JVM default works (like going with
> option 2 and not setting "-XX:MaxDirectMemorySize" at all), then I think it
> should be okay to set "-XX:MaxDirectMemorySize" to
> "off_heap_managed_memory + direct_memory" even if we use RocksDB. That is a
> big if, though, I honestly have no idea :D Would be good to understand
> this, though, because this would affect option (2) and option (1.2).
>
> On Mon, Aug 19, 2019 at 4:44 PM Xintong Song 
> wrote:
>
> > Thanks for the inputs, Jingsong.
> >
> > Let me try to summarize your points. Please correct me if I'm wrong.
> >
> >- Memory consumers should always avoid returning memory segments to
> >memory manager while there are still un-cleaned structures / threads
> > that
> >may use the memory. Otherwise, it would cause serious problems by
> having
> >multiple consumers trying to use the same memory segment.
> >- JVM does not wait for GC when allocating direct memory buffer.
> >Therefore even we set proper max direct memory size limit, we may
> still
> >encounter direct memory oom if the GC cleaning memory slower than the
> >direct memory allocation.
> >
> > Am I understanding this correctly?
> >
> > Thank you~
> >
> > Xintong Song
> >
> >
> >
> > > On Mon, Aug 19, 2019 at 4:21 PM JingsongLee wrote:
> >
> > > Hi stephan:
> > >
> > > About option 2:
> > >
> > > if additional threads not cleanly shut down before we can exit the
> task:
> > > In the current case of memory reuse, it has freed up the memory it
> > >  uses. If this memory is used by other tasks and asynchronous threads
> > >  of exited task may still be writing, there will be concurrent security
> > >  problems, and even lead to errors in user computing results.
> > >
> > > So I think this is a serious and intolerable bug, No matter what the
> > >  option is, it should be avoided.
> > >
> > > About direct memory cleaned by GC:
> > > I don't think it is a good idea, I've encountered so many situations
> > >  that it's too late for GC to cause DirectMemory OOM. Release and
> > >  allocate DirectMemory depend on the type of user job, which is
> > >  often beyond our control.
> > >
> > > Best,
> > > Jingsong Lee
> > >
> > >
> > > --
> > > From:Stephan Ewen 
> > > Send Time:2019年8月19日(星期一) 15:56
> > > To:dev 
> > > Subject:Re: [DISCUSS] FLIP-49: Unified Memory Configuration for
> > > TaskExecutors
> > >
> > > My main concern with option 2 (manually release memory) is that
> segfaults
> > > in the JVM send off all sorts of alarms on user ends. So we need to
> > > guarantee that this never happens.
> > >
> > > The trickyness is in tasks that uses data structures / algorithms with
> > > additional threads, like hash table spill/read and sorting threads. We
> > need
> > > to ensure that these cleanly shut down before we can exit the task.
> > > I am not sure that we have that guaranteed already, that's why option
> 1.1
> > > seemed simpler to me.
> > >
> > > On Mon, Aug 19, 2019 at 3:42 PM Xintong Song 
> > > wrote:
> > >
> > > > Thanks for the comments, Stephan. Summarized in this way really makes
> > > > things easier to understand.
> > > >
> > > > I'm in favor of option 2, at least for the moment. I think it is not
> > that
> > > > difficult to keep it segfault safe for memory manager, as long as we
> > > always
> > > > de-allocate the memory segment when it is released from the memory
> > > > consumers. Only if the memory consumer continue using the buffer of
> > > memory
> > > > segment after releasing it, in which case we do want the job to fail
> so
> > > we
> > > > detect the memory leak early.
> > > >
> > > > For option 1.2, I don't think this is a good idea. Not only because
> the
> > > > assumption (regular GC is enough to clean direct buffers) may not
> > always
> > > be
> > > > true, but also it makes harder for finding problem

Re: CiBot Update

2019-08-22 Thread Hequn Cheng
Cool, thanks Chesnay a lot for the improvement!

Best, Hequn

On Thu, Aug 22, 2019 at 5:02 PM Zhu Zhu  wrote:

> Thanks Chesnay for the CI improvement!
> It is very helpful.
>
> Thanks,
> Zhu Zhu
>
> zhijiang  于2019年8月22日周四 下午4:18写道:
>
> > It is really very convenient now. Valuable work, Chesnay!
> >
> > Best,
> > Zhijiang
> > --
> > From:Till Rohrmann 
> > Send Time:2019年8月22日(星期四) 10:13
> > To:dev 
> > Subject:Re: CiBot Update
> >
> > Thanks for the continuous work on the CiBot Chesnay!
> >
> > Cheers,
> > Till
> >
> > On Thu, Aug 22, 2019 at 9:47 AM Jark Wu  wrote:
> >
> > > Great work! Thanks Chesnay!
> > >
> > >
> > >
> > > On Thu, 22 Aug 2019 at 15:42, Xintong Song 
> > wrote:
> > >
> > > > The re-triggering travis feature is so convenient. Thanks Chesnay~!
> > > >
> > > > Thank you~
> > > >
> > > > Xintong Song
> > > >
> > > >
> > > >
> > > > On Thu, Aug 22, 2019 at 9:26 AM Stephan Ewen 
> wrote:
> > > >
> > > > > Nice, thanks!
> > > > >
> > > > > On Thu, Aug 22, 2019 at 3:59 AM Zili Chen 
> > > wrote:
> > > > >
> > > > > > Thanks for your announcement. Nice work!
> > > > > >
> > > > > > Best,
> > > > > > tison.
> > > > > >
> > > > > >
> > > > > > vino yang  于2019年8月22日周四 上午8:14写道:
> > > > > >
> > > > > > > +1 for "@flinkbot run travis", it is very convenient.
> > > > > > >
> > > > > > > Chesnay Schepler  于2019年8月21日周三 下午9:12写道:
> > > > > > >
> > > > > > > > Hi everyone,
> > > > > > > >
> > > > > > > > this is an update on recent changes to the CI bot.
> > > > > > > >
> > > > > > > >
> > > > > > > > The bot now cancels builds if a new commit was added to a PR,
> > and
> > > > > > > > cancels all builds if the PR was closed.
> > > > > > > > (This was implemented a while ago; I'm just mentioning it
> again
> > > for
> > > > > > > > discoverability)
> > > > > > > >
> > > > > > > >
> > > > > > > > Additionally, starting today you can now re-trigger a Travis
> > run
> > > by
> > > > > > > > writing a comment "@flinkbot run travis"; this means you no
> > > longer
> > > > > have
> > > > > > > > to commit an empty commit or do other shenanigans to get
> > another
> > > > > build
> > > > > > > > running.
> > > > > > > > Note that this will /not/ work if the PR was re-opened, until
> > at
> > > > > least
> > > > > > 1
> > > > > > > > new build was triggered by a push.
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> >
>


Re: CiBot Update

2019-08-22 Thread Biao Liu
Thanks Chesnay a lot,

I love this feature!

Thanks,
Biao /'bɪ.aʊ/



On Thu, 22 Aug 2019 at 20:55, Hequn Cheng  wrote:

> Cool, thanks Chesnay a lot for the improvement!
>
> Best, Hequn
>
> On Thu, Aug 22, 2019 at 5:02 PM Zhu Zhu  wrote:
>
> > Thanks Chesnay for the CI improvement!
> > It is very helpful.
> >
> > Thanks,
> > Zhu Zhu
> >
> > zhijiang  于2019年8月22日周四 下午4:18写道:
> >
> > > It is really very convenient now. Valuable work, Chesnay!
> > >
> > > Best,
> > > Zhijiang
> > > --
> > > From:Till Rohrmann 
> > > Send Time:2019年8月22日(星期四) 10:13
> > > To:dev 
> > > Subject:Re: CiBot Update
> > >
> > > Thanks for the continuous work on the CiBot Chesnay!
> > >
> > > Cheers,
> > > Till
> > >
> > > On Thu, Aug 22, 2019 at 9:47 AM Jark Wu  wrote:
> > >
> > > > Great work! Thanks Chesnay!
> > > >
> > > >
> > > >
> > > > On Thu, 22 Aug 2019 at 15:42, Xintong Song 
> > > wrote:
> > > >
> > > > > The re-triggering travis feature is so convenient. Thanks Chesnay~!
> > > > >
> > > > > Thank you~
> > > > >
> > > > > Xintong Song
> > > > >
> > > > >
> > > > >
> > > > > On Thu, Aug 22, 2019 at 9:26 AM Stephan Ewen 
> > wrote:
> > > > >
> > > > > > Nice, thanks!
> > > > > >
> > > > > > On Thu, Aug 22, 2019 at 3:59 AM Zili Chen 
> > > > wrote:
> > > > > >
> > > > > > > Thanks for your announcement. Nice work!
> > > > > > >
> > > > > > > Best,
> > > > > > > tison.
> > > > > > >
> > > > > > >
> > > > > > > vino yang  于2019年8月22日周四 上午8:14写道:
> > > > > > >
> > > > > > > > +1 for "@flinkbot run travis", it is very convenient.
> > > > > > > >
> > > > > > > > Chesnay Schepler  于2019年8月21日周三
> 下午9:12写道:
> > > > > > > >
> > > > > > > > > Hi everyone,
> > > > > > > > >
> > > > > > > > > this is an update on recent changes to the CI bot.
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > The bot now cancels builds if a new commit was added to a
> PR,
> > > and
> > > > > > > > > cancels all builds if the PR was closed.
> > > > > > > > > (This was implemented a while ago; I'm just mentioning it
> > again
> > > > for
> > > > > > > > > discoverability)
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > Additionally, starting today you can now re-trigger a
> Travis
> > > run
> > > > by
> > > > > > > > > writing a comment "@flinkbot run travis"; this means you no
> > > > longer
> > > > > > have
> > > > > > > > > to commit an empty commit or do other shenanigans to get
> > > another
> > > > > > build
> > > > > > > > > running.
> > > > > > > > > Note that this will /not/ work if the PR was re-opened,
> until
> > > at
> > > > > > least
> > > > > > > 1
> > > > > > > > > new build was triggered by a push.
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> > >
> >
>


[jira] [Created] (FLINK-13820) Breaking long function argument lists and chained method calls

2019-08-22 Thread Andrey Zagrebin (Jira)
Andrey Zagrebin created FLINK-13820:
---

 Summary: Breaking long function argument lists and chained method 
calls
 Key: FLINK-13820
 URL: https://issues.apache.org/jira/browse/FLINK-13820
 Project: Flink
  Issue Type: Sub-task
  Components: Documentation, Project Website
Reporter: Andrey Zagrebin
Assignee: Andrey Zagrebin


Breaking the line of too long statements (the exact line length limit is yet to be fully 
defined) to improve code readability in case of
 * Long function argument lists (declaration or call): void func(type1 arg1, 
type2 arg2, ...)
 * Long sequence of chained calls: 
list.stream().map(...).reduce(...).collect(...)...

Rules:
 * Break the list of arguments/calls if the line exceeds limit or earlier if 
you believe that the breaking would improve the code readability
 * If you break the line then each argument/call should have a separate line, 
including the first one
 * Each new line argument/call should have one extra indentation relative to 
the line of the parent function name or called entity
 * The opening parenthesis always stays on the line of the parent function name
 * The possible thrown exception list is never broken and stays on the same 
last line
 * The dot of a chained call is always on the line of that chained call 
proceeding the call at the beginning

Examples of breaking:
 * Function arguments

{code:java}
public void func(
    int arg1,
    int arg2,
    ...) throws E1, E2, E3 {
    
}{code}

 * Chained method calls:

{code:java}
values
    .stream()
    .map(...)
    .collect(...);{code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (FLINK-13821) Website must link to License etc

2019-08-22 Thread Sebb (Jira)
Sebb created FLINK-13821:


 Summary: Website must link to License etc
 Key: FLINK-13821
 URL: https://issues.apache.org/jira/browse/FLINK-13821
 Project: Flink
  Issue Type: Bug
Reporter: Sebb


ASF project websites must have certain links:

Apachecon
License
Thanks
Security
Sponsor/Donat

Please see:

https://whimsy.apache.org/site/project/flink



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


Re: [DISCUSS][CODE STYLE] Breaking long function argument lists and chained method calls

2019-08-22 Thread Andrey Zagrebin
FYI PR: https://github.com/apache/flink-web/pull/254

On Thu, Aug 22, 2019 at 10:11 AM Andrey Zagrebin 
wrote:

> Hi Tison,
>
> Regarding the automatic checks.
> Yes, I suggest we conclude the discussion without the automatic checks.
> As soon as we have more ideas/investigation, put into automation, we can
> activate it and/or reconsider.
> Nonetheless, I do not see any problem if we agree on this atm and make it
> part of our code style recommendations.
>
> Regarding putting the right parenthesis on the new line.
> At the moment we do not use this approach in our code base. My personal
> feeling is that it is not so often used in Java.
> Anyways, I think this goes more in the direction of the second follow-up, to
> be discussed separately:
>
>- indentation rules for the broken list of a declared function
>arguments (atm we usually do: one and newline before body or two indents)
>
> or maybe we should rather name it: how to separate function declaration
> and body (as you already mentioned it like this).
>
> We can change this point:
>
>- The closing brace of the function argument list and the possible
>thrown exception list always stay on the line of the last argument
>
> to
>
>- The possible thrown exception list is never broken and stays on the
>same last line
>
> Then we can also adjust it if needed after the discussion about how to
> separate function declaration and body.
>
> Best,
> Andrey
>
>
>
>
>
>
> On Thu, Aug 22, 2019 at 9:05 AM Zili Chen  wrote:
>
>> One more question, what do you differ
>>
>> *public **void func(*
>> *int arg1,*
>> *int arg2,*
>> *...)** throws E1, E2, E3 {*
>> *...*
>> *}*
>>
>> and
>>
>> *public **void func(*
>> *int arg1,*
>> *int arg2,*
>> *...
>> *)** throws E1, E2, E3 {*
>> *...*
>> *}*
>>
>> I prefer the latter because parentheses are aligned in a similar way,
>> as well as the border between declaration and function body is clear.
>>
>>
>> Zili Chen  于2019年8月22日周四 上午9:53写道:
>>
>> > Thanks Andrey for driving the discussion. Just for clarification,
>> > what we conclude here are several guidelines without automatic
>> > checker/tool guard them, right?
>> >
>> > Best,
>> > tison.
>> >
>> >
>> > Andrey Zagrebin  于2019年8月21日周三 下午8:18写道:
>> >
>> >> Hi All,
>> >>
>> >> I suggest we also conclude this discussion now.
>> >>
>> >> Breaking the line of too long statements (line longness is yet to be
>> fully
>> >> defined) to improve code readability in case of
>> >>
>> >>- Long function argument lists (declaration or call): void
>> func(type1
>> >>arg1, type2 arg2, ...)
>> >>- Long sequence of chained calls:
>> >>list.stream().map(...).reduce(...).collect(...)...
>> >>
>> >> Rules:
>> >>
>> >>- Break the list of arguments/calls if the line exceeds limit or
>> >> earlier
>> >>if you believe that the breaking would improve the code readability
>> >>- If you break the line then each argument/call should have a
>> separate
>> >>line, including the first one
>> >>- Each new line argument/call should have one extra indentation
>> >> relative
>> >>to the line of the parent function name or called entity
>> >>- The opening brace always stays on the line of the parent function
>> >> name
>> >>- The closing brace of the function argument list and the possible
>> >>thrown exception list always stay on the line of the last argument
>> >>- The dot of a chained call is always on the line of that chained
>> call
>> >>proceeding the call at the beginning
>> >>
>> >> Examples of breaking:
>> >>
>> >>- Function arguments
>> >>
>> >> *public **void func(*
>> >> *int arg1,*
>> >> *int arg2,*
>> >> *...)** throws E1, E2, E3 {*
>> >> *...*
>> >> *}*
>> >>
>> >>
>> >>- Chained method calls:
>> >>
>> >> *values*
>> >> *.stream()*
>> >> *.map(*...*)*
>> >> *.collect(...);*
>> >>
>> >>
>> >> I suggest we spawn separate discussion threads (can do as a follow-up)
>> >> about:
>> >>
>> >>- the hard line length limit in Java, possibly to confirm it also
>> for
>> >>Scala (cc @Tison)
>> >>- indentation rules for the broken list of a declared function
>> >> arguments
>> >>
>> >> If there are no more comments/objections/concerns, I will open a PR to
>> >> capture the discussion outcome.
>> >>
>> >> Best,
>> >> Andrey
>> >>
>> >>
>> >>
>> >> On Wed, Aug 21, 2019 at 8:57 AM Zili Chen 
>> wrote:
>> >>
>> >> > Implement question: how to apply the line length rules?
>> >> >
>> >> > If we just turn on checkstyle rule "LineLength" then a huge
>> >> > effort is required to break lines those break the rule. If
>> >> > we use an auto-formatter here then it possibly break line
>> >> > "just at the position" awfully.
>> >> >
>> >> > Is it possible we require only to fit the rule on the fly
>> >> > a pull request touch files?
>> >> >
>> >> > Best,
>> >> > tison.
>> >> >
>> >> >
>> >> > Yu Li  于2019年8月20日周二 下午5:22写道:
>> >> >
>> >

[jira] [Created] (FLINK-13822) TableAggregateITCase.testGroupByFlatAggregate test failure: IllegalStateException: Concurrent access to KryoSerializer.

2019-08-22 Thread Alex (Jira)
Alex created FLINK-13822:


 Summary: TableAggregateITCase.testGroupByFlatAggregate test 
failure: IllegalStateException: Concurrent access to KryoSerializer.
 Key: FLINK-13822
 URL: https://issues.apache.org/jira/browse/FLINK-13822
 Project: Flink
  Issue Type: Bug
Reporter: Alex


{code}
[ERROR] 
testGroupByFlatAggregate[StateBackend=HEAP](org.apache.flink.table.planner.runtime.stream.table.TableAggregateITCase)
  Time elapsed: 0.643 s  <<< ERROR!
org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
at 
org.apache.flink.table.planner.runtime.stream.table.TableAggregateITCase.testGroupByFlatAggregate(TableAggregateITCase.scala:59)
Caused by: java.lang.IllegalStateException: Concurrent access to 
KryoSerializer. Thread 1: GroupTableAggregate -> Calc(select=[b AS category, f0 
AS v1, f1 AS v2]) -> SinkConversionToTuple2 (1/4) , Thread 2: 
AsyncOperations-thread-1
{code}

CI log: https://api.travis-ci.com/v3/job/227362742/log.txt



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


Re: [VOTE] Flink Project Bylaws

2019-08-22 Thread jincheng sun
+1

Becket Qin 于2019年8月22日 周四16:22写道:

> Hi All, so far the votes count as following:
>
> +1 (Binding): 13 (Aljoscha, Fabian, Kurt, Till, Timo, Max, Stephan,
> Gordon, Robert, Ufuk, Chesnay, Shaoxuan, Henry)
> +0 (Binding): 1 (Thomas)
>
> +1 (Non-Binding): 10 (Hequn, Vino, Piotr, Dawid, Xintong, Yu, Jingsong,
> Yun, Jark, Biao)
>
> Given that more than 6 days have passed and there are not sufficient +1s
> to pass the vote, I am reaching out to the binding voters that have not
> voted yet here.
>
>
> @Greg, @Gyula, @Kostas, @Alan, @jincheng, @Marton, @Sebastian, @Vasiliki, 
> @Daniel
>
> Would you have time to check the Flink bylaws proposal and vote on it? The
> bylaws wiki is following:
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=120731026
>
> We are following the 2/3 majority voting process. This is the first
> attempt of reaching out to the PMCs that have not voted yet. We will make
> another attempt after 7 days if the result of the vote is still not
> determined by then.
>
> Also CCing private@ in case one did not setup the Apache email forwarding.
>
> Thanks,
>
> Jiangjie (Becket) Qin
>
>
>
> On Thu, Aug 22, 2019 at 8:03 AM Henry Saputra 
> wrote:
>
>> Oh yeah,  +1 LGTM
>>
>> Thanks for working on this.
>>
>> - Henry
>>
>> On Tue, Aug 20, 2019 at 2:17 AM Becket Qin  wrote:
>>
>> > Thanks for sharing your thoughts, Thomas, Henry and Stephan. I also
>> think
>> > the committers are supposed to be mature enough to know when a review on
>> > their own patch is needed.
>> >
>> > @Henry, just want to confirm, are you +1 on the proposed bylaws?
>> >
>> > Thanks,
>> >
>> > Jiangjie (Becket) Qin
>> >
>> > On Tue, Aug 20, 2019 at 10:54 AM Stephan Ewen  wrote:
>> >
>> > > I see it somewhat similar to Henry.
>> > >
>> > > Generally, all committers should go for a review by another committer,
>> > > unless it is a trivial comment or style fix. I personally do that,
>> even
>> > > though being one of the committers that have been with the project
>> > longest.
>> > >
>> > > For now, I was hoping though that we have a mature enough community
>> that
>> > > this "soft rule" is enough. Whenever possible, working based on trust
>> > with
>> > > soft processes beats working with hard processes. We can still revisit
>> > this
>> > > in case we see that it does not work out.
>> > >
>> > >
>> > > On Mon, Aug 19, 2019 at 10:21 PM Henry Saputra <
>> henry.sapu...@gmail.com>
>> > > wrote:
>> > >
>> > > > One of the perks of being committers is be able to commit code
>> without
>> > > > asking from another committer. Having said that, I think we rely on
>> > > > maturity of the committers to know when to ask for reviews and when
>> to
>> > > > commit directly.
>> > > >
>> > > > For example, if someone just change typos on comments or simple
>> rename
>> > of
>> > > > internal variables, I think we could trust the committer to safely
>> > commit
>> > > > the changes. When the changes will have effect of changing or
>> introduce
>> > > new
>> > > > flows of the code, that's when reviews are needed and strongly
>> > > encouraged.
>> > > > I think the balance is needed for this.
>> > > >
>> > > > PMCs have the ability and right to revert changes in source repo as
>> > > > necessary.
>> > > >
>> > > > - Henry
>> > > >
>> > > > On Sun, Aug 18, 2019 at 9:23 PM Thomas Weise 
>> wrote:
>> > > >
>> > > > > +0 (binding)
>> > > > >
>> > > > > I don't think committers should be allowed to approve their own
>> > > changes.
>> > > > I
>> > > > > would prefer if non-committer contributors can approve committer
>> PRs
>> > as
>> > > > > that would encourage more participation in code review and
>> ability to
>> > > > > contribute.
>> > > > >
>> > > > >
>> > > > > On Fri, Aug 16, 2019 at 9:02 PM Shaoxuan Wang <
>> wshaox...@gmail.com>
>> > > > wrote:
>> > > > >
>> > > > > > +1 (binding)
>> > > > > >
>> > > > > > On Fri, Aug 16, 2019 at 7:48 PM Chesnay Schepler <
>> > ches...@apache.org
>> > > >
>> > > > > > wrote:
>> > > > > >
>> > > > > > > +1 (binding)
>> > > > > > >
>> > > > > > > Although I think it would be a good idea to always cc
>> > > > > > > priv...@flink.apache.org when modifying bylaws, if anything
>> to
>> > > speed
>> > > > > up
>> > > > > > > the voting process.
>> > > > > > >
>> > > > > > > On 16/08/2019 11:26, Ufuk Celebi wrote:
>> > > > > > > > +1 (binding)
>> > > > > > > >
>> > > > > > > > – Ufuk
>> > > > > > > >
>> > > > > > > >
>> > > > > > > > On Wed, Aug 14, 2019 at 4:50 AM Biao Liu <
>> mmyy1...@gmail.com>
>> > > > wrote:
>> > > > > > > >
>> > > > > > > >> +1 (non-binding)
>> > > > > > > >>
>> > > > > > > >> Thanks for pushing this!
>> > > > > > > >>
>> > > > > > > >> Thanks,
>> > > > > > > >> Biao /'bɪ.aʊ/
>> > > > > > > >>
>> > > > > > > >>
>> > > > > > > >>
>> > > > > > > >> On Wed, 14 Aug 2019 at 09:37, Jark Wu 
>> > wrote:
>> > > > > > > >>
>> > > > > > > >>> +1 (non-binding)
>> > > > > > > >>>
>> > > > > > > >>> Best,
>> > > > > > > >>> Jark
>> > > > > > > >>>
>> > > > > > > >>> On Wed, 14 

Re: [DISCUSS] Flink Python User-Defined Function for Table API

2019-08-22 Thread jincheng sun
Hi all,

Thanks a lot for your feedback. If there are no more suggestions and
comments, I think it's better to initiate a vote to create a FLIP for
Apache Flink Python UDFs.
What do you think?

Best, Jincheng

jincheng sun  于2019年8月15日周四 上午12:54写道:

> Hi Thomas,
>
> Thanks for your confirmation and the very important reminder about bundle
> processing.
>
> I have had add the description about how to perform bundle processing from
> the perspective of checkpoint and watermark. Feel free to leave comments if
> there are anything not describe clearly.
>
> Best,
> Jincheng
>
>
> Dian Fu  于2019年8月14日周三 上午10:08写道:
>
>> Hi Thomas,
>>
>> Thanks a lot the suggestions.
>>
>> Regarding to bundle processing, there is a section "Checkpoint"[1] in the
>> design doc which talks about how to handle the checkpoint.
>> However, I think you are right that we should talk more about it, such as
>> what's bundle processing, how it affects the checkpoint and watermark, how
>> to handle the checkpoint and watermark, etc.
>>
>> [1]
>> https://docs.google.com/document/d/1WpTyCXAQh8Jr2yWfz7MWCD2-lou05QaQFb810ZvTefY/edit#heading=h.urladt565yo3
>> <
>> https://docs.google.com/document/d/1WpTyCXAQh8Jr2yWfz7MWCD2-lou05QaQFb810ZvTefY/edit#heading=h.urladt565yo3
>> >
>>
>> Regards,
>> Dian
>>
>> > 在 2019年8月14日,上午1:01,Thomas Weise  写道:
>> >
>> > Hi Jincheng,
>> >
>> > Thanks for putting this together. The proposal is very detailed,
>> thorough
>> > and for me as a Beam Flink runner contributor easy to understand :)
>> >
>> > One thing that you should probably detail more is the bundle
>> processing. It
>> > is critically important for performance that multiple elements are
>> > processed in a bundle. The default bundle size in the Flink runner is
>> 1s or
>> > 1000 elements, whichever comes first. And for streaming, you can find
>> the
>> > logic necessary to align the bundle processing with watermarks and
>> > checkpointing here:
>> >
>> https://github.com/apache/beam/blob/release-2.14.0/runners/flink/src/main/java/org/apache/beam/runners/flink/translation/wrappers/streaming/ExecutableStageDoFnOperator.java
>> >
>> > Thomas
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> > On Tue, Aug 13, 2019 at 7:05 AM jincheng sun 
>> > wrote:
>> >
>> >> Hi all,
>> >>
>> >> The Python Table API(without Python UDF support) has already been
>> supported
>> >> and will be available in the coming release 1.9.
>> >> As Python UDF is very important for Python users, we'd like to start
>> the
>> >> discussion about the Python UDF support in the Python Table API.
>> >> Aljoscha Krettek, Dian Fu and I have discussed offline and have
>> drafted a
>> >> design doc[1]. It includes the following items:
>> >>
>> >> - The user-defined function interfaces.
>> >> - The user-defined function execution architecture.
>> >>
>> >> As mentioned by many guys in the previous discussion thread[2], a
>> >> portability framework was introduced in Apache Beam in latest
>> releases. It
>> >> provides well-defined, language-neutral data structures and protocols
>> for
>> >> language-neutral user-defined function execution. This design is based
>> on
>> >> Beam's portability framework. We will introduce how to make use of
>> Beam's
>> >> portability framework for user-defined function execution: data
>> >> transmission, state access, checkpoint, metrics, logging, etc.
>> >>
>> >> Considering that the design relies on Beam's portability framework for
>> >> Python user-defined function execution and not all the contributors in
>> >> Flink community are familiar with Beam's portability framework, we have
>> >> done a prototype[3] for proof of concept and also ease of
>> understanding of
>> >> the design.
>> >>
>> >> Welcome any feedback.
>> >>
>> >> Best,
>> >> Jincheng
>> >>
>> >> [1]
>> >>
>> >>
>> https://docs.google.com/document/d/1WpTyCXAQh8Jr2yWfz7MWCD2-lou05QaQFb810ZvTefY/edit?usp=sharing
>> >> [2]
>> >>
>> >>
>> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-38-Support-python-language-in-flink-TableAPI-td28061.html
>> >> [3] https://github.com/dianfu/flink/commits/udf_poc
>> >>
>>
>>


Re: [DISCUSS] Flink project bylaws

2019-08-22 Thread Becket Qin
Thanks for collecting the ideas for Bylaws changes. It is a good idea!

Jiangjie (Becket) Qin

On Thu, Aug 22, 2019 at 12:11 PM Robert Metzger  wrote:

> I have started a Wiki page (editable by all) for collecting ideas for
> Bylaws changes, so that we can batch changes together and then vote on
> them:
> https://cwiki.apache.org/confluence/display/FLINK/Ideas+for+Bylaw+changes
>
> On Tue, Aug 13, 2019 at 1:41 PM Becket Qin  wrote:
>
> > Hi Robert,
> >
> > Thanks for help apply the changes. I agree we should freeze the change to
> > the bylaws page starting from now. For this particular addition of
> > clarification, I'll send a notice in the voting thread to let who have
> > already voted to know.
> >
> > Thanks,
> >
> > Jiangjie (Becket) Qin
> >
> > On Tue, Aug 13, 2019 at 1:29 PM Robert Metzger 
> > wrote:
> >
> > > Hi Becket,
> > > I've applied the proposed change to the document:
> > >
> > >
> >
> https://cwiki.apache.org/confluence/pages/diffpagesbyversion.action?pageId=120731026&selectedPageVersions=20&selectedPageVersions=19
> > >
> > > I would propose to stop accepting changes to the document now, as it
> > might
> > > start a discussion about the validity of the vote and the bylaws
> itself.
> > > Changes to the document require a 2/3 majority.
> > >
> > >
> > > On Tue, Aug 13, 2019 at 12:20 PM Maximilian Michels 
> > > wrote:
> > >
> > > > Hi Becket,
> > > >
> > > > Thanks for clarifying and updating the draft. The changes look good
> to
> > > me.
> > > >
> > > > I don't feel strong about a 2/3 majority in case of committer/PMC
> > > > removal. Like you pointed out, both provide a significant hurdle due
> to
> > > > possible vetos or a 2/3 majority.
> > > >
> > > > Thanks,
> > > > Max
> > > >
> > > > On 13.08.19 10:36, Becket Qin wrote:
> > > > > Piotr just reminded me that there was a previous suggestion to
> > clarify
> > > a
> > > > > committer's responsibility when commit his/her own patch. So I'd
> like
> > > to
> > > > > incorporate that in the bylaws. The additional clarification is
> > > following
> > > > > in bold and italic font.
> > > > >
> > > > > one +1 from a committer followed by a Lazy approval (not counting
> the
> > > > vote
> > > > >> of the contributor), moving to lazy majority if a -1 is received.
> > > > >>
> > > > >
> > > > >
> > > > > Note that this implies that committers can +1 their own commits and
> > > merge
> > > > >> right away. *However, the committe**rs should use their best
> > judgement
> > > > to
> > > > >> respect the components expertise and ongoing development plan.*
> > > > >
> > > > >
> > > > > This does not really change any of the existing bylaws, just about
> > > > > clarification.
> > > > >
> > > > > If there is no objection to this additional clarification, after
> the
> > > > bylaws
> > > > > wiki is updated, I'll send an update notice to the voting thread to
> > > > inform
> > > > > those who already voted about this addition.
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Jiangjie (Becket) Qin
> > > > >
> > > > > On Mon, Aug 12, 2019 at 11:19 AM Becket Qin 
> > > > wrote:
> > > > >
> > > > >> Hi Maximillian,
> > > > >>
> > > > >> Thanks for the feedback. Please see the reply below:
> > > > >>
> > > > >> Step 2 should include a personal email to the PMC members in
> > question.
> > > > >>
> > > > >> I'm afraid reminders inside the vote thread could be overlooked
> > > easily.
> > > > >>
> > > > >>
> > > > >> This is exactly what I meant to say by "reach out" :) I just made
> it
> > > > more
> > > > >> explicit.
> > > > >>
> > > > >> The way the terms are described in the draft, the consensus is
> > "lazy",
> > > > >>> i.e. requires only 3 binding votes. I'd suggest renaming it to
> > "Lazy
> > > > >>> Consensus". This is in line with the other definitions such as
> > "Lazy
> > > > >>> Majority".
> > > > >>
> > > > >> It was initially called "lazy consensus", but Robert pointed out
> > that
> > > > >> "lazy consensus" actually means something different in ASF term
> [1].
> > > > >> Here "lazy" pretty much means "assume everyone is +1 unless
> someone
> > > says
> > > > >> otherwise". This means any vote that requires a minimum number of
> +1
> > > is
> > > > not
> > > > >> really a "lazy" vote.
> > > > >>
> > > > >> Removing a committer / PMC member only requires 3 binding votes.
> I'd
> > > > >>> expect an important action like this to require a 2/3 majority.
> > > > >>
> > > > >> Personally I think consensus is good enough here. PMC members can
> > > cast a
> > > > >> veto if they disagree about the removal. In some sense, it is more
> > > > >> difficult than with 2/3 majority to remove a committer / PMC
> member.
> > > > Also,
> > > > >> it might be a hard decision for some PMC members if they have
> never
> > > > worked
> > > > >> with the person in question. That said, I am OK to change it to
> 2/3
> > > > >> majority as this will happen very rarely.
> > > > >>
> > > > >> Thanks,
> > > > >>
> > > > >> Jiangjie (Becket) Qin
> > > > >>
> > > > >>

Re: CiBot Update

2019-08-22 Thread Ethan Li
Hi Chesnay,

This is a really nice feature!

Can I ask how this is implemented? Do you have the related Jira/PR/docs that I 
can take a look at? I’d like to introduce it to another project if applicable. 
Thank you very much!

Best,
Ethan
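
(For illustration only, and explicitly not the actual flinkbot implementation: 
a comment-triggered rebuild like "@flinkbot run travis" can be built from a 
GitHub "issue_comment" webhook plus a call to the CI system's REST API. The 
Travis endpoint, payload and authentication below are assumptions.)

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CommentTriggeredRebuild {

    private static final String TRIGGER_COMMAND = "@flinkbot run travis";

    private final HttpClient http = HttpClient.newHttpClient();
    private final String travisToken;

    public CommentTriggeredRebuild(String travisToken) {
        this.travisToken = travisToken;
    }

    /** Called for every GitHub "issue_comment" webhook event on a pull request. */
    public void onIssueComment(String commentBody, int pullRequestNumber) throws Exception {
        if (!TRIGGER_COMMAND.equals(commentBody.trim())) {
            return; // not a bot command, ignore
        }
        // Assumed Travis API v3 call that requests a new build for the PR branch.
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("https://api.travis-ci.com/repo/apache%2Fflink/requests"))
            .header("Travis-API-Version", "3")
            .header("Authorization", "token " + travisToken)
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(
                "{\"request\": {\"branch\": \"pull/" + pullRequestNumber + "\"}}"))
            .build();
        http.send(request, HttpResponse.BodyHandlers.ofString());
    }
}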

> On Aug 22, 2019, at 8:34 AM, Biao Liu  wrote:
> 
> Thanks Chesnay a lot,
> 
> I love this feature!
> 
> Thanks,
> Biao /'bɪ.aʊ/
> 
> 
> 
> On Thu, 22 Aug 2019 at 20:55, Hequn Cheng  wrote:
> 
>> Cool, thanks Chesnay a lot for the improvement!
>> 
>> Best, Hequn
>> 
>> On Thu, Aug 22, 2019 at 5:02 PM Zhu Zhu  wrote:
>> 
>>> Thanks Chesnay for the CI improvement!
>>> It is very helpful.
>>> 
>>> Thanks,
>>> Zhu Zhu
>>> 
>>> zhijiang  于2019年8月22日周四 下午4:18写道:
>>> 
 It is really very convenient now. Valuable work, Chesnay!
 
 Best,
 Zhijiang
 --
 From:Till Rohrmann 
 Send Time:2019年8月22日(星期四) 10:13
 To:dev 
 Subject:Re: CiBot Update
 
 Thanks for the continuous work on the CiBot Chesnay!
 
 Cheers,
 Till
 
 On Thu, Aug 22, 2019 at 9:47 AM Jark Wu  wrote:
 
> Great work! Thanks Chesnay!
> 
> 
> 
> On Thu, 22 Aug 2019 at 15:42, Xintong Song 
 wrote:
> 
>> The re-triggering travis feature is so convenient. Thanks Chesnay~!
>> 
>> Thank you~
>> 
>> Xintong Song
>> 
>> 
>> 
>> On Thu, Aug 22, 2019 at 9:26 AM Stephan Ewen 
>>> wrote:
>> 
>>> Nice, thanks!
>>> 
>>> On Thu, Aug 22, 2019 at 3:59 AM Zili Chen 
> wrote:
>>> 
 Thanks for your announcement. Nice work!
 
 Best,
 tison.
 
 
 vino yang  于2019年8月22日周四 上午8:14写道:
 
> +1 for "@flinkbot run travis", it is very convenient.
> 
> Chesnay Schepler  于2019年8月21日周三
>> 下午9:12写道:
> 
>> Hi everyone,
>> 
>> this is an update on recent changes to the CI bot.
>> 
>> 
>> The bot now cancels builds if a new commit was added to a
>> PR,
 and
>> cancels all builds if the PR was closed.
>> (This was implemented a while ago; I'm just mentioning it
>>> again
> for
>> discoverability)
>> 
>> 
>> Additionally, starting today you can now re-trigger a
>> Travis
 run
> by
>> writing a comment "@flinkbot run travis"; this means you no
> longer
>>> have
>> to commit an empty commit or do other shenanigans to get
 another
>>> build
>> running.
>> Note that this will /not/ work if the PR was re-opened,
>> until
 at
>>> least
 1
>> new build was triggered by a push.
>> 
> 
 
>>> 
>> 
> 
 
 
>>> 
>> 



Re: CiBot Update

2019-08-22 Thread Ethan Li
My question is specifically about the implementation of "@flinkbot run travis".

> On Aug 22, 2019, at 1:06 PM, Ethan Li  wrote:
> 
> Hi Chesnay,
> 
> This is really nice feature!
> 
> Can I ask how is this implemented? Do you have the related Jira/PR/docs that 
> I can take a look? I’d like to introduce it to another project if applicable. 
> Thank you very much!
> 
> Best,
> Ethan
> 
>> On Aug 22, 2019, at 8:34 AM, Biao Liu > > wrote:
>> 
>> Thanks Chesnay a lot,
>> 
>> I love this feature!
>> 
>> Thanks,
>> Biao /'bɪ.aʊ/
>> 
>> 
>> 
>> On Thu, 22 Aug 2019 at 20:55, Hequn Cheng > > wrote:
>> 
>>> Cool, thanks Chesnay a lot for the improvement!
>>> 
>>> Best, Hequn
>>> 
>>> On Thu, Aug 22, 2019 at 5:02 PM Zhu Zhu >> > wrote:
>>> 
 Thanks Chesnay for the CI improvement!
 It is very helpful.
 
 Thanks,
 Zhu Zhu
 
 zhijiang >>> > 于2019年8月22日周四 下午4:18写道:
 
> It is really very convenient now. Valuable work, Chesnay!
> 
> Best,
> Zhijiang
> --
> From:Till Rohrmann mailto:trohrm...@apache.org>>
> Send Time:2019年8月22日(星期四) 10:13
> To:dev mailto:dev@flink.apache.org>>
> Subject:Re: CiBot Update
> 
> Thanks for the continuous work on the CiBot Chesnay!
> 
> Cheers,
> Till
> 
> On Thu, Aug 22, 2019 at 9:47 AM Jark Wu  > wrote:
> 
>> Great work! Thanks Chesnay!
>> 
>> 
>> 
>> On Thu, 22 Aug 2019 at 15:42, Xintong Song > >
> wrote:
>> 
>>> The re-triggering travis feature is so convenient. Thanks Chesnay~!
>>> 
>>> Thank you~
>>> 
>>> Xintong Song
>>> 
>>> 
>>> 
>>> On Thu, Aug 22, 2019 at 9:26 AM Stephan Ewen >> >
 wrote:
>>> 
 Nice, thanks!
 
 On Thu, Aug 22, 2019 at 3:59 AM Zili Chen >>> >
>> wrote:
 
> Thanks for your announcement. Nice work!
> 
> Best,
> tison.
> 
> 
> vino yang mailto:yanghua1...@gmail.com>> 
> 于2019年8月22日周四 上午8:14写道:
> 
>> +1 for "@flinkbot run travis", it is very convenient.
>> 
>> Chesnay Schepler mailto:ches...@apache.org>> 
>> 于2019年8月21日周三
>>> 下午9:12写道:
>> 
>>> Hi everyone,
>>> 
>>> this is an update on recent changes to the CI bot.
>>> 
>>> 
>>> The bot now cancels builds if a new commit was added to a
>>> PR,
> and
>>> cancels all builds if the PR was closed.
>>> (This was implemented a while ago; I'm just mentioning it
 again
>> for
>>> discoverability)
>>> 
>>> 
>>> Additionally, starting today you can now re-trigger a
>>> Travis
> run
>> by
>>> writing a comment "@flinkbot run travis"; this means you no
>> longer
 have
>>> to commit an empty commit or do other shenanigans to get
> another
 build
>>> running.
>>> Note that this will /not/ work if the PR was re-opened,
>>> until
> at
 least
> 1
>>> new build was triggered by a push.
>>> 
>> 
> 
 
>>> 
>> 
> 
> 
 
>>> 
> 



Re: CiBot Update

2019-08-22 Thread Dian Fu
Thanks Chesnay for your great work! A very useful feature! 

Just one minor suggestion: it would be better if we could add this command to 
the section "Bot commands" in the flinkbot template.

Regards,
Dian

> 在 2019年8月23日,上午2:06,Ethan Li  写道:
> 
> My question is specifically about implementation of "@flinkbot run travis"
> 
>> On Aug 22, 2019, at 1:06 PM, Ethan Li  wrote:
>> 
>> Hi Chesnay,
>> 
>> This is really nice feature!
>> 
>> Can I ask how is this implemented? Do you have the related Jira/PR/docs that 
>> I can take a look? I’d like to introduce it to another project if 
>> applicable. Thank you very much!
>> 
>> Best,
>> Ethan
>> 
>>> On Aug 22, 2019, at 8:34 AM, Biao Liu >> > wrote:
>>> 
>>> Thanks Chesnay a lot,
>>> 
>>> I love this feature!
>>> 
>>> Thanks,
>>> Biao /'bɪ.aʊ/
>>> 
>>> 
>>> 
>>> On Thu, 22 Aug 2019 at 20:55, Hequn Cheng >> > wrote:
>>> 
 Cool, thanks Chesnay a lot for the improvement!
 
 Best, Hequn
 
 On Thu, Aug 22, 2019 at 5:02 PM Zhu Zhu >>> > wrote:
 
> Thanks Chesnay for the CI improvement!
> It is very helpful.
> 
> Thanks,
> Zhu Zhu
> 
> zhijiang  > 于2019年8月22日周四 下午4:18写道:
> 
>> It is really very convenient now. Valuable work, Chesnay!
>> 
>> Best,
>> Zhijiang
>> --
>> From:Till Rohrmann mailto:trohrm...@apache.org>>
>> Send Time:2019年8月22日(星期四) 10:13
>> To:dev mailto:dev@flink.apache.org>>
>> Subject:Re: CiBot Update
>> 
>> Thanks for the continuous work on the CiBot Chesnay!
>> 
>> Cheers,
>> Till
>> 
>> On Thu, Aug 22, 2019 at 9:47 AM Jark Wu > > wrote:
>> 
>>> Great work! Thanks Chesnay!
>>> 
>>> 
>>> 
>>> On Thu, 22 Aug 2019 at 15:42, Xintong Song >> >
>> wrote:
>>> 
 The re-triggering travis feature is so convenient. Thanks Chesnay~!
 
 Thank you~
 
 Xintong Song
 
 
 
 On Thu, Aug 22, 2019 at 9:26 AM Stephan Ewen >>> >
> wrote:
 
> Nice, thanks!
> 
> On Thu, Aug 22, 2019 at 3:59 AM Zili Chen  >
>>> wrote:
> 
>> Thanks for your announcement. Nice work!
>> 
>> Best,
>> tison.
>> 
>> 
>> vino yang mailto:yanghua1...@gmail.com>> 
>> 于2019年8月22日周四 上午8:14写道:
>> 
>>> +1 for "@flinkbot run travis", it is very convenient.
>>> 
>>> Chesnay Schepler mailto:ches...@apache.org>> 
>>> 于2019年8月21日周三
 下午9:12写道:
>>> 
 Hi everyone,
 
 this is an update on recent changes to the CI bot.
 
 
 The bot now cancels builds if a new commit was added to a
 PR,
>> and
 cancels all builds if the PR was closed.
 (This was implemented a while ago; I'm just mentioning it
> again
>>> for
 discoverability)
 
 
 Additionally, starting today you can now re-trigger a
 Travis
>> run
>>> by
 writing a comment "@flinkbot run travis"; this means you no
>>> longer
> have
 to commit an empty commit or do other shenanigans to get
>> another
> build
 running.
 Note that this will /not/ work if the PR was re-opened,
 until
>> at
> least
>> 1
 new build was triggered by a push.
 
>>> 
>> 
> 
 
>>> 
>> 
>> 
> 
 
>> 
> 



Re: [DISCUSS] Flink Python User-Defined Function for Table API

2019-08-22 Thread Dian Fu
Hi Jincheng,

+1 to start the FLIP creation and VOTE on this feature. I'm willing to help on 
the FLIP creation if you don't mind. As I haven't created a FLIP before, it would 
be great if you could help on this. :)

Regards,
Dian

> 在 2019年8月22日,下午11:41,jincheng sun  写道:
> 
> Hi all,
> 
> Thanks a lot for your feedback. If there are no more suggestions and
> comments, I think it's better to  initiate a vote to create a FLIP for
> Apache Flink Python UDFs.
> What do you think?
> 
> Best, Jincheng
> 
> jincheng sun  于2019年8月15日周四 上午12:54写道:
> 
>> Hi Thomas,
>> 
>> Thanks for your confirmation and the very important reminder about bundle
>> processing.
>> 
>> I have had add the description about how to perform bundle processing from
>> the perspective of checkpoint and watermark. Feel free to leave comments if
>> there are anything not describe clearly.
>> 
>> Best,
>> Jincheng
>> 
>> 
>> Dian Fu  于2019年8月14日周三 上午10:08写道:
>> 
>>> Hi Thomas,
>>> 
>>> Thanks a lot the suggestions.
>>> 
>>> Regarding to bundle processing, there is a section "Checkpoint"[1] in the
>>> design doc which talks about how to handle the checkpoint.
>>> However, I think you are right that we should talk more about it, such as
>>> what's bundle processing, how it affects the checkpoint and watermark, how
>>> to handle the checkpoint and watermark, etc.
>>> 
>>> [1]
>>> https://docs.google.com/document/d/1WpTyCXAQh8Jr2yWfz7MWCD2-lou05QaQFb810ZvTefY/edit#heading=h.urladt565yo3
>>> <
>>> https://docs.google.com/document/d/1WpTyCXAQh8Jr2yWfz7MWCD2-lou05QaQFb810ZvTefY/edit#heading=h.urladt565yo3
 
>>> 
>>> Regards,
>>> Dian
>>> 
 在 2019年8月14日,上午1:01,Thomas Weise  写道:
 
 Hi Jincheng,
 
 Thanks for putting this together. The proposal is very detailed,
>>> thorough
 and for me as a Beam Flink runner contributor easy to understand :)
 
 One thing that you should probably detail more is the bundle
>>> processing. It
 is critically important for performance that multiple elements are
 processed in a bundle. The default bundle size in the Flink runner is
>>> 1s or
 1000 elements, whichever comes first. And for streaming, you can find
>>> the
 logic necessary to align the bundle processing with watermarks and
 checkpointing here:
 
>>> https://github.com/apache/beam/blob/release-2.14.0/runners/flink/src/main/java/org/apache/beam/runners/flink/translation/wrappers/streaming/ExecutableStageDoFnOperator.java
 
 Thomas
 
 
 
 
 
 
 
 On Tue, Aug 13, 2019 at 7:05 AM jincheng sun 
 wrote:
 
> Hi all,
> 
> The Python Table API(without Python UDF support) has already been
>>> supported
> and will be available in the coming release 1.9.
> As Python UDF is very important for Python users, we'd like to start
>>> the
> discussion about the Python UDF support in the Python Table API.
> Aljoscha Krettek, Dian Fu and I have discussed offline and have
>>> drafted a
> design doc[1]. It includes the following items:
> 
> - The user-defined function interfaces.
> - The user-defined function execution architecture.
> 
> As mentioned by many guys in the previous discussion thread[2], a
> portability framework was introduced in Apache Beam in latest
>>> releases. It
> provides well-defined, language-neutral data structures and protocols
>>> for
> language-neutral user-defined function execution. This design is based
>>> on
> Beam's portability framework. We will introduce how to make use of
>>> Beam's
> portability framework for user-defined function execution: data
> transmission, state access, checkpoint, metrics, logging, etc.
> 
> Considering that the design relies on Beam's portability framework for
> Python user-defined function execution and not all the contributors in
> Flink community are familiar with Beam's portability framework, we have
> done a prototype[3] for proof of concept and also ease of
>>> understanding of
> the design.
> 
> Welcome any feedback.
> 
> Best,
> Jincheng
> 
> [1]
> 
> 
>>> https://docs.google.com/document/d/1WpTyCXAQh8Jr2yWfz7MWCD2-lou05QaQFb810ZvTefY/edit?usp=sharing
> [2]
> 
> 
>>> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-38-Support-python-language-in-flink-TableAPI-td28061.html
> [3] https://github.com/dianfu/flink/commits/udf_poc
> 
>>> 
>>> 



Re: [ANNOUNCE] Apache Flink 1.9.0 released

2019-08-22 Thread Dian Fu
Great news! Thanks Gordon and Kurt for pushing this forward and everybody who 
contributed to this release.

Regards,
Dian

> 在 2019年8月23日,上午9:41,Guowei Ma  写道:
> 
> Congratulations!!
> Best,
> Guowei
> 
> 
> Congxian Qiu mailto:qcx978132...@gmail.com>> 
> 于2019年8月23日周五 上午9:32写道:
> Congratulations, and thanks for everyone who make this release possible.
> Best,
> Congxian
> 
> 
> Kurt Young mailto:ykt...@gmail.com>> 于2019年8月23日周五 
> 上午8:13写道:
> Great to hear! Thanks Gordon for driving the release,  and it's been a great 
> pleasure to work with you as release managers for the last couple of weeks. 
> And thanks everyone who contributed to this version, you're making Flink an 
> even better project!
> 
> Best,
> Kurt 
> 
> Yun Tang mailto:myas...@live.com>>于2019年8月23日 周五02:17写道:
> Glad to hear this and really appreciate Gordon and Kurt's drive on this 
> release, and thanks for everyone who ever contributed to this release.
> 
> Best
> Yun Tang
> From: Becket Qin mailto:becket@gmail.com>>
> Sent: Friday, August 23, 2019 0:19
> To: 不常用邮箱 mailto:xu_soft39211...@163.com>>
> Cc: Yang Wang mailto:danrtsey...@gmail.com>>; user 
> mailto:u...@flink.apache.org>>
> Subject: Re: [ANNOUNCE] Apache Flink 1.9.0 released
>  
> Cheers!! Thanks Gordon and Kurt for driving the release!
> 
> On Thu, Aug 22, 2019 at 5:36 PM 不常用邮箱  > wrote:
> Good news!
> 
> Best.
> -- 
> Louis
> Email: xu_soft39211...@163.com 
> 
>> On Aug 22, 2019, at 22:10, Yang Wang > > wrote:
>> 
>> Glad to hear that.
>> Thanks Gordon, Kurt and everyone who had made contributions to the great 
>> version.
>> 
>> 
>> Best,
>> Yang
>> 
>> 
>> Biao Liu mailto:mmyy1...@gmail.com>> 于2019年8月22日周四 
>> 下午9:33写道:
>> Great news!
>> 
>> Thank your Gordon & Kurt for being the release managers!
>> Thanks all contributors worked on this release!
>> 
>> Thanks,
>> Biao /'bɪ.aʊ/
>> 
>> 
>> 
>> On Thu, 22 Aug 2019 at 21:14, Paul Lam > > wrote:
>> Well done! Thanks to everyone who contributed to the release!
>> 
>> Best,
>> Paul Lam
>> 
>> Yu Li mailto:car...@gmail.com>> 于2019年8月22日周四 下午9:03写道:
>> Thanks for the update Gordon, and congratulations!
>> 
>> Great thanks to all for making this release possible, especially to our 
>> release managers!
>> 
>> Best Regards,
>> Yu
>> 
>> 
>> On Thu, 22 Aug 2019 at 14:55, Xintong Song > > wrote:
>> Congratulations!
>> Thanks Gordon and Kurt for being the release managers, and thanks all the 
>> contributors.
>> 
>> Thank you~
>> Xintong Song
>> 
>> 
>> On Thu, Aug 22, 2019 at 2:39 PM Yun Gao > > wrote:
>>  Congratulations ! 
>> 
>>  Very thanks for Gordon and Kurt for managing the release and very 
>> thanks for everyone for the contributions !
>> 
>>   Best, 
>>   Yun 
>> 
>> 
>> 
>> --
>> From:Zhu Zhu mailto:reed...@gmail.com>>
>> Send Time:2019 Aug. 22 (Thu.) 20:18
>> To:Eliza mailto:e...@chinabuckets.com>>
>> Cc:user mailto:u...@flink.apache.org>>
>> Subject:Re: [ANNOUNCE] Apache Flink 1.9.0 released
>> 
>> Thanks Gordon for the update.
>> Congratulations that we have Flink 1.9.0 released!
>> Thanks to all the contributors.
>> 
>> Thanks,
>> Zhu Zhu
>> 
>> 
>> Eliza mailto:e...@chinabuckets.com>> 于2019年8月22日周四 
>> 下午8:10写道:
>> 
>> 
>> On 2019/8/22 星期四 下午 8:03, Tzu-Li (Gordon) Tai wrote:
>> > The Apache Flink community is very happy to announce the release of 
>> > Apache Flink 1.9.0, which is the latest major release.
>> 
>> Congratulations and thanks~
>> 
>> regards.
>> 
> 
> -- 
> Best,
> Kurt



Re: [DISCUSS] Flink client api enhancement for downstream project

2019-08-22 Thread Zili Chen
Hi Yang,

It would be helpful if you check Stephan's last comment,
which states that isolation is important.

For per-job mode, we run a dedicated cluster (in the FLIP-6
design, maybe it should have been a couple of JMs and TMs)
for a specific job. Thus the process is isolated
from other jobs.

In our case there was a time when we suffered from multiple
jobs submitted by different users that affected
each other, so that all of them ran into an error state. Also,
running the client inside the cluster can save client
resources in some cases.

However, we also face several issues, as you mentioned:
in per-job mode the parent classloader is always used,
so classloading issues occur.

BTW, one can make an analogy between session/per-job mode
in Flink and client/cluster mode in Spark.

Best,
tison.
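
(For reference, a rough sketch of the two submission paths discussed in this 
thread; the interface and method names follow the discussion quoted below and 
are illustrative rather than an existing, finalized API.)

import java.util.concurrent.CompletableFuture;
import org.apache.flink.runtime.jobgraph.JobGraph;
import org.apache.flink.runtime.jobmaster.JobResult;

interface JobClient {
    CompletableFuture<JobResult> getJobResult();
}

interface ClusterClient {
    // session mode: the cluster already exists, we only submit the job
    JobClient submitJob(JobGraph jobGraph);
}

interface ClusterDeploymentDescriptor {
    // session mode: bring up a cluster first, then interact via ClusterClient
    ClusterClient deploySessionCluster();

    // per-job mode: cluster and job are deployed together and share a lifecycle
    JobClient deployPerJobCluster(JobGraph jobGraph);
}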


Yang Wang  于2019年8月22日周四 上午11:25写道:

> From the user's perspective, it is really confusing what the scope of
> a per-job cluster is.
>
>
> If it means a flink cluster with single job, so that we could get better
> isolation.
>
> Now it does not matter how we deploy the cluster, directly deploy(mode1)
>
> or start a flink cluster and then submit job through cluster client(mode2).
>
>
> Otherwise, if it just means directly deploy, how should we name the mode2,
>
> session with job or something else?
>
> We could also benefit from the mode2. Users could get the same isolation
> with mode1.
>
> The user code and dependencies will be loaded by user class loader
>
> to avoid class conflict with framework.
>
>
>
> Anyway, both of the two submission modes are useful.
>
> We just need to clarify the concepts.
>
>
>
>
> Best,
>
> Yang
>
> Zili Chen  于2019年8月20日周二 下午5:58写道:
>
> > Thanks for the clarification.
> >
> > The idea JobDeployer ever came into my mind when I was muddled with
> > how to execute per-job mode and session mode with the same user code
> > and framework codepath.
> >
> > With the concept JobDeployer we back to the statement that environment
> > knows every configs of cluster deployment and job submission. We
> > configure or generate from configuration a specific JobDeployer in
> > environment and then code align on
> >
> > *JobClient client = env.execute().get();*
> >
> > which in session mode returned by clusterClient.submitJob and in per-job
> > mode returned by clusterDescriptor.deployJobCluster.
> >
> > Here comes a problem that currently we directly run ClusterEntrypoint
> > with extracted job graph. Follow the JobDeployer way we'd better
> > align entry point of per-job deployment at JobDeployer. Users run
> > their main method or by a Cli(finally call main method) to deploy the
> > job cluster.
> >
> > Best,
> > tison.
> >
> >
> > Stephan Ewen  于2019年8月20日周二 下午4:40写道:
> >
> > > Till has made some good comments here.
> > >
> > > Two things to add:
> > >
> > >   - The job mode is very nice in that it runs the client inside the
> > > cluster (in the same image/process as the JM) and thus unifies both
> > > applications and what the Spark world calls the "driver mode".
> > >
> > >   - Another thing I would add is that during the FLIP-6 design, we were
> > > thinking about setups where Dispatcher and JobManager are separate
> > > processes.
> > > A Yarn or Mesos Dispatcher of a session could run independently (even
> > > as privileged processes executing no code).
> > > Then the "per-job" mode could still be helpful: when a job is
> > > submitted to the dispatcher, it launches the JM again in per-job mode,
> > > so that JM and TM processes are bound to the job only. For higher
> > > security setups, it is important that processes are not reused across
> > > jobs.
> > >
> > > On Tue, Aug 20, 2019 at 10:27 AM Till Rohrmann 
> > > wrote:
> > >
> > > > I would not be in favour of getting rid of the per-job mode since it
> > > > simplifies the process of running Flink jobs considerably. Moreover,
> it
> > > is
> > > > not only well suited for container deployments but also for
> deployments
> > > > where you want to guarantee job isolation. For example, a user could
> > use
> > > > the per-job mode on Yarn to execute his job on a separate cluster.
> > > >
> > > > I think that having two notions of cluster deployments (session vs.
> > > per-job
> > > > mode) does not necessarily contradict your ideas for the client api
> > > > refactoring. For example one could have the following interfaces:
> > > >
> > > > - ClusterDeploymentDescriptor: encapsulates the logic how to deploy a
> > > > cluster.
> > > > - ClusterClient: allows to interact with a cluster
> > > > - JobClient: allows to interact with a running job
> > > >
> > > > Now the ClusterDeploymentDescriptor could have two methods:
> > > >
> > > > - ClusterClient deploySessionCluster()
> > > > - JobClusterClient/JobClient deployPerJobCluster(JobGraph)
> > > >
> > > > where JobClusterClient is either a supertype of ClusterClient which
> > > > does not give you the functionality to submit jobs, or
> > > > deployPerJobCluster returns a JobClient directly.
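A minimal Java sketch of the interface split described in this message
(the names follow the email; the exact signatures, the shutdown method and
the use of futures are assumptions for illustration, not the actual Flink
API):

// Hypothetical sketch of the proposed split; not real Flink interfaces.
import java.util.concurrent.CompletableFuture;

final class JobGraph {}

// Interact with a single running job.
interface JobClient {
    CompletableFuture<Void> cancel();
}

// Cluster-level interactions *without* the ability to submit jobs.
interface JobClusterClient {
    void shutdownCluster();
}

// A full cluster client additionally allows job submission.
interface ClusterClient extends JobClusterClient {
    CompletableFuture<JobClient> submitJob(JobGraph jobGraph);
}

// Encapsulates the logic of how to deploy a cluster.
interface ClusterDeploymentDescriptor {
    // Start an empty session cluster; jobs are submitted later via the client.
    ClusterClient deploySessionCluster();

    // Start a dedicated cluster bound to exactly this job. The alternative
    // mentioned in the email would be to return a JobClusterClient here.
    JobClient deployPerJobCluster(JobGraph jobGraph);
}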

Re: [DISCUSS] Flink Python User-Defined Function for Table API

2019-08-22 Thread Hequn Cheng
+1 for starting the vote.

Thanks Jincheng a lot for the discussion.

Best, Hequn

On Fri, Aug 23, 2019 at 10:06 AM Dian Fu  wrote:

> Hi Jincheng,
>
> +1 to creating the FLIP and starting the VOTE on this feature. I'm willing
> to help create the FLIP if you don't mind. As I haven't created a FLIP
> before, it would be great if you could help with this. :)
>
> Regards,
> Dian
>
> > On Aug 22, 2019, at 11:41 PM, jincheng sun wrote:
> >
> > Hi all,
> >
> > Thanks a lot for your feedback. If there are no more suggestions and
> > comments, I think it's better to initiate a vote to create a FLIP for
> > Apache Flink Python UDFs.
> > What do you think?
> >
> > Best, Jincheng
> >
> > jincheng sun wrote on Thu, Aug 15, 2019 at 12:54 AM:
> >
> >> Hi Thomas,
> >>
> >> Thanks for your confirmation and the very important reminder about
> >> bundle processing.
> >>
> >> I have added a description of how to perform bundle processing from
> >> the perspective of checkpoints and watermarks. Feel free to leave
> >> comments if anything is not described clearly.
> >>
> >> Best,
> >> Jincheng
> >>
> >>
> >> Dian Fu wrote on Wed, Aug 14, 2019 at 10:08 AM:
> >>
> >>> Hi Thomas,
> >>>
> >>> Thanks a lot for the suggestions.
> >>>
> >>> Regarding bundle processing, there is a section "Checkpoint"[1] in
> >>> the design doc which talks about how to handle checkpoints.
> >>> However, I think you are right that we should say more about it, such
> >>> as what bundle processing is, how it affects checkpoints and
> >>> watermarks, and how to handle them.
> >>>
> >>> [1]
> >>>
> https://docs.google.com/document/d/1WpTyCXAQh8Jr2yWfz7MWCD2-lou05QaQFb810ZvTefY/edit#heading=h.urladt565yo3
> >>>
> >>> Regards,
> >>> Dian
> >>>
>  On Aug 14, 2019, at 1:01 AM, Thomas Weise wrote:
> 
>  Hi Jincheng,
> 
>  Thanks for putting this together. The proposal is very detailed and
>  thorough, and for me, as a Beam Flink runner contributor, easy to
>  understand :)
> 
>  One thing that you should probably detail more is the bundle
>  processing. It is critically important for performance that multiple
>  elements are processed in a bundle. The default bundle size in the
>  Flink runner is 1s or 1000 elements, whichever comes first. And for
>  streaming, you can find the logic necessary to align the bundle
>  processing with watermarks and checkpointing here:
> 
> >>>
> https://github.com/apache/beam/blob/release-2.14.0/runners/flink/src/main/java/org/apache/beam/runners/flink/translation/wrappers/streaming/ExecutableStageDoFnOperator.java
> 
>  Thomas
> 
> 
> 
> 
> 
> 
> 
>  On Tue, Aug 13, 2019 at 7:05 AM jincheng sun <
> sunjincheng...@gmail.com>
>  wrote:
> 
> > Hi all,
> >
> > The Python Table API (without Python UDF support) is already supported
> > and will be available in the upcoming 1.9 release.
> > As Python UDF is very important for Python users, we'd like to start
> > the discussion about Python UDF support in the Python Table API.
> > Aljoscha Krettek, Dian Fu and I have discussed this offline and have
> > drafted a design doc[1]. It includes the following items:
> >
> > - The user-defined function interfaces.
> > - The user-defined function execution architecture.
> >
> > As mentioned by many people in the previous discussion thread[2], a
> > portability framework was introduced in Apache Beam in recent releases.
> > It provides well-defined, language-neutral data structures and
> > protocols for language-neutral user-defined function execution. This
> > design is based on Beam's portability framework. We will describe how
> > to make use of Beam's portability framework for user-defined function
> > execution: data transmission, state access, checkpointing, metrics,
> > logging, etc.
> >
> > Considering that the design relies on Beam's portability framework for
> > Python user-defined function execution, and that not all contributors
> > in the Flink community are familiar with it, we have built a
> > prototype[3] as a proof of concept and to ease understanding of the
> > design.
> >
> > Welcome any feedback.
> >
> > Best,
> > Jincheng
> >
> > [1]
> >
> >
> >>>
> https://docs.google.com/document/d/1WpTyCXAQh8Jr2yWfz7MWCD2-lou05QaQFb810ZvTefY/edit?usp=sharing
> > [2]
> >
> >
> >>>
> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-38-Support-python-language-in-flink-TableAPI-td28061.html
> > [3] https://github.com/dianfu/flink/commits/udf_poc
> >
> >>>
> >>>
>
>
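As a rough illustration of the bundle processing Thomas highlights above,
here is a deliberately simplified, hypothetical Java sketch (not the Beam
runner or Flink code; the BundleBuffer class and its method names are
invented for illustration): elements are buffered and flushed as a bundle
once a count or time limit is reached, and the open bundle is always
finished before a checkpoint is taken or a watermark is forwarded.

import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Simplified illustration of bundle processing semantics.
class BundleBuffer<T> {
    private final int maxBundleSize;                  // e.g. 1000 elements
    private final long maxBundleTimeMs;               // e.g. 1000 ms
    private final Consumer<List<T>> bundleProcessor;  // e.g. sends the bundle to the remote worker
    private final List<T> currentBundle = new ArrayList<>();
    private long bundleStartTime = -1;

    BundleBuffer(int maxBundleSize, long maxBundleTimeMs, Consumer<List<T>> bundleProcessor) {
        this.maxBundleSize = maxBundleSize;
        this.maxBundleTimeMs = maxBundleTimeMs;
        this.bundleProcessor = bundleProcessor;
    }

    // Buffer one element; flush if the bundle size or time limit is hit.
    void processElement(T element) {
        if (currentBundle.isEmpty()) {
            bundleStartTime = System.currentTimeMillis();
        }
        currentBundle.add(element);
        boolean sizeLimitReached = currentBundle.size() >= maxBundleSize;
        boolean timeLimitReached = System.currentTimeMillis() - bundleStartTime >= maxBundleTimeMs;
        if (sizeLimitReached || timeLimitReached) {
            finishBundle();
        }
    }

    // A checkpoint must not overtake in-flight elements: finish the bundle first.
    void onCheckpointBarrier(Runnable snapshotState) {
        finishBundle();
        snapshotState.run();
    }

    // Likewise, finish the open bundle before forwarding a watermark downstream.
    void onWatermark(long watermark, Consumer<Long> emitWatermark) {
        finishBundle();
        emitWatermark.accept(watermark);
    }

    private void finishBundle() {
        if (!currentBundle.isEmpty()) {
            bundleProcessor.accept(new ArrayList<>(currentBundle));
            currentBundle.clear();
        }
    }
}

With maxBundleSize = 1000 and maxBundleTimeMs = 1000, this mirrors the
Flink runner defaults Thomas mentions (1000 elements or 1s, whichever
comes first).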


Re: [ANNOUNCE] Apache Flink 1.9.0 released

2019-08-22 Thread Haibo Sun
Great news! Thanks Gordon and Kurt!
Best,
Haibo

At 2019-08-22 20:03:26, "Tzu-Li (Gordon) Tai"  wrote:
>The Apache Flink community is very happy to announce the release of Apache
>Flink 1.9.0, which is the latest major release.
>
>Apache Flink® is an open-source stream processing framework for
>distributed, high-performing, always-available, and accurate data streaming
>applications.
>
>The release is available for download at:
>https://flink.apache.org/downloads.html
>
>Please check out the release blog post for an overview of the improvements
>for this new major release:
>https://flink.apache.org/news/2019/08/22/release-1.9.0.html
>
>The full release notes are available in Jira:
>https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12344601
>
>We would like to thank all contributors of the Apache Flink community who
>made this release possible!
>
>Cheers,
>Gordon


Re: [ANNOUNCE] Apache Flink 1.9.0 released

2019-08-22 Thread Peter Huang
It is great news for the community. Thanks to everyone who contributed to
the release management. Congratulations!

On Thu, Aug 22, 2019 at 9:14 PM Haibo Sun  wrote:

> Great news! Thanks Gordon and Kurt!
>
> Best,
> Haibo
>
>
> At 2019-08-22 20:03:26, "Tzu-Li (Gordon) Tai"  wrote:
> >The Apache Flink community is very happy to announce the release of Apache
> >Flink 1.9.0, which is the latest major release.
> >
> >Apache Flink® is an open-source stream processing framework for
> >distributed, high-performing, always-available, and accurate data streaming
> >applications.
> >
> >The release is available for download at:
> >https://flink.apache.org/downloads.html
> >
> >Please check out the release blog post for an overview of the improvements
> >for this new major release:
> >https://flink.apache.org/news/2019/08/22/release-1.9.0.html
> >
> >The full release notes are available in Jira:
> >https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12344601
> >
> >We would like to thank all contributors of the Apache Flink community who
> >made this release possible!
> >
> >Cheers,
> >Gordon
>
>


Re: [ANNOUNCE] Apache Flink 1.9.0 released

2019-08-22 Thread qi luo
Congratulations and thanks for the hard work!

Qi

> On Aug 22, 2019, at 8:03 PM, Tzu-Li (Gordon) Tai  wrote:
> 
> The Apache Flink community is very happy to announce the release of Apache 
> Flink 1.9.0, which is the latest major release.
> 
> Apache Flink® is an open-source stream processing framework for distributed, 
> high-performing, always-available, and accurate data streaming applications.
> 
> The release is available for download at:
> https://flink.apache.org/downloads.html 
> 
> 
> Please check out the release blog post for an overview of the improvements 
> for this new major release:
> https://flink.apache.org/news/2019/08/22/release-1.9.0.html 
> 
> 
> The full release notes are available in Jira:
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12344601
>  
> 
> 
> We would like to thank all contributors of the Apache Flink community who 
> made this release possible!
> 
> Cheers,
> Gordon