Re: [ANNOUNCE] Build Issues Solved

2016-06-02 Thread Ufuk Celebi
With the recent fixes, the builds are more stable, but I still see
many failing because of the Scala shell tests, which cause the JVMs
to crash. I've researched this a little bit, but didn't find an
obvious solution to the problem.

Does it make sense to disable the tests until someone has time to look into it?

– Ufuk
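
For reference, a minimal sketch of one way to switch a JUnit suite off temporarily; the class and method names below are placeholders, not the actual Flink test classes, and Surefire simply skips anything annotated with @Ignore:

import org.junit.{Ignore, Test}

// Hedged sketch: @Ignore (with a reason) makes JUnit/Surefire skip the whole
// suite until the JVM crashes are understood. The class below is a placeholder,
// not one of the real Scala shell test classes.
@Ignore("Temporarily disabled: the Scala shell tests crash the JVM, see this thread")
class DisabledSuiteSketch {
  @Test
  def placeholder(): Unit = ()
}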

On Tue, May 31, 2016 at 1:46 PM, Stephan Ewen  wrote:
> You are right, Chiwan.
>
> I think that this pattern you use should be supported, though. It would be
> good to check whether the job executes more often than necessary at the
> point of the "collect()" calls.
> That would explain the network buffer issue then...
>
> On Tue, May 31, 2016 at 12:18 PM, Chiwan Park  wrote:
>
>> Hi Stephan,
>>
>> Yes, right. But KNNITSuite calls
>> ExecutionEnvironment.getExecutionEnvironment only once [1]. I'm testing
>> with the getExecutionEnvironment call moved into each test case.
>>
>> [1]:
>> https://github.com/apache/flink/blob/master/flink-libraries/flink-ml/src/test/scala/org/apache/flink/ml/nn/KNNITSuite.scala#L45
>>
>> Regards,
>> Chiwan Park
>>
>> > On May 31, 2016, at 7:09 PM, Stephan Ewen  wrote:
>> >
>> > Hi Chiwan!
>> >
>> > I think the Execution environment is not shared, because what the
>> > TestEnvironment sets is a Context Environment Factory. Every time you
>> call
>> > "ExecutionEnvironment.getExecutionEnvironment()", you get a new
>> environment.
>> >
>> > Stephan
>> >
>> >
>> > On Tue, May 31, 2016 at 11:53 AM, Chiwan Park 
>> wrote:
>> >
>> >> I’ve created a JIRA issue [1] related to KNN test cases. I will send a
>> PR
>> >> for it.
>> >>
>> >> From my investigation [2], the cluster for ML tests has only one
>> >> taskmanager with 4 slots. Is 2048 insufficient for the total number of
>> >> network buffers? I still think the problem is sharing the
>> >> ExecutionEnvironment between test cases.
>> >>
>> >> [1]: https://issues.apache.org/jira/browse/FLINK-3994
>> >> [2]:
>> >>
>> https://github.com/apache/flink/blob/master/flink-test-utils/src/test/scala/org/apache/flink/test/util/FlinkTestBase.scala#L56
>> >>
>> >> Regards,
>> >> Chiwan Park
>> >>
>> >>> On May 31, 2016, at 6:05 PM, Maximilian Michels 
>> wrote:
>> >>>
>> >>> Thanks, Stephan, for the synopsis of our last weeks' test instability
>> >>> madness. It's sad to see the shortcomings of the Maven test plugins, but
>> >>> another lesson learned is that our testing infrastructure should get a
>> >>> bit more attention. We have reached a point several times where our
>> >>> tests were inherently unstable. Now we saw that even more problems
>> >>> were hidden in the dark. I would like to see more maintenance
>> >>> dedicated to testing.
>> >>>
>> >>> @Chiwan: Please, no hotfix! Please open a JIRA issue and a pull
>> >>> request with a systematic fix. Those things are too crucial to be
>> >>> fixed on the go. The problem is that Travis reports the number of
>> >>> processors to be "32" (which is used for the number of task slots in
>> >>> local execution). The network buffers are not adjusted accordingly. We
>> >>> should set them correctly in the MiniCluster. Also, we could define an
>> >>> upper limit on the number of task slots for tests.
>> >>>
>> >>> On Tue, May 31, 2016 at 10:59 AM, Chiwan Park 
>> >> wrote:
>>  I think that the tests fail because of sharing the ExecutionEnvironment
>> >> between test cases. I'm not sure why it is a problem, but it is the only
>> >> difference from the other ML tests.
>> 
>>  I created a hotfix and pushed it to my repository. When it seems fixed
>> >> [1], I'll merge the hotfix to the master branch.
>> 
>>  [1]: https://travis-ci.org/chiwanpark/flink/builds/134104491
>> 
>>  Regards,
>>  Chiwan Park
>> 
>> > On May 31, 2016, at 5:43 PM, Chiwan Park 
>> >> wrote:
>> >
>> > It seems to be related to the KNN test case which was merged yesterday.
>> >> I'll look into the ML test.
>> >
>> > Regards,
>> > Chiwan Park
>> >
>> >> On May 31, 2016, at 5:38 PM, Ufuk Celebi  wrote:
>> >>
>> >> Currently, an ML test is reliably failing and occasionally some HA
>> >> tests. Is someone looking into the ML test?
>> >>
>> >> For HA, I will revert a commit that might be causing the HA
>> >> instabilities. Till is working on a proper fix as far as I know.
>> >>
>> >> On Tue, May 31, 2016 at 3:50 AM, Chiwan Park > >
>> >> wrote:
>> >>> Thanks for the great work! :-)
>> >>>
>> >>> Regards,
>> >>> Chiwan Park
>> >>>
>>  On May 31, 2016, at 7:47 AM, Flavio Pompermaier <
>> >> pomperma...@okkam.it> wrote:
>> 
>>  Awesome work guys!
>>  And even more thanks for the detailed report... This troubleshooting
>>  summary will undoubtedly be useful for all our Maven projects!
>> 
>>  Best,
>>  Flavio
>>  On 30 May 2016 23:47, "Ufuk Celebi"  wrote:
>> 
>> > Thanks for the effort, Max and Stephan! Happy to see the green
>> >> light again.
>> >
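
As an aside to the slot/buffer discussion quoted above (Travis advertising 32 cores, local execution turning that into 32 task slots, and the default 2048 network buffers then being too few): the idea of capping test parallelism and scaling the buffer pool with it can be sketched roughly as below. This is only an illustration, not the fix that was merged, and the configuration key strings are the usual 1.x names from memory, so treat them as assumptions.

import org.apache.flink.configuration.Configuration

object TestClusterConfigSketch {
  // Hedged sketch: don't let a 32-core CI box translate into 32 local task
  // slots, and grow the network buffer pool with the slot count instead of
  // leaving it at the default.
  def testConfig(maxSlots: Int = 4): Configuration = {
    val config = new Configuration()

    val slots = math.min(Runtime.getRuntime.availableProcessors(), maxSlots)
    config.setInteger("taskmanager.numberOfTaskSlots", slots)

    // All-to-all exchanges need on the order of slots * slots buffers per
    // shuffle; the factor 32 is purely illustrative, not a recommendation.
    config.setInteger("taskmanager.network.numberOfBuffers",
      math.max(2048, slots * slots * 32))

    config
  }
}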

Re: [ANNOUNCE] Build Issues Solved

2016-06-02 Thread Maximilian Michels
I thought this had been fixed by Chiwan in the meantime. Could you
post a build log?

On Thu, Jun 2, 2016 at 1:14 PM, Ufuk Celebi  wrote:
> With the recent fixes, the builds are more stable, but I still see
> many failing because of the Scala shell tests, which cause the JVMs
> to crash. I've researched this a little bit, but didn't find an
> obvious solution to the problem.
>
> Does it make sense to disable the tests until someone has time to look
> into it?
>
> – Ufuk
>

Re: [ANNOUNCE] Build Issues Solved

2016-06-02 Thread Ufuk Celebi
On Thu, Jun 2, 2016 at 1:26 PM, Maximilian Michels  wrote:
> I thought this had been fixed by Chiwan in the meantime. Could you

Chiwan fixed the ML issues IMO. You can pick any of the recent builds
from https://travis-ci.org/apache/flink/builds

For example: 
https://s3.amazonaws.com/archive.travis-ci.org/jobs/134458335/log.txt


How to run table api in 1.1-SNAPSHOT

2016-06-02 Thread Cody Innowhere
Hi guys,
I'm trying to run the Table API on the master trunk using the
sql/registerDataSet APIs of the TableEnvironment class.

According to the doc in table.md, after registering a table I should be
able to run a SQL query on the tableEnv, so I made a slight change to
WordCountTable.scala by simply adding two lines:

-
object WordCountTable {

  case class WC(word: String, count: Int)

  def main(args: Array[String]): Unit = {

    // set up execution environment
    val env = ExecutionEnvironment.getExecutionEnvironment
    val tEnv = TableEnvironment.getTableEnvironment(env)

    val input = env.fromElements(WC("hello", 1), WC("hello", 2), WC("ciao", 3))
    val expr = input.toTable(tEnv)

    // *** added lines ***
    tEnv.registerDataSet("WC", input, 'word, 'count)
    val result1 = tEnv.sql("SELECT word FROM WC")

    val result = expr
      .groupBy('word)
      .select('word, 'count.sum as 'count)
      .toDataSet[WC]

    result.print()
  }
}

As you can see, the current SQL query is "SELECT word FROM WC" and it works.
But when I change the query to
"SELECT word, count FROM WC", it fails with the exception:
"Exception in thread "main"
org.apache.calcite.sql.parser.SqlParseException: Encountered "count FROM"
at line 1, column 13.
Was expecting one of:
  ...
  ..."

Am I missing something?
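
For reference, the parse error above is consistent with count being a reserved word in Calcite's SQL parser. A minimal way to sidestep it, reusing tEnv and input from the snippet above, would be to register the data under a non-reserved column name; this is only a sketch, the choice of "frequency" is arbitrary, and whether this exact renaming works against that snapshot is untested here:

// Sketch: register the data set a second time with a non-reserved column name
// ("frequency" is arbitrary), then query that name instead of "count".
tEnv.registerDataSet("WC2", input, 'word, 'frequency)
val result2 = tEnv.sql("SELECT word, frequency FROM WC2")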

BTW, I read the doc at
https://docs.google.com/document/d/1TLayJNOTBle_-m1rQfgA6Ouj1oYsfqRjPcp1h2TVqdI/.
I suppose Task2 has been finished already, right? And is somebody working
on Task3? Do we have a timeline for SQL on Flink?

Thanks~


[jira] [Created] (FLINK-4009) Scala Shell fails to find library for inclusion in test

2016-06-02 Thread Maximilian Michels (JIRA)
Maximilian Michels created FLINK-4009:
-

 Summary: Scala Shell fails to find library for inclusion in test
 Key: FLINK-4009
 URL: https://issues.apache.org/jira/browse/FLINK-4009
 Project: Flink
  Issue Type: Test
  Components: Scala Shell, Tests
Affects Versions: 1.1.0
Reporter: Maximilian Michels
Assignee: Maximilian Michels
 Fix For: 1.1.0


The Scala Shell test fails to find the flink-ml library jar in the target
folder. This is because the working directory is expected to be
"flink-scala-shell/target" when it is in fact "flink-scala-shell". I'm a bit
puzzled why that could have changed recently. The last incident I recall where
we had to change paths was when we introduced shading of all artifacts to
produce effective poms (via the force-shading module). I'm assuming the change
of paths has to do with switching from Failsafe to Surefire in FLINK-3909.
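
A sketch of the kind of lookup that tolerates either working directory (illustrative only, not the committed fix):

{code}
// Hedged sketch: resolve the flink-ml jar whether the working directory is
// "flink-scala-shell" or "flink-scala-shell/target".
import java.io.File

def findLibraryJar(namePrefix: String): Option[File] = {
  val candidateDirs = Seq(new File("target"), new File("."))
  candidateDirs
    .filter(_.isDirectory)
    .flatMap(dir => Option(dir.listFiles()).getOrElse(Array.empty[File]))
    .find(f => f.getName.startsWith(namePrefix) && f.getName.endsWith(".jar"))
}

// e.g. findLibraryJar("flink-ml")
{code}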



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-4010) Scala Shell tests may fail because of a locked STDIN

2016-06-02 Thread Maximilian Michels (JIRA)
Maximilian Michels created FLINK-4010:
-

 Summary: Scala Shell tests may fail because of a locked STDIN
 Key: FLINK-4010
 URL: https://issues.apache.org/jira/browse/FLINK-4010
 Project: Flink
  Issue Type: Test
  Components: Scala Shell, Tests
Affects Versions: 1.1.0
Reporter: Maximilian Michels
Assignee: Maximilian Michels
 Fix For: 1.1.0


The Surefire plugin uses STDIN to communicate with forked processes. When the
Surefire plugin and the Scala Shell both read from STDIN, this may result in
a deadlock.
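
As a rough illustration of the constraint (not the fix for this issue): a REPL-driven test can be kept off the real STDIN by handing it a scripted input stream for the duration of the test, e.g.:

{code}
// Hedged sketch: swap System.in for a scripted stream while the shell-driven
// test runs, then restore it, so the test never blocks on the stream that
// Surefire uses to talk to the forked JVM.
import java.io.ByteArrayInputStream

def withScriptedStdin[T](script: String)(body: => T): T = {
  val original = System.in
  System.setIn(new ByteArrayInputStream(script.getBytes("UTF-8")))
  try body finally System.setIn(original)
}
{code}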



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [ANNOUNCE] Build Issues Solved

2016-06-02 Thread Maximilian Michels
I think this is related to the Yarn bug with the YarnSessionCli we
just fixed. The problem is that forked processes of the Surefire
plugin communicate via STDIN. The Scala Shell also reads from STDIN
which results in a deadlock from time to time...

Created an issue for that: https://issues.apache.org/jira/browse/FLINK-4010


On Thu, Jun 2, 2016 at 1:31 PM, Ufuk Celebi  wrote:
> On Thu, Jun 2, 2016 at 1:26 PM, Maximilian Michels  wrote:
>> I thought this had been fixed by Chiwan in the meantime. Could you
>
> Chiwan fixed the ML issues IMO. You can pick any of the recent builds
> from https://travis-ci.org/apache/flink/builds
>
> For example: 
> https://s3.amazonaws.com/archive.travis-ci.org/jobs/134458335/log.txt


[jira] [Created] (FLINK-4011) Unable to access completed job in web frontend

2016-06-02 Thread Robert Metzger (JIRA)
Robert Metzger created FLINK-4011:
-

 Summary: Unable to access completed job in web frontend
 Key: FLINK-4011
 URL: https://issues.apache.org/jira/browse/FLINK-4011
 Project: Flink
  Issue Type: Bug
  Components: Webfrontend
Affects Versions: 1.1.0
Reporter: Robert Metzger
Assignee: Robert Metzger
Priority: Critical


In the current master, I'm not able to access a finished job's detail page.

The JobManager log shows the following exception:

{code}
2016-06-02 15:23:08,581 WARN  org.apache.flink.runtime.webmonitor.RuntimeMonitorHandler - Error while handling request
java.lang.RuntimeException: Couldn't deserialize ExecutionConfig.
    at org.apache.flink.runtime.webmonitor.handlers.JobConfigHandler.handleRequest(JobConfigHandler.java:52)
    at org.apache.flink.runtime.webmonitor.handlers.AbstractExecutionGraphRequestHandler.handleRequest(AbstractExecutionGraphRequestHandler.java:61)
    at org.apache.flink.runtime.webmonitor.RuntimeMonitorHandler.respondAsLeader(RuntimeMonitorHandler.java:88)
    at org.apache.flink.runtime.webmonitor.RuntimeMonitorHandlerBase.channelRead0(RuntimeMonitorHandlerBase.java:84)
    at org.apache.flink.runtime.webmonitor.RuntimeMonitorHandlerBase.channelRead0(RuntimeMonitorHandlerBase.java:44)
    at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
    at io.netty.handler.codec.http.router.Handler.routed(Handler.java:62)
    at io.netty.handler.codec.http.router.DualAbstractHandler.channelRead0(DualAbstractHandler.java:57)
    at io.netty.handler.codec.http.router.DualAbstractHandler.channelRead0(DualAbstractHandler.java:20)
    at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
    at org.apache.flink.runtime.webmonitor.HttpRequestHandler.channelRead0(HttpRequestHandler.java:105)
    at org.apache.flink.runtime.webmonitor.HttpRequestHandler.channelRead0(HttpRequestHandler.java:65)
    at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:242)
    at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:147)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:847)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
    at org.apache.flink.util.SerializedValue.deserializeValue(SerializedValue.java:55)
    at org.apache.flink.runtime.webmonitor.handlers.JobConfigHandler.handleRequest(JobConfigHandler.java:50)
    ... 31 more

{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-4012) Docs: Links to "Iterations" are broken (404)

2016-06-02 Thread Bernd Louis (JIRA)
Bernd Louis created FLINK-4012:
--

 Summary: Docs: Links to "Iterations" are broken (404) 
 Key: FLINK-4012
 URL: https://issues.apache.org/jira/browse/FLINK-4012
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Affects Versions: 1.0.2, 1.1.0
Reporter: Bernd Louis
Priority: Trivial
 Fix For: 1.1.0, 1.0.2


1. Browse: 
https://ci.apache.org/projects/flink/flink-docs-release-1.0/apis/common/index.html
 or 
https://ci.apache.org/projects/flink/flink-docs-master/apis/common/index.html
2. Find the text "information on iterations (see Iterations)."
3. Click "Iterations" 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Flink Kafka Consumer throwing Null Pointer Exception

2016-06-02 Thread Akshay Thaka Shingote
Hello,

Can anyone help with this issue:
http://stackoverflow.com/questions/37568822/flink-kafka-consumer-throws-null-pointer-exception-when-using-datastream-key-by
I'm stuck here and haven't found a solution to this issue.


Regards,
Akshay Shingote


Re: Flink Kafka Consumer throwing Null Pointer Exception

2016-06-02 Thread Aljoscha Krettek
I just wrote an answer for this on Stack Overflow:

The problem is in this function:

@Override
public TypeInformation getProducedType() {
    // TODO Auto-generated method stub
    return null;
}

you cannot return null here.
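
For completeness, a minimal sketch of a DeserializationSchema with a real getProducedType; the event type, field names, and package locations below are assumptions based on my recollection of this Flink version, not taken from the question:

import org.apache.flink.api.common.typeinfo.TypeInformation
import org.apache.flink.api.java.typeutils.TypeExtractor
import org.apache.flink.streaming.util.serialization.DeserializationSchema

// Illustrative event type, not from the original question.
case class MyEvent(payload: String)

class MyEventSchema extends DeserializationSchema[MyEvent] {
  override def deserialize(message: Array[Byte]): MyEvent =
    MyEvent(new String(message, "UTF-8"))

  override def isEndOfStream(nextElement: MyEvent): Boolean = false

  // Return a real TypeInformation instead of null.
  override def getProducedType(): TypeInformation[MyEvent] =
    TypeExtractor.getForClass(classOf[MyEvent])
}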

On Thu, 2 Jun 2016 at 17:57 Akshay Thaka Shingote 
wrote:

> Hello,
>
> Can anyone help with this issue:
> http://stackoverflow.com/questions/37568822/flink-kafka-consumer-throws-null-pointer-exception-when-using-datastream-key-by
> I'm stuck here and haven't found a solution to this issue.
>
>
> Regards,
> Akshay Shingote
>


Re: [PROPOSAL] Structure the Flink Open Source Development

2016-06-02 Thread Henry Saputra
+1 for shepherd

I would prefer using that term rather than maintainer. It is being used by
the Incubator PMC to help keep development healthy in podlings.

The term "maintainer" has come under some scrutiny in ASF communities, in
recent episodes happening in the Spark community.

- Henry

On Wed, Jun 1, 2016 at 12:00 PM, Stephan Ewen  wrote:

> I like the name "shepherd". It implies a non-authoritative role, and implies
> guidance, which is very fitting.
>
> I also think there is no problem with having a "component shepherd" and a
> "pull request shepherd".
>
> Stephan
>
>
> On Wed, Jun 1, 2016 at 7:11 PM, Fabian Hueske  wrote:
>
> > I think calling the role maintainer is not a good idea.
> > The Spark community had a maintainer process which they just voted to
> > remove. From my understanding, a maintainer in Spark had a more active
> role
> > than the role we are currently discussing.
> >
> > I would prefer to not call the role "maintainer" to make clear that the
> > responsibilities are different (less active) and mainly observing.
> >
> > 2016-06-01 13:14 GMT+02:00 Ufuk Celebi :
> >
> > > Thanks! I like the idea of renaming it.  I'm fine with shepherd and I
> > > also like Vasia's suggestion "champion".
> > >
> > > I would like to add "Distributed checkpoints" as a separate component
> > > to track development for check- and savepoints.
> > >
> > >
> > >
> > > On Wed, Jun 1, 2016 at 10:59 AM, Aljoscha Krettek  >
> > > wrote:
> > > > Btw, in Jira, if we clean up our components we can also set a
> component
> > > > Lead that would get notified of issues for that component.
> > > >
> > > > On Wed, 1 Jun 2016 at 10:43 Chesnay Schepler 
> > wrote:
> > > >
> > > >> I'd also go with maintainer.
> > > >>
> > > >> On 01.06.2016 10:32, Aljoscha Krettek wrote:
> > > >> > Hi,
> > > >> > I think maintainer is also fine if we clearly specify that they
> are
> > > not
> > > >> > meant as dictators or gatekeepers of the component that they are
> > > >> > responsible for.
> > > >> >
> > > >> > -Aljoscha
> > > >> >
> > > >> > On Wed, 1 Jun 2016 at 09:48 Vasiliki Kalavri <
> > > vasilikikala...@gmail.com>
> > > >> > wrote:
> > > >> >
> > > >> >> Hi,
> > > >> >>
> > > >> >> we could go for something like "sponsor" or "champion" :)
> > > >> >> I'm fine with the proposal. Good to see more than 1 person for
> both
> > > >> Gelly
> > > >> >> and Table API.
> > > >> >>
> > > >> >> cheers,
> > > >> >> -V.
> > > >> >>
> > > >> >> On 1 June 2016 at 05:46, Tzu-Li (Gordon) Tai  >
> > > >> wrote:
> > > >> >>
> > > >> >>> I'd like to be added to the Streaming Connectors component
> > (already
> > > >> >> edited
> > > >> >>> Wiki) :)
> > > >> >>>
> > > >> >>> Ah, naming, one of the hardest problems in programming :P Some
> > > >> comments:
> > > >> >>> I agree with Robert that the name "maintainers" will be somewhat
> > > >> >> misleading
> > > >> >>> regarding the authoritative difference with committers / PMCs,
> > > >> especially
> > > >> >>> for future newcomers to the community who don't come across the
> > > >> original
> > > >> >>> discussion on this thread.
> > > >> >>>
> > > >> >>> Simone's suggestion of Overseer seems good. The name naturally
> > > matches
> > > >> >> its
> > > >> >>> role -
> > > >> >>> - A group of "Overseers" for components, who keeps an eye on
> > related
> > > >> mail
> > > >> >>> threads, known limitations and issues, JIRAs, open PRs,
> requested
> > > >> >> features,
> > > >> >>> and potential new overseers and committers, etc, for the
> component
> > > >> >>> (original
> > > >> >>> maintainer role).
> > > >> >>> - A "Shepherd" for individual PRs, assigned from the overseers
> of
> > > the
> > > >> >>> component with the aim to guide the submitting contributor.
> > > Overseers
> > > >> >>> typically pick up new PRs to shepherd themselves, or the leading
> > > >> overseer
> > > >> >>> allocates an overseer to shepherd a PR which hasn't been picked
> up
> > > yet
> > > >> >>> after
> > > >> >>> a certain period of time.
> > > >> >>>
> > > >> >>> Or perhaps we can also simply go for "Shepherds" for components
> > and
> > > >> >>> "Assigned Shepherd" for individual PRs?
> > > >> >>>
> > > >> >>>
> > > >> >>>


[jira] [Created] (FLINK-4013) GraphAlgorithms to simplify directed and undirected graphs

2016-06-02 Thread Greg Hogan (JIRA)
Greg Hogan created FLINK-4013:
-

 Summary: GraphAlgorithms to simplify directed and undirected graphs
 Key: FLINK-4013
 URL: https://issues.apache.org/jira/browse/FLINK-4013
 Project: Flink
  Issue Type: New Feature
  Components: Gelly
Affects Versions: 1.1.0
Reporter: Greg Hogan
Assignee: Greg Hogan
Priority: Minor
 Fix For: 1.1.0


Create a directed {{GraphAlgorithm}} to remove self-loops and duplicate edges 
and an undirected {{GraphAlgorithm}} to symmetrize and remove self-loops and 
duplicate edges.

Remove {{RMatGraph.setSimpleGraph}} and the associated logic.
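
A plain-Scala sketch of the two semantics described above (illustrative only, not the Gelly implementation):

{code}
// Directed: drop self-loops and duplicate edges.
def simplifyDirected[K](edges: Seq[(K, K)]): Seq[(K, K)] =
  edges.filter { case (source, target) => source != target }.distinct

// Undirected: symmetrize (emit both directions), then drop self-loops and
// duplicate edges.
def simplifyUndirected[K](edges: Seq[(K, K)]): Seq[(K, K)] =
  edges
    .flatMap { case (source, target) => Seq((source, target), (target, source)) }
    .filter { case (source, target) => source != target }
    .distinct
{code}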



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)