Re: On integrating Flink with Apache NiFi
Hi Kostas,

my question is related to the fact that now Flink is available as a Cascading engine. So, if I had to write a general-purpose dataflow UI, I think I'd use Cascading or Dataflow, so that I could draw my pipeline with the UI and write the execution code using just one API. Obviously this introduces some latency in the dependency update process but, from the UI developer's perspective, I don't have to develop 3 different connectors (Flink, Spark and Tez, for example) because they come for free using the Cascading or DataFlow APIs. Does it make sense?

Best,
Flavio

On Tue, Sep 22, 2015 at 10:58 PM, Kostas Tzoumas wrote:

> I had a discussion with Joe from the NiFi community, and they are
> interested in contributing a connector between NiFi and Flink. I created a
> JIRA issue for that: https://issues.apache.org/jira/browse/FLINK-2740
>
> I believe that this is the easiest and most useful integration point to
> begin with, as NiFi and Flink are two quite different beasts, and ideally
> you would want to use them in conjunction.
>
> A UI to compose Flink programs would be a somewhat different topic, as that
> would need to be more tailored to Flink's capabilities.
>
> @Flavio: I am not sure I understood the question regarding Cascading.
> Perhaps start a new thread about it?
>
> On Tue, Sep 22, 2015 at 7:05 PM, Flavio Pompermaier wrote:
>
> > I saw that now Cascading supports Flink.. so maybe you could think about
> > programming a Cascading abstraction to also get Spark and Tez
> > compatibility for free! What do you think?
> > On 22 Sep 2015 11:17, "Christian Kreutzfeldt" wrote:
> >
> > > Hi Slim,
> > >
> > > thanks for sharing the presentation. Like Flavio, I see the (UI)
> > > capabilities provided by Apache NiFi as very helpful for (enterprise)
> > > customers.
> > >
> > > At the Otto Group we are currently thinking about how to track data
> > > through a stream-based system like we do in the batch world using
> > > staging layers. Apache NiFi seems to provide a feasible approach which
> > > gives lots of insights into your running system. Especially in those
> > > cases where topologies are still under development, or people are
> > > curious about how these mysterious streaming systems work, UI-supported
> > > development & tracking / monitoring is a big win.
> > >
> > > Having these requirements in mind, I don't see an integration point with
> > > Apache NiFi, as the project does not only provide a UI/orchestration
> > > layer but has a runtime etc. available too. An integration like Spark's
> > > (shown right at the beginning of the demo) seems to be possible, but
> > > what I'd like to see is a more Apache NiFi-like UI layer exclusively
> > > provided for Apache Flink. It would bring Flink very (!!!) much closer
> > > to the enterprise market as it solves some crucial requirements when it
> > > comes to development and monitoring - no need for developers with
> > > in-depth programming knowledge since topologies may be built visually,
> > > no need for in-depth tech understanding when it comes to monitoring ...
> > >
> > > From my point of view, solving the issue includes the following aspects:
> > >
> > > * providing an appropriate UI layer to visualize data flow / operator
> > > graphs along with metrics (system & user-defined) exported by operators
> > > * message tracking by integrating K/V stores and index management tools
> > >
> > > ...probably both worth more than a year of development time ;-)
> > >
> > > But if there were some efforts to implement & integrate such a feature,
> > > text me if dev support is required.
> > >
> > > Best regards,
> > > Christian
> > >
> > >
> > > 2015-09-19 12:42 GMT+02:00 Flavio Pompermaier :
> > >
> > > > As a user that would be very helpful!
> > > > On 19 Sep 2015 12:34, "Slim Baltagi" wrote:
> > > >
> > > > > Hi Flink experts!
> > > > >
> > > > > I came across Apache NiFi: https://nifi.apache.org
> > > > > https://www.youtube.com/watch?v=sQCgtCoZyFQ
> > > > > In the NiFi project, there is an open JIRA
> > > > > issue: https://issues.apache.org/jira/browse/NIFI-823 to evaluate /
> > > > > provide integration with Apache Flink.
> > > > >
> > > > > Is the integration of Flink with NiFi on the roadmap of the Apache
> > > > > Flink project?
> > > > >
> > > > > Expected advantages for Flink when integrating with NiFi would be:
> > > > > - GUI: Web-based user interface to design data flows
> > > > > - Dynamic dataflows: Modify dataflow at runtime
> > > > > - Security: Authentication, authorization, encryption
> > > > > - Data Provenance: Track data flow from beginning to end
> > > > >
> > > > > What do you think?
> > > > >
> > > > > Thanks
> > > > >
> > > > > Slim Baltagi
> > > > >
> > > > > --
> > > > > View this message in context:
> > > > > http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/On-integrating-Flink-with-Apache-NiFi-tp8059.html
> > > > > Sent f
Re: On integrating Flink with Apache NiFi
That depends on your requirements. A common interface is usually less feature rich than the abstracted processing engines. For example, Cascading does not support iterations or streaming. If you are fine with the features (and limitations) of the common interface, your approach might make sense. Cheers, Fabian 2015-09-23 9:41 GMT+02:00 Flavio Pompermaier : > Hi Kostas, > my question is related to the fact that now Flink is available as Cascading > engine. > So, if I had to write a general purpose dataflow UI I think I'd use > Cascading or Dataflow, > so that I could draw my pipeline with the UI and write the execution code > using just one API. > > Obviously this introduce some latency in the dependency update process but, > from the UI developer perspective, I don't have to develop 3 different > connectors (Flink, Spark and Tez for example) because they come for free > using Cascading or DataFlow APIs. Does it make sense? > > Best, > Flavio > > On Tue, Sep 22, 2015 at 10:58 PM, Kostas Tzoumas > wrote: > > > I had a discussion with Joe from the NiFi community, and they are > > interested in contributing a connector between NiFi and Flink. I created > a > > JIRA issue for that: https://issues.apache.org/jira/browse/FLINK-2740 > > > > I believe that this is the easiest and most useful integration point to > > begin with, as NiFi and Flink are two quite different beasts, and ideally > > you would want to use them in conjunction. > > > > A UI to compose Flink programs would be a somewhat different topic, as > that > > would need to be more tailored to Flink's capabilities. > > > > @Flavio: I am not sure I understood the question regarding cascading. > > Perhaps start a new thread about it? > > > > On Tue, Sep 22, 2015 at 7:05 PM, Flavio Pompermaier < > pomperma...@okkam.it> > > wrote: > > > > > I saw that now cascading supports Flink..so maybe you could think in > > > programming a cascading abstraction to have also spark and tez > > > compatibility for free!what do you think? > > > On 22 Sep 2015 11:17, "Christian Kreutzfeldt" > wrote: > > > > > > > Hi Slim, > > > > > > > > thanks for sharing the presentation. Like Flavio I see the (UI) > > > > capabilities provided by Apache NiFi as very helpful for (enterprise) > > > > customers. > > > > > > > > At the Otto Group we are currently think about how to track data > > through > > > a > > > > stream based system like we do in the batch world using staging > layers. > > > > Apache NiFi seems to provide a feasible approach which gives lots of > > > > insights into your running system. Especially in those cases where > > > > topologies are still under development or people are curious on how > > these > > > > mysterious streaming systems work UI supported development & > tracking / > > > > monitoring is a big win. > > > > > > > > Having these requirements in mind, I don't see an integration point > > with > > > > Apache NiFi as the project does not only provide an UI/orchestration > > > layer > > > > but has a runtime etc. available too. An integration like Spark > (shown > > > > right at the beginning of the demo) seems to be possible but what I'd > > > like > > > > to see is a more Apache NiFi-like UI layer exclusively provided for > > > Apache > > > > Flink. It would bring Flink very (!!!) 
much closer to the enterprise > > > market > > > > as it solves some crucial requirements when it comes to development > and > > > > monitoring - no need for developers with in-depth programming > knowledge > > > > since topologies may be built visually, no need for depth tech > > > > understanding when it comes to monitoring ... > > > > > > > > From my point of view, solving the issue includes the following > > aspects: > > > > > > > > * providing an appropriate UI layer to visualize data flow / operator > > > > graphs along with metrics (system & user-defined) exported by > operators > > > > * message tracking by integrating K/V stores and index management > tools > > > > > > > > ...probably both worth more than a year development time ;-) > > > > > > > > But if there were some efforts to implement & integrate such a > feature, > > > > text me if dev support is require. > > > > > > > > Best regards, > > > > Christian > > > > > > > > > > > > 2015-09-19 12:42 GMT+02:00 Flavio Pompermaier >: > > > > > > > > > As a user that would be very helpful! > > > > > On 19 Sep 2015 12:34, "Slim Baltagi" wrote: > > > > > > > > > > > Hi Flink experts! > > > > > > > > > > > > I came across Apache Nifi https://nifi.apache.org > > > > > > https://www.youtube.com/watch?v=sQCgtCoZyFQ > > > > > > In the Nifi project, there is an open JIRA > > > > > > issue:https://issues.apache.org/jira/browse/NIFI-823 to > evaluate / > > > > > provide > > > > > > integration with Apache Flink. > > > > > > > > > > > > Is the integration of Flink with NiFi on the roadmap of the > Apache > > > > Flink > > > > > > project? > > > > > > > > > > > > Expected advantages for Flink when integrating with Nifi would
[VOTE] Release Apache Flink 0.10.0-milestone-1 (RC1)
Dear community,

Please vote on releasing the following candidate as Apache Flink version
0.10.0-milestone-1. This is a milestone release that contains several
features of Flink's next major release version 0.10.0.

This is the first RC for this release.

-
The commit to be voted on:
ffdd484ce5596149bb4ea234561c7fcf83bc3167

Branch:
release-0.10.0-milestone-1-rc1
(https://git-wip-us.apache.org/repos/asf?p=flink.git;a=shortlog;h=refs/heads/release-0.10.0-milestone-1-rc1)

The release artifacts to be voted on can be found at:
http://people.apache.org/~fhueske/flink-0.10.0-milestone-1-rc1/

Release artifacts are signed with the key with fingerprint 34911D5A:
http://www.apache.org/dist/flink/KEYS

The staging repository for this release can be found at:
https://repository.apache.org/content/repositories/orgapacheflink-1046
-

Please vote on releasing this package as Apache Flink 0.10.0-milestone-1.

The vote is open for the next 72 hours and passes if a majority of at least
three +1 PMC votes are cast.

The vote ends on Saturday (September 26, 2015).

[ ] +1 Release this package as Apache Flink 0.10.0-milestone-1
[ ] -1 Do not release this package, because...

– Fabian
[jira] [Created] (FLINK-2743) Add new RNG based on XORShift algorithm
Chengxiang Li created FLINK-2743:
------------------------------------

             Summary: Add new RNG based on XORShift algorithm
                 Key: FLINK-2743
                 URL: https://issues.apache.org/jira/browse/FLINK-2743
             Project: Flink
          Issue Type: Improvement
          Components: Core
            Reporter: Chengxiang Li
            Assignee: Chengxiang Li
            Priority: Minor

The [XORShift algorithm|https://en.wikipedia.org/wiki/Xorshift] is an optimized algorithm for random number generation; implementing an RNG based on it would help to improve the performance of operations where an RNG is heavily used.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
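For context, xorshift generators (due to George Marsaglia) derive each value from three shift-and-XOR steps, which makes them very cheap per call. Below is a minimal, illustrative sketch of a 64-bit variant — the class name, seed handling, and shift constants are assumptions for illustration, not the implementation proposed in this issue:

{code}
import java.util.Random;

// Illustrative sketch of a 64-bit xorshift generator (Marsaglia's
// xorshift64), NOT the implementation proposed in FLINK-2743.
public class XorShift64Random extends Random {

    private long state;

    public XorShift64Random(long seed) {
        // xorshift state must never be zero, otherwise it stays zero forever
        this.state = (seed == 0) ? 0x9E3779B97F4A7C15L : seed;
    }

    @Override
    protected int next(int bits) {
        long x = state;
        x ^= x << 13;   // the three shift/XOR steps that give the algorithm its name
        x ^= x >>> 7;
        x ^= x << 17;
        state = x;
        return (int) (x >>> (64 - bits));   // hand out the requested number of bits
    }
}
{code}

Because {{java.util.Random}} funnels all of its public methods through {{next(int)}}, overriding that single method yields a drop-in replacement; the speedup comes largely from skipping the JDK class's thread-safe, CAS-based seed update on every call.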
Re: [jira] [Created] (FLINK-2695) KafkaITCase.testConcurrentProducerConsumerTopology failed on Travis
The tests use a ZooKeeper mini cluster and multiple Kafka mini clusters. It appears that these are not very stable in our test setup. Let's see what we can do to improve reliability there.

1) As a first step, I would suggest reducing the number of concurrent tests to one for this project, as it will prevent multiple concurrent mini clusters from competing for compute resources.

2) The method "SimpleConsumerThread.getLastOffset()" should probably re-retrieve the leader, or we should allow the program more recovery retries...

Greetings,
Stephan

On Wed, Sep 23, 2015 at 4:04 AM, Li, Chengxiang wrote:

> Found more KafkaITCase failures at:
> https://travis-ci.org/apache/flink/jobs/81592146
>
> Failed tests:
> KafkaITCase.testConcurrentProducerConsumerTopology:50->KafkaConsumerTestBase.runSimpleConcurrentProducerConsumerTopology:334->KafkaTestBase.tryExecute:313
> Test failed: The program execution failed: Job execution failed.
>
> Tests in error:
> KafkaITCase.testCancelingEmptyTopic:57->KafkaConsumerTestBase.runCancelingOnEmptyInputTest:594 »
> KafkaITCase.testCancelingFullTopic:62->KafkaConsumerTestBase.runCancelingOnFullInputTest:529 »
> KafkaITCase.testMultipleSourcesOnePartition:89->KafkaConsumerTestBase.runMultipleSourcesOnePartitionExactlyOnceTest:450 » ProgramInvocation
> KafkaITCase.testOffsetInZookeeper:45->KafkaConsumerTestBase.runOffsetInZookeeperValidationTest:205->KafkaConsumerTestBase.writeSequence:938 » ProgramInvocation
> KafkaITCase.testOneToOneSources:79->KafkaConsumerTestBase.runOneToOneExactlyOnceTest:356 » ProgramInvocation
>
> It happens only on the test mode of JDK: oraclejdk8
> PROFILE="-Dhadoop.version=2.5.0 -Dmaven.javadoc.skip=true".
>
> Thanks
> Chengxiang
>
> -Original Message-
> From: Till Rohrmann (JIRA) [mailto:j...@apache.org]
> Sent: Thursday, September 17, 2015 11:02 PM
> To: dev@flink.apache.org
> Subject: [jira] [Created] (FLINK-2695) KafkaITCase.testConcurrentProducerConsumerTopology failed on Travis
>
> Till Rohrmann created FLINK-2695:
> 
>              Summary: KafkaITCase.testConcurrentProducerConsumerTopology failed on Travis
>                  Key: FLINK-2695
>                  URL: https://issues.apache.org/jira/browse/FLINK-2695
>              Project: Flink
>           Issue Type: Bug
>             Reporter: Till Rohrmann
>             Priority: Critical
>
> The {{KafkaITCase.testConcurrentProducerConsumerTopology}} failed on Travis with
>
> {code}
> ---
> T E S T S
> ---
> Running org.apache.flink.streaming.connectors.kafka.KafkaProducerITCase
> Running org.apache.flink.streaming.connectors.kafka.KafkaITCase
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.296 sec - in org.apache.flink.streaming.connectors.kafka.KafkaProducerITCase
> 09/16/2015 17:19:36 Job execution switched to status RUNNING.
> 09/16/2015 17:19:36 Source: Custom Source -> Sink: Unnamed(1/1) switched to SCHEDULED
> 09/16/2015 17:19:36 Source: Custom Source -> Sink: Unnamed(1/1) switched to DEPLOYING
> 09/16/2015 17:19:36 Source: Custom Source -> Sink: Unnamed(1/1) switched to RUNNING
> 09/16/2015 17:19:36 Source: Custom Source -> Sink: Unnamed(1/1) switched to FINISHED
> 09/16/2015 17:19:36 Job execution switched to status FINISHED.
> 09/16/2015 17:19:36 Job execution switched to status RUNNING.
> 09/16/2015 17:19:36 Source: Custom Source -> Map -> Flat Map(1/1) switched to SCHEDULED
> 09/16/2015 17:19:36 Source: Custom Source -> Map -> Flat Map(1/1) switched to DEPLOYING
> 09/16/2015 17:19:36 Source: Custom Source -> Map -> Flat Map(1/1) switched to RUNNING
> 09/16/2015 17:19:36 Source: Custom Source -> Map -> Flat Map(1/1) switched to FAILED
> java.lang.Exception: Could not forward element to next operator
> at org.apache.flink.streaming.connectors.kafka.internals.LegacyFetcher.run(LegacyFetcher.java:242)
> at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer.run(FlinkKafkaConsumer.java:382)
> at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:58)
> at org.apache.flink.streaming.runtime.tasks.SourceStreamTask.run(SourceStreamTask.java:58)
> at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:171)
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:581)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.RuntimeException: Could not forward element to next operator
> at org.apache.flink.streaming.runtime.tasks.OutputHandler$CopyingChainingOutput.collect(OutputHandler.java:332)
> at org.apache.flink.streaming.runtime.tasks.OutputHandler$CopyingChainingOutput.collect(OutputHandler.java:316)
> at org.apache.flink.streaming.runtime.io.CollectorW
[jira] [Created] (FLINK-2744) Reduce number of concurrent test forks to 1 for the Kafka connector project
Stephan Ewen created FLINK-2744:
-----------------------------------

             Summary: Reduce number of concurrent test forks to 1 for the Kafka connector project
                 Key: FLINK-2744
                 URL: https://issues.apache.org/jira/browse/FLINK-2744
             Project: Flink
          Issue Type: Improvement
          Components: Streaming Connectors
    Affects Versions: 0.10
            Reporter: Stephan Ewen
            Assignee: Stephan Ewen
             Fix For: 0.10

Since the Kafka connector tests are heavyweight with many Mini Clusters, their stability would benefit from not having multiple builds competing for resources.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
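In Maven terms, this boils down to a surefire setting in the connector module's pom.xml. A sketch, assuming the standard maven-surefire-plugin configuration options (the actual change to the Flink build may look different):

{code}
<!-- Sketch (not the actual Flink pom.xml change): run this module's tests
     in a single forked JVM at a time, so only one set of ZooKeeper/Kafka
     mini clusters is active per build. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <forkCount>1</forkCount>
  </configuration>
</plugin>
{code}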
[jira] [Created] (FLINK-2745) Create a new module for all Flink micro benchmark.
Chengxiang Li created FLINK-2745:
------------------------------------

             Summary: Create a new module for all Flink micro benchmark.
                 Key: FLINK-2745
                 URL: https://issues.apache.org/jira/browse/FLINK-2745
             Project: Flink
          Issue Type: New Feature
          Components: Tests
            Reporter: Chengxiang Li
            Assignee: Chengxiang Li
            Priority: Minor

Currently in Flink, there are many micro benchmarks spread across different modules. These benchmarks are run manually, triggered by JUnit tests, with no warmup and no multiple iterations. Moving all benchmarks to a single module and adopting [JMH|http://openjdk.java.net/projects/code-tools/jmh/] as the backing benchmark framework would help Flink developers to build benchmarks that are more accurate and easier to write.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
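For illustration, a JMH micro benchmark is just an annotated class; JMH itself takes care of JVM forking, warmup iterations, and measurement iterations. The benchmark below is a made-up example, not an actual Flink benchmark:

{code}
import java.util.Random;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

// Made-up example of a JMH micro benchmark. JMH runs warmup and
// measurement iterations for each @Benchmark method, which addresses the
// "no warmup, no multiple iterations" problem described above.
@State(Scope.Thread)
public class RngBenchmark {

    private final Random random = new Random(42);

    @Benchmark
    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.NANOSECONDS)
    public long measureNextLong() {
        return random.nextLong();
    }
}
{code}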
Re: Build get stuck at BarrierBufferMassiveRandomTest
@Stephan, have you pushed that fix for SocketClientSinkTest? Local builds still hang for me :S On 21 September 2015 at 22:55, Vasiliki Kalavri wrote: > Yes, you're right. BarrierBufferMassiveRandomTest has actually finished > :-) > Sorry for the confusion! I'll wait for your fix then, thanks! > > On 21 September 2015 at 22:51, Stephan Ewen wrote: > >> I am actually very happy that it is not the >> "BarrierBufferMassiveRandomTest", that would be hell to debug... >> >> On Mon, Sep 21, 2015 at 10:51 PM, Stephan Ewen wrote: >> >> > Ah, actually it is a different test. I think you got confused by the >> > sysout log, because multiple parallel tests print there (that makes it >> not >> > always obvious which one hangs). >> > >> > The test is the "SocketClientSinkTest.testSocketSinkRetryAccess()" test. >> > You can see that by looking in which test case the "main" thread is >> stuck, >> > >> > This test is very unstable, but, fortunately, I made a fix 1h ago and it >> > is being tested on Travis right now :-) >> > >> > Cheers, >> > Stephan >> > >> > >> > >> > On Mon, Sep 21, 2015 at 10:23 PM, Vasiliki Kalavri < >> > vasilikikala...@gmail.com> wrote: >> > >> >> Locally yes. >> >> >> >> Here's the stack trace: >> >> >> >> >> >> 2015-09-21 22:22:46 >> >> Full thread dump Java HotSpot(TM) 64-Bit Server VM (24.75-b04 mixed >> mode): >> >> >> >> "Attach Listener" daemon prio=5 tid=0x7ff9d104e800 nid=0x4013 >> waiting >> >> on condition [0x] >> >>java.lang.Thread.State: RUNNABLE >> >> >> >> "Service Thread" daemon prio=5 tid=0x7ff9d3807000 nid=0x4c03 >> runnable >> >> [0x] >> >>java.lang.Thread.State: RUNNABLE >> >> >> >> "C2 CompilerThread1" daemon prio=5 tid=0x7ff9d2001000 nid=0x4a03 >> >> waiting on condition [0x] >> >>java.lang.Thread.State: RUNNABLE >> >> >> >> "C2 CompilerThread0" daemon prio=5 tid=0x7ff9d201e000 nid=0x4803 >> >> waiting on condition [0x] >> >>java.lang.Thread.State: RUNNABLE >> >> >> >> "Signal Dispatcher" daemon prio=5 tid=0x7ff9d3012800 nid=0x451b >> >> runnable [0x] >> >>java.lang.Thread.State: RUNNABLE >> >> >> >> "Finalizer" daemon prio=5 tid=0x7ff9d4005800 nid=0x3303 in >> >> Object.wait() [0x00011430d000] >> >>java.lang.Thread.State: WAITING (on object monitor) >> >> at java.lang.Object.wait(Native Method) >> >> - waiting on <0x0007ef504858> (a java.lang.ref.ReferenceQueue$Lock) >> >> at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135) >> >> - locked <0x0007ef504858> (a java.lang.ref.ReferenceQueue$Lock) >> >> at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151) >> >> at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:209) >> >> >> >> "Reference Handler" daemon prio=5 tid=0x7ff9d480b000 nid=0x3103 in >> >> Object.wait() [0x00011420a000] >> >>java.lang.Thread.State: WAITING (on object monitor) >> >> at java.lang.Object.wait(Native Method) >> >> - waiting on <0x0007ef504470> (a java.lang.ref.Reference$Lock) >> >> at java.lang.Object.wait(Object.java:503) >> >> at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:133) >> >> - locked <0x0007ef504470> (a java.lang.ref.Reference$Lock) >> >> >> >> "main" prio=5 tid=0x7ff9d480 nid=0xd03 runnable >> >> [0x00010b764000] >> >>java.lang.Thread.State: RUNNABLE >> >> at java.net.PlainSocketImpl.socketAccept(Native Method) >> >> at >> >> >> java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398) >> >> at java.net.ServerSocket.implAccept(ServerSocket.java:530) >> >> at java.net.ServerSocket.accept(ServerSocket.java:498) >> >> at >> >> >> >> >> 
org.apache.flink.streaming.api.functions.sink.SocketClientSinkTest.testSocketSinkRetryAccess(SocketClientSinkTest.java:315) >> >> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) >> >> at >> >> >> >> >> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) >> >> at >> >> >> >> >> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) >> >> at java.lang.reflect.Method.invoke(Method.java:606) >> >> at >> >> >> >> >> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) >> >> at >> >> >> >> >> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) >> >> at >> >> >> >> >> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) >> >> at >> >> >> >> >> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) >> >> at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) >> >> at org.junit.rules.RunRules.evaluate(RunRules.java:20) >> >> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) >> >> at >> >> >> >> >> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) >> >> at >> >> >> >> >> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.j
[jira] [Created] (FLINK-2746) Add RetryOnException annotation for tests
Stephan Ewen created FLINK-2746:
-----------------------------------

             Summary: Add RetryOnException annotation for tests
                 Key: FLINK-2746
                 URL: https://issues.apache.org/jira/browse/FLINK-2746
             Project: Flink
          Issue Type: Improvement
          Components: Tests
    Affects Versions: 0.10
            Reporter: Stephan Ewen
            Assignee: Stephan Ewen
             Fix For: 0.10

Similar to RetryOnFailure, I want to add a RetryOnException annotation which only retries tests that fail with a specific exception...

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
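A usage sketch of the intended behavior — the annotation does not exist yet, so the attribute names and the RetryRule wiring below are assumptions modeled on the existing RetryOnFailure mechanism, not the final API:

{code}
import org.junit.Rule;
import org.junit.Test;

// Hypothetical usage sketch for the proposed annotation; attribute names
// and the rule wiring are assumptions, not the final API.
public class FlakyConnectionTest {

    @Rule
    public RetryRule retryRule = new RetryRule();

    @Test
    @RetryOnException(times = 3, exception = java.net.BindException.class)
    public void testWithEphemeralPort() {
        // Retried up to 3 times, but only when the failure is a
        // BindException; any other failure fails the build immediately.
    }
}
{code}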
Re: Build get stuck at BarrierBufferMassiveRandomTest
I have pushed it, yes. If you rebase onto the latest master, it should work. If you can verify that it still hangs, can you post a stack trace dump? Thanks, Stephan On Wed, Sep 23, 2015 at 12:37 PM, Vasiliki Kalavri < vasilikikala...@gmail.com> wrote: > @Stephan, have you pushed that fix for SocketClientSinkTest? Local builds > still hang for me :S > > On 21 September 2015 at 22:55, Vasiliki Kalavri > > wrote: > > > Yes, you're right. BarrierBufferMassiveRandomTest has actually finished > > :-) > > Sorry for the confusion! I'll wait for your fix then, thanks! > > > > On 21 September 2015 at 22:51, Stephan Ewen wrote: > > > >> I am actually very happy that it is not the > >> "BarrierBufferMassiveRandomTest", that would be hell to debug... > >> > >> On Mon, Sep 21, 2015 at 10:51 PM, Stephan Ewen > wrote: > >> > >> > Ah, actually it is a different test. I think you got confused by the > >> > sysout log, because multiple parallel tests print there (that makes it > >> not > >> > always obvious which one hangs). > >> > > >> > The test is the "SocketClientSinkTest.testSocketSinkRetryAccess()" > test. > >> > You can see that by looking in which test case the "main" thread is > >> stuck, > >> > > >> > This test is very unstable, but, fortunately, I made a fix 1h ago and > it > >> > is being tested on Travis right now :-) > >> > > >> > Cheers, > >> > Stephan > >> > > >> > > >> > > >> > On Mon, Sep 21, 2015 at 10:23 PM, Vasiliki Kalavri < > >> > vasilikikala...@gmail.com> wrote: > >> > > >> >> Locally yes. > >> >> > >> >> Here's the stack trace: > >> >> > >> >> > >> >> 2015-09-21 22:22:46 > >> >> Full thread dump Java HotSpot(TM) 64-Bit Server VM (24.75-b04 mixed > >> mode): > >> >> > >> >> "Attach Listener" daemon prio=5 tid=0x7ff9d104e800 nid=0x4013 > >> waiting > >> >> on condition [0x] > >> >>java.lang.Thread.State: RUNNABLE > >> >> > >> >> "Service Thread" daemon prio=5 tid=0x7ff9d3807000 nid=0x4c03 > >> runnable > >> >> [0x] > >> >>java.lang.Thread.State: RUNNABLE > >> >> > >> >> "C2 CompilerThread1" daemon prio=5 tid=0x7ff9d2001000 nid=0x4a03 > >> >> waiting on condition [0x] > >> >>java.lang.Thread.State: RUNNABLE > >> >> > >> >> "C2 CompilerThread0" daemon prio=5 tid=0x7ff9d201e000 nid=0x4803 > >> >> waiting on condition [0x] > >> >>java.lang.Thread.State: RUNNABLE > >> >> > >> >> "Signal Dispatcher" daemon prio=5 tid=0x7ff9d3012800 nid=0x451b > >> >> runnable [0x] > >> >>java.lang.Thread.State: RUNNABLE > >> >> > >> >> "Finalizer" daemon prio=5 tid=0x7ff9d4005800 nid=0x3303 in > >> >> Object.wait() [0x00011430d000] > >> >>java.lang.Thread.State: WAITING (on object monitor) > >> >> at java.lang.Object.wait(Native Method) > >> >> - waiting on <0x0007ef504858> (a > java.lang.ref.ReferenceQueue$Lock) > >> >> at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135) > >> >> - locked <0x0007ef504858> (a java.lang.ref.ReferenceQueue$Lock) > >> >> at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151) > >> >> at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:209) > >> >> > >> >> "Reference Handler" daemon prio=5 tid=0x7ff9d480b000 nid=0x3103 > in > >> >> Object.wait() [0x00011420a000] > >> >>java.lang.Thread.State: WAITING (on object monitor) > >> >> at java.lang.Object.wait(Native Method) > >> >> - waiting on <0x0007ef504470> (a java.lang.ref.Reference$Lock) > >> >> at java.lang.Object.wait(Object.java:503) > >> >> at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:133) > >> >> - locked <0x0007ef504470> (a java.lang.ref.Reference$Lock) > >> >> > >> >> "main" 
prio=5 tid=0x7ff9d480 nid=0xd03 runnable > >> >> [0x00010b764000] > >> >>java.lang.Thread.State: RUNNABLE > >> >> at java.net.PlainSocketImpl.socketAccept(Native Method) > >> >> at > >> >> > >> > java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398) > >> >> at java.net.ServerSocket.implAccept(ServerSocket.java:530) > >> >> at java.net.ServerSocket.accept(ServerSocket.java:498) > >> >> at > >> >> > >> >> > >> > org.apache.flink.streaming.api.functions.sink.SocketClientSinkTest.testSocketSinkRetryAccess(SocketClientSinkTest.java:315) > >> >> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > >> >> at > >> >> > >> >> > >> > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > >> >> at > >> >> > >> >> > >> > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > >> >> at java.lang.reflect.Method.invoke(Method.java:606) > >> >> at > >> >> > >> >> > >> > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > >> >> at > >> >> > >> >> > >> > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > >> >> at > >> >> > >> >> > >> > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMe
Re: Build get stuck at BarrierBufferMassiveRandomTest
Hi, It's the latest master I'm trying to build, but it still hangs. Here's the trace: - 2015-09-23 13:48:41 Full thread dump Java HotSpot(TM) 64-Bit Server VM (24.75-b04 mixed mode): "Attach Listener" daemon prio=5 tid=0x7faeb984a000 nid=0x3707 waiting on condition [0x] java.lang.Thread.State: RUNNABLE "Service Thread" daemon prio=5 tid=0x7faeb9808000 nid=0x4d03 runnable [0x] java.lang.Thread.State: RUNNABLE "C2 CompilerThread1" daemon prio=5 tid=0x7faebb00e800 nid=0x4b03 waiting on condition [0x] java.lang.Thread.State: RUNNABLE "C2 CompilerThread0" daemon prio=5 tid=0x7faebb840800 nid=0x4903 waiting on condition [0x] java.lang.Thread.State: RUNNABLE "Signal Dispatcher" daemon prio=5 tid=0x7faeba806800 nid=0x3d0f runnable [0x] java.lang.Thread.State: RUNNABLE "Finalizer" daemon prio=5 tid=0x7faebb836800 nid=0x3303 in Object.wait() [0x00014eff8000] java.lang.Thread.State: WAITING (on object monitor) at java.lang.Object.wait(Native Method) - waiting on <0x000138a84858> (a java.lang.ref.ReferenceQueue$Lock) at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135) - locked <0x000138a84858> (a java.lang.ref.ReferenceQueue$Lock) at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151) at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:209) "Reference Handler" daemon prio=5 tid=0x7faebb004000 nid=0x3103 in Object.wait() [0x00014eef5000] java.lang.Thread.State: WAITING (on object monitor) at java.lang.Object.wait(Native Method) - waiting on <0x000138a84470> (a java.lang.ref.Reference$Lock) at java.lang.Object.wait(Object.java:503) at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:133) - locked <0x000138a84470> (a java.lang.ref.Reference$Lock) "main" prio=5 tid=0x7faeb9009800 nid=0xd03 runnable [0x00010f1c] java.lang.Thread.State: RUNNABLE at java.net.PlainSocketImpl.socketAccept(Native Method) at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398) at java.net.ServerSocket.implAccept(ServerSocket.java:530) at java.net.ServerSocket.accept(ServerSocket.java:498) at org.apache.flink.streaming.api.functions.sink.SocketClientSinkTest.testSocketSinkRetryAccess(SocketClientSinkTest.java:315) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at org.junit.rules.RunRules.evaluate(RunRules.java:20) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at org.junit.runners.ParentRunner.run(ParentRunner.java:309) 
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283) at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173) at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153) at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128) at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103) "VM Thread" prio=5 tid=0x7faebb82e800 nid=0x2f03 runnable "GC task thread#0 (ParallelGC)" prio=5 tid=0x7faeb9806800 nid=0x1e03 runnable "GC task thread#1 (ParallelGC)" prio=5 tid=0x7faebb00 nid=0x2103 runnable "GC task thread#2 (ParallelGC)" prio=5 tid=0x7faebb001000 nid=0x2303 runnable "GC task thread#3 (ParallelGC)" prio=5 tid=0x7faebb001800 nid=0x2503 runnable "GC task thread#4 (ParallelGC)" prio=5 tid=0x7faebb002000 nid=0x2703 runnable "GC task thread#5 (ParallelGC)" prio=5 tid=0x7faebb002800 nid=0x2903 runnable "GC task thread#6 (ParallelGC)" prio=5 tid=0x7faebb
[jira] [Created] (FLINK-2747) TypeExtractor does not correctly analyze Scala Immutables (AnyVal)
Aljoscha Krettek created FLINK-2747:
---------------------------------------

             Summary: TypeExtractor does not correctly analyze Scala Immutables (AnyVal)
                 Key: FLINK-2747
                 URL: https://issues.apache.org/jira/browse/FLINK-2747
             Project: Flink
          Issue Type: Bug
          Components: Type Serialization System
            Reporter: Aljoscha Krettek

This example program only works correctly if Kryo is force-enabled.

{code}
object Test {
  class Id(val underlying: Int) extends AnyVal

  class X(var id: Id) {
    def this() { this(new Id(0)) }
  }

  class MySource extends SourceFunction[X] {
    def run(ctx: SourceFunction.SourceContext[X]) {
      ctx.collect(new X(new Id(1)))
    }
    def cancel() {}
  }

  def main(args: Array[String]) {
    val env = StreamExecutionContext.getExecutionContext
    env.addSource(new MySource).print
    env.execute("Test")
  }
}
{code}

The program fails with this:

{code}
Caused by: java.lang.RuntimeException: Cannot instantiate class.
	at org.apache.flink.api.java.typeutils.runtime.PojoSerializer.createInstance(PojoSerializer.java:227)
	at org.apache.flink.api.java.typeutils.runtime.PojoSerializer.deserialize(PojoSerializer.java:421)
	at org.apache.flink.streaming.runtime.streamrecord.StreamRecordSerializer.deserialize(StreamRecordSerializer.java:110)
	at org.apache.flink.streaming.runtime.streamrecord.StreamRecordSerializer.deserialize(StreamRecordSerializer.java:41)
	at org.apache.flink.runtime.plugable.NonReusingDeserializationDelegate.read(NonReusingDeserializationDelegate.java:55)
	at org.apache.flink.runtime.io.network.api.serialization.SpillingAdaptiveSpanningRecordDeserializer.getNextRecord(SpillingAdaptiveSpanningRecordDeserializer.java:125)
	at org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:136)
	at org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:55)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:198)
	at org.apache.flink.runtime.taskmanager.Task.run(Task.java:579)
	at java.lang.Thread.run(Thread.java:745)
{code}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
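Until this is fixed, the workaround stated in the first sentence of the report can be applied explicitly. A sketch in the Java API, assuming the enableForceKryo() switch that ExecutionConfig exposes (the Scala program above would make the equivalent call on env.getConfig):

{code}
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ForceKryoWorkaround {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        // Sketch of the workaround from the report: force-enable Kryo so
        // the failing PojoSerializer is bypassed for the AnyVal-wrapping type.
        env.getConfig().enableForceKryo();
        // ... add sources/sinks and call env.execute() as in the example above
    }
}
{code}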
Extending and improving our "How to contribute" page
Hi everybody,

I guess we all have noticed that the Flink community is quickly growing and more and more contributions are coming in. Recently, a few contributions proposed new features without being discussed on the mailing list. Some of these contributions were not accepted in the end. In other cases, pull requests had to be heavily reworked because the approach taken was not the best one. These are situations which should be avoided, because both the contributor and the person who reviewed the contribution invested a lot of time for nothing.

I had a look at our “How to contribute” and “Coding guideline” pages and think we can improve them. I see basically two issues:

1. The documents do not explain how to propose and discuss new features and improvements.
2. The documents are quite technical and the structure could be improved, IMO.

I would like to improve these pages and propose the following additions:

1. Request contributors and committers to start discussions on the mailing list for new features. This discussion should help to figure out whether such a new feature is a good fit for Flink and give first pointers for a design to implement it.
2. Require contributors and committers to write design documents for all new features and major improvements. These documents should be attached to a JIRA issue and follow a template which needs to be defined.
3. Extend the “Coding Style Guides” and add patterns that are commonly remarked in pull requests.
4. Restructure the current pages into three pages: a general guide for contributions, and two guides for how to contribute to code and website with all technical issues (repository, IDE setup, build system, etc.).

Looking forward to your comments,
Fabian
Re: [VOTE] Release Apache Flink 0.10.0-milestone-1 (RC1)
Hi, I just encountered a problem in the WebClient. If I upload multiple jars, only the first jar that is uploaded is shown. Can anybody verify my observation? -Matthias On 09/23/2015 10:54 AM, Fabian Hueske wrote: > Dear community, > > Please vote on releasing the following candidate as Apache Flink version > 0.10.0-milestone-1. This is a milestone release that contains several > features of Flink's next major release version 0.10.0. > > This is the first RC for this release. > > - > The commit to be voted on: > ffdd484ce5596149bb4ea234561c7fcf83bc3167 > > Branch: > release-0.10.0-milestone-1-rc1 ( > https://git-wip-us.apache.org/repos/asf?p=flink.git;a=shortlog;h=refs/heads/release-0.10.0-milestone-1-rc1 > ) > > The release artifacts to be voted on can be found at: > http://people.apache.org/~fhueske/flink-0.10.0-milestone-1-rc1/ > > Release artifacts are signed with the key with fingerprint 34911D5A: > http://www.apache.org/dist/flink/KEYS > > The staging repository for this release can be found at: > https://repository.apache.org/content/repositories/orgapacheflink-1046 > - > > Please vote on releasing this package as Apache Flink 0.10.0-milestone-1. > > The vote is open for the next 72 hours and passes if a majority of at least > three +1 PMC votes are cast. > > The vote ends on Saturday (September 26, 2015). > > [ ] +1 Release this package as Apache Flink 0.10.0-milestone-1 > [ ] -1 Do not release this package, because... > > – Fabian > signature.asc Description: OpenPGP digital signature
Re: Extending and improving our "How to contribute" page
Big +1. For (1), a discussion in JIRA would also be an option IMO For (2), let us come up with few examples on what constitutes a feature that needs a design doc, and what should be in the doc (IMO architecture/general approach, components touched, interfaces changed) On Wed, Sep 23, 2015 at 2:24 PM, Fabian Hueske wrote: > Hi everybody, > > I guess we all have noticed that the Flink community is quickly growing and > more and more contributions are coming in. Recently, a few contributions > proposed new features without being discussed on the mailing list. Some of > these contributions were not accepted in the end. In other cases, pull > requests had to be heavily reworked because the approach taken was not the > best one. These are situations which should be avoided because both the > contributor as well as the person who reviewed the contribution invested a > lot of time for nothing. > > I had a look at our “How to contribute” and “Coding guideline” pages and > think, we can improve them. I see basically two issues: > > 1. The documents do not explain how to propose and discuss new features > and improvements. > 2. The documents are quite technical and the structure could be improved, > IMO. > > I would like to improve these pages and propose the following additions: > > 1. Request contributors and committers to start discussions on the > mailing list for new features. This discussion should help to figure out > whether such a new feature is a good fit for Flink and give first pointers > for a design to implement it. > 2. Require contributors and committers to write design documents for all > new features and major improvements. These documents should be attached to > a JIRA issue and follow a template which needs to be defined. > 3. Extend the “Coding Style Guides” and add patterns that are commonly > remarked in pull requests. > 4. Restructure the current pages into three pages: a general guide for > contributions and two guides for how to contribute to code and website with > all technical issues (repository, IDE setup, build system, etc.) > > Looking forward for your comments, > Fabian >
Re: Build get stuck at BarrierBufferMassiveRandomTest
Same here. > On 23 Sep 2015, at 13:50, Vasiliki Kalavri wrote: > > Hi, > > It's the latest master I'm trying to build, but it still hangs. > Here's the trace: > > - > 2015-09-23 13:48:41 > Full thread dump Java HotSpot(TM) 64-Bit Server VM (24.75-b04 mixed mode): > > "Attach Listener" daemon prio=5 tid=0x7faeb984a000 nid=0x3707 waiting > on condition [0x] > java.lang.Thread.State: RUNNABLE > > "Service Thread" daemon prio=5 tid=0x7faeb9808000 nid=0x4d03 runnable > [0x] > java.lang.Thread.State: RUNNABLE > > "C2 CompilerThread1" daemon prio=5 tid=0x7faebb00e800 nid=0x4b03 > waiting on condition [0x] > java.lang.Thread.State: RUNNABLE > > "C2 CompilerThread0" daemon prio=5 tid=0x7faebb840800 nid=0x4903 > waiting on condition [0x] > java.lang.Thread.State: RUNNABLE > > "Signal Dispatcher" daemon prio=5 tid=0x7faeba806800 nid=0x3d0f > runnable [0x] > java.lang.Thread.State: RUNNABLE > > "Finalizer" daemon prio=5 tid=0x7faebb836800 nid=0x3303 in > Object.wait() [0x00014eff8000] > java.lang.Thread.State: WAITING (on object monitor) > at java.lang.Object.wait(Native Method) > - waiting on <0x000138a84858> (a java.lang.ref.ReferenceQueue$Lock) > at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135) > - locked <0x000138a84858> (a java.lang.ref.ReferenceQueue$Lock) > at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151) > at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:209) > > "Reference Handler" daemon prio=5 tid=0x7faebb004000 nid=0x3103 in > Object.wait() [0x00014eef5000] > java.lang.Thread.State: WAITING (on object monitor) > at java.lang.Object.wait(Native Method) > - waiting on <0x000138a84470> (a java.lang.ref.Reference$Lock) > at java.lang.Object.wait(Object.java:503) > at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:133) > - locked <0x000138a84470> (a java.lang.ref.Reference$Lock) > > "main" prio=5 tid=0x7faeb9009800 nid=0xd03 runnable [0x00010f1c] > java.lang.Thread.State: RUNNABLE > at java.net.PlainSocketImpl.socketAccept(Native Method) > at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398) > at java.net.ServerSocket.implAccept(ServerSocket.java:530) > at java.net.ServerSocket.accept(ServerSocket.java:498) > at > org.apache.flink.streaming.api.functions.sink.SocketClientSinkTest.testSocketSinkRetryAccess(SocketClientSinkTest.java:315) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > at org.junit.rules.RunRules.evaluate(RunRules.java:20) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) > at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) > at org.junit.runners.ParentRunner.run(ParentRunner.java:309) > at > org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153) > at > org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128) > at > org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203) > at > org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155) > at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103) > > "VM Thread" prio=5 tid=0x7faebb82e800 nid=0x2f03 runnable > > "GC task thread#0 (ParallelGC)" prio=5 tid=0x7faeb9806800 nid=0x1e03 > runnable > > "GC task thread#1 (ParallelGC)" prio=5 tid=0x7faebb00 nid=0x2103 > runnable > > "GC task thread#2 (ParallelGC)" prio=5 tid=0x7faebb001000 nid=0x2303 > runnable > > "GC task thread#3 (Paralle
Re: Build get stuck at BarrierBufferMassiveRandomTest
Okay, will look into this is a bit today... On Wed, Sep 23, 2015 at 4:04 PM, Ufuk Celebi wrote: > Same here. > > > On 23 Sep 2015, at 13:50, Vasiliki Kalavri > wrote: > > > > Hi, > > > > It's the latest master I'm trying to build, but it still hangs. > > Here's the trace: > > > > - > > 2015-09-23 13:48:41 > > Full thread dump Java HotSpot(TM) 64-Bit Server VM (24.75-b04 mixed > mode): > > > > "Attach Listener" daemon prio=5 tid=0x7faeb984a000 nid=0x3707 waiting > > on condition [0x] > > java.lang.Thread.State: RUNNABLE > > > > "Service Thread" daemon prio=5 tid=0x7faeb9808000 nid=0x4d03 runnable > > [0x] > > java.lang.Thread.State: RUNNABLE > > > > "C2 CompilerThread1" daemon prio=5 tid=0x7faebb00e800 nid=0x4b03 > > waiting on condition [0x] > > java.lang.Thread.State: RUNNABLE > > > > "C2 CompilerThread0" daemon prio=5 tid=0x7faebb840800 nid=0x4903 > > waiting on condition [0x] > > java.lang.Thread.State: RUNNABLE > > > > "Signal Dispatcher" daemon prio=5 tid=0x7faeba806800 nid=0x3d0f > > runnable [0x] > > java.lang.Thread.State: RUNNABLE > > > > "Finalizer" daemon prio=5 tid=0x7faebb836800 nid=0x3303 in > > Object.wait() [0x00014eff8000] > > java.lang.Thread.State: WAITING (on object monitor) > > at java.lang.Object.wait(Native Method) > > - waiting on <0x000138a84858> (a java.lang.ref.ReferenceQueue$Lock) > > at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135) > > - locked <0x000138a84858> (a java.lang.ref.ReferenceQueue$Lock) > > at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151) > > at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:209) > > > > "Reference Handler" daemon prio=5 tid=0x7faebb004000 nid=0x3103 in > > Object.wait() [0x00014eef5000] > > java.lang.Thread.State: WAITING (on object monitor) > > at java.lang.Object.wait(Native Method) > > - waiting on <0x000138a84470> (a java.lang.ref.Reference$Lock) > > at java.lang.Object.wait(Object.java:503) > > at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:133) > > - locked <0x000138a84470> (a java.lang.ref.Reference$Lock) > > > > "main" prio=5 tid=0x7faeb9009800 nid=0xd03 runnable > [0x00010f1c] > > java.lang.Thread.State: RUNNABLE > > at java.net.PlainSocketImpl.socketAccept(Native Method) > > at > java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398) > > at java.net.ServerSocket.implAccept(ServerSocket.java:530) > > at java.net.ServerSocket.accept(ServerSocket.java:498) > > at > > > org.apache.flink.streaming.api.functions.sink.SocketClientSinkTest.testSocketSinkRetryAccess(SocketClientSinkTest.java:315) > > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > > at > > > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > > at > > > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > > at java.lang.reflect.Method.invoke(Method.java:606) > > at > > > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > > at > > > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > > at > > > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > > at > > > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > > at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > > at org.junit.rules.RunRules.evaluate(RunRules.java:20) > > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) > > at > > > 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) > > at > > > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) > > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) > > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) > > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) > > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) > > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) > > at org.junit.runners.ParentRunner.run(ParentRunner.java:309) > > at > > > org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283) > > at > > > org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173) > > at > > > org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153) > > at > > > org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128) > > at > > > org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203) > > at > > > org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155) > > at > org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103) > > > > "VM Thread" prio=5 tid=0x7
Re: [VOTE] Release Apache Flink 0.10.0-milestone-1 (RC1)
I had a closer look, found the bug, and have a fix ready.

https://github.com/mjsax/flink/tree/hotfixWebClient

The bug was introduced by FLINK-2175. It seems something went wrong when I tested this back then. :/

-Matthias

On 09/23/2015 02:27 PM, Matthias J. Sax wrote:
> Hi,
>
> I just encountered a problem in the WebClient. If I upload multiple
> jars, only the first jar that is uploaded is shown. Can anybody verify
> my observation?
>
> -Matthias
>
> On 09/23/2015 10:54 AM, Fabian Hueske wrote:
>> Dear community,
>>
>> Please vote on releasing the following candidate as Apache Flink version
>> 0.10.0-milestone-1. This is a milestone release that contains several
>> features of Flink's next major release version 0.10.0.
>>
>> This is the first RC for this release.
>>
>> -
>> The commit to be voted on:
>> ffdd484ce5596149bb4ea234561c7fcf83bc3167
>>
>> Branch:
>> release-0.10.0-milestone-1-rc1
>> (https://git-wip-us.apache.org/repos/asf?p=flink.git;a=shortlog;h=refs/heads/release-0.10.0-milestone-1-rc1)
>>
>> The release artifacts to be voted on can be found at:
>> http://people.apache.org/~fhueske/flink-0.10.0-milestone-1-rc1/
>>
>> Release artifacts are signed with the key with fingerprint 34911D5A:
>> http://www.apache.org/dist/flink/KEYS
>>
>> The staging repository for this release can be found at:
>> https://repository.apache.org/content/repositories/orgapacheflink-1046
>> -
>>
>> Please vote on releasing this package as Apache Flink 0.10.0-milestone-1.
>>
>> The vote is open for the next 72 hours and passes if a majority of at least
>> three +1 PMC votes are cast.
>>
>> The vote ends on Saturday (September 26, 2015).
>>
>> [ ] +1 Release this package as Apache Flink 0.10.0-milestone-1
>> [ ] -1 Do not release this package, because...
>>
>> – Fabian
>>

signature.asc
Description: OpenPGP digital signature
Re: Build get stuck at BarrierBufferMassiveRandomTest
It hangs for me too at the same test when doing "clean verify" > On 23 Sep 2015, at 16:09, Stephan Ewen wrote: > > Okay, will look into this is a bit today... > > On Wed, Sep 23, 2015 at 4:04 PM, Ufuk Celebi wrote: > >> Same here. >> >>> On 23 Sep 2015, at 13:50, Vasiliki Kalavri >> wrote: >>> >>> Hi, >>> >>> It's the latest master I'm trying to build, but it still hangs. >>> Here's the trace: >>> >>> - >>> 2015-09-23 13:48:41 >>> Full thread dump Java HotSpot(TM) 64-Bit Server VM (24.75-b04 mixed >> mode): >>> >>> "Attach Listener" daemon prio=5 tid=0x7faeb984a000 nid=0x3707 waiting >>> on condition [0x] >>> java.lang.Thread.State: RUNNABLE >>> >>> "Service Thread" daemon prio=5 tid=0x7faeb9808000 nid=0x4d03 runnable >>> [0x] >>> java.lang.Thread.State: RUNNABLE >>> >>> "C2 CompilerThread1" daemon prio=5 tid=0x7faebb00e800 nid=0x4b03 >>> waiting on condition [0x] >>> java.lang.Thread.State: RUNNABLE >>> >>> "C2 CompilerThread0" daemon prio=5 tid=0x7faebb840800 nid=0x4903 >>> waiting on condition [0x] >>> java.lang.Thread.State: RUNNABLE >>> >>> "Signal Dispatcher" daemon prio=5 tid=0x7faeba806800 nid=0x3d0f >>> runnable [0x] >>> java.lang.Thread.State: RUNNABLE >>> >>> "Finalizer" daemon prio=5 tid=0x7faebb836800 nid=0x3303 in >>> Object.wait() [0x00014eff8000] >>> java.lang.Thread.State: WAITING (on object monitor) >>> at java.lang.Object.wait(Native Method) >>> - waiting on <0x000138a84858> (a java.lang.ref.ReferenceQueue$Lock) >>> at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135) >>> - locked <0x000138a84858> (a java.lang.ref.ReferenceQueue$Lock) >>> at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151) >>> at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:209) >>> >>> "Reference Handler" daemon prio=5 tid=0x7faebb004000 nid=0x3103 in >>> Object.wait() [0x00014eef5000] >>> java.lang.Thread.State: WAITING (on object monitor) >>> at java.lang.Object.wait(Native Method) >>> - waiting on <0x000138a84470> (a java.lang.ref.Reference$Lock) >>> at java.lang.Object.wait(Object.java:503) >>> at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:133) >>> - locked <0x000138a84470> (a java.lang.ref.Reference$Lock) >>> >>> "main" prio=5 tid=0x7faeb9009800 nid=0xd03 runnable >> [0x00010f1c] >>> java.lang.Thread.State: RUNNABLE >>> at java.net.PlainSocketImpl.socketAccept(Native Method) >>> at >> java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398) >>> at java.net.ServerSocket.implAccept(ServerSocket.java:530) >>> at java.net.ServerSocket.accept(ServerSocket.java:498) >>> at >>> >> org.apache.flink.streaming.api.functions.sink.SocketClientSinkTest.testSocketSinkRetryAccess(SocketClientSinkTest.java:315) >>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) >>> at >>> >> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) >>> at >>> >> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) >>> at java.lang.reflect.Method.invoke(Method.java:606) >>> at >>> >> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) >>> at >>> >> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) >>> at >>> >> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) >>> at >>> >> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) >>> at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) >>> at org.junit.rules.RunRules.evaluate(RunRules.java:20) >>> at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) >>> at >>> >> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) >>> at >>> >> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) >>> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) >>> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) >>> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) >>> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) >>> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) >>> at org.junit.runners.ParentRunner.run(ParentRunner.java:309) >>> at >>> >> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283) >>> at >>> >> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173) >>> at >>> >> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153) >>> at >>> >> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128) >>> at >>> >> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203) >>> at >>> >> org.apache.maven.surefire.booter.ForkedBooter
Re: [VOTE] Release Apache Flink 0.10.0-milestone-1 (RC1)
Thank you for immediately providing a fix. I would suggest to do some more testing to reduce the number of RCs we need to create On Wed, Sep 23, 2015 at 4:26 PM, Matthias J. Sax wrote: > I had a closer look, found the bug, and have a fix ready. > > https://github.com/mjsax/flink/tree/hotfixWebClient > > The bug was introduced by FLINK-2175. It seems, something went wrong > when I tested this back than. :/ > > -Matthias > > > On 09/23/2015 02:27 PM, Matthias J. Sax wrote: > > Hi, > > > > I just encountered a problem in the WebClient. If I upload multiple > > jars, only the first jar that is uploaded is shown. Can anybody verify > > my observation? > > > > -Matthias > > > > On 09/23/2015 10:54 AM, Fabian Hueske wrote: > >> Dear community, > >> > >> Please vote on releasing the following candidate as Apache Flink version > >> 0.10.0-milestone-1. This is a milestone release that contains several > >> features of Flink's next major release version 0.10.0. > >> > >> This is the first RC for this release. > >> > >> - > >> The commit to be voted on: > >> ffdd484ce5596149bb4ea234561c7fcf83bc3167 > >> > >> Branch: > >> release-0.10.0-milestone-1-rc1 ( > >> > https://git-wip-us.apache.org/repos/asf?p=flink.git;a=shortlog;h=refs/heads/release-0.10.0-milestone-1-rc1 > >> ) > >> > >> The release artifacts to be voted on can be found at: > >> http://people.apache.org/~fhueske/flink-0.10.0-milestone-1-rc1/ > >> > >> Release artifacts are signed with the key with fingerprint 34911D5A: > >> http://www.apache.org/dist/flink/KEYS > >> > >> The staging repository for this release can be found at: > >> https://repository.apache.org/content/repositories/orgapacheflink-1046 > >> - > >> > >> Please vote on releasing this package as Apache Flink > 0.10.0-milestone-1. > >> > >> The vote is open for the next 72 hours and passes if a majority of at > least > >> three +1 PMC votes are cast. > >> > >> The vote ends on Saturday (September 26, 2015). > >> > >> [ ] +1 Release this package as Apache Flink 0.10.0-milestone-1 > >> [ ] -1 Do not release this package, because... > >> > >> – Fabian > >> > > > >
[jira] [Created] (FLINK-2748) Accumulator fetch failure leads to duplicate job result response
Ufuk Celebi created FLINK-2748: -- Summary: Accumulator fetch failure leads to duplicate job result response Key: FLINK-2748 URL: https://issues.apache.org/jira/browse/FLINK-2748 Project: Flink Issue Type: Bug Components: JobManager Affects Versions: master Reporter: Ufuk Celebi

On a {{JobStatusChanged}} message, if fetching the accumulator results fails, the client receives both a {{JobResultFailure}} and a {{JobResultSuccess}} response:

{code}
newJobStatus match {
  case JobStatus.FINISHED =>
    val accumulatorResults: java.util.Map[String, SerializedValue[AnyRef]] =
      try {
        executionGraph.getAccumulatorsSerialized()
      } catch {
        case e: Exception =>
          log.error(s"Cannot fetch final accumulators for job $jobID", e)
          val exception = new JobExecutionException(jobID,
            "Failed to retrieve accumulator results.", e)
          jobInfo.client ! decorateMessage(JobResultFailure(
            new SerializedThrowable(exception)))
          Collections.emptyMap()                                   <<< HERE
      }
    val result = new SerializedJobExecutionResult(
      jobID, jobInfo.duration, accumulatorResults)
    jobInfo.client ! decorateMessage(JobResultSuccess(result))     <<< HERE
{code}

Furthermore, the indentation is off.

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
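A minimal sketch of one possible fix (hypothetical, not the actual patch): restructure the block so that exactly one result message is sent, instead of falling through to {{JobResultSuccess}} after a {{JobResultFailure}} has already gone out:

{code}
// Hypothetical restructuring of the snippet above, not the committed fix:
// sending the success message inside the try makes the catch branch the
// only failure response, so the client receives exactly one message.
newJobStatus match {
  case JobStatus.FINISHED =>
    try {
      val accumulatorResults = executionGraph.getAccumulatorsSerialized()
      val result = new SerializedJobExecutionResult(
        jobID, jobInfo.duration, accumulatorResults)
      jobInfo.client ! decorateMessage(JobResultSuccess(result))
    } catch {
      case e: Exception =>
        log.error(s"Cannot fetch final accumulators for job $jobID", e)
        val exception = new JobExecutionException(jobID,
          "Failed to retrieve accumulator results.", e)
        jobInfo.client ! decorateMessage(JobResultFailure(
          new SerializedThrowable(exception)))
    }
}
{code}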
Re: Extending and improving our "How to contribute" page
Hi, I agree with you Fabian. Clarifying these issues in the "How to Contribute" guide will save lots of time both to reviewers and contributors. It is a really disappointing situation when someone spends time implementing something and their PR ends up being rejected because either the feature was not needed or the implementation details were never agreed on. That said, I think we should also make sure that we don't raise the bar too high for simple contributions. Regarding (1) and (2), I think we should clarify what kind of additions/changes require this process to be followed. e.g. do we need to discuss additions for which JIRAs already exist? Ideas described in the roadmaps? Adding a new algorithm to Gelly/Flink-ML? Regarding (3), maybe we can all suggest some examples/patterns that we've seen when reviewing PRs and then choose the most common (or all). (4) sounds good to me. Cheers, Vasia. On 23 September 2015 at 15:08, Kostas Tzoumas wrote: > Big +1. > > For (1), a discussion in JIRA would also be an option IMO > > For (2), let us come up with few examples on what constitutes a feature > that needs a design doc, and what should be in the doc (IMO > architecture/general approach, components touched, interfaces changed) > > > > On Wed, Sep 23, 2015 at 2:24 PM, Fabian Hueske wrote: > > > Hi everybody, > > > > I guess we all have noticed that the Flink community is quickly growing > and > > more and more contributions are coming in. Recently, a few contributions > > proposed new features without being discussed on the mailing list. Some > of > > these contributions were not accepted in the end. In other cases, pull > > requests had to be heavily reworked because the approach taken was not > the > > best one. These are situations which should be avoided because both the > > contributor as well as the person who reviewed the contribution invested > a > > lot of time for nothing. > > > > I had a look at our “How to contribute” and “Coding guideline” pages and > > think, we can improve them. I see basically two issues: > > > > 1. The documents do not explain how to propose and discuss new features > > and improvements. > > 2. The documents are quite technical and the structure could be > improved, > > IMO. > > > > I would like to improve these pages and propose the following additions: > > > > 1. Request contributors and committers to start discussions on the > > mailing list for new features. This discussion should help to figure out > > whether such a new feature is a good fit for Flink and give first > pointers > > for a design to implement it. > > 2. Require contributors and committers to write design documents for > all > > new features and major improvements. These documents should be attached > to > > a JIRA issue and follow a template which needs to be defined. > > 3. Extend the “Coding Style Guides” and add patterns that are commonly > > remarked in pull requests. > > 4. Restructure the current pages into three pages: a general guide for > > contributions and two guides for how to contribute to code and website > with > > all technical issues (repository, IDE setup, build system, etc.) > > > > Looking forward for your comments, > > Fabian > > >
[jira] [Created] (FLINK-2749) CliFrontendAddressConfigurationTest fails on GCE
Fabian Hueske created FLINK-2749: Summary: CliFrontendAddressConfigurationTest fails on GCE Key: FLINK-2749 URL: https://issues.apache.org/jira/browse/FLINK-2749 Project: Flink Issue Type: Bug Components: Tests Affects Versions: 0.10 Reporter: Fabian Hueske

The {{CliFrontendAddressConfigurationTest}} fails when building the 0.10.0-milestone-1-rc1 with {{mvn clean install -Dhadoop.version=2.6.0}} on a Google Compute Engine machine with the following errors:

{code}
CliFrontendAddressConfigurationTest.testInvalidYarnConfig:159 expected:<1[92.168.1.33]> but was:<1[0.240.221.7]>
CliFrontendAddressConfigurationTest.testInvalidConfigAndNoOption:67 we expect an exception here because the we have no config
CliFrontendAddressConfigurationTest.testValidConfig:104 expected:<1[92.168.1.33]> but was:<1[0.240.221.7]>
{code}

Not sure if this is "just" a failing test or a bug in the component.

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: [VOTE] Release Apache Flink 0.10.0-milestone-1 (RC1)
I agree with continued testing before creating a new RC.

I built Flink from the source distribution:

- mvn clean install : works!
- mvn clean install -Dscala-2.11 : works!
- mvn clean install -Dhadoop.version=2.7.0 : fails (FLINK-2651)
- mvn clean install -Dhadoop.version=2.6.0 : fails (FLINK-2750)
- mvn clean install -Dhadoop.version=2.6.0 (on Google Compute Cloud machine) : fails (FLINK-2749)

2015-09-23 16:29 GMT+02:00 Robert Metzger : > Thank you for immediately providing a fix. > > I would suggest to do some more testing to reduce the number of RCs we need > to create > > On Wed, Sep 23, 2015 at 4:26 PM, Matthias J. Sax wrote: > > > I had a closer look, found the bug, and have a fix ready. > > > > https://github.com/mjsax/flink/tree/hotfixWebClient > > > > The bug was introduced by FLINK-2175. It seems, something went wrong > > when I tested this back than. :/ > > > > -Matthias > > > > > > On 09/23/2015 02:27 PM, Matthias J. Sax wrote: > > > Hi, > > > > > > I just encountered a problem in the WebClient. If I upload multiple > > > jars, only the first jar that is uploaded is shown. Can anybody verify > > > my observation? > > > > > > -Matthias > > > > > > On 09/23/2015 10:54 AM, Fabian Hueske wrote: > > >> Dear community, > > >> > > >> Please vote on releasing the following candidate as Apache Flink > version > > >> 0.10.0-milestone-1. This is a milestone release that contains several > > >> features of Flink's next major release version 0.10.0. > > >> > > >> This is the first RC for this release. > > >> > > >> - > > >> The commit to be voted on: > > >> ffdd484ce5596149bb4ea234561c7fcf83bc3167 > > >> > > >> Branch: > > >> release-0.10.0-milestone-1-rc1 ( > > >> > > > https://git-wip-us.apache.org/repos/asf?p=flink.git;a=shortlog;h=refs/heads/release-0.10.0-milestone-1-rc1 > > >> ) > > >> > > >> The release artifacts to be voted on can be found at: > > >> http://people.apache.org/~fhueske/flink-0.10.0-milestone-1-rc1/ > > >> > > >> Release artifacts are signed with the key with fingerprint 34911D5A: > > >> http://www.apache.org/dist/flink/KEYS > > >> > > >> The staging repository for this release can be found at: > > >> > https://repository.apache.org/content/repositories/orgapacheflink-1046 > > >> - > > >> > > >> Please vote on releasing this package as Apache Flink > > 0.10.0-milestone-1. > > >> > > >> The vote is open for the next 72 hours and passes if a majority of at > > least > > >> three +1 PMC votes are cast. > > >> > > >> The vote ends on Saturday (September 26, 2015). > > >> > > >> [ ] +1 Release this package as Apache Flink 0.10.0-milestone-1 > > >> [ ] -1 Do not release this package, because... > > >> > > >> – Fabian > > >> > > > > > > > >
[jira] [Created] (FLINK-2750) FileStateHandleTest fails when building for Hadoop 2.6.0
Fabian Hueske created FLINK-2750: Summary: FileStateHandleTest fails when building for Hadoop 2.6.0 Key: FLINK-2750 URL: https://issues.apache.org/jira/browse/FLINK-2750 Project: Flink Issue Type: Bug Components: Tests Affects Versions: 0.10 Reporter: Fabian Hueske Fix For: 0.10 The {{FileStateHandleTest}} fails when building the 0.10.0-milestone-1-rc1 with {{mvn clean install -Dhadoop.version=2.6.0}} with the following exception {code} java.lang.ClassNotFoundException: org.apache.hadoop.net.StaticMapping {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
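As a quick way to narrow this down (illustrative sketch, not part of any fix), one can probe whether the class is visible on the test classpath at all:

{code}
// Illustrative classpath probe: run with the same classpath as the failing
// test to check whether org.apache.hadoop.net.StaticMapping is visible.
object ClasspathProbe {
  def main(args: Array[String]): Unit = {
    try {
      Class.forName("org.apache.hadoop.net.StaticMapping")
      println("StaticMapping found on the classpath")
    } catch {
      case _: ClassNotFoundException =>
        println("StaticMapping missing - check the Hadoop test dependencies")
    }
  }
}
{code}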
Re: [VOTE] Release Apache Flink 0.10.0-milestone-1 (RC1)
I found another issue: The source release contains the ".git" directory. On Wed, Sep 23, 2015 at 5:27 PM, Fabian Hueske wrote: > I agree with continued testing before creating a new RC. > > I build Flink from the source distribution: > > - mvn clean install : works! > - mvn clean install -Dscala-2.11 : works! > - mvn clean install -Dhadoop.version=2.7.0 : fails (FLINK-2651) > - mvn clean install -Dhadoop.version=2.6.0 : fails (FLINK-2750) > - mvn clean install -Dhadoop.version=2.6.0 (on Google Compute Cloud > machine) : fails (FLINK-2749) > > 2015-09-23 16:29 GMT+02:00 Robert Metzger : > > > Thank you for immediately providing a fix. > > > > I would suggest to do some more testing to reduce the number of RCs we > need > > to create > > > > On Wed, Sep 23, 2015 at 4:26 PM, Matthias J. Sax > wrote: > > > > > I had a closer look, found the bug, and have a fix ready. > > > > > > https://github.com/mjsax/flink/tree/hotfixWebClient > > > > > > The bug was introduced by FLINK-2175. It seems, something went wrong > > > when I tested this back than. :/ > > > > > > -Matthias > > > > > > > > > On 09/23/2015 02:27 PM, Matthias J. Sax wrote: > > > > Hi, > > > > > > > > I just encountered a problem in the WebClient. If I upload multiple > > > > jars, only the first jar that is uploaded is shown. Can anybody > verify > > > > my observation? > > > > > > > > -Matthias > > > > > > > > On 09/23/2015 10:54 AM, Fabian Hueske wrote: > > > >> Dear community, > > > >> > > > >> Please vote on releasing the following candidate as Apache Flink > > version > > > >> 0.10.0-milestone-1. This is a milestone release that contains > several > > > >> features of Flink's next major release version 0.10.0. > > > >> > > > >> This is the first RC for this release. > > > >> > > > >> - > > > >> The commit to be voted on: > > > >> ffdd484ce5596149bb4ea234561c7fcf83bc3167 > > > >> > > > >> Branch: > > > >> release-0.10.0-milestone-1-rc1 ( > > > >> > > > > > > https://git-wip-us.apache.org/repos/asf?p=flink.git;a=shortlog;h=refs/heads/release-0.10.0-milestone-1-rc1 > > > >> ) > > > >> > > > >> The release artifacts to be voted on can be found at: > > > >> http://people.apache.org/~fhueske/flink-0.10.0-milestone-1-rc1/ > > > >> > > > >> Release artifacts are signed with the key with fingerprint 34911D5A: > > > >> http://www.apache.org/dist/flink/KEYS > > > >> > > > >> The staging repository for this release can be found at: > > > >> > > https://repository.apache.org/content/repositories/orgapacheflink-1046 > > > >> - > > > >> > > > >> Please vote on releasing this package as Apache Flink > > > 0.10.0-milestone-1. > > > >> > > > >> The vote is open for the next 72 hours and passes if a majority of > at > > > least > > > >> three +1 PMC votes are cast. > > > >> > > > >> The vote ends on Saturday (September 26, 2015). > > > >> > > > >> [ ] +1 Release this package as Apache Flink 0.10.0-milestone-1 > > > >> [ ] -1 Do not release this package, because... > > > >> > > > >> – Fabian > > > >> > > > > > > > > > > > > >
[jira] [Created] (FLINK-2752) Documentation is not easily differentiable from the Flink homepage
Maximilian Michels created FLINK-2752: - Summary: Documentation is not easily differentiable from the Flink homepage Key: FLINK-2752 URL: https://issues.apache.org/jira/browse/FLINK-2752 Project: Flink Issue Type: Bug Components: Documentation Affects Versions: 0.9, 0.10 Reporter: Maximilian Michels Priority: Minor Fix For: 0.9, 0.10 When users go to the documentation, either via the homepage's quickstart menu or the documentation menu, the transition to the documentation is not easily noticeable; the layouts of the two pages are pretty much the same. There should be a hint on the documentation page, e.g. "Flink documentation version X. Click here to go back to the homepage" or something similar. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (FLINK-2751) Quickstart is in documentation but only linked through the Flink homepage
Maximilian Michels created FLINK-2751: - Summary: Quickstart is in documentation but only linked through the Flink homepage Key: FLINK-2751 URL: https://issues.apache.org/jira/browse/FLINK-2751 Project: Flink Issue Type: Bug Components: Documentation Affects Versions: 0.9, 0.10 Reporter: Maximilian Michels Fix For: 0.9, 0.10 The Quickstart docs contained in {{docs/quickstart}} should also be included in the documentation menu. Basically, we could copy over the Quickstart menu from the Flink homepage. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: Extending and improving our "How to contribute" page
Thanks, Fabian, for driving this!

I agree with your points.

Concerning Vasia's comment to not raise the bar too high: That is true, the requirements should be reasonable. We can definitely tag issues as "simple", meaning they do not require a design document. The design document should be mostly for new features and need not be very detailed.

We could also do the inverse, meaning we explicitly tag certain issues as "requires design document".

Greetings, Stephan

On Wed, Sep 23, 2015 at 5:05 PM, Vasiliki Kalavri wrote: > Hi, > > I agree with you Fabian. Clarifying these issues in the "How to Contribute" > guide will save lots of time both to reviewers and contributors. It is a > really disappointing situation when someone spends time implementing > something and their PR ends up being rejected because either the feature > was not needed or the implementation details were never agreed on. > > That said, I think we should also make sure that we don't raise the bar too > high for simple contributions. > > Regarding (1) and (2), I think we should clarify what kind of > additions/changes require this process to be followed. e.g. do we need to > discuss additions for which JIRAs already exist? Ideas described in the > roadmaps? Adding a new algorithm to Gelly/Flink-ML? > > Regarding (3), maybe we can all suggest some examples/patterns that we've > seen when reviewing PRs and then choose the most common (or all). > > (4) sounds good to me. > > Cheers, > Vasia. > > On 23 September 2015 at 15:08, Kostas Tzoumas wrote: > > > Big +1. > > > > For (1), a discussion in JIRA would also be an option IMO > > > > For (2), let us come up with few examples on what constitutes a feature > > that needs a design doc, and what should be in the doc (IMO > > architecture/general approach, components touched, interfaces changed) > > > > > > > > On Wed, Sep 23, 2015 at 2:24 PM, Fabian Hueske > wrote: > > > > > Hi everybody, > > > > > > I guess we all have noticed that the Flink community is quickly growing > > and > > > more and more contributions are coming in. Recently, a few > contributions > > > proposed new features without being discussed on the mailing list. Some > > of > > > these contributions were not accepted in the end. In other cases, pull > > > requests had to be heavily reworked because the approach taken was not > > the > > > best one. These are situations which should be avoided because both the > > > contributor as well as the person who reviewed the contribution > invested > > a > > > lot of time for nothing. > > > > > > I had a look at our “How to contribute” and “Coding guideline” pages > and > > > think, we can improve them. I see basically two issues: > > > > > > 1. The documents do not explain how to propose and discuss new > features > > > and improvements. > > > 2. The documents are quite technical and the structure could be > > improved, > > > IMO. > > > > > > I would like to improve these pages and propose the following > additions: > > > > > > 1. Request contributors and committers to start discussions on the > > > mailing list for new features. This discussion should help to figure > out > > > whether such a new feature is a good fit for Flink and give first > > pointers > > > for a design to implement it. > > > 2. Require contributors and committers to write design documents for > > all > > > new features and major improvements. These documents should be attached > > to > > > a JIRA issue and follow a template which needs to be defined. > > > 3. 
Extend the “Coding Style Guides” and add patterns that are > commonly > > > remarked in pull requests. > > > 4. Restructure the current pages into three pages: a general guide > for > > > contributions and two guides for how to contribute to code and website > > with > > > all technical issues (repository, IDE setup, build system, etc.) > > > > > > Looking forward for your comments, > > > Fabian > > > > > >
Re: Extending and improving our "How to contribute" page
Thanks again, Fabian, for starting this discussion.

For (1) and (2), I think it is a good idea and will help people understand and follow the author's thought process. Following up on Stephan's reply: the solutions for some new features could be explained thoroughly in their PR descriptions, but some require additional review of the proposed design. I like the idea of using a JIRA tag to indicate whether a new feature should or should not be accompanied by a design document.

Agree with (3) and (4). As for (3), are you thinking more of code syntax style via checkstyle updates, or of best practices such as avoiding mutable state where possible, throwing precise exceptions for interfaces, etc.?

- Henry

On Wed, Sep 23, 2015 at 9:31 AM, Stephan Ewen wrote: > Thanks, Fabian for driving this! > > I agree with your points. > > Concerning Vasia's comment to not raise the bar too high: > That is true, the requirements should be reasonable. We can definitely tag > issues as "simple" which means they do not require a design document. That > should be more for new features and needs not be very detailed. > > We could also make the inverse, meaning we explicitly tag certain issues as > "requires design document". > > Greetings, > Stephan > > > > > On Wed, Sep 23, 2015 at 5:05 PM, Vasiliki Kalavri > wrote: > >> Hi, >> >> I agree with you Fabian. Clarifying these issues in the "How to Contribute" >> guide will save lots of time both to reviewers and contributors. It is a >> really disappointing situation when someone spends time implementing >> something and their PR ends up being rejected because either the feature >> was not needed or the implementation details were never agreed on. >> >> That said, I think we should also make sure that we don't raise the bar too >> high for simple contributions. >> >> Regarding (1) and (2), I think we should clarify what kind of >> additions/changes require this process to be followed. e.g. do we need to >> discuss additions for which JIRAs already exist? Ideas described in the >> roadmaps? Adding a new algorithm to Gelly/Flink-ML? >> >> Regarding (3), maybe we can all suggest some examples/patterns that we've >> seen when reviewing PRs and then choose the most common (or all). >> >> (4) sounds good to me. >> >> Cheers, >> Vasia. >> >> On 23 September 2015 at 15:08, Kostas Tzoumas wrote: >> >> > Big +1. >> > >> > For (1), a discussion in JIRA would also be an option IMO >> > >> > For (2), let us come up with few examples on what constitutes a feature >> > that needs a design doc, and what should be in the doc (IMO >> > architecture/general approach, components touched, interfaces changed) >> > >> > >> > >> > On Wed, Sep 23, 2015 at 2:24 PM, Fabian Hueske >> wrote: >> > >> > > Hi everybody, >> > > >> > > I guess we all have noticed that the Flink community is quickly growing >> > and >> > > more and more contributions are coming in. Recently, a few >> contributions >> > > proposed new features without being discussed on the mailing list. Some >> > of >> > > these contributions were not accepted in the end. In other cases, pull >> > > requests had to be heavily reworked because the approach taken was not >> > the >> > > best one. These are situations which should be avoided because both the >> > > contributor as well as the person who reviewed the contribution >> invested >> > a >> > > lot of time for nothing. >> > > >> > > I had a look at our “How to contribute” and “Coding guideline” pages >> and >> > > think, we can improve them. I see basically two issues: >> > > >> > > 1. 
The documents do not explain how to propose and discuss new >> features >> > > and improvements. >> > > 2. The documents are quite technical and the structure could be >> > improved, >> > > IMO. >> > > >> > > I would like to improve these pages and propose the following >> additions: >> > > >> > > 1. Request contributors and committers to start discussions on the >> > > mailing list for new features. This discussion should help to figure >> out >> > > whether such a new feature is a good fit for Flink and give first >> > pointers >> > > for a design to implement it. >> > > 2. Require contributors and committers to write design documents for >> > all >> > > new features and major improvements. These documents should be attached >> > to >> > > a JIRA issue and follow a template which needs to be defined. >> > > 3. Extend the “Coding Style Guides” and add patterns that are >> commonly >> > > remarked in pull requests. >> > > 4. Restructure the current pages into three pages: a general guide >> for >> > > contributions and two guides for how to contribute to code and website >> > with >> > > all technical issues (repository, IDE setup, build system, etc.) >> > > >> > > Looking forward for your comments, >> > > Fabian >> > > >> > >>
[Proposal] Gelly Graph Generators
I would like to propose that Flink include a selection of graph generators in Gelly. Generated graphs will be useful for scalability, stress, and regression testing, as well as for benchmarking and comparing algorithms, both for Flink users and developers. Generated data is infinitely scalable yet described by a few simple parameters, and it can often substitute for user data, avoiding the need to share large files when reporting issues. Spark's GraphX includes a modest GraphGenerators class [1]. The initial implementation would focus on Erdos-Renyi, R-Mat [2], and Kronecker [3] generators. A key consideration is that the generators should be seedable and produce the same Graph regardless of parallelism (see the sketch below). Generated data is a complement to my proposed "Checksum method for DataSet and Graph" [4].

[1] http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.graphx.util.GraphGenerators$
[2] R-MAT: A Recursive Model for Graph Mining; http://snap.stanford.edu/class/cs224w-readings/chakrabarti04rmat.pdf
[3] Kronecker graphs: An Approach to Modeling Networks; http://arxiv.org/pdf/0812.4905v2.pdf
[4] https://issues.apache.org/jira/browse/FLINK-2716

Greg Hogan
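To make the determinism requirement concrete, here is a minimal sketch of a seedable Erdos-Renyi edge generator using the Scala DataSet API (illustrative only, not a proposed implementation): edge existence is decided by a hash of the seed and the endpoint ids rather than by a per-task random number generator, which is what makes the output independent of parallelism.

{code}
import org.apache.flink.api.scala._
import java.util.Random

object ErdosRenyiSketch {

  // Deterministic coin flip for the pair (src, dst) under a given seed.
  // The decision depends only on the edge's identity, never on task state,
  // so the generated graph is identical at any parallelism.
  def edgeExists(seed: Long, src: Long, dst: Long, p: Double): Boolean =
    new Random(seed ^ (src * 0x9E3779B97F4A7C15L) ^ dst).nextDouble() < p

  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    val n = 1000L   // vertex count
    val p = 0.01    // edge probability
    val seed = 42L

    // Each source vertex decides its outgoing edges independently, so the
    // candidate space is split across the cluster. A real R-Mat/Kronecker
    // generator would avoid enumerating all O(n^2) pairs.
    val edges = env.generateSequence(0, n - 1).flatMap { src =>
      (0L until n)
        .filter(dst => dst != src && edgeExists(seed, src, dst, p))
        .map(dst => (src, dst))
    }

    edges.first(10).print()
  }
}
{code}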
Re: Build get stuck at BarrierBufferMassiveRandomTest
I’ve pushed a fix. > On 23 Sep 2015, at 16:28, Paris Carbone wrote: > > It hangs for me too at the same test when doing "clean verify" > >> On 23 Sep 2015, at 16:09, Stephan Ewen wrote: >> >> Okay, will look into this is a bit today... >> >> On Wed, Sep 23, 2015 at 4:04 PM, Ufuk Celebi wrote: >> >>> Same here. >>> On 23 Sep 2015, at 13:50, Vasiliki Kalavri >>> wrote: Hi, It's the latest master I'm trying to build, but it still hangs. Here's the trace: - 2015-09-23 13:48:41 Full thread dump Java HotSpot(TM) 64-Bit Server VM (24.75-b04 mixed >>> mode): "Attach Listener" daemon prio=5 tid=0x7faeb984a000 nid=0x3707 waiting on condition [0x] java.lang.Thread.State: RUNNABLE "Service Thread" daemon prio=5 tid=0x7faeb9808000 nid=0x4d03 runnable [0x] java.lang.Thread.State: RUNNABLE "C2 CompilerThread1" daemon prio=5 tid=0x7faebb00e800 nid=0x4b03 waiting on condition [0x] java.lang.Thread.State: RUNNABLE "C2 CompilerThread0" daemon prio=5 tid=0x7faebb840800 nid=0x4903 waiting on condition [0x] java.lang.Thread.State: RUNNABLE "Signal Dispatcher" daemon prio=5 tid=0x7faeba806800 nid=0x3d0f runnable [0x] java.lang.Thread.State: RUNNABLE "Finalizer" daemon prio=5 tid=0x7faebb836800 nid=0x3303 in Object.wait() [0x00014eff8000] java.lang.Thread.State: WAITING (on object monitor) at java.lang.Object.wait(Native Method) - waiting on <0x000138a84858> (a java.lang.ref.ReferenceQueue$Lock) at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135) - locked <0x000138a84858> (a java.lang.ref.ReferenceQueue$Lock) at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151) at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:209) "Reference Handler" daemon prio=5 tid=0x7faebb004000 nid=0x3103 in Object.wait() [0x00014eef5000] java.lang.Thread.State: WAITING (on object monitor) at java.lang.Object.wait(Native Method) - waiting on <0x000138a84470> (a java.lang.ref.Reference$Lock) at java.lang.Object.wait(Object.java:503) at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:133) - locked <0x000138a84470> (a java.lang.ref.Reference$Lock) "main" prio=5 tid=0x7faeb9009800 nid=0xd03 runnable >>> [0x00010f1c] java.lang.Thread.State: RUNNABLE at java.net.PlainSocketImpl.socketAccept(Native Method) at >>> java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398) at java.net.ServerSocket.implAccept(ServerSocket.java:530) at java.net.ServerSocket.accept(ServerSocket.java:498) at >>> org.apache.flink.streaming.api.functions.sink.SocketClientSinkTest.testSocketSinkRetryAccess(SocketClientSinkTest.java:315) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at >>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at >>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at >>> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at >>> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at >>> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at >>> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at org.junit.rules.RunRules.evaluate(RunRules.java:20) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at >>> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) at >>> 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at org.junit.runners.ParentRunner.run(ParentRunner.java:309) at >>> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283) at >>> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173) at >>> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153) at >>> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider
Re: Build get stuck at BarrierBufferMassiveRandomTest
Turned out it was due to different behavior of Sockets under Ubuntu (Debian) and OS/X (BSD) That why it did not happen for Travis and me... +1 for OS diversity among committers :-) On Wed, Sep 23, 2015 at 7:50 PM, Ufuk Celebi wrote: > I’ve pushed a fix. > > > On 23 Sep 2015, at 16:28, Paris Carbone wrote: > > > > It hangs for me too at the same test when doing "clean verify" > > > >> On 23 Sep 2015, at 16:09, Stephan Ewen wrote: > >> > >> Okay, will look into this is a bit today... > >> > >> On Wed, Sep 23, 2015 at 4:04 PM, Ufuk Celebi wrote: > >> > >>> Same here. > >>> > On 23 Sep 2015, at 13:50, Vasiliki Kalavri > > >>> wrote: > > Hi, > > It's the latest master I'm trying to build, but it still hangs. > Here's the trace: > > - > 2015-09-23 13:48:41 > Full thread dump Java HotSpot(TM) 64-Bit Server VM (24.75-b04 mixed > >>> mode): > > "Attach Listener" daemon prio=5 tid=0x7faeb984a000 nid=0x3707 > waiting > on condition [0x] > java.lang.Thread.State: RUNNABLE > > "Service Thread" daemon prio=5 tid=0x7faeb9808000 nid=0x4d03 > runnable > [0x] > java.lang.Thread.State: RUNNABLE > > "C2 CompilerThread1" daemon prio=5 tid=0x7faebb00e800 nid=0x4b03 > waiting on condition [0x] > java.lang.Thread.State: RUNNABLE > > "C2 CompilerThread0" daemon prio=5 tid=0x7faebb840800 nid=0x4903 > waiting on condition [0x] > java.lang.Thread.State: RUNNABLE > > "Signal Dispatcher" daemon prio=5 tid=0x7faeba806800 nid=0x3d0f > runnable [0x] > java.lang.Thread.State: RUNNABLE > > "Finalizer" daemon prio=5 tid=0x7faebb836800 nid=0x3303 in > Object.wait() [0x00014eff8000] > java.lang.Thread.State: WAITING (on object monitor) > at java.lang.Object.wait(Native Method) > - waiting on <0x000138a84858> (a > java.lang.ref.ReferenceQueue$Lock) > at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135) > - locked <0x000138a84858> (a java.lang.ref.ReferenceQueue$Lock) > at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151) > at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:209) > > "Reference Handler" daemon prio=5 tid=0x7faebb004000 nid=0x3103 in > Object.wait() [0x00014eef5000] > java.lang.Thread.State: WAITING (on object monitor) > at java.lang.Object.wait(Native Method) > - waiting on <0x000138a84470> (a java.lang.ref.Reference$Lock) > at java.lang.Object.wait(Object.java:503) > at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:133) > - locked <0x000138a84470> (a java.lang.ref.Reference$Lock) > > "main" prio=5 tid=0x7faeb9009800 nid=0xd03 runnable > >>> [0x00010f1c] > java.lang.Thread.State: RUNNABLE > at java.net.PlainSocketImpl.socketAccept(Native Method) > at > >>> > java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398) > at java.net.ServerSocket.implAccept(ServerSocket.java:530) > at java.net.ServerSocket.accept(ServerSocket.java:498) > at > > >>> > org.apache.flink.streaming.api.functions.sink.SocketClientSinkTest.testSocketSinkRetryAccess(SocketClientSinkTest.java:315) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > > >>> > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > at > > >>> > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at > > >>> > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > > >>> > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > > >>> > 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > > >>> > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > at org.junit.rules.RunRules.evaluate(RunRules.java:20) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) > at > > >>> > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) > at > > >>> > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) > >>>
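As an aside, a minimal sketch of the general guard against this failure mode (illustrative only, not the actual fix that was pushed): giving a blocking accept() a timeout, so a test cannot hang the build indefinitely regardless of the platform's socket behavior:

{code}
import java.net.{ServerSocket, SocketTimeoutException}

object AcceptWithTimeout {
  def main(args: Array[String]): Unit = {
    val server = new ServerSocket(0)  // bind to a free port
    server.setSoTimeout(5000)         // accept() fails after 5s instead of blocking forever
    try {
      val client = server.accept()    // would otherwise block indefinitely
      client.close()
    } catch {
      case _: SocketTimeoutException =>
        println("no connection within the timeout - fail instead of hanging")
    } finally {
      server.close()
    }
  }
}
{code}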
[jira] [Created] (FLINK-2753) Add new window API to streaming API
Stephan Ewen created FLINK-2753: --- Summary: Add new window API to streaming API Key: FLINK-2753 URL: https://issues.apache.org/jira/browse/FLINK-2753 Project: Flink Issue Type: Sub-task Components: Streaming Affects Versions: 0.10 Reporter: Stephan Ewen Assignee: Stephan Ewen Fix For: 0.10

The API integration should follow the API design as documented here: https://cwiki.apache.org/confluence/display/FLINK/Streams+and+Operations+on+Streams

This issue needs:
- Add a new {{KeyedWindowDataStream}}, created by calling {{window()}} on the {{KeyedDataStream}}.
- Add a utility that converts window policies into concrete window implementations. This is important to realize special-case implementations that do not directly use the generic mechanisms implemented by window policies (a sketch of this translation step follows below).
- Instantiate the operators (dedicated and generic) from the window policies.
- I will add a stub for the new Window Policy classes, based on the existing policy classes.

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
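For illustration, a minimal, self-contained sketch of the policy-to-operator translation step described above. All names here (WindowPolicy, WindowOperator, and so on) are invented for this sketch and are not Flink's actual classes; the point is only the pattern of matching on a declarative policy and choosing a specialized implementation where one exists:

{code}
// Illustrative only: these names are invented for the sketch, not Flink's
// actual classes.
sealed trait WindowPolicy
case class TumblingTime(sizeMillis: Long) extends WindowPolicy
case class SlidingTime(sizeMillis: Long, slideMillis: Long) extends WindowPolicy
case class Custom(name: String) extends WindowPolicy

trait WindowOperator { def describe: String }

// Special case: aligned tumbling time windows can pre-aggregate per pane
// instead of buffering individual elements.
class AlignedTumblingTimeOperator(size: Long) extends WindowOperator {
  def describe = s"aligned tumbling-time operator with $size ms panes"
}

// Fallback: a generic operator that is driven directly by the policy.
class GenericPolicyOperator(policy: WindowPolicy) extends WindowOperator {
  def describe = s"generic operator driven by $policy"
}

object WindowTranslator {
  def translate(policy: WindowPolicy): WindowOperator = policy match {
    case TumblingTime(size) => new AlignedTumblingTimeOperator(size)
    case other              => new GenericPolicyOperator(other)
  }
}
{code}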
Re: Build get stuck at BarrierBufferMassiveRandomTest
Thank you both! I verify that it's solved now :) On 23 September 2015 at 20:00, Stephan Ewen wrote: > Turned out it was due to different behavior of Sockets under Ubuntu > (Debian) and OS/X (BSD) > > That why it did not happen for Travis and me... > > +1 for OS diversity among committers :-) > > On Wed, Sep 23, 2015 at 7:50 PM, Ufuk Celebi wrote: > > > I’ve pushed a fix. > > > > > On 23 Sep 2015, at 16:28, Paris Carbone wrote: > > > > > > It hangs for me too at the same test when doing "clean verify" > > > > > >> On 23 Sep 2015, at 16:09, Stephan Ewen wrote: > > >> > > >> Okay, will look into this is a bit today... > > >> > > >> On Wed, Sep 23, 2015 at 4:04 PM, Ufuk Celebi wrote: > > >> > > >>> Same here. > > >>> > > On 23 Sep 2015, at 13:50, Vasiliki Kalavri < > vasilikikala...@gmail.com > > > > > >>> wrote: > > > > Hi, > > > > It's the latest master I'm trying to build, but it still hangs. > > Here's the trace: > > > > - > > 2015-09-23 13:48:41 > > Full thread dump Java HotSpot(TM) 64-Bit Server VM (24.75-b04 mixed > > >>> mode): > > > > "Attach Listener" daemon prio=5 tid=0x7faeb984a000 nid=0x3707 > > waiting > > on condition [0x] > > java.lang.Thread.State: RUNNABLE > > > > "Service Thread" daemon prio=5 tid=0x7faeb9808000 nid=0x4d03 > > runnable > > [0x] > > java.lang.Thread.State: RUNNABLE > > > > "C2 CompilerThread1" daemon prio=5 tid=0x7faebb00e800 nid=0x4b03 > > waiting on condition [0x] > > java.lang.Thread.State: RUNNABLE > > > > "C2 CompilerThread0" daemon prio=5 tid=0x7faebb840800 nid=0x4903 > > waiting on condition [0x] > > java.lang.Thread.State: RUNNABLE > > > > "Signal Dispatcher" daemon prio=5 tid=0x7faeba806800 nid=0x3d0f > > runnable [0x] > > java.lang.Thread.State: RUNNABLE > > > > "Finalizer" daemon prio=5 tid=0x7faebb836800 nid=0x3303 in > > Object.wait() [0x00014eff8000] > > java.lang.Thread.State: WAITING (on object monitor) > > at java.lang.Object.wait(Native Method) > > - waiting on <0x000138a84858> (a > > java.lang.ref.ReferenceQueue$Lock) > > at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135) > > - locked <0x000138a84858> (a java.lang.ref.ReferenceQueue$Lock) > > at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151) > > at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:209) > > > > "Reference Handler" daemon prio=5 tid=0x7faebb004000 nid=0x3103 > in > > Object.wait() [0x00014eef5000] > > java.lang.Thread.State: WAITING (on object monitor) > > at java.lang.Object.wait(Native Method) > > - waiting on <0x000138a84470> (a java.lang.ref.Reference$Lock) > > at java.lang.Object.wait(Object.java:503) > > at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:133) > > - locked <0x000138a84470> (a java.lang.ref.Reference$Lock) > > > > "main" prio=5 tid=0x7faeb9009800 nid=0xd03 runnable > > >>> [0x00010f1c] > > java.lang.Thread.State: RUNNABLE > > at java.net.PlainSocketImpl.socketAccept(Native Method) > > at > > >>> > > java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398) > > at java.net.ServerSocket.implAccept(ServerSocket.java:530) > > at java.net.ServerSocket.accept(ServerSocket.java:498) > > at > > > > >>> > > > org.apache.flink.streaming.api.functions.sink.SocketClientSinkTest.testSocketSinkRetryAccess(SocketClientSinkTest.java:315) > > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > > at > > > > >>> > > > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > > at > > > > >>> > > > 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > > at java.lang.reflect.Method.invoke(Method.java:606) > > at > > > > >>> > > > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > > at > > > > >>> > > > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > > at > > > > >>> > > > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > > at > > > > >>> > > > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > > at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > > at org.junit.rules.RunRules.evaluate(RunRules.java:20) > > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) > > at > > > > >>> > > > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) > > at > > > > >>> > > > org.junit.runners.BlockJUnit4ClassRunner.runChil
[jira] [Created] (FLINK-2754) FixedLengthRecordSorter can not write to output cross MemorySegments.
Chengxiang Li created FLINK-2754: Summary: FixedLengthRecordSorter can not write to output cross MemorySegments. Key: FLINK-2754 URL: https://issues.apache.org/jira/browse/FLINK-2754 Project: Flink Issue Type: Bug Components: Distributed Runtime Reporter: Chengxiang Li Assignee: Chengxiang Li

{{FixedLengthRecordSorter}} cannot write output that crosses MemorySegment boundaries. It has worked so far only because it was previously called to write a single record at a time. We should fix this and add more unit tests.

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
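A rough sketch of the boundary handling such a fix needs (plain Scala, with byte arrays standing in for Flink's MemorySegments; illustrative only, not the actual sorter code): a record that does not fit into the space remaining in the current segment has to be split, with the remainder copied into the following segment(s).

{code}
import scala.collection.mutable.ArrayBuffer

// Byte arrays stand in for MemorySegments. The point is that a single
// write may span a segment boundary and must continue in the next segment.
class SegmentedOutput(segmentSize: Int) {
  private val segments = ArrayBuffer(new Array[Byte](segmentSize))
  private var seg = 0   // index of the current segment
  private var pos = 0   // write position within the current segment

  def write(record: Array[Byte]): Unit = {
    var offset = 0
    while (offset < record.length) {
      if (pos == segmentSize) {              // current segment is full
        segments += new Array[Byte](segmentSize)
        seg += 1
        pos = 0
      }
      // Copy only as many bytes as still fit into the current segment.
      val chunk = math.min(record.length - offset, segmentSize - pos)
      System.arraycopy(record, offset, segments(seg), pos, chunk)
      offset += chunk
      pos += chunk
    }
  }
}
{code}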