Re: Flink Run command: Replace placeholder value in Spring XML

2017-01-17 Thread raikarsunil
Hi, Let me give an overall picture: I am using a properties file which contains values (passwords, Kafka hostnames, schema name, etc.) related to the DB, Kafka, Flink, and so on, and which is environment-specific (Dev, QA, Production). This properties file is used in the Spring XML to set values on beans.
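A minimal sketch of that setup, assuming a PropertyPlaceholderConfigurer and the same system-property expression used later in this thread; the bean class and property keys are illustrative assumptions:

    <!-- Resolve the external, environment-specific properties file. The
         configFileName system property must be set before the context starts. -->
    <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
        <property name="location" value="file:#{systemProperties['configFileName']}"/>
    </bean>

    <!-- Hypothetical bean wired from the properties file. -->
    <bean id="kafkaConfig" class="com.example.KafkaConfig">
        <property name="bootstrapServers" value="${kafka.hosts}"/>
        <property name="schemaName" value="${db.schemaName}"/>
    </bean>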

Rolling sink parquet/Avro output

2017-01-17 Thread Biswajit Das
Hi there, does anyone have a reference for a rolling sink Parquet/Avro writer? I'm seeing an issue with how the stream is handled at the rolling sink, and I don't see many options to override, or even to open up in a subclass. I could resolve it with a custom rolling sink writer; just wondering if anyone has done something similar.
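One possible shape for such a custom writer, sketched under the assumption that the Writer interface of flink-connector-filesystem at the time exposed open/write/flush/getPos/close/duplicate; the Avro schema handling is illustrative, not code from this thread:

    import java.io.IOException;
    import org.apache.avro.Schema;
    import org.apache.avro.file.DataFileWriter;
    import org.apache.avro.generic.GenericDatumWriter;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.flink.streaming.connectors.fs.Writer;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class AvroSinkWriter implements Writer<GenericRecord> {
        private final String schemaJson;  // Schema is not Serializable; keep the JSON
        private transient FSDataOutputStream outStream;
        private transient DataFileWriter<GenericRecord> avroWriter;

        public AvroSinkWriter(String schemaJson) { this.schemaJson = schemaJson; }

        @Override
        public void open(FileSystem fs, Path path) throws IOException {
            Schema schema = new Schema.Parser().parse(schemaJson);
            outStream = fs.create(path, false);
            avroWriter = new DataFileWriter<>(new GenericDatumWriter<GenericRecord>(schema));
            avroWriter.create(schema, outStream);
        }

        @Override
        public void write(GenericRecord element) throws IOException { avroWriter.append(element); }

        @Override
        public long flush() throws IOException {
            avroWriter.flush();
            return outStream.getPos();
        }

        @Override
        public long getPos() throws IOException { return outStream.getPos(); }

        @Override
        public void close() throws IOException { if (avroWriter != null) avroWriter.close(); }

        @Override
        public Writer<GenericRecord> duplicate() { return new AvroSinkWriter(schemaJson); }
    }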

Re: 1.1.1: JobManager config endpoint no longer supplies port

2017-01-17 Thread Shannon Carey
A follow-up (in case anyone is interested): we worked around this by making a request to the "/jars" endpoint of the UI. The response has an attribute called "address" which includes the DNS name and port where the UI is accessible.
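For anyone who wants to reproduce the workaround, a minimal sketch; the host/port and the crude string-based JSON parsing are illustrative assumptions:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class JobManagerAddress {
        public static void main(String[] args) throws Exception {
            // Hypothetical host/port where the web UI is reachable.
            URL url = new URL("http://jobmanager-host:8081/jars");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            StringBuilder body = new StringBuilder();
            try (BufferedReader reader =
                     new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = reader.readLine()) != null) body.append(line);
            }
            // Crude extraction of the "address" attribute; a real client
            // would use a JSON library.
            String json = body.toString();
            int i = json.indexOf("\"address\":\"") + "\"address\":\"".length();
            System.out.println(json.substring(i, json.indexOf('"', i)));
        }
    }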

Re: Release 1.2?

2017-01-17 Thread denis.dollfus
Thanks for the quick update, sounds perfect! And I'll try the staging repo trick in pom.xml. Denis From: Stephan Ewen [mailto:se...@apache.org] Sent: Tuesday, 17 January 2017 11:42 To: user@flink.apache.org Subject: Re: Release 1.2? So far it looks as if the next release candidate comes in a few days…
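The staging repo trick amounts to adding the Apache staging repository to the pom; the repository id and URL below are placeholders (the concrete staging URL is announced with each release-candidate vote):

    <repositories>
        <repository>
            <id>apache-staging</id>
            <url>https://repository.apache.org/content/repositories/orgapacheflink-XXXX/</url>
        </repository>
    </repositories>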

Re: Zeppelin: Flink Kafka Connector

2017-01-17 Thread Neil Derraugh
I re-read that enough times and it finally made sense. I wasn't paying attention and thought 0.10.2 was the Kafka version (which hasn't been released yet either - ha :(). I switched to a recent version and it's all good. :) Thanks! Neil > On Jan 17, 2017, at 11:14 AM, Neil Derraugh > wrote:

Re: Zeppelin: Flink Kafka Connector

2017-01-17 Thread Neil Derraugh
Hi Timo & Fabian, Thanks for replying. I'm using Zeppelin built off master. And Flink 1.2 built off the release-1.2 branch. Is that the right branch? Neil -- View this message in context: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Zeppelin-Flink-Kafka-Connector-tp1

Re: Zeppelin: Flink Kafka Connector

2017-01-17 Thread Fabian Hueske
The connectors are included in the release and available as individual Maven artifacts. So Flink 1.2.0 will provide a flink-connector-kafka-0.10 artifact (with version 1.2.0). 2017-01-17 16:22 GMT+01:00 Foster, Craig : > Are connectors being included in the 1.2.0 release or do you mean Kafka > specifically?
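In pom.xml terms that would look roughly like the following; the Scala-version suffix is an assumption based on Flink's usual artifact naming:

    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-connector-kafka-0.10_2.10</artifactId>
        <version>1.2.0</version>
    </dependency>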

Re: Zeppelin: Flink Kafka Connector

2017-01-17 Thread Foster, Craig
Are connectors being included in the 1.2.0 release or do you mean Kafka specifically? From: Fabian Hueske Reply-To: "user@flink.apache.org" Date: Tuesday, January 17, 2017 at 7:10 AM To: "user@flink.apache.org" Subject: Re: Zeppelin: Flink Kafka Connector One thing to add: Flink 1.2.0 has not been released yet…

Re: Zeppelin: Flink Kafka Connector

2017-01-17 Thread Fabian Hueske
One thing to add: Flink 1.2.0 has not been released yet. The FlinkKafkaConsumer010 is only available in a SNAPSHOT release or the first release candidate (RC0). Best, Fabian 2017-01-17 16:08 GMT+01:00 Timo Walther : > You are using an old version of Flink (0.10.2). A FlinkKafkaConsumer010 > was not present at that time.…
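Once on 1.2, usage looks roughly like this sketch; the topic name and broker address are illustrative assumptions:

    import java.util.Properties;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010;
    import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

    public class KafkaRead {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            Properties props = new Properties();
            props.setProperty("bootstrap.servers", "localhost:9092");
            props.setProperty("group.id", "test");
            // Read strings from a hypothetical topic and print them.
            DataStream<String> stream = env.addSource(
                new FlinkKafkaConsumer010<>("my-topic", new SimpleStringSchema(), props));
            stream.print();
            env.execute("Kafka read");
        }
    }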

Re: Zeppelin: Flink Kafka Connector

2017-01-17 Thread Timo Walther
You are using an old version of Flink (0.10.2). A FlinkKafkaConsumer010 was not present at that time. You need to upgrade to Flink 1.2. Timo On 17/01/17 at 15:58, Neil Derraugh wrote: This is really a Zeppelin question, and I've already posted to the user list there. I'm just trying to draw in as many relevant eyeballs as possible.…

Re: Flink Run command: Replace placeholder value in Spring XML

2017-01-17 Thread Timo Walther
I'm not sure what you want to do with this configuration. But you should keep in mind that all properties you set are only valid in the Flink client that submits the job to the JobManager; you cannot access such a property within a Flink function such as a MapFunction. Maybe you could show us a bit…

Zeppelin: Flink Kafka Connector

2017-01-17 Thread Neil Derraugh
This is really a Zeppelin question, and I've already posted to the user list there. I'm just trying to draw in as many relevant eyeballs as possible. If you can help, please reply on the Zeppelin mailing list. In my Zeppelin notebook I'm having a problem importing the Kafka streaming library f…

Re: Flink Run command: Replace placeholder value in Spring XML

2017-01-17 Thread raikarsunil
Hi, args[0]=/home/myfilepath/file.properties Thanks, Sunil

Re: Flink Run command: Replace placeholder value in Spring XML

2017-01-17 Thread Timo Walther
Hi Sunil, what is the content of args[0] when you execute public static void main(String[] args) { System.out.println(args[0]); } On 17/01/17 at 14:55, raikarsunil wrote: Hi, I am not able to replace the value in the Spring placeholder. Below is the XML code snippet: file:#{systemProperties['configFileName']}…

Re: Possible JVM native memory leak

2017-01-17 Thread Timo Walther
This sounds like a RocksDB issue. Maybe Stefan (in CC) has an idea? Timo On 17/01/17 at 14:52, Avihai Berkovitz wrote: Hello, I am running a streaming job on a small cluster, and after a few hours I noticed that my TaskManager processes are being killed by the OOM killer. The processes were using too much memory.…

Flink Run command: Replace placeholder value in Spring XML

2017-01-17 Thread raikarsunil
Hi, I am not able to replace the value in the Spring placeholder. Below is the XML code snippet: file:#{systemProperties['configFileName']} In the above code I need to replace *configFileName* with the actual file, which I need to provide externally. Below is the…
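A minimal sketch of how the client's main method can make that expression resolve, assuming the Spring context is created inside main; class and file names are illustrative:

    import org.springframework.context.support.ClassPathXmlApplicationContext;

    public class JobMain {
        public static void main(String[] args) {
            // args[0] = /home/myfilepath/file.properties (per the follow-up mail)
            System.setProperty("configFileName", args[0]);
            // The context now resolves file:#{systemProperties['configFileName']}.
            ClassPathXmlApplicationContext ctx =
                new ClassPathXmlApplicationContext("applicationContext.xml");
            // ... build and submit the job using the wired beans ...
            ctx.close();
        }
    }

Note Timo's caveat above: a property set this way exists only in the client JVM that submits the job, not inside Flink functions running on the cluster.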

Possible JVM native memory leak

2017-01-17 Thread Avihai Berkovitz
Hello, I am running a streaming job on a small cluster, and after a few hours I noticed that my TaskManager processes are being killed by the OOM killer. The processes were using too much memory. After a bit of monitoring, I have the following status: * The maximum heap size (Xmx) is 4

Re: Three input stream operator and back pressure

2017-01-17 Thread Stephan Ewen
Hi! Just to avoid confusion: the DataStream network readers do not currently support backpressuring only one input (as this conflicts with other design aspects). (The DataSet network readers do support that, FYI.) How about simply "correcting" the order later? If you have pre-sorted data per stream…
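One way to "correct the order later", sketched against a recent Flink DataStream API under the assumption that the streams are unioned, keyed, and carry event-time timestamps and watermarks; the Event type is illustrative:

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;
    import org.apache.flink.api.common.state.ListState;
    import org.apache.flink.api.common.state.ListStateDescriptor;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.functions.ProcessFunction;
    import org.apache.flink.util.Collector;

    // Illustrative event type with an event-time timestamp.
    class Event { public long timestamp; }

    public class OrderedEmitter extends ProcessFunction<Event, Event> {
        private transient ListState<Event> buffer;

        @Override
        public void open(Configuration parameters) {
            buffer = getRuntimeContext().getListState(
                new ListStateDescriptor<>("buffer", Event.class));
        }

        @Override
        public void processElement(Event value, Context ctx, Collector<Event> out) throws Exception {
            // Buffer the element and register a callback for when the watermark
            // passes its timestamp; no earlier element can arrive after that.
            buffer.add(value);
            ctx.timerService().registerEventTimeTimer(value.timestamp);
        }

        @Override
        public void onTimer(long timestamp, OnTimerContext ctx, Collector<Event> out) throws Exception {
            // Emit everything at or below the fired timestamp, in timestamp order.
            List<Event> pending = new ArrayList<>();
            Iterable<Event> buffered = buffer.get();
            if (buffered != null) {
                for (Event e : buffered) pending.add(e);
            }
            pending.sort(Comparator.comparingLong((Event e) -> e.timestamp));
            buffer.clear();
            for (Event e : pending) {
                if (e.timestamp <= timestamp) {
                    out.collect(e);
                } else {
                    buffer.add(e);  // keep later elements buffered
                }
            }
        }
    }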

Re: Three input stream operator and back pressure

2017-01-17 Thread Dmitry Golubets
Hi Stephan, in one of our components we have to process events in order, due to business logic requirements. That certainly introduces a bottleneck, but the other aspects are fine. I'm not talking about actually re-sorting data, just about consuming it in the right order. I.e. if two streams are already…

Re: Three input stream operator and back pressure

2017-01-17 Thread Stephan Ewen
Hi Dmitry! The streaming runtime makes a conscious decision not to merge streams as in an ordered merge. The reason is that at large scale this is typically bad for scalability / network performance. Also, in certain DAGs, it may lead to deadlocks. Even the two-input operator delivers records on…

Re: Release 1.2?

2017-01-17 Thread Stephan Ewen
So far it looks as if the next release candidate comes in a few days (end of this week, beginning of next). Keep your fingers crossed that it passes! If you want to help speed the release up, I would recommend checking the next release candidate (simply use it as the deployed version and see if it…

Re: Help using HBase with Flink 1.1.4

2017-01-17 Thread Stephan Ewen
Flavio is right: Flink should not expose Guava at all. Make sure you build it following this trick: https://ci.apache.org/projects/flink/flink-docs-release-1.2/setup/building.html#dependency-shading On Tue, Jan 17, 2017 at 11:18 AM, Flavio Pompermaier wrote: > I had a very annoying problem in deploying a Flink job for HBase 1.2 on Cloudera CDH 5.9.0…

Re: Help using HBase with Flink 1.1.4

2017-01-17 Thread Flavio Pompermaier
I had a very annoying problem in deploying a Flink job for HBase 1.2 on Cloudera CDH 5.9.0. The problem was caused by the fact that with Maven < 3.3 you could build flink-dist just using mvn clean install, while with Maven >= 3.3 you have to do another mvn clean install from the flink-dist directory (I st…
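The two-step build Flavio describes (and which the dependency-shading docs linked above recommend) looks like this; the -DskipTests flag is an assumption to keep the example short:

    # full build from the repository root
    mvn clean install -DskipTests
    # second pass so flink-dist picks up the correctly shaded dependencies
    cd flink-dist
    mvn clean install -DskipTests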

Re: Three input stream operator and back pressure

2017-01-17 Thread Dmitry Golubets
Hi Timo, I don't have any key to join on, so I'm not sure a window join would work for me. Can I implement my own "low level" operator in any way? I would appreciate it if you could give me a hint or a link to an example of how to do it. Best regards, Dmitry On Tue, Jan 17, 2017 at 9:24 AM, Timo Walther…

Re: Apache Flink 1.1.4 - Java 8 - CommunityDetection.java:158 - java.lang.NullPointerException

2017-01-17 Thread Miguel Coimbra
Hello Vasia, I am going to look into this. Hopefully I will contribute to the implementation and documentation. Regards, -- Forwarded message -- From: Vasiliki Kalavri To: user@flink.apache.org Cc: Date: Sun, 15 Jan 2017 18:01:41 +0100 Subject: Re: Apache Flink 1.1.4 - Java 8 -

Re: Release 1.2?

2017-01-17 Thread Timo Walther
Hi Denis, the first 1.2 RC0 has already been released and RC1 is on the way (maybe already this week). I think that we can expect a 1.2 release in 3-4 weeks. Regards, Timo On 17/01/17 at 10:04, denis.doll...@thomsonreuters.com wrote: Hi all, Do you have some ballpark estimate for a stable release of Flink 1.2?…

Re: Three input stream operator and back pressure

2017-01-17 Thread Timo Walther
Hi Dmitry, the runtime supports an arbitrary number of inputs; however, the API does not currently provide a convenient way. You could use the "union" operator to reduce the number of inputs (see the sketch below). Otherwise I think you have to implement your own operator. That depends on your use case though. You…
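A minimal sketch of the union idea: map each input to a common wrapper type so several streams flow into a single logical input. The Wrapper POJO and stream names are illustrative assumptions; using Object for the payload falls back to generic serialization, which is acceptable for a sketch:

    // Illustrative wrapper so streams of different types can be unioned.
    public class Wrapper {
        public String source;   // which input the element came from
        public Object payload;  // the original element
        public Wrapper() {}
        public Wrapper(String source, Object payload) {
            this.source = source;
            this.payload = payload;
        }
    }

    DataStream<Wrapper> all =
        streamA.map(x -> new Wrapper("A", x))
               .union(streamB.map(x -> new Wrapper("B", x)),
                      streamC.map(x -> new Wrapper("C", x)));
    // Downstream operators see one input and can switch on wrapper.source.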

Release 1.2?

2017-01-17 Thread denis.dollfus
Hi all, Do you have some ballpark estimate for a stable release of Flink 1.2? We are still at a proof-of-concept stage and are interested in several features of 1.2, notably async stream operations (FLINK-4391). Thank you, Denis