Re: Flink on EC"

2015-11-08 Thread Thomas Götzinger
Sorry for the confusion, the Flink cluster throws the following stack trace: org.apache.flink.client.program.ProgramInvocationException: The program execution failed: Failed to submit job 29a2a25d49aa0706588ccdb8b7e81c6c (Flink Java Job at Sun Nov 08 18:50:52 UTC 2015) at org.apache.flink.clie

Hi, join with two columns of both tables

2015-11-08 Thread Philip Lee
I want to join two tables on two columns, like //AND sr_customer_sk = ws_bill_customer_sk //AND sr_item_sk = ws_item_sk val srJoinWs = storeReturn.join(webSales).where(_._item_sk).equalTo(_._item_sk){ (storeReturn: StoreReturn, webSales: WebSales, out: Collector[(Long,L
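
[Editor's note] The Scala DataSet API supports composite keys, so both columns can go into where/equalTo as a tuple (or as field-name strings on case classes). A minimal sketch, with simplified case classes and placeholder fields standing in for the full store_returns/web_sales schemas:

    import org.apache.flink.api.scala._
    import org.apache.flink.util.Collector

    // Simplified record types; field names follow the columns in the question
    case class StoreReturn(sr_customer_sk: Long, sr_item_sk: Long, sr_return_amt: Double)
    case class WebSales(ws_bill_customer_sk: Long, ws_item_sk: Long, ws_sales_price: Double)

    object TwoColumnJoin {
      def main(args: Array[String]): Unit = {
        val env = ExecutionEnvironment.getExecutionEnvironment

        val storeReturn: DataSet[StoreReturn] = env.fromElements(StoreReturn(1L, 10L, 5.0))
        val webSales: DataSet[WebSales]       = env.fromElements(WebSales(1L, 10L, 20.0))

        // Join on both columns by using a tuple as the composite key in where/equalTo
        val srJoinWs: DataSet[(Long, Long, Double)] = storeReturn
          .join(webSales)
          .where(sr => (sr.sr_customer_sk, sr.sr_item_sk))
          .equalTo(ws => (ws.ws_bill_customer_sk, ws.ws_item_sk))
          .apply { (sr: StoreReturn, ws: WebSales, out: Collector[(Long, Long, Double)]) =>
            out.collect((sr.sr_customer_sk, sr.sr_item_sk, ws.ws_sales_price))
          }

        srJoinWs.print()
      }
    }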

Re: Flink on EC"

2015-11-08 Thread Thomas Götzinger
Hi Fabian, thanks for the reply. I use a Karamel recipe to install Flink on EC2. Currently I am using flink-0.9.1-bin-hadoop24.tgz. In that file the NativeS3FileSystem is included. First I’ve tried it with the st
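
[Editor's note] For context, once the Hadoop S3 filesystem (NativeS3FileSystem) and its credentials are configured for the cluster, reading from S3 in the batch API is just a matter of an s3n:// path. A minimal sketch with a placeholder bucket and path:

    import org.apache.flink.api.scala._

    object S3ReadSketch {
      def main(args: Array[String]): Unit = {
        val env = ExecutionEnvironment.getExecutionEnvironment

        // Assumes the fs.s3n.* access key settings are present in the Hadoop
        // configuration that Flink picks up; bucket and path are placeholders
        val lines: DataSet[String] = env.readTextFile("s3n://my-bucket/input/data.txt")

        lines.first(10).print()
      }
    }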

finite subset of an infinite data stream

2015-11-08 Thread rss rss
Hello, I need to extract a finite subset, like a data buffer, from an infinite data stream. The best way for me is to obtain a finite stream with the data accumulated over the previous 1 minute (as an example). But I have not found any existing technique to do it. As a possible way to do something near to a
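
[Editor's note] One way to get such a finite slice of an unbounded stream is a one-minute tumbling window that collects everything arriving in that interval. A minimal sketch against the later DataStream windowing API (not the 0.9 API current at the time of this thread), assuming a socket source on localhost:9999 as the infinite input:

    import org.apache.flink.streaming.api.scala._
    import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows
    import org.apache.flink.streaming.api.windowing.time.Time
    import org.apache.flink.streaming.api.windowing.windows.TimeWindow
    import org.apache.flink.util.Collector

    object OneMinuteBuffer {
      def main(args: Array[String]): Unit = {
        val env = StreamExecutionEnvironment.getExecutionEnvironment

        // Placeholder infinite source
        val events: DataStream[String] = env.socketTextStream("localhost", 9999)

        // Emit, once per minute, the list of all elements seen in that minute
        val buffered: DataStream[List[String]] = events
          .windowAll(TumblingProcessingTimeWindows.of(Time.minutes(1)))
          .apply { (window: TimeWindow, elements: Iterable[String], out: Collector[List[String]]) =>
            out.collect(elements.toList)
          }

        buffered.print()
        env.execute("1-minute buffer")
      }
    }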

Re: Creating a Source using an Akka actor

2015-11-08 Thread Stephan Ewen
Hi Hector! I know of users that use Camel to produce a stream into Apache Kafka and then use Flink to consume and process the Kafka stream. That pattern works well. Greetings, Stephan On Sun, Nov 8, 2015 at 1:33 PM, rmetzger0 wrote: > Hi Hector, > > I'm sorry that nobody replied to your messag
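
[Editor's note] For reference, the Flink side of that pattern (consuming a Kafka topic that Camel or any other producer writes into) looks roughly like the sketch below. Connector class names and the SimpleStringSchema package have changed across Flink releases; broker address, topic, and group id are placeholders:

    import java.util.Properties
    import org.apache.flink.api.common.serialization.SimpleStringSchema
    import org.apache.flink.streaming.api.scala._
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer

    object KafkaIngest {
      def main(args: Array[String]): Unit = {
        val env = StreamExecutionEnvironment.getExecutionEnvironment

        val props = new Properties()
        props.setProperty("bootstrap.servers", "localhost:9092") // assumed broker address
        props.setProperty("group.id", "flink-consumer")          // assumed consumer group

        // Consume the topic that the external producer writes into
        val stream: DataStream[String] =
          env.addSource(new FlinkKafkaConsumer[String]("events", new SimpleStringSchema(), props))

        stream.map(_.toUpperCase).print()
        env.execute("Kafka -> Flink")
      }
    }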

Re: Creating a Source using an Akka actor

2015-11-08 Thread rmetzger0
Hi Hector, I'm sorry that nobody replied to your message so far. I think you were not subscribed to the user@flink.apache.org mailing list when you posted this question. Therefore, the Flink community didn't receive your message. To subscribe, send an email to user-subscr...@flink.apache.org. Regar

Re: Flink execution time benchmark.

2015-11-08 Thread rmetzger0
Hi Saleh, I'm sorry that nobody replied to your message. I think you were not subscribed to the user@flink.apache.org mailing list when you posted this question. Therefore, the Flink community didn't receive your message. Did you resolve the issue in the meantime, or are you still seeking help?

Re: Apache Flink Operator State as Query Cache

2015-11-08 Thread Welly Tambunan
Thanks for the answer. Currently the approach I'm using is creating a base/marker interface to stream different types of messages to the same operator. Not sure about the performance hit of this compared to the CoFlatMap function. Basically this one is providing a query cache, so i'm
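
[Editor's note] For comparison, the CoFlatMap alternative connects the two streams explicitly instead of funnelling them through a marker interface. A minimal sketch with hypothetical QueryUpdate/DataEvent message types; a production job would keep the cache in Flink's keyed state rather than in a plain field:

    import org.apache.flink.streaming.api.functions.co.CoFlatMapFunction
    import org.apache.flink.streaming.api.scala._
    import org.apache.flink.util.Collector

    // Hypothetical message types for the two inputs
    case class QueryUpdate(key: String, value: String)   // updates the cache
    case class DataEvent(key: String, payload: String)   // looks the cache up

    object QueryCacheSketch {
      def main(args: Array[String]): Unit = {
        val env = StreamExecutionEnvironment.getExecutionEnvironment

        val updates: DataStream[QueryUpdate] = env.fromElements(QueryUpdate("a", "v1"))
        val events: DataStream[DataEvent]    = env.fromElements(DataEvent("a", "p1"))

        val enriched: DataStream[(DataEvent, Option[String])] = updates
          .connect(events)
          .keyBy(_.key, _.key)
          .flatMap(new CoFlatMapFunction[QueryUpdate, DataEvent, (DataEvent, Option[String])] {
            // Simple in-memory cache per parallel task; keyed state would be the robust choice
            private var cache = Map.empty[String, String]

            override def flatMap1(u: QueryUpdate, out: Collector[(DataEvent, Option[String])]): Unit =
              cache += (u.key -> u.value)

            override def flatMap2(e: DataEvent, out: Collector[(DataEvent, Option[String])]): Unit =
              out.collect((e, cache.get(e.key)))
          })

        enriched.print()
        env.execute("CoFlatMap query cache")
      }
    }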