Re: Monitoring single-run job statistics

2016-01-06 Thread Filip Łęczycki
Hi Stephan, Thank you for your answer. I would love to contribute, but currently I have no capacity as I am buried in my thesis. I will reach out after graduating :) Best regards, Filip Łęczycki 2016-01-05 10:35 GMT+01:00 Stephan Ewen: > Hi Filip! > > There are

Re: Monitoring single-run job statistics

2016-01-04 Thread Filip Łęczycki
...would be stored for the whole Flink cluster, instead of a single job. Do you know whether there is a way to extract the mentioned stats (cpu time, mem time, gc time) for a single job run on a long-running Flink cluster? I will be very grateful for an answer :) Best regards, Filip Łęczycki
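
A note on the question above: the standard JVM management beans can report heap usage and GC time, but only for the whole TaskManager JVM, which is exactly why per-job numbers are hard to get on a long-running cluster. A minimal Scala sketch (an illustration, not code from this thread) of sampling those counters around a job run:

    import java.lang.management.ManagementFactory
    import scala.collection.JavaConverters._

    // Sketch (not from the thread): sample JVM-wide GC and heap counters
    // before and after a job runs. On a shared, long-running cluster these
    // numbers cover the whole TaskManager JVM, not a single job -- the
    // limitation discussed above.
    object JvmStatsProbe {
      case class Snapshot(gcCount: Long, gcTimeMs: Long, heapUsed: Long)

      def snapshot(): Snapshot = {
        val gcBeans = ManagementFactory.getGarbageCollectorMXBeans.asScala
        Snapshot(
          gcCount  = gcBeans.map(_.getCollectionCount).sum,
          gcTimeMs = gcBeans.map(_.getCollectionTime).sum,
          heapUsed = ManagementFactory.getMemoryMXBean.getHeapMemoryUsage.getUsed
        )
      }

      /** Run `job` and print the GC/heap delta it caused in this JVM. */
      def measure[T](job: => T): T = {
        val before = snapshot()
        val result = job
        val after  = snapshot()
        println(s"GC runs: ${after.gcCount - before.gcCount}, " +
                s"GC time: ${after.gcTimeMs - before.gcTimeMs} ms, " +
                s"heap delta: ${after.heapUsed - before.heapUsed} bytes")
        result
      }
    }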

Monitoring single-run job statistics

2016-01-01 Thread Filip Łęczycki
...for any help :) Best regards, Filip Łęczycki

Re: What is the equivalent of Spark RDD is Flink

2015-12-28 Thread Filip Łęczycki
...execute is called, and before that only an execution plan is built. Is that correct, or are there other significant differences between the Spark and Flink lazy-execution approaches that I failed to grasp? Best regards, Filip Łęczycki 2015-12-25 10:17 GMT+01:00 Aljoscha Krettek:
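
To illustrate the point under discussion, here is a minimal sketch (an assumed example, not code from this thread) showing that Flink DataSet transformations only extend an execution plan until execute is called:

    import org.apache.flink.api.scala._

    // Sketch: every call below only builds up the plan; no data moves yet.
    object LazyPlanExample {
      def main(args: Array[String]): Unit = {
        val env = ExecutionEnvironment.getExecutionEnvironment

        val numbers = env.fromElements(1, 2, 3, 4, 5)   // plan node
        val doubled = numbers.map(_ * 2)                // plan node
        val big     = doubled.filter(_ > 4)             // plan node

        big.writeAsText("/tmp/out")                     // sink: still only a plan

        // Only now is the plan optimized and run, which is the behaviour
        // discussed above (analogous to triggering an action in Spark).
        env.execute("lazy plan demo")
      }
    }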

Re: Problem with passing arguments to Flink Web Submission Client

2015-12-21 Thread Filip Łęczycki
Hi, Regarding the CLI, I have been using >bin/flink run myJarFile.jar -f flink -i -m 1 and it is working perfectly fine. Is there a difference between these two ways of submitting a job ("bin/flink MyJar.jar" and "bin/flink run MyJar.jar")? I will open a Jira. Best regards
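
For illustration, a hypothetical sketch of such a job's entry point: with "bin/flink run myJarFile.jar -f flink -i -m 1", everything after the jar name is delivered verbatim to the program's main method rather than being parsed as Flink options (the flag names below mirror the command in this thread and are otherwise made up):

    // Hypothetical entry point: the Flink CLI stops parsing at the jar name,
    // so "-f flink -i -m 1" arrives untouched in args.
    object MyJob {
      def main(args: Array[String]): Unit = {
        // naive flag parsing, just to show where the arguments end up
        val format   = args.sliding(2).collectFirst { case Array("-f", v) => v }
        val minCount = args.sliding(2).collectFirst { case Array("-m", v) => v.toInt }
        val indexed  = args.contains("-i")
        println(s"format=$format, minCount=$minCount, indexed=$indexed")
        // ... build and execute the Flink program here ...
      }
    }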

Problem with passing arguments to Flink Web Submission Client

2015-12-20 Thread Filip Łęczycki
...wrong, or is this a bug where the web client tries to interpret my arguments as Flink options? Regards, Filip Łęczycki

Re: Problems with using ZipWithIndex

2015-12-12 Thread Filip Łęczycki
Hi Marton, Thank you for your answer. I wasn't able to use zipWithIndex in the way you stated, as I got a "cannot resolve" error. However, it worked when I used it like this: val utils = new DataSetUtils[AlignmentRecord](data) val index = utils.zipWithIndex Regards,
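
For reference, the same call can also be written via the implicit conversion in the Scala API; a self-contained sketch (with String standing in for AlignmentRecord, and assuming a 0.10-era Scala API):

    import org.apache.flink.api.scala._
    import org.apache.flink.api.scala.utils._   // adds zipWithIndex to DataSet

    object ZipWithIndexExample {
      def main(args: Array[String]): Unit = {
        val env  = ExecutionEnvironment.getExecutionEnvironment
        val data = env.fromElements("a", "b", "c")

        // Equivalent to: new DataSetUtils(data).zipWithIndex
        val indexed: DataSet[(Long, String)] = data.zipWithIndex

        indexed.print()   // e.g. (0,a), (1,b), (2,c)
      }
    }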

Problems with using ZipWithIndex

2015-12-12 Thread Filip Łęczycki
...val index = DataSetUtils.zipWithIndex[AlignmentRecord](data) I receive the following error: Type mismatch: expected: DataSet[AlignmentRecord], actual: DataSet[AlignmentRecord] Could you please guide me on how to use this function? Regards, Filip Łęczycki

Re: Using memory logging in Flink

2015-12-09 Thread Filip Łęczycki
...Best regards, Filip Łęczycki 2015-12-09 11:13 GMT+01:00 Stephan Ewen: > Hi Filip! > > Someone else just used the memory logging with the exact settings described > - it worked. > > There is probably some mixup; you may be looking into the wrong log file, >

Re: Using memory logging in Flink

2015-12-08 Thread Filip Łęczycki
...another way to monitor a Flink job's memory usage and GC time, other than looking at the web interface? Best regards, Filip Łęczycki 2015-12-08 20:48 GMT+01:00 Stephan Ewen: > Hi! > > That is exactly the right way to do it. Logging has to be at least

Using memory logging in Flink

2015-12-08 Thread Filip Łęczycki
...few intervals. Should I change something else in the configuration? Best regards, Filip Łęczycki
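
For readers finding this thread: based on the 0.10-era documentation (worth verifying against your Flink version), the memory logger was controlled by two flink-conf.yaml keys, and the TaskManager's log level had to be verbose enough (DEBUG) for the log thread to produce any output:

    # flink-conf.yaml -- sketch based on the 0.10-era docs; verify the key
    # names against your Flink version
    taskmanager.debug.memory.startLogThread: true
    taskmanager.debug.memory.logIntervalMs: 5000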

Re: Flink - Avro - AvroTypeInfo issue - Index out of bounds exception

2015-04-26 Thread Filip Łęczycki
...ING. 04/26/2015 17:13:43 Job execution switched to status FAILED. Best regards, Filip Łęczycki 2015-04-21 19:03 GMT+02:00 Stephan Ewen: > Hi! > > From a quick look at the code, it seems that this is a follow-up exception > that occurs because the task has been shut down

Re: Flink - Avro - AvroTypeInfo issue - Index out of bounds exception

2015-04-21 Thread Filip Łęczycki
...advise me on that? If you need more information to determine the issue, I will gladly provide it. Regards, Filip Łęczycki 2015-04-14 11:43 GMT+02:00 Maximilian Michels: > Hi Filip, > > I think your issue is best dealt with on the user mailing list. > Unfortunately,