Hi Till,

Thank you for your answer; however, I am sorry to hear that. I was reluctant
to execute jobs on a long-running Flink cluster because multiple jobs would
cloud the YARN statistics regarding CPU and memory time, as well as Flink's
garbage collector statistics in the logs, since these are reported for the
whole Flink cluster instead of a single job.

Do you know whether there is a way to extract the mentioned stats (CPU time,
memory time, GC time) for a single job run on a long-running Flink cluster?
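
For reference, something along these lines is what I had in mind, assuming
the /jobs/<jobid> endpoint of the monitoring REST API is reachable on the
long-running cluster (the host, port, and job id below are placeholders,
not real values):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Rough sketch: fetch per-job details from the monitoring REST API of a
    // long-running Flink-on-YARN session. Host, port, and job id are
    // placeholders; the JobManager web port comes from the YARN application UI.
    public class JobStatsFetcher {
        public static void main(String[] args) throws Exception {
            String jobManagerUrl = "http://<jobmanager-host>:8081"; // placeholder
            String jobId = "<job-id>";                              // placeholder

            URL url = new URL(jobManagerUrl + "/jobs/" + jobId);
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");

            StringBuilder body = new StringBuilder();
            try (BufferedReader reader = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    body.append(line);
                }
            }
            // The JSON response describes the job's vertices and their runtimes;
            // as far as I can tell it does not include CPU or GC time, which
            // seem to be reported only per TaskManager / per cluster.
            System.out.println(body);
        }
    }

As far as I can see from the documentation, this returns the job's vertices
and their runtimes rather than the per-job CPU, memory, or GC time I am
after, which is why I am asking.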

I will be very grateful for an answer:)

Best regards/Pozdrawiam,
Filip Łęczycki

2016-01-04 10:05 GMT+01:00 Till Rohrmann <till.rohrm...@gmail.com>:

> Hi Filip,
>
> at the moment it is not possible to retrieve the job statistics after the
> job has finished with flink run -m yarn-cluster. The reason is that the
> YARN cluster is only alive as long as the job is running. Thus, I would
> recommend executing your jobs on a long-running Flink cluster on YARN.
>
> Cheers,
> Till
>
> On Fri, Jan 1, 2016 at 11:29 PM, Filip Łęczycki <filipleczy...@gmail.com>
> wrote:
>
>> Hi all,
>>
>> I am running Flink apps on a YARN cluster and I am trying to get some
>> benchmarks. When I start a long-running Flink cluster on my YARN cluster I
>> have access to the web UI and REST API that provide me with statistics of the
>> deployed jobs (as described here:
>> https://ci.apache.org/projects/flink/flink-docs-master/internals/monitoring_rest_api.html).
>> I was wondering whether it is possible to get such information about a single
>> job triggered with 'flink run -m yarn-cluster ...'? After the job is
>> finished there is no Flink client running, so I cannot use the REST API to
>> get the stats.
>>
>> Thanks for any help:)
>>
>>
>> Best regards/Pozdrawiam,
>> Filip Łęczycki
>>
>
>
