Dear Flink Community,
Is there a way of troubleshooting the timer service? The docs say that the
service might degrade job performance significantly. Is there a way to
expose and see timer service metrics, i.e. the length of the priority
queue, how many times the service fires, etc.?
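As far as I know there is no built-in metric for the timer queue length, but
one hedged workaround is to count registrations and firings with user-defined
metrics and read them through whichever reporter you use (e.g. JMX). A minimal
sketch, with made-up class and metric names, assuming a keyed stream of Longs:

import org.apache.flink.configuration.Configuration;
import org.apache.flink.metrics.Counter;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

public class TimerCountingFunction extends KeyedProcessFunction<String, Long, Long> {

    private transient Counter timersRegistered; // made-up metric name
    private transient Counter timersFired;      // made-up metric name

    @Override
    public void open(Configuration parameters) {
        // counters appear under this operator's metric group in the configured reporter
        timersRegistered = getRuntimeContext().getMetricGroup().counter("timersRegistered");
        timersFired = getRuntimeContext().getMetricGroup().counter("timersFired");
    }

    @Override
    public void processElement(Long value, Context ctx, Collector<Long> out) {
        // register a processing-time timer one minute from now and count it
        ctx.timerService().registerProcessingTimeTimer(
                ctx.timerService().currentProcessingTime() + 60_000L);
        timersRegistered.inc();
        out.collect(value);
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<Long> out) {
        // count every firing; registered minus fired approximates the outstanding timers
        timersFired.inc();
    }
}

This won't show the internal priority queue length directly, but the gap between
the registered and fired counters gives a rough view of how many timers are pending.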
got it. thank you!
On Thu, Dec 6, 2018 at 4:26 PM Chesnay Schepler wrote:
> No this is not possible.
>
> On 06.12.2018 16:04, sayat wrote:
> > Dear Flink community,
> >
> > Does anyone know if it is possible to expose the Flink backpressure number
> > via a JMX MBean? The one that is shown in the Flink UI?
>
>
>
Dear Flink community,
Does anyone know if it is possible to expose the Flink backpressure number via
a JMX MBean? The one that is shown in the Flink UI?
fect.
Old servers:
https://www.hetzner.de/dedicated-rootserver/px91-ssd
New Server:
https://www.hetzner.de/dedicated-rootserver/ax60-ssd
On Mon, Dec 3, 2018 at 8:07 PM Sayat Satybaldiyev
wrote:
> Dear Flink community,
>
> Would anyone give a clue how to debug a job that has a high ba
dware resource for other jobs.
>
> On Wed, Oct 24, 2018 at 2:20 PM Kien Truong
> wrote:
>
>> Hi,
>>
>> How are your task managers deployed?
>>
>> If your cluster only has one task manager with one slot on each node,
>> then the job should be spread evenly.
e job should be spread evenly.
>
> Regards,
>
> Kien
>
> On 10/24/2018 4:35 PM, Sayat Satybaldiyev wrote:
> > Is there any way to indicate Flink not to allocate all parallel tasks
> > on one node? We have a stateless Flink job that reads from a 10
> > partition t
Is there any way to indicate Flink not to allocate all parallel tasks on
one node? We have a stateless Flink job that reads from a 10-partition
topic and has a parallelism of 6. The Flink job manager allocates all 6
parallel operators to one machine, causing all traffic from Kafka to be
allocated to only o
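One hedged workaround, assuming each machine runs exactly one TaskManager, is to
give every TaskManager a single slot so the scheduler has to place the six
subtasks on different machines. A flink-conf.yaml sketch:

# one slot per TaskManager forces the 6 parallel subtasks onto 6 TaskManagers
taskmanager.numberOfTaskSlots: 1
parallelism.default: 6

The trade-off is that slots are no longer shared on a machine, so the cluster
needs at least as many TaskManagers as the job's parallelism.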
covery.
>
> Piotrek
>
> On 9 Oct 2018, at 15:28, Sayat Satybaldiyev wrote:
>
> After digging more into the log, I think it's more of a bug. I've grepped the
> log by job id and found that under normal circumstances the TM is supposed to
> delete flink-io files. For some reason, it doesn
responsible for
a5b223c7aee89845f9aed24012e46b7e lost the leadership.
On Tue, Oct 9, 2018 at 2:33 PM Sayat Satybaldiyev wrote:
> Dear all,
>
> While running Flink 1.6.1 with RocksDB as the backend and HDFS as the
> checkpoint FS, I've noticed that after a job has moved to a different host
> it leaves quite a
Actually, once I wrote my question I realized that I can do it with custom
metrics and easily get the size of the state map.
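For what it's worth, a minimal sketch of that custom-metric approach (state name,
metric name, and types are made up; assumes the function runs on a keyed stream):

import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.api.common.state.MapState;
import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.metrics.Gauge;
import org.apache.flink.util.Collector;

public class StateSizeGaugeFunction extends RichFlatMapFunction<Tuple2<String, String>, String> {

    private transient MapState<String, String> mapState;
    private long entriesAdded; // rough per-subtask count, resets after a restore

    @Override
    public void open(Configuration parameters) {
        mapState = getRuntimeContext().getMapState(
                new MapStateDescriptor<>("myMap", String.class, String.class)); // made-up state name
        // expose the running count as a gauge; "stateMapEntries" is a made-up metric name
        getRuntimeContext().getMetricGroup()
                .gauge("stateMapEntries", (Gauge<Long>) () -> entriesAdded);
    }

    @Override
    public void flatMap(Tuple2<String, String> value, Collector<String> out) throws Exception {
        if (!mapState.contains(value.f0)) {
            entriesAdded++; // count only entries that were not in the map before
        }
        mapState.put(value.f0, value.f1);
        out.collect(value.f1);
    }
}

It is only an approximation (counted per parallel subtask and not restored with
the state), but it gives a rough view of how many records the state map holds.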
On Wed, Sep 26, 2018 at 11:57 AM Sayat Satybaldiyev
wrote:
> Thank you for this information. @Yun, is there an easy way to expose the
> number of records
Thank you for this information. @Yun, is there an easy way to expose the
number of records in RocksDB?
On Wed, Sep 26, 2018 at 9:47 AM Yun Tang wrote:
> Hi Sayat
>
> Before this feature is available, you could also find some metrics information,
> such as hit/miss count and file status from Ro
Flink provides a rich set of metrics. However, I didn't find any metrics for
the RocksDB state backend, neither in the metrics docs nor in the JMX MBeans.
Are there any metrics for the RocksDB backend that Flink exposes?
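For reference, a hedged note: Flink releases well after 1.6 added configuration
options to forward RocksDB's native metrics into Flink's metric system. The exact
option names below are from my recollection of the newer docs, so verify them
against the documentation for your version:

# flink-conf.yaml sketch, newer Flink versions only (enabling these adds overhead)
state.backend.rocksdb.metrics.estimate-num-keys: true
state.backend.rocksdb.metrics.estimate-live-data-size: true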
yep, they're there. thank you!
On Mon, Sep 24, 2018 at 12:54 PM 杨力 wrote:
> They are provided in taskmanagers.
>
> Sayat Satybaldiyev wrote on Mon, Sep 24, 2018 at 6:38 PM:
>
>> Dear all,
>>
>> While configuring JMX with Flink, I don't see some bean metrics that
Dear all,
While configuring JMX with Flink, I don't see some of the bean metrics that
belong to the job, in particular the number of in/out records per operator.
I've checked the REST API and those numbers are provided there. Does Flink
provide such a bean, or is there additional configuration for it?
Here's a li
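For reference, a minimal flink-conf.yaml sketch for enabling the JMX reporter
(the port range is just an example); the per-operator numRecordsIn/numRecordsOut
metrics are then exposed as MBeans on each TaskManager:

metrics.reporters: jmx
metrics.reporter.jmx.class: org.apache.flink.metrics.jmx.JMXReporter
metrics.reporter.jmx.port: 8789-8799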
that Flink uses RocksDB to store the state of the window. Will
Flink use RocksDB for the join between windows and not use a HashMap for the
merge operation?
Best,
Sayat