…hardware resource for other jobs.
>
> On Wed, Oct 24, 2018 at 2:20 PM Kien Truong
> wrote:
>
>> Hi,
>>
>> How are your task managers deployed?
>>
>> If your cluster only has one task manager with one slot on each node,
>> then the job should be spread evenly.
>
> Regards,
>
> Kien
>
> On 10/24/2018 4:35 PM, Sayat Satybaldiyev wrote:
> > Is there any way to indicate Flink not to allocate all parallel tasks
> > on one node? We have a stateless Flink job that reads from a 10-partition
> > topic […]
Is there any way to indicate Flink not to allocate all parallel tasks on
one node? We have a stateless Flink job that reads from a 10-partition
topic and has a parallelism of 6. The Flink job manager allocates all 6
parallel operators to one machine, causing all traffic from Kafka to go to
only one machine […]
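For reference, a minimal flink-conf.yaml sketch of Kien's suggestion, assuming a
standalone cluster where exactly one TaskManager process is started per node
(the values are illustrative, not taken from this thread):

    # one TaskManager per node, each offering a single slot, so a job with
    # parallelism 6 is forced onto 6 different nodes
    taskmanager.numberOfTaskSlots: 1
    parallelism.default: 6

With one slot per TaskManager, the scheduler has no choice but to place each
parallel subtask on a different TaskManager, which spreads the Kafka traffic
across machines.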
…recovery.
>
> Piotrek
>
> On 9 Oct 2018, at 15:28, Sayat Satybaldiyev wrote:
>
> After digging more in the log, I think it's more of a bug. I've grepped the
> log by job id and found that under normal circumstances the TM is supposed
> to delete the flink-io files. For some reason, it doesn't.
…responsible for a5b223c7aee89845f9aed24012e46b7e lost the leadership.
On Tue, Oct 9, 2018 at 2:33 PM Sayat Satybaldiyev wrote:
> Dear all,
>
> While running Flink 1.6.1 with RocksDB as a backend and HDFS as the
> checkpoint FS, I've noticed that after a job has moved to a different host
> it leaves quite a lot of flink-io files behind.
Actually, once I wrote my question I realized that I can do it with custom
metrics and easily get the size of the state map.
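A minimal Java sketch of that idea: a custom operator metric that counts how many
distinct entries have been added to a keyed MapState. The metric name and the use
of plain String records are illustrative assumptions, and the function is assumed
to run after a keyBy:

    import org.apache.flink.api.common.functions.RichFlatMapFunction;
    import org.apache.flink.api.common.state.MapState;
    import org.apache.flink.api.common.state.MapStateDescriptor;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.metrics.Counter;
    import org.apache.flink.util.Collector;

    public class StateSizeTracking extends RichFlatMapFunction<String, String> {

        private transient MapState<String, Long> seen;
        private transient Counter stateEntries;

        @Override
        public void open(Configuration parameters) throws Exception {
            seen = getRuntimeContext().getMapState(
                    new MapStateDescriptor<>("seen", String.class, Long.class));
            // registered under the operator's metric group, so it is visible via JMX/REST
            stateEntries = getRuntimeContext().getMetricGroup().counter("stateMapEntries");
        }

        @Override
        public void flatMap(String value, Collector<String> out) throws Exception {
            if (!seen.contains(value)) {
                seen.put(value, 1L);
                stateEntries.inc(); // one increment per distinct entry added to the state map
            }
            out.collect(value);
        }
    }

Note that a Counter only approximates the map size if entries are also removed; a
Gauge backed by an explicitly maintained count would be the alternative.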
On Wed, Sep 26, 2018 at 11:57 AM Sayat Satybaldiyev
wrote:
> Thank you for this information. @Yun is there an easy way to expose the
> number of records […]
> …'s db folder.
>
> Best
> Yun
> --
> *From:* Stefan Richter
> *Sent:* Wednesday, September 26, 2018 0:56
> *To:* Sayat Satybaldiyev
> *Cc:* user@flink.apache.org
> *Subject:* Re: Rocksdb Metrics
>
> Hi,
>
> this feature is tracked here
> https://issues.apache
Flink provides a rich set of metrics. However, I didn't find any metrics
for the RocksDB state backend, neither in the metrics docs nor in the JMX
MBeans. Are there any metrics for the RocksDB backend that Flink exposes?
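As a follow-up note: the feature Stefan refers to later landed as native RocksDB
metrics that can be switched on per option. A hedged flink-conf.yaml sketch,
assuming a newer Flink release (roughly 1.10 or later, not the 1.6 discussed in
this thread; option names should be checked against the docs of the version in use):

    # expose selected RocksDB-native properties as Flink metrics
    state.backend.rocksdb.metrics.estimate-num-keys: true
    state.backend.rocksdb.metrics.estimate-live-data-size: true

Each enabled option is reported per state (column family) under the operator's
metric group, at the cost of some extra overhead per query of the native property.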
Yep, they're there. Thank you!
On Mon, Sep 24, 2018 at 12:54 PM 杨力 wrote:
> They are provided in taskmanagers.
>
> Sayat Satybaldiyev wrote on Monday, September 24, 2018 at 6:38 PM:
>
>> Dear all,
>>
>> While configuring JMX with Flink, I don't see some bean metrics that
>> belong to the job […]
Dear all,
While configuring JMX with Flink, I don't see some bean metrics that belong
to the job, in particular the number of in/out records per operator. I've
checked the REST API and those numbers are provided there. Does Flink
provide such a bean, or is there additional configuration needed for it?
Here's a li…
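For reference, a minimal flink-conf.yaml sketch of the JMX reporter setup this
thread implies (the port range is an illustrative assumption):

    metrics.reporter.jmx.class: org.apache.flink.metrics.jmx.JMXReporter
    metrics.reporter.jmx.port: 8789-8799

As noted above, the numRecordsIn/numRecordsOut operator metrics are registered on
the TaskManager JVMs, so the JMX connection has to go to a TaskManager's port
rather than the JobManager's.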
Hello!
I'm trying to do a simple DataStream to DataStream join. I have two Kafka
topics that share a common field, and I'm trying to join them via the
keyBy-join-where-equalTo-TumblingWindow API in Flink 1.4.1.
My tumbling window size is 1 day. There will be more data than the machine
has memory. I know that Flink u…
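A minimal Java sketch of the join described above. The Tuple2 payloads stand in
for the records from the two Kafka topics, with f0 as the assumed common field;
event-time semantics and watermarks are assumed to be configured elsewhere:

    import org.apache.flink.api.common.functions.JoinFunction;
    import org.apache.flink.api.java.functions.KeySelector;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
    import org.apache.flink.streaming.api.windowing.time.Time;

    public class DailyWindowJoin {

        // joins the two streams on their shared key field within 1-day tumbling windows
        public static DataStream<String> joinOnSharedKey(
                DataStream<Tuple2<String, Long>> left,
                DataStream<Tuple2<String, Double>> right) {
            return left
                    .join(right)
                    .where((KeySelector<Tuple2<String, Long>, String>) t -> t.f0)
                    .equalTo((KeySelector<Tuple2<String, Double>, String>) t -> t.f0)
                    .window(TumblingEventTimeWindows.of(Time.days(1)))
                    .apply((JoinFunction<Tuple2<String, Long>, Tuple2<String, Double>, String>)
                            (l, r) -> l.f0 + ": " + l.f1 + " / " + r.f1);
        }
    }

With a 1-day window and more data than fits in memory, a heap-based state backend
will not hold the buffered window contents, which is why the RocksDB state backend
(spilling to local disk) is usually suggested for this kind of join.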