Hi community,
I have a Flink job running with a RichMapFunction that uses keyed state.
Although TTL is enabled, I wonder whether there is a way to monitor
the memory usage of the keyed state. I'm using RocksDB as the state backend.
Best regards,
Mu
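(For what it's worth, Flink can expose RocksDB's native metrics, e.g. estimated live data size and memtable/block-cache usage, through its metrics system. A sketch of the relevant flink-conf.yaml options — option names per the Flink configuration reference; exact availability depends on the Flink version:

```yaml
# Expose RocksDB native metrics for each keyed state backend instance.
# Note: enabling these can add a small performance overhead.
state.backend.rocksdb.metrics.estimate-live-data-size: true
state.backend.rocksdb.metrics.size-all-mem-tables: true
state.backend.rocksdb.metrics.block-cache-usage: true
state.backend.rocksdb.metrics.estimate-num-keys: true
```

These then show up alongside the other task manager metrics in whatever reporter is configured.)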
Hi, Jake,
Thanks for offering help.
I didn't find anything related to Kafka in my TM log.
Is there a way to enable the logging, or am I just looking into the wrong
place?
Thanks in advance.
Best regards,
Mu
Hi, Song, Guo,
We updated our cluster to 1.10.1 and the cluster.evenly-spread-out-slots
works pretty well now.
Thanks for your help!
Best regards,
Mu
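(For anyone finding this thread later, the option in question is set in flink-conf.yaml:

```yaml
# Spread slots evenly across all available TaskManagers instead of
# filling one TM before moving to the next (see FLINK-12122; the thread
# above upgraded to Flink 1.10.1 for this to work reliably).
cluster.evenly-spread-out-slots: true
```

A restart of the cluster is needed for the change to take effect.)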
On Wed, Jul 8, 2020 at 9:35 PM Mu Kong wrote:
> Hi Song, Guo,
>
> Thanks for the information.
> I will first upgrade our flin
nd after setting `cluster.evenly-spread-out-slots`, can they be
> stably reproduced?
> - How many TMs do you have? And how many slots does each TM have?
>
>
> Thank you~
>
> Xintong Song
>
>
> [1] https://issues.apache.org/jira/browse/FLINK-12122
>
> On
rk very well because there could be insufficient task managers when
> requesting slots from the ResourceManager. This has been discussed in
> https://issues.apache.org/jira/browse/FLINK-12122 .
>
>
> Best,
> Yangze Guo
>
> On Tue, Jul 7, 2020 at 5:44 PM Mu Kong wrote:
> >
> > Hi
Hi community,
I'm running an application that consumes data from Kafka, processes it, and
then writes the data to Druid.
I wonder if there is a way to spread the source (consuming) subtasks evenly
across the task managers, to maximize the network usage of all the task
managers.
So, for example,
td-p/122046
>[2]
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/quot-Slow-ReadProcessor-quot-warnings-when-using-BucketSink-td9427.html
>
>
> -- Original Message --
> *From:* Mu Kong
> *Sent:* Fri May 22 11:16:32 2020
> *To:* user
join the data.
>
> Best,
> Jark
>
> [1]:
> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/streaming/joins.html#join-with-a-temporal-table
> [2]:
> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/streaming/joins.html#join-with-a-temporal-table-fu
Hi community,
I have a stream of traffic data with a service_id in it.
I'm enriching this data with a map of (service_id, service_name), which
only has 10~20 pairs and is read from a config file.
The problem I'm facing now is that this map changes from time to time, and I
don't want to redeploy the a
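(One common approach for this kind of slowly-changing enrichment map is the broadcast state pattern: treat the config as a second input stream and broadcast its updates to all parallel instances. A rough, untested sketch — `Traffic`, `EnrichedTraffic` and `ConfigFileSource` are placeholder names, not from this thread:

```java
// Descriptor for the broadcast state holding (service_id -> service_name).
MapStateDescriptor<String, String> serviceNamesDesc =
    new MapStateDescriptor<>("serviceNames", String.class, String.class);

// Side input: a custom source that periodically re-reads the config
// file and emits its entries as (service_id, service_name) tuples.
BroadcastStream<Tuple2<String, String>> configStream =
    env.addSource(new ConfigFileSource()).broadcast(serviceNamesDesc);

trafficStream
    .connect(configStream)
    .process(new BroadcastProcessFunction<Traffic, Tuple2<String, String>, EnrichedTraffic>() {
        @Override
        public void processElement(Traffic t, ReadOnlyContext ctx,
                                   Collector<EnrichedTraffic> out) throws Exception {
            // Look up the current service_name for this event's service_id.
            String name = ctx.getBroadcastState(serviceNamesDesc).get(t.serviceId);
            out.collect(new EnrichedTraffic(t, name));
        }

        @Override
        public void processBroadcastElement(Tuple2<String, String> entry, Context ctx,
                                            Collector<EnrichedTraffic> out) throws Exception {
            // Update the broadcast state whenever the config source emits a change.
            ctx.getBroadcastState(serviceNamesDesc).put(entry.f0, entry.f1);
        }
    });
```

With this, the map can change without redeploying the job; only the config source needs to notice the file change and re-emit the entries.)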
Hi community,
I'm glad that Flink 1.8.0 introduced cleanupInRocksdbCompactFilter
to support state clean-up for the RocksDB backend.
We have an application that heavily relies on managed keyed state.
As we are using RocksDB as the state backend, we were suffering from the
issue of ever-growing state
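(For reference, a sketch of what enabling the compaction-filter cleanup looks like; treat the exact signature as version-dependent — in 1.8 the method takes no arguments, the numeric "query TTL after N entries" parameter came in later releases. The TTL duration and state name here are example values:

```java
import org.apache.flink.api.common.state.StateTtlConfig;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.time.Time;

StateTtlConfig ttlConfig = StateTtlConfig
    .newBuilder(Time.days(7))  // example retention period
    .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)
    .setStateVisibility(StateTtlConfig.StateVisibility.NeverReturnExpired)
    // Drop expired entries during RocksDB compaction.
    .cleanupInRocksdbCompactFilter(1000)
    .build();

ValueStateDescriptor<String> descriptor =
    new ValueStateDescriptor<>("myState", String.class);
descriptor.enableTimeToLive(ttlConfig);
```

Expired entries are then physically removed as compaction runs, rather than only being hidden from reads.)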
Hi Gordon,
Thanks for your response.
I think I've misspoken about the failure after the "n/a" exception.
The behavior after this exception would be:
switched from RUNNING to CANCELING
switched from CANCELING to CANCELED
Try to restart or fail the job "X" () if no
longer po
> the code and solely used during cancelling the fetcher.
>
> I don't know whether this is possible, but I suppose there could be more
> than one marker and we should call removeAll() instead - @Gordon, can
> you elaborate/check whether this could happen?
>
>
> Nic
Hi,
I have encountered a weird problem.
After I start the job for several days, Flink gave me the following error:
*java.lang.RuntimeException: Unable to find a leader for partitions:
[Partition: KafkaTopicPartition{topic='n/a', partition=-1},
KafkaPartitionHandle=[n/a,-1], offset=(not set)]*
*
>>>> You should only have these dangling pending files after a
>>>> failure-recovery cycle, as you noticed. My suggestion would be to
>>>> periodically clean up older pending files.
>>>>
>>>> Best,
>>>> Aljoscha
>&g
Hi Vishal,
I have the same concern about savepointing in BucketingSink.
As for your question, I think it happens before the pending files get
cleared in handleRestoredBucketState.
They are finalized in notifyCheckpointComplete:
https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-fi
Ah, I think I can just use ./bin/jobmanager.sh
https://ci.apache.org/projects/flink/flink-docs-release-1.4/ops/deployment/cluster_setup.html#adding-a-jobmanager
Thanks!
On Thu, Feb 1, 2018 at 4:00 PM, Mu Kong wrote:
> Hi Tony,
>
> Thanks for your response!
> I would defi
be responsible for recovering the dead JM.
>
> Best,
> Tony Wei
>
> [1] http://supervisord.org/
>
> 2018-02-01 14:11 GMT+08:00 Mu Kong :
>
>> Hi all,
>>
>> I have a Flink HA cluster with 2 job managers and a zookeeper quorum of 3
>> nodes.
>>
>
Hi all,
I have a Flink HA cluster with 2 job managers and a zookeeper quorum of 3
nodes.
My failed job manager didn't get recovered after I killed it.
Here is how I did it and what I've observed:
1. I started the HA cluster with start-cluster.sh
2. Job manager A got elected.
3. I killed job m