songxiaosheng commented on issue #2189:
URL: 
https://github.com/apache/shardingsphere-elasticjob/issues/2189#issuecomment-1643771608

   > > I think I've found the problem: my service is deployed on k8s, and each 
   > > application deployment registers a pod IP under the zk node. These IP 
   > > instances are not removed when the application is redeployed, so every 
   > > time the console queries the task status it traverses all IPs in full, 
   > > whether or not they still exist. This operation takes a long time, and I 
   > > can see the problem still exists in the 3.x console. I am now considering 
   > > a mechanism that removes the stale information from zk after a pod goes 
   > > offline.
   > 
   > We also deploy on k8s, and we likewise have many historical IPs under 
   > /servers. Could you share how you solved this? If we need to determine 
   > which IPs are unused, surely we can't go check the k8s IPs one by one?
   
   The persistent IP nodes under `servers` only store state (enabled/disabled), 
   so they can simply be deleted. The nodes under `instances` are used for 
   sharding and must be kept. You can check whether a matching `instances` node 
   exists to decide whether an IP is no longer in use, and then delete the 
   corresponding node under `servers`. I have already implemented this; feel 
   free to ask me about the details.
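   
   The check described above can be sketched as a small helper. This is a 
   hypothetical sketch, not the commenter's actual implementation: it assumes 
   you have already listed the children of the job's `servers` and `instances` 
   ZooKeeper nodes (e.g. with a client such as Apache Curator), and that 
   instance nodes follow ElasticJob's `<ip>@-@<pid>` naming convention. The 
   class and method names are made up for illustration.
   
   ```java
   import java.util.HashSet;
   import java.util.Set;
   
   // Hypothetical helper: given the children of .../servers (plain IPs) and
   // .../instances (names of the form "<ip>@-@<pid>"), return the server IPs
   // that have no live instance and are therefore safe to delete.
   public final class StaleServerCleaner {
   
       public static Set<String> findStaleServerIps(Set<String> serverIps,
                                                    Set<String> instanceIds) {
           Set<String> stale = new HashSet<>();
           for (String ip : serverIps) {
               boolean hasInstance = false;
               for (String instanceId : instanceIds) {
                   // An IP is still in use if some instance node starts with it.
                   if (instanceId.startsWith(ip + "@-@")) {
                       hasInstance = true;
                       break;
                   }
               }
               if (!hasInstance) {
                   stale.add(ip);
               }
           }
           return stale;
       }
   }
   ```
   
   With the result, you would then delete each stale `servers/<ip>` node via 
   your ZooKeeper client; the `instances` nodes are ephemeral and are left 
   untouched.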


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
