Hi Yubiao,

> It took us a long time to remove these topics. We delete these topics
> in this way:
> ```
> cat topics_name_file | awk '{system("bin/pulsar-admin topics delete "$0)}'
> ```
> It deletes topics one by one.
The main overhead here is the frequent creation and teardown of JVM
processes. Maybe we can use `pulsar-shell` here; it would save a lot of
time.

Zike Yang

On Mon, Apr 24, 2023 at 5:54 PM Yubiao Feng
<yubiao.f...@streamnative.io.invalid> wrote:
>
> Hi Girish Sharma
>
> > What additional advantage would one get by using that approach
> > rather than simply using a one liner script to just call delete
> > topic for each of those topics if the list of topics is known.
>
> If users enable `Geo-Replication` on a namespace by mistake (expecting
> to enable it for only one topic), it is possible for many topics to be
> created on the remote cluster within one second.
>
> Not long ago, 10,000 topics were created per second because of this
> mistake. It took us a long time to remove these topics. We delete these
> topics in this way:
> ```
> cat topics_name_file | awk '{system("bin/pulsar-admin topics delete "$0)}'
> ```
> It deletes topics one by one.
>
> We concluded later that stress-test tools such as `JMeter` or `ab`
> should be used to delete so many topics.
>
> If Pulsar could provide these APIs, it would be better.
>
> Thanks
> Yubiao Feng
>
> On Wed, Apr 19, 2023 at 3:29 PM Girish Sharma <scrapmachi...@gmail.com>
> wrote:
>
> > Hello Yubiao,
> >
> > What additional advantage would one get by using that approach rather
> > than simply using a one-liner script to just call delete topic for
> > each of those topics if the list of topics is known.
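[Editorial sketch] The `pulsar-shell` batching idea above can be
illustrated as follows. File names are placeholders, and the `-f`
option for running a command file should be verified against the
pulsar-shell documentation for your Pulsar version:

```shell
# Turn a plain list of topic names into a pulsar-shell command file, so
# all deletions run inside one JVM instead of one JVM per topic.
# topics.txt and delete-topics.txt are illustrative file names.
printf '%s\n' \
  'persistent://public/default/t1' \
  'persistent://public/default/t2' > topics.txt

# Prefix each topic name with the pulsar-shell admin command.
sed 's|^|admin topics delete |' topics.txt > delete-topics.txt

# Then execute the whole batch with a single JVM start-up:
#   bin/pulsar-shell -f delete-topics.txt
```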
> >
> > Regards
> >
> > On Wed, Apr 19, 2023 at 12:54 PM Yubiao Feng
> > <yubiao.f...@streamnative.io.invalid> wrote:
> >
> > > In addition to these two, it is recommended to add a method to
> > > batch-delete topics, such as this:
> > >
> > > ```
> > > pulsar-admin topics delete-all-topics <topic_1>, <topic_2>
> > >
> > > or
> > >
> > > pulsar-admin topics delete-all-topics <a file containing a list of topic names>
> > > ```
> > >
> > > Thanks
> > > Yubiao Feng
> > >
> > > On Sat, Apr 15, 2023 at 5:37 PM Xiangying Meng <xiangy...@apache.org>
> > > wrote:
> > >
> > > > Dear Apache Pulsar Community,
> > > >
> > > > I hope this email finds you well. I am writing to suggest a
> > > > potential improvement to the pulsar-admin tool, which I believe
> > > > could simplify the process of cleaning up tenants and namespaces
> > > > in Apache Pulsar.
> > > >
> > > > Currently, cleaning up all the namespaces and topics within a
> > > > tenant, or all the topics within a namespace, requires several
> > > > manual steps, such as listing the namespaces, listing the topics,
> > > > and then deleting each topic individually. This process can be
> > > > time-consuming and error-prone for users.
> > > >
> > > > To address this issue, I propose the addition of a "clear"
> > > > parameter to the pulsar-admin tool, which would automate the
> > > > cleanup process for tenants and namespaces. Here's a conceptual
> > > > implementation:
> > > >
> > > > 1. To clean up all namespaces and topics within a tenant:
> > > > ```bash
> > > > pulsar-admin tenants clear <tenant-name>
> > > > ```
> > > > 2. To clean up all topics within a namespace:
> > > > ```bash
> > > > pulsar-admin namespaces clear <tenant-name>/<namespace-name>
> > > > ```
> > > >
> > > > By implementing these new parameters, users would be able to
> > > > perform cleanup operations more efficiently and with fewer
> > > > manual steps.
> > > > I believe this improvement would greatly enhance the user
> > > > experience when working with Apache Pulsar.
> > > >
> > > > I'd like to discuss the feasibility of this suggestion and gather
> > > > feedback from the community. If everyone agrees, I can work on
> > > > implementing this feature and submit a pull request for review.
> > > >
> > > > Looking forward to hearing your thoughts on this.
> > > >
> > > > Best regards,
> > > > Xiangying
> >
> > --
> > Girish Sharma
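[Editorial sketch] The manual cleanup Xiangying describes (list
namespaces, list topics, delete each topic) roughly amounts to the loop
below. This is a sketch of the slow path the proposal would replace,
not the proposed implementation; the `PULSAR_ADMIN` variable and tenant
name are placeholders:

```shell
# Manual tenant cleanup: one pulsar-admin invocation (one JVM) per call.
# PULSAR_ADMIN defaults to the in-tree binary; override it for testing.
cleanup_tenant() {
  tenant="$1"
  admin="${PULSAR_ADMIN:-bin/pulsar-admin}"
  # Walk every namespace of the tenant...
  for ns in $("$admin" namespaces list "$tenant"); do
    # ...and delete each of its topics, one call at a time.
    for topic in $("$admin" topics list "$ns"); do
      "$admin" topics delete "$topic"
    done
  done
}

# Usage against a real cluster:
#   cleanup_tenant my-tenant
```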