redlsz opened a new issue, #6751: URL: https://github.com/apache/rocketmq/issues/6751
### Before Creating the Bug Report

- [X] I found a bug, not just asking a question, which should be created in [GitHub Discussions](https://github.com/apache/rocketmq/discussions).
- [X] I have searched the [GitHub Issues](https://github.com/apache/rocketmq/issues) and [GitHub Discussions](https://github.com/apache/rocketmq/discussions) of this repository and believe that this is not a duplicate.
- [X] I have confirmed that this bug belongs to the current repository, not other repositories of RocketMQ.

### Runtime platform environment

CentOS 7

### RocketMQ version

5.1.0

### JDK Version

JDK 8

### Describe the Bug

In RocketMQ 5.0's pop consume mode, a retry topic is created automatically by the broker when consumption fails and a retry is needed. Every pop subscription has a retry topic named in the format `"%RETRY%" + group + "_" + topic`. But after the topic and group are deleted, the pop retry topic is not cleaned up. When responding to the client in PopMessageProcessor, the retry message's topic is set back to the original topic, so the pop retry topic is transparent to the client (see the sketches at the end of this report).

<img width="1290" alt="image" src="https://github.com/apache/rocketmq/assets/103550934/59a35c94-f9d1-41df-b2b6-e2727efcc2b1">

The broker should keep closed-loop resource management for pop retry topics.

### Steps to Reproduce

1. Create a topic and send a few messages.
2. Consume this topic in pop mode but do not ack any messages (see the consumer sketch below).
3. Keep running for longer than the pop invisible time; the pop retry topic will be created.
4. Stop consumption and delete the topic and group.

### What Did You Expect to See?

The pop retry topic is cleaned up after the pop subscription is deleted.

### What Did You See Instead?

The pop retry topic has not been cleaned up.

### Additional Context

_No response_
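To make the naming convention and the missing cleanup concrete, here is a minimal, self-contained sketch. The class and the topic-table stand-in are hypothetical, not the broker's actual classes; only the `"%RETRY%" + group + "_" + topic` format comes from the report above.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch; class and topic-table names are hypothetical,
// only the naming format comes from this report.
public final class PopRetryTopicSketch {

    private static final String RETRY_PREFIX = "%RETRY%";

    // Naming format reported above: "%RETRY%" + group + "_" + topic
    static String buildPopRetryTopic(String group, String topic) {
        return RETRY_PREFIX + group + "_" + topic;
    }

    // Stand-in for the broker's topic config table.
    private static final Map<String, Object> topicTable = new ConcurrentHashMap<>();

    // Hypothetical closed-loop cleanup: deleting the (group, topic)
    // subscription also removes the auto-created pop retry topic.
    static void deleteSubscription(String group, String topic) {
        topicTable.remove(topic);
        topicTable.remove(buildPopRetryTopic(group, topic));
    }

    public static void main(String[] args) {
        topicTable.put("TopicTest", new Object());
        topicTable.put(buildPopRetryTopic("GroupA", "TopicTest"), new Object());
        deleteSubscription("GroupA", "TopicTest");
        // Prints an empty set: the retry topic is gone along with the topic.
        System.out.println("remaining topics: " + topicTable.keySet());
    }
}
```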
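For context on why clients never notice the leftover topic: the report notes that PopMessageProcessor sets the retry message's topic back to the original topic in the response. A rough illustration of that masking behavior (a sketch, not the actual processor code):

```java
import java.util.List;
import org.apache.rocketmq.common.message.MessageExt;

// Sketch of the masking described above: messages popped from the retry
// topic are returned with their topic reset to the one the client
// requested, so "%RETRY%group_topic" never reaches the consumer.
final class RetryTopicMasking {
    static void maskRetryTopic(List<MessageExt> popped, String requestedTopic) {
        for (MessageExt msg : popped) {
            msg.setTopic(requestedTopic); // client only ever sees the original topic
        }
    }
}
```

Because the retry topic is invisible from the client side, only the broker can delete it, which is why the cleanup has to be closed-loop on the broker.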
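And a sketch of step 2 of the reproduction, assuming the 5.x `rocketmq-client-java` SimpleConsumer (which pops through a proxy); the endpoint, topic, and group names are placeholders:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.List;
import org.apache.rocketmq.client.apis.ClientConfiguration;
import org.apache.rocketmq.client.apis.ClientException;
import org.apache.rocketmq.client.apis.ClientServiceProvider;
import org.apache.rocketmq.client.apis.consumer.FilterExpression;
import org.apache.rocketmq.client.apis.consumer.FilterExpressionType;
import org.apache.rocketmq.client.apis.consumer.SimpleConsumer;
import org.apache.rocketmq.client.apis.message.MessageView;

public class PopNoAckRepro {
    public static void main(String[] args) throws ClientException {
        ClientServiceProvider provider = ClientServiceProvider.loadService();
        ClientConfiguration config = ClientConfiguration.newBuilder()
                .setEndpoints("127.0.0.1:8081") // placeholder proxy endpoint
                .build();
        SimpleConsumer consumer = provider.newSimpleConsumerBuilder()
                .setClientConfiguration(config)
                .setConsumerGroup("GroupA")     // placeholder group
                .setAwaitDuration(Duration.ofSeconds(10))
                .setSubscriptionExpressions(Collections.singletonMap(
                        "TopicTest",            // placeholder topic
                        new FilterExpression("*", FilterExpressionType.TAG)))
                .build();
        // Pop messages but never ack them; once the invisible duration
        // elapses, the broker re-delivers them via the auto-created
        // "%RETRY%GroupA_TopicTest" topic.
        while (true) {
            List<MessageView> messages = consumer.receive(16, Duration.ofSeconds(30));
            for (MessageView message : messages) {
                System.out.println("popped (no ack): " + message.getMessageId());
            }
        }
    }
}
```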
