sqtce opened a new issue, #6967: URL: https://github.com/apache/rocketmq/issues/6967
### Before Creating the Bug Report

- [X] I found a bug, not just asking a question, which should be created in [GitHub Discussions](https://github.com/apache/rocketmq/discussions).
- [X] I have searched the [GitHub Issues](https://github.com/apache/rocketmq/issues) and [GitHub Discussions](https://github.com/apache/rocketmq/discussions) of this repository and believe that this is not a duplicate.
- [X] I have confirmed that this bug belongs to the current repository, not other repositories of RocketMQ.

### Runtime platform environment

CentOS 7.6

### RocketMQ version

4.7.0

### JDK Version

1.8.0

### Describe the Bug

During a significant downsizing of the business, a large number of duplicate messages were consumed. In addition, consumer registration on another broker became unstable, leading to a consumption backlog. Hardware watermarks were normal, and Alibaba Cloud verified that network connectivity was normal.

The problematic broker logged the following entry:

2023-06-25 22:19:38 WARN NettyServerNIOSelector_16_15 - send a request command to channel </10.17.17.9:43606> failed.

It also printed the following entry in large quantities:

2023-06-25 22:19:40 WARN NettyServerCodecThread_10 - event queue size[10001] enough, so drop this event NettyEvent [type=CONNECT, remoteAddr=10.13.10.51:59658, channel=[id: 0x4e4915b9, L:/10.11.1.16:10911 - R:/10.13.10.51:59658]]

Restarting the broker resolves the issue.
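The second warning suggests a bounded channel-event queue in the remoting layer: CONNECT/CLOSE/IDLE/EXCEPTION events are queued for a single dispatcher thread, and once the queue exceeds a fixed cap (10,000, matching the `size[10001]` in the log) new events are dropped with exactly this message. Dropped CONNECT/CLOSE events never reach the broker's channel housekeeping, which would explain why consumer registration stays unstable until a restart clears the state. Below is a minimal, self-contained sketch of that drop behavior; the class and field names are illustrative and are not the actual RocketMQ source.

```java
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative sketch only: names and structure approximate a bounded
// channel-event queue; this is not the actual RocketMQ implementation.
public class NettyEventQueueSketch {

    enum EventType { CONNECT, CLOSE, IDLE, EXCEPTION }

    static class NettyEvent {
        final EventType type;
        final String remoteAddr;

        NettyEvent(EventType type, String remoteAddr) {
            this.type = type;
            this.remoteAddr = remoteAddr;
        }

        @Override
        public String toString() {
            return "NettyEvent [type=" + type + ", remoteAddr=" + remoteAddr + "]";
        }
    }

    // Bounded queue: the cap of 10,000 matches the "size[10001]" seen in the log.
    private static final int MAX_SIZE = 10_000;
    private final LinkedBlockingQueue<NettyEvent> eventQueue = new LinkedBlockingQueue<>();

    // Events beyond the cap are dropped with a warning instead of being queued,
    // so the listener that cleans up producer/consumer registrations never sees them.
    public void putNettyEvent(NettyEvent event) {
        if (eventQueue.size() <= MAX_SIZE) {
            eventQueue.add(event);
        } else {
            System.out.printf("WARN event queue size[%d] enough, so drop this event %s%n",
                eventQueue.size(), event);
        }
    }

    public static void main(String[] args) {
        NettyEventQueueSketch executor = new NettyEventQueueSketch();
        // Simulate a reconnect storm while the dispatcher thread is stalled:
        // everything past the cap is silently discarded.
        for (int i = 0; i < 10_050; i++) {
            executor.putNettyEvent(new NettyEvent(EventType.CONNECT, "10.13.10.51:" + (50_000 + i)));
        }
        System.out.println("events actually queued: " + executor.eventQueue.size());
    }
}
```

Dropping rather than blocking presumably keeps the Netty codec threads from stalling under a reconnect storm; the trade-off is that connection lifecycle events are lost once the dispatcher falls behind, which matches the symptoms described above.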
### Steps to Reproduce

What was observed instead?

### What Did You Expect to See?

Large message backlog and duplicated message consumption.

### What Did You See Instead?

How to resolve it.

### Additional Context

_No response_

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]