[MM3-users] Re: Hyperkitty performance problem

2024-12-28 Thread monochromec via Mailman-users
Following the root cause analysis which Tobias started all those weeks ago, we (the admin team behind the installation) are still struggling with the following phenomenon: messages on average take more than 24 hours to be processed; more precisely, the average lifetime of a pickled message object

[MM3-users] Re: Hyperkitty performance problem

2024-12-30 Thread monochromec via Mailman-users
In addition, I have the following questions: - How do I turn on verbose logging for the pipeline runner (as the changes in mailman3.cfg didn't have any effect, as outlined above)? - Are there any alternative means of monitoring the pipeline runner's activities other than the logs and watching the `/var/li
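A minimal sketch of the kind of logging change being attempted here, assuming a stock Mailman core install: log levels are raised per logger in the core configuration file, and the runners are restarted afterwards. The logger names, the `level` setting and the file location below are the usual defaults, not details confirmed in this thread.

    # Sketch only: per-logger stanzas in Mailman core's configuration
    # (commonly /etc/mailman3/mailman.cfg); restart Mailman afterwards
    # so the runners pick the new level up.
    [logging.runner]
    level: debug

    [logging.archiver]
    level: debug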

[MM3-users] Re: Hyperkitty performance problem

2024-12-30 Thread monochromec via Mailman-users
Thanks again for the speedy support. The only (occasional) error message is that archiving failed ("archiving failed, re-queuing (mailing-list ..."), but none of the ones Mark mentioned above. What is suspicious, though, is that uwsgi is at the top of the list of CPU-slice consumers, which may point at

[MM3-users] Re: Hyperkitty performance problem

2024-12-31 Thread monochromec via Mailman-users
Further RCA reveals that the archiver runner tries to archive the same message over and over again, hence the overhead during pipeline processing. The total number of attempts reaches well into five digits for a single message. Before I dive into the Core's CB: any idea what's causing this?
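A minimal sketch for quantifying such a loop from the outside: count how often the "archiving failed, re-queuing (...)" line quoted earlier appears per queue entry in the core log. The log path and the exact wording captured by the regular expression are assumptions, not details taken from the thread.

    # Sketch: tally "archiving failed, re-queuing (...)" lines per entry so a
    # single message retried tens of thousands of times stands out.
    # The log path and line format are assumptions; adjust to the installation.
    import re
    from collections import Counter

    LOG_PATH = "/var/log/mailman3/mailman.log"
    RETRY = re.compile(r"archiving failed, re-queuing \((.*?)\)")

    counts = Counter()
    with open(LOG_PATH, errors="replace") as log:
        for line in log:
            match = RETRY.search(line)
            if match:
                counts[match.group(1)] += 1  # key: the "(mailing-list ...)" text

    # Five-digit counts for one key would confirm the behaviour described above.
    for key, attempts in counts.most_common(10):
        print(f"{attempts:>8}  {key}")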

[MM3-users] Re: Hyperkitty performance problem

2025-01-01 Thread monochromec via Mailman-users
Thanks for your input, and happy new year to you! The logs at debug level prove to be inconclusive: the runner hands the message over to the archive runner, which in turn forwards the message to HyperKitty. HyperKitty, not checking if a message with the same ML name and msgid is already
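For readers who want to verify the duplicate angle on their own installation, a sketch of a check that could be run from a Django shell using the HyperKitty settings; the model and field names (Email, mailinglist__name, message_id) reflect one reading of HyperKitty's schema and should be treated as assumptions rather than facts established in the thread.

    # Sketch: ask the archive database whether a (list, message-id) pair is
    # already stored.  Model/field names are assumptions; check them against
    # the HyperKitty version actually deployed.
    from hyperkitty.models import Email

    def already_archived(list_name: str, message_id: str) -> bool:
        """True if the archive already holds a message with this id for the list."""
        return Email.objects.filter(
            mailinglist__name=list_name,
            message_id=message_id.strip("<>"),
        ).exists()

    # Example call with a hypothetical list address and message-id:
    print(already_archived("test-list@lists.example.org", "<abc123@example.org>"))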