We run Artemis in an HA configuration under Kubernetes, with a primary and a
backup instance that share storage. Recently, we updated our pod definitions
from version 2.30 to 2.39. And... something went wrong. I think what happened
is that one of the two pods upgraded first, and our clients started failing.

Generally speaking, I'd say the following is recommended:
1. Kill the primary (causing all clients to fail over to the backup; see the
   client sketch after this list).
2. Upgrade the primary.
3. Restart the primary. If fail-back has been configured, all the clients
   will fail back to the primary at this point.
4. Kill the backup.
5. Upgrade the backup.
6. Restart the backup.
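
For the fail-over in step 1 (and the fail-back in step 3) to be transparent,
the clients need to know about both brokers and be told to keep retrying.
Here's a minimal sketch of a JMS client using the Artemis core connection
factory; the service names artemis-primary and artemis-backup are placeholders
for whatever your Kubernetes services are actually called:

import javax.jms.Connection;
import javax.jms.JMSException;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class FailoverClient {
    public static void main(String[] args) throws JMSException {
        // List both brokers so the initial connection succeeds no matter
        // which one is live. ha=true tells the client to track the cluster
        // topology, and reconnectAttempts=-1 makes it retry indefinitely
        // during fail-over and fail-back.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "(tcp://artemis-primary:61616,tcp://artemis-backup:61616)"
                + "?ha=true&reconnectAttempts=-1");

        try (Connection connection = factory.createConnection()) {
            connection.start();
            // ... create sessions, producers, and consumers as usual ...
        }
    }
}

With reconnectAttempts=-1 the client blocks and retries through each
kill/restart in the procedure above instead of surfacing a connection error
to the application.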